Status, as at December 2017

Brain

  • Thinking.

    • Completely new executive brain subsystem (basically prefrontal cortex). Creatures can now make predictions and plans, and construct simple mental narratives about the world. Currently only forward narratives (starting from now and forming plans for the future) are used, but reflections and dreams are also possible, ready for when I add extra learning processes (e.g. memory consolidation).
  • Short-term memory.

    • What the executive brain is thinking about, in terms of people, places and things, is now governed by STM. It normally recalls facts about the object that the higher brain chooses to think about, but it can interrupt the higher brain if an object with high salience comes into awareness, and can make its own suggestions of salient things to form the basis of plans (e.g. if we’ve recently seen food and we’re hungry then we can encourage the executive brain to think about food).
    • When STM chooses or is asked to recall a particular object, location or person, it also recalls its last known location and hence drives the visual attention and navigation systems.
    • STM also now records the latest STATE of each object we see (e.g. ‘I saw a gloop and it was angry’). The STM basically forms a mental world that the creature’s mind can look around inside using the spotlight of attention, regardless of where in the physical world it is looking (what it is physically looking at, in turn, constantly feeds the STM with new information). It thus binds physical and mental attention. (The first sketch after this list illustrates the kind of bookkeeping involved.)
  • Emotions.

    • As much as possible these are now going to be generated by the brain itself, from perceptual cues. For instance, an angry predator doesn’t MAKE us feel something specific; it emits ‘threat’ signals, which are then perceived and might cause the brain to experience either fear or aggression, depending on the recipient’s social status, personality and current mood. How much I can avoid cheating here I don’t know, but it seems important that the emotional consequences be in the eye of the beholder as much as possible (one of the sketches after this list caricatures this idea). Most of the detailed circuitry for this doesn’t exist yet; only the basic ‘limbic system’ infrastructure. We only have physiological emotions/drives at the moment, like hunger, sleepiness, etc. More complex drives like attraction, anger or grief are represented chemically but have yet to be computed neurally. Coming soon, though.
    • Chemically (if not always perceptually or behaviorally) speaking, we currently have the following drives, moods and emotional indices, most of which already have one or more impacts on the brain: Anger, Fear, Sex, Desire, Loneliness, Hunger, Thirst, Tiredness, Drowsiness, Hotness, Coldness, Wetness, Windsweptness, Sadness, Boredom, Disgust, Chronic pain, Acute pain
  • Navigation.

    • Local navigation and obstacle avoidance are complete and mostly work, though they still need tweaking.
    • Global navigation (region to region) is complete but buggy, so Gloop sometimes confidently strides off to a location where the target object isn’t, because he’s going to the right part of the wrong ‘region’. I want to change the way this works anyway, now that we have the concept of ‘zones’ (below).
    • Walking and postural control are basically complete and the Unity physics engine is no longer the disaster it was when I started.
  • Attention.

    • Visual attention is complete, but needs a lot of balancing. There are many kinds of saccade targets: motion, salience, top-down recognition (i.e. we catch sight of something similar to the thing we’re looking for), top-down navigational attention towards waypoints, stimuli coming from the ears and skin, etc. The visual system tries to pick the best saccade target moment by moment, but because I haven’t got the balance right yet, Gloop will sometimes get distracted right at the very moment he should be focusing on a target to be poked, eaten, etc. and hence will miss it entirely. Or he’ll look at the target and bump into an obstacle that he should have been paying more attention to. It’s a very complex system, but it’s complete now and it’s just a matter of tweaking the parameters some more. (There’s a toy sketch of this competition between saccade targets after this list.)
    • Primitive auditory attention exists (think ‘cocktail party effect’), but sounds are mostly routed to the visual attention system to control gaze.
  • Perception & motor systems.

    • Eyes and low-level visual perception are complete – Gloop can choose saccade targets and glance in the direction of waypoints, objects, sounds, tactile stimuli, etc. and gather multiple visual cues from them. He can learn to recognize objects from these features, and the feature set forms the basis for a complete, global “shadow reality” for all objects and localities in the world (in other words, meshes and textures are what create OUR perception of the 3D virtual world, but Gloop can’t process billions of pixels in order to work out what he sees, so instead he needs an alternative view of the world; one that doesn’t let him cheat too much but does give him information about what he’s seeing in a way that he can learn to make sense of). The framework for perceptual features is now complete but the actual choice of features will evolve. Features can also be perceived relative to the viewer – e.g. Gloop can tell if another creature is bigger or smaller than him, rather than just big or small.
    • Visual recognition uses what’s called perceptual categorization (classifying things according to how similar they look), but I want to try an experimental solution to functional categorization (classifying things according to what they do). That’s much more powerful (my son’s PhD was on this!) but I only recently figured out a way to do it.
    • Ears can hear simple sounds and inform the visual system of their direction and perceptual characteristics. Ears can also hear words spoken by other creatures or the user (typed, not actual voice recognition), and these can form associations inside the listener’s brain with objects, actions and moods.
    • Touch is basic but complete. The touch system has a ‘somatic map’ of the body that enables the visual system to look towards the point on the body where we were touched or hurt. Actions like biting and sucking are also triggered by touch, although that needs a bit more work to become reliable. Foot pressure modulates some simple posture maintenance (e.g. leaning backwards when going down a slope) and stumble-correction circuitry in the brain.
    • Also in the skin are sensors that detect how wet, cold and generally miserable gloops feel, due to the weather and their situation. The skin computes the creature’s subjective impression of these things. For instance a breeze makes him feel pleasantly cooler when it’s hot but isn’t very nice when he’s cold and wet. Also he doesn’t feel wet when he’s under shelter, and the sun makes him less hot when he’s in shade, etc.
    • Smell. The hooks are present but I’ve not done any work on this yet.
    • The mouth can bite, suck, poke, etc. at objects or other creatures. It can emit species-specific particles (dragon flames, saliva, kissy hearts, etc.), sounds and other signals that can be picked up by creatures or objects and affect their behavior. It can also speak, although that doesn’t seem to be working right now (I’ve heard some ba-ba baby talk, but no actual words yet). Ostensibly the mouth responds to association signals in the brain in such a way that the creature can narrate what it is doing – “angry, hit mama” – or ask for something it wants – “hungry want food”. There’s a fairly comprehensive system in place for defining a vocabulary, attaching user-created sound clips to words and teaching creatures the names of objects, but the association neurons aren’t responding strongly enough at the moment, so it’s very unlikely you’ll hear any of this until I fix it!
    • Muscles and the motor systems in the brain have been complete for a long time, but now there’s also chemistry to suppress muscle movements when we go to sleep, etc.
    • General pattern recognition neural networks (e.g. for recognizing objects, people and places) have a completely new learning algorithm that self-organizes fairly well whilst allowing for one-shot learning (instead of needing zillions of repetitions, like most SOMs do).
  • General infrastructure.

    • I’ve added a new parameter to the most fundamental signals that get passed from map to map. I used to simplify the complex patterns of nerve activity into simpler information regarding the amplitude and location of peak activity, but now I also transmit information about the ‘peakiness’ of that peak. I call this Q (by analogy with a property of electronic filters) and it can be used for several things. In the inter-map signals, I use Q to specify the “urge” of the signal (while amplitude carries the valence or expected benefit of that signal). It’s as if a broad, diffuse dome of activity represents mental activity that’s more imaginary, while a sharper spike represents a goal to be acted on. So Q can help us differentiate between intentions, thoughts, and other status information like a map’s failure or refusal to respond to a goal.
    • Inside maps the same parameter can control the tolerance of neurons to incoming signals – how specific those signals need to be in order for the neurons to fire. This is important for the executive system, where a cortical column can now learn that we should only eat carrots, or alternatively that it’s ok to eat things that look something like carrots. It also allows the motor neurons in a column to specify how determined they are that their prediction/intention/attention be achieved (so if we’re cool with eating all kinds of foods and we’re attending to a carrot, the column won’t insist we look for an apple instead).
    • This was a small change, but to an essential and ubiquitous data structure, so it took a lot of recoding. I think it will have more applications in the future than I’ve thought of so far.
    • The “neural code” is thus now represented by the position, height and breadth of domes of nerve activity, as it seems to be in real brains. By simplifying the dome shape into just four numbers I can massively reduce the amount of computation involved without losing too much information (the last of the sketches after this list shows roughly what such a four-number signal might look like). The only snag is that these notional domes must be circular (Q is effectively their radius), when in real life they can be any shape at all, and that places some constraints on what creatures can learn (for example it’s hard for them to learn something like “all foods except spinach”). C’est la vie.
    • Now that Q solves some irritating problems and the executive maps are in place, I’ve mostly converged on a global scheme for the signals that send value (valence, predicted benefit and intent) around the brain as a whole. Maps can thus negotiate with each other over which goals are pursued, and provide feedback to each other about the cost/benefit of such actions (which helps with planning – we rehearse what we’re considering, to see how other parts of the brain feel about it). This is all very complex and dynamic, so there are bound to be lots of bugs, but basically it works, and so the brain is now a proper corporate entity, with lots of top-down and bottom-up interactions helping it to become “of one mind”.
    • Instincts have been made more sophisticated, and we now have instincts in the genome for reflexive behaviors like tending to lie down when sleepy or pee when desperate to go, which can overrule the more deliberate intentions. Instincts for high-level executive behaviors don’t exist yet – those need a new gene type, if they’re possible at all (e.g. how do you specify “eat carrots when hungry” when a gloop hasn’t even learned what carrots are yet? In Creatures I had a neat solution to this but it won’t work so easily in gloops. At a pinch I can make recognizing carrots instinctive too, to give it a fixed location on the otherwise malleable object-recognizing map, but this is a slippery slope, especially if I introduce functional categorization (long story)).
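
The short-term memory described above is, at heart, a bookkeeping problem: for each object we know about, remember where we last saw it, what state it was in and how much it currently matters. Here is a minimal Python sketch of that idea; the real system is neural rather than a lookup table, and every name and threshold below is invented purely for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class STMEntry:
    """What short-term memory might keep for one object, person or place."""
    obj_id: str
    last_location: Tuple[float, float]  # so attention and navigation can return to it
    last_state: str                     # e.g. 'angry', 'ripe', 'asleep'
    salience: float                     # how much this thing currently matters, 0..1

@dataclass
class ShortTermMemory:
    entries: Dict[str, STMEntry] = field(default_factory=dict)
    focus: Optional[str] = None         # what the executive brain is thinking about

    def observe(self, obj_id, location, state, salience, interrupt_threshold=0.8):
        """Each glimpse of an object refreshes its record; a sufficiently salient
        object can interrupt whatever the executive is currently attending to."""
        self.entries[obj_id] = STMEntry(obj_id, location, state, salience)
        if salience > interrupt_threshold:
            self.focus = obj_id

    def recall(self, obj_id):
        """The executive asks about an object; we hand back its last known state
        and location, which in turn drives gaze and navigation."""
        self.focus = obj_id
        return self.entries.get(obj_id)
```

The real system presumably does all of this with neural maps rather than dictionaries, but the bookkeeping (last location, last state, salience-driven interruption) is the same in spirit.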
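
The “in the eye of the beholder” idea from the Emotions item can be caricatured in a few lines: the sender only emits a generic threat signal, and the receiver’s own status, personality and mood decide whether that becomes fear or aggression. None of this circuitry exists yet, and the formula and numbers below are entirely made up; the point is only to show where the emotional consequence gets computed.

```python
def respond_to_threat(threat, my_status, their_status, boldness, current_fear):
    """A toy rule showing that the SAME threat signal can produce different
    emotions in different recipients. All inputs are assumed to be roughly
    0..1, and the formula itself is invented for illustration."""
    confidence = (my_status - their_status) + boldness - current_fear
    if confidence >= 0:
        return {'aggression': threat * (0.5 + confidence)}  # stand our ground
    return {'fear': threat * (0.5 - confidence)}            # back away or flee
```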
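
The balancing problem described under Attention is essentially a weighted competition between candidate saccade targets proposed by different subsystems. Here is a toy version of that competition; the per-source gains stand in for the parameters that still need tuning, and every number is invented.

```python
# Candidate saccade targets proposed by different subsystems at this moment.
candidates = [
    {'source': 'motion',      'strength': 0.4},  # something moved in the periphery
    {'source': 'recognition', 'strength': 0.7},  # looks like the thing we're searching for
    {'source': 'waypoint',    'strength': 0.5},  # navigation wants a glance at the next waypoint
    {'source': 'sound',       'strength': 0.3},  # a noise off to one side
]

# Per-source gains: too much weight on 'motion' and Gloop gets distracted at the
# crucial moment; too little on 'waypoint' and he bumps into obstacles.
gains = {'motion': 1.0, 'recognition': 1.2, 'waypoint': 0.9, 'sound': 0.8}

best = max(candidates, key=lambda c: c['strength'] * gains[c['source']])
print('Saccade to:', best['source'])  # 'recognition' wins with these made-up numbers
```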
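
Lastly, the inter-map “neural code” from General infrastructure reduces a dome of activity to four numbers: the peak’s position on the map (two coordinates), its height (amplitude, carrying valence or expected benefit) and its breadth (Q, carrying urgency, with a broad dome reading as idle imagining and a sharp spike as a goal to act on). The sketch below shows one plausible shape for such a signal, and one way Q and a neuron’s own tolerance might jointly decide how fussy that neuron is about its input. The Gaussian fall-off is my own guess, not something stated above.

```python
import math
from dataclasses import dataclass

@dataclass
class MapSignal:
    """One inter-map signal: a circular dome of activity reduced to four numbers."""
    x: float          # position of the peak on the 2-D map
    y: float
    amplitude: float  # height of the dome: valence / expected benefit
    q: float          # breadth of the dome: large = diffuse imagining, small = sharp goal

def neuron_response(signal: MapSignal, pref_x: float, pref_y: float, tolerance: float) -> float:
    """How strongly a neuron with a preferred position responds to an incoming dome.
    A small tolerance means 'only carrots'; a larger one means 'anything carrot-ish'."""
    distance = math.hypot(signal.x - pref_x, signal.y - pref_y)
    width = max(tolerance + signal.q, 1e-6)  # broad signals and tolerant neurons both widen acceptance
    return signal.amplitude * math.exp(-(distance ** 2) / (2 * width ** 2))
```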

Biochemistry

  • I had to abandon my lovely ABXD molecular structure some time ago, so chemicals are now just notional ‘substances’ produced by arbitrary reactions defined genetically. I’m sad about this but something had to give. Even so, the current system of receptors and regulators (which combine the functions of enzymes, reactions and active genes) seems to work just fine and so chemistry as a toolkit is basically complete.
  • The digestive system is complete, apart from minor twiddly bits. Gloops eat carbs, which become sugars, which provide energy and get stored up as glycogen and fats for later. Fat actually makes creatures fatter and heavier. They also eat proteins, which get broken down and used for tissue repair (in a very stylized way). Hunger and thirst are modulated by the amount of spare energy/water in the system, etc. At the other end of the pipeline, gloops pee and poop to get rid of waste products (and can choose to do these things deliberately, perhaps for scent marking, hygiene or distress signals?). (There’s a toy sketch of this pipeline after this list.)
  • Neurochemistry is coming along. There’s a sleep cycle, in which drowsiness rises in response to a light-modulated body clock. During part of this cycle we’re able to shift into REM sleep (chemistry exists but the brain doesn’t do anything yet). And while asleep, the muscles are anesthetized and various brain activities change.
  • There’s also some neuromodulation controlling the brain’s thoughts and decisions. For instance, when a gloop gets distressed (high drives), the rules his brain has learned become more tolerant of input variation and also more likely to be triggered even if they’re not known to be beneficial, so that his willingness to take risks and try new things goes up (briefly sketched after this list).
  • Other chemistry exists for growth and maturation, and control of heart rate and breathing, but there’s more to be done in this area. It’s all just genetics, of course, so it’s pretty easy to do.
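
Since chemistry is now just substances plus genetically defined reactions, the digestive pipeline above can be pictured as a chain of conversions whose rates come from the genome, with hunger tracking how little spare energy is left. The substance names below come from the description above; the rates, the update rule and the hunger formula are all invented for illustration.

```python
# Rough chemical levels (arbitrary 0..1-ish units); the numbers are made up.
chems = {'carbohydrate': 0.5, 'sugar': 0.1, 'glycogen': 0.2, 'fat': 0.3, 'hunger': 0.0}

# Each reaction converts a fraction of one substance into another per tick,
# at a rate that would be specified genetically: carbs -> sugar -> stores.
reactions = [
    ('carbohydrate', 'sugar',    0.10),
    ('sugar',        'glycogen', 0.05),
    ('glycogen',     'fat',      0.01),
]

def tick(chems, reactions, energy_use=0.02):
    """One step of the pipeline: run the reactions, burn some sugar as energy,
    and let hunger rise as spare energy (sugar plus glycogen) falls."""
    for src, dst, rate in reactions:
        moved = chems[src] * rate
        chems[src] -= moved
        chems[dst] += moved
    chems['sugar'] = max(0.0, chems['sugar'] - energy_use)
    chems['hunger'] = max(0.0, 1.0 - (chems['sugar'] + chems['glycogen']))
    return chems

for _ in range(10):
    tick(chems, reactions)
print(chems)
```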
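
The distress-driven risk-taking mentioned above boils down to one global signal loosening two things at once: how closely a learned rule must match the current situation before it fires, and how much proven benefit it needs before it is worth trying. A deliberately tiny sketch, with invented constants:

```python
def modulated_thresholds(distress, base_tolerance=0.2, base_benefit_needed=0.5):
    """Toy neuromodulation: the more distressed a gloop is (0..1), the sloppier
    a pattern match it will accept and the less proven benefit it demands,
    i.e. it becomes more willing to take risks and try new things."""
    tolerance = base_tolerance + 0.5 * distress              # accept looser matches
    benefit_needed = base_benefit_needed * (1.0 - distress)  # act on less-proven rules
    return tolerance, benefit_needed
```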

Genetics

  • A completely new genetics system was built some time ago. The genome is now primarily in human-readable form (.SR1 and .SR2 files), and these get turned into .DNA form later. The code can read and write all three types.
  • Creatures have genetically controlled markings and coloration, horn size, limb length, natural fatness level, etc.
  • Two sexes exist, based on a similar system to human XY, XX chromosome pairs, but all gloops are currently sexless, since while I’m developing the genome I keep them haploid, so that I don’t have to make two copies of every new gene. Turning them diploid again and adding gene variants to allow for large numbers of phenotypes will be one of the last tasks, once the basic genome is complete. I haven’t tackled sex and reproduction yet – it’s finished at the genetics level, but behaviorally it’s a tricky topic!
  • Genus and species are somewhat redefined now. All members of a genus share the same code and basic body structure, but different species within that genus can have different graphics and broad-outline genetics. Individuals belong to a species and genus but can have unique genomes and hence subtle differences in coloration, shape, personality, etc.

World objects

  • There are new global signaling systems in place for inter-organ and inter-object communication, so creatures can grab objects, poke them, etc. and the result will depend on the object. Also, when things emit sounds that we can hear, they emit “shadow” sounds that the creatures can hear and interpret. Ditto smells and visual changes.
  • Objects can have hotspots, so that when a creature looks at another creature, it knows to look at its face, or if it looks at a TV set it’ll know to look at the screen or poke the buttons.
  • Creatures and other objects can be imported and exported by placing them on a transporter object. Currently they all go to disk, but in principle they can be transported across the internet. The state of each object is saved on quit (there will be bugs in this, so if things behave differently from how they were in the previous session, that’s probably why).
  • As well as objects and creatures, we can now have zones. A zone is a geometrical region with certain physical and visual properties – a building, a scatter of trees, a scary cliff-edge. Creatures can learn to recognize patterns of zone features and the zones can specify microclimates, so that creatures can learn to seek shelter when it rains, find some protection from wind, etc. The zone system is in place but there are only a very few, largely pointless zones defined in the world as yet, and although I did a lot of work on vision related to zones I can’t for the life of me remember how it works, so creatures don’t tend to pay any attention to them at all at the moment. Eventually I’d like zones to replace “locales” as the means by which creatures learn to navigate over long distances.
  • Every object, person or zone can be given visual, auditory, olfactory and emotional properties. So what we humans see in the virtual world is mirrored by a simpler description that the creatures see (an invented example of such a description follows this list). It’s simple enough that they don’t need vast amounts of computer power and solutions to as-yet-unsolved problems in order to be able to see and hear, but it’s complex enough that creatures do have to learn for themselves what things look like, rather than just being told that a ball is a ball, etc. Creatures have additional properties, so that they can recognize each other, both as individuals and in terms of family/status relationships, and also know what mood they’re in etc. (e.g. facial expressions). All of this is working but not yet fully implemented in the world, e.g. lots of objects don’t have features yet, and some of those that do have random values, because every time I change the structure I have to redefine them all, which is a waste of effort at this point. Worth knowing, because two objects that look alike to us might look different to gloops.
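
As a concrete (and entirely invented) example of what such a simpler description might contain for one object and one creature: a handful of learnable visual features instead of pixels, hotspots saying where to look or poke, shadow sound and smell channels, an emotional tone, and, for creatures only, identity, relationship and mood cues. None of these field names come from the real implementation; they just illustrate the kind of data involved.

```python
# Hypothetical 'shadow description' of an object, as a creature would perceive it.
carrot = {
    'visual_features': {'hue': 0.08, 'elongation': 0.9, 'size': 0.2},  # learnable cues, not labels
    'hotspots': ['tip'],        # where to direct gaze, pokes or bites
    'sound': None,              # shadow sound channel (silent until something happens)
    'smell': 'vegetable',       # the hook exists even though smell isn't used yet
    'emotional_tone': 0.1,      # mildly pleasant
}

# Creatures carry extra cues so that others can recognize them and read their mood.
mama = {
    'visual_features': {'hue': 0.60, 'elongation': 0.4, 'size': 0.9},
    'hotspots': ['face'],
    'identity': 'mama',         # individual recognition
    'relationship': 'parent',   # family/status relationship
    'expression': 'happy',      # current mood, e.g. readable from the face
}
```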

Environment

  • The weather system is complete, supporting day and night cycles, weather that develops out at sea and rolls in over time, clouds, rain and thunderstorms that come and go, trees that sway in the wind, and suchlike.
  • New dynamic fog. Still needs some work to make it look right, but we can now have wafts of it, rather than an often weirdly-rendered general haze.
  • New temporary ocean. This also needs work and I might replace it with a commercial asset but I have some special requirements, and commercial oceans tend to use a lot of computer power (which isn’t so important in games where most of the world can be disabled unless you’re actually looking at it), so we’ll have to see. We can also have ponds for drinking from, which dry up in hot weather and refill after rain and suchlike, but I haven’t implemented any yet.
  • Lots of lab equipment already exists for the bio-curious, but it’ll need to be remodeled to suit whatever the new scenario looks like. A lot of work has already gone into it, though, and that can be kept.

User interface

  • Most of what I’ve written can be reused, but it’ll be in a completely different form. I don’t like the way I’ve done it so far. We need to be IN the creature’s world, and our tools (lab equipment and stuff) need to be in that same world and not somewhere different. I made a mistake doing it the present way.
  • I did quite a bit of work on full-body avatars and was reasonably happy with the results, but I seem to have lost it. If we decide to have avatars then it shouldn’t be too hard to redo that work, but this is all yet to be decided. There are many pros and cons.
  • I’ve visited the gloops in virtual reality and it’s really quite wonderful to meet them face to face at their real size! But moving around and manipulating things in VR has very different requirements from moving via a screen and mouse (for a start you can easily get motion sickness), so that’s still to be thought about. VR does work, though, and I plan to incorporate it. 
  • I started adding some modding facilities based on Lua scripts a long time ago, but I’ve completely lost track of how far I got with that. Maybe Unity has changed to make modding easier – I’ll have to refresh my memory on this.

Click here for the previous change logs