Let's build an ant?

I was just thinking: aren't we taking on toy models that are too simple? Wouldn't it be better to make an embodied system and actually watch its behavior, just for the satisfaction of getting it to move?

Why not build something akin to a 3D ant body in a simulated, game-like environment with food sources and predators? The ant would have sticky paws, muscles, pheromone markers, and basic sensors: smell, touch, and low-resolution ommatidia-like vision.

We could add an interface so that the simulation could communicate via stdin/stdout with other processes, i.e. the brain, which would receive sensory inputs and send back motor commands.
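
For example, the protocol could be as simple as one line of JSON per simulation tick in each direction. Here is a minimal sketch of the brain side of that loop (all names and the message format are hypothetical, just to illustrate the idea):

import sys, json

# Read one line of sensory data per tick, write back one line of motor commands.
for line in sys.stdin:
    senses = json.loads(line)            # e.g. {"smell": [...], "touch": [...], "vision": [...]}
    motors = {"leg_torques": [0.0] * 6}  # placeholder policy: do nothing
    print(json.dumps(motors), flush=True)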

Then we could make the ant body open source and have it serve as something like a benchmark for embodied models?

I believe an ant is better suited for this because ants are cute, have more interesting behaviors and environments, and are easier to move than bipedal bodies.

Honestly, I'm suggesting this more because it would be cool than because it would be useful.

4 Likes

Also, I’m not suggesting that we should build a full RL system that learns to move each muscle either.

We all know that a large part of insect behavior is hard-coded and inflexible, BUT it's implemented in a nervous system. I think it would be cool to design networks by hand that can perform the basic motor control tasks: something like a central pattern generator for the spinal cord.
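
To give a flavor of what I mean, here is a toy sketch of a hand-designed CPG (my own illustration, with made-up parameters): six phase oscillators coupled so that they settle into an alternating-tripod rhythm, whose outputs could drive the leg muscles.

import numpy as np

n, dt, freq, gain = 6, 0.01, 1.0, 2.0
phase  = np.random.uniform(0, 2 * np.pi, n)
target = np.array([0, np.pi, 0, np.pi, 0, np.pi])  # desired phase offsets (tripod gait)

for step in range(1000):
    # Each oscillator advances at its base frequency, plus a coupling term
    # pulling it toward the desired phase offset from every other oscillator.
    diff = (phase[None, :] - phase[:, None]) - (target[None, :] - target[:, None])
    phase += dt * (2 * np.pi * freq + gain * np.sin(diff).sum(axis=1))
    leg_drive = np.sin(phase)  # one activation per leg, alternating tripods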

4 Likes

It’s a lovely idea, but unfortunately quite impractical. Basically, we don’t know how ants work.

Yes, we can build actuators based on little motors, and sensors that generate electrical signals, but we have absolutely no idea how to build a ‘brain’ that does what an ant brain does. We can study neurons, but all the interesting behaviour is emergent, evolved over millions of years.

Even if we could precisely emulate an ant brain and watch it work, we still wouldn’t know how it works. We couldn’t even start to build a ‘better ant’.

Feel free to tell me I’m way off base and send me a link that shows how. I think I’ll be waiting a while.

2 Likes

I’m not talking about making a realistic ant, more like something that can move and get inputs, so that we have a body for models to play around with.

Getting it to walk and avoid obstacles would already be a milestone.

If you hardcode the motor aspects of it, getting it to do interesting behaviors might be easier too.

My rationale is that we have to start somewhere, and I think moving an ant-shaped oblong hexapod robot with ommatidia cameras is easier than reverse-engineering the cortex anyway.

2 Likes

Also, I’m talking about a game simulation and not a real robot; soft mass-spring systems with Verlet integration and raycasting collision detection ought to be enough.
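
Something like this minimal sketch, assuming unit point masses (raycasting collision left out for brevity):

import numpy as np

pos      = np.array([[0.0, 1.0], [1.0, 1.0]])  # two point masses, unit mass
prev_pos = pos.copy()
springs  = [(0, 1, 1.0, 50.0)]                 # (i, j, rest length, stiffness)
dt, gravity = 0.016, np.array([0.0, -9.81])

for step in range(100):
    forces = np.tile(gravity, (len(pos), 1))
    for i, j, rest, k in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length   # Hooke's law along the spring
        forces[i] += f
        forces[j] -= f
    # Position Verlet: velocity is implicit in (pos - prev_pos).
    pos, prev_pos = 2 * pos - prev_pos + forces * dt**2, pos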

2 Likes

I already have one, and if you visit your local electronics shop you can too. It has six-legged motion, can walk in any direction, and avoids obstacles. Done.

Did I say realistic? Did you even try to answer my point about evolved behaviours?

Making (or buying) the physical model is trivial, but modelling ant behaviour is simply something we don’t know how to do. Ant behaviour is complicated, and it evolved over millions of years. We have no idea how to model that, and it may not even be possible.

1 Like

https://openworm.org/ ??

I wish I could find the California startup that posted on NuPIC-theory along these lines years back. I don’t recall if they were attempting ants, but they were trying to work up from some kind of connectome, as I recall, with some people coming from the C. elegans project.

In general, if you’re talking about embodiment being a simpler path to take than starting with the cortex, you should look at the thread of development around Rodney Brooks and others.

How to Build Complete Creatures Rather than Isolated Cognitive Simulators, Rodney A. Brooks

Rodney Brooks founded the company that developed the Roomba. The most successful commercial “robot” so far? But his more recent robot startup failed.

Hubert Dreyfus gives an interesting summary of the “embodied” thread of AI development in this paper (from the philosophical point of view of rejecting abstraction):

Why Heideggerian AI failed and how fixing it would require making it more Heideggerian
Hubert L. Dreyfus

Heidegger: all about embodiment and process, and rejection of abstract ideas? (Unfortunately for him, associated with the nature-worship thread of fascism!)

“Artificial organisms” might also be a buzzword to search on. Rolf Pfeifer, that I know of, worked on that. Luc Steels was big on this at one point?

I like it, because I’m basically about embodiment too. It’s just that I think the interesting bit is a generalization process on top of that.

5 Likes

Lots of good material there, but I need to remind readers that the reason Numenta even exists is because they focused on the cortex, and found some algorithms. The regularity of the cortical columns seems to be a key factor in allowing the brain to scale, to rely on algorithms that are applicable across a wide range of higher functions, and to adapt much faster than traditional evolution.

From this perspective it would be a mistake to go back to tiny brains, like ants. The focus should be on the smallest, simplest animal that has a cortex (with columns), or on birds (which do the same stuff but differently).

3 Likes

Got it. It wasn’t in NuPIC-theory that they posted. It was in Dan Lovy’s “Bio - AI” Facebook group:

The company must have been PROME. Looks like they are still operating:

https://prome.ai/about.html

In general you might like to poke around in Dan’s FB group if you’re interested in this bottom-up type of approach. I haven’t been active much in recent years, but lots of the early stuff might be relevant.

Tend to agree. I abstract even more. I focus on process, coming from functional analysis. And that relates to the cortex as well as simpler bodies. But insights from embodiment can also feed back into the idea of process built on top of the cortex.

2 Likes

Some kind of virtual robot toy & environment which eventually is transferable to a real one would be cool.

3 Likes

I’ve played with similar ambitions, and here are some of my notes:


I recommend the MuJoCo simulator.
It’s free and open source, and they have a great model of a dog.

Once you’ve pip-installed it (pip install dm_control) you can run it with Python:

from dm_control import suite, viewer
env = suite.load(domain_name="dog", task_name='walk')
viewer.launch(env)

The Python API does not tell you which sensors & muscles are which; instead it gives you a list of unnamed numbers. For RL algorithms that is fine, but it’s not helpful for our purposes, and it’s clearly not how real animals work either. So to fix this I wrote a snippet of Python code to convert the arrays into dictionaries:

dog_body_interface.py
import numpy as np

joint_names = (
    'L_1_extend',
    'L_1_bend',
    'L_2_twist',
    'L_3_extend',
    'L_3_bend',
    'L_4_twist',
    'L_5_extend',
    'L_5_bend',
    'L_6_twist',
    'L_7_extend',
    'L_7_bend',
    'hip_L_supinate',
    'hip_L_abduct',
    'hip_L_extend',
    'knee_L',
    'ankle_L',
    'toe_L',
    'hip_R_supinate',
    'hip_R_abduct',
    'hip_R_extend',
    'knee_R',
    'ankle_R',
    'toe_R',
    'Ca_1_extend',
    'Ca_2_bend',
    'Ca_3_extend',
    'Ca_4_bend',
    'Ca_5_extend',
    'Ca_6_bend',
    'Ca_7_extend',
    'Ca_8_bend',
    'Ca_9_extend',
    'Ca_10_bend',
    'Ca_11_extend',
    'Ca_12_bend',
    'Ca_13_extend',
    'Ca_14_bend',
    'Ca_15_extend',
    'Ca_16_bend',
    'Ca_17_extend',
    'Ca_18_bend',
    'Ca_19_extend',
    'Ca_20_bend',
    'Ca_21_extend',
    'C_7_extend',
    'C_7_bend',
    'C_6_twist',
    'C_5_extend',
    'C_5_bend',
    'C_4_twist',
    'C_3_extend',
    'C_3_bend',
    'C_2_twist',
    'C_1_extend',
    'C_1_bend',
    'atlas',
    'mandible',
    'scapula_L_supinate',
    'scapula_L_abduct',
    'scapula_L_extend',
    'shoulder_L_supinate',
    'shoulder_L_extend',
    'elbow_L',
    'wrist_L',
    'finger_L',
    'scapula_R_supinate',
    'scapula_R_abduct',
    'scapula_R_extend',
    'shoulder_R_supinate',
    'shoulder_R_extend',
    'elbow_R',
    'wrist_R',
    'finger_R')

actuator_names = (
    'lumbar_extend',
    'lumbar_bend',
    'lumbar_twist',
    'cervical_extend',
    'cervical_bend',
    'cervical_twist',
    'caudal_extend',
    'caudal_bend',
    'hip_L_supinate',
    'hip_L_abduct',
    'hip_L_extend',
    'knee_L',
    'ankle_L',
    'toe_L',
    'hip_R_supinate',
    'hip_R_abduct',
    'hip_R_extend',
    'knee_R',
    'ankle_R',
    'toe_R',
    'atlas',
    'mandible',
    'scapula_L_supinate',
    'scapula_L_abduct',
    'scapula_L_extend',
    'shoulder_L_supinate',
    'shoulder_L_extend',
    'elbow_L',
    'wrist_L',
    'finger_L',
    'scapula_R_supinate',
    'scapula_R_abduct',
    'scapula_R_extend',
    'shoulder_R_supinate',
    'shoulder_R_extend',
    'elbow_R',
    'wrist_R',
    'finger_R')

foot_names = ('HL', 'HR', 'FL', 'FR')

touch_sensor_names = ('FL', 'FR', 'HL', 'HR')

def combine_vertebrae(joint_data):
    """ Combine all of the sensory measurements from the vertebrae joints. """
    joint_data['lumbar_extend'] = (
            joint_data['L_1_extend'] +
            joint_data['L_3_extend'] +
            joint_data['L_5_extend'] +
            joint_data['L_7_extend'])
    joint_data['lumbar_bend']  = (
            joint_data['L_1_bend'] +
            joint_data['L_3_bend'] +
            joint_data['L_5_bend'] +
            joint_data['L_7_bend'])
    joint_data['lumbar_twist'] = (
            joint_data['L_2_twist'] +
            joint_data['L_4_twist'] +
            joint_data['L_6_twist'])
    joint_data['cervical_extend']  = (
            joint_data['C_1_extend'] +
            joint_data['C_3_extend'] +
            joint_data['C_5_extend'] +
            joint_data['C_7_extend'])
    joint_data['cervical_bend'] = (
            joint_data['C_1_bend'] +
            joint_data['C_3_bend'] +
            joint_data['C_5_bend'] +
            joint_data['C_7_bend'])
    joint_data['cervical_twist'] = (
            joint_data['C_2_twist'] +
            joint_data['C_4_twist'] +
            joint_data['C_6_twist'])
    joint_data['caudal_extend'] = (
            joint_data['Ca_1_extend'] +
            joint_data['Ca_3_extend'] +
            joint_data['Ca_5_extend'] +
            joint_data['Ca_7_extend'] +
            joint_data['Ca_9_extend'] +
            joint_data['Ca_11_extend'] +
            joint_data['Ca_13_extend'] +
            joint_data['Ca_15_extend'] +
            joint_data['Ca_17_extend'] +
            joint_data['Ca_19_extend'] +
            joint_data['Ca_21_extend'])
    joint_data['caudal_bend'] = (
            joint_data['Ca_2_bend'] +
            joint_data['Ca_4_bend'] +
            joint_data['Ca_6_bend'] +
            joint_data['Ca_8_bend'] +
            joint_data['Ca_10_bend'] +
            joint_data['Ca_12_bend'] +
            joint_data['Ca_14_bend'] +
            joint_data['Ca_16_bend'] +
            joint_data['Ca_18_bend'] +
            joint_data['Ca_20_bend'])

class SensoryInput:
    """ Unpack dm_control's flat observation arrays into named attributes. """
    def __init__(self, sensory_input): # This argument is the "time_step.observation"
        self.joint_pos = dict(zip(joint_names, np.rad2deg(sensory_input['joint_angles'])))
        self.joint_vel = dict(zip(joint_names, np.rad2deg(sensory_input['joint_velocites'])))  # key name as spelled in the observation dict
        combine_vertebrae(self.joint_pos)
        combine_vertebrae(self.joint_vel)
        self.actuator_state      = dict(zip(actuator_names, sensory_input['actuator_state']))
        self.foot_forces         = dict(zip(foot_names, np.array(sensory_input['foot_forces']).reshape(4,-1)))
        self.touch_sensors       = dict(zip(touch_sensor_names, sensory_input['touch_sensors']))
        self.torso_com_velocity  = sensory_input['torso_com_velocity'] # Velocity at the Center-Of-Mass
        self.torso_height        = sensory_input['torso_pelvis_height'][0]
        self.pelvis_height       = sensory_input['torso_pelvis_height'][1]
        self.accelerometer       = sensory_input['inertial_sensors'][0:3]
        self.velocimeter         = sensory_input['inertial_sensors'][3:6]
        self.gyro                = sensory_input['inertial_sensors'][6:9]
        (self.head_yaw,   self.head_roll,   self.head_pitch,
         self.chest_yaw,  self.chest_roll,  self.chest_pitch,
         self.pelvis_yaw, self.pelvis_roll, self.pelvis_pitch) = np.rad2deg(np.arcsin(sensory_input['z_projection']))

3 Likes

If you’d like to learn more about central pattern generators in the spinal cord, then I can recommend reading the work of Professor Ilya Rybak and his associates.

They post free copies of all of their publications on their homepage: http://www.rybak-et-al.net/

2 Likes

Finally, I highly recommend reading the first two pages of “Evolution of behavioural control from chordates to primates” by Paul Cisek: https://www.cisek.org/pavel/Pubs/Cisek2022-PTRSB.pdf

Professor Cisek’s homepage has free copies of his other work too: https://www.cisek.org/pavel/

And for more details about cascaded negative-feedback controllers, there is this article: How Basal Ganglia Outputs Generate Behavior
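
For a flavor of what “cascaded negative-feedback controllers” means, here is a toy sketch (my own illustration, not from the article): an outer position loop whose output is the setpoint for an inner velocity loop.

def outer_loop(target_pos, pos, k_outer=1.0):
    return k_outer * (target_pos - pos)    # output: desired velocity

def inner_loop(target_vel, vel, k_inner=5.0):
    return k_inner * (target_vel - vel)    # output: motor force

pos, vel, dt = 0.0, 0.0, 0.01
for step in range(500):
    force = inner_loop(outer_loop(1.0, pos), vel)
    vel += force * dt                      # unit mass
    pos += vel * dt                        # pos converges toward the target, 1.0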

2 Likes

Thank you, those are great references.

Tbh, I was thinking about making an insect because I’d also like to include visual and olfactory navigation in the model, and also to not worry too much about gravity or balancing.

I thought about making the classic 2D race car with radar sensors, but I thought it was too simple, easy, and boring. Still, I think a dog is a bit too complex for an enjoyable prototyping experience from a motor control standpoint.

3 Likes

I think starting with simpler problems has its own virtues:

  • It increases the chances of figuring out why the resulting agent is either “good” or “bad”.
  • There are also chances to spot some potentially universal principles that could apply at larger scales.
  • Iteration speed. A complex model that needs days to finally solve, or (more often) fail to solve, a given task is intrinsically boring, because you have to wait for the hoped-for improvement.

That being said, I would like to start with Gym’s CarRacing-v0.
Another advantage of an established problem is that you can compare your model’s performance with all other papers/attempts. If you find a solution that outperforms existing ones, it is easy to prove it and (hopefully) influence the ML community.
And don’t forget to quote Schmidhuber: https://arxiv.org/pdf/1803.10122v4.pdf
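
For reference, spinning it up takes only a few lines with the old gym API (a sketch; observations are 96x96 RGB frames, actions are steer/gas/brake):

import gym

env = gym.make("CarRacing-v0")
obs = env.reset()
for _ in range(100):
    # Random policy, just to watch the physics.
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        break
env.close()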

PS: CarRacing uses simplified “vision” instead of the simpler “radars”, and of course it doesn’t have smell. The physics are quite accurate, accounting for factors like inertia and drifting (loss of wheel grip).

Smell, while fundamental throughout the animal kingdom, is also tricky to simulate.

1 Like

Another environment to consider is real-time strategy (RTS) games.
Consider, for example, StarCraft: [SSCAIT] StarCraft Artificial Intelligence Tournament - YouTube

Advantages:

  • They’re fun to play as a human.
  • Many other researchers write AIs for RTS games, so you can compare results.
  • You don’t need to do visual processing; the game’s API will just tell you where units are located (see the sketch after this list).
  • You don’t need to control muscles; you just tell the game which direction each unit should walk towards.

You could build ant-like creatures in an RTS game:

  • There are many units to control.
  • The units need to work together, and their teamwork directly impacts their survival.
  • RTS games are very simple on the surface, but they have an amazing depth of strategies to explore.
    So you can start with dumb basic units and build your way up to advanced units and strategies.
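
As a concrete starting point for StarCraft II, DeepMind’s pysc2 package works this way; a do-nothing agent skeleton looks roughly like this (following the standard pysc2 example pattern):

from pysc2.agents import base_agent
from pysc2.lib import actions

class AntColonyAgent(base_agent.BaseAgent):
    def step(self, obs):
        super().step(obs)
        # obs.observation carries unit data directly; no pixel processing needed.
        return actions.FUNCTIONS.no_op()   # replace with real orders

You can then run it with something like: python -m pysc2.bin.agent --map Simple64 --agent your_module.AntColonyAgent
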
2 Likes

I am not sure it is so boring. What’s important, at least for me, is to understand and to demonstrate that HTM-based RL works, because there are some works on this topic, but we still do not see any working demo.

1 Like

I played with Crobots a long time ago and it gave me a healthy respect for the difficulty of programming a moving critter. Just doing basic movement and avoiding running into walls is a lot harder than I thought.

Running into something (wall, other robots) causes damage and you die.
Sitting still means that you get shot at and die.
Figuring out where the other robots are and shooting at them when they are moving and shooting at you manages to capture a significant part of what real critters have to do to survive.
You have a very limited code space and you are limited in how much code runs on each time step, the same as the other robots. Again, part of the limits real critters have to work with.

That said, I battled with some gaming buddies and learned a lot about how simple methods could be very effective.

Check out the list of Crobots C functions on this site. Basic, but enough to build a moving/fighting critter.
https://corewar.co.uk/crobots.htm

I have thought about porting a version to Unity (cool 3D graphics for the robots!), but it’s too much of a time sink considering how many other projects want attention.

4 Likes

This makes sense, but keep in mind one thing. In higher brains (mammal and bird) the basic mechanics of movement, including sensory feedback loops, were created by evolution. The cortex is like a jockey or a race-car driver. Likewise the encoding of sensory input into SDRs (or equivalent), and the converse decoding of motor output. If you don’t want to wait millions of years, you get to code all that the hard way, by software engineering.

But absolutely, given a machine (real or virtual) with all the sensory, motor, encoding and decoding in place, an AI with the capabilities of a mammalian cortex would be a formidable thing.

But we already have most of that in the modern motor car. What you just described is precisely the challenge facing Tesla, one which has so far defeated them. It must be quite hard. :grin:

2 Likes

I very much disagree with you. Self-driving cars (such as Tesla’s) still need to do computer vision, and that’s a large part of what has defeated them. For example, one Tesla mistook the side of a green truck for an overhead sign and tried to drive under it. In contrast, RTS games (and the StarCraft API in particular) have no computer vision component.

The other problem with self-driving cars is that driving is a very high-stakes environment. Society wants self-driving cars to be at least as safe as human drivers. Compare this to ants, which routinely wander into puddles and drown.

3 Likes