Thanks for the compliment! It’s one of several science projects I have been working on, on and off, since I was old enough to read, which was a little over 50 years ago. I learned basic electronics from reading American Radio Relay League manuals and its home-study course for adults. I have no formal college education, but I later did a good job of keeping up with developments in AI, ML and Neuroscience. The internet opened up a whole new world. Forums like this one were once just a dream.
Free-space conditions are, as with radio broadcast waves, places where there are no wire-mesh screens or other things disturbing transmission. For the network, that means everywhere except the avoid and reflective places.
To make a wave radiate in a hexagonal geometry, as waves do from a non-directional radio antenna or from casting a stone into a pond, each place/column representing it in our mind requires a 3.5/6 = 58% thrust pattern; otherwise the signal travels out in a straight line or a triangular pattern from the signal source. Going beyond that sends spurious signals and wastes energy.
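As a rough illustration only (the exact 3.5/6 rule is the author's own empirical value and is not fully specified here; this Python sketch is mine, the original is VB6), relaying a signal through all six hexagonal neighbors of each place is what produces the expanding hexagonal rings:

```python
# Minimal sketch of a wave spreading on a hexagonal grid (axial coordinates).
# Each active place relays to its six hex neighbors once per step, so the
# wavefront forms hexagonal rings around the source. The 3.5/6 figure from
# the text is only computed here, not modeled.

HEX_NEIGHBORS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def wave_front(steps):
    """Return the set of hex cells reached after `steps` relay steps."""
    active = {(0, 0)}            # the signal source
    frontier = {(0, 0)}
    for _ in range(steps):
        nxt = set()
        for (q, r) in frontier:
            for (dq, dr) in HEX_NEIGHBORS:
                cell = (q + dq, r + dr)
                if cell not in active:
                    nxt.add(cell)
        active |= nxt
        frontier = nxt
    return active

print(len(wave_front(1)))   # 7: the source plus its first hexagonal ring of 6
print(round(3.5 / 6, 2))    # 0.58 -- the ~58% figure quoted above
```

Ring k of a hex grid has 6k cells, so the counts grow as 1, 7, 19, 37, ... as the wave spreads.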
I do not know enough about the neuroscience of what’s being recorded in this figure from the paper I set out to replicate in code, but that’s what I ended up with in the numbers: when the waves are spreading out nicely, the places have their timing right, and it shows.
The most recently uploaded and almost-alike versions are below. They include a .exe from back in the days before compilers could be hacked via the internet (and then were). It does not install anything to the operating system; it just runs from PC RAM and is removed from memory after you quit the program. If you choose to try it and the program errors out, that’s because I am running without error traps, so that I know when it makes a divide-by-zero or some other math error. Several times I added error traps everywhere I could, but then it became impossible to even know it was making errors, so I got in the habit of leaving them out. The challenge is to have no errors in the first place, which is another easier-said-than-done thing, but I’m working on it.
To force the motor system to show only what the spatial network provides for behavior, visual and other cues like taste are not included in addressing. For one or two hemispheres the general arrangement is otherwise as it was when I went for a variety of easy sensor types and drew out the resulting circuits:
Above you can see what I meant about two hemispheres sharing the load and reducing addressing requirements. Two of the second circuit only doubles the amount once, not many times over for everything on the other side too.
What gets drawn in the spatial map, and when, is the reason I’m here right now. The new ID Lab 7 is running with training wheels: the program provides the X,Y locations of what’s around it in the environment, plus whether each thing is an attract, an avoid, or both.
In the opening-post pictures of what I have right now in 7, we’re seeing what happens when I essentially remove the whole RAM system and go straight to the motors. It’s then a guided missile with just enough virtual neurology to come to a controlled stop where its mouth requires. I found this works great for getting it there, but once it arrives, with no sensory input at all to let it know whether it’s eating or not, it sways back and forth like a zombie while getting a tiny bit each time. An incredibly messy eater. But it’s expected to do that when it knows it has stopped at the right spot yet senses no food there. I also found that it then only avoids the shock zone after having gone right in and eaten enough times while getting zapped the whole time. It lacks the common sense to try another motor action and get out of there.
For the newest version I had to update older code so it has motion-sensing eye signals more typical of biology. Back then I used the simplest thing of all: where there are 7 eye facets/RGB photosensors, number them 1 to 7 and use the brightest (or alternately, most changed) of the seven as the value; when all see nothing, it’s 0. There is then only a three-bit number needing to be addressed. All seven could have been connected to addressing directly, but then it takes 7 bits per eye. The rocking back and forth in front of the feeder for a little while made it easy to optimize the retina-related code, especially the drawing of sectors, which involves a double “evolution of an angle” math problem known to drive programmers nuts when it has to (as in VB6) be perfect in both cases or it draws all wrong. There are tricks using trig to find the least angle and such, but then it’s slowed down by math functions, instead of taking advantage of VB6 compiling 32-bit Long loops and math to machine-level code that is as fast as or faster than C. Once you’re proficient, the Rapid Development Environment does what it says, so it’s been hard for me to give it up.
At this point I more or less have an empty RAM socket and some jumper wires to bypass all that too, for now. The zombie has been useful for testing the bump, shock and other things that otherwise become too rare, and it’s now ready to have a RAM of one kind or another plugged back in. This time, though, visual and other sensors instead connect, through something like HTM, to the spatial network to supply X,Y locations and head angle. I can then take off the training wheels the program was supplying, and the question becomes: “How close to that floating-point double-precision perfection can HTM get?”
The spatial network can be thought of as inside the pointy box on top, where in this case the wave direction at its location in the map is compared to its actual direction to determine whether previous actions worked or not. Motors connect directly to the RAM data outputs. The only thing that can ever load data into the RAM is the guess mechanism. The circuit shown at the very bottom, with one end connected to a 5 volt reference (“Stays 1”), is there to take a best and usually good guess that, after experiencing something never experienced before, it will not (say) slam on the brakes while driving at every new sight passing by. The only way the output data bit can be 0 is by addressing something never experienced before, a new memory. After being addressed for the first time, the output data bit gets set to 1 by the reference at its data input. It was a way to show what the code is doing to make it less bungling, and to be more neural in the sense that it takes more than an insignificant change for it to stop what it’s doing and respond. Like us, it will keep on doing what’s working until something goes wrong.
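The “Stays 1” behavior described above amounts to a novelty bit that reads 0 exactly once per address. A minimal sketch, assuming a set-backed RAM stand-in (the class and names are mine; the original is VB6 arrays and a wired 5 V reference):

```python
# Sketch of the "Stays 1" default: a RAM data bit reads 0 only the first
# time an address is encountered (a new memory); the reference at its data
# input then sets it to 1, so on every later visit the agent keeps doing
# what it was doing unless something actually goes wrong.

class StaysOneBit:
    def __init__(self):
        self.seen = set()          # addresses experienced so far

    def read(self, address):
        novel = address not in self.seen
        self.seen.add(address)     # reference line sets the bit to 1
        return 0 if novel else 1   # 0 = never experienced before

bit = StaysOneBit()
print(bit.read(0b1011))   # 0 -- first encounter, a new memory
print(bit.read(0b1011))   # 1 -- stays 1 on every later visit
```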
The two primary Left/Right and Forward/Reverse efferent motor signals are normally included in RAM addressing, to add recall of what it did following a given successful motor action. This causes actions to be taken step-wise.
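Including the efferent motor bits in the address can be sketched as simple bit-packing (this Python sketch is mine; the bit widths and packing order are illustrative assumptions, not the original layout):

```python
# Sketch of folding the previous motor action into the RAM address: the
# address is the sensor state plus the last Left/Right and Forward/Reverse
# bits, so what is recalled depends on the action just taken -- step-wise.

def ram_address(sensor_bits, left_right, fwd_rev):
    """Pack a sensor value and the two efferent motor bits into one address."""
    return (sensor_bits << 2) | (left_right << 1) | fwd_rev

# Same sensor state, different preceding action -> different memory addressed.
print(ram_address(0b101, 1, 0))   # 22 (0b10110)
print(ram_address(0b101, 0, 1))   # 21 (0b10101)
```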
What do you mean by outputting the same signals? Isn’t the output of the agent motor commands? If so, how would motor commands reflect input?
Outputting what was input results in wave reflection back to the source, as in sound waves reflecting off solid objects. In code I added that as one of the possible properties of an attractor location. All 24 output bits are the same as the 24 input bits, like this:
OutAll(X, Y) = InAll(X, Y)  ' Reflect any incoming signals.
There are no motor commands in this part of the system. The network places only have to relay action potentials from one to another as fast as possible, and to change behavior according to the given properties of anything other than free space, which is each place’s default condition.
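The relay-versus-reflect distinction can be sketched as follows. This Python sketch is mine (the original is VB6), and grouping the 24 bits as six hex sides of 4 bits each is purely my assumption for illustration; the reflect rule itself is just the one-liner above, output bits equal input bits:

```python
# Sketch contrasting the free-space default (relay signals straight on)
# with a reflective place (echo all input bits back toward the source).

OPPOSITE = [3, 4, 5, 0, 1, 2]   # index of the opposite side on a hex place

def step_place(in_sides, reflective):
    """in_sides: six per-side input values; returns the six outputs."""
    if reflective:
        return list(in_sides)                 # OutAll = InAll: back to source
    out = [0] * 6
    for side, value in enumerate(in_sides):   # free space: pass straight on
        out[OPPOSITE[side]] = value
    return out

print(step_place([9, 0, 0, 0, 0, 0], reflective=True))   # [9, 0, 0, 0, 0, 0]
print(step_place([9, 0, 0, 0, 0, 0], reflective=False))  # [0, 0, 0, 9, 0, 0]
```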
Where HTM is used to supply the spatial network with the needed signals, it only has to connect to that alone. The motor RAM system only needs to know how well it’s doing, and will from there take care of the low-level figuring out of what needs to be done, one thing after another, in order to get all its moves right. It should also be possible to use HTM for the motors too, but the hard challenge is predicting, from sensory input, the location and orientation of itself and of what it sees. The rest is, for me, the easy part. After that, the motor signals can be modulated by other brain areas, but you might say this part is the one even zombies have, even the fast-running virtual kind; from there it’s a matter of controlling all it’s capable of, which navigates its way out by not having the added social-skills-related inhibition that would keep it in the mind only.