Oscillatory “Thousand Brains” Mind's Eye For HTM?


#1

Even though I have evidence that the oscillatory dynamics of at least the hippocampal end of the cortical sheet will eventually be vital to include in computations, it works so well with HTM theory that I'm hopeful you'll nonetheless take all this as good news. The "Thousand Brains Model of Intelligence" is a good description of the place field properties needed in the brainwave-driven spatial reasoning network I developed and experiment with, where my primary focus has been the three-layer hippocampal tip of the cortical sheet that from there connects to the motors.

I found that simple rules for wave propagation produce the complex, two-spatial-frame reasoned behavior needed for the kind of navigational intuition that we have. This is the clue-filled paper I most modeled from:

As shown in a figure inside the paper, there is a very pronounced (under ideal free-space conditions) 58% ratio that, at least in my model, exists in the geometry of wave propagation, where input signals are negated to produce the output responses needed to generate traveling waves in multiple directions. From what I can see from testing in a virtual moving-shock-zone arena, all else checks out with regard to displaying the intuition to go around hazards and wait behind where it's safer, while doing all the amusing things real animals do in such dilemmas, including being "startled".

Outputting the same signals as were input reflects a wave back toward its source. Doing nothing causes waves to flow around that place, but not through it. Expressed in If…Then… logic, the rules (including an echo-location mode) are:

Propagate the 2D wave signal, from one place to the next, once.
This is accomplished by each place outputting the opposite of whatever signal it receives.
Same as when people make a 1D wave, by standing up while the neighbor next to them sits down.
For hexagonally located 2D places this creates 2D waves, as in casting a stone into a pond.
In and Out data is stored for later retrieval, one bit at a time or as a single 24 bit number.
To avoid data clash errors the In and Out arrays are separately updated.

Private Sub Propagate(F As Long)
Dim X As Long
Dim Y As Long
Dim N As Long

'Serialize input Bits from neighboring connections, store in the InBit and InAll arrays.'
  For X = 1 To NetWidth
    For Y = 1 To NetWidth
        InAll(X, Y) = 0
      For N = 0 To 23
        InBit(X, Y, N) = OutBit(X + NeiX(N), Y + NeiY(N), NeiN(N))
        InAll(X, Y) = InAll(X, Y) + (InBit(X, Y, N) * Pwr2(N))
      Next N
    Next Y
  Next X

'Store the 24 bit Output for each place, into the OutAll array.'
  For X = 1 To NetWidth
   For Y = 1 To NetWidth
'ATTRACTS.'
    If Attract(X, Y) = 1 Then                 'If an Attractor location then'
      If EchoLocateCheck = 1 Then               'If echo locate mode as in bats then'
         OutAll(X, Y) = InAll(X, Y)             'reflect any incoming signals.'
      End If
      If MapLocateCheck = 1 Then                'If map mode to/from attractor then'
        If ToggleOnOffCheck = 1 Then            'either toggle attractor on then off.'
          If OutAll(X, Y) = 0 Then
             OutAll(X, Y) = 16777215
          Else
             OutAll(X, Y) = 0
          End If
        Else                                    'or always signaling.'
             OutAll(X, Y) = 16777215            'Start wave outward in all directions.'
        End If
      End If
     GoTo NextPlace
    End If
'AVOIDS.'
    If Avoid(X, Y, T) = 1 Then                'If an Avoid location then'
       OutAll(X, Y) = 0                         'outputs nothing.'
     GoTo NextPlace
    End If
'BARRIER, BORDER, BOUNDARY.'
    If Barrier(X, Y) = 1 Then                 'If a Barrier location then'
       OutAll(X, Y) = 0                         'outputs nothing.'
     GoTo NextPlace
    End If
'PROPAGATE, default mode that waits for signal to be received then does opposite of input.'
    If InAll(X, Y) = 0 Then                   'If all 24 inputs are quiet no AP then'
       OutAll(X, Y) = 0                         'so are all 24 Outputs.'
    Else                                      'Else one or more action potentials were received'
       OutAll(X, Y) = 16777215 - InAll(X, Y)    'negate Input, to derive opposite Output.'
    End If
NextPlace:
   Next Y
  Next X

'Extract single Bits from OutAll() into OutBit().'
   For X = 1 To NetWidth
     For Y = 1 To NetWidth
       For N = 0 To 23
         OutBit(X, Y, N) = (OutAll(X, Y) And Pwr2(N)) / Pwr2(N)
       Next N
     Next Y
   Next X

End Sub

Although it's easy for many to read, I code in VB6, which few use anymore; that should not be a problem, though, since all you probably need for code right now is the above subroutine with the rules in it. I now have an updated model for going from retinal motion action potential signals to V1, then across and under the sheet, to supply the coordinates (instead of the program supplying exact locations) to the hippocampal end for the Forward/Reverse and Left/Right signals. That is of course all easier said than done, even where I'm happy to have so far only modeled the intelligence level of an insect. Going further with my model made it worthwhile to further study your videos until I had a general idea of how to explain things to you, then exchange notes in this forum.

Signals flowing through each place contain information about what's going on in the outside world. For example, intermittently received spiral waves indicate that what it's after (an attractor) is dangerously near a place/thing to avoid, as in the image below.

When the environment is less busy waves radiate normally like this:

I do not know what the cells would do with that information, but it's nonetheless contained in the signal pattern. One more thing to possibly go with the Thousand Brains way of thinking.

I have long had success with memory modeling basics as per David Heiserman, where, as in what you call a spatial pooler, a robot's sensory bits from whatever it has for sensors are connected in any order (same results either way) to the address inputs of a RAM of some kind, including what could be called an HTM-RAM with a best/educated motor data response guess mechanism included. For digital RAM that has to be added by taking at least random guesses when the confidence level in a given motor action goes to zero. Two-bit (0 to 3) confidence levels are enough, almost a magic number. A floating point analog could be used for confidence levels too, but it should never take 10,000 failures before sensing it's time to try something different. With multiple motor systems normally working together over time, the overall confidence levels, and likewise the two-bit motor actions, become more complex than two data bits per motor per timestep suggests.
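To make the idea concrete, here is a minimal Python sketch of that kind of guess-when-confidence-hits-zero motor memory. The class and method names are made up for illustration only; my actual code is VB6 and differs in detail.

```python
import random

class MotorMemory:
    """RAM-like motor memory: sensory bits form an address, and each
    addressed cell holds a guessed motor action plus a two-bit (0..3)
    confidence level.  Names here are hypothetical, for illustration."""

    def __init__(self, n_actions=4, seed=0):
        self.n_actions = n_actions
        self.cells = {}                  # address -> [action, confidence]
        self.rng = random.Random(seed)

    def act(self, address):
        """Return the remembered action for this sensory address,
        taking a random guess only when the cell is new or confidence
        has gone to zero."""
        cell = self.cells.setdefault(address, [None, 0])
        if cell[0] is None or cell[1] == 0:
            cell[0] = self.rng.randrange(self.n_actions)   # random guess
            cell[1] = 1
        return cell[0]

    def reward(self, address, success):
        """Raise confidence toward 3 on success, lower toward 0 on failure."""
        cell = self.cells[address]
        cell[1] = min(3, cell[1] + 1) if success else max(0, cell[1] - 1)
```

The point of the 0..3 range is that a working action survives a couple of failures before the memory gives up on it and guesses again, instead of needing thousands of failures.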

HTM makes sense in a plugs-right-in sort of way. It's not possible to address more than 28 digital RAM bits before my personal computer is out of memory. Dividing into two brain hemispheres is a big help in reducing the needed memory size, and there is much that can be done with two 24 bit RAM arrays. It's also very fast. Although that addressing bit limit would not exist with HTM, I would still need to do the same by running two systems in parallel, or alternating. Otherwise it would not be taking advantage of the wonders of bilateral symmetry, or have enough biological accuracy to impress neuroscience. I would not want to go past the Left/Right and Forward/Reverse upper-level controls found in all animals by adding a bunch of legs that stand in 3D and slow down the PC just to look fancy. I believe the important part is establishing the very basics of what ultimately makes it possible to model things in a Mind's Eye at least as well as we can with ours. I hope you find what I have a good start in that direction.


#2

This is fascinating, but I’m ignorant about a few terms you are using. Can you elaborate on a couple things? Obviously you have spent years on these ideas. I love the detail in the sensor arrays and visualizations of different sensor / creature states. Brilliant work!

Can you elaborate what you mean by this? Is your model passing an efference motor signal in along with the sensory data?

What do you mean by outputting the same signals? Isn’t the output of the agent motor commands? If so, how would motor commands reflect input?


#3

Thanks for the compliment! It's one of several science projects I have been working on, on and off, since I was old enough to read, which was a little over 50 years ago. I learned basic electronics from reading American Radio Relay League manuals and their home-study course for adults. I have no formal college education, but I later did a good job of keeping up with developments in AI, ML and Neuroscience. The internet opened up a whole new world. Forums like this one were once just a dream.

Free-space conditions are, as with radio broadcast waves, places where there are no wire mesh screens or other things disturbing transmission. For the network, the disturbances are the avoid and reflective places.

To make a wave radiate in a hexagonal geometry, as waves do from a nondirectional radio antenna or from casting a stone into a pond, each place/column representing it in our mind requires a 3.5/6 = 58% thrust pattern; otherwise the signal travels out in a straight line or triangular pattern from the signal source. Beyond that ratio it sends spurious signals, wasted energy.
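The numbers can be pinned down in a few lines. This Python sketch only shows the arithmetic and the fact that the six hexagonal directions are balanced, which is what makes radiating in all directions possible at all; the names are illustrative.

```python
import math

# The six hexagonal neighbor directions, 60 degrees apart.
HEX_DIRS = [(math.cos(math.radians(60 * k)), math.sin(math.radians(60 * k)))
            for k in range(6)]

# The thrust ratio that shows up in ideal free-space propagation:
THRUST_RATIO = 3.5 / 6          # = 0.5833..., i.e. the 58%

# The six unit vectors cancel out, so a balanced signaling pattern can
# radiate in all directions instead of thrusting along one straight line:
assert abs(sum(x for x, y in HEX_DIRS)) < 1e-9
assert abs(sum(y for x, y in HEX_DIRS)) < 1e-9
```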

I do not know enough about the neuroscience of what's being recorded in this figure from the paper I set out to replicate in code, but that's what I ended up with in the numbers when the waves are spreading out real nice; the places have their timing right, and it shows.

The most recently uploaded, and almost alike, are below. They include a .exe from back in the days before compilers could be hacked via the internet (then were), and it does not install anything to the operating system; it just runs from PC RAM, then is taken out of memory after you quit the program. If you choose to try it and the program errors out, that's because I am running without error traps, so that I know when it made a divide-by-zero math error or something else. Several times I added error traps everywhere I could, but then it became impossible to even know it was making errors, so I got in the habit of leaving them out. The challenge is to have none in the first place, which is another easier-said-than-done thing, but I'm working on it.

https://sites.google.com/site/intelligencedesignlab/home/IDLab6-0.zip
https://sites.google.com/site/intelligencedesignlab/home/IDLab6-1.zip
https://sites.google.com/site/intelligencedesignlab/home/IDLab6-2.zip

To force the motor system to only show what the spatial network provides for behavior, visual and other cues like taste are not included in addressing. For one or two hemispheres the general arrangement is otherwise as it was when I went for a variety of easy sensor types and drew out the resulting circuits:

You can see above what I meant about two hemispheres sharing the load, reducing addressing requirements. Two of the second circuit only doubles the amount once, not a whole bunch of times for everything on the other side too.

What gets drawn into the spatial map, and when, is the reason I'm here right now. The new ID Lab 7 is running with training wheels: the program provides the X,Y locations of what's around it in the environment, plus whether each thing is an attract, an avoid, or both.

In the opening post's pictures of what I have right now in 7, we're seeing what happens when I essentially remove the whole RAM system and go straight to the motors. It's then a guided missile with just enough virtual neurology to come to a controlled stop where its mouth requires. I found this works great for getting it there, but once it does, with no sensory input at all to let it know whether it's eating or not, it sways back and forth like a zombie while getting a tiny bit each time. An incredibly messy eater. But it's expected that it should do that when it knows it's stopped at the right spot but senses no food there. I also found that it then only avoids the shock zone after having gone right in and eaten enough times while getting zapped the whole time. It lacks the common sense to try another motor action: get out of there.

For the newest version I had to update older code so it has motion-sensing eye signals more typical of biology. Back then I used the simplest thing of all: where there are 7 eye facets/RGB photosensors, number them 1 to 7 and use the brightest (or alternately the most changed) of the seven as the value, and when all see nothing it's 0. There is then a three-bit number needing to be addressed. All seven could have been connected directly to addressing, but then it takes 7 bits per eye. The rocking back and forth in front of the feeder for a little while made it easy to optimize the retina-related code, especially the drawing of sectors, which involves a double "evolution of an angle" math problem known to drive programmers nuts when it has to (as in VB6) be perfect in both cases or it draws all wrong. There are tricks using trig to find the least angle and such, but then it's slowed down by math functions, instead of taking advantage of VB6 compiling 32 bit Long loops and math to machine-level code that is as fast as or faster than C. Once you're proficient, the Rapid Development Environment does what it says, so it's been hard for me to give it up.
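For what it's worth, the least-angle part can be done without trig functions at all. A sketch of the modulo trick, shown here in Python rather than the VB6 I actually use:

```python
def least_angle(target, heading):
    """Smallest signed rotation (degrees) from heading to target,
    wrapped into the -180..+180 range, with no trig functions."""
    return (target - heading + 180.0) % 360.0 - 180.0
```

For example, from a heading of 350 degrees to a target of 10 degrees this returns +20, not -340, which is exactly the wrap-around case that otherwise draws all wrong.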

At this point I more or less have an empty RAM socket and some jumper wires to bypass all that too, for now. The zombie has been useful for testing the bump, shock and other things that would otherwise become too rare, and it's now ready to plug a RAM of one kind or another back in. This time, though, visual and other sensors connect through something like HTM to the spatial network, to supply X,Y locations and head angle. I can then take off the training wheels the program was supplying, and the question becomes: "How close to that floating point double precision perfection can HTM get?"

The spatial network can be thought of as inside the pointy box on top, where in this case the wave direction at its location in the map is compared to its actual direction, to determine whether previous actions worked or not. Motors directly connect to RAM data outputs. The only thing that can ever load data into the RAM is the guess mechanism. The circuit shown at the very bottom, with one end connected to a 5 volt reference ("Stays 1"), is there to take a best, and usually good, guess that after experiencing something never experienced before it will not, say, slam on the brakes while driving at every new sight passing by. The only way the output data bit can be 0 is by addressing something never experienced before, a new memory. After being addressed for the first time, the output data bit gets set to 1 by the reference at its data input. It was a way to show what the code is doing to make it less bungling, and more neural in the sense that it takes more than an insignificant change for it to stop what it's doing and respond. Like us, it will keep on doing what's working until something goes wrong.

The two primary Left/Right and Forward/Reverse efferent motor signals are normally included in RAM addressing, to add recall of what it did following a given successful motor action. This causes actions to be taken step-wise.

Outputting what was input results in wave reflection back to the source, as in sound waves reflecting off solid objects. In code I added that as one of the possible properties of an attractor location. All 24 output bits are the same as the 24 input bits, like this:

         OutAll(X, Y) = InAll(X, Y)             'reflect any incoming signals.'

There are no motor commands in this part of the system. The network places only have to relay action potentials from one to another as fast as possible, and be set to change behavior according to the given properties of things other than free space, each place's default condition.

Where HTM is used to supply the spatial network with the needed signals, it only has to connect to that alone. The motor RAM system only needs to know how well it's doing, and will from there take care of the low-level figuring out of what needs to be done, one action after another, in order to get all its moves right. It should also be possible to use HTM for the motors too, but the hard challenge is predicting, from sensory input, the location and orientation of itself and of what it sees. The rest is for me the easy part. After that, the motor signals can be modulated by other brain areas, but you might say this is the part even zombies have, even the fast-running virtual kind; from there it's a matter of it acting out everything it's capable of navigating, for not yet having the social-skills-related inhibition that would keep it in the mind only.


#4

The following link is for a VB6 implementation to demonstrate the behavior resulting from following network wave flow. There is no motor memory system, which makes it unable to take evasive action after entering a shock zone. But otherwise after enough experience the virtual critter does surprisingly well going around instead of back into trouble again.

https://sites.google.com/site/intelligencedesignlab/home/IDLab7.zip

It greatly increases the size of the download to include the needed 24 bit input to direction vector files, used to simplify the math/logic-related code and increase program speed. These will be automatically calculated after starting the code for the first time. The array contents are then saved to disk, for a fast startup after that. The first startup might take half a minute or more, and the much larger download can in some cases take much longer.

There are comments, including links to videos and papers, contained in each respective .frm module. These can be read without VB6 by opening the .frm files as text files.

Credit for getting this online by now goes to the HTM Hackers' Hangout - Apr 6 2018, where it ended up with at least the silent critter on the screen one minute into the live discussion (later a video), to indicate we were there too, to work on something, though at the time we did not know what.


#5

If you are interested in experimenting with the wave generated behavior alone then I can quickly clean up and upload what I have right now. You could then hopefully help me explain this at the next Hangout.

After adding a motor memory system the only way to see the base behavior is to take the added code out of the circuit anyway. VB is wonderful for rapid development of an idea but has no bitwise operators to speak of. If Javascript looks good and is fast enough then I would want to add a motor memory system to whatever you end up with, unless of course you beat me to it. I would not mind.


#6

This would be very awesome if you can get me something to start.


#7

Then I’ll get to work on it!

At least the wave-related part is easy. With there being so many ways to add in a motor memory, the step after that is a whole other project that can go from David Heiserman on into HTM, and then well over a month's worth of possible experiments.

I'm happy enough with the way it shows where the simple Input to Output rules apply. Version 6 may have confused everyone by the way I, for speed, pretrained a memory.

I will leave in the way it first preloads an array for converting 24 input bits to a wave flow angle, its proper head/body angle. I found that there are a few possible directions being pointed out. The correct response is whichever is closest to the current heading. The change should make it even better at getting around the shock zone, and hopefully eliminate the loading of such a large array, kept just for convenience's sake. That too can be added later, but if it looks easy to improve upon what I have, then go for it.
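A rough Python sketch of that idea, assuming for illustration that bit N points along N × 15 degrees (my actual bit-to-angle layout differs, and it assumes at least one bit is set):

```python
def flow_angles(in_bits):
    """Directions (degrees) of the set bits among the 24 place-to-place
    connections, assuming for illustration bit N points at N * 15 deg."""
    return [n * 15.0 for n in range(24) if in_bits >> n & 1]

def best_heading(in_bits, heading):
    """Of the few directions the wave flow points out, pick whichever
    is closest to the current heading (smallest turn either way)."""
    def turn(angle):
        return abs((angle - heading + 180.0) % 360.0 - 180.0)
    return min(flow_angles(in_bits), key=turn)
```

Computing the candidates on the fly like this is what should eventually replace loading the giant precalculated array.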

I’ll post a link to the code, hopefully in a matter of hours.


#8

Here we go, sloppy eater and all!

https://sites.google.com/site/intelligencedesignlab/home/IDLab7.zip

It tripled the size of the download to include the 24 bit input to direction vector files. Those will be automatically calculated after starting the code for the first time then the array contents saved to disk to start up fast after that. The first startup might take half a minute or so.


#9

I fixed up a number of things. The back and forth swaying at the feeder is all gone; the problem was more like a bug in the bug. It's now hard to notice what's missing from its not yet having a motor memory system.

The Spatial form/module was sensing a very rough wave flow angle that caused it to travel more in straight lines along the 3 hexagonal axes. Part of the reason was that, as an experiment, I had commented out code in the AvAngleOf7 function that previously provided better results, to see what happens. That part is now back in working order.

The Spatial.Timestep subroutine was given a going over, including comments. There is now only one angle “BdyAng” needed by the Body form. It previously had one for the mouth as well but at this point does not seem needed.

Spatial.Propagate no longer requires a Frame number; that feature was removed some time ago. A "2 On Outs Cancel" checkbox to test a low-power propagation mode was added.

“NavNet” checkbox was renamed to “Spatial” to match its module name.

Body.Timestep is slightly simplified, mainly from only using the Spatial.BdyAng variable, instead of two that together did not work as well.

To keep things in one place I’ll link to the new version and add additional information to the following thread, where I already went into some detail:

I next need to explain more about how it works in regard to map sparsity, the ideal ratio of 58%, and after that maybe Matt's useful-looking visual aid. I have already, though, bombarded your Hangout thread with enough work-in-progress type information. I now have the VB code all set to make a good first impression, to follow all the rest of the information, which reads well but at this point just talks about it.

At least I can honestly say that the Hangout event far exceeded my expectations. You made a miracle happen. Everyone who has for around a year been patiently waiting for me to upload the newest will be thankful too. For Matt and others their mission was well accomplished. Thanks again!


#10

Breakthrough!

After setting the program to start up in the "2 Outs On Cancel" mode, which (as in Heiserman motor bit methodology, where binary 11 is stop, like 00) cancels out cross-connected outputs, the network holds vectors even when the attractor is surrounded by avoids, which previously caused the whole network to stop propagating, a dreaded freezing-up problem. Under ideal propagation conditions it still maintains a 58% ratio, in that case though of not signaling instead of signaling.

Since it was a minor change that only involved a few lines of code, and it also removed a bug that caused an error when program-calculated vectors were shown on the screen, it has the same filename:

https://sites.google.com/site/intelligencedesignlab/home/IDLab7.zip

Now I do not have to explain a part that greatly complicated going into further detail. I was not sure how to describe where and when the network needed to be cleared for proper navigation. The simple answer is now: never, so far as I can see from watching the even more complex behaviors expected of animals that have a good navigational system like ours.


#11

@Gary_Gaulin, wanted to ask… I believe you model wave phenomena with something like a per-cell velocity vector. Did you consider the possibility that maybe a "refractory period" after firing could be enough to ensure that a wave goes "forward", instead of having to deal with 58% hacks and such?


#12

This is the simple code that passes waves. It's inside the code I earlier posted for the Propagate subroutine. It does all 24 I/O bits at once. 16777215 = 111111111111111111111111 in binary. It's most like people making stadium waves:

'PROPAGATE, default mode that waits for signal to be received then does opposite of input.'
If InAll(X, Y) = 0 Then                   'If all 24 inputs are quiet no AP then'
   OutAll(X, Y) = 0                         'so are all 24 Outputs.'
Else                                      'Else one or more action potentials were received'
   OutAll(X, Y) = 16777215 - InAll(X, Y)    'negate Input, to derive opposite Output.'
End If
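A side note, sketched here in Python: because InAll never exceeds 24 bits, that subtraction from all-ones is exactly a 24 bit bitwise NOT, flipping every connection's bit.

```python
FULL = 16777215                 # 2**24 - 1, all 24 connections signaling

def propagate_place(in_all):
    """The default free-space rule: quiet stays quiet, otherwise
    negate the 24 input bits to derive the 24 output bits."""
    return 0 if in_all == 0 else FULL - in_all

# Subtracting from all-ones flips every bit, the same as XOR:
assert all(FULL - x == FULL ^ x for x in (1, 0b101, 16777214))
```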

Another way to explain the like-magic in the number 58% is that it's the average fraction of place-to-place connections that will be signaling at any given time, from the inversion of input to output, resulting in waves that radiate nicely outward in all directions. If it were 20% there might be more of a pie-shaped sector, with the rest of the circular radiation pattern missing. At greater than 58% there is too much signal, and a mess of waves that can even send waves backwards as they also travel forward.

Since neighbor-to-neighbor outputs both spiking at the same time are essentially zapping each other for no good reason (the two cancel out anyway), the radiation pattern can be reduced, and when it is, 58% are not signaling instead of 58% signaling. This is not because of anything I coded into the program; it's in the math and geometry, in the same way the distance around a circle has a like-magic number of 3.1415…, and if you go around the circle in the other direction the magic number is instead -3.1415….

It's very much as if the part of the paper showing the 58% ratio were, for circles, saying: "There is a geometry-related process going on here, through which with every revolution the signal path covers 3.14 times more distance than its diameter."


Project : Full-layer V1 using HTM insights
#13

To help explain the simple process I wrote this 1D example. I’m thinking of adding it to the program code comments.

5 Places are shown, each has a one bit Input and one bit Output:

    Input1 bit.
    |   
  <-00-00-00-00-00->		
     |  
     Output1 bit.

Starting a 1D wave (in the left-to-right direction only) at the Place3 input looks like:

    1  2  3  4  5   
-----------------------------------------------------
  <-00-00-00-00-00->    No signal.
  <-00-00-01-00-00->    Place3 set to Attractor, will start one wave.
  <-00-00-01-10-00->    Output3 signals to Input4, then is no longer set as Attractor.
  <-00-00-00-01-10->    Output4 is now opposite of what Input4 was, signals to Input5.
  <-00-00-00-00-01->    Output5 is now opposite of what Input5 was.
  <-00-00-00-00-00->    No signal.

For 2D waves use 6 of the above in a hexagonal pattern, each place then has 6 sectors.
For more sectors (angles in-between) use additional back and forth connections between neighbors.
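Here is a loose Python sketch of the same 1D relay, one wave in one direction, that reproduces the handoff shown above row by row (names made up for illustration):

```python
def step(ins, outs):
    """One tick of the 1D left-to-right relay: each place's Input takes
    its left neighbor's Output, and its Output relays its old Input
    onward.  Both arrays are updated from the previous tick's values,
    as in the separately updated In and Out arrays of the VB6 code."""
    new_ins = [outs[i - 1] if i > 0 else 0 for i in range(len(ins))]
    new_outs = ins[:]                    # pass last tick's input onward
    return new_ins, new_outs

# Start one wave at Place3 (index 2), as the Attractor does:
ins, outs = [0] * 5, [0] * 5
outs[2] = 1
for _ in range(6):                       # enough ticks for the wave to exit
    ins, outs = step(ins, outs)
assert ins == [0] * 5 and outs == [0] * 5    # wave has passed off the end
```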

#14

Thank you for trying to break things down to simple pieces. Should help (me at least) to understand.
(Still reading Marr’s book, taking a while… sorry for the delay in the reply I promised)


#15

I recently got an inexpensive old copy of ‘Artificial Intelligence: A Modern Approach’ by Stuart Russell and Peter Norvig, who have a small section in chapter 25 of their book called “Situated automata”.

Some of the ideas presented in that section are quite similar to your own, Gary, and if folks could borrow/acquire a copy of the book, I recommend taking a look at that section and its suggested structures as well. It covers early research into using tiny finite state machines to carry out complex functions, super similar to what you’re doing here in your design. There might be something there that you could use further.


#16

That's one I never knew about. After looking around online I found a PDF of the now 23-year-old book:

http://sclab.yonsei.ac.kr/courses/04FuSys/Artificial%20Intelligence%20-%20A%20Modern%20Approach%20-%20Russel,%20Peter%20And%20Norvig,%20Stuart%20(Ebook%20-%20English).pdf

This I think is the start of what you were describing:

Each place in the network does resemble a state machine, although logic gates are simpler and make a traveling oscillation that moves through the network as fast as the neurons can oscillate (change states). The attract input to start a wave holds all outputs at 1, the input to make an avoid holds all outputs at 0, and otherwise it's the normal mode that negates all inputs in response to a signal at any input, or reflects waves by not negating, so that input equals output. To start a wave at a given place, neurons only need to connect an axon to the respective place/column, then zap it with enough energy to get a wave started. The wave will then stay going on its own. Locations to avoid can be added using axons that inhibit the entire place/column. In both cases it's possible to influence the inputs that surround each place, instead of the central place/column itself. The network is still very much like the surface of a pond that makes nice waves when energy is thrown in somewhere, where there can also be tiny islands that waves can go around but not through.
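The per-place state machine behavior described above can be sketched in Python like this (the mode names are just illustrative strings, not anything from my VB6 code):

```python
FULL = 16777215                  # 2**24 - 1, all 24 connections signaling

def place_output(in_all, mode):
    """One place/column's 24 bit output for one tick, by mode."""
    if mode == "attract":
        return FULL              # hold all outputs 1: starts/keeps a wave
    if mode == "avoid":
        return 0                 # hold all outputs 0: waves go around
    if mode == "reflect":
        return in_all            # echo incoming signals back toward source
    # normal free space: quiet stays quiet, else negate the input
    return 0 if in_all == 0 else FULL - in_all
```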

Earlier versions of the ID Lab used a truth table (logic gate), but the rules showed up better when coded using If…Then… statements.

If you think I should use the If…Then… rules to store equivalent logic gate behavior into a RAM array, then I will. The Propagate subroutine is then just a tiny thing; the If…Then… statements now take up more room than anything else. Since there are 24 sectors it would take 26 bits of address space to do so, which is almost all the RAM a modern personal computer has. But I at some point wanted to reduce that down to 12 or so bits, in a way that adds the ability to sense the multiple directions each place can at times point to in its wave flow. This should also replace the two now-giant arrays, which have to go too. I started at 6 sectors, which is a tiny array. Then I went to 12 sectors, no problem there either. But after doubling again to 24 it became a long wait for the program to precalculate all the array data.

Relating this to HTM theory, it might work just as well for each place/column to have the ability to predict what to output, instead of an outside circuit overriding the normal propagation of a more passive circuit that otherwise just passes waves by negating inputs to derive outputs. The 58% ratio is what the negation process causes to emerge in the network from an attractor (by staying on, not one pulse) sending a continuous wave through empty space (no avoids or reflective places).

David Heiserman took the "behavior based" approach mentioned below, and inspired by Rodney Brooks, to the next level by adding a RAM to the circuit. I too have long been using the behavior-based methodology for the motor memory system, although not in ID Lab 7 (to demonstrate the spatial network behavior alone).

ID Lab 7 starts with a "complete representation of the world state", and the following suggests this is expected to be separate from the "behavior based" part of a model:

When the behavior based motor system is in the circuit there is no need for a “compiler” or anything I can recognize as such.

The wave-directed "world state" outputs the angle and magnitude it needs to go, to the hedonic system, which compares it to the actual angle for sensing when it's necessary to take a guess, which is the only thing that stores new data into the motor memory. This is maybe altogether a state machine that takes guesses at what its motor state should be. There is similar feedback from motors back to input, as can be seen in the earlier shown colorful one- and two-hemisphere circuits.

This book seems to have been a help relating the model I'm developing to modern HTM theory, and to what was around just before the AI Winter that I recall buried everything deep in ANNs. You dug up a nice one!

I am eager to see HTM working in the circuit, but with the code-level work being so new to me, and neurobiology something that eventually requires Jeff's genius, I sensed it's best to explain what I have in enough detail for you and others to find it useful somewhere in HTM theory. Both are based upon a network of cortical columns, although mine is more for the older 3 layer end of the cortical sheet. At that point there is a motion-sensing mind in control of motor muscles, where words recall the sound wave motion of the noises they make, and which for thrills rides machines that put whole bodies into unsafe motion of one sort or another, such as jumping long distances on a motorcycle, while body-language type motion and the rhythm of music contain signals galore too. It then makes sense that YouTube is filled with what it now contains, where the #1 click-bait is playful kittens moving just right, especially to music or something else, in the same way we would.

Why we are the way we are once made no sense to me at all. But after seeing what happens when behavior-based muscle motion is combined with a spatial network for mapping out and navigating real or imagined worlds, the mystery was (at least for myself) solved. Now I only need to explain all this to you and we will be set, even though that's easier said than done in one night. I think we at least made some progress. Thanks for your help.


#17

In a number of very useful recommended videos I found one indicating that the earlier mentioned motor RAM system I left out of the most recent code (but which is in the earlier shown 6.1 video) is a very simple cerebellum. The structure of a binary RAM results in there being one Purkinje cell data location for each of the possible input address bit patterns. Since the total number of address bits is under 28 (and 10 is enough), each of the Purkinje cells spans the entire width of the address bus. The video shows how to address a much wider than 28 bit address bus by sensing 10 or so bits at a time. The video explains changing the data at a memory location that "presumably was contributing to the body making that motor error", or in other words taking a guess. Addressing influences when an action is taken, while data stores what that action will be.

We are born not knowing what does and does not contribute to a given motor error. But even a random guess is a good enough "modification/rectification of movements" to learn how to walk, then run. If a new action later does not work, then it simply guesses again.
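A tiny Python sketch of that guess-again loop, with a made-up environment where each sensory state has exactly one action that works (the function and its setup are hypothetical, for illustration):

```python
import random

def learn_by_guessing(correct, n_trials=200, seed=2):
    """Keep an action while it works; re-guess at random when it fails.
    'correct' maps each sensory state to the one action that succeeds."""
    rng = random.Random(seed)
    memory = {}                              # state -> current guessed action
    for _ in range(n_trials):
        for state, right in correct.items():
            guess = memory.setdefault(state, rng.randrange(4))
            if guess != right:               # motor error: guess again
                memory[state] = rng.randrange(4)
    return memory
```

Once a guess works it never changes, so with enough trials every state settles on its working action, without ever being told which action that was.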

Also, from: www.biologydiscussion.com/nervous-system/cerebellum-meaning-feature-and-functions-human-physiology/62885

The above, and the video below, explain what happens between the cortical-level spatial network's angle and magnitude for where it wants the motor muscles to power it to, and its actual body angle and magnitude.

In 1979 there was not enough known about how the brain worked to know exactly what it was I was modeling. I only knew that what David Heiserman explained best matched how we think, so I stayed with it. Adding in a cortical-related function was easy.

Without knowing it, I compared what happens with and without a cerebellum. Without it there is a loss of navigational control, but, as with human patients, it can still navigate using coarse movements. The cortex needs the servo circuit to work as expected, otherwise it is also influenced by the loss.

To prove itself, HTM will ultimately need to demonstrate the effects of damage to the cerebellum upon the cortical-level functions. This can at first seem like something best left for after all else is working, but it's another starting point to help figure out how the whole brain works. In either case it's much like what Matt stated with the question: "Yeah, the whole idea of incorporating motor behavior in learning makes a lot of sense, doesn't it?"

Modeling a cerebellum was more or less last on my list. Then it became a priority for me to explain what I learned, in case anyone wants to test it in their HTM model. I’m very interested in how it works out for others and what their resulting HTM code looks like. All helpful information is appreciated.

