Sorry I forgot to post when I streamed earlier. Here are the recordings. Great viewership today! We got raided by another channel and maxed out at 38 viewers.
Just wanted to say that I really enjoy your streams, keep up the good work!
Thanks for the encouragement @momiji!
I’ve decided not to stream today. I do not know if there will be a research meeting tomorrow or not. Thursday I will stream Forum Q&A and BHTMS.
Streaming now, research meeting starts around 10:15AM Pacific (20-30 min).
A post was merged into an existing topic: Numenta Research Meeting - May 22, 2019
Streaming BHTMS today. Starting with some casual chat and forum Q&A around 8:30. I'll take a lunch break to pick my kids up from school early, then back to streaming around 1PM (two streams today).
Live now, just chatting. Will be doing BHTMS soon enough.
Live research now.
Great stream again, Matt. Thanks for broadcasting these, Numenta! (Why thousands of people aren't following these is beyond me.)
Some musings about this session:
The process of compensating for orientation must happen before grid cell processing (as opposed to parallel processing in a separate layer). Otherwise the grid patterns produced (and observed in the experiments) would not stay fixed as the subject turns around in the box. This means the orientation information is already contained in the grid cell modules (I think @jhawkins also briefly speculated about this). It also means a feature associated with a displacement cell is connected and orientationally referenced to every other feature.
The hexagonal (or triangular) pattern of the grid cells is not a constraint; it's just an observation. Navigation in a space would work with any constellation as long as it remains fixed while the subject moves. So if a grid cell pattern is deformed in a non-flat environment (e.g. the jungle-gym rat experiment @mrcslws mentioned), the subject would still be able to model its environment.
Two streams today:
- 10:10AM research meeting
- 7PM Meetup
I am not streaming today, but tomorrow I’ll be busy:
What time zone are you in Matt? I’d love to be involved if possible
Pacific Daylight Time.
Live now, talking about Rework AI Summit.
- ReWork AI Summit Day Two Recap
- HTM Forum Q&A (if I have time)
- Research Meeting (?)
- BHTMS - Finish up SP learning and start Active Duty Cycles
Thanks Matt for your recent encouraging thoughts on the Intelligence Design Lab model. You surprised me with this one; I needed to catch up on:
The code is in VB6 only. I would love to port it to another language, but since VB6 has no bit operators I had to use arithmetic to extract bits from Byte and Long variables. It's probably best to rewrite the whole thing in a language better suited to 3-axis arrays, which gets into the many hexagonal math systems, and the math I used could likely be improved upon, so it quickly became increasingly complicated.
Since it looks like you only need the easy trick for drawing vector maps (and I'm not sure what works best for hexagonal arrays in other languages), the code most importantly only needs to propagate a 2D wave. The simplest way I did it is to give each hexagonal network "place" six inputs and six outputs stored as a six-bit value; the rule applied to each nonzero place on each timestep is to negate the input to derive the output:
Out6 = 63 - In6
When you keep setting and resetting all six bits of a place you get a continuous wave, and it will look like a radio station broadcasting outward from an antenna.
In the first timestep all six outputs of a place are set to 1, sending action potentials outward in all directions. In the next timestep the six neighboring places send action potentials further outward, but not back to the place that started the wave. You then have a nice clean signal pattern where places alternate back and forth when pointing closest to an in-between angle, averaging together for 12-direction sensitivity from 6 signals. This is where the roughly 7/12 = 58% signal ratio noted in live rat recordings comes from.
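The two steps above (seed on all six outputs, then negate the six-bit input at each nonzero place) can be sketched in Python — a minimal, hypothetical translation on an axial-coordinate hex grid, not the author's VB6 code. Bit d of a place's value here means "pulse arriving from the neighbor in direction d":

```python
# Six axial neighbor offsets for a hex grid. A pulse sent from a place
# along direction d arrives at the neighbor's input bit (d + 3) % 6,
# since that neighbor sees the sender on its opposite side.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def seed(origin):
    """First timestep: the origin fires on all six outputs at once."""
    state = {}
    for d, (dq, dr) in enumerate(DIRS):
        n = (origin[0] + dq, origin[1] + dr)
        state[n] = state.get(n, 0) | (1 << ((d + 3) % 6))
    return state

def step(inputs):
    """Advance the wave one timestep: each nonzero place applies
    Out6 = 63 - In6, so it never fires back toward where the pulse came from."""
    nxt = {}
    for (q, r), in6 in inputs.items():
        if in6 == 0:
            continue
        out6 = 63 - in6                      # negate the six-bit input
        for d, (dq, dr) in enumerate(DIRS):
            if out6 & (1 << d):              # send a pulse along direction d
                n = (q + dq, r + dr)
                nxt[n] = nxt.get(n, 0) | (1 << ((d + 3) % 6))
    return nxt
```

Running `step` repeatedly on `seed((0, 0))` expands the wavefront ring by ring: the origin never re-receives its own pulse, since each ring-1 place's only input bit (the one pointing back at the origin) is exactly the bit the negation clears.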
The fun part is what happens after mapping out an environment: places that do nothing (and therefore absorb) mark things to not bump into and avoid, or for echolocation they reflect (Out6 = In6). A wave is then started by the place where what the critter needs is located (or, like a bat, from the critter itself, in which case the wave may reflect back). Summing the wave direction at each place makes a vector map to follow. After clearing the arrays and repropagating, the shortest distance is traced by the first waves to reach the place where the critter is located.
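The first-wave-to-arrive idea is essentially a breadth-first wavefront: the direction each place first hears the wave from is the next step on a shortest path back to the source. A minimal Python sketch of that vector map (my own simplification, assuming the same six axial hex directions as above; walls are the absorbing "do nothing" places):

```python
from collections import deque

DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def vector_map(goal, places, walls):
    """Breadth-first wavefront started at `goal`. Each reachable place
    records the direction index pointing back along its first-arriving
    wave, i.e. the next step on a shortest path toward the goal."""
    toward = {goal: None}
    frontier = deque([goal])
    while frontier:
        q, r = frontier.popleft()
        for d, (dq, dr) in enumerate(DIRS):
            n = (q + dq, r + dr)
            if n in places and n not in walls and n not in toward:
                toward[n] = (d + 3) % 6   # point back at the incoming wave
                frontier.append(n)
    return toward

def follow(start, toward):
    """Walk the vector map from `start` until the goal (direction None)."""
    path = [start]
    while toward.get(path[-1]) is not None:
        d = toward[path[-1]]
        q, r = path[-1]
        path.append((q + DIRS[d][0], r + DIRS[d][1]))
    return path
```

Because breadth-first expansion reaches every place first along a shortest route, following the stored directions from the critter's place yields a shortest path around any absorbing obstacles.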
The only thing this cortical-level navigation network needs to supply to the motor system is an angle for Left/Right to point toward, plus a Forward/Reverse magnitude, depending on distance from place center, for controlling a high-speed landing or stop. Rounding places off to a dead center for landing/stopping causes the map to shift a little each time the feeder moves to a new place. To an animal the distances between places only need to be approximate, but once there it's convenient to perceive an exact center, and the code needs one or else it perceives the food to be somewhere else. Following waves from place to place through the network leads to a place that has an exact wave center, so it's something in the math and physics that navigation can make the best of by remapping accordingly.
Most of my problem is not knowing where to begin. So I thought I should explain how it's first a matter of propagating a 2D radio/traveling wave over however you would most simply represent a cortical sheet. For you it might take minutes to paste something together; I can easily adapt to whatever language and code you would use.
I am not feeling well today, so cancelling my live stream. Sorry folks, especially @bitking. Maybe we can talk about fully connected networks as a part of our next HTM Hackers’ Hangout in one week?
Bummer, dude. Take care of yourself and apply the energy to getting well.
Hackers hangout it is.
Assuming that it is Friday.