HTM Hackers' Hangout - June 7, 2019

  • update on BHTMS
  • new HTM projects
  • research Q&A with @mrcslws

If anyone has questions about our recent research streams, Marcus will be taking Q&A at this hangout. You can also ask questions below if you cannot join the live stream.


I’ll try to tune in, but it’ll be hard. If I can’t make it, maybe you could ask his opinion about the same question we discussed briefly last Monday. I’ll rephrase it here:

And I have another question regarding another interesting comment Marcus made in an earlier research meeting:

Thank you. And have a great meeting.


Join here if you want to get on camera. Or just watch live at


I just watched the hangout video. I’m really sorry I couldn’t make it.

Big thank you for answering my questions, @mrcslws. It’s impressive how many papers and points of view you juggle at once. I did follow the research meeting where you reviewed the Geoffrey Hinton papers, so I am with you on the flux capacitor. ;-).

Also, the answers by @Paul_Lamb to @marty1885’s sequence question were really insightful. They’re a big help for me in understanding other aspects of HTM.

<sigh> So many things to think about. This is frustrating and fascinating at the same time. I fear I won’t be sleeping much tonight. :-D.


Matt, as usual my day job had to come first, but I later caught the replay and found it well worth my time to study.

The grid cells alternating back and forth in time that Marcus explained is the same thing I found in the ID Lab, where a “union” merges previous blobby experience from two reference frames into one that takes into account the directional movement of the zone. That way it senses how close it is, in space and time, to getting zapped again.

This gets back to an earlier challenge you proposed: using HTM to learn a 2D environment mapped out in a text file or a drawing app. Where we left off, I was expecting it would become easier to just include the alternating-between-time-frames part and watch it go on its own, rather than write code to manually control all of that, plus extra code in the HTM algorithm to compensate for not having full control of its body. In that case it would also be a four-sensor antennae/dendrites starter model, like the one the “Neurons’ antennae are unexpectedly active in neural computation” topic led to, where the cells can later learn to use their antennae to sniff out the right chemical gradients to migrate towards before differentiating. If it were easy for molecular biologists to add what they discover, it would be easy to get help tweaking.

Given the way cells have mysterious metabolic-network sensory patches galore, I do not see it as a stretch of the imagination to assume a neural stem cell has at least as much navigational intuition as an ID Lab critter. What works for neurons that wire themselves together as well as they can may eliminate the minor quirks in current HTM implementations.

I would like to have the same kind of evidence for all the other interactions in the navigational system I have been experimenting with. It’s still the only thing I know of that easily explains the otherwise confusing-looking behavior of grid cells.

I’m hoping that my model is now useful for explaining what some of the new information is (from my perspective, at least) strongly indicating. For me, this is the first time that what to look for has been this obvious in something I’m trying to figure out.
