Thousand Brains Hangout with Jeff & Subutai


#1

January 23, 9AM / 5PM UTC

This will take the place of our usual HTM Hackers’ Hangout. We’ll have @jhawkins and @subutai join us for some Q&A.

Watch here if you miss the live event. I will add a link to join the Hangout to this thread immediately before the event starts.

Add questions below

If you can’t join the hangout, you have a chance to ask Jeff and Subutai questions on this thread ahead of time. We are specifically interested in talking about the Thousand Brains Model we’ve been promoting in our last few papers.


Numenta's 2018 Year in Review
#2

SWEET

One question though: is there any code out there yet where we can play with the concepts we’ve read about in the 1kbt paper?

And after HTM School is done, will there be a subsequent series called “HTM Dev School” where we teach people who are interested in building projects with HTM?

EDIT: Scratch the question regarding HTM Dev School; the “One Hot Gym” and “Sine experiment” examples are already out there and have helped a lot of people get into all of that.


#3

Here is the code accompanying Locations in the Neocortex (“Columns Plus”):


#4

We are about two weeks away from this. Who’s not convinced that each cortical column is building its own model of the world? Why?

There’s been lots of discussion about TBT elsewhere on the forums. Are there questions that haven’t been answered yet?

I’ve been saving up some questions… :sunglasses:

  • What is the latest thinking about orientation and head direction cells and how they might fit into the model?
  • In the 2018 review, @subutai mentioned trying to apply HTM ideas to current ML frameworks. How is that going?
  • Minicolumns seem to run through multiple layers within a column. How might minicolumns be related to grid cells or grid-like behavior?

#5
  • Minicolumns seem to run through multiple layers within a column. How might minicolumns be related to grid cells or grid-like behavior?

Oh, that question brings up another question about vision I was wondering about but couldn’t articulate:

How do grid cells translate into the active and inactive stripes we see in V1 and V2, and what does that say about the communication between the neocortex and the thalamic system?


#6

The question I haven’t found an answer to in the previous papers is about invariance of the pattern representation. It’s clear that this approach can cover quite a broad range of shapes simply by saving many variations (which is basically what DNNs do), and rotation and scaling invariance can easily be implemented that way. But what about true invariance, which is tolerant to any transformation after one-shot learning?
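To make the “saving many variations” route concrete, here is a tiny numpy/scipy sketch of my own (nothing from the papers): a library of stored rotated copies recognizes only the transformations it has actually stored.

```python
import numpy as np
from scipy.ndimage import rotate

# Toy illustration of the "save many variations" route to invariance:
# store one copy of the pattern per transformation and recognize by best match.
# Coverage is only as good as the set of variations that was stored.

def make_library(template, angles):
    """One rotated copy of the template per stored angle."""
    return [rotate(template, a, reshape=False, order=1) for a in angles]

def best_match(library, observed):
    """Highest cosine similarity between the observation and any stored copy."""
    obs = observed / (np.linalg.norm(observed) + 1e-9)
    return max(float(np.sum(obs * t / (np.linalg.norm(t) + 1e-9))) for t in library)

template = np.zeros((32, 32))
template[8:24, 14:18] = 1.0                      # a small vertical bar

library = make_library(template, angles=range(0, 180, 15))
probe = rotate(template, 45, reshape=False, order=1)

print(best_match(library, probe))      # high: 45 degrees was one of the stored variations
print(best_match([template], probe))   # much lower: a single stored view is not rotation invariant
```

The library grows with every transformation you want to tolerate, which is exactly why I’m asking whether true invariance can survive one-shot learning.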


#7

I just finished reading the Grid Cells paper. (Great stuff, thanks). Maybe this has been asked before elsewhere, but…

If the neocortex consists of modular columns (the big ones, not the mini-columns), and they are presumably functionally interchangeable, how would grid cells be used in those regions that a priori don’t require them, like for language, music, abstract thought, etc.? Would the location layer be suppressed there? Or could there be a use for location input on a more abstract level, for instance moving up and down in a tune for pitch or for volume?

It seems strange, but having all this hardware in every column and then not having a use for it seems even stranger. If it is missing or less important in some areas, perhaps the layer would be less developed and the neurons there less dense.

Furthermore I think the Numenta website should have a merchandise section offering the Numenta cup.


#8

To be fair, even humans can sometimes struggle with this when we see an object under a rarely observed “transformation”, such as looking at a scene from behind, in the shadows, or even upside-down. I’d be tickled pink with a system that does even as poorly as I do, while being “tolerant enough”. In my mind, that means beating the brittleness of Deep Learning based systems, where random pixel flipping can be enough to mess up an entire classification system.
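As a toy sketch of that brittleness (a plain linear scorer, not a real deep network; the flips below are chosen to do the most damage rather than at random, but the point is the same):

```python
import numpy as np

# Toy sketch: a linear classifier can change its answer after flipping only a
# tiny fraction of the pixels, even though almost all of the image is intact.

rng = np.random.default_rng(0)
n_pixels = 784
weights = rng.normal(0.0, 1.0, n_pixels)              # stand-in for learned weights
image = (rng.random(n_pixels) > 0.5).astype(float)    # a random binary "image"

def predict(x):
    return "A" if weights @ x > 0 else "B"

original = predict(image)

# Change in the score if pixel i is flipped (0 -> 1 or 1 -> 0).
delta = weights * (1.0 - 2.0 * image)
# Try the flips that push the score hardest toward the opposite class first.
order = np.argsort(delta) if original == "A" else np.argsort(delta)[::-1]

perturbed = image.copy()
flips = 0
for i in order:
    if predict(perturbed) != original:
        break
    perturbed[i] = 1.0 - perturbed[i]
    flips += 1

print(original, predict(perturbed), flips)   # the class changes after flipping a small fraction of the 784 pixels
```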


#9

I am beginning to question the invariance and even covariance requirements pursued in Deep Learning. The major problem with invariance is that it’s difficult to have many kinds at the same time. For example, ConvNets have translational invariance but don’t simultaneously have rotational invariance. This can be relaxed by using covariance instead, but there is still the complexity of satisfying many constraints simultaneously. So the more promising approach is the one found in Transformer networks, which trade invariance for attention instead.
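A minimal numpy/scipy sketch of my own to make the ConvNet point concrete: a convolutional feature with a global-max readout gives the same response when the input is translated, but not when it is rotated.

```python
import numpy as np
from scipy.signal import correlate2d

# Toy "feature detector": a vertical-edge filter with a global-max readout.
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

def pooled_response(img):
    """Convolve with the filter, then take the global max (a translation-invariant readout)."""
    return correlate2d(img, kernel, mode="valid").max()

img = np.zeros((16, 16))
img[4:10, 6] = 1.0                                   # a short vertical bar

shifted = np.roll(img, shift=(3, 5), axis=(0, 1))    # translate the bar
rotated = np.rot90(img)                              # rotate it by 90 degrees

print(pooled_response(img))       # baseline response
print(pooled_response(shifted))   # same value: translation invariance
print(pooled_response(rotated))   # different value: no rotation invariance
```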


#10

8 posts were split to a new topic: Difference between brain hemispheres


#12

I think this is the answer you’re looking for. People apply navigation techniques to all sorts of non-spatial and abstract problems.
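As a toy illustration of what that looks like (a hypothetical encoding of my own, not Numenta’s code), the grid-cell trick of representing something by its phase in several modules of different periods works just as well for a purely abstract one-dimensional variable like pitch:

```python
# Hypothetical toy encoding: represent a 1-D abstract variable, e.g. pitch,
# by its phase within several grid-cell-like modules of different periods.
# Nearby values share most of the code; moving along the variable updates
# every module in parallel, just like moving through space.

PERIODS = [5.0, 7.0, 11.0]     # one period (scale) per module
CELLS_PER_MODULE = 20          # discretize each module's phase into this many cells

def encode(value):
    """Active cell per module for a scalar value."""
    return tuple(int((value % p) / p * CELLS_PER_MODULE) for p in PERIODS)

print(encode(3.0))    # e.g. a pitch of 3 semitones
print(encode(3.2))    # a small step: almost the same combination of cells
print(encode(15.0))   # a large step: a very different combination
```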