Background: I’m a Computer Vision Engineer who has been working in the industry for some time now (3-4 years). HTMs have always piqued my interest ever since I stumbled across the YouTube video by Matt Taylor.
Every time I go through a topic I get lost in the plethora of resources available. I enjoyed the topics of sparse representations and predictive processing, and reading about predictive processing led me to the neural generative coding framework. Somehow I’ve ended up back here again. Is there a structured way of learning about these topics (sparse representations, predictive processing, etc.) without getting lost?
I’m particularly interested in the research and implementation side of things. Any help would be appreciated.
Thanks.
I know you mentioned ‘the YouTube video’ by Matt Taylor, but have you seen his whole series called HTM School? This is the first thing I’d personally recommend, since the videos are brief and engaging with great visuals. Matt was a one-man army on these, and he is deeply missed professionally as well as personally.
On the implementation side I’d recommend looking at NAB (Numenta Anomaly Benchmark), where HTM was evaluated alongside a few other time series anomaly detectors on a battery of real & artificial data sets with labeled anomalies.
Also related to implementation, I’ve actually written a wrapper module around htm.core, allowing for quick prototyping of time series anomaly detection. If you’re looking to do that kind of application, let me know and I can give you access.
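If it helps, here’s roughly the kind of loop that wrapper automates: a minimal sketch in the spirit of htm.core’s “hotgym” example, with a scalar encoder feeding a spatial pooler and a temporal memory. Treat the parameter values below as placeholders rather than tuned settings.

```python
# Minimal htm.core anomaly-detection loop (a sketch along the lines of the
# library's "hotgym" example; hyperparameters here are illustrative, not tuned).
from htm.bindings.sdr import SDR
from htm.encoders.rdse import RDSE, RDSE_Parameters
from htm.bindings.algorithms import SpatialPooler, TemporalMemory

# Encoder: turns each scalar reading into a sparse binary SDR.
enc_params = RDSE_Parameters()
enc_params.size = 1000
enc_params.sparsity = 0.02
enc_params.resolution = 0.1        # assumed scale of the input signal
encoder = RDSE(enc_params)

# Spatial Pooler: maps encoder SDRs onto a stable sparse column space.
sp = SpatialPooler(
    inputDimensions=(enc_params.size,),
    columnDimensions=(2048,),
    localAreaDensity=0.02,
    globalInhibition=True,
)

# Temporal Memory: learns sequences of SP columns and flags violations.
tm = TemporalMemory(columnDimensions=(2048,), cellsPerColumn=32)

def step(value: float, learn: bool = True) -> float:
    """Feed one reading through encoder -> SP -> TM, return the anomaly score."""
    active_columns = SDR(sp.getColumnDimensions())
    sp.compute(encoder.encode(value), learn, active_columns)
    tm.compute(active_columns, learn=learn)
    return tm.anomaly   # 1.0 = fully unexpected, 0.0 = fully predicted

for t, value in enumerate([10.0, 11.0, 10.5, 42.0]):   # toy stream
    print(t, step(value))
```

In practice you’d usually also add a date/time encoding and run the raw score through an anomaly-likelihood step, which is exactly the kind of boilerplate a wrapper can hide.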
Hope this helps get the ball rolling, and again, welcome to the community.
Heyy, it took some time to get to this since I was stuck with work, but I was able to go through the NAB white papers and the published Elsevier paper. What do you recommend starting on next?
So you’ve read the NAB stuff and now know the concepts and findings, but haven’t implemented it yet I assume? Are you interested in getting up and running on that? Just checking where that fits in with your general goals.
Yes, I would love to get started on this but I’m not really sure where to begin. My current goal is exploring more biologically inspired learning. Numenta was the first place I stumbled upon on this topic. Another one I stumbled upon was ngc-learn, which is along the lines of neural predictive coding. Can you briefly tell me how you got started when you first began working on this?
Sorry for such a long delay.
I got started with this paper from back in 2016; it explains some of the theory behind the basic structural concepts of HTM, like the spatial pooler (SP) and temporal memory (TM).
I really found them inspiring, and they make it intuitive to understand why the cortex would land on mechanisms like these for robustly building its models of the world.
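If it helps to make “sparse representations” concrete while reading the paper, here’s a tiny library-free sketch (plain NumPy, not an HTM implementation) of why high-dimensional sparse binary vectors are so robust: two random SDRs almost never overlap by chance, so even a noisy copy of a pattern is still easy to recognize.

```python
# Toy illustration of SDR robustness (plain NumPy, not an HTM implementation).
import numpy as np

rng = np.random.default_rng(0)
n, w = 2048, 40          # 2048 bits, 40 active (~2% sparsity), HTM-ish sizes

def random_sdr():
    """Return a random sparse binary vector as a set of active bit indices."""
    return set(rng.choice(n, size=w, replace=False))

a = random_sdr()

# A noisy copy of `a`: keep 30 of its 40 active bits, replace the rest randomly.
noisy = set(rng.choice(list(a), size=30, replace=False)) | set(rng.choice(n, size=10))

unrelated = random_sdr()

print("overlap(a, noisy copy):", len(a & noisy))      # stays high (~30 bits)
print("overlap(a, unrelated): ", len(a & unrelated))  # ~w*w/n, i.e. <1 bit by chance
```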
There have been many other papers since then that have extended HTM theory to representing locations and, more recently, to the Thousand Brains Theory.
I would also highly recommend the HTM School videos produced by the late great Matt Taylor. They are very visually engaging and intuitive to understand.
I know lately Numenta has also been using neocortical principles like sparsity to build LLMs for business at much lower cost than otherwise possible.
I’m not sure exactly what you’d like to get from biological learning, or whether you’re just interested in the theory, but in any case I think Numenta and HTM theory are a can’t-miss in this area.
The Numenta papers cited are an excellent starting point. However, you should also view the YouTube videos on Object Modeling in the Thousand Brains Theory (parts 1 and 2), Sept. 3 and Sept. 9, 2021. These lectures point out significant shortcomings of the prior work and suggest major adjustments to the column model.
Key features of the column model as they appeared pre-9/21:
Objects are defined as a set of feature/location pairs.
The grid is implemented as a number of modules, which are based on rhombus-like grids of different scales and orientations.
Displacements are determined via computation using pairs of locations as inputs.
The output layer can identify an object (columns paper) and the sensory and grid layers can converge to a known location (columns+ paper). Collectively, both an object and the location within the object can be identified.
After the 9/21 talks:
Objects are defined as a set of sub-object/displacement pairs (a directed graph), not feature/location pairs.
A multi-module grid is “wrong” – among other reasons: it is too complicated. Grid cells are still present, but their role is not clear. Are they some form of “scratchpad”?
Displacements are maintained directly rather than being computed based on locations.
The output layer as previously considered is no longer part of the model.
This update is only available in the Numenta YouTube videos as far as I know; i.e., I haven’t been able to find a “published” version on arXiv or elsewhere.
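To make the contrast concrete, here is a rough data-structure sketch of the two object definitions as I understood them from the talks. This is my own paraphrase, not Numenta code, and all of the names are made up for illustration.

```python
# Rough paraphrase of the two object models described above (not Numenta code;
# all names here are made up for illustration).
from dataclasses import dataclass

# Pre-9/21: an object is a set of (feature, location) pairs, where each location
# is expressed in the object's own reference frame (via grid-cell modules).
@dataclass(frozen=True)
class FeatureAtLocation:
    feature: str          # a sensed feature (really an SDR, simplified to a label)
    location: tuple       # location in the object's reference frame

coffee_mug_old = {
    FeatureAtLocation("rim",    (0, 0, 10)),
    FeatureAtLocation("handle", (5, 0, 5)),
    FeatureAtLocation("base",   (0, 0, 0)),
}

# Post-9/21: an object is a set of (sub-object, displacement) pairs, i.e. a
# directed graph whose edges carry relative displacements between parts.
@dataclass(frozen=True)
class SubObjectDisplacement:
    sub_object: str       # a previously learned object/component
    displacement: tuple   # relative displacement to that component

coffee_mug_new = {
    SubObjectDisplacement("cylinder", (0, 0, 0)),
    SubObjectDisplacement("handle",   (5, 0, 5)),   # displacement stored directly,
}                                                   # not computed from two locations
```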
I do recommend ngc-learn, as it’s one of the few efficient Python platforms (with a custom simulation compiler built on top of Google’s JAX) designed to foster research in biological credit assignment, synaptic plasticity, and neuronal dynamics for brain-inspired computing and computational neuroscience (also NeuroAI). It’s one of the few tools out there that supports arbitrary predictive coding, spiking neural networks, Boltzmann machines/contrastive Hebbian learning, and the design of custom biophysical models of learning and inference (as well as reproducing classical and modern work).
Plus, the library was built with a lot of docs/tutorials meant to be pedagogical and to help educate newcomers in the fields above (and it’s actively maintained and used by the NAC lab at RIT).
I highly recommend that more newcomers try the tool out and even contribute publicly to it; maybe even add an HTM to it at some point.
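If you want a feel for what the predictive coding part boils down to before diving into ngc-learn’s own tutorials, here is a deliberately tiny NumPy sketch of a single-layer predictive coding update. To be clear, this is not ngc-learn’s API, just the core idea: a latent state is nudged to reduce prediction error, and the weights get a local Hebbian-style update from the same error signal (the dimensions and step sizes are arbitrary).

```python
# A deliberately tiny predictive-coding sketch in plain NumPy.
# This is NOT ngc-learn's API, just the core idea in a few lines:
# a generative layer predicts its input, and both the latent state and the
# weights are updated locally from the prediction error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)                   # observed input (one "sensory" vector)
W = rng.normal(scale=0.1, size=(16, 8))   # generative weights: latent -> input
z = np.zeros(8)                           # latent state for this input

eta_z, eta_w = 0.1, 0.01                  # illustrative step sizes (assumed values)

for _ in range(50):                       # inference: settle the latent state
    e = x - W @ z                         # prediction error at the input layer
    z += eta_z * (W.T @ e - z)            # move z to reduce error (with a decay/prior term)

e = x - W @ z
W += eta_w * np.outer(e, z)               # local, Hebbian-like weight update

print("reconstruction error:", float(np.mean(e ** 2)))
```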