AGI - what part does HTM/TBT play?

Wetwares have been notoriously slow…

This qualifies as a conjecture, a wish, or a belief. Not a humanly verifiable fact.

:smiley: I’m just nitpicking :smile:

1 Like

This is a cute video, but it's really nothing new, and it has a serious omission. Genetic programming and genetic algorithms have been around forever – I played with them extensively when Koza published his book 20 years ago. Much of what I know now is based on that work.

The first takeaway is that getting new behaviour by mutation takes forever: thousands or millions of generations. I ran out of patience with simulations running for hours and days and getting nowhere.

The second is that adaptation, especially via crossover, is much faster. The video appears ignorant of this fact, which is a serious omission IMO.
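To make the mutation-vs-crossover distinction concrete, here is a minimal toy GA harness (my own illustration on a OneMax fitness function, not anything from Koza's book or the video) that lets you run the same search with and without crossover:

```python
import random

# Toy genetic algorithm: compare mutation-only search against
# crossover-enabled search on OneMax (fitness = number of 1-bits).

GENOME_LEN = 64

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=1.0 / GENOME_LEN):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice two parents at a random cut.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200, use_crossover=True):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = crossover(a, b) if use_crossover else a[:]
            children.append(mutate(child))
        pop = children
    return max(fitness(g) for g in pop)

print("mutation only :", evolve(use_crossover=False))
print("with crossover:", evolve(use_crossover=True))
```

Running both variants side by side is roughly the experiment that (per the complaint above) the video never shows.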

And it does nothing to help with questions around AGI.

Aha! That is indeed the point. Intelligence has evolved precisely because it allows animals to adapt to new challenges and occupy new ecological niches faster than any genetic mechanism. Intelligence allows animal species to adapt quickly, and our intelligence has allowed us to occupy and exploit every part of the planet in no time at all.

1 Like

Grasshopper, never underestimate the power of parallel processing.

Back to the topic at hand - how does one configure HTM/TBT into a real-time AI? What else is needed?

Fleshed out proposals most welcome!

To start with: a Short Term Memory mechanism, which HTM/TBT conspicuously lacks. Intelligent problem solving cannot be done with LTM alone; it needs STM as well. It's a subset of the binding problem.

Just my 2 cents worth, for your amusement :slight_smile:

2 Likes

And short term memory you shall have!

3 Likes

HTM is a good theory and all, but it only explains a small part of what the brain does. The brain has a lot of moving parts, and HTM theory intentionally focuses on only a few of them and ignores the rest. This makes it easy to understand and a good starting point for people getting interested in neuroscience.

Seeing this I set out to collect the many models of the brain in all of its aspects. Then I tried to fit all of the collected parts into a single monolithic model. The problem with this approach is roughly as follows:

Models can be formulated both in a biophysically realistic way and in an abstract/high-level way. HTM is an abstract model, but there are also biologically realistic formulations of it. For example, see: Sequence learning, prediction, and replay in networks of spiking neurons.

If you insist on using abstract models like HTM, then you will find that none of the pieces fit together correctly. Every abstract model uses different abstractions. I thought that with sufficient understanding and effort I could merge the pieces together, but I found that this is neither easy nor always possible. Furthermore, the number of interactions between pieces of the model grows quadratically with the number of pieces.
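For concreteness (my arithmetic, not a claim from any particular model), the worst-case count of pairwise interfaces is:

$$\binom{n}{2} = \frac{n(n-1)}{2} \quad\Longrightarrow\quad n = 10 \text{ pieces} \Rightarrow 45 \text{ interfaces}, \qquad n = 20 \Rightarrow 190.$$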

In comparison, biologically realistic models are easy to combine, since they all interact using the common language of physics: voltages and chemical concentrations. My conclusion is that building more complex models of the brain will require a significantly higher degree of biological fidelity than the HTM model offers.

4 Likes

Pardon my ignorance, but can you elaborate a bit on what a real-time AI is supposed to do and how it should behave?

As I understand it, spatial pooling currently models lateral inhibition. But I think there is also longer-range, weaker lateral excitation (the Mexican hat model), which may enable short-term memory through reverberation in L2/L3, independent of the thalamus. Is that something Numenta is looking at?
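In case it helps make the geometry concrete, here is a minimal numeric sketch of such a lateral kernel as a difference of Gaussians (all weights and widths are my own illustrative assumptions, not Numenta parameters):

```python
import numpy as np

# Difference-of-Gaussians lateral kernel, a common way to model
# center-surround ("Mexican hat") interactions. The signs and widths
# are free parameters; the values below give short-range inhibition
# plus longer-range, weaker excitation, as described above.

def dog_kernel(distance, w_inhib=1.0, sigma_inhib=2.0,
               w_excite=0.3, sigma_excite=6.0):
    inhib = w_inhib * np.exp(-distance**2 / (2 * sigma_inhib**2))
    excite = w_excite * np.exp(-distance**2 / (2 * sigma_excite**2))
    return excite - inhib  # net lateral influence at each distance

d = np.arange(0, 20.0, 0.5)
# Negative near zero (inhibition), weakly positive farther out (excitation).
print(np.round(dog_kernel(d), 3))
```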

A dumb boss. Unfortunately, without the dumb boss the smart advisor is clueless as to what to do (or what to really focus on) and is only an expert forecaster. HTM will never be a true AI on its own.

As for real time (temporally dislocated input streams): I can't see how the current SDR process/code will work, because of its step (lock-step) timing.

1 Like

I found this interesting. It supports my contention that we should get to chimp-level AGI (sans language) before attempting human-level.

I read through that collection. While that covers a huge amount of information, I have one basic question: what does that have to do with HTM/TBT?

HTM/TBT invented a way to encode temporal memory (claimed to be biologically realistic, keyed on a neuron's "predictive state"). TBT's TM algorithm may not be the ultimate solution on the road to AGI, but it seems to be the first serious (scientifically rigorous) effort to model the brain at the logic gate/neural circuit level.
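To make the "predictive state" idea concrete, here is a deliberately toy sketch (my own simplification for illustration, not Numenta's actual TM algorithm, which also handles learning, bursting, winner cells, etc.): a cell becomes predictive when any of its distal segments sees enough of the currently active cells.

```python
# Toy illustration of HTM-style "predictive state".
ACTIVATION_THRESHOLD = 2  # active synapses needed on one segment

# Hypothetical wiring: cell -> list of distal segments, each a set of
# presynaptic cell ids (invented names, purely for illustration).
segments = {
    "C": [{"A", "B"}],                 # C predicts after A and B fire together
    "D": [{"A", "X"}, {"B", "Y"}],     # D needs A+X or B+Y
}

def predictive_cells(active_cells):
    predicted = set()
    for cell, segs in segments.items():
        for seg in segs:
            if len(seg & active_cells) >= ACTIVATION_THRESHOLD:
                predicted.add(cell)
                break
    return predicted

print(predictive_cells({"A", "B"}))  # {'C'} -- each of D's segments matches only 1
```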

Using the analogy of a modern digital computer (the prevailing von Neumann architecture) … if we humans ever build an AGI computer (which obviously will NOT follow the von Neumann paradigm):

A virtual reality system (or a self-driving system) cannot be built without the hierarchy of layered composition: vacuum tubes vs. transistors, integrated circuits, logic gates (AND, OR, XOR), logic circuits, binary operations, encoding the whole world into binary codes, basic architecture (von Neumann style and others: separating CPU vs. storage, mixing storage of instructions vs. data), advanced architecture (volatile vs. non-volatile memory, caching at each level), operating systems, binary/assembly-language/higher-level programming… databases, networking, computer graphics…

The AGI field (if it qualifies as a field, composed of passionate enthusiasts and serious researchers/pioneers) lacks a serious divide-and-conquer hierarchy. Short-term memory at the logic gate/circuit level has received inadequately little serious investigation and scientific effort.

There have been great advances and discoveries at the molecular level (in the field of neuroscience), probably at the cell level (short-term synaptic plasticity has been well studied), and at the overall system/whole-brain level (dementia-related research abounds).

In between those levels, things become intellectually murky. I have done MANY YEARS’ worth of reading in this area, but I’ll stop babbling about it to avoid turning a fun casual chat into a boring monologue that nobody reads through :slight_smile:

The wall of text that nobody reads is kinda my super power.

I find it helpful to think of layers 2/3 as pattern recognition/sequence labeling and layers 1/5/6 as sequence memory.

As far as short-term memory goes, the pattern-completion part of HTM/TBT also serves that role.
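A deliberately tiny illustration of that claim (my own toy, not HTM code): recovering a stored sparse pattern from a partial cue is what lets recent context persist for a few steps.

```python
# Toy pattern completion as short-term memory: given a partial cue,
# return the best-matching stored sparse pattern. Indices are invented.

stored = [
    frozenset({1, 5, 9, 12, 20}),
    frozenset({2, 6, 9, 15, 21}),
]

def complete(cue):
    # Pick the stored pattern with the largest overlap with the cue.
    return max(stored, key=lambda p: len(p & cue))

print(sorted(complete({5, 12})))  # recovers [1, 5, 9, 12, 20]
```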

The loop through the cortex/subcortex/cortex serves as working memory, attention, and goal seeking.

So HTM/TBT fills in about half of the overall AGI mechanics.

2 Likes

Well said. Except, once you accept that principle, why not start with lower primates? Or other mammals? Or birds? There is a vast literature on training lab rats and other animals – wouldn't that provide a staged series of challenges for successive AGIs to master?

1 Like

While I do appreciate your sentiment as to baby steps, I don’t care to make a mouse brain.

I want Max Headroom and would consider anything less as unsatisfactory. Speech, reading, reasoning - mouse level performance does not give me any of that. This is a binary proposition; nothing less than full human function gives me this stuff.

But I certainly would not stop you in whatever work you choose to pursue.

I read some of it, good stuff :). It’s just not my main focus, and our interpretations differ.

OK, so that pattern completion is the sort of exaggerated reverberation I mentioned: enough "priming" inputs can fire the cell even in the absence of proximal input?
Is that priming input supposed to come through direct L2/L3 connections, or can it be L2/L3 → L1, not via the thalamus?

Your "subcortex" is extremely vague; it comprises hundreds of different areas. Much of the thalamus is actually a map-reduced cortex; Murray Sherman calls it the 7th layer.

2 Likes

I’d like to play the devil’s advocate …

As a vague metaphor … what if one wants Windows/Linux (as an OS), a word processor/spreadsheet (as applications), an HTTP web server/browser (as applications/services), an SQL-style database (or beyond), a search engine service (like Google), a social network service (like Facebook)… and one believes Intel and/or Texas Instruments alone can achieve that "general computing" goal?

Numenta/HTM/TBT to AGI =~= Intel/integrated circuit chips to general computing

In my mind, HTM/TBT started as a great breakthrough; it would be nice if it continued on to produce a plethora of software libraries that other people could leverage to develop further. That is one kind of hierarchy: the H in HTM.

Pursuing AGI directly within the framework of Numenta/TBT seems like asking Intel engineers to design a chip that provides all of the above "general computing" capabilities. It is worse than trying to develop a web server in assembly language, without any software libraries.

It is theoretically possible, but does not seem to be practically possible.

From SDRs and TM to sensory-motor integration, that is great stuff. However, the mountaintop of AGI may be TOO MANY layers up above, with many of the intermediate layers not even in sight on the current Numenta roadmap. Maybe.

2 Likes

I suspect that AGI is one of those things that will eventually seem pretty obvious once the first person shows the way.

4 Likes

Ditto on the crossover… I'm not clear on whether he doesn't do that or simply skips mentioning it (I haven't dug deeply enough into his source code to answer this), but it's a good point.

As for AGI… well, I think we have a collection of good things and approximations to how biological neurons work, and the primary conversation is how we wire these things together to produce something autonomous and intelligent. I see this as a graph problem more than anything. We generally (specific points aside) know how the "nodes" work (raw input, encoders, steady-state vs. anomaly detectors, state machines for specific statuses such as O2 in the blood, memory, interneurons, output, etc.), but the discussion frequently circles around how, or whether, to connect which edges between these nodes.
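A minimal sketch of that "graph of components" framing (node and edge names are purely illustrative, not a proposed architecture):

```python
from collections import defaultdict

# Components as nodes, wiring as directed edges; the open question
# above is which edges to add, not how each node works internally.

class BrainGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # node -> downstream nodes

    def connect(self, src, dst):
        self.edges[src].append(dst)

    def downstream(self, node):
        return self.edges[node]

g = BrainGraph()
g.connect("raw_input", "encoder")
g.connect("encoder", "anomaly_detector")
g.connect("encoder", "memory")
g.connect("memory", "output")

print(g.downstream("encoder"))  # ['anomaly_detector', 'memory']
```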

While many folks here are talking about primates (sure, we'll get there), I'd be tickled pink with a C. elegans level of dynamic system, then scale up from there. :smiley:

A challenge, to me, seems to be how to get a system to arrive at its own self-formulated goals and survival responses, and at least these basic instincts (and their biological underpinnings) seem to be prime real estate for genetic algorithms to explore the space.

graph stuff…