Temporal pooler and Receptive fields aka FeedForward

Build the relatively simple, highly repeated memory part (in the brain, associative memory with a novelty detector) and skip the complicated CPU part.

Got it!

Definitely! The problem is that current biological understanding is poor from a functional-systems perspective. The language is geared toward categorization (anatomy) rather than engineering, and the state of understanding is like probing a 7 nm CPU with micron-scale probes, correlating voltages on one probe with those on another probe elsewhere on the die, or with someone pressing a key on a keyboard, and trying to figure out how the CPU works from that. It’s arcane! It’s as if we need to train computer engineers to do brain reverse engineering. Really, we just need to spend 10x more on this work; money can make it proceed faster.


How accurate? Also, are you saying the biology will be necessary for implementing the way the brain works?

The subcortex isn’t like a worm’s nervous system where every neuron is a unique part of the circuit. We have those sorts of circuits but there’s a lot more. For example, reward systems and attention systems are probably essential for general AI and they use repeated circuits, same as cortex.

There’s little reason to think intelligence is all about something like software. Maybe it is, but we don’t even know how the hardware for sensorimotor stuff works.

The only way to study details of generic cortex is to take it one case at a time, or find info from someone who already did that. That info about generic cortex isn’t very interesting in and of itself, because it takes many many years to scientifically prove things about generic cortex.

The essence of (genetic) evolution (as against adaptation) is to keep what works and slowly bolt on new stuff that might work. The essence of science is reductionism: remove all except the essentials.

I am not aware of any tech based on slavishly copying biology. What works is engineering based on science. Our cameras have all the useful features of an eye but none of the biological detail.

I consider it highly unlikely we shall ever unravel the mysteries of evolved complex systems, but we just might be able to figure out what a cortical column does well enough to replicate its core algorithm in silicon. And yes, the reason I’m here is that HTM is the first, best and so far only step I’ve found on that path.

The biology is sufficient, but not necessary.

However, I think you will find it exceedingly difficult to implement a brain without biology, or at least some degree of biological accuracy.

As I said earlier, one of the big advantages of using biologically based simulations is that it becomes easy to connect different mechanisms together. The brain has a lot of different mechanisms, and they can all interact with each other. By simulating the underlying substrate upon which the mechanisms interact, we can easily and accurately compose them together.

Note: the mechanisms themselves do not need to be biologically based; only the manner in which they interact with other parts of the simulation needs to be biological.


I strongly agree. Another example is flying machines. They have almost nothing in common with a biological bird, except for the airfoil shape of the wings’ cross-sections. Ornithopters, the more realistic emulation of birds, never really took off. (Pun intended).

“analysing and understanding the repeating unit, the algorithms and data structures by which it operates” – this should be the key: the computing mechanism of the brain (which is absolutely different from the von Neumann architecture), analogous to fluid dynamics for flight.

I beg to differ here – the main computational overhead of the biological brain is metabolism, which can be disregarded in artificial emulations.

For example, all biological neurons have to keep a base level of firing to stay alive. Emulated neurons can be dead silent, doing nothing for an unlimited period of time when not participating in computation.
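As a toy illustration of that point (a hypothetical sketch, not anyone’s actual implementation; all parameter values are made up), an event-driven LIF neuron can decay its membrane potential analytically only when an input spike arrives, so a silent neuron costs literally nothing between events:

```python
import math

class EventDrivenNeuron:
    """LIF neuron updated only at input events, never in between."""

    def __init__(self, tau=50.0, v_rest=-60.0, v_thresh=-50.0):
        self.tau = tau              # membrane time constant
        self.v_rest = v_rest        # resting potential
        self.v_thresh = v_thresh    # spike threshold
        self.v = v_rest
        self.last_t = 0.0

    def receive(self, t, weight):
        # Decay the potential analytically over the whole silent interval,
        # then apply the incoming spike's weight.
        elapsed = t - self.last_t
        self.v = self.v_rest + (self.v - self.v_rest) * math.exp(-elapsed / self.tau)
        self.v += weight
        self.last_t = t
        if self.v >= self.v_thresh:
            self.v = self.v_rest    # reset after spiking
            return True             # spiked
        return False

n = EventDrivenNeuron()
n.receive(0.0, 5.0)        # small input: stays below threshold
n.receive(1000.0, 20.0)    # after a long silence, a big input fires it
```

Nothing at all is computed for this neuron during the 1000 time units of silence; the exponential decay is folded into the next event.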


I love HTM, its growth, its willingness to accept new ideas even as it makes and tests its own hypotheses, etc… but what we’re missing in it (and to be fair, what isn’t really included in it) is the aspect of temporal firing patterns triggering the depolarization and subsequent firing of other neurons…

I mean, TM is somewhat trying to accomplish this, approximately, but we still have a chicken and the egg problem. That’s to say that input encoding itself is divorced from the time-based nature that biological synapses experience.

So for me, what I’m working on as a part-time weekend project is to use LIF or Izhikevich spiking neurons as encoders, then let HTM read the spike trains from that and figure out what it all means.


@MaxLee I’d love to hear about your activity. I am especially interested in performance on prediction tasks in comparison to other encoders!

Cannot agree more. Someone mentioned/emphasized “temporal sequence memory” (or the lack of it) on this forum before, which seemed equally insightful.

TM doesn’t even attempt to differentiate short-term memory of temporal sequences from long-term memory of temporal sequences. That is NOT faithful to brain biology, and not a good approximation.

I think the way to recognize a temporal sequence in a specific order is to add incremental delays to the early items, so that the whole sequence arrives at a target neuron at the same time, making it fire. The delays could be implemented as incrementally distant axonal branches. If the branches (delayed inputs) synapse on the target neuron in reverse order, with more distant branches closer to the soma, then the sequence of spikes transmitted by the axon could arrive at the same time. I don’t know if this is original.
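The delay-line idea can be sketched in a few lines of Python (a toy, discrete-time illustration; the symbols, delays and threshold are all invented for the example, not taken from any real model):

```python
SEQUENCE = ['A', 'B', 'C']   # the pattern to recognize, one symbol per time step

# Earlier items get longer delays: 'A' is delayed 2 steps, 'B' 1 step, 'C' 0,
# so a correctly ordered A, B, C all reach the target at the same time step.
DELAYS = {sym: len(SEQUENCE) - 1 - i for i, sym in enumerate(SEQUENCE)}
THRESHOLD = len(SEQUENCE)    # the target fires only on a full coincidence

def detect(inputs):
    """Return the time steps at which the target neuron fires."""
    arrivals = {}            # time step -> number of coincident delayed spikes
    for t, sym in enumerate(inputs):
        if sym in DELAYS:
            arrival = t + DELAYS[sym]
            arrivals[arrival] = arrivals.get(arrival, 0) + 1
    return [t for t, count in sorted(arrivals.items()) if count >= THRESHOLD]

print(detect(['A', 'B', 'C']))   # in-order sequence: all spikes coincide
print(detect(['C', 'B', 'A']))   # reversed order: spikes never coincide
```

The in-order sequence produces one firing (all three delayed spikes land on the same step), while the reversed sequence spreads its arrivals out and never crosses threshold, which is exactly the order sensitivity described above.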


It is not original, but it is spot on! And it (synaptic-delay-based temporal/spatial pattern recognition) has not been utilized in any significant AI or neuromorphic computing project or implementation as far as I am aware, though it is a pretty well studied feature in neuroscience, for example:


Schöneich, S., Kostarakos, K., Hedwig, B., 2015. An auditory feature detection circuit for sound pattern recognition. Sci. Adv. 1, e1500325.

Sandin, F., Nilsson, M., 2020. Synaptic Delays for Insect-Inspired Temporal Feature Detection in Dynamic Neuromorphic Processors. Front. Neurosci. 14, 150.


Naah …aahh… they work in Dune :wink:


I suspect it is a “ladder” of SPs and TPs, where shorter temporal sequences become spatial patterns and vice versa, so at the end you get a symbolic feel for a lot of the activity.

The second ladder process is that the SDRs become sparser and sparser, i.e. compressed, as they go upward.

Because the hierarchy is mixed, the ladder process is interrupted/informed by intermediate sequences or spatial information.


I’ll create/update a topic when I have something more meaningful to show. Right now working full time is keeping me busy. For anyone who wants to run with it, here’s a (very) basic Leaky-Integrate-and-Fire spiking neuron.

The idea would be to have a series of these (perhaps as class objects) with randomly initialized tau and threshold values. The input ‘voltage’ at each time step (dt) needs to be a scaled value of the inputs. There’s lots of room for experimentation here. The work involved for each neuron is embarrassingly parallelizable as well.

import matplotlib.pyplot as plt

t = 0.0
dt = 1.0 / 200.0      # simulation time step

v = -50.0             # membrane potential
V_0 = -60.0           # resting potential
tau = 50.0            # membrane time constant; this is what we randomize internally

V_thresh = 50.0       # spike threshold

vs = []               # membrane potential trace
spike_times = []      # times at which the neuron spiked
spikecount = 0

while t < 100.0:
    # Leak: decay toward the resting potential
    dv = -(v - V_0) / tau

    # Constant input; swap in a time-varying input here, e.g.:
    # if 20.0 < t < 45.0:
    #     dv += 50.0
    dv += 30.0

    v += dv * dt
    vs.append(v)

    if v >= V_thresh:
        # Spike, then reset to the resting potential
        spikecount += 1
        spike_times.append(t)
        v = V_0

    t += dt


fig = plt.figure(figsize=(12, 9), dpi=100)
plt.plot(vs)
plt.show()

The plot produced by the code above shows the ‘spikes’. You can use the index of where each spike happens in simulated time; from a series of these neurons, those could then be concatenated together to form the input for HTM. The deltas between spike times are also potentially meaningful. The spike train can easily be represented as a binary array of ‘0’ (no spike) or ‘1’ (spike) for all steps in simulated time.
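As a minimal sketch of that last step (assuming a spike_times list and the dt and time horizon from the code above; the spike times here are made-up examples), the binary-array representation might look like:

```python
import numpy as np

dt = 1.0 / 200.0                    # same time step as the simulation
t_max = 100.0                       # same simulated duration
n_steps = int(t_max / dt)

spike_times = [0.25, 1.5, 3.0]      # example spike times in simulated time

# One bit per simulated step: 1 where a spike occurred, 0 elsewhere.
train = np.zeros(n_steps, dtype=np.uint8)
for st in spike_times:
    train[int(st / dt)] = 1

print(train.sum())                  # number of spikes encoded
```

Trains from many neurons can then be stacked or concatenated into one binary vector per step, which is already the kind of input HTM expects.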

This should also be fairly straightforward to implement in C/C++ or any other language to get better performance.
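For what it’s worth, the “series of these as class objects” idea might be sketched like this (a hypothetical toy; the parameter ranges, population size and input scaling are arbitrary guesses, not tuned values):

```python
import random

class LIFNeuron:
    """One LIF encoder neuron with randomized tau and threshold."""

    def __init__(self, v_rest=-60.0, dt=1.0 / 200.0):
        self.v_rest = v_rest
        self.dt = dt
        self.tau = random.uniform(20.0, 80.0)        # randomized time constant
        self.v_thresh = random.uniform(30.0, 70.0)   # randomized threshold
        self.v = v_rest

    def step(self, current):
        """Advance one time step; return 1 on a spike, else 0."""
        dv = -(self.v - self.v_rest) / self.tau + current
        self.v += dv * self.dt
        if self.v >= self.v_thresh:
            self.v = self.v_rest                     # reset after spiking
            return 1
        return 0

random.seed(0)
population = [LIFNeuron() for _ in range(8)]

# Drive every neuron with the same (scaled) input; each row of the raster
# is one neuron's binary spike train, ready to concatenate for HTM.
raster = [[n.step(30.0) for _ in range(2000)] for n in population]
```

Since each neuron only reads its own state, the inner loop over the population is the embarrassingly parallel part: each neuron’s trajectory can be computed on its own core or vectorized across the population.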


I have to (mostly) disagree here. What really made artificial flight possible wasn’t the profile of the wing but understanding an essential principle of flight stability, which applies to non-powered flight regardless of whether it is soaring birds like storks and albatrosses or human-made gliders, from Lilienthal’s to modern ones and any conventional aircraft.

We don’t have an equivalent principled insight on intelligence; we’re still at a pre-Lilienthal stage, guessing that somehow a few thousand feathers vote about the resultant force exerted on the airframe. Which is somehow correct, but might be both insufficient and (eventually) unnecessary at the same time.

I would rather look at music than voting. It’s something about a song that “feels right” and makes it unforgettable on first hearing. I suspect it is more like a chorus of singers, each seeking to tune into the overall rhythm, melody and harmony, than a parliament voting on its decisions.

But that was the point of my original comment. You don’t get powered flight by simulating feathers. You can’t think about powered flight if your focus is on bone structures, pin feathers, coverts and flapping.

HTM can be described perfectly well without ever talking about neurons. The key features are bit pattern data structures and algorithms operating on those bit patterns. The internal structure of neurons and the low level description of how they interact are astonishingly complex, but HTM is relatively simple. I’m confident there are more data structures and algorithms to find, and we won’t find them by simulating nature in the raw.


Of course not; you get a simulation of powered flight from your simulated feathers.


We humans are fond of using metaphors (supposedly that’s part of our unique intelligence), though their usage often backfires when the intention is to aid the exchange of ideas.

The first challenge for heavier-than-air flying machines is how to generate lift, the upward aerodynamic force. We would not encounter the other challenges (e.g. balance, control, power supply, navigation…) if we had not overcome the first one, IMHO…

I do feel there is some similarity in creating intelligence, or maybe there is not?


Sorry, I should have reacted against the two simplistic, opposite and quite flawed views: one stating “we shouldn’t bother much with biology ’cause airplanes have almost nothing to do with birds” (actually they all comply with an important set of principles), and the other stating that “deep learning has nothing to do with intelligence since there’s no backpropagation in our brains”.

Yes, sure, aerodynamic lift is an important piece of the puzzle, but that could be attributed to Bernoulli’s principle, expounded more than a century before actual flying machines. Or even to measuring and experimenting with kites, which had been around for hundreds if not thousands of years.

HTM is very likely to provide a similarly important clue, as revelations from CNNs, autoencoders, transformers and explorations into sparsity might do.
Still, it feels there’s quite a way from assembling these key insights into a set of essential principles allowing us to design and build intelligence.
