It could be my misinterpretation, but most TM descriptions assume a 0-or-1 signal on the column (maybe as a simplifying device) … but we know that in reality there is a “receptive field” … somewhere around ~1,000 synapses (10% of ~10,000)
The question then is: how exactly are active neurons selected?
There are two competing forces that decide which neuron is active: the FF threshold and the predictive threshold (local and remote, on the distal dendrites).
The problem is that, because the TM model does not support timing, you need to use a threshold and deem all FF inputs above it active; but you also need the predictive threshold to select the final-active neurons from the FF-active ones, or the other way around.
How do you select those thresholds?
It's an interdependent situation: different threshold choices produce different final-active neurons. E.g. a predicted neuron may not become active if it does not pass the FF threshold, and that predictive neuron may then be either punished or rewarded because of it.
Also, we seem to end up with two different update rules: one for FF/proximal and another for FB/distal.
@mraptor I believe that neurons in columns are first “depolarized” or put in a predictive state if their inputs pass the predictive threshold, as you said. Next, k-columns are selected as winners based on a winner-take-all algorithm from the FF-threshold inputs. Next, for each active column, the winner neuron(s) are selected based on which are depolarized first, and if none are, then they are all activated in the “burst” mechanism.
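The ordering described above can be sketched in a few lines of Python; the sizes, thresholds, and variable names here are all illustrative, not taken from any particular implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N_COLS, CELLS_PER_COL, K = 100, 8, 5            # illustrative sizes

ff_overlap = rng.random(N_COLS)                 # FF overlap score per column
# depolarized[c, i]: cell i of column c passed the predictive threshold
depolarized = rng.random((N_COLS, CELLS_PER_COL)) > 0.9

# 1) k-winners-take-all on the FF overlaps selects the active columns
active_cols = np.argsort(ff_overlap)[-K:]

# 2) within each active column: depolarized cells win; otherwise burst
active_cells = np.zeros((N_COLS, CELLS_PER_COL), dtype=bool)
for c in active_cols:
    if depolarized[c].any():
        active_cells[c] = depolarized[c]        # predicted winner cell(s)
    else:
        active_cells[c] = True                  # burst: every cell activates
```

Note the burst step only runs for columns with no depolarized cell, so a correct prediction keeps the representation sparse.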
I believe the low-level details of how this process is sorted out varies depending on the implementation. I know that our BrainBlocks implementation is different from HTM in that we tried to aim for an algorithmic elegance over biological plausibility.
The “receptive field” is assumed to be already implemented and handled by the module that provides the inputs to TM. For example, using a scalar encoder's output as input, a single bit covers a receptive field of an interval of possible scalar values. That is, if the scalar s is in the interval a < s < b, then the bit is 1; otherwise it is 0.
Alternatively, the SP learns and builds receptive fields automatically from a field of binary inputs. These fields could be arbitrary or they could have topological constraints like making sure all bits are within a neighborhood.
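As a rough sketch of the scalar-encoder case above (the interval bounds, width, and function name here are illustrative, not any specific encoder's API):

```python
import numpy as np

def encode_scalar(s, lo=0.0, hi=100.0, n_bits=64, w=8):
    """Toy scalar encoder: each bit's receptive field is an interval of
    scalar values, and w consecutive bits are active for any input."""
    bits = np.zeros(n_bits, dtype=np.uint8)
    start = int((s - lo) / (hi - lo) * (n_bits - w))   # first active bit
    start = max(0, min(n_bits - w, start))             # clamp into range
    bits[start:start + w] = 1
    return bits

a, b = encode_scalar(10.0), encode_scalar(11.0)
overlap = int(np.dot(a, b))   # nearby scalars share active bits
```

Because the receptive fields of adjacent bits overlap, nearby scalar values produce SDRs with high overlap, which is what the downstream SP/TM relies on.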
Yeah… for the SP+TM combo I have so far TWO possible selection methods: by rank or by threshold, and TWO sequence orders: FF-first/FB-second and the opposite.
Which gives me 4 algorithm variants.
Then there are 11 possible connection schemes, with the following acting as “atomic” connectivity: FF, FB, Local, FF(0|1), columnar organisation, just-a-group-of-neurons.
Local: internal connections within the module
FB: distal and afferent
FF(0|1): direct signal skipping the receptive-field processing (not biological, but useful)
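One way to picture the “atomic” connectivity types is as composable flags, so a module's connection scheme is just a combination of them; the names below are illustrative labels for the categories in the post, not any existing API:

```python
from enum import Flag, auto

class Conn(Flag):
    """Illustrative labels for the 'atomic' connectivity types."""
    FF = auto()       # feed-forward, through the receptive field
    FB = auto()       # feedback: distal and afferent
    LOCAL = auto()    # internal connections within the module
    FF01 = auto()     # direct 0/1 signal, skipping receptive-field processing

# a module's connection scheme is then a combination of these flags
SP = Conn.FF                       # SP is FF-only
TM = Conn.LOCAL                    # TM is local-only
mixed = Conn.FF | Conn.FB | Conn.LOCAL
```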
I started this way and am moving slowly towards as much bio-plausibility as possible … every new attempt increases the complexity and resource usage … this time around I figured out a way to hold all the neuron data in a single array, which I can later move to the GPU, because I'm using the indexed-SDR format … and with Ray I can chain multiple modules across multiple CPUs and bypass the Python GIL.
Also, via virtual SDRs I can emulate direct connections between neurons in different modules ;) so that preserves bio-plausibility while still letting me build everything as separate modules.
We mixed the representation between arrays and indexed SDRs. We use indexing for describing synaptic connections, since that space is so large, and arrays for representing neuron activations, since these are grouped and constrained. This seems to be the most efficient way to do it.
If you convert the neuron array to a binary array, it is even more efficient, but you need to figure out the binary arithmetic yourself or lift it from BrainBlocks. We tested it, and binary arrays are cheaper for representing binary neuron activations. They keep the memory footprint down to 1/32 or 1/64, depending on your system's integer bit size. It is only a 1/8 improvement if you were using a char type for your 0/1 activations.
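A minimal numpy sketch of that saving, assuming `packbits` for the word-level packing (an implementation choice for illustration, not necessarily what BrainBlocks does):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
a = (rng.random(n) < 0.02).astype(np.uint8)   # 2%-sparse 0/1 activations
b = (rng.random(n) < 0.02).astype(np.uint8)

pa, pb = np.packbits(a), np.packbits(b)       # 8 activations per byte
ratio = a.nbytes / pa.nbytes                  # packed form is 1/8 the char size

# overlap via bitwise AND on the packed form matches the element-wise count,
# which is the "binary arithmetic" you have to handle yourself
overlap = int(np.unpackbits(pa & pb).sum())
```

Packing into 64-bit integers instead of bytes gives the 1/64 figure mentioned above, at the cost of doing the AND/popcount on machine words yourself.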
Here is the idea … every input has a constant virtual size N (equal to the number of neurons in the other module).
Synapses are 10-20 numbers in the range 1…N, so when an iSDR arrives, 10 of its 2%-of-N active bits are stored as synapses (plus 10 float16 values for the permanences).
The result is a direct virtual connection between neurons in any two groups…
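A minimal sketch of that scheme, with sizes analogous to the numbers above (N, the 2% sparsity, the 10 sampled synapses, and the permanence threshold are all illustrative):

```python
import numpy as np

N = 10_000                    # virtual input size = source module's neuron count
rng = np.random.default_rng(2)

# an incoming indexed SDR: the indices of the ~2%-of-N active source neurons
isdr = np.sort(rng.choice(N, size=200, replace=False))

# store 10 of those active indices as synapses, each with a float16 permanence
syn_idx = rng.choice(isdr, size=10, replace=False)
syn_perm = np.full(10, 0.3, dtype=np.float16)

# later: overlap = connected synapses (permanence above threshold) whose
# virtual source index is active in the arriving iSDR
connected = syn_idx[syn_perm >= 0.2]
overlap = int(np.isin(connected, isdr).sum())
```

Because the synapse stores a virtual index rather than a pointer, the two modules never need to share memory; any iSDR addressed in the same 1…N space behaves like a direct neuron-to-neuron connection.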
Some of the modules will be just one group, some may be five … e.g. SP is FF-only … TM is ~10,000 neurons but purely local … also I hope not all 100 segments are used.
I haven't figured out a forgetting mechanism to prune them yet.
Also, the overlap search on a big array takes ~20 ms on my laptop … under ~1-2 ms using numba (no float16 support yet) … microseconds to milliseconds on the example above.
Besides this op, I don't see any other that would take longer.
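For reference, that overlap search over all segments can be expressed as a single vectorized gather in plain numpy (numba would compile a loop version of the same thing; the sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, SEGS, SYN = 10_000, 1_000, 10     # neurons, segments, synapses per segment

# synapse source indices for every segment, plus the current active mask
syn_idx = rng.integers(0, N, size=(SEGS, SYN))
active = np.zeros(N, dtype=bool)
active[rng.choice(N, size=200, replace=False)] = True

# overlap of every segment at once: gather each synapse's active flag and sum
overlaps = active[syn_idx].sum(axis=1)          # shape (SEGS,)
best = int(overlaps.argmax())
```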
This has been my experience as well.
Learning about how the brain works is interesting and all, but if you really want to make a brain then I think that you will necessarily need to model it at a biologically accurate level.
The problem with going full-biological is that you can't solve real problems … NNs are non-biological, but you can solve a lot of problems with them.
The building blocks like SP, TM, TP, and grid modules are “non-obvious”, i.e. it is not very clear how to combine them to create a tool that can solve some obvious task.
E.g. there is no way to do classification … you can maybe build a complex structure to learn simple schematic objects and then be able to recognize them, but it is a laborious task and you may not succeed.
One of my current preoccupations is: if I succeed with this better Model of Modules for TBT, in what ways can I arrange and combine them to solve practical problems? It is still a quandary … I would be very grateful for any thoughts on this!
E.g. how do you arrange SP, TM, TB, and the grid system to solve three different problems under the same underlying structure, a GRID:
Classification/recognition of schematic objects built with symbols at positions on the grid
I disagree. Evolution arrives at a “just-good-enough” solution using the materials it has on hand. In order to understand how a brain works, you need to extract the principles on which millions of years of natural selection have converged. Just adding biological elements for the sake of it distracts from what those principles might be.
My goal is to gradually remove as much biology as possible until we have algorithmic descriptions of the computing principles being used by the brain. So I'm working backwards, starting with biology and removing as many elements as possible until we have purely algorithmic modules.
With these modules, we can then build brain-like computation and essentially understand how it works. Presumably we’d then build more engineering-oriented architectures for accomplishing the same tasks without the evolutionary baggage in a human brain like x-year childhood development, lizard-brain emotions that distract from the task, and insufficient sensory or cognitive capability for the task at hand (e.g. sonar perception). We’d be building brains to spec, adding in the capabilities needed to accomplish the desired task. I’m sure we’ll find some interesting things about what is actually needed to keep artificial brains from going pathological.
Spot on! It’s one of the enduring frustrations of this site, the constant focus on inessential biological complexity. I don’t want to be a neuroscientist, I want to buy hardware and write software to perform brain-like functions, based on known scientific and mathematical principles. If HTM is one, there must be many more.
Yes, it does not need to be biologically accurate, but the architecture for systems, signals, switching, learning, attention, etc. will likely need to be fairly similar. I imagine much can be done with gates, counters, oscillators, etc. All the same hardware we are familiar with.
I find it interesting that the functions of the subcortex are considered distracting from the functions of the cortex. A common ding on current neural networks of all flavors is that they can’t perform the simple tasks that most critters do. Even the “very simple” critters. Even ones with little or no cortex.
But they do have structures that are similar to the subcortical structures.
Perhaps it’s time to rethink what is important to making a functional AI.
The ‘simple tasks’ like motor control, homeostasis, sensory perception appear to have evolved over many millions of years. Evolution is messy, and I would expect a lot of specialised hardware and neural circuits with dynamic emergent behaviour. That stuff is tough to analyse, really tough.
But some parts of ‘higher’ brains (such as the cortex) are different, and have evolved faster. There is a simplified repeating structure associated with increasingly complex behaviour. That suggests a ‘hardware’ computational unit and some kind of ‘software’.
So my point is that it is neither interesting nor productive to focus on particular neural pathways, nuclei or neuronal structures. The pay-off should come from analysing and understanding the repeating unit, the algorithms and data structures by which it operates. Those we might well be able to replicate in silicon.
We won’t find out how an animal does locomotion, but we might be able to create AI that solves unstructured problems. Even if it was only as smart as a rat but a million times faster that would be really something.
One of the advantages of working strictly at a biologically accurate level is that we know it works. And it’s not just the algorithm that works, the implementation also works.
Take for example a biological synapse. Within each synapse there is a Calcium concentration which is important for implementing hebbian learning (in biology). Scientists have studied such things and built accurate computer models of how synapses work. They also built “algorithmic” models of the same thing which do not directly model calcium, but instead use a variable in the range [0, 1] as an analogue to the calcium concentration.
However, there are other reactions which use Calcium, and some of these are also important and have a meaningful impact on the brain's algorithm. These other reactions can release, sequester, and react with Calcium, and each specific type of synapse can contain a different set of reactions. In this way the basic model of a synapse can be augmented with additional logic, by including reactants in the synapse that interact with Calcium.
The “algorithmic” model of a synapse works well in isolation, but in the context of all of the other things that are happening in the brain, it becomes less clear how to fit all of the pieces together.
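A toy version of that “algorithmic” synapse, with a [0, 1] variable standing in for the calcium concentration and driving a hebbian-style permanence update; all constants and function names here are invented for illustration:

```python
# Sketch only: a [0, 1] "calcium" state rises on coincident pre/post
# activity, decays otherwise, and gates the permanence update.
def step(ca, pre_active, post_active, decay=0.5, influx=0.4):
    """Calcium-analogue: decays each step, rises on coincident activity."""
    ca = ca * decay
    if pre_active and post_active:
        ca = min(1.0, ca + influx)
    return ca

def update_permanence(perm, ca, ltp=0.6, ltd=0.2, inc=0.05, dec=0.02):
    """High 'calcium' potentiates; moderate depresses; low does nothing."""
    if ca >= ltp:
        perm = min(1.0, perm + inc)
    elif ca >= ltd:
        perm = max(0.0, perm - dec)
    return perm

ca, perm = 0.0, 0.30
for _ in range(3):            # repeated coincident pre/post activity
    ca = step(ca, True, True)
    perm = update_permanence(perm, ca)
```

The point of the post stands out in this form: other Calcium-coupled reactions would be extra terms inside `step`, and it is not obvious how they compose with everything else once you leave the isolated-synapse setting.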
Spot on. When discussing evolution, one has to keep in mind what the end goal of the process is: to propagate the genes. Evolution didn't shape us to develop civilization, or technology, or any of that; we evolved to maximize our procreation, which we certainly accomplished. Our minds, however, have developed to the point that we can now control procreation, not only our own but that of every other procreating creature on the planet. Evolution would never allow this. So what happened?

Our cortex increased in size to support our social hunter-gatherer existence. Then a peculiar ability evolved: language. Once this happened, we were able to blow every other organism away. Human language, however, evolved only for communication, not thinking. Complex planning, sensing what others are thinking, and then being able to ask them what they are thinking requires a special construct loosely called consciousness. Consciousness is the operating system of the human mind; it exists only as software and it is self-organizing once primed. It requires syntactical/grammatical recursive language in order to develop, so no other organism has it.
From my viewpoint, I think HTM has the best structure to implement it in a neural context, but even though I theorize that it is sufficient, I also do not think it necessary.