Time nuances in reception of inputs

As we all know, the HTM implementation is biologically constrained. My intuition tells me that there are timing nuances in how inputs are received in the cortex. To be more specific, if there are 5 bits going in, the 1st bit could be received before the 2nd, the 2nd could arrive last, and the 5th and 3rd bits could be received with very little time difference between them.

I would like to know if this kind of nuance exists in the brain and, if it does, why it is not implemented in HTM. It seems to me that this nuance may add a dimension to an input, which could be treated as a feature.

1 Like

I like to think of the nerve signal as pulse trains, so perhaps you could put this a bit differently:
1 .x...x...x...x...x... (5)
2 ...x.x.x.x.x... (5)
3 .x...x...x...x...x...x...x. (7)
4 ...xxxx.x...x...x...x...x... (9)
5 ...x...x...x...x...x...x... (6)
1 and 3 are first.
4 is the most intense.
3 is among the first but not as intense as 4. That said, it may end up winning over 4.
5 is signaling a lower intensity, but if you count the number of pulses, it is stronger than 1 and 2 above.
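To make the two readings concrete, here is a tiny sketch (my own code, not from the thread) that treats the trains above as strings ('x' = spike, '.' = silence) and extracts the first-spike time (phase) and the pulse count (rate) from each:

```python
# Toy pulse trains from the post above: '.' = silence, 'x' = a spike.
# First-spike position ~ phase (earlier = a stronger claim to "first"),
# spike count ~ intensity (rate coding).
trains = {
    1: ".x...x...x...x...x...",
    2: "...x.x.x.x.x...",
    3: ".x...x...x...x...x...x...x.",
    4: "...xxxx.x...x...x...x...x...",
    5: "...x...x...x...x...x...x...",
}

def first_spike(train):
    """Index of the first 'x'; lower means earlier (phase coding)."""
    return train.index("x")

def rate(train):
    """Total spike count over the window (rate/intensity coding)."""
    return train.count("x")

for k, t in trains.items():
    print(k, "first spike:", first_spike(t), "pulse count:", rate(t))
```

Running this confirms the observations above: 1 and 3 spike first (index 1), 4 has the highest count (9), and 5 outcounts 1 and 2 (6 vs. 5).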

So what does this all mean?
Phase coding means "first" is the most important for signalling presence. I see that as winning the competition to signal that something is happening and capturing a mini-column by virtue of winning the TM competition.

Repetition could well play an important role in establishing a level of signal for intensity. This may also play a significant role in training synapses.

Numenta uses phase in temporal memory competition.
A more complete model would consider both phase and amplitude (pulse rate). That said, I think that would make for a very complicated model. I have looked at this for my hex-grid model and it does complicate things.
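One purely illustrative way to combine the two (this is my own sketch, not Numenta's algorithm): rank by earliest first spike and break ties with pulse count. Under such a rule, 3 beats 4 on phase despite 4's higher rate:

```python
# Hypothetical combined competition: earliest first spike wins;
# a higher pulse count breaks ties. Train strings: '.'/'x' as above.
def winner(trains):
    """trains: dict id -> spike string. Returns the winning train id."""
    return min(trains, key=lambda k: (trains[k].index("x"),
                                      -trains[k].count("x")))

trains = {
    3: ".x...x...x...x...x...x...x.",    # early, moderate rate (7 pulses)
    4: "...xxxx.x...x...x...x...x...",   # later, but intense (9 pulses)
}
print(winner(trains))  # 3: phase beats amplitude under this rule
```

Whether phase should always dominate amplitude like this is exactly the kind of modelling choice that complicates things.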

4 Likes

These timing nuances are also a form of temporal coding. It cannot simply be assumed that the temporal memory in HTM is equivalent to a model of temporal coding. Some analyses of the brain come from a physical perspective, such as waves and resonances. Until we figure them out, we can't tell whether they are side effects.

1 Like

I see what you mean, and I can intuit that the TM is emulating/handling this nuance, albeit in a very crude way.

The reason I asked is that I'm experimenting with a highly concurrent system that somewhat mimics the SP, but in a real-time manner: a region can receive discrete input bits concurrently (though not fully in parallel). I call each input-bit slot a socket, which is observable by nodes; nodes subscribe to these socket events. When I ran two independent subscriber/publisher processes, the meaning of an "input set" obviously became different from a conventional SP's, because there are nuances in the context switching of those processes. I believe it is much more realistic to have these nuances, and I cannot really see where exactly this type of nuance is being emulated in HTM. Another thing I've realized is that the SP algorithm is fully synchronous: it does not allow a concurrent/parallel flow between the inputs and whatever component governs feedback and prediction.

3 Likes

@Jose_Cueto, you say you ran into a specific problem. Could you give an example of a real-world situation?

The example from @Bitking is really helpful for understanding the problem, but it is an abstract set-up. We don't know what these 5 spike streams represent. If they fire roughly together, there is good reason to think they represent something semantically similar. So even if the winner (however that is determined) causes the other signals to be inhibited, the trigger of all these signals gets transmitted.

Let me try to describe my experiment.

So I was experimenting with simulating an SP with real-time properties. At least two processes run simultaneously. One process acts as a stream of input bits; these bits are streamed concurrently using internal asynchronous IO. Each bit in the stream is observable, and I call them sockets; they are analogous to input bits in the SP. Another process is responsible for running nodes. Nodes are simply subscribers to sockets, and they run concurrently with each other; these nodes are analogous to columns in an SP.

When I ran this simulation, I came to realize that the sequential nature of the SP algorithm no longer applies, because it can no longer guarantee processing of the whole set of input bits. The very meaning of "input bits" changed: in the SP it means a discrete set of input bits, probably the result of a function similar to @Bitking's explanation above. In this simulation, however, another dimension becomes relevant: the order in which these input bits are received, because the system is now highly asynchronous.

I believe (most probably wrongly, as I'm not a neuroscientist) that the brain does the same thing, and that simply filtering the signals' intensities does not capture this dimension. My question then was: is this really happening in the brain? I believe yes, so I asked why it is not implemented in HTM. @Bitking probably answered it: it complicates things. But we are already dealing with a complex system, the brain, so why leave this dimension out? After running the simulation, I'm convinced that this dimension contributes a lot to emergence (probably learning), because it is more realistic than the current SP, which is highly static and sequential. The simulation is highly asynchronous and concurrent, even parallel when processor cores > 1. I think it may be worthwhile to implement an SP algorithm that works in this type of highly asynchronous and concurrent system and see what emerges.
This is what I'm trying to do now, but it is hard due to computing limitations, and I'm also trying to simplify the system to use non-neuroscience concepts.
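The socket/node idea can be sketched as follows (my own code with my own naming, not the poster's actual implementation): each "socket" is an observable input-bit slot, each "node" subscribes to a set of sockets, and because publishing is asynchronous, the arrival order of bits becomes part of the input:

```python
# Minimal asyncio sketch of an asynchronous input layer: sockets publish
# bits with nondeterministic latency; a node records arrival order.
import asyncio
import random

class Socket:
    """One input-bit slot; nodes subscribe and are notified on publish."""
    def __init__(self, idx):
        self.idx = idx
        self.subscribers = []

    def subscribe(self, node):
        self.subscribers.append(node)

    async def publish(self, bit):
        # Simulate nondeterministic delivery latency (context switching).
        await asyncio.sleep(random.uniform(0, 0.01))
        for node in self.subscribers:
            node.on_bit(self.idx, bit)

class Node:
    """Analogous to an SP column: sees bits in whatever order they arrive."""
    def __init__(self):
        self.arrival_log = []   # (socket_idx, bit), in arrival order

    def on_bit(self, idx, bit):
        self.arrival_log.append((idx, bit))

async def main():
    sockets = [Socket(i) for i in range(5)]
    node = Node()
    for s in sockets:
        s.subscribe(node)
    # Publish one "input set" concurrently; arrival order is not fixed.
    await asyncio.gather(*(s.publish(1) for s in sockets))
    return node.arrival_log

log = asyncio.run(main())
print(log)  # the same 5 bits, but their ordering now carries information
```

Two runs of this can deliver the same 5 bits in different orders, which is exactly the dimension a synchronous SP never sees.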

1 Like

Many synapses in the brain use spike-timing-dependent plasticity (STDP). It's not implemented in HTM systems because HTM simulates ~100 milliseconds at a time, which means that exact timing information is not calculated.
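For reference, the standard pair-based STDP rule potentiates a synapse when the presynaptic spike precedes the postsynaptic one and depresses it otherwise, with exponentially decaying windows. A toy sketch (the parameter values here are illustrative, not from any HTM code):

```python
# Pair-based STDP: weight change as a function of spike-time difference.
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """dt = t_post - t_pre in milliseconds.
    dt >= 0 (pre before post) -> potentiation, decaying with dt.
    dt <  0 (post before pre) -> depression, decaying with |dt|."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

print(stdp_dw(5.0))    # pre fires 5 ms before post -> positive change
print(stdp_dw(-5.0))   # post fires 5 ms before pre -> negative change
```

Note the millisecond-scale time constants: this is exactly the timing resolution that a ~100 ms HTM step abstracts away, replacing it with binary permanence increments/decrements.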

2 Likes

I take a cut at it by having 4 phases of voting in hex-grid formation.

1 Like

If you build it, the chips will come. Once we have chips like the ones RAIN Neuromorphics is working on, we’ll be able to parallelize all the processes.

1 Like

There are all sorts of things really happening in the brain, yes. And you can model each protein in the synapses independently, but then my bet is you won't implement an AI (let alone G) any time soon.

There are people modelling a small number of neurons very precisely, where the intensities of current along different parts of the membrane are simulated.

There are people running larger-scale simulations where those fine electrical simulations are replaced by a model of synaptic integration and firing at the millisecond scale (which would take your concerns into account).

There are people trying to get a more abstract, functional understanding, taking into account what are believed to be the "most relevant" properties so that they can simulate a very large number of neurons and hopefully still be on the right track, provided the functional abstraction was a good one.

By having inhibition modelled outside the main sim, by the concept of minicolumns having a decisive effect on functionally distinct synaptic integration zones, and by considering only synapse persistence and on/off states, all assumed synchronized to a main clock, HTM obviously falls into that last category.

Bottom line: all of these are a matter of choices.

2 Likes

I rambled a bit about this thread today on my live stream but I am not sure I have anything to add.

Thanks for quickly tackling this topic, and for playing some guitar licks in the background; it sounds bluesy to me. I like that you confirmed that neurons operate independently and mentioned neuromorphics. The behavior of today's SP does not mimic neurons acting as distributed machines. There is a lot to see in the emergence of highly asynchronous, distributed, and parallel machines that cannot be seen in a synchronous system; this became very clear when I ran my experiment.

The current SP algorithm is a very crude one. For example, questions came up: when to check for overlaps without wasting the incoming input bits? How to determine relevant sets of inputs? And so on. It generates more questions, and it also tells me that the current SP cannot simply do its increments/decrements retrospectively, because a lot of bits are coming in; in fact (I believe) the synapses increment/decrement instantaneously, and there is no time for the SP's inhibition algorithm. It also tells me that some of the SP algorithm's properties may not necessarily need to be written in code; instead, they should be "searched" for. Inhibition, for example: is it really a sequential, controlled behavior, or is it an emergent one? If it's the latter, then it needs to be searched for, not implemented.

1 Like

Could you please share some references to this when you can? Thanks, I’d be interested to know.

1 Like

For example, these people are trying to replicate the firing properties of individual neurons.

Others are using in silico models to try to understand things. This one even comes with a nice video visualizing their model firing.

Now, for coarser models operating at a timescale where time-dependent phenomena would be part of the sim? I'd say this is currently the vast majority of them, if you ask me.

E.g., for a usage…

What? No blue brain project?

https://www.epfl.ch/research/domains/bluebrain/

Mine is grey. So… Can’t trust those guys.

1 Like

Following up on the rather muddled answer in post #2 above:

I see a reasonable approximation of spike neural signaling as phase coding + intensity.

The implementation consists of time-slicing your processing to the degree you feel is necessary to capture the temporal quantization required by your problem.

The signal that is transmitted from one unit to another should be generated and made present in the time slice that is relevant in the model you are creating.

This value is a scalar that represents the intensity of the signal. It is analogous to the repetition rate of the pulse train.

An example: in temporal memory, the time slices are one alpha cycle plus early and late phases within the temporal competition.

  • Alpha-cycle slice: the cell's apical arbor does or does not generate a bias signal for the temporal competition. TM does not currently use this, but it could be an analog value related to the degree of match.
  • Temporal competition, early slice: if a bias is present, the cell fires and, through the action of the attached chandelier inhibitory cells, creates a nogo signal to the mini-column.
  • Temporal competition, late slice: if there is no winner and no resulting inhibition, burst.
    And various housekeeping and learning things as required.

NOTE: I acknowledge that this approach misses one aspect of the pulse-train model: the fact that a long pulse train makes a continuous signal that could be used to gate another pulse train over a period of time.
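A toy rendering of the slice scheme above (my own code, just to make the three slices concrete; not Numenta's implementation), for one mini-column of cells:

```python
# Three slices per cycle for one mini-column:
#   1) alpha slice: each cell does or does not produce a bias signal
#   2) early slice: biased cells fire; chandelier-style inhibition
#      silences the unpredicted cells in the same mini-column
#   3) late slice: no biased cell and no inhibition -> the column bursts

def minicolumn_step(biases):
    """biases: list of booleans, one per cell (the alpha-slice result).
    Returns the set of cell indices that fire this cycle."""
    # Early slice: predicted cells win and inhibit the rest.
    winners = {i for i, biased in enumerate(biases) if biased}
    if winners:
        return winners
    # Late slice: no winner, no inhibition -> burst (every cell fires).
    return set(range(len(biases)))

print(minicolumn_step([False, True, False, False]))  # only cell 1 fires
print(minicolumn_step([False, False, False, False])) # burst: all cells fire
```

The scalar intensity value could then feed into how `biases` is computed per alpha cycle, rather than being a simple on/off as sketched here.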

1 Like