Non-precise timing

Timing is always relevant…information from the now, the moment, the present, the future and the past is available to the neocortex at all times…an example:

In classical conditioning the animal learns to stay away from, say, an electrified floor. This information, learned in the now (in the experiment), is then transferred into a restriction in the learning process:
Next time you sense this floor (i.e. the stimulus), stay away…the animal learns a restriction.
As you can see, the sensory information is transferred into a restriction to create value in other situations and other “nows”. This is the real wonder of the neocortex, and apparently of the pyramidal neurons: they can reuse information in the now. This saves the trouble and energy of collecting that information in real time…

Information from the past creating restrictions is just one of several sources of information in the now. You also have assumptions about the future, regulation in the present and control in the moment. It all depends on how close the stream of bits is relative to the timing of the decision to move.

So timing is very much about having the best information when you need it, and this is what the neocortex makes us able to have.

Is this “scientific” enough? :slightly_smiling_face: Finn


I can’t argue that. However, this thread is about precise timing in the sense that (e.g.) I can recognize (or hum) a song at different speeds while preserving its identity as the same song.

Edit: now we’ve been moved to a thread where this is on-topic.


Smiling…it seems you want the solution and don’t care much about the problems that the solution should solve? OK, I will go into a bit more detail:

You can have a random stream of bits…and no matter how you time it, you will not be able to predict the next bit…you can only guess based on some average you learn from listening…agree?

1.2.1.3.2.4.3.5.2.2.7.7.7.7.6.4.5…the entropy of this string is high, and you will have to use many questions to be able to predict with a high probability…agree?
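To make the entropy claim concrete, here is a minimal sketch (illustrative only) that estimates the entropy of that digit string from its empirical symbol frequencies:

```python
from collections import Counter
from math import log2

def empirical_entropy(symbols):
    """Shannon entropy in bits per symbol, from observed frequencies."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# The digit string from the post above.
stream = [1, 2, 1, 3, 2, 4, 3, 5, 2, 2, 7, 7, 7, 7, 6, 4, 5]
print(round(empirical_entropy(stream), 2))  # prints 2.68
```

With seven distinct symbols a uniform stream would give about 2.8 bits per guess; this one is only slightly below that, so a guesser has little to work with.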

Now what can nature do to this?

  1. Guess from the number before…Markov chains…is the solution to program…
  2. Guess from a sequence of numbers before…that means memorizing a pattern, a rhythm, and trying to calculate the frequency of different patterns…
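Option 1 can be sketched in a few lines. This toy first-order Markov predictor (illustrative, not a model of the neocortex) just counts transitions and guesses the most frequent successor:

```python
from collections import Counter, defaultdict

def train_markov(seq):
    """First-order Markov model: count transitions prev -> next."""
    table = defaultdict(Counter)
    for prev, nxt in zip(seq, seq[1:]):
        table[prev][nxt] += 1
    return table

def predict(table, symbol):
    """Most frequently observed successor of `symbol` (None if unseen)."""
    followers = table.get(symbol)
    return followers.most_common(1)[0][0] if followers else None

seq = [1, 2, 1, 3, 2, 4, 3, 5, 2, 2, 7, 7, 7, 7, 6, 4, 5]
model = train_markov(seq)
print(predict(model, 7))  # prints 7: a 7 was followed by another 7 three times
```

Option 2 would replace the single-symbol key with a tuple of the last few symbols, i.e. memorizing patterns rather than single transitions.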

What is probably done in the neocortex is that the firing of neurons across layers is synchronized…this is the neurophysiological observation made in the Human Brain Project (Henry Markram)…the synchronization makes it possible to align parallel processes in the neocortex (parallel in the sense that they try to answer different questions in parallel)…the synchronization thus forces the neurons into the same rhythm…so that the feedback matches the prediction…

How can this be coded?

Finn

the coding can be done based on an example:

imagine you have a bow and an arrow and you want to hit the target. You can be accurate, which means that you hit the target, and you can be precise, which means that you hit the target in the same place every time…the same with the brain and each neuron…it should start firing at the accurate point in time, and it should fire with a frequency that is precise…so timing is about starting to fire when the prediction says to, and achieving the goal is firing without any deviation from the predicted firing pattern (the frequencies)…

so this is the coding job…I guess…adjusting both accuracy and precision
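The bow-and-arrow distinction can be stated numerically: accuracy is the bias of the mean relative to the target, precision is the spread around the mean. A minimal sketch with made-up spike onset times:

```python
from statistics import mean, stdev

def accuracy_and_precision(observed, target):
    """Accuracy = bias of the mean relative to the target;
    precision = spread (sample standard deviation) around the mean."""
    bias = mean(observed) - target
    spread = stdev(observed)
    return bias, spread

# Hypothetical spike onset times (ms) against a predicted onset of 100 ms.
onsets = [101.0, 99.5, 100.5, 100.0, 99.0]
bias, spread = accuracy_and_precision(onsets, target=100.0)
# bias is 0.0 ms (accurate on average); spread is about 0.79 ms (the precision)
```

A learning rule could then adjust these two quantities separately: shift the onset to drive the bias to zero, and tighten the firing to shrink the spread.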

sorry, I did not understand the problem, I will keep quiet…

I think (and most people interested in HTM would probably agree) that timing at a very large number of scales is relevant to producing behavior.

  • It’s important for an agent to have learned from its long-term past experience in order to improve its behavior over its lifetime.
  • It’s important for an agent to keep track of its short-term past experience in order to make sensible choices right now.
  • It’s important for the agent to consider its present state when making those choices as well.
  • It’s important for the agent to predict its short-term future in order to avoid immediate danger and obtain immediate reward.
  • It’s important for the agent to predict its long-term future in order to avoid getting into bad situations, and achieve instrumental intermediate states that will result in long-term reward.

And of course the extreme long term on both ends (evolutionary history, future of society, etc.) is relevant as well, although only highly intelligent agents appear to consider the long-term future beyond their lifetimes.

The general ideas of HTM have the potential to address aspects of all of these timescales. The long-term history of the agent’s experience is stored in the connections of the network (particularly in the distal dendrites). Its short-term history is stored in the state of the temporal memory, and once implemented, the temporal pooling mechanism. Predictions at a variety of scales are made with increasing abstraction at higher levels of the network. And of course evolutionary history is relevant for designing the structure we’re trying to understand, and that understanding may help us think more clearly about the long-term future of intelligence itself.

These are certainly worthy issues for discussion regarding HTM. In particular, designing a mechanism for recognizing and predicting sequences at increasing scale and generalization at successive levels of a hierarchy (a key motivation and promise of HTM, not yet understood) seems crucial to me.

So I wouldn’t want to shut down any discussion. This seems like a good place to do it.

Hi jakebruce, whoever you are

Your post about handling information on different time scales in the now is a more or less direct copy of what I wrote to you just before…it seems unacademic and not quite respectful to convert my arguments into your own words without giving me any credit. But maybe that is how you act in here, and then to say “shut down” about the discussion is even more disgraceful. You can read more about timing in my book The Human Decision System; it’s on Amazon, and you can probably find some patents somewhere.

It is sad that an open-minded debate is unable to take place here without some people really trying to degrade others. The only consequence of this is that people just ask questions but never contribute anything of value…very disappointing…

Regards
Finn

Please be respectful. Your tone is combative.

I don’t find Jake’s post inappropriate. The best way to understand someone’s ideas is to repeat them back. He is not shutting down discussion, he is opening it.

I am quite confused by what you mean here. No one is degrading you as far as I can tell. And there are many people who contribute not only their own original ideas and theories, but diagrams, visualizations, and code. It is too bad you’re not getting anything out of this community.


hi Matt
Yes, I always want to be respectful, but in this case I think Jake and I needed to clear the air to sort out misunderstandings and to state what may have been underlying assumptions. If you read my new answer to Jake’s post, I think you can see my intentions.
Regards
Finn

This took place in a personal message conversation. I will post a portion of Finn’s message here so we can continue.

And now to respond. I hope you don’t mind me taking the liberty of reformatting some of your text for clarity.

Sure. There is a book in progress (https://numenta.com/biological-and-machine-intelligence/) and a list of academic papers (Numenta Research Papers). In particular, Jeff and Subutai’s paper “Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex” may contain most of the information you’re looking for: https://www.frontiersin.org/articles/10.3389/fncir.2016.00023/full.

I’m aware that sometimes, when coding for behavior (in the psychology sense, where researchers have to annotate observations of behaving animals to precisely describe the data), categories like these are used. In a simple reaching behavior for example, the motion is categorized into phases of preparation, initiation, extension, and termination. This is a practical decision, rather than one based on first principles of neural computation or behavior: they just need to annotate the data somehow in order to analyze it, so they choose these somewhat arbitrary phases in order to do so. I’ve spoken to behavioral psychologists and neuroscientists who readily admit this.

So I wonder, is there a more principled reason to categorize behavior in this way? Do you have evidence from brain recordings, or computational models, that suggest this taxonomy is grounded in the neuroscience (like studies of action selection in the striatum)? Such data would greatly help support your conjecture.

I do see the point in trying to understand the brain from an information-theory view. This is a popular idea in neuroscience (“the Bayesian brain” might be the trendy way to talk about it these days), and there’s a lot of work trying to understand how the brain might encode uncertainty, some of which strongly implicates the neuromodulator acetylcholine (ACh) in signalling uncertainty. See Michael Hasselmo’s work for some characteristic research in this area.
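As a concrete illustration of how uncertainty can be represented and reduced by evidence, here is a minimal Bayesian sketch. It is the textbook Beta-Bernoulli model, not part of HTM or of Hasselmo’s work; the numbers are made up:

```python
def beta_posterior(successes, failures, a=1.0, b=1.0):
    """Beta(a, b) prior updated with Bernoulli observations.
    Returns posterior mean and variance; the variance is the
    remaining uncertainty about the event's probability."""
    a, b = a + successes, b + failures
    post_mean = a / (a + b)
    post_var = a * b / ((a + b) ** 2 * (a + b + 1))
    return post_mean, post_var

# Uncertainty shrinks as evidence accumulates at the same success rate.
m1, v1 = beta_posterior(3, 1)      # few observations: broad posterior
m2, v2 = beta_posterior(300, 100)  # many observations: sharp posterior
```

A signal like the posterior variance here is (very roughly) the kind of quantity the ACh literature proposes the brain might broadcast.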

HTM has not yet incorporated ideas of Bayesian uncertainty or ACh (or really any neuromodulators). However, HTM networks make multiple simultaneous predictions, and can evaluate how surprising new patterns are when they arrive (novelty detection), so it does contain capabilities that could be pressed for information-theoretic purposes.
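For a concrete sense of what that novelty measure can look like, here is a toy sketch loosely modeled on the raw anomaly score used in HTM implementations (the function name and the column sets are illustrative):

```python
def anomaly_score(active, predicted):
    """HTM-style novelty measure: fraction of currently active columns
    that were NOT predicted (0 = fully expected, 1 = fully novel)."""
    active, predicted = set(active), set(predicted)
    if not active:
        return 0.0
    return len(active - predicted) / len(active)

# A familiar pattern scores low; a surprising one scores high.
score_low = anomaly_score(active={3, 7, 11, 19}, predicted={3, 7, 11, 19, 42})
score_high = anomaly_score(active={3, 7, 11, 19}, predicted={100, 200})
# score_low is 0.0, score_high is 1.0
```

This is the kind of surprise signal that could feed an information-theoretic account of attention or learning rate.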

I’m not sure how familiar you are with the consensus around the laminar structure of the neocortex in contemporary neuroscience, so I apologize if you already know all of this, but I’ll briefly review some of what’s known about the structure.

Most regions of neocortex are composed of 6 layers that can be distinguished approximately by their cell types and patterns of connectivity.

  • Layer 1 is the surface layer, and contains very few cell bodies, mostly just axons arriving from other parts of the cortex, and dendrites coming up from the deeper layers to connect with these axons. The few cells in this layer are inhibitory.
  • Layers 2 and 3 are very similar to each other, so similar in fact that many neuroscientists don’t think it’s appropriate to consider them separate. In reality what’s probably going on is that there is a gradient of concentrations of different cell types and patterns of connectivity between the two, so the overall function changes smoothly as you go deeper from what we call 2 to 3. These layers receive driving input from layer 4.
  • Layer 4 is considered the input layer of the cortex, receiving most of the projections from the feedforward pathway of the thalamus. It contains very few pyramidal cells, mostly spiny stellate cells, which can be thought of as pyramidal cells without the apical dendrite that extends to layer 1.
  • Layer 5 is considered the motor output layer of the cortex, and is thought to receive driving input mostly from layers 2/3 (although this is not universally agreed upon; under certain conditions L5 appears to fire first). Axons coming from layer 5 often project to subcortical motor regions, and many of those axons branch and send an efference copy to the thalamus, and indirectly back to the cortex.
  • Layer 6 is a very diverse and enigmatic layer, with many cell types and poorly understood function. But it’s often implicated in attention, and it’s thought to send feedback to the thalamus that helps decide what should be gated and what should be sent to layer 4.

So once again I apologize for the review if you knew all this, but it’ll be the foundation for what I’m going to say next.

I’m skeptical of your conjecture regarding the functions of layers 2 and 3, because the consensus seems to be that they are very similar with only subtle differences in structure and activity. HTM does not usually separate them for that reason: to align with what we think we know about the biology.

Do you have any studies that you can point to providing evidence of your conjectures about their functionality? I would be very interested in any data that can help tease out the computations they may be performing.

Agreed. There’s a fine interplay between episodic control (executing memorized trajectories), deliberate control (consciously planning and supervising trajectories), and model-free control (finely tuned adjustments to control parameters, learned from a large number of trials). The neocortex, striatum, hippocampus, and cerebellum all contribute to this interplay. HTM does not yet integrate most of these ideas, although a few people on this forum have contributed interesting ideas, implementations, and results related to this process (mostly under the umbrella of reinforcement learning).

Once again I have to express my skepticism about such a clean parcellation of the neocortical layers into answering particular questions. Especially from what I’ve gathered in other threads about your conjectures on the function of layer 1, as there are very few cells there, and no excitatory cells. So I find that difficult to reconcile with the data.

HTM considers the issue in a different way. Each layer has a computational function, but they do not parcellate cleanly into separate functions of the organism. Instead, the computational functions all work together to enable the organism to sensibly navigate reality. Briefly:

  • Layer 4 encodes input in the context of the motor outputs that are causing the input, using the efference copy from layer 5.
  • Layers 2/3 receive that context-grounded input and encode it in the context of the temporal stream of activity, and send their activity up to the next region in the hierarchy.
  • Layer 5 receives the temporally encoded representation from layers 2/3 (maybe; see note above about L2/3’s influence on L5) and produces behavioral signals that stimulate motor regions, and also sends a copy up to layer 4 of the next region in the hierarchy.
  • Layer 6 encodes a summary of all of the activity in this region and sends it down to the thalamus to act as an attentional feedback signal for the thalamus to decide what parts of the input are salient and should be relayed to cortex. This functionality is poorly understood and I don’t think anyone here has really implemented this.

All of these parts together help build a sensible, hierarchical, temporal representation of the state of the animal and the world. This representation can be used by basal ganglia regions like the striatum as context for action selection (“decisions” I suppose, in your theory), the positive/negative value of which are found by reinforcement learning using the dopamine-driven reward system in the brain.

So to summarize, I can see the point of a lot of what you propose. However, the details don’t appear, to me, to map nicely onto what contemporary neuroscience knows about the brain. I welcome more details and supporting evidence to change my mind, or to illustrate if I’m wrong about any of this.
