Biological way to decode SDRs back to objects

In https://github.com/htm-community/nupic.cpp/issues/297 I’m discussing how to properly decode (convert an SDR back to the original object) and what a biologically plausible way to do so would be.

Decoding, the inverse of compute(), aka top-down compute

This idea is about how we should properly obtain the original value from an SDR, i.e. how to “decode”.

Methods

There are two (arguably three) methods for decoding:

  1. a Classifier is trained on (SDR, value) pairs and can later infer SDR -> value.
  2. (in the past) there used to be top-down compute, which performed the inverse of compute() in the SP and TP (not available anymore).
  3. encoders in #291 support a deterministic decode from encoding -> value (see the sketch below).
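
To illustrate point 3, here is a minimal, self-contained sketch of what a deterministic encode/decode pair looks like for a simple scalar encoder (the names and parameters are illustrative only, not the actual #291 API):

```python
# Minimal sketch of a deterministic encode/decode pair for a scalar
# encoder. Names and parameters are illustrative, not the #291 API.

def encode(value, min_val=0.0, max_val=100.0, size=100, active_bits=10):
    """Map a scalar to a binary vector with a contiguous block of 1s."""
    span = size - active_bits
    start = int(round((value - min_val) / (max_val - min_val) * span))
    return [1 if start <= i < start + active_bits else 0 for i in range(size)]

def decode(encoding, min_val=0.0, max_val=100.0, active_bits=10):
    """Invert encode(): locate the block of 1s, map it back to a scalar."""
    span = len(encoding) - active_bits
    start = encoding.index(1)  # position of the first active bit
    return min_val + start / span * (max_val - min_val)

bits = encode(42.0)
print(decode(bits))  # ~42.2 -- exact up to the encoder's quantization
```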

Pipeline

[any real-world object] -> Encoder -> [binary vector] -> SpatialPooler -> [SDR] -> TM (TP) -> [SDR with active + predicted cells] -> CP -> [SDR] -> (Anomaly -> [scalar 0.0..1.0] / Classifier)
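
As a toy sketch of this data flow (every stage body below is a placeholder for the real component; “SDRs” are just sets of active bit indices), the forward pipeline is plain composition, and decoding would mean walking the arrows backwards:

```python
# Toy model of the pipeline; "SDRs" are sets of active bit indices and
# every stage body is a placeholder standing in for the real component.

def encoder(obj):             # [object] -> [binary vector]
    return {hash((obj, i)) % 1000 for i in range(20)}

def spatial_pooler(bits):     # [binary vector] -> [SDR]
    return {b % 500 for b in bits}

def temporal_memory(sdr):     # [SDR] -> [SDR of active + predicted cells]
    return {col * 32 + 7 for col in sdr}  # e.g. cell 7 of each column

out = temporal_memory(spatial_pooler(encoder("dog")))
# Decoding means inverting each arrow:
#   [object] <- Encoder.decode <- SP top-down <- TM.decode <- out
```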

Methods in detail

1. Classifier

The currently used approach relies on a Classifier, which can be attached anywhere in the pipeline; it learns the association between an SDR/vector and an [object], which it can later infer.
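
A minimal, library-free sketch of the idea, using a hypothetical OverlapClassifier (real implementations such as the SDRClassifier are statistical, but the principle is the same):

```python
from collections import defaultdict

# Minimal sketch of an SDR -> value classifier: each active bit votes
# for the labels it co-occurred with during training.

class OverlapClassifier:
    def __init__(self):
        self.votes = defaultdict(lambda: defaultdict(int))  # bit -> {label: count}

    def learn(self, sdr, label):
        """sdr is an iterable of active bit indices."""
        for bit in sdr:
            self.votes[bit][label] += 1

    def infer(self, sdr):
        """Return the label with the most accumulated votes."""
        tally = defaultdict(int)
        for bit in sdr:
            for label, count in self.votes[bit].items():
                tally[label] += count
        return max(tally, key=tally.get) if tally else None

clf = OverlapClassifier()
clf.learn({1, 5, 9}, "cat")
clf.learn({2, 5, 8}, "dog")
print(clf.infer({1, 5, 9}))  # -> "cat"
```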

2. Top-down compute

If an inverse compute were implemented, a Classifier would not be necessary: we could reverse direction in the pipeline and get back to the original value from anywhere in HTM processing.
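
To make “inverse compute” concrete, here is a toy numpy sketch (my own simplification, not the old NuPIC topDownCompute): the forward pass activates the top-k columns by overlap, and the top-down pass projects the active columns back through their connected synapses to reconstruct a plausible input:

```python
import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_columns, k = 64, 32, 4

# Toy stand-in for learned SP permanences: W[c, i] = 1 iff column c has
# a connected synapse to input bit i.
W = (rng.random((n_columns, n_inputs)) < 0.2).astype(int)

def compute(x):
    """Forward SP: activate the k columns with the highest overlap."""
    return np.argsort(W @ x)[-k:]          # indices of active columns

def top_down_compute(active_columns):
    """Inverse: union of connected synapses of the active columns --
    the input bits that best explain this column activity."""
    return (W[active_columns].sum(axis=0) > 0).astype(int)

x = (rng.random(n_inputs) < 0.1).astype(int)
x_hat = top_down_compute(compute(x))
print((x & x_hat).sum(), "of", x.sum(), "input bits recovered")
```

Note that the reconstruction is only approximate: pooling throws information away, which is exactly why decoding is hard.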

Current Status

A Classifier is used to assign a value [object] to an SDR (typically the one from the TM!).

Proposal

  1. Remove decode() from encoders, as it is not needed.
  • This is because the SP does not implement decode anyway, so the only thing we can effectively decode is an encoding obtained from the encoder in the first place.
  2. Implement top-down compute for the temporal components (TM, TP), as those are stateful (the output SDR differs depending on the position in the sequence).
  3. Use a Classifier to decode an SDR from the SP (or one already decoded down to the level of the SP).
  • 3.1 (optional) UniversalEncoder (note: this encoder is not semantic):
    [object] -> hash() -> [hash] -> RDSE -> [SDR] = SDR_hash
  • 3.2 Train an SP (=SP_assoc) with associative memory #156 on {SDR, SDR_hash}.
  • To decode (see the sketch after this list):
    • SP_assoc.compute(SDR2) -> [SDR2 with assumed SDR_hash' portion] -> UniversalEncoder.decode(SDR_hash')
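
Here is a sketch of the 3.1/3.2 round trip under toy assumptions: the UniversalEncoder below is hypothetical, a seeded random draw stands in for the RDSE, and the SP_assoc step is omitted for brevity:

```python
import random

# Toy stand-in for the proposed UniversalEncoder (3.1). Hashing an
# object seeds a deterministic random SDR (mimicking an RDSE); a side
# table maps the SDR back to the object, which is what makes decode()
# possible. As noted above, this encoding is NOT semantic: similar
# objects do not get similar SDRs.

class UniversalEncoder:
    def __init__(self, size=2048, active_bits=40):
        self.size, self.active_bits = size, active_bits
        self.lookup = {}                       # frozen SDR -> object

    def encode(self, obj):
        seed = hash(obj)                       # [object] -> [hash]
        # (Python's str hash is salted per process; a real
        # implementation would use a stable hash.)
        rng = random.Random(seed)              # [hash] -> deterministic SDR
        sdr = frozenset(rng.sample(range(self.size), self.active_bits))
        self.lookup[sdr] = obj
        return sdr                             # = SDR_hash

    def decode(self, sdr_hash):
        # Exact match here; a tolerant version would pick the stored SDR
        # with the highest overlap, to survive noise from SP_assoc.
        return self.lookup.get(frozenset(sdr_hash))

enc = UniversalEncoder()
s = enc.encode("dog")
print(enc.decode(s))  # -> "dog"
```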

Summary

I think the best approach is a compromise between the two existing ones:

  1. TemporalMemory needs decode() to convert from a contextual SDR -> static SDR (see the sketch after this list).
  2. A Classifier {SDR -> value} is a reasonable biological implementation of “decoding”.
  3. The concept of decoding does not (have to) exist in biological brains, but it is needed for interpreting results from HTM systems.
  4. An SP extended to work as associative memory could be a biological implementation of a Classifier.
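
For point 1, the contextual -> static conversion is just collapsing cells back onto their columns, since the TM represents context by which cell within a column fires. A minimal sketch, assuming the usual indexing cell = column * cellsPerColumn + offset:

```python
# Collapsing a contextual TM SDR (active cells) back to a static SDR
# (active columns). Assumes the common indexing convention
# cell = column * cells_per_column + offset.

def tm_decode(active_cells, cells_per_column=32):
    """Contextual SDR (cell indices) -> static SDR (column indices)."""
    return sorted({cell // cells_per_column for cell in active_cells})

# Two different contexts for the same input activate different cells
# in the same columns, so both decode to the same static SDR:
print(tm_decode({96, 161, 290}))   # -> [3, 5, 9]
print(tm_decode({100, 170, 300}))  # -> [3, 5, 9]
```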
2 Likes

I am not sure there is a biologically feasible way to do this. In biology, we don’t decode anything. Decoding is a way for us to try to peek into the internal reality of the intelligent system. It is a hack, and it is not the way we will eventually communicate with these systems.

I am not discouraging your efforts! It is really useful and necessary at this point in the evolution of HTM that we do this. I just don’t want you to go looking for biological proof of this, because I don’t think our brains do it.

3 Likes

Still - we humans go from seeing or hearing or feeling an object to being able to produce a sound that stands for that object.

What is that chain that links the sensory experience to the internal model to the related motor production?

I don’t pretend to know the answer but I spend considerable time in contemplation of this exact thing.

3 Likes

This is an excellent question, perhaps the one we are all trying to answer. I don’t think it involves decoding object representations back into the individual sensory spaces exactly. At least I don’t think we’re going to find an opposing process to “sensory encoding” that will allow us to do this.

1 Like

Just a heads-up that we eventually went that way, treating decoding as impossible:

@breznak

In the brain, there is a tight coupling between the decode and encode parts of speech.
In the “Dad’s song” project we hypothesize that the critter learns the sound first, then learns the muscle movements necessary to mimic the sound.

So how does that play out in a conversation?

Extending this to what is happening in the brain - you perceive something in the overall global workspace. That is the extended network of connected areas/maps.

The “highest level” version of this activation pattern extends to the temporal lobe to be experienced. This is shared with the hippocampus and through that - to the rest of the “older brain structures.” What they get is a highly digested version of the perception of the world - they are spoon-fed your experience.

The lower brain structures process this and then project a command output to the frontal lobe which is elaborated through the various levels of the forebrain. This elaboration is shaped by fiber loops connecting to brain areas that are also used to parse the world - guided by remembered facts about the sensed world from the physics of the world through to the facts and relationships of remembered objects and places.

@Gary_Gaulin This bit should give you some crazy ideas for your “bug creatures.”
You can think of the level of processing of these lower brain structures somewhat like what a moth does in flying up to the light in response to all the cues that signal mating time. In the moth, genetics have tuned it to fly up toward the moon to mate; genetics did not plan for porch lights. With our big brains these old senses and drives are vastly enhanced. These senses should be better at processing sensory cues and turning those drives into suitable action plans. I call this my dumb boss/smart advisor model.

At the lowest levels of the forebrain, the output fibers don’t project to the body; they project to the temporal lobes to be experienced as “thinking” and “recall.” This is really the same thing as the lower brain structures pointing the eyes through the FEF (Frontal Eye Fields) to look at things of interest, but this part is all internal to the brain. These recalled memories are then experienced by the temporal lobe, hippocampus, and related structures in a loop of experience we call consciousness.

Some of these forebrain activities may result in the selection and production of motor activity - words and actions. These are all stored motor programs that are being called into play, customized by the recalled memories and drives from the limbic system/forebrain. The networks in the various areas settle into states where there is the least “conflict” between the various activation patterns. Experienced AI researchers will recognize this as a relaxation computing process.

So - to address your question about decoding SDR contents - the right kind of internal activity could interrogate the contents of memory. It would have to be done at a system level to be biologically plausible. The system must learn the access method at the same time it learns the data.

2 Likes