Reality for machine intelligence: internal vs consensus

Are you talking about applying external labels to object representations that mean something to us humans out here? Are you talking about consensus reality?

The labels part.
How do you know “what” pattern is being recognized?

I don’t see a major problem with this. We’ll need to establish some simple communication protocol with the intelligent system. It has sensory systems; it could snapshot its sensory data while observing an object and use that snapshot as a label. We can understand “cat” from an image of a cat (any cat image the robot happens to have used).
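A minimal sketch of what that could look like, assuming the system can expose some stable, hashable handle for its internal object representation (here, a frozen set of active cell indices) and pair it with a raw sensor snapshot; the class and method names are hypothetical:

```python
# Hypothetical snapshot-as-label protocol: the system never emits a human word,
# it simply returns the raw sensory frame it captured while learning the object.
from typing import Dict, FrozenSet

class SnapshotLabeler:
    def __init__(self) -> None:
        # Maps an internal object representation (active cell indices)
        # to a sensor snapshot captured while that object was observed.
        self.snapshots: Dict[FrozenSet[int], bytes] = {}

    def observe(self, active_cells: FrozenSet[int], sensor_frame: bytes) -> None:
        # Keep the first raw frame seen for this internal representation.
        self.snapshots.setdefault(active_cells, sensor_frame)

    def label_for(self, active_cells: FrozenSet[int]) -> bytes:
        # "What is this?" is answered with the stored snapshot; a human looking
        # at the returned image supplies the word ("cat") on their own side.
        return self.snapshots[active_cells]
```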

There seem to be a lot of ways we could create communication channels without actually parsing the neuronal representations.

Is this currently being done with any implementation of HTM?

I am not saying that you are wrong - but this is an important problem that will have to be addressed to make the contributions of HTM more widely usable.


No, I’m just spit-balling.

I just gave this an official topic because I think it is a very interesting question, one I first started thinking about because of a book recommended to me by @jhawkins himself. In Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, renowned physicist Max Tegmark talks about 3 types of reality:

  • internal reality, or what you think you experience: how your sensory input is interpreted by your brain and represented as a model of…
  • external reality, or the absolute truth of the universe, which we can only glimpse through our senses, but which every other intelligence is also capable of observing
  • consensus reality, or how intelligent beings communicate with each other, where we all agree how long a foot is, what to call a pigeon, etc.

What we are really talking about is whether intelligent systems we create with machines will have an internal reality. These systems will, by the nature of their neurological structure, have an internal reality, one that can never be fully communicated in its entirety to other intelligent beings. This is the nature of intelligent learning.

The idea that an “artificially” intelligent system can have a representation of reality that we humans cannot comprehend concerns some people, so I would like to open this idea up for discussion.

All that being said, we could have a very comprehensive consensus reality between us that can communicate massive amounts of information. But the robot will never be able to tell us if what it perceives as “red” is the same as what we perceive as “red” (although it might be able to give us some form of probability, lol).


I think being able to communicate with an AI is another matter.
Animals that do not talk are also intelligent, and people who speak different languages are also intelligent. Same effect, different ways.

This touches on the heart of what I am identifying as an important need for HTM to be useful to the larger machine learning community.

In all three cases posed: as HTM is currently implemented, it can signal that it has seen this thing before but can’t say whether it was a lion, tiger, bear, or bagel.

In an anomaly detection scenario, it can say that something is up with the monitored signal, but it can’t really say what is new about the signal.
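For reference, HTM’s raw anomaly score boils down to the fraction of currently active columns that were not predicted at the previous timestep; a minimal sketch, assuming the active and predicted column sets can be read out:

```python
def raw_anomaly_score(active_columns: set, predicted_columns: set) -> float:
    """Fraction of active columns the model did not predict at the prior step:
    0.0 means fully expected input, 1.0 means fully novel input."""
    if not active_columns:
        return 0.0
    return len(active_columns - predicted_columns) / len(active_columns)
```

The output is a single scalar per timestep, which is exactly why it can flag that something changed without saying what changed.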

As far as qualia, I view this as a mostly manufactured issue. You perceive some signalling that reliably communicates some property of the external reality and you take that as your internal reality. I really don’t care if your red looks the same to you as my red does to me - I do care that we agree that it is red.

For an agent to exist as an independent entity it is necessary to label internal representations with some value functions for goals and plans. I can’t see how you could possibly do this without some form of labeling.

To exist in a social setting it is useful for this internal labeling to be shared externally to form group goals and plans. Even creatures that don’t “talk” usually have some method of signalling external threats, plans, and social information. This implies an internal reality and labeling. The fact that we humans can attach a specific sound or shape to that perception and signal a wide range of representations is a very useful bonus.

To the best of my knowledge, this labeling function is not accessible via HTM technology as it exists today. Adding it should address the common criticism that HTM does not do anything useful. This may not be a priority for Numenta, but as someone who is invested in the HTM concept I see it as important.


If we can create AI, we can of course communicate with it.
From a philosophical perspective, different models are modelling the same world.
Communication with AI can be solved technically; more important is letting the AI learn to identify objects and understand the meaning of “is”.


It can’t come up with the labels itself, is that what you mean? If so, I agree there is no language inherent in the theory.

Let’s say we train a system on a set of objects by letting it move sensors over them. We did not label them, we just identified to the system that they were different objects. Assume the system has learned these objects, but it has not labeled them (it has an internal representation, but has not created a consensus representation that humans can recognize).

If the output of the system is SDRs representing the object being identified, we could write a classifier and apply labels from the human side. It’s not perfect, but it establishes the beginnings of a communication protocol.
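A minimal sketch of that human-side classifier, assuming the object layer’s output can be read as a set of active bit indices; this is a simple overlap-based nearest-example scheme, not Numenta’s SDRClassifier:

```python
from collections import defaultdict
from typing import Dict, FrozenSet, List, Optional

class OverlapLabeler:
    """Attach human labels to SDR outputs by storing labeled examples
    and classifying new SDRs by maximum bit overlap."""

    def __init__(self) -> None:
        self.examples: Dict[str, List[FrozenSet[int]]] = defaultdict(list)

    def learn(self, active_bits: set, label: str) -> None:
        self.examples[label].append(frozenset(active_bits))

    def classify(self, active_bits: set) -> Optional[str]:
        # Pick the label whose stored example shares the most active bits
        # with the query SDR.
        best_label, best_overlap = None, -1
        for label, sdrs in self.examples.items():
            for sdr in sdrs:
                overlap = len(active_bits & sdr)
                if overlap > best_overlap:
                    best_label, best_overlap = label, overlap
        return best_label
```

Training it only takes a handful of labeled observations per object; the internal representations are never interpreted, only matched against one another.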

I was also trying to make this point.

I agree, but this is an internal process, the semantics of which might not make sense when observed. Every agent might label things differently. And it’s not even really labelling, it’s just organizing. It’s the consensus reality where we apply the labels. And we humans are going to be doing the labelling for this reality, because computers are awful at naming things.


I’m talking specifically about how objects are learned, represented, and recalled here. Labels are language. Language is another subject. My point is that we don’t need to have language in order to have complex object representation in the neocortex.


I think you missed a very important part of my prior post:
For an agent to exist as an independent entity it is necessary to label internal representations with some value functions for goals and plans. I can’t see how you could possibly do this without some form of labeling.

So - is language necessary for this?
I say no.

To exist in a social setting it is useful for this internal labeling to be shared externally to form group goals and plans. Even creatures that don’t “talk” usually have some method of signalling external threats, plans, and social information. This implies an internal reality and labeling.


The pattern itself is the label. The same abstraction process produces the same pattern encoding.


Still, it has some ability to pair representations with meaning.
That drives behavior.
Memory of places, objects of positive and negative value, mating subjects, etc.
These are all somehow coded and evaluated or the critter does not live to reproduce.
HTM currently does not have this ability - hence the interest in extending the theory.

HTM is a theory about object modeling in the way the cortex works.
It is a proven hypothesis, isn’t it?

Proven? No.
Goal? Yes.

And it does not cover large sections (all really) of the sub-cortical structures.

Still, it’s amazing.

Actually, you are wrong on this. For a given spatial pooler, any number (potentially a fantastically large number) of input patterns can trigger the same output pattern. There is no realistic way to map an output pattern back to the input pattern that produced it. This is what I have been trying to communicate to you.

A given output pattern can map to many input patterns; those input patterns don’t even have to be semantically related. All HTM will tell you is that it has seen this pattern or sequence before.
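A contrived toy illustration of that many-to-one behaviour, using a k-winners-take-all pooling step rather than the real spatial pooler: two inputs that share no active bits can still drive exactly the same output columns, so the output alone cannot tell you which input (or which label) produced it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_cols, k = 100, 32, 4

# Toy "proximal connections": pooler columns 0-3 listen to input bits 0-9
# and 50-59; the remaining columns get sparse random connections.
W = (rng.random((n_cols, n_in)) < 0.05).astype(float)
W[:k, :] = 0.0
W[:k, 0:10] = 1.0
W[:k, 50:60] = 1.0

def pool(x: np.ndarray) -> set:
    # k-winners-take-all over the columns' overlap scores.
    scores = W @ x
    return set(np.argsort(scores)[-k:].tolist())

a = np.zeros(n_in); a[0:10] = 1.0    # one input pattern ("lion", say)
b = np.zeros(n_in); b[50:60] = 1.0   # an unrelated pattern ("bagel"), no bits shared with a

print(pool(a) == pool(b))  # True: identical output columns for unrelated inputs
```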


Maybe, if we try to store everything into a small matrix.

Thanks to HTM we now know that this internal reality is a hierarchical combination of sparse bitfields. Recognising something in external reality (a cat, for instance) amounts to comparing those bitfields (within acceptable margins of error). So if two HTM systems (a human brain or a machine) can communicate in such a way that these bitfields can be compared across a medium (by speech, by electronic transmission, by showing pictures, …), it means that both systems contain the same internal reality.

If a human looks at a spot of red, with eyes that convert certain frequencies into a specific combination of bits in a way that can be repeated and confirmed, and an artificial system can look at the same spot through a different lens, creating a different combination of bits, but also confirm the red spot repeatedly, then we are very close to a consensus.

But in principle, it should be possible to give this machine the exact same type of sensors as my eyes contain, and a structure of logical gates very similar to the neurons and synapses in my brain, producing a hierarchical combination of sparse bitfields that is very close to (if not exactly the same as) the one in my brain.

So in principle it should be possible to plot my sparse representation of red on paper, plot the machine’s sparse representation on paper, and objectively determine that the two are very similar.
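A sketch of that objective comparison, assuming both representations can be exported as sets of active bit indices in a shared, agreed-upon bit space (reaching that shared space is the hard part; the arithmetic itself is trivial):

```python
def sdr_similarity(human_red: set, machine_red: set) -> float:
    """Jaccard similarity between two sparse representations:
    1.0 means identical active bits, 0.0 means no overlap at all."""
    if not human_red and not machine_red:
        return 1.0
    return len(human_red & machine_red) / len(human_red | machine_red)

# Example: two mostly-overlapping "red" representations from different systems.
print(sdr_similarity({3, 17, 42, 99, 256}, {3, 17, 42, 99, 300}))  # ~0.67
```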