Reality for machine intelligence: internal vs consensus

What you’re calling “labelling” here is no longer “labelling” in the strict machine learning sense. It’s much more ambiguous.

I’m glad we agree. :slight_smile: As long as language is not necessary for your version of “labelling” I think we still agree.

But the labelling is occurring outside the internal representation, within the consensus reality. Language is necessary for this type of labelling.


Not really; they both contain unique internal realities representing external reality. But they are not comparable to each other! Important distinction. Each agent has a different sensory experience of reality, and therefore a different internal representation.


Language may not be necessary if instead we can label objects with emotional context. I can see a lot of useful applications just adding this to TM. For example, applied to a server anomaly detection application, the system could initially be surprised by a spike in CPU usage, then remember that this was a “bad” sequence and later become “anxious” when it starts to recognize the pattern again.
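
A minimal sketch of that idea, purely illustrative and not actual TM/NuPIC code: it assumes an anomaly score per input is already available from some detector, and the class name, threshold, and labels below are invented. Surprising sequences get tagged “bad”, and the tag is replayed as “anxious” when the pattern recurs.

```python
# Illustrative only: attach an emotional/valence tag to surprising sequences so
# a recurring "bad" pattern is reported as "anxious". Assumes an anomaly score
# per input from some external detector (e.g. a TM-based one).
from collections import deque

class ValenceTagger:
    def __init__(self, surprise_threshold=0.8, window=2):
        self.surprise_threshold = surprise_threshold
        self.recent = deque(maxlen=window)   # sliding window of recent inputs
        self.bad_sequences = set()           # sequences previously tagged "bad"

    def step(self, symbol, anomaly_score):
        """Return an 'emotional' state for this input: calm, surprised, or anxious."""
        self.recent.append(symbol)
        sequence = tuple(self.recent)
        if sequence in self.bad_sequences:
            return "anxious"                 # recognized a previously "bad" pattern
        if anomaly_score >= self.surprise_threshold:
            self.bad_sequences.add(sequence) # remember this sequence as "bad"
            return "surprised"
        return "calm"

# Hypothetical usage on a stream of (CPU-usage bucket, anomaly score) pairs:
tagger = ValenceTagger(window=2)
for symbol, score in [("cpu_low", 0.1), ("cpu_spike", 0.95),
                      ("cpu_low", 0.1), ("cpu_spike", 0.2)]:
    print(symbol, "->", tagger.step(symbol, score))
# The CPU spike is "surprised" the first time and "anxious" when the pattern recurs.
```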


Avian mating dances?
Herd behavior in herbivores or canines?
Warning calls in multiple species?
Offering acceptable types of food to infants of your species?
Vast numbers of species imitative or demonstrative behaviors, adults to juveniles?

Do these all get to be languages?

Yes of course.

So how does that differ from tasting something and spitting it out because it is bitter?
I won’t eat it again, so I have to make some sort of internal label - bitter; and a value label - bad.

No words necessary.

When I signal that value non-verbally to some other member of my species, is there some transformation in your labeling system? Some line that is crossed over in this outward expression?

No, there is no transformation, only an association with another object.

Say caveman-you tasted something bitter and made a bitter face. I was watching you. I saw you make the bitter face, which activated the “bitter face” neurons in my brain, triggering an association with bitterness.

The symbol is another member of my species making a bitter face. I now link my internal representation of bitter to my internal representation of “what a bitter face looks like”, which can be transmitted in consensus reality by making the bitter face.


@Bitking now he’s calling you a caveman. I suppose that is a step up from dumb boss lizard. :wink:


Only on this forum does a discussion about labelling quickly turn into a discussion about the nature of perceived reality.


I suppose it’s a promotion.

As has been posted elsewhere on the forum, language is the basis for much of what we think of as higher level cognitive functions.

Going the other direction - at the lowest level, language is rooted in somatosensory experience. We have explored caveman-me and certain vegetables, but as an embodied critter I have a feeling brain that is conscious of my body and of my agency in that body. My perception of that body is a multi-sensory experience, and my agency in that body works through a mental model that is manipulated to effect external actions.

I would direct your attention to this paper, which describes the grounding of language in the somatosensory cortex, though that grounding could well be reduced to the basic conscious experience of the body.
https://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(13)00122-8.pdf


I agree again.


So as the senses register the world, this ends up in the place/grid system as some (likely spatial) dimensional representation in the EC/hippocampus system.

That location, or that part of a representational tuple, is at that point a label for that perception.

As was stated earlier by @OhMyHTM - the pattern (likely in L2/3 of the EC) is the label.


I thought that HTMs had a solution for labelling. The “label” is a set of neurons which respond to an object under all situations. I thought this happened in layers 2/3?
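
For what it’s worth, here is a toy sketch of that idea, not how the HTM object layer is actually implemented (the cell indices, function names, and threshold are invented): the cells that stay active across every sensation of an object form its internal “label”, and a later observation is identified by SDR overlap with the stored labels.

```python
# Toy illustration: an object's "label" is the set of cells that remain active
# across all sensations of it; recognition is done by overlap with known labels.

def object_label(sensation_sdrs):
    """Intersect the active-cell sets seen across all sensations of one object."""
    label = set(sensation_sdrs[0])
    for sdr in sensation_sdrs[1:]:
        label &= set(sdr)
    return frozenset(label)

def identify(observed_sdr, known_labels, min_overlap=3):
    """Return the best-matching known object by overlap, or None if too weak."""
    best_name, best_overlap = None, 0
    for name, label in known_labels.items():
        overlap = len(label & set(observed_sdr))
        if overlap > best_overlap:
            best_name, best_overlap = name, overlap
    return best_name if best_overlap >= min_overlap else None

# Hypothetical cell indices standing in for SDRs from three sensations of a cup:
cup_label = object_label([{1, 4, 7, 9, 12}, {1, 4, 7, 9, 20}, {1, 4, 7, 9, 31}])
print(identify({1, 4, 7, 9, 50}, {"cup": cup_label}))   # -> "cup"
```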


Have we been calling this a “label” in papers? I know it is sort of implied, but perhaps we are (once again) arguing about semantics, not ideas. I agree with you both that this representation could be used as a sort of internal label. But those are not the terms I use in my internal representation. :wink:


It is semantics, all the way down!

And you thought it was turtles!


Word-semantics aside, I think the important point is that there needs to be some way of communicating to external entities information about the concepts the system has learned. If the representations for these concepts can be grounded in something that is known about the system (emotional states, behaviors, etc.), then we can make at least some sense of what are otherwise random bits that model the concepts.

Maybe the server monitor can’t tell you in plain English exactly what it is experiencing, but at the very least it can communicate that it thinks “something is bad”.
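
A toy illustration of that grounding, with every name below invented for the example: the monitor’s internal state stays private, and only a small shared vocabulary ever crosses into consensus reality.

```python
# Illustrative only: translate private internal state into a tiny vocabulary
# that an outside agent can understand, without exposing the representation.
from dataclasses import dataclass

@dataclass
class InternalState:
    anomaly_score: float      # opaque to outsiders
    valence: str              # e.g. "calm", "surprised", "anxious"

SHARED_VOCABULARY = {
    "calm": "all is well",
    "surprised": "something new happened",
    "anxious": "something is bad",
}

def report(state: InternalState) -> str:
    """Map private state onto the small vocabulary both sides understand."""
    return SHARED_VOCABULARY.get(state.valence, "I cannot describe this")

print(report(InternalState(anomaly_score=0.97, valence="anxious")))
# -> "something is bad"
```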


Thanks for getting me back on topic.

In the terms I established earlier, this would be us broadening our consensus reality by basically opening up the brain of a running intelligent system and trying to decode exactly what it is experiencing. I don’t see how this is going to work with continuous learning systems. It could work if you train one agent, freeze learning, analyze its “brain”, then create many non-learning versions of the agent that you can completely understand.


I guess my point is you don’t have to start with something that complex. For example, I can understand my pets at a very useful level without having to open up their brain and run some intelligent decoding. A few basic signals (like maps to emotional states, needs, low-level behaviors) can go a very long way toward communication.


I agree. My point is that an intelligent system (which I define as a learning system) will never be able to share its internal reality with us. This is the nature of intelligence.


And here we are.

Please read the quoted snippet that started this thread and see that I was asking for the same thing.

We have added some constraints but it does not change the original intent.

HTM needs to be able to communicate more about what it is learning in a form that is useful to an outside agent.

As it exists - it knows everything it needs to detect novelty; the outside world has no idea what that novelty or familiarity might be.

This is really much the same problem that is currently haunting the DL community.
