Reality for machine intelligence: internal vs consensus


#41

The way to get that unique output pattern is to communicate with it. That’s why I brought up the idea of consensus reality. For a consensus reality to exist, you need more than one intelligent system.

When I said “language” in all my posts above, you could substitute “communication” if that makes more sense. The way I propose we extract meaning is by thinking about how to establish a communication protocol with the intelligent system. It could be as simple as hard-coding certain codes, emitted at intervals, that mean specific things we want to know.
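To make that concrete, here is a minimal sketch of the kind of thing I mean. Every name and code below is made up for illustration (not part of any existing system); the only point is that a small, fixed vocabulary of signals gets emitted at regular intervals, one-way.

```python
# Minimal sketch of a hard-coded, one-way signalling "protocol".
# The codes and the get_internal_state()/emit() hooks are hypothetical placeholders.
import time
from enum import Enum

class Signal(Enum):
    NOMINAL = 0          # nothing interesting to report
    SURPRISED = 1        # current input does not match my predictions
    NEEDS_ATTENTION = 2  # something in my model looks wrong

def get_internal_state() -> Signal:
    """Stand-in for whatever internal state the intelligent system exposes."""
    return Signal.NOMINAL

def emit(signal: Signal) -> None:
    """Stand-in transmitter; in practice this might be a log line or a metric."""
    print(f"{int(time.time())} {signal.name}")

def run(interval_seconds: float = 5.0, ticks: int = 3) -> None:
    # One-way: we emit on a schedule and never expect a reply.
    for _ in range(ticks):
        emit(get_internal_state())
        time.sleep(interval_seconds)

if __name__ == "__main__":
    run()
```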


#42

To clarify an important point from my perspective, this doesn’t need to be a bi-directional communication protocol. The observing intelligence just needs to have access to useful signals about the internal state of the other intelligence (emotions, needs, actions).


#43

@bitking Perhaps the output of certain naturally evolved loss functions? I’m not against hard-coding in parts of the lizard brain around HTM. It is inevitable.


#44

Humans and dogs cannot effectively communicate anything useful about their own internal states… why should an AI?

At best, you should expect that an AI has managed to do what humans do, which is to say, associate some lossy, compressed noises with some notion of observed model generalities and then use those noises to light up similarly generalized model states in other intelligences. When you say “cat”, there are millions of possible “cat” generalizations that might light up in my head, formed from previous experiences I’ve had with cats, things I’ve read about cats, and whatever else happens to be impinging on my consciousness at the time. It’s not like there is A neuron, or even a consistent set of neurons, that lights up every time I think of a cat… there’s a cluster of a whole bunch of ideas about cats (almost all of which are also true for dogs) that may or may not light up depending on circumstances.

I have to assume an AI system would be just as ambiguous in its internal representation of the world.

As for the labeling thing… when the AI has a need to communicate something about what its current representation of cat might be, it should engage in communicating the word “cat”, but its cat should be as ambiguous as yours, and it should change from one instance of saying the word to the next.


#45

I disagree on this point. If we could have anything close to this level of communication coming from, for example, a server monitoring tool, it would be extremely useful. Sure, it would be far more useful if it could communicate in plain English like a human, but I don’t think a much lower level of communication should simply be written off as not useful at all.


#46

I believe one of the reasons that what we consider to be “human thinking” feels so laborious is that we have to work pretty hard to keep concept fidelity from one thought to the next, so that when we think about putting a cat in a box, we maintain the same-ish representation for cat and for box for the duration of the thought process.


#47

I think that it would be far easier for an AI to tell you that there was something wrong with a hard drive than for it to tell you why it thinks there is something wrong with the hard drive.

You don’t know why you like this soda vs. that one, and I think that a sufficiently advanced AI would run into the same self-knowledge problem. It can have notions, but it should have a very hard time telling you why it has those notions.


#48

Yes, definitely. But even simply knowing that the AI believes something is going wrong with the hard drive is useful (especially if it can reach that intuition earlier than other traditional tools). HTM, of course, isn’t able to do that today, so it would be a significant improvement in capability.


#49

So what you want is:

Reality Model (RM) = hard drive failing with no alert
Desirability Model (DM) = hard drive failing notification sent
Action Model (AM) = fire alert that hard drive is failing

RM - learns to model the world (including the AI)
DM - trains to have desired modalities
AM - trains to take actions based on states of RM and DM resulting in new RM


#50

The DM for hard drive operating within spec is hard drive operating within spec, so nothing for the AM to do.

The DM for hard drive operating out of spec is hard drive operating out of spec plus alert.
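A toy sketch of how those three pieces could fit together for the hard-drive case. Every class and function name here is mine, invented purely for illustration; nothing is a real monitoring API.

```python
# Toy sketch of the RM / DM / AM split for the hard-drive example.
from dataclasses import dataclass

@dataclass
class WorldState:
    drive_failing: bool
    alert_sent: bool

class RealityModel:
    """RM: learns/tracks what the world (including the AI) looks like."""
    def observe(self, drive_failing: bool, alert_sent: bool) -> WorldState:
        return WorldState(drive_failing, alert_sent)

class DesirabilityModel:
    """DM: what the world *should* look like, given the current reality."""
    def desired(self, rm_state: WorldState) -> WorldState:
        if rm_state.drive_failing:
            # Out of spec: the desired state includes an alert having been sent.
            return WorldState(drive_failing=True, alert_sent=True)
        # Within spec: the desired state is just the current state.
        return rm_state

class ActionModel:
    """AM: acts to move the RM state toward the DM state, yielding a new RM state."""
    def act(self, rm_state: WorldState, dm_state: WorldState) -> WorldState:
        if dm_state.alert_sent and not rm_state.alert_sent:
            print("ALERT: hard drive appears to be failing")
            return WorldState(rm_state.drive_failing, alert_sent=True)
        return rm_state  # nothing for the AM to do

rm, dm, am = RealityModel(), DesirabilityModel(), ActionModel()
state = rm.observe(drive_failing=True, alert_sent=False)
state = am.act(state, dm.desired(state))  # fires the alert, producing a new RM state
```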


#51

I’m confused @Oren. You are talking about actions now? How does this relate to an intelligent system’s representation of reality (and communication of that representation with other intelligent systems)?


#52

Communication is an action. It is a thing that you do to achieve a desired state. (In this case, the desired state is that the model of me is informed of the thoughts that are in your head.)

A server monitoring system would need to have a desire or a plan to communicate in order to get it to produce any useful information.

Think of it like this… human babies start off crying about anything and everything until they learn through culling that crying is only useful in some model states. The same would apply to an AI that is tasked with keeping your server in a good state.


#53

To be able to communicate your observations of the world, you had to learn a world model, a language, and then how to stitch those two networks together in a way that resulted in useful outcomes.


#54

Got it. We’re on the same page.

IMO it is up to the implementor of the intelligent system whether hard-coded communication is an action that should be fed as an efferent signal back into the system. I can see both arguments.


#55

You’d run into a problem with plasticity. If you hard-code the comms, then you lose them if the world model shifts. A neural language mesh would evolve with the world model.


#56

Not if you think about it outside the learning loop. It could be a passive one-way transmission, the way @Paul_Lamb was describing it.


#57

But then you’re talking about a static reflexive system rather than a learning intelligent system.

I would imagine that system would not age very well in a real world situation.


#58

The static reflexive system is advised by the intelligent system, and thus it relays more intelligent information as the advisor’s model of the world improves. In the case of the hypothetical system monitor, it isn’t communicating just that the hard drive has gone bad, but an emotion or need which indicates that it believes the hard drive is going bad. As the advisor’s model of the world improves, these lower level reflexive outputs become more reliable indicators.
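Something like the sketch below is what I have in mind. The “worry” score is an assumed output of the learning advisor (e.g. some anomaly or confidence signal between 0 and 1, which I am inventing for the example), and the threshold is arbitrary; the reflex itself never changes.

```python
# Sketch of a static reflexive layer advised by a learning system.
# The "worry" score is an assumed output of the intelligent advisor; nothing
# here is a real API.

WORRY_THRESHOLD = 0.8  # arbitrary; the reflex is fixed and never re-learned

def reflexive_relay(worry: float) -> None:
    """Hard-coded reflex: relay a 'need' whenever the advisor's worry is high.

    The reflex does not know *why* the advisor is worried; it only relays that
    the advisor believes something is going wrong. As the advisor's world model
    improves, this same fixed reflex becomes a more reliable indicator.
    """
    if worry > WORRY_THRESHOLD:
        print(f"NEED: check the hard drive (advisor worry = {worry:.2f})")

# Example: the advisor's worry climbs as its model picks up early signs of failure.
for worry in (0.1, 0.4, 0.85, 0.95):
    reflexive_relay(worry)
```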


#59

I’m not sure I understand. Do you mean this is in principle never possible, or that it is different because the internal representations are in different locations?

For instance, if my (overly simplified) sparse representation of an object is
000100010001000

and by some extreme coincidence a machine’s sparse representation of that same object is
000100010001000

and all the gateways that lead to these sparse representations in both systems are the same,

would those two not be the same internal representation? Or would it still be different because each representation exists physically in a different location (one in my brain and one in the machine)?

Or do you mean this is practically extremely improbable, verging on the impossible?
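Just to pin down what I mean by “the same” here, the comparison I have in mind is purely bit-for-bit over my toy vectors from above; the snippet below only illustrates that comparison, nothing more:

```python
# Bit-for-bit comparison of the two toy sparse representations above.
human_rep   = "000100010001000"
machine_rep = "000100010001000"

print(human_rep == machine_rep)  # True: identical bits in identical positions

# Overlap of active bits (the usual way sparse codes are compared):
active_human   = {i for i, bit in enumerate(human_rep) if bit == "1"}
active_machine = {i for i, bit in enumerate(machine_rep) if bit == "1"}
print(active_human == active_machine)  # True: the overlap is total

# What a comparison like this cannot capture is anything outside the pattern
# itself: the physical substrate, or the "gateways" that produced it.
```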


#60

Old argument.

In essence: Internally - does red look the same way to you as it does to me?
How would you know if, to me, internally, it looked like what you might think of as orange or purple?
The thought is that our internal reality is forever private. We can share descriptions of our external perceptions and agree that a certain symbol corresponds to a certain observed perception, but I will never share your internal world - it is unknowable.