Reality for machine intelligence: internal vs consensus

I don’t agree (although I may be wrong of course). I think this is a philosophical problem based on limited technical knowledge. HTM solved this as far as I’m concerned.

(Edit: I don’t mean that you have limited technical knowledge; the philosophers do.)

Red looks a certain way because my hardware encoded it into a specific binary pattern. This pattern can contain large inconsistencies and still represent red. The proof is that another red spot will trigger the same pattern in my brain. And two systems that learn what red is can mutually confirm that.

Now, a third system can analyse the same physical (external) reality with comparable hardware and encode the same (or a very similar) pattern, at least in principle. That makes it objective.

There is considerable philosophical argument both ways on this question.

I consider it a meaningless mental exercise.
If we can both see it and agree that it is red - done.

5 Likes

A gentle introduction to going down this rabbit hole. Sure, our hardware delivers some impulses to certain parts of the brain, and these are processed by similar regions, so we do both get some repeatable sensation when we are exposed to a certain wavelength of light.

What does that look like to me - my qualia?
Are you certain that my experience is in fact the same as your experience? You know that some people experience synesthesia. How do you know for sure that I don’t see colors as sound? Or shapes as color fringes? Stepping back a bit from this - do I see that red as the same shade that you think you see? There is no possible way for you to experience my senses the way I do. We have come to associate a certain wavelength with the sound “red”, but you will never know what I experience when I see that wavelength - the qualia of experience.

Welcome to philosophy 101.

1 Like

But isn’t this the crux of this thread? Why argue this question for 60-odd posts and then say, “Meh, I don’t really care”?

Qualia and experience are in the realm of consciousness. I thought we were discussing intelligence.

That’s the beauty of the HTM theory. We can limit our investigations to what we understand: the easy problem.

Don’t get me wrong: I love discussing consciousness. I love philosophising about free will and its ramifications. But here we’re trying to understand intelligence. And this question was about transmitting information in a way that two systems could compare this information effectively. (One of them being a human brain, the other a machine).

Hawkins and his team of amazing researchers cracked this nut. Or at least they came up with a very plausible and testable theory that reduces complex and even abstract information about (external) reality to digital data, and they even described the hardware that arrives at this digital data.

Now, saying that comparing this digital data is impossible in principle is not giving Hawkins et al. due credit.

Sure. But that’s a hardware problem.

If you did, and if it were impossible for other people to know for sure, then we would never have found out that there is such a condition as synesthesia in the first place. The fact that you can describe what you experience, and that we can test you, shows that your internal representation is in principle knowable.

Consider this thought experiment:

We build a machine in software using HTM theory and train it to recognise hues of red. We record hues of red digitally, store them on a digital medium (a hard drive) and train the system with the recorded data.

We build this system twice. We give both of them exactly the same hardware (same processor, same memory, same buses, etc.) and we feed them exactly the same data. In the same order. Over the same duration.

Is it now possible to feed both systems the same digitally recorded data of a particular hue of red, freeze both systems (make a data dump) and compare whether the resulting sparse representations in each system are identical?
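To make the comparison step concrete, here is a minimal sketch of what I have in mind, assuming we can dump the active bit indices of each system’s “red” representation (the indices below are invented for illustration, not real HTM output):

```python
# Hypothetical comparison of the frozen "red" representations of the two systems.
# The active-bit indices are made up; real ones would come from the data dumps.

def overlap_fraction(sdr_a, sdr_b):
    """Fraction of active bits that the two sparse representations share."""
    return len(sdr_a & sdr_b) / max(len(sdr_a), len(sdr_b))

system_1_red = {12, 87, 233, 519, 1040}   # active bits dumped from system 1
system_2_red = {12, 87, 233, 519, 1040}   # active bits dumped from system 2

print(overlap_fraction(system_1_red, system_2_red))  # 1.0 would mean identical representations
```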

Thanks, but I signed up for Intelligence 101. ;-).

1 Like

Quick comment on this point: Even feeding two equally dimensioned HTM systems exactly the same inputs will not result in identical representations being formed. Recall that many parts of the HTM algorithms utilize random tie-breakers. You could try to centralize the random number generator and use a common seed, but as you move to more distributed architectures, that becomes increasingly impractical.
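As a toy illustration of the seeding point (this is not actual HTM code, just an invented winner-take-all step with random tie-breaking):

```python
import numpy as np

def pick_winners(scores, k, rng):
    """Toy winner-take-all: add tiny random noise so ties among equal scores are broken randomly."""
    tie_break = rng.random(len(scores)) * 1e-6
    return set(np.argsort(scores + tie_break)[-k:])

scores = np.array([3.0, 3.0, 3.0, 1.0, 3.0, 2.0])   # several columns tied at the top

a = pick_winners(scores, k=2, rng=np.random.default_rng(42))
b = pick_winners(scores, k=2, rng=np.random.default_rng(42))
c = pick_winners(scores, k=2, rng=np.random.default_rng(7))

print(a == b)  # True: a shared seed breaks the ties identically in both instances
print(a == c)  # possibly False: a different seed can break the ties differently
```

With one centralized, seeded generator this stays reproducible; once the tie-breaking is spread across many independently seeded components, keeping them in lockstep is the impractical part.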

2 Likes

Ok, but to make my point, could we make those tie-breakers deterministic? Use Fibonacci for instance?

Also, unrelated, could you explain (for my benefit) why there is a need for randomness? Or point me to a place where I can read up about it?

2 Likes

Sure, probably the best way to explain this is to imagine an SDR for some highly abstract concept like “Grandmother”. This representation will contain bits that overlap with many other concepts which share semantics, such as “elders”, “Christmas cookies”, “nursing home”, etc. The specific mixture of semantics will depend on your specific experiences which built the “grandmother” concept for you.

Now suppose I want to project this concept as an input to some other region (maybe transmitting up a hierarchy for example, providing a biasing feedback signal, voting with other regions, etc.). I have a certain number of cells that are connecting with the “Grandmother” representation. Let’s also assume for the sake of argument that the “Grandmother” representation consists of 1,000 bits, and each new cell can connect with up to 100 of them.

Now suppose there is no randomness involved, and I simply connect all of the cells with the same 100 bits of the “Grandmother” representation. It is likely in this case that the new representation will not connect with important semantics of the “Grandmother” representation (maybe we’ve only connected with the semantics for “Christmas cookies” but have lost the “elders” semantics).

If instead each cell connects to a random 100 bits of the “Grandmother” representation, we can be confident that in all likelihood there is an even coverage of the semantics involved.
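To put rough numbers on that intuition (a toy sketch, not actual HTM code; the ten “semantic groups” of 100 bits each are an invented stand-in for the mixture of semantics):

```python
import random

GROUPS = 10                  # pretend the 1,000-bit "Grandmother" SDR carries 10 semantic groups
BITS_PER_GROUP = 100         # e.g. bits 0-99 = "elders", 100-199 = "Christmas cookies", ...

def groups_covered(bits):
    """How many of the semantic groups the chosen bits touch."""
    return len({b // BITS_PER_GROUP for b in bits})

all_bits = list(range(GROUPS * BITS_PER_GROUP))

fixed_choice = all_bits[:100]                    # every cell wired to the same first 100 bits
random_choice = random.sample(all_bits, 100)     # each cell wired to a random 100 bits

print(groups_covered(fixed_choice))    # 1: most of the semantics are lost
print(groups_covered(random_choice))   # almost always 10: even coverage of the semantics
```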

4 Likes

Is it really possible to discuss reality and perception and experience and intelligence without consciousness?

We are discussing perception, and something must perceive. I think that this is currently the crux of why most machine intelligence efforts fall short of being considered intelligence - there is nothing inside, no ego of any kind. I don’t subscribe to the idea of a soul, but I do think that there must be a personal frame of experiential reference. I really can’t conceive of an intelligence without this.

The “meh” aspect is that for any agent that has this personal frame of reference, it will be personal and not accessible. We can create artificial agents where we have considerable control over the relationship between the defined ego and the senses, and even know that a second one will be identical to the first, to the point where we can say that they have identical experiences, but we humans will never know what they experience.

In the example that I gave, you allude to self-reports so different that, when we compare them, they deviate to the point where we know that something must be different.

Point taken.

How do I self-report what my perception of a color looks like to me, and what is your frame of reference to interpret that? I can use color patches to develop a continuum and similarity; I can certainly say that this color is almost the same as that one. But I can’t say what that perception of redness feels like in any way that allows you to know absolutely that you are feeling the same thing. People have tried and failed.

Perhaps you are familiar with color calibration. It’s a big thing in painting, and people spend a lot of effort trying to make sure that what the customer sees and approves in the digital presentation is the same thing that will roll off the production line. In this case we can control the wavelengths emitted and the sensing on the calibration equipment. I have pages of color swatches and we use them in presentations. Even with all that - it is very hard to account for the perception differences in transmitted vs reflected light. The spectrum of the illumination is a huge factor. I work in quality control, and getting two painted items to look the same in all lighting conditions is fiendishly difficult. If two items on the same machine look very different, it can make the final product look cheap or defective. I do know that there are people who can see differences that I can’t. As the QC manager in my company, I find these people to be very annoying.

It is only relatively recently that human self-reporting and testing have closed in on monochromacy, dichromacy, tetrachromacy, and pentachromacy. This is just the perception mechanism. We still really don’t have the tools to understand how these senses are registered in the brain.

If we are defining the precise vocabulary needed to create machine intelligence, then precision in definition is absolutely correct and desirable.

If we are trying to apply those definitions to the human condition, the current state of the art is not up to the task. There may come a time when we can capture the exact patterns that define our perception and transmit them to a brain where the individual whorls and wiring are different, and know that we will still capture the redness of red, but until that day my focus will stay closer to the utility of the consensus value as the most important parameter in humans.

3 Likes

Ok, I understand. This guarantees an even distribution. But then any sufficiently broad distribution function would do the job, would it not?

Yes. It is certainly possible to implement the HTM algorithms in a way that ensures exactly the same inputs will result in exactly the same internal representations for the concepts learned. I was mainly just pointing out that that isn’t how it works today.

There could be some benefits to doing it this way. For example, an HTM-based system that was trained in a specific way and then learning turned off could be a component of the “static reflexive system” we were talking about earlier.

3 Likes


@Falco @Paul_Lamb Using the same random seeds won’t help (except in certain tightly controlled situations). Imagine two identical intelligent agents, same random seeds. They both move their sensors in an environment and build models based upon sensory input.

Place these two identical agents in any environment, and unless they start in the exact same location with the exact same perception, they will immediately have different internal realities, even if they are observing the same features and using the same random seeds.

1 Like

Yes, there is of course randomness that will be introduced into the system from the real world. I should have said “if both systems receive exactly the same inputs” (which is probably not very practical in many scenarios).

I should point out that I don’t entirely understand what this is trying to gain WRT the original topic. One could just as easily train up one single HTM system with the controlled inputs, study and learn some information about its internal representations, and then copy it into any number of instances.

2 Likes

I agree of course. But I was trying for a thought experiment without difference. I guess I’m going for this tightly controlled situation.

I’m simply trying to understand what some of you guys meant by ‘unknowable’. If I have a sparse representation of some object in my brain, I think that in principle it must be comparable with someone else’s sparse representation of the same object.

To eliminate the chaotic influence of the complex (and partly random) gateways that lead to these sparse representations, I proposed two exactly identical systems that each end up with a sparse representation of the same digitally recorded, and therefore exactly equal, experiences.

My question is whether you are of the opinion that these representations are still not the same, and why.

It’s because I don’t understand what you mean that I came up with this thought experiment. Depending on your answer, there is a second tier to it.

IMO, if two different representations (in an HTM-like system) share the same proportions of semantics, then they are the same (I personally don’t think it matters if the specific active bit indices are the same). In reality, there will almost certainly be some randomness that results in these proportions of semantics being different between any two systems. In those cases I would say even if they are similar, they are not the same.
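A quick sketch of what I mean by “same proportions of semantics” (the bit indices and semantic labels below are invented; in a real system the semantics would have to be inferred some other way):

```python
from collections import Counter

# Two made-up "grandmother" representations: different active bit indices,
# but an identical mixture of semantics attached to those bits.
rep_a = {5: "elders", 91: "elders", 310: "cookies", 512: "nursing home"}
rep_b = {17: "elders", 44: "elders", 208: "cookies", 733: "nursing home"}

def semantic_mix(rep):
    """Proportion of active bits carrying each semantic label."""
    counts = Counter(rep.values())
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

print(semantic_mix(rep_a) == semantic_mix(rep_b))  # True: same proportions, so "the same" in my book
print(set(rep_a) == set(rep_b))                    # False: the specific indices differ, which I don't think matters
```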

The real question is how to communicate internal representations to the outside world. One way would be to freeze learning at some point and perform some analysis to learn information about the system’s internal representations. Another way (which I think is probably simpler, but both are hypothetical at this point) would be to establish low-level emotions, needs, and actions that are integrated with the AI and that could be used to perform some basic level of communication to the outside world about its internal state.

3 Likes

Yes, but those instances cannot continue to learn, or else you’ll slowly lose the ability to look into their internal representations.

1 Like

Exactly. The entire labor-intensive process would have to be frequently repeated as long as you needed the system to continue learning, which would greatly limit possible use-cases. Adding a low-level method of communication seems much more feasible.

2 Likes

I like this split, but it seems to be missing something important: my own brain. My neural architecture doesn’t exist outside my sensory system, nor does it exist inside my internal model.

Maybe the question is: Internal or External to what boundary? And in my own thoughts so far, when I can satisfy one part of that definition, it ruins the other parts.

Or, we could just add a fourth kind of reality? :man_shrugging:

1 Like

Hi Brev! Maybe I don’t understand your point, but my take is that internal reality is a result of your neural architecture. It causes your representation of reality. I don’t think you can separate your neural architecture from your internal reality.

1 Like

In psychology, I think there is a term called confabulation. Loosely, it means “telling stories around something”.

An example is Anton-Babinski syndrome.

There’s also the even stranger inverse of Anton’s syndrome (I forgot the name) where a patient can physically avoid objects, even drive a car through traffic in one case, but is not aware of what he/she sees.

And then of course there’s the phantom limb syndrome.

And then there is what people call schizophrenia, which is often actually paracusia (auditory hallucination).

Some psychology tests have demonstrated confabulation in everyday events. Our brain seems to have an urge to rationalise what we perceive or how we behave, and it often draws incorrect conclusions.

Since I found out about this, I sometimes second-guess my own thoughts. So, you guys better check the links I sent. I could be making this all up. :-).

2 Likes