Reality for machine intelligence: internal vs consensus

Ok, but to make my point, could we make those tie-breakers deterministic? Use Fibonacci for instance?

Also, unrelated, could you explain (for my benefit) why there is a need for randomness? Or point me to a place where I can read up about it?

2 Likes

Sure, probably the best way to explain this is to imagine an SDR for some highly abstract concept like “Grandmother”. This representation will contain bits that overlap with many other concepts which share semantics, such as “elders”, “Christmas cookies”, “nursing home”, etc. The specific mixture of semantics will depend on your specific experiences which built the “grandmother” concept for you.

Now suppose I want to project this concept as an input to some other region (maybe transmitting up a hierarchy, providing a biasing feedback signal, voting with other regions, etc.). I have a certain number of cells that are connecting with the “Grandmother” representation. Let’s also assume, for the sake of argument, that the “Grandmother” representation consists of 1,000 bits, and each new cell can connect with up to 100 of them.

Now suppose there is no randomness involved, and I simply connect all of the cells with the same 100 bits of the “Grandmother” representation. It is likely in this case that the new representation formed will not be connecting with important semantics of the “Grandmother” representation (maybe we’ve only connected with the semantics for “Christmas cookies” but have lost the “elders” semantics).

If instead each cell connects to a random 100 bits of the “Grandmother” representation, we can be confident that, in all likelihood, there is even coverage of the semantics involved.
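To make the coverage argument concrete, here is a minimal sketch (toy numbers and made-up semantic groupings, not actual HTM code) comparing a fixed subsample with random subsamples of the 1,000-bit representation:

```python
# Illustrative sketch: why random subsampling of an SDR preserves semantic
# coverage better than a fixed subsample. The semantic groups are invented.
import random

random.seed(42)

SDR_SIZE = 1000          # bits in the "Grandmother" representation
SYNAPSES_PER_CELL = 100  # bits each downstream cell can connect to
NUM_CELLS = 40

# Hypothetical semantic groups, each owning a slice of the SDR bits.
semantics = {
    "elders":            set(range(0, 300)),
    "Christmas cookies": set(range(300, 600)),
    "nursing home":      set(range(600, 1000)),
}

def coverage(connected_bits):
    """Fraction of each semantic group touched by the connected bits."""
    return {name: len(bits & connected_bits) / len(bits)
            for name, bits in semantics.items()}

# Case 1: no randomness -- every cell connects to the same first 100 bits.
fixed_bits = set(range(SYNAPSES_PER_CELL))
print("fixed  :", coverage(fixed_bits))
# -> only part of "elders" is covered; the other semantics are lost entirely.

# Case 2: each cell samples its own random 100 bits.
random_bits = set()
for _ in range(NUM_CELLS):
    random_bits |= set(random.sample(range(SDR_SIZE), SYNAPSES_PER_CELL))
print("random :", coverage(random_bits))
# -> all three semantic groups end up well covered.
```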

4 Likes

Is it really possible to discuss reality and perception and experience and intelligence without consciousness?

We are discussing perception and something must perceive. I think that this is currently the crux of why most machine intelligence efforts fall short of being considered intelligence - there is nothing inside - no ego of any kind. I don’t subscribe to the soul but I do think that there must be a personal frame of experiential reference. I really can’t conceive of an intelligence without this.

The “meh” aspect is that for any agent that has this personal frame of reference it will be personal and not accessible. We can create artificial agents where we have considerable control of the relationship between the defined ego and the senses and even know that a second one will be identical to the first to the point where we can say that they have identical experiences but we human will never know what they experience.

In the example that I have given, you allude to self-reporting: if our attempts to self-report deviate far enough when we compare them, we know that something must be different.

Point taken.

How do I self-report what my perception of a color looks like to me, and what is your frame of reference to interpret that? I can use color patches to develop a continuum and a sense of similarity; I can certainly say that this color is almost the same as that one. But I can’t say what that perception of redness feels like in any way that allows you to know absolutely that you are feeling the same thing. People have tried and failed.

Perhaps you are familiar with color calibration. It’s a big thing in painting, and people spend a lot of effort trying to make sure that what the customer sees and approves in the digital presentation is the same thing that will roll off the production line. In this case we can control the wavelengths emitted and the sensing on the calibration equipment. I have pages of color swatches and we use them in presentations. Even with all that, it is very hard to account for the perception differences between transmitted and reflected light; the spectrum of the illumination is a huge factor. I work in quality control, and getting two painted items to look the same in all lighting conditions is fiendishly difficult. If two items on the same machine look very different, it can make the final product look cheap or defective. I do know that there are people who can see differences that I can’t. As the QC manager in my company, I find these people very annoying.
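(As an aside on how “do these two items look the same” gets quantified in color QC: a common first-pass number is the CIE76 Delta E distance between two CIELAB measurements. The values below are made up, and a simple distance like this is exactly what fails to capture illumination and transmitted-vs-reflected effects.)

```python
# Minimal sketch of the CIE76 color-difference (Delta E) metric.
# The swatch and sample Lab values are hypothetical.
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colors (CIE76 Delta E)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

swatch = (52.0, 42.5, 20.1)   # approved swatch (L*, a*, b*), made up
sample = (53.1, 41.8, 21.0)   # item off the production line, made up

dE = delta_e_cie76(swatch, sample)
print(f"Delta E = {dE:.2f}")  # a Delta E around 1 is roughly the threshold
                              # most observers notice under controlled lighting
```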

It is only relatively recently that human self-reporting and testing have closed in on monochromacy, dichromacy, tetrachromacy, and pentachromacy. And this is just in the perception mechanism; we still really don’t have the tools to understand how these senses are registered in the brain.

If we are defining the precise vocabulary to create machine intelligence then precision in definition is absolutely correct and desirable.

If we are trying to apply those definitions to the human condition, the current state of the art is not up to the task. There may come a time when we can capture the exact patterns that define our perception and transmit them to a brain where the individual whorls and wiring are different, and know that we will still capture the red of redness. But until that day, my focus will stay closer to the utility of the consensus value as the most important parameter in humans.

3 Likes

Ok, I understand. This guarantees an even distribution. But then any sufficiently broad distribution function would do the job, would it not?

Yes. It is certainly possible to implement the HTM algorithms in a way that ensures exactly the same inputs will result in exactly the same internal representations for the concepts learned. I was mainly just pointing out that it isn’t how it works today.

There could be some benefits to doing it this way. For example, an HTM-based system that was trained in a specific way and then had learning turned off could be a component of the “static reflexive system” we were talking about earlier.
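To make that concrete, here is a toy sketch (not the actual HTM codebase; the function, names, and numbers are all invented) of the idea that if every source of randomness is drawn from a single seeded RNG, two instances fed exactly the same inputs end up with exactly the same internal representations:

```python
# Toy illustration of deterministic learning: one seeded RNG for all
# randomness, so identical inputs yield identical representations.
import random

def build_representation(inputs, seed, sdr_size=1000, bits_per_cell=100):
    rng = random.Random(seed)  # all randomness flows through this seeded RNG
    # Each "cell" picks its potential connections up front, deterministically.
    potential = [rng.sample(range(sdr_size), bits_per_cell) for _ in range(20)]
    # Toy stand-in for learning: a cell keeps the potential bits it actually
    # saw active in the input stream (roughly analogous to permanence updates).
    seen = set(bit for pattern in inputs for bit in pattern)
    return [sorted(set(p) & seen) for p in potential]

# A fixed, "digitally recorded" input stream fed to both instances.
stream_rng = random.Random(1)
inputs = [set(stream_rng.sample(range(1000), 20)) for _ in range(5)]

rep_a = build_representation(inputs, seed=1234)
rep_b = build_representation(inputs, seed=1234)
print(rep_a == rep_b)  # True: same seed + same inputs -> identical representations
```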

3 Likes

@Falco @Paul_Lamb Using the same random seeds won’t help (except in certain tightly controlled situations). Imagine two identical intelligent agents, same random seeds. They both move their sensors in an environment and build models based upon sensory input.

Place these two identical agents in any environment, and unless they start in the exact same location with the exact same perception, they will immediately have different internal realities, even if they are observing the same features and using the same random seeds.
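Here is a minimal toy illustration of that point (not an HTM implementation; the one-dimensional “world” and the learning rule are invented for the example): two agents share the same random seed, but a one-step difference in starting position already produces different internal models.

```python
# Two agents with identical seeds diverge as soon as their sensor
# trajectories differ, even in the same environment.
import random

def explore(start, steps, seed, world_size=100):
    rng = random.Random(seed)            # identical seed for both agents
    position, model = start, set()
    for _ in range(steps):
        model.add(position)              # "learn" the feature at this location
        position = (position + rng.choice([-1, 1])) % world_size  # move sensor
    return model

agent_a = explore(start=10, steps=50, seed=7)
agent_b = explore(start=11, steps=50, seed=7)  # same seed, offset start
print(agent_a == agent_b)                      # False: different internal models
print(len(agent_a & agent_b), "locations shared out of", len(agent_a | agent_b))
```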

1 Like

Yes, there is of course randomness that will be introduced into the system from the real world. I should have said if both systems receive exactly the same inputs (which is probably not very practical in many scenarios).

I should point out that I don’t entirely understand what this is trying to gain WRT the original topic. One could just as easily train up one single HTM system with the controlled inputs, study and learn some information about its internal representations, and then copy it into any number of instances.

2 Likes

I agree of course. But I was trying for a thought experiment without difference. I guess I’m going for this tightly controlled situation.

I’m simply trying to understand what some of you guys meant by ‘unknowable’. If I have a sparse representation of some object in my brain, I think that in principle it must be comparable with someone else’s sparse representation of the same object.

To eliminate the chaotic influence of the complex (and partly random) gateways that lead to these sparse representations, I proposed two exactly identical systems that each end up with a sparse representation of the same digitally recorded, and therefore exactly equal, experiences.

My question is whether you are of the opinion that these representations are still not the same, and why.

It’s because I don’t understand what you mean that I came up with this thought experiment. Depending on your answer, there is a second tier to this thought experiment.

IMO, if two different representations (in an HTM-like system) share the same proportions of semantics, then they are the same (I personally don’t think it matters if the specific active bit indices are the same). In reality, there will almost certainly be some randomness that results in these proportions of semantics being different between any two systems. In those cases I would say even if they are similar, they are not the same.
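As a rough sketch of what comparing “proportions of semantics” might look like in practice (the bit-labeling scheme here is an assumption for illustration, not an existing API):

```python
# Compare two representations by the mixture of semantics their active bits
# carry, ignoring the specific bit indices.
from collections import Counter

def semantic_proportions(active_bits, bit_labels):
    """Map each active bit to its semantic label and normalize to proportions."""
    counts = Counter(bit_labels[b] for b in active_bits if b in bit_labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Two systems with different bit indices but similarly labeled bits.
labels_a = {0: "elders", 1: "elders", 2: "cookies", 3: "cookies"}
labels_b = {10: "elders", 11: "elders", 12: "cookies", 13: "cookies"}

rep_a = {0, 1, 2}     # 2/3 elders, 1/3 cookies
rep_b = {10, 11, 12}  # different bits, same mixture of semantics

print(semantic_proportions(rep_a, labels_a) == semantic_proportions(rep_b, labels_b))
# True: by this measure the two representations count as "the same".
```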

The real question is how to communicate internal representations to the outside world. One way would be to freeze learning at some point and perform some analysis to learn information about its internal representations. Another way (which I think is probably simpler, but both are hypothetical at this point) would be to establish low-level emotions, needs, actions that are integrated with the AI, and which could be used to perform some basic level of communication to the outside world about its internal state.

3 Likes

Yes, but those instances cannot continue to learn or else you’ll slowly lose the ability to look into its internal representation.

1 Like

Exactly. The entire labor-intensive process would have to be frequently repeated as long as you needed the system to continue learning, which would greatly limit possible use-cases. Adding a low-level method of communication seems much more feasible.

2 Likes

I like this split, but it seems to be missing something important: my own brain. My neural architecture doesn’t exist outside my sensory system, nor does it exist inside my internal model.

Maybe the question is: Internal or External to what boundary? And in my own thoughts so far, when I can satisfy one part of that definition, it ruins the other parts.

Or, we could just add a fourth kind of reality? :man_shrugging:

1 Like

Hi Brev! Maybe I don’t understand your point, but my take is that internal reality is a result of your neural architecture. It causes your representation of reality. I don’t think you can separate your neural architecture from your internal reality.

1 Like

In psychology I think there is a term called confabulation. Loosely, it means “tell stories around something”.

An example is Anton-Babinsky syndrome.

There’s also the even stranger inverse of Anton’s syndrome (I forgot the name) where a patient can physically avoid objects, even drive a car through traffic in one case, but is not aware of what he/she sees.

And then of course there’s the phantom limb syndrome.

And there’s what people call schizophrenia, which is often paracusia.

Some psychology tests have demonstrated confabulation in everyday events. Our brain seems to have an urge to rationalise what we perceive or how we behave, and it often draws incorrect conclusions.

Since I found out about this, I sometimes second-guess my own thoughts. So, you guys better check the links I sent. I could be making this all up. :-).

2 Likes

“Reality” is an ongoing construct. It is well known that a good deal of your perceived reality is actually re-perception of something that you saw before. As long as it is there every time you need to see it, it exists.

There is a fascinating class of change-blindness experiments where they switch up what is in your visual field (outside the foveal area) and, oddly enough, most people don’t notice the change. This has been done with both computer imagery and actual physical settings.

Our mental functioning has evolved around the concept that reality is relatively stable so things that don’t change are not memorized internally.

4 Likes

I found this “Amazing Color Changing Card Trick” video very cool.

3 Likes

Hi Matt, thanks for the reply! Sorry to be slow here.

I’m thinking along the lines of a split between the physical hardware and the logical “software”. By analogy, think of playing a modern 3D or VR video game. The immersive reality experienced by a player of the game is quite different from the reality of the hardware technician soldering a GPU onto the video card. All humans are experiencing their inner reality right now, but almost none (except us) are worried about the neuron firings behind it.

My point is quite pedantic, because obviously, you can study other brains out in external reality, instead of your own. I’m just noticing a possible semantic blind spot, relative to the observer only. It seems worth pointing out since only observers can experience reality.

I think ultimately we’re onto one of Gödel’s strange linguistic loops. It’s hard to define that which creates definitions.

Pedantry aside, I think moving popular culture towards the Internal/External/Social understanding of Reality will be a huge step forward. I wonder if a lot of our ills come from the current popular singular definition.

thanks!

What you think of as hardware is just software you haven’t learned to modify yet. It’s the substrate you think of as non-optional vs the stuff you can turn on and off through wiggling some wires.

Calling one internal and the other external misses the point that it’s software all the way down.

Whatever an observer or observation IS, there are some universes or realities that lack an observer… we call those basic math. Then there are some that have an observer… we call those universes. The fact that you’re having a difficult time identifying how the observer is structured means that your simulation level is hiding or obfuscating that part of the operating system.

The really cool thing is that since emulations of emulators are basically equivalent, it doesn’t matter how many layers of emulation down you are… you exist on an emulation path that has “observer” somewhere upstream and is using that property as a base operational function of interacting with this particular simulation. That main simulation itself is then cooked, reprocessed and spoon fed to you as an observer by another layer of simulator known as a human brain.

Speaking about “reality” misses the point that you’re an observer sandwiched between two layers of emulation/simulation and that in all likelihood your “observationness” is external to both layers.