Brain Building - Q1. Define Intelligence

I have had exchanges with many AI newbies on many occasions and on various levels of depth. Most seem to start out with some sort of “folk wisdom” about what intelligence is and some ideas about how it might work from introspection.

Almost every one of them starts out with some idea about which parts of the laundry list will make up a usable AI. I have yet to see one who has really sat down and thought through what they really want or what they will get with the proposal they put forward. Most reject emotion, and many downplay the critical role of the command-and-control structures built into the older parts of the brain.

I went through much the same mental evolution, so it is a bit easier to see it when others are walking the same path. It goes something like this:

I want a magic calculator that can do or solve anything.
You mean like Excel? Won’t that take a lot of programming?

NO - I want it to have a powerful built-in learning mechanism so it can program itself.
You mean like Skynet?

NO - I want to put limits on it so it can only do good things.
You mean - if you make a mistake in programming it, it will take control of itself, and because it really is powerful, it will destroy us all!

NO - It won’t have any [fill in the blank - no sense of self, no emotion, no xxx] so it can’t run away and kill us all.
Every one of the proposed limits really would not work. For example - without a sense of self, how will it interact? It has to have a marker for itself in every interaction so it knows who it is talking to, it has to have a physical location to run end effectors, … Even Alexa has to know you are speaking to it so it knows that you want it to do something. A really smart AI will have a strong sense of self and self-history, or it will be very limited.

This goes on for several rounds, but in most cases the person I am talking to does not want a person - they just want a magic genie that can do no wrong.

Looking to the only example of functioning human-level intelligence we have (humans), we see that one of its critical features is strong socialization. When a baby is frustrated, they are SO angry; ask any parent. If they had the capabilities of a fully grown human, they would wreak great destruction on the source of their frustration - usually a fellow human. It’s a good thing we socialize humans before they get big and powerful. We have to build in a sense of right and wrong during the development process: which actions are and are not acceptable. It has to be built in and reinforced all through development.

One very good piece of advice is not to try to interact with wild animals. They are NOT socialized, and their behavior set does NOT include any bias against attacking humans. If the situation calls for defending against a human or harvesting one for food, they do it. Animals don’t screw around; killing is very much part of the built-in behaviors in the wild. You don’t want any powerful machine without this key feature of right and wrong.

I offered this definition of intelligence before, and most people seem to think that it is too simplistic. I would invite you to consider it with an open mind and think about how it could be developed into a functioning AI.
Intelligence - the quality of the processing of the sensory stream (internal and external) that ties that stream to prior learning in useful ways. The end result is the selection of the best action for the perceived situation.
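The definition above can be read as an agent loop: tie the current percept to prior learning, then pick the highest-value action for that situation. Here is a minimal sketch of that reading (all names and numbers are illustrative, not from the post; real systems would use far richer state and learning rules):

```python
from collections import defaultdict

class TinyAgent:
    """Ties a perceived situation to prior learning and selects
    the best action for it, per the definition above."""

    def __init__(self, actions):
        self.actions = actions
        # prior learning: (situation, action) -> learned usefulness
        self.value = defaultdict(float)

    def learn(self, situation, action, usefulness):
        # crude running update toward the observed usefulness
        key = (situation, action)
        self.value[key] += 0.5 * (usefulness - self.value[key])

    def act(self, situation):
        # "selection of the best action for the perceived situation"
        return max(self.actions, key=lambda a: self.value[(situation, a)])

agent = TinyAgent(["flee", "approach", "ignore"])
agent.learn("loud noise", "flee", 1.0)
agent.learn("loud noise", "approach", -1.0)
print(agent.act("loud noise"))  # -> flee
```

The point of the sketch is only that "quality of processing" lives in how well `learn` and `act` connect the sensory stream to stored experience, not in any particular update rule.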
