Yes - there is one key difference.
On the perception thing we are together.
The intelligence part is the selection of the best action.
We both see action as a part of the perception process.
Would that mean that intelligence is required for perception in your view?
Maybe the difference lies in the temporality of what we call actions:
After putting a lot of thought into how the Numenta word “predict” fits into a definition of intelligence, a good test was to use it to describe the goal of the game Jeopardy, which is to correctly predict the question for a given answer.
It’s then a matter of describing, in as few words as possible, the basic components required by HTM theory, Watson, and the Heiserman Beta-class algorithm. I ended up with: a memory system to store confidence-level-controlled motor action guesses.
In more detail, this is what I have been tweaking for over a decade:
Behavior from a system or a device qualifies as intelligent by meeting all four of the circuit requirements for this ability:

(1) A body to control, either real or virtual, with motor “muscle(s)”: molecular actuators, motor proteins, speakers (linear actuators), writing to a screen (arm actuation), motorized wheels (rotary actuators). Biological intelligence can lose control of the body muscles needed for movement yet still be aware of what is happening around it, but in that condition it cannot survive on its own and will normally soon perish.

(2) Random Access Memory (RAM) addressed by its sensors, where each motor action and its associated confidence value are stored as separate data elements.

(3) A confidence (central hedonic) system that increments the confidence level of successful motor actions and decrements the confidence value of actions that fail to meet immediate needs.

(4) The ability to guess a new memory action when the associated confidence level sufficiently decreases. For flagella-powered cells, a random guess response is designed into the motor system: reversing the motor direction causes the cell to “tumble” toward a new heading.
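The four requirements read like pseudocode for a tiny agent. Here is a minimal, hypothetical Python sketch of that loop (the class name, the 0.5 starting confidence, the 0.1 step, and the 0.2 re-guess threshold are my own illustrative assumptions, not values from Heiserman or HTM):

```python
import random

ACTIONS = ["forward", "left", "right", "reverse"]  # (1) the body's motor actions
REGUESS_THRESHOLD = 0.2  # (4) guess a new action below this confidence

class Critter:
    def __init__(self):
        # (2) RAM addressed by sensory state: state -> (action, confidence)
        self.memory = {}

    def act(self, sensory_state):
        # recall the stored action, or start a new memory location with a guess
        action, confidence = self.memory.get(
            sensory_state, (random.choice(ACTIONS), 0.5))
        if confidence < REGUESS_THRESHOLD:
            # (4) confidence collapsed: guess an entirely new action
            action, confidence = random.choice(ACTIONS), 0.5
        self.memory[sensory_state] = (action, confidence)
        return action

    def feedback(self, sensory_state, success):
        # (3) hedonic system: raise confidence on success, lower it on failure
        action, confidence = self.memory[sensory_state]
        confidence += 0.1 if success else -0.1
        self.memory[sensory_state] = (action, max(0.0, min(1.0, confidence)))
```

Run in a loop against any environment, the critter keeps high-confidence actions and tumbles to new guesses when they stop working, which is the whole of the behavior described above.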
For machine intelligence, the IBM Watson system that won at Jeopardy qualifies as intelligent. Word combinations for hypotheses were guessed, then tested against memory for confidence that each was a true hypothesis, and for whether Watson was confident enough in its best answer to push the button/buzzer. Watson controlled a speaker (a linear-actuator-powered vocal system), and arm-actuated muscles guiding a pen were simulated by an electric-powered writing device.
Thousand Brains Theory distributes complex predictions among cortical columns, made of social cells that have differentiated into complex neural circuits. Even slime molds are surprisingly fast learners:
Social cells are already excellent at learning to respond properly to environmental stimuli ahead of time, i.e. to make predictions. HTM models code that into each cell, without needing to know exactly how the underlying cellular biology works.
I would judge the usefulness of a definition of intelligence by how quickly it gets someone modeling a system that learns to make correct predictions, and by whether it contains what readers most need in order to try it for themselves.
I put this one down as the expression of intelligence. Without some behavior there is no way to determine whether something is intelligent. As much as it might annoy some people on this forum, you have to judge expressed behavior in some way to determine how intelligent something is.
I had not considered the role of intelligence in active perception, but that does make a great deal of sense - the ability to correctly parse a complex environment, leading to selecting actions, would also be some measure of intelligence. As you learn the features of the environment, you should get better at understanding it, parsing out the objects, actors, and relationships, and predicting the possible outcomes of actions.
This directly points to the deep intertwining of structure and content in intelligence. It’s not enough to have a structure capable of intelligent behavior - it has to be filled with useful data to complete the construction of intelligent behavior. Without this programming/learning it is no more intelligent than a common stone.
Capacity for intelligent behavior alone is not enough.
Yes, there are a number of models to qualify as “intelligent” (or not): for a robotics beginner it’s David Heiserman; for someone who wants to input game-show poetry and all that it’s Watson; and in between, for very neuroscience-based methods, there are HTM and the associated Thousand Brains Theory.
After accounting for Arnold Trehub’s simplified diagram for the human brain I get this comparison:
The above illustration is from: https://sites.google.com/site/intelligencedesignlab/home/ScientificMethod.pdf
What makes a simple Heiserman circuit/interaction able to produce complex behavior is what happens over time, as it races from one timestep/thought to the next, in a direction that depends on what it was thinking about minutes or seconds ago. Action outputs become pulse/spike-train signals for fine control of muscle forces. When everything is going well, it switches high-confidence motor signals back and forth to maintain balance at a given (circular or linear) navigational angle. One tiny thing it has not yet experienced can cause a total change in what it is doing before it returns to the earlier task, or not, depending on whether it starts off a new memory location with the current motor actions as data (the normal case) or guesses entirely new actions.
The model demonstrating navigational behavior has to predict, ahead of time, the motor actions needed to stay out of the way of an approaching shock zone. The zone sends confidence in the usual actions to an all-time low, which causes re-guessing while the critter avoids the food and gets zapped. The virtual critter then gains the common sense to go around the zone and wait for the food to be in the clear. There are then dilemmas where wanting to do two conflicting things at the same time causes amusing impatience, as in living animals. It will never win a genius-level game show, but this is an excellent test of navigational-level intelligence. At the “intellectual” level are vocal motor actions and associated networks for “drawing a picture” of something we can navigate and predict from in our minds. We need both, to talk and walk at the same time, not a single thing controlling both.
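As a toy illustration of the shock-zone story, here is a hypothetical few-line run in which getting zapped sends a state-action confidence to an all-time low and every later choice switches to the detour (the state name, the two actions, and the step sizes are invented purely for illustration):

```python
ACTIONS = ["approach", "detour"]
SHOCK_ZONE = {"near_food"}  # approaching through this state gets zapped

confidence = {}  # (state, action) -> confidence; unknown pairs default to 0.5

def choose(state):
    # pick the highest-confidence action for this state
    return max(ACTIONS, key=lambda a: confidence.get((state, a), 0.5))

def trial(state):
    action = choose(state)
    zapped = action == "approach" and state in SHOCK_ZONE
    c = confidence.get((state, action), 0.5)
    # a zap sends confidence to an all-time low; success nudges it upward
    confidence[(state, action)] = 0.0 if zapped else min(1.0, c + 0.1)
    return action, zapped

history = [trial("near_food") for _ in range(5)]
```

After the single zap on the first trial, every later trial detours: the critter has, in a trivial sense, gained the common sense to go around the zone.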
More information is at:
Numenta has an interesting model that was easy to qualify as intelligent this way, which is what drew me to this forum. Adding “prediction” raised the bar another notch, but it’s more like something a properly functioning system I’m familiar with produces as a whole, not a new thing that has to be added to an existing circuit diagram or algorithm. I want my definition to work, within its given limits, with what others have, and to make sense in the areas of cognitive science it applies to. It’s something I put a lot of thought into, and I needed to share it with you in this thread.
Slightly related to the topic at hand:
I would start with building something, show its tricks to a friend or some humans, and maybe judge those humans’ reactions instead? I think this is a simpler solution, without a long laundry list. Just my 2 cents; I prefer this because I’m an engineer: start simple, evaluate results, iterate, evaluate results, and so on.
Cuz programs usually don’t surprise us if there’s no new idea. No need to code it.
I think the OP should not lack programming skills. Actually, this is a very basic question: What is intelligence? What are we going to do?
BTW, I saw something unbelievable upstairs, ha ha ha ha ha ha
It depends on what you mean by “programs”. There are programs out there where all you can do is wait, observe, wait and observe. It is up to the observer to give meaning to the outcome. One thing I like about HTM sims is that one has to run them to be able to see and evaluate what’s next. This feature is akin to complex systems, and sometimes it is much more useful to define intelligence as the sum of the parts’ emergent actions plus the governing rules. The former is observed while the latter is built iteratively.
I studied HTM last year.
Agree. Behavior and actions are the only way to assess the intelligence of an agent.
I don’t know how to define intelligence on my side. I tried to, but was never satisfied with what I came up with. Eventually, I think it is more useful to spend time on understanding brain mechanisms than on finding a consensus on the definition of “intelligence”. Definition of “intelligence” will get clearer along with progress in neurosciences and AI research anyway, so no reason to focus too much on the perfect definition right now in my opinion.
That’s why I prefer to focus on two other terms that are more specific: perception and cognition (for sure, they relate to intelligence, but how exactly they relate is again likely to become a never-ending debate).
I like @Gary_Gaulin’s suggestion: instead of defining what intelligence really is, why not choose the definition that will drive AI research in the good direction? In this line, I would suggest to define intelligence as a continuum from perception to cognition.
Let me now explain a bit what perception and cognition are in my view.
Perception is our sensory experience of the world around us and results from the interpretation of bottom-up sensory stimuli based on internal top-down expectations. Note that perception is an active process (sometimes called active perception or active sensing).
I’m a fan of the predictive coding theory applied to perceptual processing. It states that the brain is constantly generating and updating a mental model of sensory inputs.
Moving our sensors is not only a way to scan the environment, it is also a way to actively verify the correctness of our models and to correct them if needed. We learn from the consequences of our brain’s actions about aspects of the environment that matter for particular goals.
When incoming bottom-up stimuli fit top-down expectations, it implies that a connection has been established between some of the brain’s circuits and something meaningful in the real world. This active process is referred to as grounding. It attaches a meaning to stimulus-induced neural activity, which becomes a meaningful percept.
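In its simplest single-variable form, that generate-and-correct loop can be sketched as follows; this is a deliberately toy model (the fixed `learning_rate` and the scalar state are simplifying assumptions, not a claim about how cortex implements predictive coding):

```python
def predictive_coding(inputs, learning_rate=0.5):
    """Track a sensory stream with a running top-down prediction."""
    prediction = 0.0
    errors = []
    for x in inputs:
        error = x - prediction               # bottom-up stimulus vs. expectation
        prediction += learning_rate * error  # correct the internal model
        errors.append(abs(error))
    return prediction, errors

# On a steady stimulus the prediction converges and the error fades,
# which is the sense in which the percept becomes "expected".
final, errs = predictive_coding([1.0] * 10)
```

The interesting behavior is in the error signal: it is large exactly when the world violates the model, which is what makes moving the sensors a way to test and correct expectations.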
Perception is a prerequisite for cognition because the latter uses meaningful mental representations that should already be grounded by active sensing.
Contrary to perception, cognition is characterized by a disengagement from the external world. Cognition relies on internally organized activity detached from immediate sensory inputs and motor outputs. Thus, cognition means delayed actions (contrary to the immediate actions of perception).
In fact, I think that perception & cognition use the same fundamental mechanisms. From the perspective of a brain network receiving sensory inputs, there is no difference between real sensory inputs and similar activity generated by other internal networks.
I don’t have a position on this topic because, again, it depends on whether you consider intelligence as a collection of specific skills (content related) or the ability to acquire new skills (structure related), or both.
Maybe we need a “Brain Building - Q1. Define Perception” topic
Agreed - strongly - on the internal mechanism using the perceptual hardware.
I would like to add that this is as serial process, hence my drawings showing the “inward” loop through the subcortical structures vs the “lateral” and “external” loop in your material from Buzaki. The personal experience of this action is widely labeled as consciousness.
For the other ones, Mark is referring to this illustration from Buzsaki (in his last book “The Brain from Inside-Out”):
And my adaptation:
I believe you are referring to recognizing a smile or frown in a facial expression? But that seems to move away from my original point about recognizing objects, which is hypothesized to be handled by the ventral system?
I don’t want to open this can of worms, but I am now wondering what is considered an emotion, since, similar to intelligence, I can’t really find a commonly agreed-upon definition. Say a C. elegans is in a toxic environment and its body is under tremendous “stress”: is that considered an emotion? Is surprise an emotion, or just the brain trying to align prediction with reality? If an organism can express only “stress” and “surprise”, is it considered to have emotion? And the most important question: if an organism does not express emotion, does that mean it cannot have intelligence?
I think you stated it clearly already here: I believe you are describing visual attention, not recognition. You have to be able to recognize that a certain part of your visual field is an object before you can pay attention to it. But I would like to stay focused on the recognition part.
In a sense I am still on the same side as Francois Chollet on the relationship between emotion and intelligence. But at the same time I suppose I first need a good understanding of what emotion really is: is it just different chemicals released by the body that influence the brain’s operation, like the basic architecture does, or is it something else?
I can’t help thinking that doesn’t seem to align with the engineering approach. I agree with starting out small. But how are you going to build something when you don’t know what you are building? What tricks are you showing when you don’t know what the trick is?
You were asking for a laundry list for a sim of intelligence or an intelligent object. It is right that you have to know what you are trying to build, but in intelligence’s case it’s a different story. No one knows what intelligence really is, but we can feel/identify it. Would a laundry list, when implemented, result in intelligence? Or can intelligence be reduced to properties such as those listed above? I don’t think so; it would just complicate things. So IMO, to keep a simulation simple: identify the problem, provide a solution (e.g. an algorithm), and as a good test of intelligence, ask beings who can identify intelligence to evaluate the sim’s/solution’s behavior. This is easier than a laundry list.
The problem with the five criteria in the first post is that they’re so generic that a Raspberry Pi with an attached camera and motion detection & recording software arguably fits all the intelligence criteria:
Perceiving and memorizing information: it quite obviously does that.
De-memorizing: that’s just erasing old recordings in order to keep free space available for new ones.
Predicting is less obvious, but the basic motion-vector algorithm in the GPU does exactly that: it does not record frames in their entirety; the video compression algorithm stores predictions of how the current frame reshapes into the following frame. That checks exactly the “next prediction aligns with the next perception” requirement.
The last one, deriving information, is too vaguely expressed. A simple XOR of two files, or any computing process for that matter, derives new information from previous information. “Mash up” doesn’t mean much; we tend to use it whenever we don’t really understand what actually happens: “Chemicals got mashed up and the first living cell appeared. Mashing up must be an essential ingredient of life.”
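To make the XOR point concrete, here is the two-file case in a few lines; the derived bytes are genuinely “new information” in the sense above, yet XOR is about the least intelligent process imaginable:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Derive 'new' information from two inputs, byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

derived = xor_bytes(b"hello", b"world")
# XOR is reversible, so the derived bytes still encode both originals:
assert xor_bytes(derived, b"world") == b"hello"
```

By the vague “derives information” criterion this qualifies, which is exactly why the criterion needs sharpening.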
I think maybe this continues to be an issue of different objectives in the discussion. I have stated mine quite clearly: my intention is not to build a usable AI; I am only interested in building a simulation of biological intelligence (and not human intelligence either). I know many members of this forum are interested only in human intelligence because it is more interesting, but in my opinion understanding human intelligence is a very, very complex topic. I am not religious, but when I look at all the species around us and the commonality of the neuron structure, I can’t help thinking there were designers experimenting and improving the design from single cells to C. elegans to octopuses and finally us. If that is true, I am sure they would have failed miserably if they had focused on building human intelligence right from the start. I kinda like one of the statements from this article (although commercial) on the reductionist approach:
“Ambitious scientific projects, such as the human connectome project and the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative, are looking to answer these very complex questions through direct imaging of human brains. Meanwhile, other researchers are taking a reductionist approach to unlocking the secrets of the brain. A reductionist approach is one that seeks answers to highly complex questions, such as how does the human brain work, in the simplest model available, such as C. elegans. The C. elegans nervous system is extremely simple when compared to the human brain. A C. elegans hermaphrodite has just 302 neurons while the human brain is estimated to be made up of 100 billion neurons”
When I look at the progress of the European brain project after so much money and time invested, and still without much of a breakthrough, I do believe a reductionist approach might be more practical. And there is a very big issue with using computer hardware to simulate biological intelligence with quadrillions of synaptic connections. You mentioned before that you are planning to use a floor of computers, and I am not sure if you plan to do it yourself, but I can’t help feeling pessimistic about that approach. I am lucky in a sense to have a small team that has already built a pumping engine for a high-volume real-time analytics service with single-digit-microsecond latency, a back-off idle strategy, and back pressure, which I can use to bridge the differences between computer hardware and biological architecture (I am hoping this will be the only part that does not adhere to the biological design). But even then I am not sure it can evolve to simulate an octopus’s intelligence.
I do like your definition. In a sense I can relate my list to yours to a certain extent. I am not sure about “useful ways” and “best action” because they sound a bit subjective. I am going to invite you to be open-minded and see if you can break your definition down further and deeper from first principles, particularly around “useful ways” and “best action”. And what I have learned so far is: don’t worry if others think it is too simplistic; being complicated is never a good thing. If you do, please share your update; I would love to learn more from it. I truly think it is important to continuously refine the definition of intelligence as the basis of the work every step of the way.
I like this quote from Francois Chollet:
“One of the benefits of having an explicit, formal definition of intelligence, is to identify what general principles underpin it. A precise definition and measure serve as a North Star for research.”
And thanks to Mark, I highly recommend reading Francois’ paper.
I respect others’ objectives (genuinely), but mine is really about building a simulation of biological intelligence, really about how a network of biological neurons can form to show signs of intelligence (not at the human level).