Numenta, vis-à-vis Jeff Hawkins, has indeed addressed this and they are steering clear of consciousness as they should.
I never mentioned consciousness. Cognitive capabilities are much more discrete, granular and measurable than consciousness, which still lacks a generally accepted definition, i.e. a consensus.
I fully agree with EEProf. Consciousness cannot be considered a measure of any progress.
AGI is the ability to understand, learn and execute intellectual tasks of a kind that can now only be performed by humans and perhaps some other animals.
Qualia, emotions and other introspective experiences have nothing to do with AGI.
AGI is all about science and engineering. Numenta understands that, and as a consequence has made real progress. There is still a very long way to go.
I’m very curious what kinds of new developments Numenta has. “What progress looks like” seems to be very different between disciplines (compare neuroscience to machine learning, for instance), so it is quite hard for me to tell what kind of new work is worth being excited about.
From the outside, it has definitely appeared like they’ve onboarded some folks with implementation/ML-type backgrounds, which leads me to believe there’s motion towards putting theories into scalable code (as have the rumors of sparse transformer work).
If something is more “general” than humans (after all AGI is not AHI) it also needs the ability to reason by analogy.
Analogies are built upon models, and those models have valence, i.e. some models are better than others. Now if you want a machine to reason about morality (a model of the world you abide by), and you probably will want that (otherwise you have to specify every micro-level detail, with no room for inference), you definitely need a conscious machine that has needs and that can have emotions. Otherwise it will not be general: it won’t understand why eating babies is bad, it will say everything has the same value, which is not something a problem-solving entity should say.
If a machine is not conscious it cannot even answer some of the simplest questions that any human can using access consciousness, which is the simplest part of consciousness. It is just a little autobiography of actions, needs and one’s own perception, all residing in the same bunch of neurons that handle perception, action, memory etc.
By this reasoning, it is essential that a machine does the fundamental things first. It can have GENERAL abilities later, like, I don’t know, ending world hunger etc., if that’s possible. But a general machine without consciousness would actually be worse than Data in the Star Trek series. It would be nihilistic because everything has the same valence, it would not be able to answer why it acts the way it does because its behavior is not conscious, and most certainly you could not improve it by going even further, because that system would be fundamentally broken.
You won’t even have a good conversation with the thing because it won’t properly learn language in the first place. Language is full of qualia; both guide one another. You associate the feeling (the qualia) of the wetness of water with “WA-TER”. Now consider a being that has never needed anything hearing idioms about water. It will think every human is soooo stupid, because they act “random” (i.e. the observed actions of others cannot be modeled by the observer). In reality the people who built it broken would be to blame.
As I said, AGI is the ability to understand, learn and execute intellectual tasks. There is no special privilege for morality, emotion, consciousness or even language, there are just problems to solve and tasks to perform.
It is an AGI task to determine the intended meaning of the English language sentence ‘eating babies is wrong,’ to determine the truth value of that statement in various contexts, and to provide the logical framework supporting its conclusion.
It is an AGI task to acquire and exercise language skills to the point of being able to conduct a ‘good’ conversation according to a given metric. [Language has no qualia, and the qualia you experience for a given stimulus are likely to be quite unlike mine, or any other human being’s.]
The people who build that AGI will have done a very important and valuable thing.
Some folks are so sure they know what it takes (and what it doesn’t) to make an AGI that I wonder what stops them from actually making it.
Each of us has our own vision and no actual implementation, so there’s no point arguing, nor any way to prove, that someone else’s conviction is wrong.
I think such discussions are indeed very valuable, but very inefficient. They are inefficient because before we can really engage in a productive discussion which enters the realm of Philosophy, we need to first establish a very clear glossary in which each key term we use is precisely defined. And if a given term like “consciousness” has multiple differing definitions in the group, then we need to label that term differently for each definition. For example: Consciousness-1 = (Semantic Definition 1 …). Then Consciousness-2 = (Semantic Definition 2 …). And so on. I know this may seem very pedantic and nerdy, but after past experience with such discussions I am very convinced that this lack of accuracy is the main cause of why people cannot agree. We rarely ever mean the same thing when we use such complex words with so many potential interpretations. So how could you agree on anything, if the words “qualia” and “consciousness” have different meanings for each of us?
My motto: First set the definitions, then let the discussion start.
It’s called necessary conditions and sufficient conditions. You MUST have walls to build a house; they are necessary. But having only walls is not sufficient for building a house.
I guess you are new to it. I’ve been sick of this shit for decades.
Morality is something every person invents. It is a model of action: it determines what the end goals should be in every action. Read Erikson on moral development. Without morality you cannot plan the next course of action or anticipate what others will do. Language is full of expectations; there are four known maxims. Read Grice’s maxims. If you cannot infer anything about morality you cannot expect people to do anything (they will look random) and you cannot communicate with them (it will be like me talking to a fish).
What we call emotion is the mind’s explanation of the body’s physiological changes in response to events. Those changes are the modulators of action and perception. Read the work on grounded perception. In order to initiate any motivation you need valence and a model of your needs. That model will modulate your response, so it modulates the way you plan.
You have limited memory: you keep iconic representations of many things and discard the experience. Hence you store a model of the world. That model is not a perfect replica of the world; it is updated through perception. When you plan to do things, things change, they do unexpected stuff, and you need to change your initial plan. You realize some goals are unattainable and do the next best thing. But what the h… were you doing? What did you want 2 seconds ago? Is there a model of your current act? How did you attend to the world, and to what part? The dynamics that deal with consciousness remind you of what you were thinking while you tried to achieve that thing 3 min ago. Listen to Joscha Bach.
Language is not a problem. There is no problem of language. Either you can see the world with the dynamic that creates language or you don’t. Home-signing children, even though they are not taught any language skills, can create a full-fledged language just by winging it. Read about Nicaraguan Sign Language.
You should really try explaining to a person why eating babies is wrong without any emotion or morality sometime; it is so much fun. Also with no consciousness. There is no “problem” of eating babies: humans don’t eat their babies, and that’s why they survived. There is no problem in the language aspect either. There is only a persuasion aspect to solve in this setting; the task is already solved.
Why would you even want a conversation? What metric do you use? What is the metric of any animal when it communicates? Also, no qualia? Read anything about psycholinguistics. There is a human reason why some words are funny, why some names are strange, and why every swear word in the world has common elements when it comes to the sounds (they are very harsh sounds with p, z, k, t, etc.). Read Helen Keller.
The people who think like you will bankrupt companies for generations to come.
The name of the game is to simulate the world, satisfy your needs, and play the long game in a veeeery veeeery complex world. Because humans are very good at this, this complexity never attracts your attention. You should be happy for yourself! You are soooo autonomous that you have the privilege of not knowing anything about autonomy. You never needed to reflect upon how you operate.
The devil is in the details. The ones who are ‘so sure’ tend to be philosophers and psychologists, the last ones to have any real ability for building. In Dennett’s Consciousness Explained, he makes a plea for researchers to build his Multiple Drafts model. That model is so much like ‘Thousand Brains’ he may get his wish. Then he went off to MIT (apparently just around the corner from Tufts) to consult with Rodney Brooks and the Cog project. Got nowhere.
Then you have Graziano (Princeton). In one of the last papers he wrote on AST (Attention Schema Theory), he made a plea for computer scientists (gag me with a spoon) to implement AST and thus produce a subjectively aware computer.
From my engineering perspective you don’t need much in the way of definitions. What you need is a graded series of specific intellectual tasks to be solved, and a metric to measure progress.
This is work we have done for many years in assessing the intelligence of rats, chimps, ravens etc. When we can agree on tasks and then create software that can perform them (without prior task-specific programming) we have AGI.
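The “graded series of tasks plus a metric” idea can be sketched in a few lines. Everything below is illustrative: the task names, difficulty weights, and the toy agent are hypothetical placeholders, not an actual benchmark from animal cognition or anywhere else.

```python
# A minimal sketch of a graded task battery: a candidate agent is run
# against tasks of increasing difficulty, and the progress metric is
# the difficulty-weighted fraction of tasks it solves.  All names here
# are made up for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    difficulty: int                        # higher = harder
    check: Callable[[Callable], bool]      # does this agent solve it?

def score(agent: Callable, battery: list) -> float:
    """Difficulty-weighted share of tasks the agent solves (0.0 to 1.0)."""
    total = sum(t.difficulty for t in battery)
    solved = sum(t.difficulty for t in battery if t.check(agent))
    return solved / total

# Toy battery: each check feeds the agent a task and inspects its output.
battery = [
    Task("detour",   1, lambda a: a("detour") == "reached"),
    Task("tool-use", 2, lambda a: a("tool-use") == "reached"),
    Task("analogy",  4, lambda a: a("analogy") == "reached"),
]

# A toy agent that handles everything except analogy.
toy_agent = lambda task: "reached" if task != "analogy" else "failed"
```

The point of the weighting is that progress on harder tasks should move the metric more than progress on many easy ones; the actual tasks and weights would of course have to be agreed on first.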
This is becoming too much ad hominem. I’m out.
I get your point, but I guess patience and an optimistic attitude, in spite of very uphill semantic battles, have not abandoned me yet.
We have many clever minds here that inspire me. And you have to cut people some slack if they come from different disciplines and experiences. I try to stay constructive.
This translates from German as follows: There is no reasonable alternative to optimism. - Karl Popper
Yes, but it’s good to focus that optimism. I am not optimistic about folks obsessing over morality, consciousness, qualia: those things obviously have nothing to do with GI. At least it’s been obvious to me since my teens.
People who like to define things will volunteer definitions; those who don’t, forget it.
You are right that we need focus; however, there is a proper justification for every area of consideration. The philosophical realm is not trivial, for it sets the framework for all scientific research, and more so for the engineering of simulations and solutions. In plain English, there is a place for everything. The HTM Forum has numerous threads, and some are clearly focused on Theory, which is intricately tied to some level of philosophy. Other threads are focused on the “Engineering” and modelling. Ideally we would have a good “bridge” between both areas if we standardized some of the vocabulary. But not everyone needs to engage in both areas. There is plenty to be done on the “Engineering” and “Foundational Neuroscience” fronts that does not require a philosophical debate. And I fully agree with you that “Morality” is actually an emergent principle, not an objective foundation given by nature. So I will always steer away from such arguments and principles. Such topics are a pitfall that is difficult to get out of. I agree with you on that.
Going back to Jeff’s contribution above: My interest was clearly to understand Jeff’s vision of a very functional roadmap from where we are with TBT to the AGI realm, which he claims (and I believe) we are making great strides towards. I also do not wish to see this roadmap in philosophical terms. Much preferred, I would like to see it in terms of specific capabilities, like “attention”, “awareness”, “predictive behaviors”, “self-awareness and the emergence of agency”, “abstraction and generalization” etc. And these milestones are most likely built upon given structures and control mechanisms, like HTM, SDRs, TBT consensus building, timing and cyclical processes etc.
This is inspirational, but I am not optimistic that it’s true.
There is only one capability that I care about: predictive power, which is the same as generalizability of representations. Everything else is instrumental to that. Jeff didn’t elaborate on recent breakthroughs, but the latest efforts seem to be on building in reference frames.
Thanks for mentioning what areas Jeff has been focusing on lately. I find reference frames to be one of the most interesting areas to focus on. In my opinion, reference frames could even substitute for the concept of hierarchies. In principle, reference frames could be nested at multiple levels and multiple intersections to create multi-dimensional hierarchies, which are much more flexible and versatile. I will (im)patiently await new surprises from this research.
Regarding the capability of predictive power, yes, I also see the value, but I wouldn’t reduce everything to that capability. There are some interesting studies by Stephen Wolfram on “Computational Irreducibility” and on the principle of “Computational Equivalence” which are worth looking into, if you haven’t yet. They do not take value away from predictability, but they make it mathematically clear that many (relatively simple) functions are unpredictable (and therefore irreducible). The point is that not all things in nature can be learned and thereby predicted. In fact, surprisingly many simple things are unpredictable, mathematically and for all forms of conceivable intelligence. Wolfram himself calls this depressing, but there are other capabilities of high value, like recognition, disambiguation, homeostatic functions, Bayesian analysis, genetic-algorithmic evolution towards optimizations, etc. Most if not all of these rely on some form of prediction, but not as a goal, only as an instrument, and limited pockets of predictability combined with some statistics tend to help these other goals be attained.
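Computational irreducibility can be illustrated concretely with Wolfram's favorite example, the elementary cellular automaton Rule 30: the update rule fits on one line, yet no known closed-form shortcut predicts its center column; you have to run the simulation step by step. The sketch below is just a demonstration of that point, not anything from Numenta's work.

```python
# Rule 30: next(l, c, r) = l XOR (c OR r).  Despite this one-line rule,
# no shortcut is known for computing the center column of the evolution
# without simulating every step -- Wolfram's standard example of
# "computational irreducibility".
def rule30_center(steps):
    """Run Rule 30 from a single 1 cell and return the center column."""
    width = 2 * steps + 3          # wide enough that the edges stay 0
    row = [0] * width
    row[width // 2] = 1            # initial condition: one live cell
    center = []
    for _ in range(steps):
        center.append(row[width // 2])
        row = [
            (row[i - 1] ^ (row[i] | row[i + 1])) if 0 < i < width - 1 else 0
            for i in range(width)
        ]
    return center
```

The center column (1, 1, 0, 1, 1, 1, 0, 0, …) passes standard randomness tests, which is exactly why it resists the kind of predictive compression that learning systems depend on.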
In the end, intelligence without a purpose is not confirmable. Or stated in other terms, you could only define intelligence in terms of a purpose. In the absolute absence of purpose, there is no intelligence, not even in nature.
Aside from Stephen Wolfram’s “computational irreducibility”, you also have Kurt Gödel’s incompleteness theorem as a separate proof. Gödel’s incompleteness theorem demonstrates that any consistent formal system rich enough to express arithmetic contains true statements that cannot be proved within that system.