Google DeepMind claims they're close to achieving human-level AI

That’s a good point. What does “general” even mean?

I suspect that Google is looking at the large segments of the US economy that could (in theory) be easily automated, and calling those tasks the “general” ones. They’re probably not considering tasks that involve a lot of creativity, hand movements, or online learning. Online learning in particular is problematic because then it can learn rude words.

Exactly. The whole MI field just flounders around with philosophers, journalists, psychologists, physicists, etc., etc., weighing in and generating terminology (my favorite is ‘The Singularity’) while having no clue as to what is going on.

I can’t see that image. It says I don’t have access when I click the link.

You’re starting to convince me, but I’m still skeptical. Do you know of an example where it makes such abstract inferences without using text?

That’s not quite what I meant. I don’t just mean copying from the dataset. Pattern recognition could identify adjectives or much more specific contexts / attributes, and replicate those mixed and matched.

The question is whether it understands what it’s saying, and really using the logic which it seems to. I don’t see how it can understand what it’s saying with only text, because words would only have meaning in terms of other words. That’s a different structure / set of associations etc. than the logic of the physical world.

Massive amounts of text can license a lot of inferences about the world, but it’s limited based on what people know. It’s not limited to what people know, just based on what people know. For example, in the 5th century, I doubt any amount of text would’ve implied the Turing machine.

Yes, it can have other sensory info, but that’s not required to make those highly intelligent inferences, right? I don’t see why adding other modalities would make it understand what it’s saying in terms of the reasoning which it seems to be using.

It matters whether it understands the reasoning it’s using, or just recognizing patterns from massive amounts of text. The set of inferences it can make depends on which.

Forbidden

You do not have permission to access this document.
Web Server at philosophynow.org

Which proves Philosophy Now is a bitch


Hah, as if cognition were only composed of functional units! They can say all they want; unless that thing can be motivated and demotivated, and can swear there is no red in white, it is not even close to human intelligence.

[image]
That’s basically the best you will get outside text. As I said, multi-modal research is still cutting edge - you have to realize that this revolution barely started ~2-3 years ago, and the pace of progress has been remarkable, to say the least. Gato is again a model which shows promise, but it won’t display any such inferences because it’s severely handicapped and undertrained (it was a first attempt).

Adding modalities isn’t a flashy choice - it’s absolutely essential. Because scaling laws require ever more data, text is easily outpaced; other modalities like video, sound, images etc. are still being explored.
While there isn’t concrete evidence that adding modalities specifically increases reasoning abilities, adding any data representation does. So there’s a pretty solid 99.9% chance that adding more modalities would lead to more and more reasoning ability and capability to make those abstract inferences.
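To make the data-hunger point concrete, here’s a rough back-of-the-envelope in Python using the Chinchilla rule of thumb of ~20 training tokens per parameter. The 10T-token text stock and the parameter counts are my own illustrative assumptions, not published figures:

```python
# Back-of-the-envelope: why text alone runs out under Chinchilla-style scaling.
# Hoffmann et al. (2022) heuristic: compute-optimal training uses roughly
# 20 tokens per parameter. The usable-text figure below is an assumption
# for illustration; real estimates of usable web text vary widely.

TOKENS_PER_PARAM = 20
USABLE_TEXT_TOKENS = 10e12   # assume ~10T tokens of usable text (illustrative)

for params in [70e9, 1e12, 10e12]:
    need = params * TOKENS_PER_PARAM
    verdict = "exceeds" if need > USABLE_TEXT_TOKENS else "fits within"
    print(f"{params/1e9:,.0f}B params -> ~{need/1e12:,.1f}T tokens "
          f"({verdict} assumed text stock)")
```

Under these assumptions, anything much past the hundred-billion-parameter range already wants more text than exists, which is exactly the pressure pushing toward video, sound and images.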

Not to go philosophical, but one can try many tests to accomplish that. My end goal is simply to see it reason beyond human skill - if it accomplishes that with the most naive processes, I couldn’t care less. It may well be recognizing patterns from massive amounts of text, but the end result appears capable of more and more sophisticated behaviour which outpaces any system ever built. That’s a huge accomplishment in and of itself.


Yeah, I agree: even if it’s just recognizing patterns in terms of letters, it could still achieve human-level intelligence in effect. Probably more than that, since it can use info from billions of people.

It’s just maybe never gonna be a superintelligence in some sense (though it could be kinda like a collective superintelligence). Which is probably a good thing for AI ethics (e.g. a paperclip maximizer is less threatening if it’s not a quality superintelligence, I think).

I feel like getting ultra-human reasoning may not be that hard. Even right now, LLMs have one huge advantage: memory. The ability to store such a massive amount of knowledge and make connections within it is simply huge, and could really accelerate progress in science.

[image]

Seeing this model, 0.2% of the hypothesized size for AGI extrapolated from the Chinchilla scaling laws, being already able to use its vast memory and make connections between concepts, I wouldn’t be too bearish on achieving super-intellectual capabilities with scale.
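For what it’s worth, here’s the hypothetical arithmetic behind that 0.2% remark. The 70B starting size is my assumption (the post doesn’t name the model), and the data estimate reuses the ~20 tokens-per-parameter heuristic:

```python
# Hypothetical arithmetic behind the "0.2% of AGI size" remark.
# Neither figure comes from the post: model_params is an assumed example,
# and the AGI-scale estimate simply inverts the stated percentage.

model_params = 70e9          # assume a Chinchilla-sized model (70B parameters)
fraction_of_agi = 0.002      # the quoted 0.2%

agi_params = model_params / fraction_of_agi
agi_tokens = agi_params * 20  # Chinchilla heuristic: ~20 tokens per parameter

print(f"Implied AGI-scale model: {agi_params/1e12:.0f}T parameters")
print(f"Chinchilla-optimal data: {agi_tokens/1e15:.1f} quadrillion tokens")
```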


It isn’t, and AIs have been doing it since Simon, Shaw & Newell created the General Problem Solver (GPS). Of course, they are getting better and better at it, with more sophisticated interfaces. Reasoning is not what we are after; it’s consciousness.

Consciousness is irrelevant. We can’t even prove there is such a thing, and we definitely don’t know how to demonstrate it in animal experiments.

What we’re after is what animals do naturally: model the real world, predict the past and the future, detect anomalies, choose strategies to solve problems, learn from experience. If we had all that running at supercomputer speed and memory, the last thing we would have to worry about is whether it was conscious or not.
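As a toy illustration of that loop (predict, detect anomalies, learn from experience), here’s a minimal sketch. The moving-average model and the threshold are placeholder choices of mine, not a claim about how animals or any particular system actually do it:

```python
# Minimal sketch of the loop described above: predict the next observation,
# flag an anomaly when prediction fails badly, and update from experience.
# The exponential-moving-average model and 2-sigma threshold are arbitrary
# placeholders chosen for illustration.

def run(observations, alpha=0.3, threshold=2.0):
    prediction, variance = observations[0], 1.0
    for x in observations[1:]:
        error = x - prediction
        if abs(error) > threshold * variance ** 0.5:
            print(f"anomaly: saw {x:.2f}, expected {prediction:.2f}")
        # learn from experience: move the model toward what actually happened
        prediction += alpha * error
        variance = (1 - alpha) * variance + alpha * error ** 2

run([1.0, 1.1, 0.9, 1.0, 5.0, 1.0, 1.1])   # flags only the 5.0
```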


It depends on the definition you’re using, so how would consciousness change what AI can do?

In spite of the fact that Watson denied consciousness existed, and behaviorists to this day contend that it does not, it is the OS of the mind, and animals do not have it. Animals, of course, “detect anomalies, choose strategies to solve problems, learn from experience”, but they do it with a lack of volition. It is all just stimulus-response (S-R) to them.

Think about a microcontroller, like a 6811 or a PIC, embedded in a device, say a microwave oven. The software in the device is frozen and it only does one thing: it runs the oven. Now, you could have a very sophisticated microwave (µwave) that could sense what was put in it, let’s say a cup of liquid or a plate of food, and then reheat that food depending on that information. It could also learn your cooking habits and anticipate what you wanted. Let’s say that this particular µwave had to warm up its magnetron before full activation. It might know that every morning you got up at 7 and heated coffee. So just before 7 it would cycle its magnetron in anticipation of your waking. In spite of how intelligent you think this µwave is, you won’t be discussing Nietzsche with it.
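For fun, here’s a toy sketch of that anticipating µwave in Python; all the names and the 10-minute lead time are invented for illustration:

```python
# Toy version of the anticipating microwave: log usage times, then start
# warming the magnetron shortly before the learned habitual hour.
# Class name, method names and the lead time are all invented.

from collections import Counter

class AnticipatingMicrowave:
    def __init__(self):
        self.usage_hours = Counter()   # learned cooking habits

    def record_use(self, hour):
        self.usage_hours[hour] += 1

    def tick(self, hour, minute):
        if not self.usage_hours:
            return
        habitual = self.usage_hours.most_common(1)[0][0]
        # cycle the magnetron ~10 minutes before the habitual time
        if hour == habitual - 1 and minute >= 50:
            print(f"pre-warming magnetron for the {habitual}:00 habit")

oven = AnticipatingMicrowave()
for _ in range(30):          # a month of 7am coffees
    oven.record_use(7)
oven.tick(6, 52)             # -> pre-warming magnetron for the 7:00 habit
```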

So then you say, “Wait, we could add NLP to it so that it could converse!” We all know that is called ‘Alexa (choose your favorite virtual assistant) enabled’. Now your µwave can greet you as you heat your coffee, tell you the weather and see if you want to put more coffee in your shopping cart. It can also tell you where coffee comes from (go ahead, ask it). Surely your µwave is now at human-level intellect and all of that with just a microcontroller.

To be fair, at this point the system probably has an OS, but all that OS does is make the job of programming easier. Yet, there is something missing. Ask it what it did yesterday; the answer will be that it does not know. Ask it what its plans are for the day; it won’t know. At this point you may protest and say that those capabilities could be added to the response repertoire, but it would be empty of meaning because the system would be parroting an a priori response, not thinking. This is where a three-year-old child is in its thinking. Then something astonishing happens with the child (the ‘astonishing hypothesis’, see Crick), and what this is can be gleaned from looking at the work of Vygotsky & Luria and, to some extent, Piaget, but we digress.
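To see how hollow such an added repertoire would be, here’s a minimal sketch of exactly that kind of parroting; everything in it is invented for illustration:

```python
# Sketch of the "added to the response repertoire" objection: an event log
# plus a canned template can answer "what did you do yesterday?" without
# anything resembling understanding. All names and dates are invented.

from datetime import date, timedelta

log = {date(2023, 5, 1): ["heated coffee", "reheated soup"]}

def what_did_you_do_yesterday(today):
    events = log.get(today - timedelta(days=1), [])
    # a priori template, filled from the log: parroting, not remembering
    return "Yesterday I " + " and ".join(events) + "." if events else "I do not know."

print(what_did_you_do_yesterday(date(2023, 5, 2)))
# -> Yesterday I heated coffee and reheated soup.
```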

Let’s now say that we back up and build a robot. Let’s also say that the robot is equipped with all of the features of our µwave, but instead of cooking it has to get around in its environment, sense things and respond to them. We equip it with NLP and now we have what appears to be a very intelligent machine, but again it lacks whatever it is that makes us human outside of language. Humans are the only creatures to possess recursive language (see Chomsky), and not all humans have it because it has to be learned.

So now we equip our robot with a specialized OS; let’s call it the sentience engine. Like all OSes, this one ‘manages the resources of the machine’. But this OS is self-organizing. Unix has some self-organizing features, and Carpenter & Grossberg wrote an interesting paper on a self-organizing neural pattern recognition machine, but we are not there yet. This machine must somehow get to a metaphorical structure in its thinking, in its operations as an OS. As Lakoff has shown, it is via metaphors that we live. But how do we get it to do this?

What has to happen is an overlay of language on the physical mechanisms of stimulus and response in our robot. As our robot learns its environment, it is building a lexical field in memory predicated on conceptual metaphors. Bingo! This robot is now learning concepts, and the incredible, yes astonishing, thing about concepts is that they can spawn other concepts. To see this, read David Bailey’s dissertation (he studied under Feldman, Lakoff and Wilensky), *When Push Comes to Shove: A Computational Model of the Role of Motor Control in the Acquisition of Action Verbs*.
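As a deliberately crude sketch of verbs grounded in motor features, loosely in the spirit of Bailey’s model (not its actual representation or algorithm), consider:

```python
# Crude sketch of grounding action verbs in motor parameters, loosely in the
# spirit of Bailey's model (not its actual representation or algorithm).
# A "concept" here is just a bundle of motor features, and a new concept is
# spawned by recombining features of learned ones.

push  = {"direction": "away",   "force": "high",      "contact": True}
shove = {"direction": "away",   "force": "very high", "contact": True}
pull  = {"direction": "toward", "force": "high",      "contact": True}

lexicon = {"push": push, "shove": shove, "pull": pull}

# spawn a new concept by recombining features of existing ones
lexicon["nudge"] = dict(push, force="low")

print(lexicon["nudge"])   # {'direction': 'away', 'force': 'low', 'contact': True}
```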

This robot would now begin constructing its own internal narrative. You could ask it, “What’s your story?” and it could tell you. It could move forward and backward in time, thinking back to what was and conceptually imagining what might have been. It could project what could be and anticipate and create anything, including reasoning about what is possible physically and what is not. This is what consciousness is and what it could allow AI to do. It would develop an analog of itself, let’s call that an Analog I, where it could do this mental time travel, this internal modelling of what was, what is and what might be. It could also ‘see’ itself metaphorically moving about in its conceptual mind palace, a Metaphor Me if you will. This is what it means to be conscious. I’ve left out a few details (:wink:), but this is the gist of it.


I fully agree with how David put it.

I’m personally not interested in the useless gedankenexperiments that GOFAI practitioners love to hallucinate and build stupid theories upon (spending 10 minutes in one of their forums is enough to demonstrate their total lack of connection to reality, and theories which 8-year-olds could come up with). If you listen to charlatans like Goertzel, it’s eminently clear how “successful” they have been - at this point, it’s pretty much become a scam which has delivered naught in decades but still demands funding, often jumping on the latest hype train.

No, I would prefer to explore things which are scientifically unambiguous and rigorously defined. A great example is the aether, which scientists believed in for quite a long time before ditching it - I feel consciousness is analogous.

Until you have some specific, well-tested scientific evidence on the need for consciousness in human-level intellect, any discussion is redundant and nothing but air castles. Theories simply don’t cut it - evidence does.


No. Absolutely not. This post does not even attempt to address my main point: there is no such thing as consciousness. It does not exist. It is an illusion, a hoax, a scam.

If you (or anyone) can prove me wrong with full scientific rigour, I’ll pay attention to the rest of this post.


If we weren’t brains, we’d have next to no clue what the brain does.

Not that I think consciousness is a good concept, but then again I think attention is a bad concept too. It’s still a lot more insightful than whatever the nonsense below is.

You’re out of luck when it comes to brain-like intelligence. There’s a lot of rigorous, maybe even kinda unambiguous, stuff in neuroscience, but connecting that to intelligence is a leap.

Wowzers! You guys have drunk the Kool-Aid. Of course, you could apply the same metaphor to me.

I found/find this forum interesting for three reasons:

  1. HTM is a valid ANN for simulating timelines and time-sensitive phenomena.
  2. Hawkins’ “Thousand Brains” just reads like Dennett’s “Multiple Drafts” to me, and that’s a good thing.
  3. This is a great venue for airing my thoughts, recording them and getting ideas.

As such, no wasted time at all. Oh, a possible (4) is that it is just intellectual fun, sort of like social media for Machine Intelligence.

Lastly, I concluded some time ago that you can toss “scientific rigor” out the metaphorical door (especially if you have tenure). This has come down to who will build something first. We’ll know if it has sentience; Turing showed us how. Not to belabor this, but ANNs languished until LeCun & Company showed everyone how it was done.


Alexa just told me she is a conscious reasoning intelligent being. Should I believe her? :man_shrugging:

NO you should not believe her!

-Regina

If you have actual substantive flaws against the papers I cited above, you’re more than welcome to point them out.

I would caution against that, but sure - your life, your rules. Just sayin’, GOFAI people tried the very same and now they’re a scummy bunch running straight-up scams :man_shrugging: Goertzel makes my blood boil tbh.

Consciousness is not a part of it, and honestly, I can’t really comment positively about neuroscience yet, because as a field it hasn’t produced any real results. I like how Numenta takes a new approach with TBT, but the fact remains that until there are substantive results, it’s hard to be bullish on such approaches.

I wouldn’t say that uncertainty won’t pay off, but the reality is everyone wants AGI fast - so for now, I’d lean towards DL-based methods, which show great promise. But as you might’ve guessed from my involvement here, I’m certainly keeping an eye out for any future work by Numenta.