Google DeepMind claims they're close to achieving human-level AI

Oooo! Can I trade you my Alexa for yours?

Goertzel is not GOFAI by any stretch. He is a bullshitter by nature, nothing to do with any specific paradigm. Experimental “rigor” is a safe way, but you won’t get any breakthroughs if that’s all you’ve got. Even ANN, which has extremely primitive theoretical motivation, was pure trash experimentally for the first ~3 decades of its history. Until it wasn’t. You are hopping on that bandwagon now, but it only exists because people worked on it without any “scientific rigor” for a lifetime.

2 Likes

That’s pure bias. There’s a huge difference between the eras - that was the late 1950s with the perceptron, which eventually led to LeCun’s MNIST exploits. While DL at that time didn’t have the scientific rigor it has now, it had results and mathematical theory underpinning it, which, if I may remind you, is still very much prevalent and foundational. GOFAI, on the other hand, had nothing except a bunch of crackpots making up theories. Goertzel may not be GOFAI but he does claim to be - and the few GOFAI forums I’ve visited exhibited this wonderfully.

Even from its inception, there wasn’t anything “magical” per se. As I said, the mathematical fundamentals drawn from various fields like geometry, topology, and trusty linear algebra, along with Turing’s work, still made it an interesting direction for those early researchers.

More importantly, the earliest form of the Universal Approximation Theorem was proven by Hornik, Stinchcombe, and White in 1989, which provided a strong theoretical underpinning to their work, as well as hinting that scaling these ideas would help tremendously - they noted as much in their paper.
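For reference, a rough, paraphrased statement of that result (my wording, not the authors’): a single hidden layer with enough units can get uniformly as close as you like to any continuous function on a compact domain.

```latex
% Paraphrased, informal statement of the universal approximation theorem:
% for any continuous f on a compact set K and any tolerance eps, some
% single-hidden-layer network gets uniformly within eps of f.
\[
\forall f \in C(K),\ \forall \varepsilon > 0,\ \exists N,\ \{v_i, w_i, b_i\}:\quad
\sup_{x \in K}\,\Bigl|\, f(x) - \sum_{i=1}^{N} v_i\, \sigma\bigl(w_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon
\]
% Here sigma is a fixed non-constant, bounded, continuous activation and K is a
% compact subset of R^n; the theorem says nothing about how to find the weights.
```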

Safe to say, there was more theoretically sound research in DNNs in just those few early decades than in any other field, including GOFAI and neuroscience, which lag behind in results despite being much older. (GOFAI was there at the very inception of modern computing, while neuroscience technically started accelerating in the late 19th century and is still very much a popular field of research.)

1 Like

Anyone can get bad results.

The practitioners disagree. “Theory” is in the eye of the beholder: you can always find one after the fact, but the original motivation was simply to mimic neurons, however badly.

Yeah, magic is bad, and consciousness as a constructive principle is voodoo, but you won’t get anywhere without imagination, which is never “rigorous” or safe.

3 Likes

@bkaz well said, well put.

1 Like

I can’t let that one pass. Most scientific research starts with someone noticing something interesting and doing experiments to try to learn more. This kind of serendipitous tinkering has a long and hallowed history and has led to many dramatic successes. Go for it!

But this is quite different. The proposition before us is that there is something called ‘consciousness’ and that the end goal of AGI R&D should be to achieve it, while refusing to define what it is or how one might know it had been achieved. This is not science; it’s alchemy or astrology, and about as useful.

1 Like

There is a whole bunch of propositions before us here :).

It does exist and is definable, if you really want to define it. But it has nothing to do with GI per se.

1 Like

Their results weren’t bad at all for their time - in fact, as my source mentioned, LeCun’s model was used to read about 20% of the checks in the US. Deploying something that experimental with that kind of reliability is pretty impressive.

That changed very quickly, however; the original hypothesis was that neurons compute a function, and ANNs were the ideal mathematical object to approximate those functions. I suspect no one expected them to be taken this far - the expectation was that we’d have migrated to something quite different by now.
Thus, from the very start, the approach was never to simulate the brain.
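To make that “neurons compute a function, networks approximate it” framing concrete, here is a minimal toy sketch (my own illustration, with arbitrary sizes and hyperparameters, not anyone’s historical code): a one-hidden-layer network fit by plain gradient descent to approximate sin(x).

```python
# Toy illustration only: a one-hidden-layer tanh network trained by full-batch
# gradient descent to approximate f(x) = sin(x) on [-pi, pi].
# Hidden size, learning rate, and step count are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

H = 32                                   # hidden units
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # network output
    err = pred - y
    loss = np.mean(err ** 2)

    # Backpropagate the mean-squared error through the two layers.
    g_pred = 2 * err / len(x)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)  # tanh derivative
    gW1 = x.T @ g_h
    gb1 = g_h.sum(0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final MSE: {loss:.4f}")           # small -> the net approximates sin(x)
```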

Imagine all you like - I (personally) simply care about results.

1 Like

You are talking about ~1989? That’s 46 years after McCulloch and Pitts.

And you don’t care how people get these results…

There are related / sub-component things which are more about neurons.

It might take a millennium, I have no idea, but neuroscience will figure out how the brain works. So it’s at least a backup option. Unlike machine learning, neuroscience is guaranteed to lead to general intelligence. I mean, ML seems pretty likely to lead to something like that too, but it might be very different from humans and useful for other things.
It hasn’t produced results like machine learning has, but it does make progress on brain-specific problems. It’s just not super clearly applicable to AI yet. That’s partly because of the experimental tools, which have gotten a lot more powerful.

I think it’s being defined:

I think consciousness is vague and has many meanings. Whereas that’s pretty specific, like narrative imagination or something. Maybe just thought.

1 Like

Not to get into an argument about DL history, but most researchers consider Rosenblatt’s perceptron from the late 1950s to be the closest precursor to actual neural networks - a mere 30 years between Rosenblatt and LeCun.

More importantly, it was 30 years in a vastly different era and atmosphere - 30 years in the 21st century does not equate to 30 years in the 20th.

No, I don’t. You can dream up elaborate theories about the brain deriving its intellect from a divine being for all I care. But if you manage to turn that crackpot theory into AGI, that’s really what I care about. Theories are fine as long as they’re adequately grounded in at least some scientific rigour. Otherwise, we’d be arguing with flat-earthers.

That I fully agree with. It’s really the time frame that’s the deal-breaker for me personally. But neuroscience as a field is useful in other ways too, as you pointed out, such as understanding brain-related diseases and ailments.

1 Like

As you said, 30 years in the 21st century does not equate to 30 years in the 20th.

Advances in other areas of science & technology can greatly help neuroscience. In particular, advances in microscopes and genetics are enabling new methods of recording data about the brain, for example (https://www.youtube.com/watch?v=j8xiM_UdrUw), and advances in computers are helping to sort through all of the data.

1 Like

FSD will never happen via DL.

How does that video prove anything? And what is it even supposed to show, lmao

1 Like

Ok. I thought that an explanation was unnecessary.

Would you confuse a sun projection for road lines when driving into a tunnel, like that? I understand what is happening and ignore it (a five-year-old child would too). Do you think FSD will ever understand the situation?

PS: laughing your ass off is out of place in any civilized discussion. Another FAIR bully?

What? I have no affiliation with Facebook.

This is confusing; I don’t even know what system you’re using. Even assuming Tesla’s FSD beta, they push major updates frequently, so without knowing the exact software version and vehicle model one can’t fully reproduce this scenario.

I’m still unclear on what the problem was - you mentioned the sun’s projection, but as the car enters the tunnel with the shadows on the road, there is almost no deviation in the steering and the car stays on course, driving exactly as I’d expect and hope.

If you were referring to the slight turn as it enters the tunnel, I still see no problem - it follows the lead car perfectly and is merely being cautious on entry, easing slightly away from the wall to avoid a collision (0:11). Given how dark the tunnel is, that looks like a sensible policy for keeping the driver safe.

In any case, it still followed the lead car excellently and quickly adapted when it realized the turn was slightly excessive, correcting back to its original trajectory.

It’s really that quick adaptation to a new situation (and the rapid response to a mistake, recognizing that the new path reduced the margin of safety with the vehicle alongside and was too close for its taste) that runs counter to your entire point - it demonstrates very well that FSD is indeed possible and flexible enough to adapt to new situations, improving over time.

The car confuses the sun projection with a road mark. It’s clear that it is wrongly overcorrecting the trajectory to the right.

The car’s trajectory is corrected because the driver takes the wheel.

No matter what version of the software you are using: can a DL model understand the context of the image?
This is just an example that proves image processing is not enough. You need to understand where you are.

Adapt to new situations? DL is just a glorified lookup table.

1 Like

Full Self Drive ain’t. Image processing is not enough. DL will never do it.

Animal brains model reality, project forwards and backwards in time, detect anomalies, choose strategies, learn rapidly from mistakes. We have no AI that comes close.

The wonder is not that self-driving cars make elementary mistakes, it’s that they can do so much with so little smarts.

A high quality driving AI would be unbeatable on the F1 track. Don’t hold your breath waiting for it.

No - that is very important. Different companies have vastly different approaches, with an ocean of differences between them. Comma.ai leans towards an end-to-end behavioral-cloning approach; Tesla comes off as more perception-focused while delegating policy to (maybe) discrete mechanisms (that’s still an industry guess; those guys share little to nothing and their stack changes every couple of months as they keep experimenting); companies like Waymo and Nvidia go for more traditional LIDAR-plus-conventional-software approaches to FSD.

So there’s a huge difference between the capabilities offered by each manufacturer. Currently, Comma.ai appears to provide the best driver-assistance system according to Consumer Reports.
As I said, it matters.

As for DL models demonstrating more abstract inferences - that’s all covered by my previous posts in this very thread; you’re free to scroll back and point out any particular flaws. My position has been very clear to those who’ve interacted with or read the posts here. Large models consistently show human-like intellectual behavior and capabilities, to the point where no other existing system comes close to competing with them.

The caveat is that such behavior arises from large models, the ‘L’ in LLMs. FSD companies running on ~4 TOPS of hardware simply cannot compete using large models - their stack is a glorified regressor. Hardware constraints force them to rely on 30-year-old CNN architectures, which you might be familiar with. There’s nothing “jaw-dropping” about this - anyone with even the remotest experience in FSD can tell you that.
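A rough back-of-the-envelope sketch of why a few TOPS rules out truly large models (all the numbers here are illustrative assumptions on my part, not any vendor’s published specs):

```python
# Back-of-envelope only; the model size, FLOP rule of thumb, camera count, and
# the ~4 TOPS figure are illustrative assumptions, not published specifications.
params = 175e9                  # a GPT-3-scale model, purely for comparison
flops_per_token = 2 * params    # ~2 FLOPs per parameter per generated token

chip_ops = 4e12                 # ~4 TOPS of on-board compute, as claimed above
tokens_per_sec = chip_ops / flops_per_token
print(f"~{tokens_per_sec:.0f} tokens/s at best")        # ~11/s, before memory limits

# Meanwhile a hypothetical 8-camera stack at 30 fps needs 240 frames/s of
# perception, so the compute budget per frame is tiny:
flops_per_frame = chip_ops / (8 * 30)
print(f"~{flops_per_frame / 1e9:.0f} GFLOPs per frame")  # ~17 GFLOPs: small-CNN territory
```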

Why does Elon keep harping on achieving FSD within a few years? Because they’re the only manufacturer focused on hardware, racing to add more and more compute to their chips. Hence Dojo - their custom silicon to squeeze out maximum performance. Tesla’s hardware is nearly 100x as powerful as its competitors’, but they can’t use it fully due to power constraints (the motors run off the same batteries, as a reminder).

But Elon is taking a bet: he keeps pushing hardware advancements, and Andrej Karpathy, the head of AI at Tesla, is very familiar with LLM research and frequently partakes in its experiments; the bottleneck is battery research, which Tesla needs to push in order to utilize the hardware fully.

Seeing all of that, it’s clear why Tesla is going to achieve FSD. News outlets don’t really talk about this aspect because it’s highly technical. I personally have no interest in Tesla per se, nor do I own one of their products, so I’m not an Elon fanboy fawning over his idiotic projects like Hyperloop.

But I would say Tesla is really ahead of the curve in this game. They know what it takes to get to FSD and seem able to deliver it in the future.

Wow, slow down and let the keyboard cool down, cowboy.
Even the illusionists acknowledge that conscious processes exist in the brain (they disregard qualia, however). If you are not conscious, then you are mechanistically forced to do certain things, like a zombie. And if you think you are a zombie, how do you, a hypothetical zombie, plan to explain to other hypothetical zombies that consciousness doesn’t exist?
Conscious processes interrupt the functional processes so that the functional processes can be “creatively” “pieced together”. That is the reason why we can do one-shot learning. And qualia is strictly a non-functional signal used to interrupt those processes (if it were functional, it would disappear into the problem-solving skills). But it is absolutely essential for noticing coincidences and for incentivizing the functional parts of the brain to learn a pattern according to how weird that pattern is.