The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do - Erik J. Larson

Larson argues that current logical models are not useful for real-world reasoning, and that ANNs (including GPT-3, etc.) are just veneers, relying purely on frequency statistics from massive datasets.
The justification is straightforward: first- or second-order logic provides no more information than was encoded in the premises. He claims that Abductive Logic (AL), or directed guessing, is required to create new information.
He argues (AFAIK) that the smarts for ANNs are still provided by people: data encoding (structuring, cleaning, etc.) on the input side, and interpretation of the output. He traces a long history for this ‘error’ back to Turing.
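
To make the deduction-vs-abduction contrast concrete, here is a toy sketch (my own illustration, not from the book; the rain/sprinkler rules are made up): deduction only unpacks what the premises already entail, whereas abduction guesses an unobserved hypothesis that would explain an observation, which is where the “new information” is supposed to come from.

```python
# Toy contrast between deduction and abduction (illustrative only).
rules = {
    "it_rained": ["grass_is_wet", "streets_are_wet"],
    "sprinkler_ran": ["grass_is_wet"],
}

def deduce(facts, rules):
    """Forward-chain: derive only what the known facts already entail."""
    derived = set(facts)
    for fact in facts:
        derived.update(rules.get(fact, []))
    return derived

def abduce(observation, rules):
    """Directed guessing: return every hypothesis that would explain the observation."""
    return [h for h, consequences in rules.items() if observation in consequences]

print(deduce({"it_rained"}, rules))    # nothing beyond what the premises encode
print(abduce("grass_is_wet", rules))   # ['it_rained', 'sprinkler_ran'] -- guesses, not entailments
```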

He has a lot to say about Big Data, tech culture, ‘bootstrapping intelligence’ fallacies, and many other large issues, which I won’t cover here.
In the end, he offers no solution to the abduction problem, but calls on the community to get back to what he describes as the real problem: AL.

I can’t help but feel this is something of a leap to mysterianism, but I’d like some local opinions, if people are willing.

Sounds suspiciously like he just revived Dreyfus’ classic book “What Computers Can’t Do: The Limits of Artificial Intelligence.” A great read even though it is now 50 years old. Keep in mind that Hubert later said that a machine would never beat a human at chess; then guess what happened.


He does go on a lot about Winograd Schemas (WG). They seem to be his main example, and the discussion suffers from the typical publication delay of books (1-2 years), but still, the story on those since the book came out is interesting.
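
For anyone who hasn’t seen one, here is the canonical Winograd Schema (the councilmen/demonstrators pair from Winograd, later used in Levesque’s challenge), sketched as a tiny Python snippet (the data structure is just my illustration): swapping a single trigger word flips which noun the pronoun refers to, which is why these were expected to resist purely statistical approaches.

```python
# Canonical Winograd Schema: one trigger word flips the pronoun's referent.
schema = {
    "template": "The city councilmen refused the demonstrators a permit because they {trigger} violence.",
    "candidates": ("the city councilmen", "the demonstrators"),
    "answers": {
        "feared": "the city councilmen",    # they feared violence -> the councilmen
        "advocated": "the demonstrators",   # they advocated violence -> the demonstrators
    },
}

for trigger, referent in schema["answers"].items():
    print(schema["template"].format(trigger=trigger), "->", referent)
```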

Initially, it looked like he was just out of date, but other work has since emerged suggesting that those WG breakthroughs were just gaming the benchmark rather than actually answering it:

Time will tell whether WGs just fall again, but I would not bet against the algorithms here.
In the meantime, WGs may be fashionable again:

as is DARPA’s Machine Common Sense program (just finishing):
https://www.darpa.mil/program/machine-common-sense
which I believe he was also involved in.


Thanks for the link. I still think DARPA has an evil AI sequestered in a black-ops lab somewhere. Then again, I did see something like that in a movie, but then the AI got loose on the Internet.

If you’re interested in some DARPA results, check out what people like Chandra Bhagavatula are doing:
