Let's Watch the Marcus / Bengio AI Debate

Join me on Twitch Monday Dec 23 at 3:30PM PST / 6:30PM EST. I’ll be watching this debate live and streaming with you. Voice chat will be open on Discord. This should be fun if we get a lot of chatter. There will be a chatroom voting system. :nerd_face:

I will start streaming a Pre-Debate Show around 1PM today, which will include a viewing of the 2017 Marcus / LeCun debate. This will give me a chance to set up all the moving parts of the voting system.

  • 1:00PM: Marcus / LeCun 2017 Debate viewing
  • 3:30PM: LIVE Marcus / Bengio Debate viewing

Voting instructions below.

7 Likes

Is anyone planning on joining me? This is going to be embarrassing if I’m all alone.

1 Like

It is running during the evening drive home. Perhaps I can be a naughty monkey and drive slightly distracted.

Please don’t do anything unsafe on my account!

I’ll join if I am home from work by then.

1 Like

That’s 1 AM for me. Of course I’ll be there! ;-).

4 Likes

You’ll post a link as it approaches yea?

I already posted the link to my Twitch above.

1 Like

That’s around midday Christmas eve for me, but I hope to be able to sneak away from family for an hour and join in. Looks like fun!

3 Likes

We are on the same timezone! I’ll also try to join :slight_smile:

3 Likes

I think if you click the Twitch link above (first post) or go directly to Matt’s Twitch url, it’ll start streaming to you.

Edit: Oops, Matt said that already.

1 Like

I’ll try. Sometimes, I forget even with reminders!

sounds fun!

Prereading for the debate:
http://www.montreal.ai/aidebate.pdf

1 Like

Marcus proposes that connectionist networks can only represent some subset of the training set used to build the network. This position simplifies to “I can only recall a remembered state.”

In a post responding to the Numenta December 4 research meeting, I outlined a general method of building an object representation based on a collection of cortical maps or regions.

Since that was already a very concentrated blast of material, and making it digestible would take perhaps ten times as much supporting background, I stopped with what I had posted and hoped that someone would pick up on the concepts if there was any interest.

Alas - nada.

Reading the Marcus position drives home that this high-level view of cortical representation is not mainstream, and that without it the constructionist model is as impoverished as Marcus claims.

He does not think big enough. I see multiple object representations, as outlined in my rough sketch; in particular, two are in common use in the brain. The objects are composed of short fiber tracts, and the resulting objects are joined by long fiber tracts.

The pair of high-dimensional representations most important to human speech sit at either end of the arcuate fasciculus, which joins the high-dimensional grammar store (Broca’s area) to the core of high-dimensional object representation (Wernicke’s area). This allows a modular construction of object particles to populate the structure of language templates.

When this is combined with the serial stream of consciousness described in this post, you have a speech production system that is vastly more complicated than most recurrent networks but, at its core, works on the same basic concept.

As you may have noticed, you may not know how a sentence will end when you start it. You have some object(s) and relationship(s) you want to represent, and you fire up a sentence form to encapsulate that object. Speech production is initiated and you perceive the speech as it is produced. As the sentence rolls on, the relationship part of the object lurking in the object store in the temporal lobe is selected, and this goes back through the arcuate fasciculus to prime the next part of the sentence production. This serial process is interactive between the two object stores (grammar and object/relationship), each influencing the other, working cooperatively to form the sentence.
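To make that loop concrete, here is a minimal toy sketch in Python. It is only an illustration of the serial, interactive flow between the two stores, not a cortical model; the template, the dictionary contents, and the names GRAMMAR_TEMPLATES / OBJECT_STORE are made-up stand-ins.

```python
# Toy sketch of the serial grammar/object loop described above.
# Everything here is a hypothetical stand-in, not a model of the cortex.

# Grammar store (stand-in for the Broca's-area side): templates with slots.
GRAMMAR_TEMPLATES = {
    "statement": ["<agent>", "<action>", "<patient>"],
}

# Object/relationship store (stand-in for the Wernicke's-area side):
# fragments keyed by slot type, selected by whatever was just produced.
OBJECT_STORE = {
    "<agent>":   {None: "the cat"},
    "<action>":  {None: "moves", "the cat": "knocks over"},
    "<patient>": {None: "something", "knocks over": "the coffee cup"},
}

def produce_sentence(template_name):
    """Fill the template slot by slot; each fragment produced primes the
    lookup for the next slot -- the serial, interactive part of the loop."""
    words = []
    prime = None  # nothing has been said yet
    for slot in GRAMMAR_TEMPLATES[template_name]:
        fragment = OBJECT_STORE[slot].get(prime, OBJECT_STORE[slot][None])
        words.append(fragment)
        prime = fragment  # what was just said selects what comes next
    return " ".join(words)

print(produce_sentence("statement"))  # -> "the cat knocks over the coffee cup"
```

Notice that the “sentence” is not planned in advance; each fragment is chosen only after the previous one has been produced, which is the point of the sketch.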

I have just described the production of external speech, but in the process we call thinking the same loop is kept internal, allowing modular manipulation of stored object fragments to form novel relationships between them. You can perceive this internal speech as an experience, and both store and recall it as if it had been perceived from an external source.

An important feature of this system is the novel recombination of sub-features and related generalization.

This is not the symbolic relationship that Marcus describes. It is the functioning of a properly configured connectionist system.

Of the prereading provided, this paper from Bengio comes the closest to what I am proposing here:
The Consciousness Prior - Yoshua Bengio

6 Likes

I would definitely like to know more about your ideas on how to reconcile the connectionist and symbolic AI approaches.

However, I was a bit lost in your explanations about how a mental manifold composed of representations of features & locations could exist / work. Could you illustrate your idea with the example of the coffee cup that Jeff often uses? I think it would help in understanding your more complex example of language (which involves serial conscious processing).

Are you using the term “constructionist” on purpose? Or was it supposed to be “connectionist”?

Even if your ideas are not easy to digest, it seems to me that you are suggesting that adding symbol manipulation to deep learning networks sounds like adding serial consciousness abilities to massively parallel unconscious abilities.

If yes, I have a similar intuition on my side (probably biased by reading too many of your posts :wink: ), but it is still very fuzzy in my mind and I am struggling to formalize it.

I haven’t read this paper yet. Thanks for the link!

It was supposed to be “connectionist,” but I am intrigued that “constructionist” does apply even though it was not intentional.

Today is a snow day here in Minnesota; I will be shoveling out two properties after work tonight, so I will not be able to “build a cup” using this system until tomorrow at the earliest.

And yes, you have hit on the core of my proposal: adding symbol manipulation to deep learning networks sounds like adding serial consciousness abilities to massively parallel unconscious abilities.

The global workspace is an encompassing framework, but it does not directly address the contents of the connected maps or how those contents evolve over time; I am attempting to fill in this missing part.

The brain is made of many massively interconnected maps or areas. I propose that there is a general organizing principle governing both the contents of these maps and how those contents evolve over time. This general system picks up at a higher level than HTM, but incorporates the basic mechanism of the Thousand Brains or hex-grid systems at the lower level, and the related cortical column computation at an even lower level.
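A crude way to picture how the levels nest (purely illustrative; the class names and group sizes below are my own shorthand, not anything taken from HTM or the Thousand Brains papers):

```python
from dataclasses import dataclass, field

@dataclass
class Column:          # lowest level: a single cortical column computation
    state: int = 0

@dataclass
class GridModule:      # hex-grid / Thousand Brains level: a patch of columns
    columns: list = field(default_factory=lambda: [Column() for _ in range(7)])

@dataclass
class CorticalMap:     # a map or region built from many grid modules
    modules: list = field(default_factory=lambda: [GridModule() for _ in range(10)])

# The level this proposal is aimed at: many maps joined by long fiber tracts.
cortex = {name: CorticalMap() for name in ("brocas_area", "wernickes_area")}
long_tracts = {("brocas_area", "wernickes_area"): "arcuate fasciculus"}
```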

2 Likes

Definitely watch this:

Slides:

TIME UPDATE! They moved it up 30 minutes to 3:30 PST.

How to Vote in Twitch Chat

!vote <points> <name> <contest>
  • points: Everyone gets 10 points to award. Followers of my channel get 100 points (so follow me dammit)
  • names: yann, gary, and yoshua (names will change between debates)
  • contests: delivery, technical, science, & rebuttal

Example

If you see Yoshua Bengio give an excellent technical rebuttal, you might award him points like this:

!vote 5 yoshua rebuttal
!vote 5 yoshua technical

This would give Yoshua a total of 10 points, split between two categories. If you run out of points, get more by following my twitch channel.

If you decide to re-tally your points, use !vote clear to clear your points and start over. You can do this as many times as you wish during the debate.

The points awarded within contests are tallied into an overall debate score. You’ll see all this on the screen when I get started. I hope to see a lot of you there with me!
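If anyone is curious how the tallying might work behind the scenes, here is a rough Python sketch of a chat-command parser following the rules above. The point budgets, names, and contests match this post, but the code itself is only an illustration, not the actual bot running on the channel.

```python
from collections import defaultdict

NAMES = {"yann", "gary", "yoshua"}
CONTESTS = {"delivery", "technical", "science", "rebuttal"}

class VoteTally:
    def __init__(self):
        self.budgets = {}                                     # points each user has left
        self.scores = defaultdict(lambda: defaultdict(int))   # scores[name][contest]
        self.spent = defaultdict(list)                        # per-user history for !vote clear

    def handle(self, user, message, is_follower=False):
        parts = message.lower().split()
        if parts[:2] == ["!vote", "clear"]:
            return self._clear(user)
        if len(parts) != 4 or parts[0] != "!vote" or not parts[1].isdigit():
            return "usage: !vote <points> <name> <contest>"
        points, name, contest = int(parts[1]), parts[2], parts[3]
        if name not in NAMES or contest not in CONTESTS:
            return "usage: !vote <points> <name> <contest>"
        budget = self.budgets.setdefault(user, 100 if is_follower else 10)
        if points > budget:
            return f"{user}: only {budget} points left (follow the channel for more)"
        self.budgets[user] -= points
        self.scores[name][contest] += points
        self.spent[user].append((points, name, contest))
        return f"{user} gave {name} {points} points for {contest}"

    def _clear(self, user):
        # undo everything this user has awarded so they can start over
        for points, name, contest in self.spent.pop(user, []):
            self.scores[name][contest] -= points
            self.budgets[user] = self.budgets.get(user, 0) + points
        return f"{user}: points cleared, start over"

    def overall(self, name):
        # contest points roll up into one overall debate score
        return sum(self.scores[name].values())

tally = VoteTally()
print(tally.handle("viewer1", "!vote 5 yoshua rebuttal"))
print(tally.handle("viewer1", "!vote 5 yoshua technical"))
print(tally.overall("yoshua"))  # -> 10
```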

3 Likes