My "Thousand Brains" book review

It could, if only to input the current weather, maybe… But the slice could be hardwired to weather data and always pondering it. The answer involving motor output is also an analogy to our anthropocentric way of envisioning communication: we don’t have direct access to someone’s state of mind, so we wait for them to voice an answer… but if we just extrapolate what we’re doing with current-state HTM, we don’t need conscious action: we’re directly decoding SDRs whose semantics we already know.
I’m not saying this reduced setup could directly be used to command autonomous agents, that’s in fact kind of my last question… but maybe it’s what Jeff envisions.
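To make the “directly decoding SDRs” idea a bit more concrete, here is a minimal sketch of what I have in mind, assuming an SDR is represented as a set of active bit indices and we already have a dictionary of SDRs whose semantics we know (all names here are hypothetical illustrations, not anything from NuPIC or the book):

```python
# Minimal sketch: decode an SDR by overlap against a dictionary of SDRs with
# known semantics. Hypothetical illustration only, not an existing API.

def overlap(sdr_a: frozenset, sdr_b: frozenset) -> int:
    """Number of active bits shared by two SDRs (sets of active bit indices)."""
    return len(sdr_a & sdr_b)

def decode(sdr: frozenset, known_semantics: dict, min_overlap: int = 10):
    """Return the label of the best-matching known SDR, or None if nothing is close."""
    best_label, best_score = None, 0
    for label, known_sdr in known_semantics.items():
        score = overlap(sdr, known_sdr)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= min_overlap else None
```

No motor output, no conscious “voicing” of an answer: we just read the representation out.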

Another way to put it: basically, I have no clue how the biological Basal Ganglia does its stuff and how it manages to be interconnected everywhere with our cortical layers and to understand and drive “volition”, but what I was proposing is simply to put ourselves in the Basal Ganglia’s shoes, and wire ourselves (or our decoders) to the cortical slice the same way it does.
Is it possible? Would it be enough? Would it be fast enough, or do we require an automated, real-time-reactive BG just for that (the speed issue)? Dunno, but those are my questions.

1 Like

@gmirey - that’s not a crazy idea, and in fact a couple years ago I spent a month or two being really excited about building an AI without drives, trained purely on predictive / self-supervised learning with no rewards. But I eventually grew pessimistic about that approach and dropped it. In brief (more details here), my main reasons are:

(1) I don’t think people are going to be satisfied with that kind of AI. We want our robots! We want our post-work utopia! I don’t think people will easily give up on those goals, unless we had rock-solid mathematical proof that agent-y AGI is definitely going to cause a catastrophe. So either way, the right approach is to try to design safe agent-y AGI, and either succeed or (much less likely) prove that it’s impossible.

(2) It cuts off the possibility of metacognition—the ability to learn models like “When facing this type of problem, I will think this type of thought”. I mean, it’s possible to ask an AI a question, and the answer immediately “jumps to mind”. That’s like what GPT-3 does. We don’t need reward for that. But for harder questions, it seems to me like you need your AI to have an ability to learn metacognitive strategies, so that it can break the problem down, brainstorm, give up on dead ends, etc. etc. Predictive learning is insufficient for that, you need RL, because “thinking a thought” is in the same category as “picking an action”, and as @Paul_Lamb says, you need a drive / reward to choose one action over another.
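Here is a toy way to picture that last point, treating “thinking a thought” as just another action in a bandit-style loop; everything below (action names, numbers) is purely illustrative, not a sketch of any real agent:

```python
# Toy illustration: if "thinking a thought" is an action, then choosing among
# candidate thoughts needs a learned value signal, just like motor actions do.
import random

ACTIONS = [
    "answer_immediately",      # the "jumps to mind" route, GPT-3 style
    "break_problem_down",      # metacognitive strategies for harder questions
    "brainstorm_alternatives",
    "give_up_on_dead_end",
]

q_values = {a: 0.0 for a in ACTIONS}   # learned "which thought is worth thinking"
alpha, epsilon = 0.1, 0.2

def choose_thought() -> str:
    # Without some reward-derived value there is no basis for preferring one
    # thought over another; epsilon-greedy over learned values supplies it.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q_values, key=q_values.get)

def update(thought: str, reward: float) -> None:
    # "reward" stands in for "did this line of thinking pay off?"
    q_values[thought] += alpha * (reward - q_values[thought])
```

Predictive learning can supply the candidate thoughts, but something like the update above is what lets the system prefer the useful ones.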

1 Like

Ok Steve, if reward and judgement are necessary, then what about the idea of the (hopefully human) controller taking the role of the basal ganglia itself?

This is not the same as a human with a reward button, and doesn’t suffer from the same issues.
Why?
Because in my opinion, a human indulging in wireheading is not a cortex tricking its BG into low-hanging rewards.
Since… if the BG is indeed the drive, motor and judge, then a wireheading human is maybe… really a BG using its powerful Cortex tool to knock itself out… instead of being a mischievous Cortex trying to trick its boss.

So… again, could we assume the “BG” role instead? Even with decoding tools, and maybe even with the help of a database of SDRs flagged “good” and some flagged “bad”, for letting the Cortex chew on its thoughts while we’re asleep, or when we need real-time values faster than we could handle… we’d still be able to redefine that database, and the values in it, at will (albeit at our slow human reaction-time pace). Cortex doesn’t have a drive without us.
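A crude sketch of what that external “BG” database could look like, assuming SDRs are sets of active bit indices and matching is done by overlap; every name and number here is hypothetical:

```python
# Purely illustrative sketch of the "human as BG" idea: a value table over
# flagged SDRs, consulted by nearest-overlap match while the human is asleep
# or too slow, and redefinable by the human controller at any time.

flagged_sdrs = {
    frozenset({3, 17, 42, 101}): +1.0,   # flagged "good" by the controller
    frozenset({5, 19, 77, 200}): -1.0,   # flagged "bad"
}

def judge(current_sdr: frozenset) -> float:
    """Return the value of the closest flagged SDR; 0.0 if nothing overlaps."""
    best_value, best_overlap = 0.0, 0
    for sdr, value in flagged_sdrs.items():
        ov = len(current_sdr & sdr)
        if ov > best_overlap:
            best_value, best_overlap = value, ov
    return best_value

def human_override(sdr, new_value: float) -> None:
    """The controller can redefine the database (and its values) at will."""
    flagged_sdrs[frozenset(sdr)] = new_value
```

The point being: the values never originate in the Cortex itself; they only come from the table we maintain.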

I haven’t fleshed out the idea enough, but those are my current lines of thought…

1 Like

I meant the words of the question itself. Like, if a human were asked this question, they would receive it through their ears, eyes, or fingers (if reading Braille). The AI may have a special “text” sensor, but in the context of the cortical algorithm, it is sensory input.

But why were those SDRs activated in the first place? Isn’t the generation of language a motor action (in the context of the cortical algorithm)? What motivated the algorithm to generate SDRs related to a particular answer (or any answer at all)?

I think you are proposing that we the humans would be providing that motivation (through some sort of explorer interface that is feeding decisions into the cortical algorithm). It probably depends on how verbose that interface needs to be. My suspicion is that this is a very tight integration in a biological brain (which may be difficult to design a usable interface for), but that is just speculation.

Just to clarify something I said before, I also agree with this description. I also see the neocortex as being driven by the “boss” to satisfy its needs. When I mentioned the humans “tricking” subcortical networks into issuing rewards, I meant it is the neocortex/“boss” partnership that is doing the trickery, with respect to the intent of evolution (the developer in this analogy).

1 Like

Every area of the cortex sends axons to the Striatum, which is part of the basal ganglia. These connections do not mix between cortical areas, so each cortical area has a distinct territory both in the cortex and in the Striatum. The Striatum is very much like an additional layer of the cortex. For every 1000 cells in the cortex there is 1 cell in the Striatum (approximately).

I have a hypothesis (not a theory) which states that reinforcement learning is critical for learning about abstract things. Unsupervised learning alone can take sensory inputs and recognize the image of a concrete object, but abstract concepts don’t necessarily have an obvious physical manifestation.

What is an abstract concept? It is something which is important to you, and which you need to use to solve some problem which you’ve encountered. I argue that “being important to you” is what all abstract concepts have in common. (Note: it might not be important anymore, but at one point you did care about it.)

What is “important to you” is whatever is relevant to getting rewards, and that is determined via reinforcement learning. Abstract concepts are your brain’s way of picking out “just the important things” from “everything you see”.

Mapping to biology: The rear half of the cortex processes raw sensory data to find all of the concrete objects. The Striatum uses RL to detect the objects which are important to you (relevant to predicting your rewards). The Striatum connects (via the thalamus) to the front half of the cortex. The front half of the cortex then re-processes the sensory data, except the data has been filtered by the Striatum for just the important stuff.
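Here is a rough schematic of that flow as stand-in code (this is a sketch of the hypothesis, not established biology; every function and data structure is a placeholder):

```python
# Posterior cortex extracts candidate objects (unsupervised), the striatum
# keeps only the reward-relevant ones (RL), and frontal cortex re-processes
# that filtered stream. All data structures here are stand-ins.

def posterior_cortex(sensory_input):
    # Unsupervised step: raw sensory data -> recognized concrete objects.
    # Stand-in: pretend the input already arrives as a list of labeled objects.
    return list(sensory_input)

def striatum(objects, reward_relevance):
    # RL step: keep only objects that have proven relevant to predicting reward,
    # i.e. "just the important things" out of "everything you see".
    return [obj for obj in objects if reward_relevance.get(obj, 0.0) > 0.0]

def frontal_cortex(important_objects):
    # Re-process the filtered stream (relayed via the thalamus in the hypothesis).
    return {"abstractions_built_from": important_objects}

def one_pass(sensory_input, reward_relevance):
    objects = posterior_cortex(sensory_input)
    filtered = striatum(objects, reward_relevance)
    return frontal_cortex(filtered)

# Example: the coffee cup has mattered for reward before; the wallpaper hasn't.
print(one_pass(["coffee_cup", "wallpaper_pattern"], {"coffee_cup": 0.9}))
```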

This is just a hypothesis, but I think the answer to your question is no.

3 Likes

Going back to the Dad’s song: we passively learn the sound and then learn to reproduce it to gain some reward. Eventually, the reward for learning a new word becomes more abstract, but the habit of learning a sound to gain more power has been formed and continues through life.

2 Likes

My devil’s advocate self a few hours ago before hearing your (and @steve9773’s, and now @dmac’s) rebuttal would have answered: “pure unsupervised pattern recog”. Now, I’m hearing you all, let’s move on ^^’

I’m well inclined to believe they are synaptically connected in some way, of course… What’s beyond me is that there seems to exist a mechanism able to tie reward signals to patterns in cortex, themselves indirectly recalled from an experience buffer mapping god knows how and when (okay, maybe we know when) to cortex as well, and somehow managing all this by pure pattern recog and maybe the occasional push from the amygdala… (pure pattern recog… or else we’re back to the same kind of turtles-all-the-way-down that @steve9773 expressed about needing an expert system on top of a pure cortical algorithm: who-would-judge-the-judge-otherwise).

I’m quite okay with that proposal.

So we’re on the same page here. The cortex+boss partnership as a whole is the identified problem, and could be a major problem if automated. It is already a problem for wireheading humans, and if we try to handcraft a “boss” and fail just a little, it may be the cause of catastrophic tilings of Earth with smiley faces.
So why not keep tighter control of the “boss” automation? If you have a direct interface to cortex (which seems possible… since the biological basal ganglia “have” a direct interface to cortex… dunno how, but it works)… then the problematic entity here could be human+artificial_cortex… which can suffer from (meta-?)wireheading, sure, but in the same way humans already do. If we have a board of humans, we can mitigate that risk, and we’d also be almost safe from the “tiling the earth” kind of scenario (unless you don’t test your controllers for schizophrenia).

Now, I understand that direct BG-like interfaces to cortex could be hard to design, and there could be human reaction-speed issues to solve.

But I don’t see it as infeasible (since a biological example exists already).
(Which loops back to my bafflement as to how the Basal Ganglia really works its magic)
(Which ain’t magic)
(So we’ll get there)
(If we don’t tile earth first by not hearing @steve9773)

3 Likes

Now that I’ve read 70% of the book … I have to be harsh … I won’t be finishing it.

The beginning was good and intriguing, the middle was OK, but the end was boring.
What do overpopulation, climate change and vaccines have to do with a book about a Theory of the Brain?!! ;(

I am sorry you didn’t like the end of the book. The three sections of the book are all about intelligence. Climate change and vaccines were briefly mentioned as examples of how two people can start with the same observations and yet reach different beliefs. This is very relevant to how the brain works.

The third and last section of the book is in my opinion the one that will be most relevant for the future, the one that is most remembered, and yet I expect that many people today, perhaps like yourself, won’t see that. That is why I told the story of the talk I gave at Intel. In that 1992 talk I described to the management team at Intel how the future of personal computing would be dominated by handheld devices, and yet they didn’t see why it was relevant. I told this story as a way to ask the reader to keep an open mind, as you might not immediately see why the third section is relevant. For me, it is one of the most important sections of the book.

I respect your opinion, but I would encourage others to read the entire book, including the last chapter which describes the ultimate role of intelligence, even beyond our species.
Jeff

8 Likes

If someone is working on the project to reverse-engineer the neocortex, I think they need to ask the follow-up question: "What if we succeed?" It’s wrong to just say that this question is off-topic, or that it’s someone else’s problem. If you choose to work towards developing a technology, you need to think through the consequences of actually developing that technology—and developing AGI would have impacts beyond anything since the industrial revolution, if even that. My review (at the top of this thread) had a lot of disagreement with Jeff about these bigger-picture questions but I am delighted that he sees that as an integral part of what he’s doing, and I think we need much more of that attitude from everyone in neuroscience and AI.

4 Likes

I don’t disagree with many of the points you raised in the book. Plus I’ve obsessed almost as much over Mars colonization as over extracting the “brain-juice” and implementing it.
My frustration probably comes from my expectation that the book would be dedicated to the science of the brain, rather than to the popular themes of current public discourse.
You “speculated” on so many topics, but not on the rest of the CC mechanisms and the rest of the processes of the brain. Maybe it was savvy, because that would have opened the core theory up to attack, but :wink: I wanna know! :wink:

I got fired up by the first part and then my enthusiasm subsided :slight_smile:

I think the reason for me is the same as with documentary movies and panel discussions on quantum mechanics or black holes and whatnot … after the fifth one they all sound the same …

It has always been the case that the markets of the layman and the specialist are well served, but the niche market of the intermediary is always neglected :frowning:

Besides the book … I love re-watching your presentations and discussions; I always find gems I missed.

It will be interesting to see you in a panel discussion with the AI orthodoxy.

Anyway I’ll shut up now :wink:

2 Likes

While you are correct that thinking about the “what if we succeed” question is important, right now we are pretty far from actually implementing something even remotely like this. It has to be accomplished with baby steps.

Thinking about ethics is all good, but right now we do not need humanitarians or ethics professors, since the focus is mostly on reconstructing a mechanical version of the brain. It would be a relevant point if and when we actually make a powerful AGI, but we are not at that stage right now.

Why am I saying this? Because often the media and the public get fired up about stuff they don’t even understand one-hundredth of. Someday, PR execs will probably get a sniff of what’s going on here, and I can guarantee the headlines will look like “We have finally understood the brain” or something similar.

At this point, where a good amount of progress has been made, we are still not at the stage of having a concrete experiment to prove we can make AGI. Thus, any useless speculation would likely end up with people who create hype and likely won’t understand any of it.

1 Like

That attitude worked out really well for chemical weapons, nuclear weapons, and biological weapons. First build them, then use them, then worry about the ethics. We wouldn’t want to teach scientists and engineers about ethics. They might decide not to amplify the power of the brilliant people doing such a wonderful job in leading the world forward. The risk of journalists exaggerating the progress in AI is just too great, better to wait and see how creative the US military can get with autonomous weapons first.

3 Likes

Isn’t the part you don’t agree with usually the best? Unless you like singing in the choir that’s being preached to, new ideas either change your opinion or at least help you improve your arguments against them.

Please, give us the benefit of your intelligence and tell us how you disagree. After you’ve read the last part, of course. ;-)

3 Likes

I did not say I disagree … at least not with most of the arguments.
I said I expected different content. Jeff is not obliged to meet my expectation … end of story.

… let’s not beat a dead horse …

Nice try btw, “give us the benefit of your intelligence” :wink:

2 Likes

You seem to be too fired up about humankind making AGI, like it’s ordering a book from Amazon :slight_smile: It’s an ongoing effort and there is no guarantee it’s gonna be completed anytime soon (being a little pessimistic here).

But do you know what the consequence is of too much hype that gets deflated in a second? You think journalism is a joke and not much harm can be done by the hype. First of all, news people rarely understand what they report. Since their audience is pretty big, they can have an impact on the majority of the non-scientific population.

But importantly - if there is a hype bubble, many leaders will divert funds to such research. These people are not expected to understand AGI, and most results would disappoint them very much. What happens then? Politicians get ridiculed for diverting funds to R&D instead of fighting hunger or other issues; and what do they say? That it will be the last time extra funds go to AGI/AI research.

Historically, what I said has happened many times - the public perception is directly proportional to the amount of diverted funding.

Don’t underestimate journalism - journalists provide a buffer between scientific and non-scientific people, but since most powerful people are non-scientific anyway, providing a true picture of a scenario may become difficult and may leave both parties discontented.

1 Like

I don’t expect a full AI winter this time as there have been many important wins in the field.
On the other hand, people can only wait so long for C3PO before they call humbug on the whole thing.
For many people, the only thing that will count is walking talking robots out of science fiction. The fact that Smart Speakers and internet back ends are getting pretty close to full AI is lost on a lot of the general public.

1 Like

Do you really think so? My experience is that saying something an AI can understand is hard work. They just don’t get context, and forget about using pronouns, this or that or the other.

1 Like