My "Thousand Brains" book review

I don’t trust humanities shitbags who don’t know a thing about AI yet talk about it like it’s the equivalent of SkyNet.

Don’t get me wrong - deploying autonomous weapons should require that the deploying country understands and accepts whatever those weapons do. But if a country is willing to deploy them, I support the report’s position that there should be no ban on AI weapons on the battlefield.

Also, let me remind you that right now, not much can be done by AI. We are still very far from a terminator-style robot. The current “AI” is simply masked object detection with a few bells and whistles on it to identify hostile units and eliminate them - nothing more sophisticated. Governments already deploy that sort of thing in drones and aircraft.

This is just pointless talk given the current state of research, at least until there are credible leaks of powerful AI within the DoD.


I have just completed a book review for “A Thousand Brains” which I would like to share with this community. I have written it from the perspective of one of you: an avid follower and occasional participant in this forum and community events for about a decade. I wanted both to pay tribute to Jeff and the valuable contributions he so openly shares with us, and to reflect on my own thoughts which reading this book has evoked. Perhaps those thoughts are also of interest or may stimulate further discussion. I certainly recommend that anyone who has not yet had the pleasure make certain to read “A Thousand Brains”.


If anyone would like this as a PDF, let me know where to post it or send it.

I think you might have misunderstood the voting mechanism. Jeff replies to questions about this here: Paper referenced in A Thousand Brains: A New Theory of Intelligence - #4 by jhawkins. The disambiguation is achieved in the cortical column, not at a level above the cortical column.

The major difference I see between On Intelligence and the current work is that On Intelligence assumed active prediction, while HTM implements passive prediction (a predictive state of the dendrite, with no predictive output). This is a radical change.

Most of your review is a presentation of your own idea. I would separate the proposed extension to TBT from the review and post them as two separate threads.

In parts 2/3 the major issue I see is the assumption that intelligence can be amoral. That is not reasonable, because any agent that acts will have moral consequences. If the agent does not “care”, that does not make its actions amoral. If the agent has no understanding of suffering, then it will not consider suffering and will act in ways we judge as immoral. If the agent does understand suffering, then it will have learned that from a particular community or individual, i.e. it will be morally biased. Part 2 is irresponsible in this regard, as it encourages AI researchers to continue research without taking moral and political responsibility for their work. It also ignores the huge amount of existing literature on the alignment problem. It reads like an initial opinion rather than a deep reflection taking contemporary research in the field into account.

Thanks a lot for your feedback and the suggestion to separate my review portion from my own “insights”. That is a good point. I can do that and put it in a separate thread. It only landed in my review because it was directly evoked by the book.

Regarding the terms “disambiguation” and “voting”, I think I do understand Jeff’s use of them. (Thanks for the link to that other thread, btw.) Often there is no disagreement in principle in such discussions, only a slightly different definition of a particular term being used. Even if we did not perfectly agree on the definitions, we almost certainly agree on the principles. In this case, I do agree that disambiguation is being achieved in every single cortical column. I would not question that. But in TBT, disambiguation takes place at multiple levels. It does not end with the output of a single cortical column. The final disambiguation of what is being perceived (the stable, invariant perception) does not result until the so-called “voting process” has taken place. In fact, many individual cortical columns are “voting” against each other. Some CCs remain in disagreement until the “voting” (or consensus building) takes place. I would be very surprised if you or Jeff did not agree with this in principle. And I want to thank you for your feedback, because it goes to show that I have to be careful with the definitions of the terms I use. I actually provided my definitions in the paper, at the risk of making the review too long.
Regarding parts 2 and 3, I like your thoughts, but I will not comment on them extensively. The philosophical arguments are even more treacherous when it comes to using differing definitions for terms. :wink:
I only wish to say that I do not consider part 2 to be written in an irresponsible manner. This is not to take away from your stated objections, which are indeed not given enough weight. However, Jeff does state repeatedly that much care is to be taken with any developments. I believe this section was a response to a trend dominating much of the public debate: that AI, by its very nature, automatically leads to some form of guided behavior (be it ego-centered or otherwise). Jeff is not underplaying the “impact potential” of machine intelligence. This brings responsibility. Perhaps he should have reiterated it more; I think it is self-evident for him and was therefore only mentioned briefly.

Exclusively anthropomorphizing AI is wrong. What if it just thinks? What if it doesn’t even have a mind or a will? It’s great that AI gets so much scrutiny ethics-wise, but it’s not good that people are mostly preparing for a paperclip god. I think AI will solve all of the STEM fields long before we get superintelligence, and that is the bigger existential threat.


Yup, look at GPT-3. It has some kind of intelligence without agency, personality, desires or morals, except for mimicking them in the very limited context of a few guiding phrases in a user-provided prompt. In that limited space/time context it can impersonate dozens of “characters” in parallel, good or evil, which it learned to mimic from everything it read. None of these millions of potential personas is, or needs to be, “alive” beyond a few seconds or minutes of simulated “thinking”.

There isn’t a trace of memory between the current and previous invocations of the same character. All are new and ephemeral.

I assume you think I am exclusively anthropomorphizing AI, which I am not. Morality is a human concept; it is humans who decide whether an AI is good and/or bad. This does not make the AI human. Humans can only judge the morality of an AI based on human conceptions of morality.

Your “What if it just thinks?” is itself an anthropomorphic statement about AI. Whether the AI thinks, has a mind, or has free will does not matter with regard to the moral consequences.

Nobody I know is preparing for a “paperclip god.” I have not seen anyone on this thread preparing for a “paperclip god.” This is a distraction from the discussion, and there are already concrete ethical dilemmas regarding the use of AI. It would be outright stupid to wait for AGI before worrying about the ethical concerns. AI is already having major impacts on society, and the ethical issues were not considered.

The debate is not about what a superintelligence will or won’t do. That is one relatively insignificant issue compared with the major moral concerns which are already here. For example, what to do about autonomous weapons. AI engineers need to be educated more broadly than previous generations of engineers.

This thread [edit: actually I was thinking of another thread on ethics, not this thread] is full of unconscious incompetence - people who know so little about the topic that they don’t realize they know nearly nothing about it. Their own opinion seems just as informed as anyone else’s because they do not even understand the problem. To put this in another perspective, asking a philosophy student to implement an AI without first learning anything about engineering would produce an AI about as effective as the engineering student’s efforts to develop moral practices. The unfortunate difference is that we are protected from the incompetent engineer’s AI, but we are not protected from the incompetent ethicist’s morality.

I will try to surprise you :slight_smile: The presentation of the columns as voting is perhaps misleading, because it leads to images of invariant representations that are shared and “agreed on.” I think the problem Jeff addresses is how the system could function without needing those representations to be passed around. It is an interesting angle on avoiding that problem, and I would like to see a neuroscientist review that aspect of the book.

The concept of voting is our projection of an explanation onto the dynamics. The “votes” are actually context at the inputs of cortical columns i.e. the votes are massively distributed, not centralised, and each “vote” is on a unique ballot - it is just our independent observation that interprets this to have a result “as if” the columns were voting. In a “real” voting system there is a common ballot i.e. a shared representation. The brilliance (IMO) of Jeff’s idea is to avoid the shared representations.

I think you are missing the critique - the problem is not whether Jeff is or isn’t a nice person. The problem is that his conception of ethics is antiquated. Like neuroscience, philosophy is a moving target, and if you are not engaged in reading the contemporary work then you are running with ideas that are probably many, many decades, if not centuries, old. Ask the average person in the street about neuroscience and you’ll probably get answers from the early 20th century (and probably much earlier!).

18 posts were split to a new topic: Politics in AI Research

To avoid cluttering this thread, I split some of the posts to a separate thread for the discussion of AI research and political goals. Feel free to continue the discussion there.



I would like to focus on this point first. I am really interested in understanding your understanding of the voting process in TBT. To me it seems quite apparent that there is some uncertainty as to how this voting process exactly takes place. IMO the best description is the one provided by Eric Collins (CollinsEM) in the parallel thread you referred to earlier.


This is exactly my understanding of how the consensus building takes place. You are still very right that there is no centralized vote counting. But ranked-choice voting still leads to the result that, in the end, for the entire “superset” of cortical columns involved in the voting, the most commonly selected choice emerges as the stable “winner”. It is almost like finding the highest common denominator of a group of numbers (to use a mathematical metaphor). This is still a disambiguation process taking place at the collective level of the cortical columns. The collective has not decided which model best fits the perceived reality (input) until this consensus is reached. So final disambiguation cannot be attained by any single cortical column. It requires the collective of cortical columns to reach that consensus. This is the true, decisive, final disambiguation of the perception.
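To make that concrete, here is a toy sketch of the kind of distributed consensus I have in mind (my own illustration, not Numenta’s algorithm; the object names and candidate sets are invented). Each column keeps only its own set of candidate objects and narrows it against what the other columns still consider possible, so a single “winner” emerges without any central vote counting:

```python
def vote(columns):
    """columns: list of sets of candidate object ids, one set per cortical column."""
    changed = True
    while changed:
        changed = False
        for i, own in enumerate(columns):
            # Lateral "votes": anything some other column still considers possible.
            lateral = set().union(*(c for j, c in enumerate(columns) if j != i))
            narrowed = own & lateral if lateral else own
            if narrowed and narrowed != own:
                columns[i] = narrowed
                changed = True
    return columns

# Three columns sensing the same object from different "perspectives":
cols = [
    {"coffee_cup", "soda_can"},    # touch patch: smooth cylindrical things
    {"coffee_cup", "logo_mug"},    # vision: things with this printed logo
    {"coffee_cup", "wine_glass"},  # another touch patch: things with a handle
]
print(vote(cols))  # -> [{'coffee_cup'}, {'coffee_cup'}, {'coffee_cup'}]
```

No column ever sees a shared ballot or a tally; each only narrows its own candidates against its lateral context, yet the collective settles on one object.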

Perhaps the problem in understanding this is that the representation of the final perception is not stored in any single cortical column. The knowledge representation of the perception taking place is only stored outside the cortical columns, in the active distal axon connections of the group (collective) of participating cortical columns. We are used to looking inside the CCs, in their SDRs, for the representations that we associate with consciousness. But in TBT the representation that corresponds to consciousness is in the network of distal axon connections between the CCs. Only the set of distal connections that wins remains active. And that set has the common denominator in each CC active.


Because the voters are not sharing the same ballot, the analogy does not work very well. You can’t vote for the same thing if there is no shared representation of the “thing”.

Can you point to where that is claimed by Jeff?

I think this is the “wrong” way to think about it. It tries to bring back the notion of representation, which is what TBT is trying to avoid. It is unclear to me that the way hierarchy is dealt with is even theorised by Jeff. There is some hand waving going on, but that is not a theory of intelligence that can be implemented. There are fundamental problems, like how to integrate active prediction.

In summary, I think you are making some huge leaps in your interpretation of TBT and over-estimating what it has actually achieved. That is understandable as the book does come across as highly (over?) confident.


This is a subtle point that I think is important to highlight. What each cortical column is voting on is its perspective of what is being observed. For example, a cortical column dealing with touch has no concept of color or smell or contentment. Those semantics make up the “whole object”, but no one cortical column (low in the hierarchy) has access to all of those semantics. A CC related to touch might have nearly identical models for two coffee cups with the same shape and texture, while another CC might have very different models for “the red coffee cup” vs “the coffee cup with the Numenta logo”.

I definitely share the view in the description above, stating that “what each cortical column is voting on is its perspective of what is being observed”. I think that is a very good way to describe it. But please also try to view this neural architecture from the other perspective: the connections between the CCs, that is to say, connections outside the CCs. These outside connections are necessary precisely in order for the CCs to be able to observe anything taking place in other CCs. Without the distal connections reaching across to other, distant CCs, those CCs would not be able to “observe” anything beyond their primary inputs from the sensory organs. They would be blind to activity in neighboring CCs. So my description of the TBT activity is perfectly compatible with this one, but I am describing the long-distance axonal connections as the ones that “associate” the models in different CCs (which could indeed be very different from one another, such as auditory and visual). It is these distal axon connections that make “observing” (or getting inputs about) what others are voting for possible. Therefore, IMO, these long-distance connections are the key to what is going on. They associate the models in different CCs with each other by being active and contributing to co-activation in the other CCs. They are practically the representation of the observed object at the extra-CC level (meaning outside single CCs).
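To illustrate what I mean by “the representation lives in the long-range connections”, here is a toy sketch (my own illustration of the idea, with all names and structures invented). The “object” is not any single column’s SDR but the set of inter-column links that remain co-active once the vote has settled:

```python
def active_links(column_sdrs, connections):
    """column_sdrs: dict cc_name -> set of active cells in that CC.
    connections: set of ((cc_a, cell_a), (cc_b, cell_b)) long-range links."""
    links = set()
    for (cc_a, cell_a), (cc_b, cell_b) in connections:
        # A link counts as part of the object's representation only if
        # both of its endpoint cells are currently active.
        if cell_a in column_sdrs.get(cc_a, set()) and cell_b in column_sdrs.get(cc_b, set()):
            links.add(((cc_a, cell_a), (cc_b, cell_b)))
    return links

cols = {"touch": {"t3", "t7"}, "vision": {"v1"}}
links = {(("touch", "t3"), ("vision", "v1")),
         (("touch", "t9"), ("vision", "v1"))}   # t9 lost the vote, so this link drops out
print(active_links(cols, links))  # only the ('touch','t3')-('vision','v1') link stays active
```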


I suppose it comes down to the learning algorithm and how representations are selected in the “output layer”. I don’t think the details are solidified yet, but if I understand correctly, the papers have depicted connections between CCs as distal (i.e. they put cells into a predictive state), whereas active states in the output layer are driven by activity in the input layer (in the papers, they were actually just chosen randomly, but presumably that is only a temporary incomplete implementation).

Assuming the learning algorithm for CC-to-CC connections is similar to how distal synapses are formed in the TM algorithm, this would mean that no semantic information from other CCs is included in the representations in the output layer of a given CC. Populations of cells become predictive based on the semantics in other CCs, but they themselves do not encode those semantics.
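To illustrate that point, here is a minimal sketch of the behaviour I am describing (a simplification along the lines of the TM-style predict-then-activate rule, not Numenta’s code; the cell and minicolumn names are invented). Lateral input from other CCs only depolarizes cells; which cells actually become active is decided by the feedforward input:

```python
def compute_active(proximal_drive, predictive):
    """proximal_drive: dict minicolumn -> set of candidate cells receiving feedforward input.
    predictive: set of cells depolarized by distal input from other columns."""
    active = set()
    for candidates in proximal_drive.values():
        predicted_here = candidates & predictive
        # If some candidate cells were predicted, only they fire;
        # otherwise the whole minicolumn bursts (all candidate cells fire).
        active |= predicted_here if predicted_here else candidates
    return active

proximal = {"mc0": {"c0a", "c0b"}, "mc1": {"c1a", "c1b"}}
predicted = {"c0a", "c9z"}  # c9z is predicted but receives no feedforward drive
print(compute_active(proximal, predicted))  # active: c0a, c1a, c1b - c9z never activates
```

The key property is that the lateral/predictive input biases which cells win, but contributes nothing directly to the active representation itself.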

Incidentally, I have implemented the output layer in a slightly different way, which aligns more with your thinking. In my case, I am sending proximal connections across CCs (via hex grids) rather than distal connections. Activity from the input layer is combined with activity from the output layers of other neighboring CCs when forming those grids. Thus the representations in the output layer of a given CC are injected with semantics from other neighboring CCs. This deviates from what Numenta has proposed, though.
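Very roughly, the difference looks something like this (a rough sketch of the idea rather than the actual hex-grid implementation; the function name, scoring rule and parameters are invented). Neighboring columns’ output activity contributes to the score that selects the winning cells, so their semantics end up encoded in the representation itself:

```python
def output_representation(own_input, neighbor_outputs, cell_synapses, k=2, lateral_weight=0.5):
    """own_input: set of active cells in this CC's input layer.
    neighbor_outputs: list of sets of active output cells in neighboring CCs.
    cell_synapses: dict output_cell -> (feedforward synapse set, lateral synapse set)."""
    scores = {}
    for cell, (ff_syn, lat_syn) in cell_synapses.items():
        score = len(ff_syn & own_input)        # feedforward (proximal) support
        for n_out in neighbor_outputs:         # lateral support, also treated proximally here
            score += lateral_weight * len(lat_syn & n_out)
        scores[cell] = score
    # The k best-supported cells form this CC's output SDR, so neighbor
    # activity directly shapes which cells are chosen.
    return set(sorted(scores, key=scores.get, reverse=True)[:k])
```

Contrast this with the distal version above, where neighbor activity could only put cells into a predictive state and never influenced which cells carried the representation.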


If my current understanding of the Numenta model is correct, then the output from other columns arrives on the apical dendrites in layer 1. Much like the distal dendrites, these inputs do not typically drive activations, but rather bias the minicolumn to make it more likely to fire when presented with the appropriate proximal inputs. This may be one possible pathway for voting between columns to occur.

That being said, there are other pathways for columns to communicate with other columns: most notably through the thalamus, and via the long-range connections between regions.
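If it helps, here is a toy illustration of the biasing I mean (the threshold and numbers are made up, not taken from any Numenta model). Apical input never drives a minicolumn by itself; it just lowers the effective threshold so that matching proximal input wins more easily:

```python
def minicolumn_fires(proximal_overlap, has_apical_bias, threshold=10, bias=3):
    # Apical input from other columns/regions lowers the effective threshold
    # but cannot activate the minicolumn on its own.
    effective_threshold = threshold - (bias if has_apical_bias else 0)
    return proximal_overlap >= effective_threshold

print(minicolumn_fires(8, has_apical_bias=False))  # False: proximal input alone is too weak
print(minicolumn_fires(8, has_apical_bias=True))   # True: the bias tips the same input over
print(minicolumn_fires(0, has_apical_bias=True))   # False: apical input never drives activity alone
```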


I believe that in Numenta’s models the apical connections feed back from the output layer to the input layer, not between CCs. I could be out of date on that, though. That view is based on the Columns and Columns Plus papers, which are of course a bit dated by now (outdated may be too strong - the theory has obviously advanced since those papers).


My assumption is as per Eric’s. Paul, where do you see the axons going? Do they all go to a higher level? What path of CC communication do you prefer?


The bidirectional fiber tracts that join layer 2/3.
These are inter-map connections - and they are myelinated fibers! That means that over short distances, like inside a head, they are about as cheap in communication cost as nearby lateral connections.
