Flushing out the radicals

One way of exploring the challenge of building truly intelligent machines (as per Jeff’s definition) is to identify assumptions that are currently leading us astray. Perhaps Jeff’s core insight, many decades ago, was to look to neuroscience for inspiration in computer science. The story goes that this was too radical at the time, at least for the academics Jeff courted.

Here is a proposal for categories of assumptions held by HTM community members. I’ve listed them in order from least to most radical:

a) HTM
b) Computer Science
c) Modern Science
d) Modernity
e) Philosophical
f) Non-linguistic
g) Anthropocentric

Which ones do you believe will need to shift to solve the riddle of truly intelligent machines? For example, maybe it is only a question of getting the HTM algorithm right, so a) is where the changes need to occur. Or maybe you believe we also need a new paradigm in computer science, so it would be b).

At the moment I believe it is c) for Jeff’s objective and e) for AGI. I’m intrigued to find out whether there are other radicals out there :wink:


Nicely phrased, Mark - there are definitely one or more “grand challenges” to be solved, in addition to getting HTM right, to realise Numenta’s goals. If I can only choose one option on your list, I’ll take “c”.

Doing science “in the open” (call it open science or open source or open access) is a radical departure from both commercial and academic research practices. Kudos to Jeff and Numenta and this community for moving the needle.

Another category, not on your list, is what we might call the ‘Marketing Challenge’ (making one’s research achievements both commercially viable and easy for more and more social groups to responsibly adopt). That requires investment and resources to get your results and point of view discussed and disseminated.

Love to hear more responses …


The requirement for ‘truly intelligent’ is a bit of a loaded issue. There are all kinds of implications packed in there that might turn out to be misconceptions. If you’re going to put this question, I think you need to spend time nailing down just what AI success looks like. I have my view; it won’t be yours.

Re your list, I would ignore everything from (d) on. We certainly don’t know enough (c) science, but I don’t see this as a source of major paradigm problems either.

Computer Science is full of assumptions that are serious limitations on understanding brain(s) as computing machines. We have no idea; we don’t even know what we don’t know.

And HTM is equally at fault. It’s a monumental idea, but to think for a moment that it’s more than a tiny part of the whole is just wrong. I expect to learn that columns are computing machines, and with the right software they can do HTM and many other things besides. But that’s a story for another thread.

Re AGI: it seems inevitable to me that an AGI built by us will use the same computing model as brain(s) we can study in the lab. It will exceed our abilities by scale, not by kind. A primate or corvid brain with more sensory data, more working memory and a million times faster would be more than our match. That’s seriously scary.


I was led astray by the assumption that the brain could be simulated at a low resolution of time & space. Now I’m convinced that the only good path to AGI is an accurate simulation of biology.

HTM serves a purpose: to demonstrate certain principles of neuroscience. I think those principles are at work inside all biological intelligences. However, they are implemented in dramatically different ways, with important implications for all of the other principles that govern how the brain works.

“All models are wrong, but some are useful” (George Box)


I think you have to be wrong. I don’t believe it is physically possible, within the boundaries of known science, to interrogate the (chaotic) state of a single neuron or simulate its (chaotic) behaviour with sufficient precision to accurately simulate (predict) future states. You can only approximate, and if you approximate one part, which part do you choose?
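
To make the approximation problem concrete, here is a minimal sketch of sensitive dependence on initial conditions. The logistic map is only a stand-in for a hypothetical chaotic state variable, not a neuron model; the point is that any finite-precision approximation eventually diverges from the “true” trajectory:

```python
# Two trajectories of the chaotic logistic map, identical update rule,
# initial states differing by 1e-12 (a stand-in for measurement or
# rounding error; this is not a neuron model).

r = 3.9                    # parameter in the chaotic regime
x_true = 0.400000000000    # the "real" system
x_sim = 0.400000000001     # the "simulation", off by 1e-12

for step in range(1, 61):
    x_true = r * x_true * (1.0 - x_true)
    x_sim = r * x_sim * (1.0 - x_sim)
    if step % 10 == 0:
        print(f"step {step:2d}: error = {abs(x_true - x_sim):.2e}")

# The error grows roughly exponentially; within a few dozen steps the
# two trajectories are effectively unrelated.
```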

My view is that we have to simulate the computational model of neural units (perhaps cortical columns) with sufficient accuracy that they perform the same (statistically similar) computations and produce the same outputs on given inputs.

Do you have a reason based on experiment, known science or high authority to favour your view over mine?


Given sufficient time and memory, computers can operate with arbitrary precision.
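
For instance (a minimal sketch using Python’s standard decimal module), the precision is simply a parameter you choose, bounded only by time and memory:

```python
from decimal import Decimal, getcontext

# Request 80 significant digits; any figure is possible given
# enough time and memory.
getcontext().prec = 80

print(Decimal(2).sqrt())        # sqrt(2) to 80 digits, far beyond a float
print(Decimal(1) / Decimal(7))  # a repeating decimal, 80 digits of it
```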

That’s my view too! Except I don’t think that the basic unit of computation is a cortical column. I think that the basic units of computation are atoms and molecules and their chemical interactions. And we have the technology to simulate chemicals.

I highly recommend reading up about conductance based models, which describe how electricity flows in the brain: Conductance-based models - Scholarpedia

And also reading about kinetic models, which describe large proteins: https://papers.cnl.salk.edu/PDFs/Synthesis%20of%20Models%20for%20Excitable%20Membranes%2C%20Synaptic%20Transmission%20and%20Neuromodulation%20Using%20a%20Common%20Kinetic%20Formalism%201994-3743.pdf
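
For readers who want a feel for the conductance-based style before diving into those references, here is a minimal single-compartment sketch. The membrane follows C dV/dt = -Σ gᵢ(V - Eᵢ); the constants are illustrative only, not fitted to any real cell:

```python
import math

# Minimal single-compartment conductance-based membrane, forward Euler.
# dV/dt = -(g_leak*(V - E_leak) + g_syn(t)*(V - E_syn)) / C
# All constants are illustrative, not fitted to any real cell.

C = 1.0         # membrane capacitance (uF/cm^2)
g_leak = 0.1    # leak conductance (mS/cm^2)
E_leak = -70.0  # leak reversal potential (mV)
E_syn = 0.0     # excitatory synaptic reversal potential (mV)
dt = 0.1        # integration time step (ms)

V = E_leak
for step in range(1001):
    t = step * dt
    # Synaptic conductance pulse arriving at t = 20 ms, tau = 5 ms decay.
    g_syn = 0.5 * math.exp(-(t - 20.0) / 5.0) if t >= 20.0 else 0.0
    dV = -(g_leak * (V - E_leak) + g_syn * (V - E_syn)) / C
    V += dV * dt
    if step % 200 == 0:
        print(f"t = {t:6.1f} ms   V = {V:7.2f} mV")
```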


The Bitter Lesson

Rich Sutton

March 13, 2019

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore’s law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation. There were many examples of AI researchers’ belated learning of this bitter lesson, and it is instructive to review some of the most prominent.

Continued at The Bitter Lesson


I will go out on a limb here and propose what is wrong with the scientific paradigm we currently assume in regards to AI.

Since Newton, science has assumed that three of Aristotle’s four causes are sufficient to explain nature. The cause that was ignored is final causation, the goal that leads to an action. This is why it is impossible to use modern science to build intelligent machines.

These assumptions lead to overly ambitious claims like Turing’s ‘universal’ computer. Gödel’s incompleteness theorems are a formal demonstration of the limitations of that conception. This is associated with limitations in the foundations of the mathematics modern science relies on.

Fortunately somebody has already identified these problems, explained them in a brilliant book, and developed the mathematics required to exploit a new paradigm. Robert Rosen’s book Anticipatory Systems was published in 1985. The mathematical formalisms are developed at great length by his student A.H. Louie and make extensive use of category theory.
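
For a concrete handle before tackling the book: Rosen characterises an anticipatory system, roughly, as one containing a predictive model of itself or its environment, whose predictions about a later instant change the system’s state now. Here is a toy sketch of that structure (my own illustration of the idea, not Rosen’s category-theoretic formalism):

```python
# Toy anticipatory controller: it acts now on a model's prediction of a
# LATER state, rather than reacting to the current state. Only an
# illustration of the structure Rosen describes, not his formalism.

def internal_model(temperature, heating, minutes_ahead):
    """Crude predictive model of the environment."""
    drift = 1.0 if heating else -1.0
    return temperature + drift * minutes_ahead

temperature, heating = 20.0, False
for minute in range(10):
    predicted = internal_model(temperature, heating, minutes_ahead=3)
    # Anticipatory step: switch the heater based on the PREDICTED
    # temperature, before the current temperature ever gets there.
    heating = predicted < 18.0
    temperature += 1.0 if heating else -1.0
    print(f"t={minute}  temp={temperature:.1f}  heating={heating}")
```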

If anyone else has taken the time to understand Rosen’s work on anticipatory systems, and is interested in exploring the implications for computer science, then please let me know!

If you would like to understand Rosen’s work then please read the book - I am not able to explain it for you. If you do not agree with Rosen and you have not read Rosen’s book, then I am not interested in your uninformed opinion :slight_smile:

I am not familiar with Rosen’s work. I found a copy here: https://crealectics.files.wordpress.com/2018/08/anticipatory-systems-rosen.pdf. There is mention of a 2002 update. He is now deceased.

There is enough in a brief scan to make it worth looking into. While I’m getting up to speed, would you be so kind as to point to where it says that “it is impossible to use modern science to build intelligent machines”.

I did not immediately find that anyone has taken up his theories or advanced them. Do you know of any later work?

True. But for a computer to operate at sufficient precision to simulate the universe, it would require more time and memory than is available in the universe. Of course, if we are a simulation, this would all be done in the next level up.
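
A rough worked figure, assuming the standard order-of-magnitude estimate of about 10^80 atoms in the observable universe:

```python
# Back-of-envelope: storing even ONE byte of state per atom in the
# observable universe (~10**80 atoms, a standard rough estimate)
# requires a memory larger than the universe it is built from.

atoms_in_universe = 10 ** 80
bytes_needed = atoms_in_universe  # one byte of state per atom
print(f"{float(bytes_needed):.1e} bytes needed")  # ~1.0e+80
```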

Sorry, but I don’t share your opinion. My PC has atoms and molecules too, but its ability to perform computation arises from the structures built from them, and is not inherent in them. The brain has millions of columns, which is plenty given the right computational model.


That is not a copy of the book. It is mainly the prolegomena that was added to the 2012 edition and is written by M. Nadin. I would ignore that and focus on Rosen.

I did not quote Rosen when I wrote that it is impossible to use modern science to build intelligent machines. This is an obvious conclusion from Rosen’s work. He does go into some detail about information theory, learning, and neural networks.

As I mentioned, A.H. Louie is one person who has furthered the work, you can find his publications at Publications | A. H. Louie

OK, I found a bunch more fragments, references and related work. I read all of Louie’s site, and this: [PDF] Robert Rosen's anticipatory systems | Semantic Scholar. The books are way too expensive, so we go with what we have. I found this:

Robert Rosen claimed that life, as a property of a living system (an organism), is not caused by the physical nature of what it is composed of but, rather, is a consequence of complex organization of a certain type in a material system. In other words, the causal basis of life is a matter of relational causality rather than physical, material causality.

No problem there. Elsewhere we have the idea that science is reductionist, so science cannot predict emergent behaviour, such as life. All matter is constrained to follow the laws of science, but those laws do not predict the behaviour of complex systems, or life. All good so far.

I still don’t find anything that deals with either computation, or intelligence. Could you please point me to material I can read, or guide me through the steps to the obvious conclusion as you see it.

No, I can’t, sorry. Your best bet is Rosen’s book. Perhaps try the library.

I’m disappointed that you’re not willing to even attempt a paraphrase. I go with the fragments I can find.

So my working hypothesis is that Rosen and his followers have staked out a region of reductionist pure science and proved rigorously that life cannot be explained by anything found in that territory. Well done! We know some places not to look.

These books concentrate on various kinds of systems, pure science and life, but this forum is more about computation and its relation to brain function and intelligent behaviour. I was not able to find that these books deal with these topics at all.

So my working hypothesis is that these books do not deal with anything of particular interest or relevance to this forum. If there is a link, feel free to point it out.

Hello all!

I think that ‘fitness for some anthropobiologically advantageous purpose’ should always be front and center of any effort to develop “non-neural generally intelligent machines”.

I claim this because (AFAIK and can see) the emergence of our verbal (symbolic-language-capable) neural “actention” selection serving systems (≈ brains) involved not many, but momentously pleiotropic, “ambiadvantageous” mutations, the phenotypic results of which were naturally selected;
not selected purely by luck, but because the individuals who pioneered them outcompeted mating rivals who lacked them.

This outcompeting occurred in our ancestors’ extended-family group, and after a few generations a selective sweep was completed there. What typically followed were usurpations of (combined with occasional bride-stealing from) neighboring human family groups that lacked the then most recent momentously “ambiadvantageous” mutation.

How these, our common protohuman or early-human mutation-pioneering, ancestors became reproductively successful involved their unique capacity not merely to adaptively sequester or block the excitatory and otherwise distress-motivating neural messages (originally and post-traumatically generated at the core of any Conditioned-in kept Unconscious Reverberating State Effecting Symptoms, or CURSES), but also their additional capacity to reroute, via the sprouting of new neural connexions, the spiking at the core of CURSES; a rerouting toward its being used as internal unconscious co-motivation for ambiadvantageously paid, adaptively focused “actentions” (behaviors or preoccupations).

What I most simply refer to as “ambiadvantageous coping” is that CURSES can, in the case of us humans, be securely enough sequestered while their excitatory core is ‘siphoned off’ as co-motivational fuel for taking (exploiting) more or less directly procreation-promoting opportunities.

Almost everything we humans uniquely do, or are exceptionally good at doing, suits (IMO) being referred to as “EAVASIVE” (my most pragmatically pieced-together ÆPT acronym).

Let’s not ignore the fact that ‘one thing’ our language-leveraged cognitive functions have endowed us with is a capacity not just to motivate or demotivate, but also to deceive ourselves as well as others.

P.S. Although it brings this comment off topic, I’d like to mention that what puts in place any (and all) genuine CURSES (same spelling in plural) is an instinctively sensed circumstantial Specific ‘Hibernation’ Imploring (SHI) threat; i.e., more precisely, a correspondingly genuine “threat of SHI type”.
Who needs the tacky Freudian terminology‽:blush:


Personally, I would be ecstatically happy to see a ‘non-neural generally intelligent machine’ of any persuasion before filtering out those that fail your fitness test.

But having re-read this post I cannot escape the possibility that it was generated by an AI. If so, congratulations. It’s a good one!