I don’t think that was the aim of @neubie. I think he wanted a list of fundamental elements of intelligence, as a starting point.
Point taken, but just identifying the basic parts misses the essential element that has been dancing in and out of the last few posts in this thread - those parts have to be arranged in a special way or they don’t do what is needed. This arrangement of parts may resist the simple laundry-list approach.
I think it’s worse than that: compiling laundry lists is exactly backwards from defining intelligence. Such a definition is the ultimate generalization, and generalization is a reduction. You have to leave non-essential stuff out, or else there is no point.
Hey now, don’t put words in my mouth! I have deliberately stayed away from those debates because I don’t think they are relevant to our work yet.
Yes, you’re right. I’m sorry.
It was meant somewhat tongue in cheek, but I should at least have added a smiley.
This is a fairly exhaustive coverage:
This is the sense that I was using:
That is, that mental analysis by introspection can be misleading and counter-productive. At a minimum, it is a task that must be undertaken with great caution. I support the middle way: “An intermediate position is revisionary materialism, which will often argue that the mental state in question will prove to be somewhat reducible to physical phenomena—with some changes needed to the common sense concept.”
I think what Mark is saying is that we in this community are always looking to explain how something works with biology. It must make sense that neurons and synapses could plausibly implement the theories we discuss, so we stay very low-level in most cases. The “sequence memory” part of HTM explains a very low-level theory that, if accepted, provides a clue to how other neural mechanisms work. The Thousand Brains Theory falls apart without the sequence memory we’ve defined very clearly with a specific neuron model.
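To make the term concrete, here is a toy first-order sequence memory: a transition table that learns which elements follow which, and predicts accordingly. This is my own minimal illustration, far simpler than HTM's high-order, cells-per-column Temporal Memory - it only shows what "sequence memory" means at a bare minimum.

```python
# Toy first-order sequence memory: learn transitions, then predict.
# NOT the HTM model - just the simplest possible illustration of the idea.
from collections import defaultdict

transitions = defaultdict(set)

def learn(sequence):
    """Record every observed (previous element -> next element) transition."""
    for prev, nxt in zip(sequence, sequence[1:]):
        transitions[prev].add(nxt)

def predict(element):
    """Return all elements ever seen following the given one."""
    return sorted(transitions[element])

learn("ABCD")
learn("XBCY")
print(predict("C"))  # prints ['D', 'Y'] - first-order memory can't tell the contexts apart
```

The limitation shown in the last line is exactly what HTM's cells-per-column mechanism addresses: by representing "C after AB" with different cells than "C after XB", a high-order sequence memory predicts D and Y in the right contexts instead of always predicting both.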
I think there is more progress to be made by looking at the brain and asking “what the hell are those neurons doing firing like that?” than by attempting to make clear definitions of what intelligence is and is not.
@neubie Excellent outline! A logical and useful roadmap to define (and then expand) the key points of intelligence and how they relate to one another!
Matt, I get what Mark is trying to say.
Maybe I will ask you this to convey my point. At Numenta, when developing HTM as a biologically constrained model of intelligence, what conditions does HTM have to satisfy in order for Numenta to conclude that the theory, or the implementation of the theory, indeed possesses intelligence? In other words, what are the hypotheses on intelligence that you are testing in order to prove the theory (and the implementation)?
As a first step toward understanding a complex problem such as brain intelligence, are “reduction” and “leaving non-essential stuff out” a good approach or a bad one?
Absolutely - I have no illusions that this will be done by tomorrow.
You are not discouraging me at all. Given that it is now the end of 2019 and we are still not entirely sure how our brain works, I know this will be a long road, if I ever get a chance to see the end of it. But for this reason, it is even more important to have a starting point. And I appreciate that you are trying to help me see the shape of the landscape we are travelling. I am a newbie, but not a naive newbie.
I think we all have some catching up to do.
The high-level things like intelligence, desire, attention, and thinking are far beyond the current state of the art. An army of researchers is coming at this from many directions, using many methods.
Researchers gather data. Medical practitioners have paired up various damage sites with loss of function. Penfield stimulated parts of the brain and localized many functions. Hubel & Wiesel traced the stream of visual information partway into the cortex using microelectrodes until it got too complex to understand. Various imaging techniques show patterns of activity in many behavioral settings. There have been some exciting discoveries of patterns in the hippocampus and related structures (again with microelectrodes) that correspond to things like location and motion. There have been measurements on the scalp that tease out the rhythms of the functioning brain as a whole. This research spans the scale of activity from tiny fractions of a cell, like the membrane channels and synapses, to the interactions between a few cells, to the actions of thousands or millions of cells, on upwards to the mass actions of the entire brain. Connectome researchers map out the dense networks of connecting fiber tracts in the brain.
All this is only a small sampling of the ongoing generation of factoids about the brain.
Theoreticians sort through this river of factoids and try to see how the parts fit together. In the case of Numenta, the focus is on the micro-structure of cognition at the cortical column level. They test these ideas with computer models and publish the results. These theories inspire new rounds of research in the wetware and ever more sophisticated computer models. All of these levels of research and theory cross-pollinate each other in an ever-accelerating accumulation of knowledge of the brain.
Each of these efforts is a dab on the game card, but nobody is able to shout bingo yet. As this flood of research continues, ever more of the brain's mechanisms fall into place. The thinking is that if we can identify enough of the sub-system functions, the overall operations will become clear, and these will inform our understanding of the true nature of these high-level behaviors.
From your posting I can sense you are quite knowledgeable in this field, and it sounds like you have been on the journey for a while. That got me quite interested in knowing more about you. If you don’t mind me asking, what is your ultimate goal in this field? Are you a brain builder? Or are you a knowledge seeker?
I have been reading about neurology and AI related fields since the early 80’s.
At this point I feel that I know the basic overall structure and function of the brain. There are many loose ends that still mystify me but I am close to seeing the end-to-end details of coding a trial AI.
A key point for me is working out how to do the sparse bidirectional many-to-many connections (dendrite to axon) matrix with a plausible local inhibition in an efficient method that will fit in modern hardware.
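As a rough illustration of the data-structure problem described above, here is a minimal sketch in Python. All the specifics are my assumptions, not the actual design: connectivity is stored as per-dendrite index lists (a CSR-like sparse layout), and "local inhibition" is approximated by k-winners-take-all within fixed 100-cell neighborhoods.

```python
import numpy as np

rng = np.random.default_rng(42)
n_axons, n_dendrites = 1000, 1000
density = 0.02  # assumption: ~2% of possible axon->dendrite links exist

# Sparse many-to-many connectivity stored as per-dendrite index lists
# (a CSR-like layout: each dendrite keeps only the axons it connects to).
synapses = [rng.choice(n_axons, size=int(density * n_axons), replace=False)
            for _ in range(n_dendrites)]

# A sparse set of active axons (binary activity vector).
active_axons = np.zeros(n_axons, dtype=bool)
active_axons[rng.choice(n_axons, size=40, replace=False)] = True

# Feed-forward overlap: how many active axons each dendrite sees.
overlap = np.array([active_axons[s].sum() for s in synapses])

# "Local inhibition" approximated as k-winners-take-all within each
# 100-cell neighborhood (a crude stand-in for biological inhibition).
def k_winners(block, k):
    out = np.zeros(len(block))
    out[np.argpartition(block, -k)[-k:]] = 1.0
    return out

block_size, k = 100, 5
active = np.concatenate([k_winners(overlap[i:i + block_size], k)
                         for i in range(0, n_dendrites, block_size)])
print(int(active.sum()))  # 5 winners in each of 10 blocks -> prints 50
```

The efficiency question raised above is real: the per-dendrite Python loop here would be far too slow at scale, which is why fitting this pattern onto modern hardware (vectorized sparse matrix kernels on a GPU, or an ASIC) is the hard part rather than the algorithm itself.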
I have collected many of the conclusions of ~35 years of informal study in this thread. I would estimate that this thread captures roughly 50% of the ideas that I will try to roll into my model:
35 years. I respect that. So you are building AI? Or are you building a software version of the brain with intelligence? The reason I am asking is: will you be building it such that dendrites will grow, synapses will grow or get pruned, and communication will be based on neurotransmitters?
And have you started it? I am curious how overwhelming the undertaking would be for you.
I have been studying various brain builders, including Chris Eliasmith from my school, to learn how they started their projects and to avoid the same pitfalls.
I see that you see AI as somehow different from the brain. This is kind of weird as the only working example we have of intelligence is the brain. I want the minimum parts of the brain necessary to develop intelligent behavior. So - some brain or some AI.
The answer to the level of fine detail is somewhat complicated. It is a mix of slavishly copying the brain and reckless short cuts. Topology and temporal dynamics are important at all levels. The nit-picky details of neurotransmitters and such are bundled with a biological implementation and I feel that they can be replaced with an accurate approximation of the function.
Much of the function of the cortex is wrapped up with the topology and temporal dynamics of connections between the various components and I think that this will have to be copied to retain the functionality.
The workings of the subcortical structures are still a dark continent, and the largest basket of unknowns lurks there.
How hard? This has always been in the back of every consideration. In the ’90s and aughts the hardware to do this was hopelessly out of reach. At this time I think it may be possible to do a single map with a single-CPU box with some hardware support like a GPU card or an ASIC. There are roughly 100 functional maps in the brain, but the few cases of humans who have had a hemispherectomy strongly suggest that maybe half that number is all that is required. So - for a full AI implementation - 50 or so workstations with very fast network connections is the implementation with current hardware. At $150 or so for an off-lease workstation with a basic GPU or add-on ASIC and 12 GB - about $7,500 and a lot of floor space and electricity.
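The back-of-the-envelope estimate above can be laid out in a few lines. The figures are the post's own rough estimates (map count, the halving from hemispherectomy cases, one map per box, the per-box price), not measured requirements:

```python
# Back-of-the-envelope check of the hardware estimate.
total_maps = 100               # rough count of functional maps in the brain
maps_needed = total_maps // 2  # hemispherectomy cases suggest half may suffice
maps_per_box = 1               # assumption: one map per GPU/ASIC workstation
cost_per_box = 150             # off-lease workstation, USD

boxes = maps_needed // maps_per_box
print(boxes, boxes * cost_per_box)  # prints: 50 7500
```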
So is it overwhelming to contemplate? Let me get back to you after the Pseudo-code for the project is completed.
AI is a term heavily used by the industry, mostly referring to heavily statistical models. If you remember, Jeff mentioned a bit of that history, from AI then to neural networks. I believe AGI is being used to refer to something closer to biological intelligence (or maybe a more general, deeper kind of learning). So I want to be more careful with the term AI and not use it to refer to a simulation of biological intelligence, to avoid confusing the matter.
I would definitely be interested to learn about your approach. If you would be kind enough to share your plan - which minimum parts of the brain you are starting from, what criteria you are using to define intelligent behaviour, and, more importantly, what your testing plan is for concluding that your implementation is successful according to those criteria - that would be hugely beneficial to me as a newbie. I definitely have no interest in falling into the same problem as the European brain project, whose initial goal was set out too ambitiously and too big.
For the obvious reason that my goal is to simulate the brain, I personally think that level of detail is essential. And I can’t help thinking about the importance of neurotransmitters in terms of natural back-pressure handling and strength of representation. But I have no doubt the code can fake that without implementing the actual correspondence.
Just go cloud. Much easier and more flexible. But I am curious: if it requires 50 computing units, is that a sign that it is not close to what our brain does?
That’s a term I haven’t heard for a long time. With languages now so easy to understand, and particularly if you use OO (finally, a good use case for OO), coding it directly might be more beneficial. It saves the time of translating from pseudo-code to real code, and everyone else can read it.
Looking forward to learning more about your plan. I can definitely benefit a lot from that.
Each map and neural node is essentially a standalone agent.
I don’t agree with the way that Minsky proposes his agents work together (more symbolic, not very biological), but it really is a society of mind.
My beef is that the interactions don’t match up with what I read about how the maps and nodes work together.
As far as your asking how I propose to do things, I have already spent many hours documenting this in the thread I posted to you earlier. Please go through that and consider it a first cut at answering your question. If you have more questions after going through it, I will be happy to answer them.
I code in straight C for most stuff and PERL when I do text based things. If I do OO stuff, it’s in PERL.
I was coding in C and assembly long before any of the “easy” languages hit the mainstream (~2000) - about 20K hours at that point - and I am familiar with them to the point where it is about as easy as breathing for me. I can say about the same thing about PERL. Why would I ever want to change?
Are you one of those rare people who find OO PERL code easier to read than my C-formatted, language-independent pseudo-code?
BTW: how many of the recognized “code smells” apply to non-OO code?
Compare and contrast those with OO smells. When I read the Stroustrup book my very first impression was - goodness - look at all the new ways you can screw up. I was teaching computer languages at the time so I feel like I was qualified to make such a statement.