Brain Building - Q1. Define Intelligence


From your posting I can sense you are quite knowledgeable in this field and sounds like you have been on the journey for a while. That got me quite interested in knowing more about you. If you don’t mind me asking, what is your ultimate goal in this field? Are you a brain builder? Or are you a knowledge seeker?



I have been reading about neurology and AI-related fields since the early ’80s.

At this point I feel that I know the basic overall structure and function of the brain. There are many loose ends that still mystify me but I am close to seeing the end-to-end details of coding a trial AI.
A key point for me is working out how to do the sparse bidirectional many-to-many connection (dendrite to axon) matrix, with plausible local inhibition, in an efficient method that will fit on modern hardware.

I have collected many of the conclusions of ~35 years of informal study in this thread. I would estimate that this thread captures roughly 50% of the ideas that I will try to roll into my model:

35 years. I respect that. So are you building AI? Or are you building a software version of the brain with intelligence? The reason I am asking is: will you be building it such that dendrites will grow, synapses will grow or get pruned, and communication will be based on neurotransmitters?

And have you started it? I am curious to see how overwhelming the initiative would be for you.

I have been studying various brain builders, including Chris Eliasmith from my school, to learn how they started their projects and to avoid the same pitfalls.

I see that you see AI as somehow different from the brain. This is kind of weird as the only working example we have of intelligence is the brain. I want the minimum parts of the brain necessary to develop intelligent behavior. So - some brain or some AI.

The answer to the level of fine detail is somewhat complicated. It is a mix of slavishly copying the brain and reckless shortcuts. Topology and temporal dynamics are important at all levels. The nit-picky details of neurotransmitters and such are bundled with a biological implementation, and I feel that they can be replaced with an accurate approximation of the function.

Much of the function of the cortex is wrapped up with the topology and temporal dynamics of connections between the various components and I think that this will have to be copied to retain the functionality.

The workings of the subcortical structures are still a dark continent, and the largest basket of unknowns lurks there.

How hard? This has always been in the back of every consideration. In the ’90s and aughts the hardware to do this was hopelessly out of reach. At this time I think that it may be possible to do a single map with a single-CPU box with some hardware support like a GPU card or an ASIC. There are roughly 100 functional maps in the brain, but the few cases of humans who have had a hemispherectomy strongly suggest that maybe half that number may be all that is required. So - for a full AI implementation - 50 or so workstations with very fast network connections is the implementation with current hardware. At $150 or so for an off-lease workstation with a basic GPU or add-on ASIC and 12 GB - about $7,500, plus a lot of floor space and electricity.

So is it overwhelming to contemplate? Let me get back to you after the Pseudo-code for the project is completed.


AI is a term heavily used by the industry, mostly referring to the heavily statistical models. If you remember, Jeff mentioned a bit of that history, from AI then to Neural Networks. I believe AGI is being used to refer to something closer to biological intelligence (or maybe a more general, deeper learning). So I want to be careful when I use the term AI, and not use it to refer to a simulation of biological intelligence, to avoid confusing the matter.

I would definitely be interested to learn about your approach. If you could be kind enough to share your plan - such as which minimum parts of the brain you are starting from, what criteria you are using to define intelligent behaviour, and more importantly what your testing plan is for concluding that your implementation is successful according to the criteria you set out - that would be hugely beneficial to me as a newbie. I definitely have no interest in falling into the same problem as the European brain project, whose initial goals were set out too ambitiously and too big.

For obvious reasons, since my goal is to simulate the brain, I personally think that level of detail is essential. And I can’t help thinking about the importance of neurotransmitters in terms of natural back-pressure handling and strength of representation. But I don’t doubt that the code can fake that without implementing the actual chemistry.

Just go cloud. Much easier and more flexible. But I am curious whether needing 50 computing units is a sign that it is not close to what our brain does.

That’s a term I haven’t heard for a long time. With languages now so easy to understand, and particularly if you use OO (finally, a good use case for OO), coding it directly might be more beneficial. It saves the time of translating from pseudo to real code, and everyone else can read it.

Looking forward to learning more about your plan. I can definitely benefit a lot from that.

Each map and neural node is essentially a standalone agent.
I don’t agree with the way that Minsky proposes his agents work together (more symbolic, not very biological), but it really is a society of mind.

My beef is that the interactions don’t match up with what I read about how the maps and nodes work together.

As for your asking how I propose to do things, I have already spent many hours documenting this in the thread I pointed you to earlier. Please go through that and consider it the first cut at answering your question. If you have more after going through it, I will be happy to answer follow-on questions.

I code in straight C for most stuff and PERL when I do text based things. If I do OO stuff, it’s in PERL.

I was coding in C and assembly (about 20K hours at that point) long before any of the “easy” languages hit the mainstream (~2000), and I am familiar with them to the point where they are about as easy as breathing for me. I can say about the same thing about PERL. Why would I ever want to change?

Are you one of those rare people that find OO PERL code easier to read than my C language formatted language independent Pseudo code?

BTW: how many of the recognized “code smells” apply to non-OO code?

Compare and contrast those with OO smells. When I read the Stroustrup book my very first impression was - goodness - look at all the new ways you can screw up. I was teaching computer languages at the time so I feel like I was qualified to make such a statement.

That’s a nice diagram you did there. Although the diagram looks simple, it touches on a large number of complicated sub-components. For example, the diagram mentions the somatic side - does that mean you will implement the motor side? And with the limbic system, will you also implement emotion? Not sure if I missed it in the long post, but I don’t seem to find the area that talks about testing. I would love to hear your thoughts on how to test. I will spend more time re-reading the post and all the related posts.

I am not sure if I am one of those, since I really haven’t heard the term pseudo code since I was in high school. And I’m not sure what “C language formatted language independent Pseudo code” really is - isn’t that just C?

I don’t like getting into discussions on subjective things in coding. Too religious, and everyone thinks their code is better. Any language is fine with me. The only reason I mentioned OO is that the design of OO was inspired by biology, so it would seem to be a good fit for developing a biological system, and it would be easy to read because each class can reflect the corresponding biological part of the system. The challenging part would be how to manage the massive parallelism efficiently, in both energy consumption and processing latency. Our brain can potentially have chemical reactions occurring simultaneously in trillions of synapses in close proximity.

But I am going to focus on the conceptual side first. Will worry about the implementation later.

I have no idea if you know C, but if you did you would know that the natural way to organize and access complicated data structures in C is with “structures” and pointers to members of these structures - things that are not really all that compatible with Python programming.

Most modern languages use C formatting, and this is a good way to write things so that most programmers can understand them.

If I use the C notation to access a table of pointers to pointers to members of an array of structures, most Python programmers would be utterly lost. Pointers to functions? Yes, I plan to use those in dispatch tables. Again, not a thing Python people have experience with. Example code riddled with interactions between structures of axons and structures of dendrites, based on these data access methods, is not going to be very clear for most readers, and I would be peppered with a never-ending stream of clueless newbie questions.

This is not a good way to communicate when I know that a significant fraction of my audience will never have seen anything but Python OO code. So no - C code is not the best way to speak to a wider audience. Not optimal.

And the PERL code that I want to use for the master control panel? Even worse.

This means that I have to express algorithms and data structures using pictures and C-like pseudo-code fragments to be clear to programmers no matter what language they are familiar with.

Unfortunately I couldn’t get much out of this discussion on defining intelligence; the topic turned into whether it is necessary to define what intelligence is at all. Personally, without defining what the fundamental elements of intelligence really are, I am certain any brain-building project will fail, because a massive amount of time will be spent building something unnecessary. For example, I do agree with Jeff Hawkins at 43:28 that emotion is not a necessity of intelligence. If we spend time building the parts related to emotion, the system will be so complicated that it will never get done. And without the list, there is no way to test whether the implementation achieves the objectives. I will continue to search for what I am looking for. Thanks everyone for the help in this!


Oh, there are definitions available. Plenty. What you won’t get nowadays is a consensus.

Consider this: at this point, any “brain project” is in fact a way to maybe reach the goal of finally defining what it is in a more concrete way.

What we’re quite sure about is that there exist biological implementations that work. And that Newton had neurons and synapses.

Failure of a brain project is not necessarily a stall for researching about it. Failure is informative.


Assuming you get your definitions correct. People once thought that if they figured out how to perform at chess, whatever plays good chess must be intelligent.
A definition can be used as a test (which, as the “chess test” did, might later prove insufficient). We need more than assumptions about what a target should be; we need good intuition about what the right building blocks are.

And speaking of “building” stuff, I think intelligence, in humans at least, isn’t built. It grows within the brain in a process.
Once one has built the “brain”, it might be much more difficult to have it built “already intelligent” than to provide the right framework in which intelligence may grow by itself (well, within the right environment; the point is that in humans it doesn’t pop out of the brain instantly, and it isn’t carved or sculpted in by parents or “education”).
It’s a developmental process.

For this process to work right, emotions might be as vital as currency is for an economy to work right. They shape the directions in which to grow; otherwise we’ll end up as dysfunctional idiots with big brains.

Especially when the path is inspired by the human brain, it is very tempting to say: oh, we don’t need to do what the cerebellum (apparently) does, or what the amygdala (apparently) does.

This clip is quite interesting:


I was trying to achieve that here. But unfortunately other than on Emotion, I couldn’t get much else.

This is an interesting statement. You mention “that work” - but what works? If we don’t know what intelligence is, how do you know the biological implementations work?

That’s conventional thinking. But think this through: if Tesla had set out to build the Model S right away instead of electrifying the MB Smart Car first, I don’t believe Tesla would still exist today. If we set out to build the full human brain with full emotion, instead of building a brain that satisfies the absolute minimum of what intelligence is, I think we purposely reduce our odds of success.

But then the definition was not correct, was it?

Testing is important. The definition also defines the scope. For a subject this complicated, you don’t want to go so big that it is impossible to achieve.

To make sure we are on the same page: I NEVER said “human”. That’s way too big. I don’t believe our designer, if there is one, built humans on day 1. And that is what I am trying to avoid.

Also, I did not say building intelligence. My objective is to build a software version of the biological brain (not human) that is able to satisfy the minimum of what intelligence is.

If you still think in the human intelligence perspective, then yes. But I am not. Too big for me. I am trying to do a baby step.

Again, want to emphasize not “human” in my context. Hope that clarifies.

Thanks for the comments. Always makes me think more!

Now that you have made your goal clear I can give you a better definition. But first - you will need a body to run, because that is THE primary purpose of a brain. Your goal is to learn the behaviors your puppet body needs to survive in its environment, and enough smarts to select the best behavior in enough of the situations it encounters to be considered a success.

In biology that definition of success is successful reproduction. You may provide a different measure of success.

That is the basic measure of intelligence that nature uses. Anything beyond that is a second order effect.


the fact that you wouldn’t get a consensus on a single definition does not preclude consensus upon the existence of the hard-to-define phenomenon.

in other words…
“I don’t know what intelligence is, but I bet @Bitking is intelligent”
maybe you’d find that to be an interesting statement too :sweat_smile:


What would this ‘minimum’ look like in action? (and pardon if you already answered this somewhere).

I’ve asked myself this question too, and my current basic idea is an agent which can move intelligently in a novel environment toward some not-perfectly-specific goal(s).

When I say ‘move intelligently’ I imagine it takes efficient paths, doesn’t run into stuff and adapts immediately to other moving objects – which involves predicting their movements, even for objects it’s never seen.

When I say ‘not-perfectly-specific goal(s)’ I mean instructions like: ‘Put all fragile objects off to the side near all soft objects and all the hard or heavy objects to another side, and bring back any recent financial documents you find’.

The theme is things which you could reasonably ask any adult person to do, but would be very non-trivial for current A"I" systems.

The OP asked about “minimum intelligence” and building a small brain to support that. The proposals that bring up insects and worms are missing the bit about “minimum”, as the referenced critters clearly have a brain and expressed behavior. Go back to Cisek’s phylogenetic chart in the earliest examples to see things on the minimal level, which is what was requested. Reiterating: acquiring behavior (learned or as a genetic gift) and expressing it in an appropriate manner to survive to reproductive success is the minimal intelligence required by nature. It may not look like much to clever mammals but that is what brains started out doing.

At this level we avoid scary looking things, look for food and mates, seek water and shelter. Oh, and grooming. That about wraps it up for basic intelligence.


Unfortunately I do not find that interesting.