DEEP HTM for learning firing sequences in an artificial brain

Tofara, you should make up your mind: either talk about the brain or about some generalized function. I don't think it's the former, because you have no interest in neuroscience.
Re the latter, forget about CA, because it's just another generic TM; you won't know which one to use until you have a formal function. Using music theory as a function is just another of your arbitrary choices, which shows a lack of big-picture thinking.
"Entropy", on the other hand, is pure physics envy, an obfuscation of what can be expressed much more simply as similarity search / pattern discovery.

I would respect your opinion more if you understood what I was talking about. On the other hand, your opinion may reflect a lack of faith that it makes sense, which is fine, but give it a chance. What do you understand about what I said here:

This system is a CA which learns a firing rule that makes it fire in such a way that the agent is forced to visit states which cause the input part of the CA to conform to the same rule, by applying that same rule to the action part of the CA. The rule is optimised with this objective in mind. Since a rule is a reduction of entropy, that can only mean the agent thinks in ways that produce actions leading it into low-entropy states.

It helps to ask questions; writing it off before doing so is premature, would you not agree? You clearly haven't understood it. Music theory is an optimisation technique, not the function; maybe you could start by asking what I mean. Music theory is to the firing rule what backpropagation is to neural nets… backpropagation is not the neural network's function.

Yeah, I lack faith. Never mind.


Is anyone prepared to ask questions? Without them I have no idea what to say. If it is nonsense, following a path of questioning will reveal this clearly… simply writing it off because of "lack of faith" is to admit that there is no real justification for doing so. I am not stubborn; if I see that it can't work, or is even just a bunch of nonsense, I am fully prepared to agree… but give us both the chance to see that…

Imagine the state, brain, and actions of an agent.

All three belong to the same CA. The state is represented by some cells that fire differently in different states, the brain by others, and the actions by others still, which cause the physical body of the agent to move.

So there is a distinction between the actual state, which is the physical real world, and the state cells, which are on the CA and fire differently in different states.

When state cells fire, they cause brain cells to fire, which leads to action cells firing, which leads to different states, and this leads back to the state cells firing in a different way.
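Purely as an illustration of this loop (the threshold rule, the weight matrices, and the group sizes below are my own assumptions, not anything specified in the thread), one pass from state cells through brain cells to action cells might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing rule: a cell fires when the weighted sum of the
# previous group's activations exceeds a threshold. W stands in for
# whatever rule is actually being learned.
def apply_rule(prev_cells, W, threshold=0.5):
    return (W @ prev_cells > threshold).astype(float)

n_state, n_brain, n_action = 8, 16, 4
W_brain = rng.normal(size=(n_brain, n_state))    # state -> brain
W_action = rng.normal(size=(n_action, n_brain))  # brain -> action

# one pass around the loop described above
state_cells = rng.integers(0, 2, size=n_state).astype(float)
brain_cells = apply_rule(state_cells, W_brain)
action_cells = apply_rule(brain_cells, W_action)
# ...the actions would then move the agent's body, and the resulting
# physical state determines which state cells fire on the next step.
```

Closing the loop (actions changing the physical state) needs an actual environment, which is why it is only indicated as a comment here.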

Our grand plan is to make all the parts of the CA fire with the same rule. We will explain why at the end.

We can directly arrange for the brain cells to fire with any pattern/rule, and for the action cells to fire with the same rule… but it would be a huge coincidence for that random rule to produce actions that lead to state visits which make the state cells fire with that rule.

So we learn a rule for the brain and action cells, and evaluate how well the rule is being learnt by observing how different the rule that the state cells appear to follow is from the rule we are learning for the brain and action cells. That's our loss.
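To make "how different the rule looks on the state cells" concrete, here is one possible loss, again purely illustrative: given a candidate rule W (the same threshold rule as the sketch above), count the fraction of observed state-cell transitions that W fails to predict. The measure itself is my assumption, not the thread's:

```python
import numpy as np

def rule_mismatch(transitions, W, threshold=0.5):
    """Fraction of cell updates that disagree with rule W.
    transitions: list of (previous_activations, next_activations) pairs."""
    errors, total = 0, 0
    for prev, nxt in transitions:
        predicted = (W @ prev > threshold).astype(float)
        errors += int(np.sum(predicted != nxt))
        total += nxt.size
    return errors / total

# Toy check with an identity rule ("each cell copies itself"):
W = np.eye(3)
conforming = [(np.array([1., 0., 1.]), np.array([1., 0., 1.]))]
violating  = [(np.array([1., 0., 1.]), np.array([0., 1., 0.]))]
print(rule_mismatch(conforming, W))  # 0.0 -- state cells follow the rule
print(rule_mismatch(violating, W))   # 1.0 -- state cells ignore the rule
```

Minimizing `rule_mismatch` on the state-cell transitions, while the same W drives the brain and action cells, is one way of reading "that's our loss".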

On convergence the whole CA follows the same firing rule.

What that means is the processes in the brain will be causing actions that seek out particular types of state visits.

But what does this mean for what the agent does? Since all rules have low entropy, and we arranged for the agent to visit states that fit a rule, these states will have low entropy and high order.

Civilization and all advanced human behavior is predicated upon seeking states with low entropy.

I still haven't described which particular low-entropy states will be visited, but if this much is clear I will continue.

This is the route he needs to take; the description of what he is trying to do is too vague.


Any advice on what this system could look like? I am not in a position to train a humanoid robot, but I have an idea. The state will be an image, which will activate the state cells, while the action cells that activate are the class nodes of the network.

Then train a classifier :slight_smile: by arranging for the firing rule of the brain to make the state-cell activations lead to the action-cell (class) activations that are correct.
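A minimal sketch of that setup, under heavy assumptions: pixels play the role of state cells, class scores play the role of action cells, and a plain perceptron update stands in for whatever rule-learning procedure is actually intended:

```python
import numpy as np

# "State cells" = image pixels, "action cells" = class nodes.
# The perceptron update below is a stand-in, not the proposed method.
n_pixels, n_classes = 4, 2
W = np.zeros((n_classes, n_pixels))

# Toy dataset: class 0 images light the first half of the pixels,
# class 1 images light the second half.
X = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.]])
y = np.array([0, 1])

for _ in range(10):                     # a few training epochs
    for x, label in zip(X, y):
        pred = int(np.argmax(W @ x))    # which action cell fires hardest
        if pred != label:
            W[label] += x               # strengthen the correct action cell
            W[pred] -= x                # weaken the wrongly firing one

preds = [int(np.argmax(W @ x)) for x in X]
print(preds)  # [0, 1]
```

This reduces the proposal to ordinary supervised classification; whether that preserves the rule-matching objective described earlier in the thread is the open question.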

Would this be sufficient?

Hey thanks guys!

I will proceed to do that, but if anyone has questions I am willing to explain further…

Is there any way to demonstrate a robotics system without an actual physical robot?

Check out this link!


Why does it need to be a humanoid robot?


The system finds a way to order its environment, actions, and processing all at once. This will involve classifying things. It should move on to classify itself by its closest resemblance… us… then follow our behavior, all without a reward.

May I ask, what is your programming ability? Are you familiar with and able to use Python, C/C++, Rust, or any other language to produce working code and applications?

Knowing this might give me the ability to suggest resources for you to go off to learn/do/test this on your own, then share your findings here. Otherwise, I think this discussion thread is reaching a conclusion point and dropping in its utility.

To be clear, I don’t know that there’s anything wrong/right with your idea, but it’s appearing that we’re quickly reaching a point where you need to apply some Just-Do-It-And-See-What-Happens to the problem.

Max


You may want to try this robot, it is the one I am working with.

It already recognizes faces, including its own. Pretty much at the intellect of a cat or dog, except that it has NLP.


I probably shouldn’t knock it until I try it, but that’s a very bold claim!! :grin:


All it has to do is alternate between being indecisive, loving, demanding attention, then ignoring you as it does its own thing… we could probably accomplish that with a state machine :smiley:


Thanks for the link. Fascinating device. Might even buy one, when available.

But no, state of the art AI is still a long way below mammalian intelligence, even if it does compete on a few narrow grounds (like faces in the right orientation).


Yes, I'm familiar with Python.

What I don't understand is how this thread has evolved. Everyone said they didn't understand it, yet when given the chance to ask questions, they didn't.

I can't answer questions I haven't received, and this has led to the perceived drop in utility.

Perhaps you can suggest why questions weren't asked for clarity, so I can understand humans better lol

What's the core insight you have? If you don't focus explanations fully on that, rather than on things like what's pleasing to the ear, people can't understand it, because it's an insight.

So one metric for judging what the neural function (policy) is learning is the symmetry of the activations. And we are minimizing the difference between the symmetry of the input-layer activations and the symmetry of the rest of the network.

One of these we cannot control directly… the input-layer activations… but we can control it indirectly through how we control the symmetry of the rest of the network as it learns.

If we simultaneously aim for the most symmetric policy we can learn while minimizing this distance, then the states visited by the agent will be highly symmetric, or ordered, or low-entropy.
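One way this could be operationalized (the specific symmetry measure here is my own assumption; the thread doesn't pin one down) is to score how close an activation vector is to its mirror image, and take the gap between the input layer's score and the rest of the network's score as the quantity to minimize:

```python
import numpy as np

def symmetry(activations):
    """Illustrative symmetry score: 1.0 when the activation vector equals
    its own mirror image, decreasing toward 0.0 as they differ."""
    a = np.asarray(activations, dtype=float)
    return 1.0 - float(np.mean(np.abs(a - a[::-1])))

def symmetry_gap(input_acts, network_acts):
    """The distance being minimized: input-layer activation symmetry
    versus the symmetry of the rest of the network's activations."""
    return abs(symmetry(input_acts) - symmetry(network_acts))

palindrome = [1, 0, 1, 0, 1]   # mirror-symmetric firing pattern
ragged     = [1, 1, 0, 0, 0]   # asymmetric firing pattern
print(symmetry(palindrome))               # 1.0
print(symmetry_gap(palindrome, ragged))   # > 0, what training would shrink
```

Under this reading, "aiming for the most symmetric policy while minimizing the distance" would mean maximizing `symmetry` over the network's activations while driving `symmetry_gap` toward zero.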