DEEP HTM for learning firing sequences in an artificial brain

1 Like

Why does it say "1st Tofara Moyo" as the author? Is this a convention, or a title? And why do you refer to yourself in the third person in a single-author paper?

Edit: "We" is not the third person, but the first-person plural. How stupid of me.

The LaTeX template I used had positions for more than one author, labelled 1st, 2nd, 3rd, etc. I was just too lazy to remove it, lol. And speaking in the third person seems more appropriate for a scientific paper: it makes it seem more like I'm suggesting than asserting, and it comes across as less rude.

1 Like

Edit: I’m sorry. I shouldn’t have written this.

2 Likes

@Falco, I can attest to @Tofara_Moyo’s use of ‘we’ as fairly standard in academic writing. Typically ‘we’ is used to stand in for the author and his/her collaborators. I’ve even seen it used in single author papers, though that is much less common. I was taught never to use first person in technical papers as it is less professional sounding - perhaps precisely because it sounds like one person’s opinion rather than mutually agreed upon consensus.

The prose I’m used to reading and writing typically avoids mentioning any personal pronouns altogether. Rather than “we simulated this and we concluded that…”, one would simply say, “The simulations were performed, the data was analyzed, and the results indicate that…” For future reference, you may want to look into whether or not this kind of sentence structure feels more appropriate for your work.

I’m not saying there’s anything to the psychology of which @Falco speaks. I’m just saying that this writing style is not totally inconsistent with established practice.

4 Likes

Ditto here. When he says “We start of(sic)…” it’s inclusive of the reader(s).

2 Likes

Ok. My apologies. I stand corrected.
Thanks for the information.

2 Likes

After reading the paper a few times I come away with the impression that teaching a system to find cats on the internet is the wrong way to make AI.

The true path is learning a few basic music theory facts regarding scale intervals!

As a bass player, I feel threatened that the next job to be replaced by AI will be mine!

2 Likes

Did you truly understand it? There are three parts to the automaton: the inputs, the processor, and the output. The optimisation algorithm receives all three in the fitness evaluation, but it can only modify the processor and the output. Since it has no other means of making the inputs fit, it has to learn values for those last two parts that arrange for the inputs to be fit.

The fitness evaluation measures how ordered the automaton is. We chose to define order as conforming to music theory; we could have used any other definition. Regardless, the only way to order your inputs is to visit low-entropy states that exhibit order. The automaton will do this by ordering the parts it can control, including the outputs. We hypothesise that this is what intelligence is.

The creation of a language involves lowering the entropy of tokens; planning and having goals lowers the net cumulative entropy of state visits; engaging in dysfunctional behaviour actually increases this value and will be avoided. Lastly, the best tactic for seeking the lowest net cumulative entropy is to imitate human behaviour. This will happen because, in seeking order, the robot will classify itself with the nearest resemblance of itself, since that is a highly ordered thing to do, and to reduce entropy it will engage in behaviour similar to that of other humans.
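To make the arrangement concrete, here is a minimal toy sketch in Python. Nothing in it comes from the paper: the split into input/processor/output segments, the C-major definition of "order", and the made-up environment are all assumptions for illustration. The one thing it tries to capture faithfully is that the inputs appear in the fitness score but not in the mutable genome, so the only lever the optimiser has over them is what the outputs do.

```python
import random

# Toy split of the automaton into the three parts described above.
N_IN, N_PROC, N_OUT = 4, 8, 4

# "Order" is defined here as conformance to the C-major scale (pitch classes).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def environment(action_code):
    """Hypothetical world: which inputs the agent sees depends on its outputs.
    Some actions lead to 'ordered' places that return in-scale pitch classes."""
    rng = random.Random(action_code)           # local RNG, keeps global state untouched
    if action_code % 3 == 0:
        return [rng.choice(sorted(C_MAJOR)) for _ in range(N_IN)]
    return [rng.randrange(12) for _ in range(N_IN)]

def fire(genome):
    """One firing sequence: inputs come from the environment; processor and
    output firings are read straight off the (evolvable) genome."""
    proc, out = genome[:N_PROC], genome[N_PROC:]
    inputs = environment(sum(out))             # inputs are reachable only via the outputs
    return inputs + proc + out                 # the fitness evaluation sees all three parts

def fitness(genome):
    """How ordered the whole automaton is: fraction of firings inside the scale."""
    firings = fire(genome)
    return sum(1 for f in firings if f in C_MAJOR) / len(firings)

def mutate(genome, rng):
    """Only processor and output genes are mutable; the inputs are never touched."""
    child = list(genome)
    child[rng.randrange(len(child))] = rng.randrange(12)
    return child

rng = random.Random(0)
best = [rng.randrange(12) for _ in range(N_PROC + N_OUT)]
for _ in range(500):                           # trivial (1+1) evolutionary loop
    cand = mutate(best, rng)
    if fitness(cand) >= fitness(best):
        best = cand
print("final fitness:", round(fitness(best), 2))
```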

What you train it on is what it will be good at.

You have chosen relationships built on the twelfth root of 2 (the chromatic scale), so it will learn that. As I pointed out in passing, you could have chosen cats (it has been done) and it would have learned that. In each case you have a domain where it has some degree of expertise. Your choice would likely do poorly on the cat task; both would do poorly at parsing hand-written text.
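For anyone unfamiliar with the phrase, the "twelfth root of 2" refers to equal temperament: each semitone multiplies frequency by 2^(1/12), so twelve steps give exactly an octave. A short illustration (the A440 reference is just the usual convention):

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12),
# so twelve steps double the frequency (one octave).
A4 = 440.0
semitone = 2 ** (1 / 12)
for steps, name in [(0, "A4"), (7, "E5, a fifth up"), (12, "A5, an octave up")]:
    print(f"{name}: {A4 * semitone ** steps:.2f} Hz")
```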

Orthogonality, higher-dimensional manifolds, and reduction of entropy are all working tools of AI.

There is a vast gap between general principles (seeking to reduce entropy) and details of implementation that realize this dream. It could be stated that virtually all deep learning platforms are reducing entropy. This is what gradient descent is all about. You will need to show HOW you are making your model do this task to go from a mostly useless generality to something that a practitioner in the field will find useful.

Turing showed with his machine that all general-purpose computing platforms are equivalent, so the choice of cellular automata is just that: a choice among many possible platforms that do the same thing.

A GA is one of many possible ways to optimise a fitness function.

So - what did I miss?

You totally missed it. I don't know which part, because I did try. Let me quote myself and ask what you understood from each part.

There are three parts to the automaton: the inputs, the processor, and the output. The optimisation algorithm receives all three in the fitness evaluation, but it can only modify the processor and the output. Since it has no other means of making the inputs fit, it has to learn values for those last two parts that arrange for the inputs to be fit. The fitness evaluation measures how ordered the automaton is. We chose to define order as conforming to music theory; we could have used any other definition. Regardless, the only way to order your inputs is to visit low-entropy states that exhibit order. The automaton will do this by ordering the parts it can control, including the outputs. We hypothesise that this is what intelligence is.

You seem to have caught none of this. The music theory is the learning rule, not what the system is being taught, just as an ANN is not taught backpropagation; it is taught to classify something.

A piano is a cellular automaton. Using music theory, we can lower the entropy of its firing rule.
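As a rough illustration of what "lowering the entropy of the firing rule" could mean in practice (this is a toy measure chosen for illustration, not taken from the paper): the Shannon entropy of the symbols a firing sequence emits drops as soon as the sequence is constrained to a musical scale.

```python
import math
import random
from collections import Counter

def entropy(seq):
    """Shannon entropy (bits) of the symbol distribution in a firing sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

rng = random.Random(1)
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]

free_firing = [rng.randrange(12) for _ in range(1000)]      # any of 12 pitch classes
scaled_firing = [rng.choice(C_MAJOR) for _ in range(1000)]  # constrained to one scale

print(f"unconstrained firing:     {entropy(free_firing):.2f} bits")   # near log2(12) = 3.58
print(f"scale-constrained firing: {entropy(scaled_firing):.2f} bits") # near log2(7)  = 2.81
```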

If we use this principle to lower the entropy of our cellular automaton, it will do so. But we are not lowering the entropy of the whole automaton at once, only of the processing and the outputs.

It is up to the information content of the processing and the outputs to lower the entropy of the input part of the cellular automaton.

How do you think this can happen? They have to work together to make the input part of the CA exhibit low entropy, which will only happen if it fires with low entropy.

Since it is features that cause it to fire, the features have to have low entropy, and that can only happen if the output part of the CA causes the robot to visit places where the features have low entropy.

This means the process outlined here causes the robot to visit low-entropy states.
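Here is a small, self-contained sketch of that closed loop, under assumed names and a made-up toy world (none of it from the paper): the only thing that evolves is the action policy, i.e. the output side, yet the quantity being minimised is the entropy of the inputs the agent collects, so the policy is pushed toward the locations whose features are most ordered.

```python
import math
import random
from collections import Counter

def entropy(symbols):
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

# Toy world: each location emits observations with a different amount of disorder.
# Location 0 is noisy (8 possible symbols); location 2 is almost constant.
WORLD = {0: list(range(8)), 1: list(range(4)), 2: [0, 0, 0, 1]}

def rollout(policy, rng, steps=200):
    """The output side of the automaton reduced to a sequence of moves; the
    inputs it receives are whatever the visited locations happen to emit."""
    seen = []
    for t in range(steps):
        loc = policy[t % len(policy)]        # output part: where to go next
        seen.append(rng.choice(WORLD[loc]))  # input part: what is sensed there
    return seen

def score(policy):
    """Reward low-entropy inputs; a fixed seed keeps comparisons fair."""
    return -entropy(rollout(policy, random.Random(42)))

rng = random.Random(0)
best = [rng.randrange(3) for _ in range(8)]  # only the action policy is evolvable
for _ in range(300):                         # (1+1) evolution over the policy alone
    cand = list(best)
    cand[rng.randrange(len(cand))] = rng.randrange(3)
    if score(cand) >= score(best):
        best = cand

print("learned policy:", best)               # tends toward the low-entropy location 2
print("entropy of gathered inputs:", round(-score(best), 2))
```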

Is it clear so far?

Do you have a working example that I can inspect to see how all this is done?

I am not in a position to train a robot. Perhaps my second explanation, which came in at the same time as yours, will help; please read it.

Please ask questions; I expect them.

I had the choice of lowering the entropy of the firing rule with a rule such as "cause the cells at the boundary of the cell that was just on to go on", which is a simple firing rule. I could have used a more elaborate rule, or based it on an equation. But there is already a type of CA with a set of firing rules that can be optimised, which is not the case with the examples I gave (except perhaps the equation), and that is a piano. What I am doing is just the optimisation of firing sequences; music theory does this for a CA like a piano, so I used it to optimise this CA. If you look, this is also based on maths, i.e. a circle: on a piano it is the circle of fifths; on this CA it is something else.
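To make the analogy concrete (a toy encoding for illustration, not the paper's): a firing rule can be written by hand, like the boundary rule just described, or parameterised so that an optimiser could tune it; with an interval step of 7 semitones the parameterised rule simply walks the circle of fifths.

```python
# Two ways to write down a deterministic firing rule for a 1-D ring of cells.

N = 12  # twelve cells, by analogy with the twelve pitch classes on a piano

def boundary_rule(last_on):
    """Hand-written rule from the post above: the cells at the boundary of the
    cell that was just on go on next (here: its two ring neighbours)."""
    return [(last_on - 1) % N, (last_on + 1) % N]

def interval_rule(last_on, step):
    """Parameterised rule an optimiser could tune: always jump by a fixed
    interval. With step = 7 (a fifth) it walks the circle of fifths."""
    return [(last_on + step) % N]

# The circle of fifths as a firing sequence: start at cell 0, keep stepping by 7.
cell, path = 0, [0]
for _ in range(11):
    cell = interval_rule(cell, 7)[0]
    path.append(cell)
print(path)  # [0, 7, 2, 9, 4, 11, 6, 1, 8, 3, 10, 5] -> all 12 cells, no repeats
```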

The point is to get the CA to fire with low entropy, in other words along predictable paths that depend on the inputs and cause the outputs to make the robot visit low-entropy states.

Since it can only use this music theory to adjust the processor and the outputs, it has to find some way to change the inputs so that they also fit the music the CA is playing. That will only happen if the input part of the CA is highly ordered.

So it will use the information content in the processor and the outputs to modulate the inputs, which is the same as saying it will think of ways of visiting places with low entropy.

My focus is on biologically inspired systems; this is too far outside my area of study.

I need a working system to examine and understand; perhaps you could simulate this in some way to illustrate the ideas? There are many free simulation tools such as Blender or Unity that could be used to create a demonstration.

3 Likes

I would encourage not spending time on instances like this, which have neither the priority of an existing track record (academic or otherwise) behind them nor evidence from experiment or simulation supporting them. There are way too many folks out there claiming to have transmuted lead into gold to give attention to any of them.

3 Likes

The brain is a CA with firing rules we do not know. This system is a CA that learns a firing rule which causes it to fire in such a way that it forces the agent to visit states that make the input part of the CA conform to the same rule, by applying that rule to the action part of the CA. The rule is optimised with this objective in mind. Since a rule is a reduction of entropy, that can only mean the agent thinks in ways that cause actions that lead it into low-entropy states.
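One way to write down the objective these posts keep appealing to, in assumed notation (none of these symbols come from the paper): if s_t are the states the agent visits, x_t the input firings it observes there, and theta parameterises only the processor and output parts, then the rule is selected to minimise the net cumulative entropy of its state visits.

```latex
% Assumed notation, not from the paper: \theta parameterises only the
% processor and output parts; the inputs x_t are reachable only through
% the actions a_t that the outputs produce.
\[
  J(\theta) = \sum_{t=1}^{T} H\!\left(p(x \mid s_t)\right),
  \qquad
  a_t = \pi_\theta(x_t),
  \quad
  s_{t+1} = f(s_t, a_t),
  \qquad
  \theta^{\star} = \arg\min_{\theta} J(\theta).
\]
```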

This is "function" discovery rather than feature detection, where the functions are brain functions, or more properly the functions needed to emulate what the brain does.