Tree Neurons

Previously I built neurons with a lookup-table activation function. A step up in complexity is a decision tree. If you think about it, a decision tree is quite a valid neuron activation function (even for the binary-output case).
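A minimal sketch in Python of what I mean (this toy version, including its `Node` class and `grow` rule, is just illustrative, not the FreeBasic code linked further down): internal nodes test one input bit, leaves hold the ±1 activation, and learning adds a branch when the tree disagrees with a target.

```python
# Toy decision tree used as a neuron's activation function on binary input.
class Node:
    def __init__(self, feature=None, output=None):
        self.feature = feature   # index of the input bit tested here
        self.output = output     # +1/-1 at a leaf, None at internal nodes
        self.zero = None         # subtree taken when the bit is 0
        self.one = None          # subtree taken when the bit is 1

def fire(node, x):
    """Route the input down the tree; the leaf reached is the activation."""
    while node.output is None:
        node = node.one if x[node.feature] else node.zero
    return node.output

def grow(node, x, target, n_bits):
    """Greedy fit: walk to the leaf and, if it disagrees with the target,
    split it on an input bit not yet tested along this path."""
    used = set()
    while node.output is None:
        used.add(node.feature)
        node = node.one if x[node.feature] else node.zero
    if node.output == target:
        return
    free = [i for i in range(n_bits) if i not in used]
    if not free:                  # every bit already tested on this path
        node.output = target
        return
    old = node.output
    node.feature, node.output = free[0], None
    node.zero = Node(output=target if not x[node.feature] else old)
    node.one = Node(output=target if x[node.feature] else old)

root = Node(output=-1)            # a tree that initially always says -1
grow(root, [1, 0, 1], +1, 3)      # one corrective branch added
print(fire(root, [1, 0, 1]))      # -> 1
```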
Using ID3 for training would be fairly slow; there are cruder greedy heuristics that are much quicker.
For the systems I work with I can see a tree-learning algorithm based on gradual error reduction that should work. It’s greedy, but it takes quite considered steps. I’ll see how long it takes to write the code.
In what way could the Numenta neuron model be regarded as resulting in a decision tree? Would looking at it that way open up better options for training such neurons?
Anyway, I had a look at the Google photo of the outside of your premises. I’m sure there are some good places to eat lunch around there. Personally, I would go to a supermarket and buy what I needed to make a sandwich. It didn’t look like the offices of a mega-corporation, so I feel more empathetic.


Hi @Sean_O_Connor,

Welcome! I’m not a Numenta employee, but if you consider a neurobiological approach such as theirs, where the focus is on reproducing a biological model of neuronal function, you’ll see quite a bit more complexity and activity in the typical processing cycle of a “neuron”. Our understanding has come a long way since the ’50s, when the typical neurons still used today in deep learning and other neural networks were developed.

For instance, the typical pyramidal neuron has anywhere between 10k and 30k synapses, roughly 10% of which form the feed-forward connections modeled in classical neural networks. The other 90% are mostly lateral connections, which are not accounted for in today’s classical NN systems.

Another difference is that instead of modeling synapses with adjustable weights (to emulate synaptic learning), neurobiological systems (and thus HTM systems) use a process called synaptogenesis, a fancy term for learning by actually growing and culling synapses.
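To make that concrete, here’s a rough sketch of the grow-and-cull idea (the names and constants here are only illustrative, not Numenta’s actual code): each potential synapse carries a permanence value, only synapses past a threshold actually connect, and permanences that decay to zero are culled.

```python
CONNECT_THRESHOLD = 0.5          # permanence above this = connected synapse
INCREMENT, DECREMENT = 0.05, 0.02

def learn(permanences, active_inputs):
    """Reinforce synapses onto active inputs, decay the rest,
    and cull any synapse whose permanence hits zero."""
    for cell in list(permanences):
        if cell in active_inputs:
            permanences[cell] = min(1.0, permanences[cell] + INCREMENT)
        else:
            permanences[cell] = max(0.0, permanences[cell] - DECREMENT)
            if permanences[cell] == 0.0:
                del permanences[cell]            # synapse culled

def grow_new(permanences, active_inputs, initial=0.2, budget=4):
    """Synaptogenesis proper: sprout a few new potential synapses
    onto active inputs this segment doesn't reach yet."""
    for cell in list(active_inputs - set(permanences))[:budget]:
        permanences[cell] = initial

def connected(permanences):
    """The synapses that actually conduct (grown past the threshold)."""
    return {c for c, p in permanences.items() if p >= CONNECT_THRESHOLD}
```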

I’m not a neuroscientist, nor even a data scientist - so this is only my opinion about the differences in “applicable” algorithms that may be used to model neurons - but I would recommend watching Jeff Hawkins’ very light and accessible talk here, where he expounds on the differences between their approach and typical neuron modeling.

Cheers and nice meeting you!
David


And of course there are temporal elements to a neuron’s behavior as well, just to complicate things!
Forest of Tree Neurons:
https://drive.google.com/open?id=0BwsgMLjV0BnhWWF4M2RaRUJFa1U
I just finished the code. I have to experiment with it myself to see if it does anything interesting.


@Sean_O_Connor

Nice… How do I run it? I see it’s written in Basic (at least that’s my assumption due to the file extensions)? …also I’m a Mac user…

I’m not sure there is a FreeBasic compiler for the Mac. I could convert the code to the Lua programming language, which has the big advantage of built-in hashtables. Also, LuaJIT is pretty quick. Anyway, I want to explore the notion of “tree neurons” more generally, so I’ll try to produce code in a number of different languages.

I see a number of advantages to decision tree neurons:

1/ Rather than having to twist and contort real-valued parameters to somehow fit a new training point, all you have to do is add another decision branch.

2/ While a tree can’t unlearn, in a distributed-representation setting you can truncate a single tree back to its root and let it regrow with little harm done. (You can use a random projection to create a distributed representation very simply; see the sketch after this list.)

3/ It should be possible to put tree neurons into a more connected configuration such as a Hopfield net. During training you should be able to carve out deeper and more certain attractor states.
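Here is a minimal sketch of the random projection from point 2 (function names are just illustrative): fix a random sign matrix once, multiply, and threshold. Every output bit then depends on all the input components, which is why truncating a single tree loses little.

```python
import random

def make_projection(n_in, n_out, seed=0):
    """Fixed random sign matrix, generated once up front."""
    rng = random.Random(seed)
    return [[rng.choice((-1, 1)) for _ in range(n_in)] for _ in range(n_out)]

def project(matrix, x):
    """Binarized random projection: a distributed code where each
    output bit depends on every component of the input."""
    return [1 if sum(r * v for r, v in zip(row, x)) >= 0 else 0
            for row in matrix]

m = make_projection(8, 16)
print(project(m, [0.5, -1.0, 0.3, 0.0, 2.0, -0.7, 0.1, 1.0]))
```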

N tree neuron branch updates per example:
https://drive.google.com/open?id=0BwsgMLjV0BnhSVg0ZXg2dzlZMHM
I doubt anyone can really follow the code unless they make a considerable effort.
I might provide some documentation on a GitHub wiki: a step-by-step cookbook.

@Sean_O_Connor

Hi there. I don’t want to seem like I’m ignoring you, and I applaud your sense of exploration, but I have to say that I’m mostly interested either strictly in HTM technology or in technologies that somehow lend insight into organic systems or HTM-like systems in general. It’s not that I’m not interested in anything else; it’s just that I have to carefully guard what I spend my time on, because there’s a lot of stuff to learn :slight_smile:

Anyway, I’ll be watching this topic, but I really don’t have anything to contribute to it at this time, ok?

Cheers,
David

It is very clear what Numenta’s focus is: synapse building within some rigid structure that is yet to be fully defined, currently awaiting further information from the biology department. You could still think about what the decision-tree equivalent of the Hawkins/Ahmad neuron would be; that could even include temporal aspects. Doing so would not leave you with less insight than before, unless you got terribly confused in some way.

The major problem is interpreting the findings in already-published neuroscience studies so that a model can be extracted. HTM research is not capped by biological information, and it never really was. Does more information help? Definitely. But it is more about filtering and converting that biological information into functionality, because there is a LOT of information. Typical neuroscience studies do not directly ask the questions HTM seeks to answer; at least in the past this was the case. So we have to generate our own answers based on those studies. That is the hard part, and the answers may even be hidden in work conducted a decade ago. So @cogmission has a valid concern about time and staying focused on biological intelligence systems.

Conversely, neuroscience studies may need to be guided by the needs of general AI systems. Knowing what you are looking for is a huge advantage for any research; we cannot even understand how a microprocessor works using conventional neuroscience methodology. There is only so much you can do without trying to model the system and without understanding its functional challenges.

I guess Numenta is saying that the brain is a repeating structure of processing units, and it wants to know exactly what each processing unit does. That should be doable (i.e. it is technologically feasible given sufficient time). Pay for some lab work.

Before Numenta there was the Redwood Neuroscience Institute, founded by Jeff Hawkins.

The goal of RNI was to develop a theoretical framework for the thalamo-cortical system.

Have you looked at the complexity of the system? There are some interesting diagrams on the HTM Cheat Sheet. There is a lot going on in that processing unit.


https://groups.google.com/forum/#!topic/comp.ai.neural-nets/cJuLBruZ6u0

Gradual learning Hopfield decision tree neuron network:

https://drive.google.com/open?id=0BwsgMLjV0BnhQlFfU0hnRkdULW8

Maybe it will even take too long to train; I don’t know yet. Anyway, it should be less of a memory hog than the first version.
I need to think about how the powerful decision-tree learning algorithm ID3 might be incorporated into the net. That could be very interesting.
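For reference, the heart of ID3 is just picking the split with the highest information gain. A sketch of that criterion (how the example batches would be gathered inside the net is the part still to be worked out):

```python
from math import log2

def entropy(labels):
    """Entropy of a list of +/-1 targets."""
    if not labels:
        return 0.0
    p = sum(1 for t in labels if t == 1) / len(labels)
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def best_split(examples, n_bits):
    """examples: list of (bit_vector, +/-1 target). Returns the bit index
    whose test yields the largest entropy reduction (information gain)."""
    base = entropy([t for _, t in examples])
    def gain(i):
        ones = [t for x, t in examples if x[i]]
        zeros = [t for x, t in examples if not x[i]]
        w = len(ones) / len(examples)
        return base - w * entropy(ones) - (1 - w) * entropy(zeros)
    return max(range(n_bits), key=gain)

data = [([1, 0, 1], 1), ([1, 1, 0], 1), ([0, 0, 1], -1), ([0, 1, 0], -1)]
print(best_split(data, 3))   # -> 0 (bit 0 alone separates the classes)
```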

Can Numenta neurons be put in a Hopfield network? Or are they effectively part of a Hopfield network anyway? The learning algorithms for the current generation of deep neural networks are fraught, non-local, and hardly understood even by their creators. Hopfield networks are very nice in that learning is local to each neuron, and you can define the attractor states very exactly if you reason about things correctly.
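For comparison, the classical Hopfield setup I mean (plain weighted sums here, not the tree neurons of the linked code): the Hebbian rule is local because each weight only ever sees its two endpoint neurons, and the stored patterns become attractors that recall pulls noisy states toward.

```python
def train(patterns, n):
    """Hebbian rule: w[i][j] accumulates p[i]*p[j] per pattern (local)."""
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=10):
    """Asynchronous updates walk the state downhill into an attractor."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

p = [1, -1, 1, -1, 1, -1]
w = train([p], 6)
noisy = [1, -1, 1, -1, 1, 1]      # one bit flipped
print(recall(w, noisy) == p)      # -> True: the attractor is recovered
```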

https://drive.google.com/open?id=0BwsgMLjV0BnhLUMzMVpEY2NoMDA

The above version can carve out attractor states rather well (within the limited testing that time has allowed me).
I think that is a validation of the ideas I put forward regarding decision tree neurons, dropout, and the Hopfield network.