How far has that conventional model taken us? My model distinguishes based on where the input comes from.
And back to the title question: there are two distinct natures of neural messaging - categorical and numerical. All conventional systems employ one or the other; why not both simultaneously?
No idea where you got that from; the difference between a spike and a spike train is just a matter of degree. Backprop can use binary values just as well, it will simply work worse.
If you can beat benchmarks, talk to a VC. We are not VCs; I only care whether your scheme makes sense. And I can't tell, because you are not explaining much.
I'm not looking for money, firstly. Secondly, let's come to a binary (yes or no) conclusion.
"There are two distinct natures of neural messaging - categorical and numerical" - that is the conjecture. At the least, it implements continual local learning on multiple irregular sequences, expressed in 400 lines. No BP, which is "elegant" but not that smart. How's that for a benchmark?
Bigger, more efficient systems comprise 10-15K lines. Still not enough, but a good POC.
Either the conjecture works or it does not. Nobody ever said it does not.
I'm not here to push my findings down your throats. Just wondering why an obvious simplification (duality) is not that obvious. Probably inertia - a lot of time, money, and effort already invested.
I'm free like a bird :-), do some coding, some thinking, some truck driving. "Are you smarter than a truck driver?" (c) - just kidding, peace.
Not very thoughtful, I'd say - conventional, though.
Numerous lines of research support the 100-step rule, input to output, in mammalian systems. That is the maximum number of synaptic transmissions from sense to response. Given the known hierarchy, this does not allow for much of a spike train at any stage of processing.
Much research centers on coincident (or nearly coincident) firing to support training or anti-training. This is not a train, but first-to-fire / phase relation to other neurons in the train. Does the model address this? If not, then I have questions.
Lastly - what about spike trains? Clearly, these provide a "volume" signal.
So first to fire, followed by amplitude values? But wait - there's more!
In the cortex, these actions are all contained in the coordination waves emanating from the thalamus. Recent papers are clear that phasic differences in relation to these coordinating wave trains carry significant information.
So: first to fire in relation to a coordinating wave, in relation to some distant cooperating part of an engram, and in relation to any lateral voting, followed by a further spike train/value as part of the information transmitted. Add in interactions with the inhibitory interneurons and there is a lot going on there.
Any proposed model will have to fit into this framework and match these known properties to make any sense to me.
That 100 steps is for immediate action; it doesn't mean there are no longer chains that can trigger action down the line. Thalamic transmission, with a lot of bursting, is mostly to higher areas, which is not immediate action.
Anyway, a burst (spike train) is a single step.
No. As far as I can make any sense of your "categorical", it would be something like a return address per input: a category represented by the presynaptic neuron (I have to speculate because you have some kind of explanation disability).
Neurons don't get such addresses; all inputs are purely numerical, be they binary or integer-valued spike trains.
Which disability is worse: explanation or comprehension? That's the question.
Have a great time, I'm leaving for a week. I'll try to explain better later. Think it over meanwhile.
My best argument: the stuff is working. Peace. Let's stop the flame war.
Logic gates have truth tables; why lock yourself into one specific type of gate like "and" when you can just implement a table? Whether you wish to understand ReLU as a switch or not, you can understand that it has two states: one when its input x < 0 and the other when x >= 0. And you can use, say, 4 ReLUs to get a 4-bit look-up index into 16 table entries.
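A minimal sketch of that table idea as I read it (a hypothetical illustration, not anyone's actual code): four ReLU-style units each contribute one bit from the sign of their pre-activation, and the four bits form an index into a 16-entry table.

public class ReluTableLookup {
    // Hypothetical parameters: four weight rows (one per "switch" bit) and 16 table entries.
    private final double[][] weights;   // shape [4][inputDim]
    private final double[] table;       // 16 entries selected by the 4-bit index

    public ReluTableLookup(double[][] weights, double[] table) {
        this.weights = weights;
        this.table = table;
    }

    // Each ReLU has two states: pre-activation < 0 or >= 0. Use that sign as one bit.
    public double lookup(double[] input) {
        int index = 0;
        for (int i = 0; i < 4; i++) {
            double pre = 0;
            for (int j = 0; j < input.length; j++) pre += weights[i][j] * input[j];
            if (pre >= 0) index |= (1 << i);   // the unit is in its "on" state
        }
        return table[index];                   // one of the 16 table entries
    }
}

The point of the free table entries is that nothing locks you into a fixed gate like "and": the same four switch bits can realize any function of the 16 index values.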
I tried that with a neural network using evolution for training, and it was a bit much for evolution to deal with.
I am a very recent convert to backpropagation, decades behind everyone else.
That's the price you pay for being a hobbyist and being too lazy to overcome a point that requires some effort to comprehend properly.
I have to say, though, that backpropagation works extremely well, and I may try it with look-up-table-type neural networks.
Reading through the (more than 400 lines of) single-threaded, not necessarily all that efficient Java code, and watching the YouTube videos... I have not read through any of the other papers, rather just looked at the code...
Some of the models have shifted away from using raw letters as inputs and toward using syllables, to reduce some of the L1-complexity equivalent (and to avoid nonsense input text).
In standard spoken (non-technical - created for written, not spoken, use) language, the input syllable count is relatively limited. Even if you spoke a 12-syllable word to someone, I very much doubt anyone other than a specialist (with prior priming) would know it or hold it in memory properly, e.g. the well-known (not) 12-syllable sulfoquinovosyldiacylglycerol. Just try even pronouncing the 17-syllable Pneumonoultramicroscopicsilicovolcanoconiosis. Some words are being created for specialist use and are not really intended to be spoken... which makes for an interesting differentiator between something that is intelligent with relatively few syllables vs. an ultra-all-encompassing 17+-syllable-capable model.
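To illustrate the syllables-instead-of-letters point, here is a very rough sketch (a hypothetical, naive splitter; real syllabification needs a dictionary or hyphenation rules): it just groups each vowel run with the consonants before it, so a long word becomes a dozen coarse chunks instead of one symbol per letter.

import java.util.ArrayList;
import java.util.List;

public class NaiveSyllables {
    // Naive, illustrative splitter: close a chunk whenever a vowel run ends.
    public static List<String> split(String word) {
        List<String> chunks = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean seenVowel = false;
        for (char c : word.toLowerCase().toCharArray()) {
            boolean isVowel = "aeiouy".indexOf(c) >= 0;
            if (seenVowel && !isVowel && current.length() > 0) {
                chunks.add(current.toString());   // a vowel run just ended
                current.setLength(0);
                seenVowel = false;
            }
            current.append(c);
            if (isVowel) seenVowel = true;
        }
        if (current.length() > 0) chunks.add(current.toString());
        return chunks;
    }

    public static void main(String[] args) {
        // Coarse syllable-like chunks rather than ~30 single-letter inputs.
        System.out.println(split("sulfoquinovosyldiacylglycerol"));
    }
}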