BrainX architecture

Hi,
Please watch the video below and let me know your thoughts on this architecture in the comments.

1 Like

I didn’t understand the “Wiggle” part, which seemed key. What’s the mechanism? Is it related to neuron excitation/inhibition?

2 Likes

It combines the previous layer’s output with the current layer’s output. It sort of updates the previous layer with the current layer.

The update is used to cluster vectors that are impossible to cluster on the same layer; they can be impossible because the clusters are too far apart, or for other reasons.
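Roughly, in code, the combination step might look like this (just a sketch; the actual rule is not worked out yet, and `alpha` is a placeholder mixing weight of mine):

```python
import numpy as np

def wiggle(prev_out: np.ndarray, curr_out: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Combine the previous layer's output with the current layer's output.

    The result is written back as the updated previous-layer representation,
    so vectors whose clusters sit too far apart on one layer can be pulled
    together. `alpha` is a placeholder weight, not something I have derived.
    """
    return alpha * prev_out + (1.0 - alpha) * curr_out
```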

1 Like

I like the animations. But I need more than 3-10 seconds to understand each concept, so a written format (like a blog post) might be more appropriate. Or, ya know, a white paper :grin:

Do you have any experimental results? That is usually a good hook to get people interested in your ideas.

2 Likes

I am documenting the architecture step by step right now… I will post it here once it is ready. It is still at the theory stage, with no experimental results as of now. The mathematics behind this architecture is not allowing me to create a prototype; a mathematical framework has not been developed yet. Do you have any advice for creating the math for this architecture? For example, how do you create the math when you have a theory in hand?

1 Like

I don’t have any good advice; does anyone else?

I myself have a lot of algorithm ideas, but when I sit down to write out “okay, what does it actually do?”, it gets really difficult! My best strategy is to reduce the scope/features as much as possible, to show the most basic result, but often I wind up with more questions than answers. It’s tough.

2 Likes

Ok man… I understand… thank you for responding…

Typically, one way may be to figure out how to represent the input and output in a mathematical formalism. Then the intermediate computations and mapping functions can be developed. Will be glad to hear more insights.
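For instance, a minimal sketch of what that formalism could look like, with generic placeholder symbols (nothing here is specific to your architecture):

```latex
% Input and output represented as feature maps:
x \in \mathbb{R}^{H \times W \times C}, \qquad
y \in \mathbb{R}^{H' \times W' \times C'}

% The network as a parameterized map between them:
f_\theta \colon \mathbb{R}^{H \times W \times C} \to \mathbb{R}^{H' \times W' \times C'}

% ...decomposed into the intermediate computations to be developed:
f_\theta = f_L \circ f_{L-1} \circ \cdots \circ f_1
```

Once the input, output, and overall map are pinned down like this, each intermediate $f_\ell$ becomes a smaller, concrete thing to define.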

1 Like

Input and output are both feature maps. The encoder consists of a convolution layer plus a trainable 3D distribution density that scales the input (or the bias). The decoder is a trained 3D distribution density that scales the feature map, plus a deconvolution layer.
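Very roughly, I picture it something like this (a sketch only, assuming PyTorch-style layers; the class names, shapes, and per-position scaling are placeholders of mine, not a worked-out design):

```python
import torch
import torch.nn as nn

class Density3D(nn.Module):
    """Placeholder for the trainable 3D distribution density:
    here, just one learnable scale per (channel, row, col) position."""
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        self.density = nn.Parameter(torch.ones(channels, height, width))

    def forward(self, x):
        return x * self.density  # element-wise scaling of the feature map

class Encoder(nn.Module):
    """Trainable 3D density scales the input, then a convolution layer.
    (The "(or) bias" variant would scale the conv bias instead; not shown.)"""
    def __init__(self, in_ch, out_ch, height, width):
        super().__init__()
        self.density = Density3D(in_ch, height, width)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.density(x))

class Decoder(nn.Module):
    """Trained 3D density scales the feature map, then a deconvolution layer."""
    def __init__(self, in_ch, out_ch, height, width):
        super().__init__()
        self.density = Density3D(in_ch, height, width)
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.deconv(self.density(x))

# Shape check: a 32x32 RGB input goes in and comes back out the same size.
x = torch.randn(1, 3, 32, 32)
y = Decoder(16, 3, 32, 32)(Encoder(3, 16, 32, 32)(x))
print(y.shape)  # torch.Size([1, 3, 32, 32])
```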
I don’t know if the above will work, but I will try anyway. I still cannot make things clear, because the complex problems are in the intermediate stage. So thank you, I will try to improve by working backwards from the input and output math representations, as you said.
If you are interested in this concept, you can join my Discord server… DM me your email and I will send the link.

1 Like

Some quick first-thought questions around this architecture:
How are random neurons different from chained neurons? If chained neurons are created just for the sake of partitioning, why do random neurons connect to these chained neurons?

I don’t see any temporal memory or spatial pooling algorithm. Are you implementing classic ML point neurons that learn based on weights?

What is the process of wiggling?

1 Like

Chained neurons act like a passageway for the values of the input feature map to pass through the network. Random neurons act like a biological brain: they are attracted to the feature-map values in the chained neurons, come closer, and become non-random, fixed-position neurons. These fixed neurons are the equivalent of synapses between neurons.
It is a novel approach; traditional concepts are not followed. A few concepts may be borrowed, but the whole thing is different.
The Wiggle connection transfers the combined clusters of the currently active layer plus the previously activated layer to the second network’s layers; this is done to reduce complexity.
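In toy code, the attract-and-freeze behaviour might look something like this (only a sketch of my own; the distances, step size, and attraction rule are placeholders, not worked-out math):

```python
import numpy as np

rng = np.random.default_rng(0)

# Chained neurons: fixed "passageway" positions holding feature-map values.
chained_pos = rng.uniform(0.0, 10.0, size=(5, 2))
# Random neurons: free-floating, attracted to the nearest chained neuron.
random_pos = rng.uniform(0.0, 10.0, size=(20, 2))
frozen = np.zeros(len(random_pos), dtype=bool)

STEP = 0.1         # how far a random neuron moves per step (placeholder)
FREEZE_DIST = 0.2  # distance at which it fixes in place (placeholder)

for _ in range(500):
    for i in range(len(random_pos)):
        if frozen[i]:
            continue
        # Find the nearest chained neuron and move toward it.
        dists = np.linalg.norm(chained_pos - random_pos[i], axis=1)
        target = chained_pos[np.argmin(dists)]
        gap = target - random_pos[i]
        if np.linalg.norm(gap) < FREEZE_DIST:
            frozen[i] = True  # fixed in place: plays the role of a synapse
        else:
            random_pos[i] += STEP * gap / np.linalg.norm(gap)

print(f"{frozen.sum()} of {len(frozen)} random neurons became fixed")
```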

1 Like