Speculating on the possibility of significantly speeding up encoding using vortex math…

The reason is that it reduces large numbers very quickly…

Try this example on github https://github.com/MTIzNDU2Nzg5/vortex-math

We might be able to expand the system so that 9 isn't the highest number…

I don't know how to assign meaning to the numbers; somebody smart, please speculate on the possibility…

Could be useful as an extreme regressor or something…

Yes, that person is kinda mean, but it might be useful…

Quote from RationalWiki: **Pseudomathematics**… That concerns me.

Can you explain exactly how it is going to accelerate learning, or provide a reference implementation/examples?

So the vortex math system reduces every integer to a single digit between 0 and 9

One possibility is to reduce a large number, but append its digit length to indicate a level

Example:

2^15 = 32768 reduces to 8 under vortex math, but you append the length of the initial number, so it would be 8,5 (5 is the length of the initial number)

2^25 = 33554432 reduces to 2 under vortex math, but you append the length of the initial number, so it would be 2,8 (8 is the length of the initial number)

so 2,8 > 8,5 because the 2 sits at a higher level
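To make the idea concrete, here's a minimal Python sketch. I'm assuming the "vortex" reduction is the standard digital root (repeatedly summing digits until one digit remains); the names `digital_root` and `vortex_encode` are my own, not from any reference implementation.

```python
def digital_root(n: int) -> int:
    """Repeatedly sum the digits of n until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def vortex_encode(n: int) -> tuple[int, int]:
    """Return (reduced digit, digit length of the original number)."""
    return digital_root(n), len(str(n))

print(vortex_encode(2**15))  # (8, 5): 32768 -> 3+2+7+6+8 = 26 -> 8
print(vortex_encode(2**25))  # (2, 8): 33554432 -> 29 -> 11 -> 2
```

Comparing two encoded values would then mean comparing the length (level) first, and the reduced digit only as a tiebreaker.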

One use might be to assign meaning to very large numbers of dendrites.

You could easily reduce a million dendrites and assign meaning to the result.

sum(1 million dendrites) = big number here

apply vortex to big number = reduced number plus length

One counterargument might be rounding: it would also speed up this operation, but it would not carry the same meaning.

Again, this is just speculation. If you've heard of cellular automata, vortex math is like a cellular automaton rule.

If I get the gist of what you’re suggesting, you’re saying that vortex math (I think some folks here are on the fence about the validity of that branch of mathematics), which acts as a squashing or compressing function (see how sigmoid or ReLU functions work), will speed up learning…

I’m not following the logic of where the speedup comes from.

While I’m a fallible human, my understanding of computation is that we always need to balance work vs. memory. We might be able to compress the information of those dendrites and come to approximations of their current state via some decompression method, but doing that will incur computational and accuracy costs.

Again, in my fallibility, perhaps I just don’t get it. You’re exchanging memory usage for more computation and potentially inaccurate approximations. At that point, I’m not sure where it would be more beneficial than just using semi-random activations based on background noise from the environment (analog static in a circuit, for example)… that would at least be computationally cheaper while saving memory, but perhaps still ding you on reproducibility and accuracy.

If I’m just not getting it, let me know.

The brain (and HTM) doesn’t have to search a large space to predict and learn, so I don’t think it would stand to gain much from an optimisation like this.

I won’t judge.