Hypercomputing, compare and contrast?

I’ve just now been introduced to an interesting topic called Hypercomputing. I know almost nothing about it yet, but I’ve started reading this foundational paper and it seems to be a fascinating alternative approach to brain-like computing. Is anyone here an expert on the topic, and could they shed some light by comparing and contrasting it with HTM?

Hi,
Hyperdimensional computing is a separate idea; I think there is no real link to HTM.
It is an answer to today’s hyper-optimized systems, which have no resilience.
You can, however, find a few papers connecting Kanerva’s ideas and HTM.
Best,
Bela


That looks like a good review of the topic!
I have not read beyond the abstract though…

Over the years, many people have rediscovered these kinds of mathematics.
They have some fascinating properties, and it appears that the brain makes use of some of them too.

I found it fascinating when Numenta calculated the theoretical error rates of an HTM/SDR system in this article:
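To get a concrete feel for that kind of calculation, here is a rough sketch of the standard false-match probability for SDRs, i.e. the chance that a completely random SDR overlaps a stored one in at least θ bits. This is my own toy code, not Numenta’s, and the parameters (2048 bits, 40 active, thresholds 10 and 20) are just typical HTM-scale values picked for illustration:

```python
from math import comb

def sdr_false_match_probability(n: int, w: int, theta: int) -> float:
    """Probability that a *random* SDR (w active bits out of n) overlaps a
    fixed SDR (also w active bits) in at least theta positions, i.e. the
    chance of a false positive under threshold-theta matching."""
    total = comb(n, w)
    matches = sum(comb(w, b) * comb(n - w, w - b) for b in range(theta, w + 1))
    return matches / total

# Typical HTM-scale parameters: 2048 bits with 40 active.
for theta in (10, 20):
    p = sdr_false_match_probability(2048, 40, theta)
    print(f"theta={theta}: {p:.2e}")
# theta=10 comes out around 1e-9 and theta=20 around 1e-26: false matches
# are astronomically unlikely even with fairly loose match thresholds.
```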

Although they’re separate theories, they do seem to share some concepts. In hyperdimensional computing, objects are represented by (for example) 10,000-dimension binary vectors. Although it doesn’t share HTM’s idea of sparsity, it does share the idea that any ‘coincidence’ is extremely unlikely: the bit agreement between two random hypervectors is tightly concentrated around 50% (the standard deviation is only about 0.5% at 10,000 bits), so a random pair agreeing on fewer than about 47% or more than about 53% of its bits is beating odds of roughly a billion to one. This leaves the programmer with the task of finding a randomized encoding scheme that nonetheless preserves similarity between similar inputs, just like the task we face in designing HTM encoders. Consequently, any pair of vectors whose similarity falls outside that narrow band around 50% is known to reflect a genuine relationship.
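To make the ‘coincidences are unlikely’ point concrete, here is a small toy sketch. It is my own code, not from the paper; the 10,000-bit size, the `agreement` measure, and the bit-flip `encode` scheme are just illustrative choices. Unrelated random hypervectors agree on roughly half their bits, while a crude similarity-preserving encoder keeps nearby values well outside that noise band:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv() -> np.ndarray:
    """A dense random binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of bit positions on which two hypervectors agree."""
    return float(np.mean(a == b))

# 1. Unrelated random hypervectors agree on ~50% of bits, with a standard
#    deviation of only ~0.5% at D = 10,000, so anything far from 50% is
#    essentially never a coincidence.
x, y = random_hv(), random_hv()
print(f"random pair:              {agreement(x, y):.3f}")   # ~0.500

# 2. A toy similarity-preserving encoder: flip the first k bits (in a fixed
#    random order) of a seed vector, with k proportional to the scalar being
#    encoded, so nearby scalars share most of their flips.
seed = random_hv()
perm = rng.permutation(D)

def encode(value: float, max_value: float = 100.0) -> np.ndarray:
    k = int((D // 2) * (value / max_value))  # flip up to half the bits
    hv = seed.copy()
    hv[perm[:k]] ^= 1
    return hv

print(f"encode(10) vs encode(12): {agreement(encode(10), encode(12)):.3f}")  # ~0.99
print(f"encode(10) vs encode(90): {agreement(encode(10), encode(90)):.3f}")  # ~0.60
print(f"encode(10) vs random:     {agreement(encode(10), random_hv()):.3f}") # ~0.50
```

The flip-the-first-k-permuted-bits trick is just one simple way to get the “similar inputs map to similar vectors” property; real hyperdimensional encoders and HTM encoders face the same design question.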