Hypercomputing, compare and contrast?

I’ve just now been informed of an interesting topic called Hypercomputing. I know almost nothing of it yet, but I’ve started reading this foundational paper, and it seems to be a fascinating alternative approach to brain-like computing. Is anyone here an expert on the topic who could shed some light by comparing and contrasting it with HTM?

Hi,
Hyperdimensional computing is a different idea; I don't think there is any real link to HTM.
It is an answer to today's hyper-optimized systems, which have no resilience.
You can, however, find a few papers connecting Kanerva's ideas and HTM.
Best,
Bela


That looks like a good review of the topic!
I have not read beyond the abstract though…

Over the years, many people have rediscovered this type of mathematics.
It has some fascinating properties, and it appears that the brain uses something along these lines too.

I found it fascinating when Numenta calculated the theoretical error rates of an HTM/SDR system in this article:

Although they're separate theories, they do seem to share some concepts. In hyperdimensional computing, objects are represented by (for example) 10,000-dimensional binary vectors. Although it doesn't share HTM's idea of sparsity, it does share the idea that any 'coincidence' is extremely unlikely. Two random hypervectors agree on about 50% of their bits, with a standard deviation of only 50 bits, so a random pair agreeing on fewer than roughly 47% or more than roughly 53% of their bits (about six standard deviations out) is beating odds of about a billion to one. This leaves the programmer with the task of finding a randomized encoding scheme that nonetheless preserves similarity between similar inputs, just like the task we have in making HTM encoders. Consequently, any pair of vectors whose similarity falls outside that range can be taken as significant.
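
To make that concrete, here is a minimal sketch (my own illustration, not from any of the papers above) of how tightly the agreement between random 10,000-bit hypervectors concentrates around 50%, and how a simple significance test falls out of that. The dimension and the 47-53% band follow the discussion above; everything else (function names, noise level) is just illustrative.

```python
import numpy as np

D = 10_000                         # hypervector dimensionality, as in Kanerva's examples
rng = np.random.default_rng(0)

def random_hv():
    """Dense binary hypervector: each bit is 0 or 1 with probability 0.5."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def agreement(a, b):
    """Fraction of bit positions on which two hypervectors agree."""
    return float(np.mean(a == b))

# Agreement between unrelated random vectors is Binomial(D, 0.5)/D:
# mean 0.5, standard deviation ~0.005 (i.e. 50 bits out of 10,000).
samples = [agreement(random_hv(), random_hv()) for _ in range(1000)]
print(f"mean={np.mean(samples):.4f}  std={np.std(samples):.4f}")
# typically prints something very close to: mean=0.5000  std=0.0050

# So agreement outside roughly 47-53% (about six standard deviations) essentially
# never happens by chance and can be treated as a meaningful similarity signal.
def is_significant(a, b, lo=0.47, hi=0.53):
    s = agreement(a, b)
    return s < lo or s > hi

x = random_hv()
noisy_x = x.copy()
flip = rng.choice(D, size=D // 3, replace=False)   # corrupt a third of the bits
noisy_x[flip] ^= 1

print(is_significant(x, random_hv()))   # False: unrelated vectors agree on ~50% of bits
print(is_significant(x, noisy_x))       # True: ~67% agreement is far outside the band
```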

I agree - looks like the same space mathematically as HTM (sparsity aside).
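
For the sparse side of that comparison, here is a rough sketch of the kind of error-rate calculation referred to above for HTM/SDRs: the probability that two random sparse vectors overlap enough to be mistaken for a match. The parameter values (n=2048, w=40, theta=20) are illustrative choices of mine, not numbers taken from the Numenta article.

```python
from math import comb

def false_match_probability(n, w, theta):
    """
    Probability that a random SDR with w active bits out of n overlaps a fixed
    SDR (also with w active bits) in at least theta positions.
    Counting argument: pick b of the overlapping positions from the fixed
    vector's w active bits, and the other w - b active bits from the n - w
    inactive positions.
    """
    total = comb(n, w)
    matches = sum(comb(w, b) * comb(n - w, w - b) for b in range(theta, w + 1))
    return matches / total

# Illustrative parameters only: 2048-bit SDR, 40 active bits, match threshold 20.
print(false_match_probability(n=2048, w=40, theta=20))
# Astronomically small (far below 1e-20): with high sparsity, accidental matches
# are about as unlikely as with dense 10,000-bit hypervectors, using far fewer
# active bits -- which is the sense in which the two theories share the same math.
```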

Great talk here:
Stanford Seminar - Computing with High-Dimensional Vectors - YouTube

With slides here:
https://web.stanford.edu/class/ee380/Abstracts/171025-slides.pdf

Referenced by work on Intel's current (Sept 2021) neuromorphic chips:

Regarding Kanerva's slides/presentation, one important problem comes up at slide 25: using associative memory for nearest-neighbour search.

Because his system uses very large, dense vectors, this is inherently a tough challenge for the same reason he finds them useful: every point in that space is nearly orthogonal to >99% of all other vectors.
That means that, for any given vector, the "neighbours" and the non-neighbours are almost equally far away, so it is computationally expensive to build and use a searchable index.
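
Here is a toy sketch (mine, not Kanerva's) of the associative "cleanup memory" idea from around slide 25, done the brute-force way: store a handful of named hypervectors and return the one nearest a noisy query in Hamming distance. Printing the distances shows the point above: the stored match sits far closer than everything else, but all the non-matches cluster tightly around 5,000 bits, leaving no "middle distance" structure for a sub-linear index to exploit.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)

# Item memory: a few named random hypervectors (the "stored" patterns).
items = {name: rng.integers(0, 2, size=D, dtype=np.uint8)
         for name in ["apple", "banana", "cherry", "durian"]}

def cleanup(query):
    """Return the name of the stored item nearest the query in Hamming distance."""
    return min(items, key=lambda name: int(np.count_nonzero(items[name] != query)))

# Corrupt 30% of the bits of "banana" and see whether cleanup still recovers it.
query = items["banana"].copy()
flip = rng.choice(D, size=int(0.30 * D), replace=False)
query[flip] ^= 1

print(cleanup(query))                              # -> banana
for name, hv in items.items():
    print(name, int(np.count_nonzero(hv != query)))
# "banana" ends up ~3,000 bits from the query; every other item sits near
# 5,000 +/- ~50 bits. The noisy query is therefore unambiguous, but brute force
# is O(items x D), and the lack of any intermediate distances is exactly what
# makes building a cleverer (sub-linear) index so hard.
```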

I am sure you are right; however, according to this later paper (Jun 2021):

They seem to be proceeding with effective implementations anyway:

What am I missing?