HTM as a default prediction AI algorithm on SingularityNet?

SingularityNet.io has received a lot of publicity recently.

To recap, it is the idea of a common protocol for AI services, managed economically and facilitated by distributed-consensus technology.

I recently wrote an answer on Quora about this idea and how it might progress in the future. (It may be useful background for this question.)

The question I have is this: given that a network of AI agents could resemble a network of cortical regions that come together to produce reactive, informed behavior, do you think the AI algorithm that grows to permeate the network will be a generic temporal sensorimotor pattern-recognition algorithm like HTM? Or do you see the AI algorithms used on the network remaining highly diverse forever? Would an HTM region simulation make a good generic “miner” on such networks?

Apart from the cringe-inducing hype of singularitynet.io and neureal.net, a “web of AIs” needs discussing. Jeff touched on this subject in “On Intelligence”:

… we might unite a bunch of intelligent systems in a grand hierarchy, just as our cortex unites hearing, touch, and vision higher up the cortical hierarchy. Such a system would automatically learn to model and predict the patterns of thinking in populations of intelligent machines. With distributed communications mediums such as the Internet, the individual intelligent machines could be distributed around the globe. Larger hierarchies learn deeper patterns and see more complex analogies.

This paragraph implies, I think, that the process of tying the individual AIs together would be centrally organized by a single entity such as a company or government. However, that would be a very un-web and un-internet thing to do. The philosophy of the internet is one of a great marketplace of individual players constantly renegotiating their relationships with one another.

However, this is easier said than done. Quoting from singularitynet.io:

No communication: There is no way for AI communicate data to each other and coordinate processing. Everything has to be done manually and is really expensive.
No AI discovery: There is no way to find AI services, nor any way to judge the quality of an AI service without using it. This creates unnecessary risk.

You don’t have to go all the way to a “web of AI” to run into these limitations. Search engines never really worked well even for simple text search, and they never covered the business aspects in the first place. For the burgeoning economy of web APIs (not even intelligent ones), the whole thing comes apart: your best bet for finding an API is to manually search a textual description of it and go from there. How sad is that?

So instead of worrying about what type of algorithms the AIs at the edge of the “web of AI” are running (this should be opaque to the network), why not use AI to solve the communication and discovery problems above? These problems are, if approached naively, resource-intensive, and they need the smartest of algorithms to get the most out of limited bandwidth.
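To make the discovery problem concrete, here is a minimal sketch of what an AI-assisted service registry could look like: services advertise textual descriptions, and queries are matched by semantic similarity rather than exact keywords. The `Registry` class, the toy bag-of-words embedding, and the example service names are all hypothetical illustrations (a real system would use learned embeddings and a distributed index), not anything singularitynet.io actually specifies.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real registry would use learned vectors."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Registry:
    """Hypothetical AI-service registry: rank services against a free-text query."""
    def __init__(self):
        self.services = {}  # service name -> description embedding

    def register(self, name, description):
        self.services[name] = embed(description)

    def discover(self, query, top_k=3):
        q = embed(query)
        ranked = sorted(self.services.items(),
                        key=lambda kv: cosine(q, kv[1]), reverse=True)
        return [name for name, _ in ranked[:top_k]]

# Example: three made-up services, one free-text query.
reg = Registry()
reg.register("anomaly-htm", "streaming anomaly detection on temporal sensor data")
reg.register("vision-cnn", "image classification and object detection")
reg.register("nlp-sum", "text summarization language model")
print(reg.discover("detect anomalies in sensor streams", top_k=1))
```

The point of the sketch is only that discovery becomes a ranking problem over descriptions, which is exactly the kind of pattern matching an AI on the network could provide as a service itself.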

Ditch the blockchain, which is really a badly scaling, brute-force way of solving a “no trust” problem that, for the most part, doesn’t exist in any postdiluvian society. Use AI to build trust relationships, level them up into network connections, and turn society into a tangible network of computer nodes. HTM is your place to start.
