Swarm intelligence

This is fascinating, because I’ve always liked the idea of the ant as a neuron in a hive mind.

1 Like

Ah - ants!
See the “Ant Fugue” dialogue (featuring Aunt Hillary, a conscious ant colony) in the book Gödel, Escher, Bach.

1 Like

“…Can we use the tools of psychology to understand how colonies of social insects make decisions…

Might get you 97% of the way.

To bridge the last 3%, as you all are outstandingly endeavoring to do, one modestly suggested flight path, in three arcs:

1. Get to 98% by outdoing nature: conjecture the behavior of a hierarchical, and of course socially re-entrant, federation of hives, while thoroughly grounding it in the “flat-hive” behavioral science of the day. It might look not unlike the haptic sensing and motor-control neural system of an advanced warm-blooded animal.

2–3. Get to 99% and 100% by positing visual and aural analogues of this two-way haptic interstate. I squandered decades of avocational reflection and reams of dinner napkins puzzling over information-theoretically efficient ways to attain enough scale, panoramic, and rotational invariance not to walk into a wall. Then, thanks in no small part to your forum (which I think of as providing message paper, ink, corks, bottles, an occasional copy of the Mercury-News, and an ocean in which to toss things) it hit me. The most efficient way to instantiate a thousand brains, and their learning, might be into a couple of dozen cerebella, some too small to be seen (or yet noticed) by the human eye, a couple of pairs of which would hypothetically reside in the foveas and cochleas.

Once more, I’m not a neuroscientist, but it was my good fortune to cross paths with some outstanding ones. For clarity, my role was to be the smartest dumb business person in the room.

My prior life experience had been that plumbing the bus structure and communication/control network architecture of any information system, HW or SW, reveals as much as deep insight into the individual nodes does. In electronics-based systems, where processing power and memory capacity are notionally limitless and thereby often wastefully used, rack/backplane bus bandwidth may yet be a constrained resource. And there is no getting around time-of-flight delays, other than making the path as straight and low-k as possible.

In any case, we talked about the channel capacities of the neural system – and one of my colleague’s early passing remarks stuck with me. Actually, many did – but this is the relevant one:

“Our visual model of what’s around us is so reduced, vs what we might intuitively think by counting pixels”

Not only reduced, I’d now venture, but also distributed and cached. As I thought about how many things might be explainable by the presence of such localized processing, including why our tolerance for latency can compatibly range over orders of magnitude, from 50 milliseconds to 50 seconds, I just stopped. If I wrote them all down, I’d run out of message paper and wouldn’t be able to fit it into a bottle.

Not only visual – but aural, too.

So one last outrageous ignoramus notion from the dumb side of the conference table.

Follow the money.

Which – of course, in advanced warm-blooded animals – is the oxygenated and glucosed blood supply.

Godspeed.

PS

The “self-gedanken” experiment: just fixate on something while slowly and repetitively nodding your head 15 degrees to the right and left. As usual, the higher-order cognitive model just sails on through, but do you notice any subtle changes in saccadic suppression?

I like the intuition here, and I’m a fan of psychology, by the way.

As an initial take on their similarities, I’d like to investigate the following ACO equation:

https://en.m.wikipedia.org/wiki/Ant_colony_optimization_algorithms
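
For reference, this is the edge-selection probability from that article, transcribed here (α and β weight the pheromone and the heuristic desirability, respectively):

$$
p_{xy}^{k} = \frac{(\tau_{xy})^{\alpha}\,(\eta_{xy})^{\beta}}{\sum_{z \in \mathrm{allowed}_{k}} (\tau_{xz})^{\alpha}\,(\eta_{xz})^{\beta}}
$$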

It fascinates me how each of the terms here may relate to the SP or TM algorithm. This is the probability of the k-th move of an ant, which is analogous to the next synapse update of a neuron. There is more to this; I can elaborate later.

1 Like

@rhyolight just posted a link to a paper slightly relevant to the current focus of this thread:

2 Likes

Focusing only on a single synapse: the τ in the equation above, the pheromone amount between x and y, is similar to a synapse permanence value; both reflect the probability of a path or connection being used. The η, the a priori desirability of the path (x, y), I don’t think is very useful for the SP, since it is doing unsupervised learning. The pheromone evaporation update is also similar to the synapse permanence decrement; both slowly forget a path/bit as it becomes irrelevant to the current input. All of this is mostly intuition, but I think the takeaway is that the SP’s computational aspect can be studied with methods similar to those used for ACO, and the two may belong to the same superset of algorithms. That is what generally interests me about the SP: it may be formalized in the computer-science domain and implemented in applications.
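
To make the parallel concrete, here is a minimal sketch of the two “forgetting” updates side by side. The parameter names (rho, dec, floor) are hypothetical, and this is not the actual NuPIC/HTM API, just an illustration of the analogy:

```python
# Minimal sketch with hypothetical parameter names; NOT the NuPIC/HTM API.

def evaporate_pheromone(tau, rho=0.1):
    """ACO: pheromone on an edge decays by a factor (1 - rho) each iteration."""
    return (1.0 - rho) * tau

def decrement_permanence(perm, dec=0.01, floor=0.0):
    """SP-style: a synapse aligned with an inactive input bit is nudged down
    by a fixed decrement and clipped at a lower bound."""
    return max(floor, perm - dec)

# Both updates slowly "forget" a path/connection that stops being reinforced.
tau, perm = 1.0, 0.5
for step in range(5):
    tau = evaporate_pheromone(tau)
    perm = decrement_permanence(perm)
    print(f"step {step}: pheromone={tau:.3f}  permanence={perm:.3f}")
```

The form differs (multiplicative decay versus additive decrement), but the qualitative role, slow forgetting of unreinforced paths, looks the same.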

1 Like