RAIN Neuromorphics

The RAIN neuromorphic chip also has the potential for orders-of-magnitude higher densities at much lower prices than current computer chips, because what limits density (by driving up cost) in today’s chips is the difficulty of keeping a low defect rate while shrinking feature sizes. A modern two-core CPU is often a 6- or 8-core chip with the remaining cores scrapped because of one failed transistor out of millions.

With the RAIN chip, by contrast, there isn’t even a meaningful concept of a “defect”, since all connections are random anyway.
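For context on why yield drives this, a standard first-order approximation is the Poisson yield model, Y = exp(-A·D), where A is die area and D is defect density: larger (or effectively denser) dies lose yield exponentially. A quick illustration with made-up numbers:

```python
import math

D = 0.1                      # defects per cm^2 (made-up value)
for area in (1, 2, 4, 8):    # die area in cm^2
    yield_ = math.exp(-area * D)
    print(f"{area} cm^2 -> {yield_:.0%} expected yield")
```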

Wow!

7 Likes

Just like what Jeff said in the book…
The prophecy came true!

1 Like

Jack Kendall, one of the co-founders of RAIN, told us his work on this chip was inspired by On Intelligence.

5 Likes

And since we’re on the topic, if you don’t know about RAIN:

8 Likes

Haven’t watched the video yet (corporate firewall), but do you know if they have released any emulations which developers can begin experimenting with (short of having the actual hardware)? I would love to have an HTM implementation ready to use on this hardware.

The SP algorithm would be pretty straightforward to implement in this structure, I think. TM might be a bit more complex, but it could maybe be done with the electrodes playing the role of distal segments and axons rather than whole pyramidal neurons.
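As a rough illustration of why SP maps naturally onto a crossbar: the overlap computation is just a matrix-vector product between the input SDR and a (here randomly initialized) connection matrix, which is exactly what a memristive crossbar computes in analog. A minimal NumPy sketch; the sizes, sparsity, and winner count are my own assumptions, not anything from RAIN’s spec, and learning (permanence updates) is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed sizes and connection sparsity; nothing here comes from RAIN's actual spec.
n_inputs, n_columns = 1024, 2048
connected = (rng.random((n_columns, n_inputs)) < 0.02).astype(np.float32)

def sp_step(input_sdr, k=40):
    """One SP step: overlap (a crossbar-style matvec) plus k-winners-take-all."""
    overlaps = connected @ input_sdr        # on a memristor crossbar, one analog operation
    winners = np.argsort(overlaps)[-k:]     # global inhibition: keep the k best columns
    output = np.zeros(n_columns, dtype=bool)
    output[winners] = True
    return output

x = np.zeros(n_inputs, dtype=np.float32)
x[rng.choice(n_inputs, 20, replace=False)] = 1.0   # a sparse input SDR
print(sp_step(x).sum())                            # -> 40 active columns
```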

4 Likes

That was the first thing I asked Jack at the last meetup. He says not yet. Stay tuned.

4 Likes

There is also another important consequence of the random distribution of “dendrites” within each chip:
Each one will be unique, and irreplaceable.
Which means

  • A backup of synapse states will only be useful for restoring a previous state of the source chip. Another chip will have totally different connections, so restoring the state on another device will be impossible. It could perhaps be simulated in a GPU datacenter, but that would be very slow and expensive.
  • The manufacturer will be able to record signatures of all chips produced and enforce patents/licenses; “illegal copies” will be easy to spot.
  • strangely, an intelligent agent built from this type of “neuron” will be mortal, like living animals. If its chips are destroyed, their unique structure will be irreplaceable. Like flesh-and-blood agents, they will literally be prisoners of the physical structure of their “brain”.
  • which means the cost of intelligent agents of this kind will be driven not so much by hardware as by the time each of them spends training in a simulator.
4 Likes

Not necessarily correct!

I am working up an answer to a question that Matt asked about how the contents of the hippocampus are transferred back to the cortex.

In this case it revolves around spike-timing-dependent learning and reciprocal layer 2/3 connections, but in principle any method that induces a parallel connected sheet to read out a reciprocally connected sheet should do the trick. I can see that capability being added to side-load programming into a new product quickly.
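To make the “one sheet reads out another” idea concrete, here is a toy sketch of replay-driven Hebbian transfer; this is purely my own construction, not the mechanism described above: a source sheet is driven with random patterns, and a target sheet strengthens whatever connections are co-active, gradually approximating the source’s input/output mapping. How faithful the clone is depends on the replay statistics and the (ad hoc) threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256  # assumed sheet size

W_src = (rng.random((n, n)) < 0.05).astype(float)   # the wiring we want to transfer
W_tgt = np.zeros((n, n))                            # target sheet starts blank

for _ in range(5000):                               # "replay": drive both sheets together
    x = (rng.random(n) < 0.05).astype(float)        # a random sparse pattern
    y = (W_src @ x > 0).astype(float)               # source sheet's response
    W_tgt += np.outer(y, x)                         # Hebbian: strengthen co-active pairs

# keep only the strongest accumulated connections as the cloned wiring (ad hoc cutoff)
W_clone = W_tgt >= np.quantile(W_tgt[W_tgt > 0], 0.9)
```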

3 Likes

I needed to clarify what memristors actually are; this is a short and clear explanation that also covers the crossbar layout.

4 Likes

(I’ve been away for a bit, so I may have missed some info).

Is there a chance RAIN’s paper is going to be discussed in a streamed research meeting?

I don’t think so.

I was thinking about this. Although an exact copy wouldn’t be possible, a very good approximation would be, if topology were used in the encoded semantics (such that two electrodes physically close to each other share equally close semantic meaning). The copy process would work like this (a code sketch follows the list):

  1. Map out the random wiring on the new target device by iterating over each odd electrode, attempting a write, then reading with each even electrode. If the read is successful, record it as a connection and erase what was written during the test. This would take (#electrodes/2)^2 tests to complete.
  2. Map the state of the source device in a similar way, iterating over each odd electrode and reading the resistance with each even electrode. For any connection recorded, identify the most similar connection on the target device and copy the state there.

This of course would add noise to the encodings, but given that SDRs have high noise tolerance, it should be a close enough approximation.
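A minimal sketch of the probing loops, assuming a hypothetical device API with `write`, `read`, `erase`, and `read_resistance` primitives (none of these come from any real RAIN interface):

```python
# Hypothetical device API: write(i, v), read(j) -> value or None,
# erase(i), read_resistance(i, j). Nothing here is a real RAIN interface.

def map_wiring(device, n_electrodes, probe_value=1.0):
    """Step 1: discover which odd->even electrode pairs are physically wired."""
    connections = []
    for i in range(1, n_electrodes, 2):          # odd electrodes act as inputs
        device.write(i, probe_value)
        for j in range(0, n_electrodes, 2):      # even electrodes act as outputs
            if device.read(j) is not None:       # signal got through -> a memristor exists
                connections.append((i, j))
        device.erase(i)                          # clean up the test write
    return connections

def map_state(device, connections):
    """Step 2: record the resistance (synapse state) of every known connection."""
    return {(i, j): device.read_resistance(i, j) for (i, j) in connections}
```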

2 Likes

I’m not sure I get what you are suggesting, but the “identify the most similar connection on the target” part seems quite tricky. A simple “rerouting” might be possible via a permutation (i.e. a simple rewiring) of electrode positions, in order to obtain a network “similar” to the original. But I’m not sure whether that is cheaper than just placing the memristor wires one by one (in deliberate positions instead of random ones) in order to replicate the original structure.
By “impossible” I really meant an approximation of “too difficult to bother”.

By this I mean the connection whose endpoints are physically closest to the positions of the original endpoints (i.e. this would form a connection between two bits on the target which each share similar semantics with their counterparts on the source, due to their physical proximity).

Of course, this would require an algorithm to find the target connection with the closest endpoints to the source connection. Perhaps this part would be computationally expensive (the worst case would be comparing each source connection with every target connection).

There is probably a faster algorithm for this, though. For example, I could sort the target connections by the odd electrode position and only compare against connections whose first endpoint is within some threshold distance. This would greatly reduce the search space (at the cost of throwing out some connections which do not have close enough counterparts on the new target).
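One standard way to get that kind of speedup, sketched below with SciPy’s k-d tree (my choice of tool, not something proposed above): index the target connections by their endpoint coordinates, so each source connection is matched in roughly logarithmic time instead of by a full scan.

```python
import numpy as np
from scipy.spatial import cKDTree

# Each connection is ((x1, y1), (x2, y2)): the physical positions of its two endpoints.
def match_connections(source_conns, target_conns, max_dist=5.0):
    """Match each source connection to the nearest target connection by endpoint geometry."""
    tgt = np.array([[*a, *b] for a, b in target_conns])   # one 4-D point per connection
    tree = cKDTree(tgt)
    matches = {}
    for k, (a, b) in enumerate(source_conns):
        dist, idx = tree.query([*a, *b])     # nearest target connection
        if dist <= max_dist:                 # skip sources with no close counterpart
            matches[k] = idx
    return matches
```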

“Fixing small missing pieces of brain” should work anyway: replace the chip and apply the usual learning techniques so the new chip re-integrates the lost functionality.
In any case this problem, being able to replace a broken chip, is secondary to making the chip behave like a piece of brain tissue.

Which may begin with how a relatively uniform structure, connected “anarchically” in both X and Y directions, could be morphed into a hierarchical one, as in minicolumns or columns. I assume most connections of any layer are with neurons from the following layer, with fewer connections (if any?) within the same layer, plus some distant lateral connections.

One way to approach this could be, if the technology permits, to place the fibers not entirely at random but with some directional preference, e.g. along the vertical axis of the grid: statistically fewer horizontal fibers, a few more at an angle, and most of them vertical.
Another approach could be to electrically fry most lateral connections between contacts on the same row. Again, there’s a feasibility question mark here too.
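A quick sketch of what the first idea could mean statistically (the distribution and its concentration parameter are my own assumptions): draw fiber orientations from a von Mises distribution centered on vertical, so most fibers run up/down and only a tail runs sideways.

```python
import numpy as np

rng = np.random.default_rng(1)

def biased_fibers(n, kappa=4.0, length=3.0, grid=100):
    """Random fibers whose orientation is concentrated around vertical (pi/2)."""
    centers = rng.random((n, 2)) * grid
    angles = rng.vonmises(np.pi / 2, kappa, n)   # kappa=0 -> uniform; larger -> more vertical
    d = np.column_stack([np.cos(angles), np.sin(angles)]) * length / 2
    return centers - d, centers + d              # the two endpoints of each fiber

p, q = biased_fibers(10_000)
vertical_frac = (np.abs((q - p)[:, 1]) > np.abs((q - p)[:, 0])).mean()
print(f"{vertical_frac:.0%} of fibers are more vertical than horizontal")
```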

WRT minicolumns, the columnar shape could be “squashed”, using circular areas instead. This should work because collections of nearby electrodes in this chip are densely connected to each other. A hex-grid-based algorithm could be used in place of the SP algorithm for both sparsification and preserving topology.
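For a sense of what local, topology-preserving sparsification could look like, here is a minimal sketch (the radius and k are assumptions, and this is plain local inhibition rather than any specific hex-grid algorithm): an electrode stays active only if it is among the top-k within its own neighborhood.

```python
import numpy as np

def local_inhibition(activity, positions, radius=5.0, k=1):
    """Keep an electrode active only if it ranks in the top-k within its neighborhood."""
    active = np.zeros(len(activity), dtype=bool)
    for i, p in enumerate(positions):
        nearby = np.linalg.norm(positions - p, axis=1) <= radius
        # electrode i survives if its activity reaches the k-th largest among neighbors
        if activity[i] >= np.sort(activity[nearby])[-k]:
            active[i] = True
    return active
```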

Also, we could easily have multiple chips performing different functions which in a physical brain might be performed by the same population of cells. A given virtual cell need not be physically running in one single chip.

For example, some chips could be performing functions like classic SP, where the electrodes represent populations of cells (versus individual cells).
Other chips could perform other functions where the electrodes represent individual cells. Others could be performing functions (for example the TM algorithm) where the electrodes represent dendrite segments.

1 Like

I am very interested in your views about the memory consolidation process.

For those interested, this recent article in Nature is the clearest review I have read on the topic: “Mechanisms of systems memory consolidation during sleep”, by Jens G. Klinzing, Niels Niethard, Jan Born
https://www.nature.com/articles/s41593-019-0467-3

Complimentary access is provided by Nature via this link:
https://twitter.com/nresearchnews/status/1168137445059244033?s=09

6 Likes

Great paper! Thanks for posting it.
I am copying this link to the referenced thread.
I will still add some notes about spike-timing learning, but this covers much of the “other” material I needed in order to explain how the bits I have fit in. (The back story?)

3 Likes

Can I ask how you know the connections will be truly random, and not pseudo-random, as from some random-number-generating algorithm? (Fibonacci comes to mind.)

I don’t doubt you, but is this speculation, or did you read this in some specification or paper I don’t know of?