Thought it could be interesting, from
The University of Manchester
It will have over 50,000 chips of this type, each with 16 (18 − 2) productive cores. If I understand correctly, the idea is to have each core simulate a neuron, and to let the cores communicate among themselves without centralised memory management.
Wow, imagine the kinds of simulations they’ll be able to run on that monster. Of course a million cores simulating a neuron each (though insanely impressive) doesn’t really qualify it as a “human brain” supercomputer. It could probably fully simulate a cockroach brain though.
In the original article they mention trying to simulate a billion neurons, so maybe each core is supposed to simulate a thousand neurons or even more?
AFAIK each chip has 16 cores (17 actually; one is for redundancy), of which 15 are available for simulating neurons, and each core can handle up to 100 neurons.
So that's 50,000 × 15 × 100 = 75,000,000 neurons at full capacity?
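A quick sanity check of that arithmetic, using the figures quoted above (these are the thread's numbers, not official specs):

```python
chips = 50_000          # planned chip count for the full machine
cores_per_chip = 15     # cores available for neuron simulation, per the post above
neurons_per_core = 100  # upper bound quoted above

total_neurons = chips * cores_per_chip * neurons_per_core
print(f"{total_neurons:,}")  # prints 75,000,000
```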
The hardware seems generic enough (ARM cores). There are even adaptations of "classical" ML that run on it: http://apt.cs.manchester.ac.uk/projects/SpiNNaker/apps_other/
P.S. “80 neurons on each of the neuron cores”
FYI: SpiNNaker 2 will have 144 cores per chip, and each core is ~3x faster than the current one, with proper FP32 and other hardware-acceleration support. It is a total beast.
@marty1885 So far I have found two excellent platforms: SpiNNaker 2 from the Human Brain Project and Akida from BrainChip. I hope that we can port HTM to these platforms.
@thanh-binh.to Agreed, those are the best platforms out there. Do you have any ideas on how to get access (maybe we can work together)? The main SpiNNaker machine seems to be out of reach for non-Europeans. While HBP does sell a 4-chip SpiNNaker module to civilians worldwide, the price is too high if I can't guarantee results and a publication. Likewise, Akida doesn't look cheap enough to experiment with.
Otherwise I really want to have HTM on these systems.
@marty1885 As far as I know, you can access this platform only through a project collaboration. Personally I don't have access to either of them, but I know they will be available only to a limited number of selected users.
It looks like BrainChip has released an SDK/API for their Akida processor (here).
Anybody looking at this with HTM in mind?
I found it very interesting too, and it is not so expensive ($2,500 for a system with a Raspberry Pi 4).
It is configured for spiking NNs, but I believe it could be used for HTM with significant effort!
My thinking was that someone might use the simulator to develop and test a small HTM model. The software is free, but if there were good results it certainly would justify buying the hardware. Another possibility is that BrainChip might loan a system to a researcher who showed promising results with just the simulator so that they could do a performance evaluation.
There is an article on MedicalXpress about some of the math research:
I still say the fast Walsh-Hadamard transform, used as a connectionist device, is a way to replicate the massive connectivity of biological neural networks.
The WHT is quite efficient on current CPUs, and presumably also on GPUs. On a special-purpose chip I think you could get close to the energy efficiency of biological connectivity, presuming the energy cost of, say, 20 simple add/subtract operations is similar to the connection cost of a neuron. However, the universe is not listening.
The WHT really prefers CPUs with a large L1/L2 cache size.
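For reference, here is a minimal sketch of the transform being described (a textbook in-place FWHT; not tied to any particular library or chip). Each of the n outputs mixes all n inputs using only additions and subtractions, in n·log2(n) operations rather than the n² of a dense layer — which is the "massive connectivity on the cheap" argument above:

```python
import numpy as np

def fwht(a):
    """In-place fast Walsh-Hadamard transform (unnormalized).
    Input length must be a power of two."""
    a = np.array(a, dtype=float)
    n = len(a)
    h = 1
    while h < n:
        # Butterfly stage: combine elements h apart with one add and one subtract.
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```

The transform is its own inverse up to a factor of n, so applying it twice recovers n times the input — handy for checking an implementation.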
One other thing I have been thinking about is keying neural networks.
That is, adding a key pattern to the input of a neural network to provide extra information, such as the (x, y) location of the input within a larger image. You could have two random x vectors, X1 and X2, and two random y vectors, Y1 and Y2. The key vector added to the input would then be a linear combination of those. If you normalize x and y to the range 0 to 1, then:
key = x.X1 + (1-x).X2 + y.Y1 + (1-y).Y2
It might be a way to use smaller networks and keep all the processing within the CPU or GPU caching structures.
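A minimal sketch of that keying scheme (the dimensionality and vector names here are illustrative, not from any particular implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # key dimensionality; in practice, match your input width

# Two fixed random basis vectors per axis, as described above.
X1, X2 = rng.standard_normal(d), rng.standard_normal(d)
Y1, Y2 = rng.standard_normal(d), rng.standard_normal(d)

def position_key(x, y):
    """Key vector for a patch at normalized location (x, y) in [0, 1]^2:
    key = x*X1 + (1-x)*X2 + y*Y1 + (1-y)*Y2"""
    return x * X1 + (1 - x) * X2 + y * Y1 + (1 - y) * Y2

patch = rng.standard_normal(d)            # some input patch
keyed = patch + position_key(0.25, 0.75)  # input plus location information
```

Because the key varies smoothly with (x, y), nearby patches get similar keys, so a small network processing one patch at a time still knows roughly where in the larger image the patch came from.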