Hey there! So, I’ve been thinking about brain-computer interfaces and NuPIC. Does anyone know whether the output of a NuPIC region is compatible with the signals a real brain operates on?
For example, assume that you have direct access to 100 neurons, so you can read from and write to them.
We know that the brain can learn a direct translation of inputs - in Jeff’s book, he refers to the blind mountaineer using a tongue pressure device that provides an analog of sight. Would signals from further up a NuPIC hierarchy be as readily learned by the brain?
Say you trained a hierarchy that reduced a 100x100 grid to 50x50, then to 10x10, and interfaced that last region with your 100-electrode BCI array. Is what NuPIC does to the data similar enough to what the brain does that the brain could infer from and operate on the low-level 100x100 input, with only the 10x10 signals to work with?
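Just to make the shapes in my question concrete, here’s a toy sketch of the reduction I mean. This is not the real NuPIC API - the `pool` function is a made-up stand-in for whatever dimensionality reduction a spatial pooler actually does, and it just max-pools a binary grid. The point is only the 100x100 → 50x50 → 10x10 → 100-channel plumbing:

```python
# Hypothetical sketch, NOT real NuPIC code: each "region" is modeled as a
# simple max-pool over square blocks, standing in for a spatial pooler's
# dimensionality reduction. The function name `pool` is made up.

def pool(grid, factor):
    """Downsample a square binary grid by `factor` using max-pooling."""
    n = len(grid)
    assert n % factor == 0, "grid size must be divisible by the pooling factor"
    m = n // factor
    return [
        [
            max(
                grid[r * factor + dr][c * factor + dc]
                for dr in range(factor)
                for dc in range(factor)
            )
            for c in range(m)
        ]
        for r in range(m)
    ]

# A 100x100 binary input with one small active patch (rows 40-44, cols 60-64).
grid = [[0] * 100 for _ in range(100)]
for r in range(40, 45):
    for c in range(60, 65):
        grid[r][c] = 1

mid = pool(grid, 2)   # 50x50 "region"
top = pool(mid, 5)    # 10x10 "region"

# Flatten the 10x10 top region into one frame for the 100-electrode array.
electrodes = [bit for row in top for bit in row]
```

So the brain would only ever see the 100 `electrodes` bits per frame, and my question is whether it could learn to work back to the 100x100 input from that.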
Thanks for any insights!