Brain-computer interfaces

Hey there! So, I’ve been thinking about brain-computer interfaces and NuPIC. Does anyone know if the output of a NuPIC region is compatible with the signals a real brain operates with?

For example, assume that you have direct access to 100 neurons, so you can read from and write to them.

We know the brain can learn direct translations of inputs: in Jeff’s book, he refers to the blind mountaineer using a tongue-pressure device that provides a sight analog. Would signals from further up a NuPIC hierarchy be as readily learned by the brain?

If you trained a hierarchy that reduced a 100x100 grid to 50x50 and then to 10x10, and interfaced that last region with your 100-electrode BCI array, is what NuPIC does to the data similar enough to what the brain does that the brain could infer from and operate on the low-level 100x100 input, with only the 10x10 signals to work with?
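To make the question concrete, here is a minimal sketch of such a hierarchy using the community htm.core fork’s SpatialPooler. All parameter values are my own illustrative assumptions, not a tuned configuration:

```python
# Sketch of a two-level pooling hierarchy: 100x100 -> 50x50 -> 10x10.
# Uses the community htm.core fork; parameters are illustrative only.
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler

sp1 = SpatialPooler(inputDimensions=[100, 100], columnDimensions=[50, 50],
                    globalInhibition=True, localAreaDensity=0.02)
sp2 = SpatialPooler(inputDimensions=[50, 50], columnDimensions=[10, 10],
                    globalInhibition=True, localAreaDensity=0.02)

sensor = SDR([100, 100])   # the raw 100x100 grid
mid    = SDR([50, 50])
top    = SDR([10, 10])     # 100 cells: one per BCI electrode

sensor.randomize(0.02)     # stand-in for real sensor data
sp1.compute(sensor, True, mid)   # learn=True
sp2.compute(mid, True, top)

# The ~2% active cells in `top` would be the electrodes to stimulate.
print("active cells:", top.sparse)
```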

Thanks for any insights!


@marionlb has worked on this. Here are a few projects:


I’m interested in HTM direct-BCI projects (not EEG, but direct neuron stimulation).

The format, the SDR, does seem compatible with what is passed between brain regions.
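For instance (just to make the format concrete, with an entirely made-up electrode mapping), an SDR is a large, mostly-zero binary array whose few active bits could be read off as a stimulation pattern:

```python
# Hypothetical illustration: reading an SDR's active bits as the set of
# electrodes to stimulate. The electrode mapping is invented for the example.
from htm.bindings.sdr import SDR

sdr = SDR([100])      # one bit per electrode in a 100-electrode array
sdr.randomize(0.05)   # SDRs are sparse; here ~5% of bits are active

electrodes_to_fire = list(sdr.sparse)
print("stimulate electrodes:", electrodes_to_fire)
```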

But there are many more details, at each scope:

  • High level: brain structure, neural hubs → in HTM, the “1000 Brains” theory and the NetworkAPI let you simulate many regions and their connections (see the sketch after this list).
  • Low level: the brain is fully parallel and asynchronous, while HTM operates sequentially.
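A minimal NetworkAPI sketch, assuming classic NuPIC’s `nupic.engine` (region parameters here are guesses, not a validated configuration):

```python
# Two SP regions chained with the NetworkAPI (classic NuPIC, Python 2).
# Parameter values are illustrative assumptions only.
import json
from nupic.engine import Network

net = Network()
net.addRegion("level1", "py.SPRegion",
              json.dumps({"inputWidth": 10000, "columnCount": 2500}))
net.addRegion("level2", "py.SPRegion",
              json.dumps({"inputWidth": 2500, "columnCount": 100}))
net.link("level1", "level2", "UniformLink", "")
net.initialize()
```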

I plan to do some experiments with OpenWorm and its neurons (you get direct access there); it’s a biological model.