So, Neuralink

In today's news: http://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs

I am curious how the folks around here feel about this. It seems to me that, to pull off anything like a brain-to-computer interface, one would need a thorough understanding of the workings of the brain. To me, that reads HTM.

Has there been any work (I'm looking for papers) on feeding HTM-like data to the brain? I would imagine that feeding SDR state to a grid patch electrode in the brain would have a better chance of succeeding than feeding in, say, some DNN state.
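Just to make that concrete: an SDR is a sparse set of active bits, so in principle "feeding SDR state" to an electrode patch could be as simple as mapping each bit to one contact and pulsing the active ones. Here's a minimal sketch, assuming a made-up 16x16 patch and HTM-typical ~2% sparsity (none of this is a real Neuralink or NuPIC API):

```python
import numpy as np

GRID_ROWS, GRID_COLS = 16, 16     # hypothetical 256-contact electrode patch
SDR_SIZE = GRID_ROWS * GRID_COLS  # one SDR bit per contact

def sdr_to_stimulation_pattern(active_bits):
    """Turn a set of active SDR bit indices into a 2-D on/off electrode map."""
    pattern = np.zeros(SDR_SIZE, dtype=np.uint8)
    pattern[list(active_bits)] = 1
    return pattern.reshape(GRID_ROWS, GRID_COLS)

# Example: a ~2% sparse SDR, as HTM regions typically produce.
active = np.random.choice(SDR_SIZE, size=5, replace=False)
print(sdr_to_stimulation_pattern(active))
```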

I presume most people have heard by now about blind people learning to see by repurposing parts of the cortex to perceive using an electrode array on the tongue or the skin [1]. Experiments have also shown that rodent brains can learn to perceive invisible light using electronic infrared sensors [2].

The cortical algorithm was built for this stuff, making meaning from nonsense. Pulling meaningful information from the brain is likely to be more difficult.

[1] Bach-y-Rita, Paul, et al. “Form perception with a 49-point electrotactile stimulus array on the tongue: a technical note.” Journal of Rehabilitation Research and Development 35.4 (1998): 427.

[2] Thomson, Eric E., Rafael Carra, and Miguel A. L. Nicolelis. “Perceiving invisible light through a somatosensory cortical prosthesis.” Nature Communications 4 (2013): 1482.


It seems pointless to me, but there are examples of symbiosis in nature, even within you already.
I think you should just accept your mortality and not be so afraid of it.

I have always thought that if we could hack cochlear implants to output SDRs, we might be able to process sound better by using them as an encoder for HTM.
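For what it's worth, here is a rough sketch of what such an encoder might look like, assuming we could tap the implant's (or any) audio stream. The sizes, sparsity, and function names are my own illustration, not an existing implant or NuPIC API:

```python
import numpy as np

SDR_SIZE = 1024     # output SDR width
ACTIVE_BITS = 20    # ~2% sparsity, the usual ballpark for HTM input

def audio_frame_to_sdr(frame, sample_rate=16000):
    """Encode one audio frame as an SDR of its dominant frequency bins."""
    spectrum = np.abs(np.fft.rfft(frame, n=2 * SDR_SIZE))[:SDR_SIZE]
    top_bins = np.argsort(spectrum)[-ACTIVE_BITS:]   # strongest frequencies win
    sdr = np.zeros(SDR_SIZE, dtype=np.uint8)
    sdr[top_bins] = 1
    return sdr

# Example: a 10 ms frame containing a 440 Hz tone.
t = np.arange(0, 0.01, 1 / 16000)
print(np.flatnonzero(audio_frame_to_sdr(np.sin(2 * np.pi * 440 * t))))
```

Sounds that share dominant frequencies would produce overlapping SDRs, which is roughly the kind of semantic overlap an HTM encoder is supposed to provide.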


That is an intriguing idea. I did a little reading about cochlear implants, and as I understand it, they get their input from contacts on a wire that is threaded into the inner ear, and the output is a set of electrodes that stimulate the neurons which would normally transmit input from the hair cells of the inner ear. In other words, the wire and electrodes replace the connection between the inner ear and the neurons that was lost due to damaged hair cells.

–EDIT– Actually, upon further examination, the input comes from an external microphone. This leads me to believe that the shape of the inner ear is not as important for the encoding of semantics; either the software in the cochlear implant or neural systems later in the process must be encoding the semantics.

It is possible that the semantics come from some neural system situated after the neurons that receive input from the hair cells. If that is the case, then hacking the cochlear implant or studying the inner ear wouldn't help in encoding semantics for sound. As I understand it, different neurons respond to different frequencies of input from the cochlear implant (the device is designed to mimic the stimulation that would come from the hair cells, and it sends different frequencies to different populations of neurons). This could indicate that semantics are encoded later, or it might just be a way to differentiate the different “features” of the audio input (where the semantics have already been established).
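To picture the frequency-to-population mapping being described, here is a toy model of the idea (my own illustration, not actual implant firmware): split the spectrum into log-spaced bands, cochlea-style, and route each band's energy to its own electrode channel. The channel count and frequency range are just ballpark figures.

```python
import numpy as np

N_CHANNELS = 22                   # ballpark electrode count for a modern implant
LOW_HZ, HIGH_HZ = 200.0, 8000.0   # rough frequency range covered by the array

# Log-spaced band edges, mimicking the cochlea's roughly logarithmic layout.
band_edges = np.geomspace(LOW_HZ, HIGH_HZ, N_CHANNELS + 1)

def channel_energies(frame, sample_rate=16000):
    """Spectral energy routed to each electrode channel for one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1 / sample_rate)
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(band_edges[:-1], band_edges[1:])
    ])

# Example: a 440 Hz tone puts almost all of its energy into one low channel.
t = np.arange(0, 0.02, 1 / 16000)
print(np.argmax(channel_energies(np.sin(2 * np.pi * 440 * t))))
```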
