Functional Emergence, or so I hear the cool kids are talking about it

Howdy Everybody!

Recently while procrastinating on Twitter, I came across an interesting article, and I was curious what you people thought :smiley:

My team and I have been playing around with a few schemes, trying to add some automation to software development tools such as Unreal Engine 4. I know I’ve asked this question before, but as an aside: any takers for collaboration via VR at Numenta? We have an extra Vive that is gathering dust.


https://www.wired.com/story/the-mind-boggling-math-that-maybe-mapped-the-brain-in-11-dimensions/

Title: THE MIND-BOGGLING MATH THAT (MAYBE) MAPPED THE BRAIN IN 11 DIMENSIONS

"Emergent Effects
But Kathryn Hess is no neuroscientist. Instead of a meaningless pile of data, she saw in Markram’s results an obvious place to apply her abstract math goggles. “Topology is really the mathematics of connectivity in some sense,” she says. “It’s particularly good at taking local information and integrating it to see what global structures emerge.”
For the last two years she’s been converting Blue Brain’s virtual network of connected neurons and translating them into geometric shapes that can then be analyzed systematically. Two connected neurons look like a line segment. Three look like a flat, filled-in triangle. Four look like a solid pyramid. More connections are represented by higher dimensional shapes—and while our brains can’t imagine them, mathematics can describe them.
Using this framework, Hess and her collaborators took the complex structure of the digital brain slice and mapped it across as many as 11 dimensions. It allowed them to take random-looking waves of firing neurons and, according to Hess, watch a highly coordinated pattern emerge. “There’s a drive toward a greater and greater degree of organization as the wave of activity moves through the rat brain,” she says. “At first it’s just pairs, just the edges light up. Then they coordinate more and more, building increasingly complex structures before it all collapses.”

In some ways this isn’t exactly new information. Scientists already know that there’s a relationship between how connected neurons are and how signals spread through them. And they also know that connectivity isn’t everything—the strength of the connection between any pair of neurons is just as important in determining the functional organization of a network. Hess’s analysis hasn’t yet taken synaptic weight into account, though she says it’s something she hopes to do in the future. She and Markram published the first results of their decade-in-the-making collaboration yesterday in Frontiers in Computational Neuroscience."


Title: Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function

Abstract:

The lack of a formal link between neural network structure and its emergent function has hampered our understanding of how the brain processes information. We have now come closer to describing such a link by taking the direction of synaptic transmission into account, constructing graphs of a network that reflect the direction of information flow, and analyzing these directed graphs using algebraic topology. Applying this approach to a local network of neurons in the neocortex revealed a remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.

http://journal.frontiersin.org/article/10.3389/fncom.2017.00048/full
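
To make the construction concrete: in this framework the "dimension" counts connected neurons, so n+1 all-to-all connected neurons whose edge directions admit a single source-to-sink ordering form an n-dimensional directed simplex (2 neurons give a line segment, 3 a triangle, 4 a tetrahedron). Below is a minimal sketch of counting those directed cliques by dimension on a toy random graph; it assumes networkx, and the graph size and edge density are arbitrary stand-ins, not Blue Brain data or their actual pipeline.

```python
# Minimal sketch (not the Blue Brain pipeline): count "directed cliques"
# (directed simplices) by dimension in a small random directed graph.
from itertools import combinations, permutations

import networkx as nx


def is_directed_clique(g, nodes):
    """True if some ordering of `nodes` has an edge from every earlier
    node to every later one (an acyclic, all-to-all connected clique)."""
    return any(
        all(g.has_edge(order[i], order[j])
            for i in range(len(order))
            for j in range(i + 1, len(order)))
        for order in permutations(nodes)
    )


def count_simplices(g, max_dim=3):
    """Map dimension -> number of directed cliques of (dimension + 1) nodes."""
    return {
        dim: sum(is_directed_clique(g, c)
                 for c in combinations(g.nodes, dim + 1))
        for dim in range(1, max_dim + 1)
    }


g = nx.gnp_random_graph(30, 0.2, directed=True, seed=1)
print(count_simplices(g))   # e.g. {1: ..., 2: ..., 3: ...}
```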

It is an interesting question, so I am curious what you people think.

Also, a publication that handles sequential data by keeping a recurrent weighted average over all past processing steps:

Title: Machine Learning on Sequential Data Using a Recurrent Weighted Average
Abstract:

Recurrent Neural Networks (RNN) are a type of statistical model designed to handle sequential data. The model reads a sequence one symbol at a time. Each symbol is processed based on information collected from the previous symbols. With existing RNN architectures, each symbol is processed using only information from the previous processing step. To overcome this limitation, we propose a new kind of RNN model that computes a recurrent weighted average (RWA) over every past processing step. Because the RWA can be computed as a running average, the computational overhead scales like that of any other RNN architecture. The approach essentially reformulates the attention mechanism into a stand-alone model. The performance of the RWA model is assessed on the variable copy problem, the adding problem, classification of artificial grammar, classification of sequences by length, and classification of the MNIST images (where the pixels are read sequentially one at a time). On almost every task, the RWA model is found to outperform a standard LSTM model.

https://arxiv.org/abs/1703.01253
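
The part that keeps this cheap is the running-average trick: the weighted average over every past step is held as a numerator and a denominator that are updated in constant time per symbol. A loose numpy sketch of just that update is below; the projections here are random and untrained, and the actual RWA model also conditions the averaged term and the attention term on the previous hidden state and uses a numerically stable form of the update.

```python
# Loose numpy sketch of the RWA's running weighted average (untrained weights).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, T = 8, 16, 50

W_z = rng.normal(scale=0.1, size=(d_hid, d_in))   # "what to average" projection
W_a = rng.normal(scale=0.1, size=(d_hid, d_in))   # "how much it counts" projection

num = np.zeros(d_hid)   # running numerator:   sum over t of z_t * exp(a_t)
den = np.zeros(d_hid)   # running denominator: sum over t of exp(a_t)

for x_t in rng.normal(size=(T, d_in)):
    z_t = W_z @ x_t
    a_t = W_a @ x_t
    w_t = np.exp(a_t)                 # unnormalized attention weight
    num += z_t * w_t
    den += w_t
    h_t = np.tanh(num / den)          # hidden state = squashed weighted average

print(h_t.shape)   # (16,)
```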

Modulo some smart dolphins
http://www.sciencedirect.com/science/article/pii/S2405722316301177


If the number of dimensions correlates to the number of connected neurons, it seems as if they are a few thousand dimensions short :stuck_out_tongue:


I think their definition of dimension is perhaps different from yours with respect to specific data. Some data is wider, but the global vs. local distinction here is what interested me.

No, our definitions are the same. My comment was tongue-in-cheek… I was trying to be humorous in making the point that describing wave formations mathematically (however mind-boggling) is a far cry from “mapping” the brain. I don’t mean to belittle the work, but the article is obviously intended to be sensational.


I guess if error correction is involved, then geometric patterns of activation could be expected.
http://ieeexplore.ieee.org/document/6271922/
www.ece.mcgill.ca/~mrabba1/pubs/2012/randCliqueCodes.pdf

Perhaps fault tolerance. I’m not sure about error detection and correction (SDRs, cerebellar structures + prediction, … sound more likely for ECC). In any case, multiple synapses for the same pair of neurons sounds like a better solution (and more biologically plausible).

I hope it is just fault tolerance. You cannot casually learn error correction theory. I would imagine you would need to put in at least a year of serious study to get a basic grip on the subject.

I would disagree with your observation. @Paul_Lamb do you do work in algebraic topology?

With respect to definitions, let’s ensure we are referring to the same one. What’s your definition of dimension?

I don’t think localist ECC theory can be effective at all here. Even in not-so-resilient devices (such as NVM), conventional ECC approaches struggle to alleviate the inherent endurance problem. The biological substrate is orders of magnitude less resilient than that… I think the error correction theory used in electronic devices is useless here.


No.

c^2 = a^2 + b^2 + … + n^2

The point I was making (admittedly not very well) is that we are unfortunately a long way from “maybe” mapping a brain. My comment was directed toward Molteni’s sensational article, not intended to criticise the study. Sorry if it was taken the wrong way.

I think if you put this work:
https://science.slashdot.org/story/16/12/03/0659237/our-brains-use-binary-logic-say-neuroscientists
with some of Gripon’s work on error-correcting associative memory, it starts making sense:
http://www.ece.mcgill.ca/~mrabba1/pubs/2012/randCliqueCodes.pdf

Someone could go to a great deal of trouble and expense to train a deep neural network, and then you could cheat by using associative memory to learn the responses of 1, 2, or 3 layers of that network at a time and stacking those together. Maybe you could even do better than the original network, because associative memory can have an error-correcting effect.
Or, as has been said, cross-talk is an uncontrolled form of generalization you get with associative memory when the input patterns are not well separated. However, you very likely can step in to control the sort of cross-talk that is happening and improve the quality of generalization.
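
As a toy illustration of that layer-copying idea (my own sketch, not Gripon's clique codes): a plain correlation-matrix associative memory that memorizes one layer's input/output pairs and then recalls them from corrupted inputs, which is where the modest error-correcting and generalizing effect comes from. The "layer" here is just a random projection with a sign nonlinearity.

```python
# Toy correlation-matrix associative memory copying one layer's responses.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_patterns = 64, 32, 20

# Stand-in "layer" whose responses we want to memorize.
W_layer = rng.normal(size=(n_out, n_in))
layer = lambda x: np.sign(W_layer @ x)

# Store bipolar input/output pairs as a sum of Hebbian outer products.
X = rng.choice([-1.0, 1.0], size=(n_patterns, n_in))
Y = np.array([layer(x) for x in X])
M = Y.T @ X / n_in

# Recall from inputs with 10% of the bits flipped.
flips = rng.random(X.shape) < 0.10
X_noisy = np.where(flips, -X, X)
Y_hat = np.sign(X_noisy @ M.T)

print("bitwise recall accuracy from noisy inputs:", (Y_hat == Y).mean())
```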

I’ve come across some fancier ECCs, specifically something called adinkras, found in supersymmetric theories.

More poacher on N space algorithm.
‘Breakthrough’ Algorithm Exponentially Faster Than Any Previous

I just ran across a video about the root topic here, and it turns out the context was specific to our interests: “The Hidden Structure of the Neocortical Column” (about 2 years old now):

The Blue Brain project (Markram) built a detailed model of a rodent cortical macrocolumn, layers and all. The data is available below; it’s ready to be used with Yale’s NEURON simulation package (which I’ve recently seen mentioned on this forum):

https://bbp.epfl.ch/nmc-portal/downloads
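
For anyone who hasn’t touched NEURON, here is roughly what driving it from Python looks like. This is a generic single-compartment Hodgkin-Huxley demo, not the NMC-portal morphologies (those ship with their own hoc templates and run scripts):

```python
# Minimal NEURON-from-Python demo: one HH soma, a current pulse, record Vm.
from neuron import h
h.load_file("stdrun.hoc")          # standard run system (continuerun, etc.)

soma = h.Section(name="soma")
soma.L = soma.diam = 20            # microns
soma.insert("hh")                  # Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 5, 20, 0.5   # ms, ms, nA

t = h.Vector().record(h._ref_t)
v = h.Vector().record(soma(0.5)._ref_v)

h.finitialize(-65)                 # mV
h.continuerun(40)                  # ms

print("peak somatic voltage (mV):", max(v))
```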

I was able to find some beautiful HD desktop wallpaper images from the slides, in case anyone is interested:

#1

#2

#3 (make sure to click “load full resolution”)
