Functional Emergence, or so I hear the cool kids are calling it

Howdy Everybody!

Recently, while procrastinating on Twitter, I came across an interesting article, and I was curious what you all thought :smiley:

My team and I have been playing around with a few schemes, trying to add some automation to software development in tools such as Unreal Engine 4. I know I’ve asked this question before, but as an aside: any takers for collaboration via VR at Numenta? We have an extra Vive that is gathering dust.


https://www.wired.com/story/the-mind-boggling-math-that-maybe-mapped-the-brain-in-11-dimensions/

Title: The Mind-Boggling Math That (Maybe) Mapped the Brain in 11 Dimensions

"Emergent Effects
But Kathryn Hess is no neuroscientist. Instead of a meaningless pile of data, she saw in Markram’s results an obvious place to apply her abstract math goggles. “Topology is really the mathematics of connectivity in some sense,” she says. “It’s particularly good at taking local information and integrating it to see what global structures emerge.”
For the last two years she’s been converting Blue Brain’s virtual network of connected neurons into geometric shapes that can then be analyzed systematically. Two connected neurons look like a line segment. Three look like a flat, filled-in triangle. Four look like a solid pyramid. More connections are represented by higher dimensional shapes—and while our brains can’t imagine them, mathematics can describe them.
Using this framework, Hess and her collaborators took the complex structure of the digital brain slice and mapped it across as many as 11 dimensions. It allowed them to take random-looking waves of firing neurons and, according to Hess, watch a highly coordinated pattern emerge. “There’s a drive toward a greater and greater degree of organization as the wave of activity moves through the rat brain,” she says. “At first it’s just pairs, just the edges light up. Then they coordinate more and more, building increasingly complex structures before it all collapses.”

In some ways this isn’t exactly new information. Scientists already know that there’s a relationship between how connected neurons are and how signals spread through them. And they also know that connectivity isn’t everything—the strength of the connection between any pair of neurons is just as important in determining the functional organization of a network. Hess’s analysis hasn’t yet taken synaptic weight into account, though she says it’s something she hopes to do in the future. She and Markram published the first results of their decade-in-the-making collaboration yesterday in Frontiers in Computational Neuroscience."
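To make the "two neurons = segment, three = triangle, four = pyramid" picture concrete, here is a quick toy sketch I put together (my own illustration, not Blue Brain's or Hess's code, and the little graph is made up): it just enumerates the all-to-all connected groups (cliques) in a small undirected graph and reports them by simplex dimension.

```python
# Toy illustration of the clique -> simplex correspondence described above:
# an all-to-all connected group of k+1 neurons is treated as a k-dimensional
# simplex (2 neurons = segment, 3 = triangle, 4 = solid "pyramid", and so on).
# Assumes networkx is installed; the graph below is made up for illustration.
from collections import Counter

import networkx as nx

# A small, made-up "connectome": nodes are neurons, edges are synaptic links.
G = nx.Graph()
G.add_edges_from([
    (0, 1), (0, 2), (1, 2),          # {0,1,2} is a 3-clique -> filled triangle
    (2, 3), (3, 4), (2, 4), (3, 5),  # some extra structure
    (0, 3), (1, 3),                  # now {0,1,2,3} is a 4-clique -> 3-simplex
])

# enumerate_all_cliques yields every clique, not just the maximal ones,
# so a 4-clique also contributes its triangles, edges, and vertices.
dimension_counts = Counter(len(c) - 1 for c in nx.enumerate_all_cliques(G))

for dim in sorted(dimension_counts):
    print(f"{dimension_counts[dim]} simplices of dimension {dim}")
```

The point is only that a group of k+1 all-to-all connected neurons counts as a k-dimensional simplex; the actual analysis goes further and tracks how those simplices glue together into cavities.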


Title: Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function

Abstract:

The lack of a formal link between neural network structure and its emergent function has hampered our understanding of how the brain processes information. We have now come closer to describing such a link by taking the direction of synaptic transmission into account, constructing graphs of a network that reflect the direction of information flow, and analyzing these directed graphs using algebraic topology. Applying this approach to a local network of neurons in the neocortex revealed a remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.

http://journal.frontiersin.org/article/10.3389/fncom.2017.00048/full
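The part of the abstract I find most interesting is that they keep the direction of synaptic transmission. As I read it, a "directed clique" is an all-to-all connected group whose connections all flow consistently from one source neuron toward one sink neuron. Here is a rough sketch of that check (my own reading, not the paper's code; the example graph is made up):

```python
# Sketch of the extra ingredient the abstract emphasizes: the *direction* of
# information flow. Here a group of neurons is a "directed clique" if every
# pair is connected and the directions contain no cycle, i.e. activity can
# flow consistently from a source neuron toward a sink neuron.
from itertools import combinations

import networkx as nx

def is_directed_clique(G: nx.DiGraph, nodes) -> bool:
    """True if every pair in `nodes` is connected and the induced directions are acyclic."""
    for u, v in combinations(nodes, 2):
        if not (G.has_edge(u, v) or G.has_edge(v, u)):
            return False
    return nx.is_directed_acyclic_graph(G.subgraph(nodes))

# Made-up example: one feed-forward group and one fully connected but cyclic group.
G = nx.DiGraph([
    (0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),  # consistent flow from 0 to 3
    (4, 5), (5, 6), (6, 4),                          # all pairs connected, but it loops
])

print(is_directed_clique(G, [0, 1, 2, 3]))  # True: a directed 3-simplex
print(is_directed_clique(G, [4, 5, 6]))     # False: connected, but activity just cycles
```

The cyclic triangle in the example is exactly the kind of group that plain undirected clique counting would treat the same as the feed-forward one, which I take to be why keeping direction matters for linking structure to activity.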

It is an interesting question, so I am curious what you all think.

Also, a publication that replaces step-to-step recurrence with a running weighted average over all past processing steps:

Title: Machine Learning on Sequential Data Using a Recurrent Weighted Average
Abstract:

Recurrent Neural Networks (RNN) are a type of statistical model designed to handle sequential data. The model reads a sequence one symbol at a time. Each symbol is processed based on information collected from the previous symbols. With existing RNN architectures, each symbol is processed using only information from the previous processing step. To overcome this limitation, we propose a new kind of RNN model that computes a recurrent weighted average (RWA) over every past processing step. Because the RWA can be computed as a running average, the computational overhead scales like that of any other RNN architecture. The approach essentially reformulates the attention mechanism into a stand-alone model. The performance of the RWA model is assessed on the variable copy problem, the adding problem, classification of artificial grammar, classification of sequences by length, and classification of the MNIST images (where the pixels are read sequentially one at a time). On almost every task, the RWA model is found to outperform a standard LSTM model.

https://arxiv.org/abs/1703.01253
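The way I read the abstract, the trick is that the weighted average over all past steps can be maintained as a running numerator and denominator, so each step costs constant extra work instead of re-attending over the whole history. Here is a bare-bones numpy sketch of just that bookkeeping; the projections and the attention score are made up for illustration, and the paper's actual RWA cell has its own gating (see the arXiv link above):

```python
# Minimal sketch of the "recurrent weighted average" bookkeeping: a weighted
# average over *all* past steps, maintained as a running numerator/denominator
# so each step is O(1) extra work. Parameters here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

seq_len, input_dim, hidden_dim = 12, 4, 8

# Made-up parameters: a feature projection and an attention-score projection.
W_feat = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_attn = rng.normal(scale=0.1, size=(input_dim, hidden_dim))

xs = rng.normal(size=(seq_len, input_dim))  # toy input sequence

numerator = np.zeros(hidden_dim)    # sum of weight * feature over past steps
denominator = np.zeros(hidden_dim)  # sum of weights over past steps

for x in xs:
    feature = np.tanh(x @ W_feat)         # what this step contributes
    weight = np.exp(x @ W_attn)           # how much this step should count
    numerator += weight * feature         # running sums: no need to revisit
    denominator += weight                 # earlier steps, unlike naive attention
    h = np.tanh(numerator / denominator)  # hidden state = weighted average of all steps
    print(np.round(h[:3], 3))             # peek at the first few hidden units
```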

Modulo some smart dolphins:
http://www.sciencedirect.com/science/article/pii/S2405722316301177
