And if there are enough degrees of freedom, there is nothing to stop micro-step random hill climbing from always finding a better solution, though sometimes the system might be slowed at a difficult saddle point. You think you are doing fast Hebbian learning by altering a few synapses at a time, but that is micro-step (random) hill climbing in a very large system with no local-minima traps. Over time a much slower, more global form of learning is occurring.
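As a rough illustration of what I mean by micro-step random hill climbing, here is a minimal Python sketch: perturb a few parameters at a time and keep the change only if the score improves. The quadratic score function, the step size, and the three-weights-at-a-time choice are just placeholders I made up, not anything specific to synapses.

```python
import numpy as np

# Minimal sketch of micro-step random hill climbing: nudge a few
# parameters, keep the nudge only if the (toy) objective improves.

rng = np.random.default_rng(0)
n_dims = 10_000                      # lots of degrees of freedom
weights = rng.normal(size=n_dims)

def score(w):
    # Toy objective: lower is better (stand-in for an energy/error).
    return float(np.sum(w * w))

current = score(weights)
for step in range(50_000):
    idx = rng.integers(0, n_dims, size=3)        # "alter a few synapses"
    old = weights[idx].copy()
    weights[idx] += rng.normal(scale=0.01, size=3)
    new = score(weights)
    if new < current:
        current = new                            # keep the improvement
    else:
        weights[idx] = old                       # revert the micro-step
```

Each accepted micro-step is tiny, which is why I would expect this kind of learning to be very slow but hard to trap in high dimensions.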
Well, that is speculation and maybe not possible within the biological time frame of a life. Anyway, I'll try it out with some code.
The old idea of emergent intelligence has gone out of favor. I believe "hogwash" is the term used by one senior researcher. And yet there is a paper that seems to indicate you can continually inch your way toward ever-smarter systems in a high enough number of dimensions.
It's refreshing to read the term "local optima" in this area of AI/ML. Since Hebbian learning isn't a search algorithm, I wonder where local optima could have an effect. The cortex is unsupervised, and therefore unaffected by any error signal (except in behavior/reinforcement learning).
In what context is "dimensions" being used here? In a philosophical context it would be interesting if the cortex could model the 5th dimension. If spatial pooling is encoding the 3rd dimension (space/objects/3d) and temporal pooling is encoding the 4th dimension (time/events/4d), then temporal memory could be encoding the 5th dimension (logic/5d).
Eh? Logic/5d? Higher-order temporal memory could be encoding conditional time-sequences (transitions between 4d "planes"). A-B-C-D and X-B-C-Y are semantically related logical/5d constructs. In the world there could be very similar events (4d) that play out in different ways depending on what is currently happening (5d).
Take a temporal pooling representation (4d) of someone kicking a ball toward a goal. IF David Beckham is shooting THEN a score is probable. IF a 1-year-old girl is shooting THEN a score is improbable. This has the same representational structure as if(A)-B-C-then(D); if(X)-B-C-then(Y): if(Beckham)-kick-shoot-then(score); if(1-year-old-girl)-kick-shoot-then(no-score).
I wish I could explain this speculation better, but I've had evening wine. All the dimensions could be represented using the Hebbian rule; I speculate it's all about the connections between groups/layers.
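A toy sketch of that if(A)-B-C-then(D) idea: a predictor keyed only on the current element cannot tell the two sequences apart after "C", while one that keeps the earlier context can. The dictionary-based code below is just an illustration of the ambiguity, not an implementation of HTM temporal memory.

```python
# Toy contrast: first-order prediction vs. higher-order (contextual) prediction
# for the two sequences A-B-C-D and X-B-C-Y.

sequences = [list("ABCD"), list("XBCY")]

first_order = {}       # current element -> set of possible next elements
higher_order = {}      # full history so far -> next element

for seq in sequences:
    for i in range(len(seq) - 1):
        first_order.setdefault(seq[i], set()).add(seq[i + 1])
        higher_order[tuple(seq[: i + 1])] = seq[i + 1]

print(first_order["C"])               # {'D', 'Y'}  -> ambiguous without context
print(higher_order[("A", "B", "C")])  # 'D'         -> context resolves it
print(higher_order[("X", "B", "C")])  # 'Y'
```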
Well, Hebbian learning in a recurrent system. When you do that, you are going down an energy landscape. If neurons are being updated more or less at random, then the system will always find some way further downhill, because you can't get stuck in, say, a billion dimensions the way you can get stuck in 3 or 10 or even 100 dimensions. However, I can only imagine it as a secondary effect in the human brain, because it is such a slow form of learning.
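For what it's worth, the cleanest toy version of that picture is a Hopfield-style network: Hebbian weights, neurons updated one at a time in random order, and an energy that can only go down with each accepted update. This is a generic textbook sketch with made-up sizes, not a claim about cortical circuits.

```python
import numpy as np

# Hopfield-style sketch: Hebbian (outer-product) weights from a few stored
# patterns, then asynchronous random single-neuron updates. Each update can
# only lower the energy E = -0.5 * s^T W s, so the state slides downhill.

rng = np.random.default_rng(1)
n = 200
patterns = rng.choice([-1, 1], size=(3, n))

W = (patterns.T @ patterns) / n        # Hebbian learning rule
np.fill_diagonal(W, 0.0)               # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

state = rng.choice([-1, 1], size=n)
start = energy(state)
for _ in range(5000):
    i = rng.integers(n)                # pick a neuron at random
    state[i] = 1 if W[i] @ state >= 0 else -1
print(start, energy(state))            # final energy is lower than the start
```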
You could build one type of AI you feel you understand and can easily control, but it could have the characteristics that would allow a secondary, emergent type of intelligence to build up that you didn't reckon on.
You can't stare into the face of it and expect to capture every potential statement that face would ever express.
It's not a sum of its parts. You can't, with deliberation and explicit intention, put in everything that would eventually manifest as the component pieces of every possible output.
We struggle valiantly to capture the essence of it, as if its expression were a function of anything we could possibly observe.
And yet we act surprised at the inevitable and consistent discovery of the invisible barrier, despite being introduced to it upon every attempt to consume it in our masterful wake.
We first need to recognize and acknowledge the architecture of intelligence that mother nature has settled on, before any attempt to outwit it.
If all there is is water and we are a fish, then the water will forever be unfathomable to us. We have to first transform ourselves into something else that's capable of observing water, before we can crawl up on the land - and know that there is that which exists outside of water.
There's no harm in letting mother nature clue us in. The brain doesn't do math. It instead harnesses the capacity to create a context in language out of which the observation of mathematical relationships becomes possible - but there is no math down on the playing field; math is in the domain of the observer giving an account. It's in the conceptual domain, not the ontological domain - the domain of being. Any epistemological observation is implicitly inextricable from the observer.
You can't put all the pieces together inside a set and expect to get intelligence out of it. As said before, it's not a sum of its parts - in my opinion.
That's why we start with what we know, and maybe later we will finish with something that exceeds its origin?
…or maybe not.
I don't mind the conversation getting philosophical, but how about we move it into #other-topics:off-topic?
I think there had better be an element of snapshot learning of synapses by neurons. How would a neuron remember that a very particular pattern of inputs had recurred a number of times over a number of days or weeks? Simple accumulation won't do; the result would be a washout. It must remember the pattern as a complete entity initially, or is there a way to dodge that?
I believe the learning of spatial and temporal relationships happens between inputs in a sequence (i.e. not static snapshots), and is "distributed" among all the columnar/cell/dendritic behaviors rather than being captured by any single neuron. Not sure if that addresses your question?
I was just thinking about the accumulation of some biochemical trigger prior to a synapse being built. That will only build a synapse off the average firing rate of some adjacent neuron: I know this neuron fires a lot and this other one doesn't, which doesn't capture any information. Knowing that two other neurons fire in conjunction would capture some information. I guess if the biochemical trigger diffuses around, it would make learning a conjunction more likely, but it is still not very decisive.
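A toy numerical version of that point, with invented spike trains: a per-neuron accumulator only sees firing rates, so it cannot distinguish an input that fires together with another from one that fires just as often but out of sync, whereas a simple coincidence count can.

```python
import numpy as np

# Rate accumulation vs. coincidence detection on made-up spike trains.
# Neurons a and b fire together; neuron c fires at the same rate as a
# but with different timing. Mean rates look identical; conjunctions don't.

rng = np.random.default_rng(2)
T = 1000
a = rng.random(T) < 0.2
b = a.copy()                               # fires together with a
c = np.roll(a, 5)                          # same rate, shifted timing

print(a.mean(), b.mean(), c.mean())        # all ~0.2: rates are indistinguishable
print(np.mean(a & b), np.mean(a & c))      # ~0.2 vs ~0.04: only the conjunction differs
```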
I think the fact that clusters of spines within a 20-40μm radius act as coincidence detectors (keyword being coincidence), meaning that it takes multiple active synapses within an area to depolarize the post-synaptic cell, and that these pre-synaptic connections come from cells originating in potentially many different columns, carries along with it the spatial (i.e. semantic) information; which cell within a given column is firing and contributing its pre-synaptic input carries the temporal/sequential context.
So yes, I would say that the input from co-occurring cells all acting upon a recipient dendrite within close proximity conserves and captures plenty of coincidence information.
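A bare-bones sketch of that "cluster of co-active synapses depolarizes the cell" idea, in the spirit of a dendritic segment acting as a coincidence detector. The threshold of 8, the segment size of 20, and the population sizes are arbitrary numbers chosen for illustration.

```python
import numpy as np

# A dendritic segment holds synapses onto specific presynaptic cells and
# only depolarizes the receiving cell when enough of those cells are
# active at the same time: coincidence, not just overall activity.

rng = np.random.default_rng(3)
n_cells = 2048
segment_synapses = rng.choice(n_cells, size=20, replace=False)
THRESHOLD = 8

def segment_active(active_cells):
    overlap = np.intersect1d(segment_synapses, active_cells).size
    return overlap >= THRESHOLD

random_pattern = rng.choice(n_cells, size=40, replace=False)
matching_pattern = np.concatenate([segment_synapses[:10], random_pattern[:30]])

print(segment_active(random_pattern))    # almost certainly False: chance overlap only
print(segment_active(matching_pattern))  # True: 10 of its synapses are co-active
```

Which presynaptic cells the segment samples from is what carries the semantic/temporal information in this picture; the threshold just enforces that they must fire together.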