You might be right. One problem is that adjusting the speed of theta oscillations might not work for uneven scaling.
I’m trying to think of deformations we experience regularly. What about two cups, one stretched tall and one short, but equal width? I think those would feel like two different cups, whereas if I had one cup twice as large as another, it’d feel like a duplicate. Although that reasoning does feel like a stretch.
Introspection is not science. We should be able to demonstrate empirically that an animal can recognise objects through a certain transformation, then set out to replicate that as an algorithm on SDRs. Baby steps.
That’s because reverse engineering something isn’t science. If you treat it as if it were science, you overlook the external motive you assign to the thing, but it’s still there. You’re not trying to explain why you observed something about the brain (namely, because it developed that way) but rather what purpose it serves. The external motive (the hypothesized purpose) ends up being something vague like efficiency, and that’s unscientific because vague hypotheses about what something does are nearly impossible to disprove.
Of course it’s essential to test hypotheses once you have them.
I agree, but sometimes introspection is useful for narrowing down the choices of ideas to investigate further. But also keep in mind that it is incredibly difficult to make empirical measurements of a functioning brain under sufficiently controlled conditions. To make progress you work with the tools you have available.
I see introspection as being of limited utility. Much of what the brain does has no cortical representation and, as such, is not available to introspection.
Just for fun, try to bring saccades into your consciousness. The scanning and integration of objects is an observed behavior but try as I like, I can’t “experience” it.
Likewise, the loop of consciousness has a significant portion of the process in subcortical structures. We can draw a box about the “subconscious” but it is not available to introspection.
Emotions are a subcortical process - we usually become aware that we are being influenced by these structures through the effect their chemical messengers have on the body.
No, it’s because it just isn’t. You can’t apply the scientific tools of observation, measurement, hypothesis, theory, experiment and so on to introspection.
Reverse engineering is not science, it’s engineering: the rigorous use of known science and technology to produce a known end result, instead of new knowledge. I know, because I’ve done it.
HTM does not reverse engineer neurones and synapses. It assumes science and new knowledge, and then attempts to hypothesise an algorithm, which is then implemented in software. What matters here above all is the data representations and the algorithms that operate on them. Whatever the computing model is, it’s quite unlike anything we’ve done before, and that’s the exciting bit.
We’re still not on the same page. Within limits, introspection has occasionally been somewhat useful in triggering ideas for experiments that might in due course lead to real science. It still isn’t science.
Science starts with observation. The next step is formulating an hypothesis, some intuitive explanation based on experience in the observed domain. A later step is creating an experiment, predicting results and generating enough data to show the results match the prediction. Only when enough results warrant confidence is a theory formulated and offered for peer review. If possible, this theory can be based on more fundamental theories, and cross-referenced to other scientific domains. But this is not always available.
Yet, all this is science. Including the observation. Ergo introspection.
You might think that you “recognize this is a pipe” but you don’t. You recognize that this is an image of a pipe. You would never try to pick this up, or else you’d get fingerprints on your computer screen!
In a similar way: if you had two coffee cups of different sizes, you might think “these are the same type of object”, but you will still think of them as distinct objects; otherwise you would be unable to manipulate them correctly. Mr Hawkins thinks that he can pick up both cups with the same muscle movements, only scaling the magnitude of the movements by a single scalar factor. He even proposes a specific mechanism to do this (altering the thalamic theta frequency). I disagree; here are some specific examples where I think this hypothesis fails. Try them!
Try grasping the cups: a smaller cup requires moving your fingers further to fully constrict around the cup, since it is smaller in diameter.
Try lifting the cups: the large cup requires more force since it weighs more.
Try picking up the cups by putting your finger through the loop of the handle: on a smaller cup you might only fit 1 finger through the loop instead of 3.
Try writing your signature small and large, and see if you use the same muscle groups. Observe: when you increased the size of your signature, did you also lift the pen proportionally higher above the page? The hypothesis that scale is controlled by a single scalar factor implies you would increase your vertical motion in proportion to the size of the image you’re drawing.
In summary I think that:
Objects of different sizes require fundamentally different motor behaviors to interact with.
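To put rough numbers on the grasp-versus-lift examples above, here is a back-of-the-envelope sketch. It assumes the simplest reading of the single-scale-factor hypothesis, plus uniform scaling and constant material density; the base values are made up for illustration:

```python
# A cup scaled by a factor s in every dimension:
#   - grasp aperture (finger opening) scales like s     (a linear dimension)
#   - weight, hence lift force, scales like s**3        (volume)
# So no single scalar applied to the motor program covers both.

def grasp_aperture(base_aperture_cm: float, s: float) -> float:
    """Finger opening needed: proportional to the cup's diameter."""
    return base_aperture_cm * s

def lift_force(base_force_n: float, s: float) -> float:
    """Force needed: proportional to weight, i.e. to volume."""
    return base_force_n * s ** 3

s = 2.0  # a cup twice as large in every dimension
print(grasp_aperture(8.0, s))  # 16.0 cm: doubled
print(lift_force(3.0, s))      # 24.0 N: eight times, not doubled
```

The point is only that the two quantities scale with different powers of the same factor, which a single scalar cannot capture.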
I think it’s a scaling of motor vectors, maybe in object-space. Other things have to translate that to muscle movements. Probably a lot is subcortical and in the where pathway.
If you have two exact duplicates, you have to keep them separate. Same object identity but different objects.
True, it has to represent object scale somehow: scale in the world, that is, not the change in motor vectors.
How can the neocortex represent objects at orientations?
In the context of 3D geometry a location is three distances (R3) and an orientation is three angles (R3).
From the point of view of a cortical column in the visual cortex, what is the difference between location and orientation?
They can both totally change the visual appearance of an object in numerous arbitrary ways. Both can be controlled through muscle movements. It seems to me that orientation is a special type of location information. Do the theories describing locations not also work for orientations?
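One concrete difference is worth flagging, though: displacements in R3 compose commutatively, while rotations do not. A quick sketch (pure Python, hand-rolled 3x3 matrices, chosen only for illustration):

```python
import math

def matmul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_x(a):
    """Rotation by angle a about the x axis."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    """Rotation by angle a about the z axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

a = math.pi / 2

# Locations: the order of two translations never matters.
t1, t2 = (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)
assert tuple(x + y for x, y in zip(t1, t2)) == tuple(y + x for x, y in zip(t1, t2))

# Orientations: the order of two rotations does matter.
print(matmul(rot_x(a), rot_z(a)) == matmul(rot_z(a), rot_x(a)))  # False
```

So whatever machinery handles location would at least need a different composition rule to handle orientation, even if the representation itself carries over.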
The book “Perceptrons” showed convincingly that layers of linear units could be replaced by a suitably constructed single layer. This had limits, such as being unable to perform the XOR function no matter how many such layers you used.
This book effectively killed research in the area for a long time.
True believers kept at it and eventually worked out how to surpass this basic limit.
Adding a limiting (nonlinear) activation to the mix allowed transcending the limits of the classic perceptron. It was now possible to form islands of meaning between the layers.
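For anyone who hasn’t seen it, here is the textbook fix in miniature: a single threshold unit cannot compute XOR, but one hidden layer with a step nonlinearity can. The weights below are hand-picked for illustration, not learned:

```python
def step(x):
    """Threshold (limiting) activation: the nonlinearity that matters."""
    return 1 if x > 0 else 0

def xor_mlp(x1, x2):
    # Hidden layer: each unit computes a linearly separable function.
    h1 = step(x1 + x2 - 0.5)   # OR
    h2 = step(x1 + x2 - 1.5)   # AND
    # Output: OR AND NOT(AND) is exactly XOR.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))  # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```

Remove the `step` between the layers and the whole thing collapses back to a single linear unit, which is the original limit.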
Much the same benefits come with layers of HTM modules. Adding the H to HTM radically enhances the representation and computation possibilities. It’s not the same thing then: it does more. The SDRs are now able to pool, both spatially and temporally, and these pools can be sampled to form conjunctions of semantic meanings.
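A toy sketch of what I mean by pooling, assuming the usual set-of-active-bits view of an SDR (the sizes and seeds here are arbitrary, not from any particular implementation):

```python
import random

N, W = 2048, 40  # SDR width and number of active bits (typical HTM-style values)

def random_sdr(seed):
    """An SDR as a set of W active bit indices out of N."""
    rng = random.Random(seed)
    return frozenset(rng.sample(range(N), W))

def union(*sdrs):
    """Pooling: the union SDR 'contains' each of its members."""
    pooled = set()
    for sdr in sdrs:
        pooled |= sdr
    return frozenset(pooled)

def overlap(a, b):
    """Shared active bits: the basic similarity measure on SDRs."""
    return len(a & b)

a, b, c = random_sdr(1), random_sdr(2), random_sdr(3)
pool = union(a, b)
print(overlap(pool, a))  # 40 (== W): a member matches the pool fully
print(overlap(pool, c))  # small: a non-member overlaps only by chance
```

Because the vectors are so sparse, the pool stays recognisable to its members while chance overlap with everything else stays near zero, which is what makes sampling such pools for conjunctions workable.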
I never found perceptrons convincing and always saw that entire body of work as a blind alley.
I don’t find ‘layers’ a helpful concept. There are anatomical layers, but they are an artefact of the way that the cortex has evolved. And I haven’t seen anything to convincingly justify the H in HTM either.
In my view the broad organisation of cortex into columns speaks to multi-processing. Large numbers of columns doing the same processing job requires a common data representation, and SDRs fill that need. The representation of sequence memory that learns by prediction as ‘stacked’ SDRs is credible. Something similar seems to apply for location, although I don’t find it as convincing yet.
Rather than a hierarchy or layers I am left with a mental model of SDRs that:
in the sensory areas represent raw sensory inputs and successive refinements
in some higher centres represent more abstract properties, concepts and plans
in the motor areas represent broad motor intentions and successive refinement down to individual muscle movements.
It’s SDRs all the way down. But this is a computational model and while we have a data representation (totally unlike any computer we know) we do not have an ‘instruction set’ (also likely to be entirely novel).
I’m guessing there will be 10-100 ‘instructions’ that represent ways of generating SDRs, of which we know of or can guess at a mere handful. I’m guessing both SDRs and ‘instructions’ will be found in far simpler brains, from which the cortex has evolved. That’s where real science comes into its own.
I’m sure that the deep learning people will be shocked to hear that!
Seriously- that branch of research captures some important properties of the brain.
HTM captures some different properties.
The brain does use elements of both technologies and they are both worthy of study.
As far as localization of function goes - there has been some very fine-grained work in this area. If this interests you I would suggest looking into the work on the connectome.
I will add that the preservation of topology and the hierarchy of maps is a unifying theme through much of the cortex. This allows the possibility of a spot in an early map projecting to a higher level and still being meaningful in processing at that higher level.
The first speaker in this series (Margaret Livingstone) points out implications of this in her talk on category-based domains. Pay special attention to the bit about how information that goes DOWN the hierarchy trains the lower levels, even when the related sensory stream is not present. The implication is that the connection scheme is critical to forming categories, not so much the content, as is normally assumed.
By dead end I mean: not on the path to AGI. ANNs and their ilk are astonishingly successful at recognising and classifying, given large amounts of labelled training data. They can do things no biological system ever could, but it’s still a dead end. And no, I have no reason to believe they capture important properties of brains. Processing sensory input: just maybe, but beyond that: no way.
The connectome tells me just one thing: brains are packed full of neurones, which are deeply inter-connected. I read the book: it takes us nowhere. The basic premise is wrong: we are not our connectome.
That video is 4 hours! Sorry, but if there is something relevant in there you’re going to need to pinpoint it for me.
I’m a software guy with a medical/scientific background. I see multi-processing, a data representation and a storage mechanism, and I look for the software, the instruction set, the programming language. People without my background think that neuroanatomy and connections will get us there, but they’re dead wrong, that’s just the hardware. If I give you the full wiring diagram of the computer you are using right now, you know nothing about what it does or how it works. That’s software.
So go find the software for a maggot brain and we’re on the path to AGI. Sooner or later we’ll leave the ANNs in the dust.