The links to the papers are in the video description, and also in the forum post.
Ahh…yes, “Show More”. Just gettin’ old.
So this morning I went down to the kitchen for coffee, and while I was preparing to take a very small pill that I have to cut in half with a pill cutter, one half fell on the floor. We looked for the little devil, but no joy. My wife said she'd get the broom and see if it showed up, so I went about getting my coffee. As I was leaving the kitchen in bare feet, I stepped on something hard, but small. I guess you can see where this is going. I immediately said to my wife, "I think I found it." Then the most peculiar thing happened: I had a sense it was the pill based on shape (not just small and hard)! It was almost as if there was a brief delay until the touch sensors on the ball of my foot could align with the 'shape map' that said half-cylinder. So weird. Now I have to go look up the density of touch sensors in the foot.
And I doubt the micro columns connected to your foot had ever generated a model of the pill.
EXACTLY! So fascinating when thought about in the context of HTM.
So Hawkins' models in the cortical columns can, I think, be viewed as learned shortcuts - not the only way we recognize objects. The next time you stepped on a pill you might recognize it quicker.
The coffee cup model was perfectly installed in the brain, no matter in what way you tried to approach the coffee cup. If your touch gives some finite inputs that are enough to activate the coffee cup model, then you can recognise it. It's all about the input. Let's say, for example, there are two coffee cups that are identical but have different colours. Blindfolded, how do you know which one is which if you touch the cups with your toes? Your toes don't take in colour inputs, but your eyes will. That's the point: you can't see with your taste buds, and you can't recognize a sound visually. It's all about the type of input we are receiving, and our brain is just a giant that acts like a superconductor.