How does a cortical layer identify features in the input?

Thank you for the quick answer. As I understand it, the spatial pooler is an algorithm that works on already existing SDRs. I was curious how the brain decides which neuron should be responsible for a feature. In his book, Jeff talks about each neuron in V1 being responsible for a small feature, like a tilted line, while the higher region IT recognizes objects.

So if we take a baby's brain and start showing it pictures, does the brain align its neuron structure to be able to encode the pictures, or is it a sort of preexisting design, where this neuron will handle tilted lines, that one circles, and so forth?

We are modeling a cortical layer, so where this input comes from is largely irrelevant. The cortex is a homogeneous structure that processes inputs in the same way, no matter where they come from (for the most part). It only knows it gets connections from other neurons, but not what they are or where they come from.

In fact, one region of cortex could receive input from several other regions and sensory input at the same time. It won't know this; it processes the input the same way.

There is a lot going on in our sensory organs that we are not modeling in HTM. The eye is very complicated, and so is the cochlea. The equivalent things in HTM are encoders (I’m sorry about the hair, it was a phase), but these are extremely primitive compared to living sensory organs.
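To make the encoder idea concrete, here is a toy sketch of what one does (my own simplified example, not NuPIC's actual ScalarEncoder): it turns a raw value into a binary array in which similar inputs share active bits.

```python
def scalar_encode(value, min_val=0.0, max_val=100.0, n_bits=400, w=21):
    """Encode a scalar as a binary array with w contiguous active bits.

    Nearby values share active bits, so overlap carries semantic similarity.
    Toy sketch of the idea behind HTM scalar encoders, not NuPIC's implementation.
    """
    value = max(min_val, min(value, max_val))              # clamp into range
    # Map the value to the index of the first active bit.
    start = int((value - min_val) / (max_val - min_val) * (n_bits - w))
    sdr = [0] * n_bits
    for i in range(start, start + w):
        sdr[i] = 1
    return sdr

# Two nearby temperatures produce heavily overlapping encodings.
a = scalar_encode(20.0)
b = scalar_encode(21.0)
print(sum(x & y for x, y in zip(a, b)))   # high overlap => similar to downstream layers
```

The only point is that overlap encodes similarity; real encoders, let alone real retinas and cochleas, are far more sophisticated than this.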

Right, and as Jeff said in the book, the first layer deals with rapidly changing sensory data. I understand that HTM in its current implementation doesn't address some parts of the brain's architecture, but I still think this is a relevant problem, even if we look at it purely from a neuroscience perspective.

Following the way examples are presented in “On Intelligence”, let me try to explain:
When we look at an apple, the encoding for it in the brain is not just "apple", right? The first layer detects small features and propagates them up the hierarchy until the information reaches the IT layer. This is where, as Jeff said, the same pattern fires whenever the "apple" object appears anywhere in the visual field. So how does this initial encoding happen? How does the neocortex know that this set of neurons should fire when we see edges, that one when we see colors, and so on, in the first place? I understand this may not even be known yet, but I'm still curious.

If my logic is correct and we have some idea of how the brain designates neurons to process the initial sensory spikes, could we be on our way to creating auto-encoders?

This is what the Spatial Pooling process does. It translates that rapidly changing sensory data into a more normalized, sparse representation that still retains the semantic information of the data (as best it can). Then the Temporal Memory algorithm identifies sequences of these spatial patterns over time. I really do think you should watch these two videos:

This is Numenta’s implementation of what Jeff describes in the book.
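If it helps to see the mechanics, here is a deliberately stripped-down sketch of one spatial-pooling step (my own simplification with illustrative parameters, not Numenta's implementation): columns compute their overlap with the input, a fixed small percentage win through inhibition, and the winners adjust their synapse permanences Hebbian-style. That learning step is what gradually turns columns into detectors for whichever features keep recurring in the input.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT, N_COLUMNS, SPARSITY = 400, 2048, 0.02     # illustrative sizes, not Numenta defaults
CONNECTED, P_INC, P_DEC = 0.5, 0.05, 0.01          # permanence threshold and learning rates

# Every column starts with random permanences to every input bit
# (simplified: full potential pool, global inhibition, no boosting).
permanences = rng.uniform(0.4, 0.6, size=(N_COLUMNS, N_INPUT))

def spatial_pool(input_sdr, learn=True):
    """One simplified spatial-pooling step: overlap -> inhibition -> Hebbian update."""
    connected = (permanences >= CONNECTED).astype(float)
    overlaps = connected @ input_sdr                # how well each column matches the input
    k = int(N_COLUMNS * SPARSITY)                   # ~2% of columns become active
    winners = np.argsort(overlaps)[-k:]             # global k-winners-take-all "inhibition"
    if learn:
        # Winners strengthen synapses to active input bits and weaken the rest,
        # so over time each column tunes itself to features that recur in its input.
        permanences[winners] += np.where(input_sdr == 1, P_INC, -P_DEC)
        np.clip(permanences, 0.0, 1.0, out=permanences)
    output = np.zeros(N_COLUMNS, dtype=int)
    output[winners] = 1
    return output

# Feed it any encoder output; the result is a stable, sparse representation
# whose sequences the Temporal Memory algorithm can then learn.
input_sdr = (rng.random(N_INPUT) < 0.05).astype(float)
active_columns = spatial_pool(input_sdr)
```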

The auto-encoders… I’m not really even sure we’ll ever have a universal auto-encoder.

I would tend to agree that the best encoders are most likely going to be those created specifically for the input stream, where an expert is able to identify the semantics of the data. That said, deep neural networks may have use cases for more generic encoding. For example, I was recently introduced to deepart.io, which demonstrates a rather intriguing ability to distill semantic information from paintings.
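To illustrate what "identifying the semantics of the data" can mean in practice (my own toy example, not from NuPIC): if an expert knows a field is cyclic, say day-of-week, the encoder can be built so adjacent days share bits and the week wraps around, which a generic one-hot encoding would not capture.

```python
def day_of_week_encode(day, w=3, n_days=7):
    """Encode a day (0=Mon .. 6=Sun) so adjacent days share active bits.

    Toy example of a hand-built, domain-aware encoder: the designer knows
    the data is cyclic, so the encoding wraps from Sunday back to Monday.
    """
    n_bits = n_days * w
    sdr = [0] * n_bits
    for i in range(w * 2 - 1):                 # a block of active bits per day,
        sdr[(day * w + i) % n_bits] = 1        # wide enough to overlap its neighbors
    return sdr

monday, tuesday, friday = (day_of_week_encode(d) for d in (0, 1, 4))
print(sum(a & b for a, b in zip(monday, tuesday)))  # neighboring days overlap
print(sum(a & b for a, b in zip(monday, friday)))   # distant days do not
```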