How does the brain create a representation of an apple?

Over at the Human Brain Project, they say:

> **Creating representation**
>
> How does the brain create a representation of an object, like an apple, from multisensory information? This question is crucial since these representations are the basis for higher cognitive processes such as category formation, reasoning and language. One of our goals is to develop a “deep learning” neuronal network that learns to recognize objects and functions in a way similar to real neurobiological systems.

How does this work in HTMs?


If you haven’t already, you should definitely check this out:

Also, in this video Jeff adds some interesting details about how features are essentially pooled representations of sensations at an orientation. Those features are then given a location context, and those feature/location pairs are in turn pooled into object representations.
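To make that last step concrete, here is a minimal toy sketch (my own illustration, not Numenta's actual HTM code, and with made-up object/feature names) of the idea that an object representation is a pooled set of (location, feature) pairs, and that sensing more pairs narrows down which object you are touching:

```python
# Toy illustration of pooling (location, feature) pairs into objects.
# Each known "object" is just a set of (location, feature) pairs that
# would have been learned from prior sensation; the names and locations
# here are invented for the example.
objects = {
    "apple": {((0, 0), "curved"), ((1, 0), "curved"), ((0, 1), "stem")},
    "mug":   {((0, 0), "curved"), ((1, 0), "flat"),   ((0, 1), "handle")},
}

def recognize(sensations):
    """Narrow the candidate set as (location, feature) pairs arrive."""
    candidates = set(objects)
    for pair in sensations:
        candidates = {name for name in candidates if pair in objects[name]}
    return candidates

# A curved surface at one location is ambiguous (both objects match),
# but adding a "stem" sensation at another location disambiguates.
print(recognize([((0, 0), "curved")]))
print(recognize([((0, 0), "curved"), ((0, 1), "stem")]))
```

Obviously the real HTM model uses sparse distributed representations and cortical-column voting rather than explicit sets, but the union-then-intersect flavor of recognition is the same.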