A different way of thinking about prediction

A Neural Interpretation of Exemplar Theory

Exemplar theory assumes that people categorize a novel object by comparing its similarity to the memory representations of all previous exemplars from each relevant category. Exemplar theory has been the most prominent cognitive theory of categorization for more than 30 years. Despite its considerable success in providing good quantitative fits to a wide variety of accuracy data, it has never had a detailed neurobiological interpretation.


Is there an exemplar theory of concepts?

It is common to describe two main theories of concepts: prototype theories, which rely on some form of summary description of a category, and exemplar theories, which claim that concepts are represented as remembered category instances. This article reviews a number of important phenomena in the psychology of concepts, arguing that they have no proposed exemplar explanation. In some of these cases, it is difficult to see how an exemplar theory would be adequate. The article concludes that exemplars are certainly important in some categorization judgments and in category-learning experiments, but that there is no exemplar theory of human concepts in a broad sense.

If you think of predictive learning as (current model) compared to (perception) = (error to be learned), you have only a vague outline of how that applies to the H of HTM. There has to be juggling between layers of the hierarchy as the features of reality are parsed. I have very indirectly championed the idea that reality is “microparsed”: the features of perception are pulled apart and distributed across the various levels of the hierarchy in a dynamic and cooperative process.

As clusters of features are aggregated in various maps, they form natural attractor basins; things that don’t fit well are passed on to other levels of the hierarchy to be learned. The connections are formed along both the feedforward and feedback pathways. The relations between these attractor basins are learned at the same time as the original perception is learned; this relation is part of what is learned. Through this basic mechanism, hierarchical categories are formed and modified.
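To make the “pass what doesn’t fit upward” idea concrete, here is a toy sketch in Python. Nothing in it is HTM code; Level, parse, and the threshold are invented stand-ins for attractor basins and the residual that gets handed to the next level.

```python
# A toy, non-HTM sketch: each level stores "attractor" patterns; features that
# fit a stored pattern are absorbed at that level, and the residual (what fits
# poorly) is passed up to be learned elsewhere. All values here are arbitrary.
import numpy as np

class Level:
    def __init__(self, n_attractors, dim, threshold=0.15, seed=0):
        rng = np.random.default_rng(seed)
        self.attractors = rng.random((n_attractors, dim)) > 0.7  # binary patterns
        self.threshold = threshold

    def parse(self, features):
        """Absorb features that overlap a stored attractor; return the residual."""
        absorbed = np.zeros_like(features)
        for a in self.attractors:
            overlap = (features & a).sum() / max(a.sum(), 1)
            if overlap >= self.threshold:       # feature cluster falls in this basin
                absorbed |= features & a
        return features & ~absorbed             # what this level could not explain

# "Microparsing": each level keeps what fits its basins, passes the rest up.
levels = [Level(8, 64, seed=s) for s in range(3)]
residual = np.random.default_rng(42).random(64) > 0.8   # a sparse "perception"
for i, level in enumerate(levels):
    residual = level.parse(residual)
    print(f"level {i}: {int(residual.sum())} unexplained bits remain")
```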


1-2 years ago I went deep into the different concept theories (classical, prototype, exemplar, Formal Concept Analysis, the COBWEB algorithm, formal semantics & logic + lambda calculus, frame semantics, etc.)

None of them is satisfying… currently I’d vote for embodied image schemas, though it is not clear how to implement them.

If we go with HTM, then I would think that image-schema concepts are grounded in sequences-of-sequences of sensorimotor programs.
NNs use pattern matching, which is static (cosine and Euclidean similarity), i.e. it won’t work.

A concept is a “recognized sequence”!!
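To make that concrete, here is a small illustration (my own toy code, with made-up step sets): a concept stored as an ordered sequence of states is recognized only when the order is respected, whereas a static similarity over the pooled features cannot tell the orderings apart.

```python
# Toy illustration of "a concept is a recognized sequence": the stored steps
# must be matched in order, while a static pooled-feature similarity is blind
# to order. Step sets and the "grasp_cup" concept are hypothetical.
def static_overlap(a, b):
    return len(a & b) / len(a | b)   # order-blind set similarity (Jaccard)

def sequence_match(stored, observed):
    """True if every stored step occurs, in order, within the observed steps."""
    it = iter(observed)
    return all(any(step <= seen for seen in it) for step in stored)

# Each step is a set of active bits, standing in for a sensorimotor state.
grasp_cup = [{1, 2}, {3, 4}, {5, 6}]       # hypothetical: reach -> close -> lift
observed  = [{1, 2, 9}, {3, 4}, {5, 6, 7}]
shuffled  = [{5, 6}, {1, 2}, {3, 4}]       # same features, wrong order

flat = lambda seq: set().union(*seq)
print(static_overlap(flat(grasp_cup), flat(shuffled)))  # 1.0: statically identical
print(sequence_match(grasp_cup, observed))              # True: steps occur in order
print(sequence_match(grasp_cup, shuffled))              # False: order violated
```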

The other thing all these theories get wrong is that they are based on similarity, but there is no known similarity measure that captures concept-similarity.

The reason is that difference is primary: similarity is only meaningful in context and is asymmetric.
Similarity is only possible across a measurable “dimension”, which is “revealed” only because there is difference.

Which means SDR overlap won’t be the ultimate similarity measure; there has to be some other way.
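For what it’s worth, one existing measure that at least captures the asymmetry (though not the context part) is Tversky’s contrast model. A sketch over SDRs treated as sets of active bits; the alpha/beta weights and the bit sets are chosen arbitrarily:

```python
# Tversky's (1977) ratio-model similarity, asymmetric by design, applied to
# SDRs read as sets of active bits. alpha > beta weights the subject's
# distinctive features more heavily, so sim(a, b) != sim(b, a).
def tversky(a, b, alpha=0.8, beta=0.2):
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

sparrow = {1, 2, 3, 8}       # hypothetical bits: shares 1-3 with "bird"
bird    = {1, 2, 3, 4, 5}    # a richer, more familiar pattern

print(tversky(sparrow, bird))  # ~0.71: "a sparrow is like a bird"
print(tversky(bird, sparrow))  # ~0.63: the reverse direction rates lower
```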


I call that direct vs. inverse similarity measures; each can be defined for comparison at any arithmetic power. For comparison by subtraction, the inverse measure is the inverse deviation of difference: average_abs_difference - abs_difference. In these terms, HTM is using the direct measure for Boolean comparison, basically a sum of AND between sequences of bits?
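In code, for concreteness (the function names and example numbers are mine):

```python
# A literal restatement of the two measures (naming is mine).
def direct_boolean(a, b):
    # Direct measure for Boolean comparison: a sum of AND between bit
    # sequences, i.e. plain SDR-style overlap.
    return sum(x & y for x, y in zip(a, b))

def inverse_subtractive(x, y, avg_abs_diff):
    # Inverse measure for comparison by subtraction: inverse deviation of
    # difference, average_abs_difference - abs_difference.
    return avg_abs_diff - abs(x - y)

print(direct_boolean([1, 0, 1, 1, 0, 1], [1, 1, 1, 0, 0, 1]))  # 3

# avg_abs_diff would presumably be estimated over past comparisons; hard-coded here.
print(inverse_subtractive(5.0, 7.0, avg_abs_diff=4.0))   # 2.0: closer than average
print(inverse_subtractive(5.0, 12.0, avg_abs_diff=4.0))  # -3.0: farther than average
```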


Can you elaborate?

It’s hard to relate this to “concepts”; those are high-level composites.
An explicit definition of similarity should be done for number pairs; similarity between higher-composition comparands is “emergent”.
Disclaimer: this is my own interpretation, and I explain it in the context of a non-neuromorphic approach.
Part 1: Comparison: quantifying match and miss between two variables (match here is my term for similarity), at http://www.cognitivealgorithm.info
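Roughly, the core comparison for a single pair of numbers can be sketched like this (a simplified illustration, not the full definitions; see the link for those):

```python
# Simplified sketch of "match and miss" for a pair of numbers: match as the
# magnitude the comparands share, miss as what they don't. These stand-in
# definitions (min, absolute difference) are a simplification of the write-up.
def compare(a, b):
    match = min(a, b)     # shared magnitude, one literal reading of "match"
    miss = abs(a - b)     # the residual difference, "miss"
    return match, miss

print(compare(5, 7))   # (5, 2)
print(compare(3, 3))   # (3, 0): identical comparands, no miss
```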
