Let’s say that an object ‘X’ has the following 5 properties [‘A’, ‘B’, ‘C’, ‘D’, ‘E’].
Let’s say that an object ‘Y’ has the following 6 properties [‘A’, ‘B’, ‘C’, ‘D’, ‘E’, ‘F’].
(e.g. these properties could be something like color, shape, size, etc.)
Let’s also say that ‘X’ and ‘Y’ have little or no commonality between them.
It seems more efficient for the brain if observing ‘F’ could strongly inhibit/negate all the activations relating to ‘X’, even when the brain also observes properties ‘A’ to ‘E’.
What mechanism in the brain helps it learn something like the above case, where a single property ‘F’ changes the whole meaning of an object from ‘X’ to ‘Y’, even though ‘X’ and ‘Y’ could be extremely different from each other?
If I’m not mistaken, HTM uses only positive ‘weights’ and global inhibition through a winner-takes-all mechanism to do any sort of inhibition/negation. Is that consistent with biology, or does the brain have an equivalent of the negative ‘weight’ mechanism found in artificial neural networks that would handle the type of learning I described above more easily? Or would the brain have difficulty learning the above case?
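To make the contrast concrete, here’s a toy sketch (plain Python, not HTM; the object units and weight values are all made up for illustration) of the two mechanisms: with only positive weights, ‘Y’ merely outcompetes ‘X’ by one unit of extra drive before winner-takes-all, whereas a negative ‘weight’ from ‘F’ lets ‘F’ directly and strongly negate ‘X’:

```python
features = ['A', 'B', 'C', 'D', 'E', 'F']

# Positive-weights-only model: each object unit sums feed-forward drive
# from the features it is tuned to; a winner-takes-all step (standing in
# for global inhibition) then picks the most active unit.
pos_weights = {
    'X': {'A': 1, 'B': 1, 'C': 1, 'D': 1, 'E': 1},          # X ignores F
    'Y': {'A': 1, 'B': 1, 'C': 1, 'D': 1, 'E': 1, 'F': 1},  # Y includes F
}

def winner(weights, observed):
    """Score every object unit against the observed features and
    return the winner-takes-all choice plus the raw scores."""
    scores = {obj: sum(w.get(f, 0) for f in observed)
              for obj, w in weights.items()}
    return max(scores, key=scores.get), scores

# With only positive weights, observing F gives Y a score of 6 vs. 5
# for X: X still receives full drive and is only narrowly outcompeted.
print(winner(pos_weights, {'A', 'B', 'C', 'D', 'E', 'F'}))

# Negative-weight model: F projects a strong negative weight onto X
# (like an ANN's negative weight), so observing F actively suppresses
# X's activation rather than merely outvoting it.
neg_weights = {
    'X': {'A': 1, 'B': 1, 'C': 1, 'D': 1, 'E': 1, 'F': -10},
    'Y': {'A': 1, 'B': 1, 'C': 1, 'D': 1, 'E': 1, 'F': 1},
}
print(winner(neg_weights, {'A', 'B', 'C', 'D', 'E', 'F'}))
```

In the positive-only case the margin between ‘X’ and ‘Y’ stays small no matter how different the two objects are, while the negative weight drives ‘X’ far below threshold the moment ‘F’ appears, which is the kind of strong inhibition the question is asking about.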