Successful applications here are what will put TBT on the map in applied AI in a huge way IMO.
In terms of the particular arena, your question about CCs understanding general rules is totally practical and must be addressed at some point – though I think there are milestones to hit before then.
My particular hunch is that the next major milestone should be an agent moving through a virtual space where there are moving objects to contend with.
The first step IMO is to prove that the agent (composed of a group of CCs) can continually identify these objects – meaning it doesn’t mislabel them just because they move.
At first the agent could simply be told that one set of moving objects (X) is dangerous and another set (Y) is desirable (like an animal knowing predators from prey).
If we see the agent behaving ‘rationally’ (as in consistently avoiding the X objects and pursuing the Y ones), this suggests that the agent has formed ‘invariant representations’ of the objects – meaning it continuously knows what they are, despite their changing relative position & orientation.
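To make that milestone concrete, here’s a minimal Python sketch of the kind of scoring harness I’m imagining. Everything in it (the `ToyAgent`, the grid world, the `identify`/`act` method names) is my own hypothetical illustration, not anything from an actual TBT/Monty codebase – a real CC-based agent would replace the stand-in and have to earn these two scores:

```python
import random

DANGEROUS = {"snake", "wasp"}   # the X set
DESIRABLE = {"apple", "berry"}  # the Y set

class ToyAgent:
    """Stand-in agent with perfect invariant representations; a real
    CC-based agent would have to earn these scores."""
    def identify(self, true_name, pose):
        return true_name  # ignores pose: identity stays invariant
    def act(self, label):
        return "avoid" if label in DANGEROUS else "pursue"

def run_episode(agent, objects, steps=100, size=10):
    """Random-walk every object and score the agent on (a) keeping each
    object's label stable and (b) acting consistently on that label."""
    positions = {name: (random.randrange(size), random.randrange(size))
                 for name in objects}
    stable = consistent = 0
    for _ in range(steps):
        for name in objects:
            # The object's pose changes every step; its identity doesn't.
            x, y = positions[name]
            positions[name] = ((x + random.choice((-1, 0, 1))) % size,
                               (y + random.choice((-1, 0, 1))) % size)
            label = agent.identify(name, positions[name])
            action = agent.act(label)
            stable += (label == name)
            consistent += ((label in DANGEROUS and action == "avoid") or
                           (label in DESIRABLE and action == "pursue"))
    total = steps * len(objects)
    return stable / total, consistent / total

stability, rationality = run_episode(ToyAgent(), DANGEROUS | DESIRABLE)
print(f"label stability: {stability:.0%}, rational behavior: {rationality:.0%}")
```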
The current demo of TBT (afaik) is object identification, where multiple sensors (fingers) are shown to recognize a coffee cup by generating movements and sharing sensory information – to identify the cup faster than a single sensor could.
This identification appears to work, but it assumes (as I understand it) that the cup is not moving and is close enough for the agent (i.e. the group of finger CCs) to touch it.
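For intuition about why sharing across fingers speeds things up, here’s a toy Python cartoon of the voting idea. The object/feature tables and the simple set-intersection ‘vote’ are my own stand-ins, not how Numenta’s actual demo is implemented:

```python
import random

# Each touch narrows the candidate set; pooling across sensors
# intersects those sets, so more sensors converge in fewer steps.
OBJECTS = {
    "coffee_cup": {"curved_wall", "handle", "rim", "flat_bottom"},
    "bowl":       {"curved_wall", "rim", "flat_bottom"},
    "spoon":      {"handle", "shallow_scoop"},
}
TARGET = "coffee_cup"

def steps_to_identify(num_sensors, rng):
    """Count sensing steps until only one candidate object remains."""
    candidates = set(OBJECTS)
    steps = 0
    while len(candidates) > 1:
        steps += 1
        for _ in range(num_sensors):
            # Each sensor touches a random spot and senses one feature...
            feature = rng.choice(sorted(OBJECTS[TARGET]))
            # ...then 'votes' by discarding objects lacking that feature.
            candidates = {o for o in candidates if feature in OBJECTS[o]}
    return steps

rng = random.Random(0)
for n in (1, 3):
    avg = sum(steps_to_identify(n, rng) for _ in range(1000)) / 1000
    print(f"{n} sensor(s): {avg:.2f} steps on average to identify the cup")
```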
I’d like to see this same identification performance in a virtual space where the cup is moving, along with other objects also moving in the background. This would add robustness to the CCs’ object-recognition power – getting closer to an agent that could actually navigate a more realistic world in some basic way.
I favor this kind of goal first, before learning games like Tic-Tac-Toe or Go, because the closed nature of those games (I suspect) is what allows conventional ML methods to succeed – even though those methods are not equipped to navigate any realistic world where things are moving and not totally predictable.
I think that this cortical-based approach of HTM & TBT is best equipped to express true real-world intelligence, the way we animals have to. So I want to show HTM & TBT doing things that no other system has succeeded at whatsoever, or even seriously tried (because they know it wouldn’t work).
Regardless, this is a fundamental question that needs addressing sooner or later. My hypothetical world with X dangerous and Y desirable objects doesn’t include the agent learning that X are dangerous and Y are desirable.
The current agent that recognizes the coffee cup doesn’t understand, as we do, that if the cup fell it would break, and that the sharp pieces could pose a threat. This to me is a whole other level of understanding which must be reached for a truly intelligent agent, and it deserves our reflection – I just think that prioritizing it as a demonstrable app would be kinda like trying to run before we can crawl.