The standard illustration of the Winograd Schema Challenge is for unsupervised machine learning to disambiguate the referent of the word "it" in the sentence:
"The trophy would not fit in the brown suitcase because it was too big."
The reason CT may be a good "fit" for this challenge is that its probabilistic math gracefully degenerates into Aristotelian logic in situations where truth values are discrete rather than probabilistic. Moreover, CT may be complementary to HTM theory: while HTM is inherently dynamic (temporal) and CT is not, CT is inherently logical where appropriate.
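A minimal sketch of that degeneration, in which probabilistic conjunction collapses into Boolean AND once every truth value is exactly 0 or 1 (the function and probabilities here are illustrative assumptions, not part of CT itself):

```python
# Sketch: probabilistic conjunction degenerates into Boolean AND
# when every truth value is exactly 0.0 or 1.0.

def prob_and(p_a: float, p_b: float) -> float:
    """P(A and B), assuming independence of A and B."""
    return p_a * p_b

# Graded beliefs yield a graded result.
print(prob_and(0.9, 0.7))  # roughly 0.63 -- partial belief

# Discrete truth values: the same formula behaves like Aristotelian logic.
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert prob_and(a, b) == float(a == 1.0 and b == 1.0)
print("degenerates to Boolean AND on {0, 1}")
```

Nothing special has to be switched on for the discrete case; the continuous formula simply reduces to the truth table when its inputs are crisp.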
HTM imputes mesoscale architecture based on its microscale primary theory, while CT imputes microscale architecture based on its mesoscale primary theory. The solution may be for HTM to look to CT for the mesoscale and CT to look to HTM for the microscale.
As an example:
CT’s primary theory is the existence of thousands of “thalamocortical modules” – each of which represents one “attribute” (e.g., color) of the context, which may take on a large number of discrete values (e.g., red, magenta, orange). These discrete values are imputed to be a combinatorial explosion of sparse microscale activations within the module. However, CT’s imputed dynamics, while critical, seem an afterthought tacked onto the primary theory.
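The scale of that combinatorial explosion is easy to check. The module size and sparsity below are illustrative assumptions for the sketch, not figures taken from CT:

```python
import math

# Hypothetical module: n neurons, of which k are active in any one
# sparse activation pattern (both numbers are illustrative assumptions).
n, k = 1000, 20

# Number of distinct k-of-n sparse codes the module could in principle hold.
codes = math.comb(n, k)
print(f"{codes:.3e} distinct {k}-of-{n} activation patterns")

# Even this modest module vastly exceeds the handful of discrete
# attribute values (red, magenta, orange, ...) an attribute might need.
```

So even a small module has far more sparse codes than any plausible inventory of discrete attribute values, which is what lets the discrete values be "imputed" onto microscale activations with room to spare.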