I feel I need to clarify the idea I proposed: create encoders that act as the primary input channels, analogous to a human being's senses. The idea is to have sensory encoders that require no pre-processing and can take in the world's state directly, as pieces of a "Universal Encoder".
I'm hopeful it makes sense that we could feed in auditory, visual, tactile (kinesthetic), optical-character, and ASCII character streams (for reading the internet), etc. directly, because that is what human beings do. We could convert these "basic" formats to SDR encodings and never have to write another encoder.
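To make the idea concrete, here is a minimal sketch of what one such "basic format" encoder might look like, using an ASCII character stream as the example. Everything here is hypothetical (the names `char_to_sdr` and `stream_to_sdrs` are mine, not part of NuPIC or any HTM library), and the hash-style encoding below is deliberately naive: it produces a stable, sparse representation per character, but unlike a good semantic encoder it gives no overlap between similar inputs.

```python
import random

def char_to_sdr(ch, size=256, active_bits=8):
    """Hypothetical encoder: map one ASCII character to a sparse
    distributed representation (SDR), expressed as the set of active
    bit indices. Seeding on the character code makes it deterministic:
    the same character always yields the same SDR."""
    rng = random.Random(ord(ch))
    return frozenset(rng.sample(range(size), active_bits))

def stream_to_sdrs(text, **kwargs):
    """Encode an ASCII stream as a sequence of SDRs, one per character,
    ready to be fed to an HTM region as a sequence of patterns."""
    return [char_to_sdr(c, **kwargs) for c in text]
```

Note the sparsity: only 8 of 256 bits are active for any character. A real universal encoder would also need the semantic property that similar raw inputs share active bits, which is exactly the hard part this proposal would push down into the lowest levels of the hierarchy.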
All the HTM does (just as the neocortex does) is process sequences of patterns, and from this raw input it builds a vocabulary and conceptual model of the world. I'm hoping this makes sense, because it seems we could use the series of hierarchies that turn the world's sensor data into SDRs representing higher-level concepts, and re-use them. When an HTM user wants to build a complex hierarchy to act as an "intelligence" for a specific purpose, that person could start from an existing section of hierarchy (or hierarchies, in the case of more than one input "sense") that already processes the input format needed for that task.
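The re-use idea above suggests that per-sense encoder outputs need a common input space to plug into. One simple (and again entirely hypothetical, not any existing HTM API) way to sketch that: give each sense its own region of the combined bit space and offset its SDR indices into that region, so pre-built sense hierarchies can be mixed and matched without their bits colliding.

```python
def combine_sdrs(sdrs, sizes):
    """Concatenate per-sense SDRs (sets of active bit indices) into one
    combined input space. Each sense owns a contiguous region whose
    width is given by the corresponding entry in `sizes`, so bits from
    different senses can never collide."""
    combined, offset = set(), 0
    for sdr, size in zip(sdrs, sizes):
        combined |= {offset + i for i in sdr}
        offset += size  # next sense starts after this sense's region
    return frozenset(combined)
```

For example, combining a 4-bit "auditory" SDR `{0, 1}` with a 4-bit "visual" SDR `{0}` yields `{0, 1, 4}`: the visual sense's bit 0 lands at index 4, in its own region.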
Is a "Universal Encoder" what @karchie was speaking of in the original thread? (I can't help hearing "Universal Translator" from Star Trek when using that phrase.)