Which is essentially what the brain does when it receives sensory input. I like that approach.
Yes, however, even when the agent doesn’t know what type of information it’s going to get, there are still constraints. Your eyes, for instance, only process visible light, so the visual cortex can only learn to process a subset of the information in the spectrum. The preprocessor sets the constraints of the streaming data.
The agent has to detect the limits of this information stream, but it’s the preprocessor that determines those limits. And do I understand correctly that the preprocessor is also part of the system you’re trying to build? If so, isn’t it just a question of where to implement the limiter?
That said, I do see the value of creating an initially blind agent that needs to detect the limits, if the ultimate goal is to develop a generic cognitive agent (an agent that can work no matter what type of information stream you feed it).
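To make the idea concrete, here is a minimal sketch of what such a blind agent might look like. Everything in it is hypothetical, not anything from an actual system: a `preprocessor` that, like the eye passing only visible wavelengths, filters a raw stream down to a bounded subset, and a `BlindAgent` that knows nothing about those bounds and estimates them purely from the values it observes.

```python
import random

class BlindAgent:
    """Hypothetical agent that knows nothing about its input stream
    and estimates the stream's limits purely from observation."""

    def __init__(self):
        self.low = None
        self.high = None

    def observe(self, value):
        # Widen the estimated limits whenever a value falls outside them.
        if self.low is None or value < self.low:
            self.low = value
        if self.high is None or value > self.high:
            self.high = value

    def limits(self):
        return (self.low, self.high)

def preprocessor(raw, lo=400.0, hi=700.0):
    """Hypothetical preprocessor: passes only values in [lo, hi],
    analogous to the eye passing only visible wavelengths (in nm)."""
    return (v for v in raw if lo <= v <= hi)

random.seed(0)
raw_stream = (random.uniform(0, 1000) for _ in range(10_000))

agent = BlindAgent()
for v in preprocessor(raw_stream):
    agent.observe(v)

# With enough data, the agent's detected limits converge on the
# preprocessor's bounds without the agent ever being told them.
print(agent.limits())
```

The point of the sketch is the division of labour under discussion: the limiter lives in `preprocessor`, yet the agent still has to discover those limits on its own, which is what makes it generic with respect to whatever stream it is fed.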