I was thinking about that very subject quite recently.
One way to take that teacher effect into account in our online-learning models is:
(I was using the example of learning your ABC in a chat with @Bitking.)
Any thoughts/criticism appreciated ^^
Note: Of course, the above is oriented towards training a feedforward path, so that experiencing a sight of ‘A’ evokes ‘A’ back in the higher area. But with almost the same scheme, we could also wire the apical tufts of the visual pathway in reverse, as feedback from the higher-area ‘A’, in the hope that part of the ‘object recognition’ function finally… percolates? compresses? … down to the early visual areas themselves, as Numenta seems to be proposing recently (if I got that right).
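To make the two directions concrete, here is a minimal numpy sketch of the wiring I have in mind, with a crude Hebbian permanence update standing in for HTM learning. All the names, sizes, and thresholds below are hypothetical illustrations, not any published HTM/Numenta API:

```python
# Minimal sketch of the two pathways above (all names/sizes hypothetical).
import numpy as np

rng = np.random.default_rng(0)

N_LOWER, N_HIGHER = 1024, 256   # early visual area / higher 'A' area
CONNECTED = 0.5                 # permanence threshold for a connected synapse
INC, DEC = 0.05, 0.02           # permanence increment / decrement

# Feedforward (proximal) synapses, lower -> higher: training these is what
# lets a sight of 'A' evoke the higher-area 'A' representation.
ff_perm = rng.random((N_HIGHER, N_LOWER)) * 0.1

# Apical feedback synapses wired in reverse, higher -> lower apical tufts:
# the hope is that recognition percolates down to early vision this way.
apical_perm = rng.random((N_LOWER, N_HIGHER)) * 0.1

def hebbian_step(perm, pre_active, post_active):
    """Strengthen synapses between co-active cells, decay the rest
    (a crude Hebbian stand-in for HTM permanence learning)."""
    pre, post = pre_active.astype(float), post_active.astype(float)
    perm += INC * np.outer(post, pre)        # co-active pre/post: reinforce
    perm -= DEC * np.outer(post, 1.0 - pre)  # active post, silent pre: decay
    np.clip(perm, 0.0, 1.0, out=perm)

# One teacher pairing: the sight of 'A' (sparse lower-area pattern) together
# with a *preset* higher-area 'A' code, trained in both directions.
sight_of_A = rng.random(N_LOWER) < 0.02
higher_A = rng.random(N_HIGHER) < 0.02
for _ in range(20):
    hebbian_step(ff_perm, sight_of_A, higher_A)       # feedforward path
    hebbian_step(apical_perm, higher_A, sight_of_A)   # apical feedback path

# Recall check: the sight of 'A' now drives exactly the preset 'A' cells.
overlap = (ff_perm >= CONNECTED) @ sight_of_A.astype(float)
assert set(np.flatnonzero(overlap > 0)) == set(np.flatnonzero(higher_A))
```

The same `hebbian_step` serves both directions; only which population plays ‘pre’ and which plays ‘post’ changes between the feedforward and the apical call.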
[Edit] Although… maybe what I have in mind is not really anything new… since
Reading this more carefully… well, in essence, I’m only replacing ‘randomly’ with ‘preset’ here…
However, those inputs/outputs would be split across two hierarchically distinct areas in the proposal above, whereas I believe the HTM ‘output layer’ represents a bunch of cells within the same area.
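To put that one-word difference in code terms (again a hypothetical sketch, not the actual HTM output-layer implementation): the HTM output layer starts from randomly initialized permanences and lets pooling carve out the mapping, while the proposal above would start from permanences preset by the teacher pairing, with pre and post cells in two distinct areas:

```python
import numpy as np

rng = np.random.default_rng(42)
n_lower, n_higher = 1024, 256

# HTM-style 'output layer': random initial permanences; learning is what
# later selects which synapses become connected.
random_perm = rng.random((n_higher, n_lower)) * 0.1

# Proposal-style: permanences preset from the teacher-given pairing, with
# the two populations living in hierarchically distinct areas. The letter
# codes below are made-up placeholders.
preset_perm = np.zeros((n_higher, n_lower))
teacher_codes = {   # higher-area cell -> lower-area cells it should pool
    0: [3, 17, 42],   # hypothetical 'A'
    1: [5, 8, 99],    # hypothetical 'B'
}
for higher_cell, lower_cells in teacher_codes.items():
    preset_perm[higher_cell, lower_cells] = 0.6  # born above threshold
```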