@gidmeister,
Thank you for your interest in my theory! I'll gladly answer your good questions.
The list you gave of the semantic meanings of each aspect of reality (e.g., FA, LE, m(FE), etc.) is very close; however, I see a couple of errors:
format:
number. semantic meaning (another way to think of it) = SIGNIFIER
–> {manifestation in the Rubik's cube example}
------- predicted aspects --------
1. an object feature (part of an object's abstract model) = FA
–> {feeling an edge on the cube}
2. an object location (part of an object's orientation) = LA
–> {knowing that the edge is on the red face, in the bottom-left corner area of the cube}
3. part of the body's orientation, state, and position (a new or previous motor command) = FE
–> {the previous motor movement of my left hand, which caused it to become closer to the cube sitting on the table}
4. an object's location in space (not a motor command, but a location) = LE
–> {knowing that the red side of the cube is 20 cm and 10 degrees away from the tip of my hand, for example}
------- modelled aspects --------
5. the current or desired state of the body, i.e., the body's entire orientation, position, etc. (note: when this state is the new desired state, this is the goal) = m(FE)
–> {the current model of my left arm, which might posit that my forearm is at a 30-degree angle from my chest, and my hand is oriented 20 degrees counterclockwise to my forearm}
6. the full surroundings (the location of all objects being modelled) = m(LE)
–> {the model of the location of every object in the room, like the chair, the Rubik's cube, the table, and the walls, all smushed together into a single model of the location of my surroundings}
7. the full model of the object, independent of orientation or location in space = m(FA)
–> {the Rubik's cube, thought of in its entirety: every possible unique somatosensory feature-location pair I could feel on every side of the cube}
8. the full model of the orientation of the object = m(LA)
–> {the cube's current orientation in space: the fact that the cube is sitting with the blue side downwards, the red side facing me, and the yellow side facing left}
With this said, most of your understanding of these aspects of reality and their names was actually correct; this list just corrects a couple of errors.
One important note about how I organized this list: I separated the predicted aspects from the modelled aspects. This separation matters for answering your initial question of which ones are predicted, when, and in what order.
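For concreteness, the eight aspects above can be laid out as plain data. This is purely my own illustrative sketch; the dictionary names and the helper function are mine, not part of the theory:

```python
# Hypothetical data sketch of the eight aspects of reality listed above.
# The split between "predicted" and "modelled" aspects mirrors the list.

PREDICTED_ASPECTS = {
    "FA": "an object feature (part of an object's abstract model)",
    "LA": "an object location (part of an object's orientation)",
    "FE": "part of the body's orientation, state, and position (a motor command)",
    "LE": "an object's location in space (not a motor command)",
}

MODELLED_ASPECTS = {
    "m(FE)": "the current or desired state of the body (the goal, when desired)",
    "m(LE)": "the full surroundings (locations of all modelled objects)",
    "m(FA)": "the full model of the object, independent of orientation/location",
    "m(LA)": "the full model of the orientation of the object",
}

def is_modelled(signifier: str) -> bool:
    """Modelled aspects are written m(X), where X is a predicted aspect."""
    return signifier.startswith("m(") and signifier.endswith(")")
```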
Now for your questions:
question: what does orientation mean anyway?
answer: Other than the intuitive definition of orientation, it's hard to answer this cleanly. It's sort of like taking a Rubik's cube and simply rotating it in space, relative to you: if you aren't changing its location in space, you are changing its orientation in space. It is this orientation that is "m(LA)".
question: how can you know the orientation of an object without incorporating the features of the object?
answer: You are exactly right; you can't. Knowing the previously predicted object feature is imperative to predicting a piece of the new orientation of the object. (Note: "a piece of the new orientation of the object" is identical to an allocentric location on an object.)
question: does orientation include (egocentric) space?
answer: No, it does not. Space, or in other words location in space, is handled exclusively by LE-predicting or m(LE)-modelling layers; orientation is merely an object's "rotation in space" (more or less), as described earlier. You noted the angle and distance an object is away from you; this is actually a good rough idea of what LE predicts, or what m(LE) models. (Note: this is not exactly how brains deal with egocentric locations, however.)
question: what is the order of events? do you first have a goal m(FE)? how does it unfold?
answer: This is a really good question. I didn't talk about this in the paper at all, but I'd say there is actually very little complicated "order" to it (such as FE, then LE, then LA, etc.). My hypothesis is that the predictions and models happening in the CT modules in both E and A regions are simultaneously the first things to be modelled or predicted in the process of developing a model of the world. In fact, it may even be misleading to think of the modelling and predicting happening in an E.CTmod or an A.CTmod as separate: they are intimately dependent on each other's activity, and one can only succeed at the moment the other succeeds.
So, in other words, in the case of prediction, FE and LA must successfully predict their inputs at the same time, and only then can FA or LE predictions start to be made. (Take this with a grain of salt, however, because it is based purely on my understanding of the theory, not on any neuroscience evidence or software simulation.)
This situation is further complicated by the activity of the pooling layers, in other words, the modelling of the aspects of reality, like m(FE), m(LA), etc. I postulate that modelling only becomes successful after a sufficient number of successful predictions are made about the inputs (after the LA, FE, FA, or LE aspects are successfully predicted), OR after help (apical depolarizations) from the parent region allows a stable model of a particular aspect of reality to coalesce.
Either of these routes will allow a stable model of the inputs to be made.
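The gating described above (FE and LA must succeed simultaneously before FA/LE prediction begins, and a model stabilizes either after enough successful predictions or with apical help) can be sketched as toy logic. Everything here, including the threshold value and the function names, is my own invention for illustration:

```python
# Toy sketch of the ordering described above (names and threshold are mine,
# not the theory's): FA/LE prediction is gated on *simultaneous* FE and LA
# success, and a pooling layer stabilizes after enough successful
# predictions OR with apical help from the parent region.

def can_predict_fa_le(fe_ok: bool, la_ok: bool) -> bool:
    # Only once FE and LA both succeed can FA or LE predictions begin.
    return fe_ok and la_ok

def model_is_stable(successful_predictions: int,
                    apical_help: bool,
                    threshold: int = 3) -> bool:
    # Either route allows a stable model of the inputs to form.
    return successful_predictions >= threshold or apical_help
```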
question: does FA refer to the current edge on an object being felt, or the predicted future edge that will be felt?
answer: In the case of the output of layer A.4 (or A.3b-alpha), always the former: it represents the current edge being sensed. In order to develop this FA at timestep {t} (i.e., the current feature on an object), you need to do the following. This is given specifically for layer 4, though layer 3b-alpha is not much different.
1. At timestep {t-1}, you need to develop a prediction of what allocentric location the FA at {t} will appear at on the object. This information is simply the output of A.6a (in the case of A.4), which, as we know, is responsible for producing the allocentric location of some feature being sensed. For the sake of example, we will assume that A.6a has successfully predicted the allocentric location of the sensory feature that occurred at {t-2}. We can now imagine that this new activity of layer A.6a causes a distal depolarization in layer A.4. This "allocentric location of the previous feature" is, for some non-intuitive reason, the prediction of the allocentric location of the feature that will occur at the next timestep, {t}. I am assuming that there must be some learnable translation between "the allocentric location of the previous feature" and "the predicted allocentric location of the newly arriving feature," which might be learned through the exact dendritic inputs that A.6a gives to the distal dendrites of A.4. (This part is where I'm not really sure, to be perfectly honest.)
2. At timestep {t}, now that this prediction of the allocentric location of the newly arriving feature has been made (through A.6a distally depolarizing cells in A.4), the moment the newly arriving sensory feature comes into layer 4 proximally, it is made more specific by the current distal depolarizations from 6a, through what I call "Competitive Ion Update Inhibition." The resultant set of cells, which received both the proximal inputs about the sensory feature and the LA distal depolarization, is the current allocentric feature being perceived, or in other words, the currently predicted allocentric feature.
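As a rough illustration of the two steps above, here is a toy SDR sketch in which cells are plain integers, the learned translation from "previous LA" to "predicted LA of the new feature" is just a lookup table, and the "Competitive Ion Update Inhibition" step is crudely approximated by a set intersection. All of this is my assumption for illustration, not a claim about the actual mechanism:

```python
# Toy SDR sketch of the two-step FA mechanism (cells are ints; the
# intersection below is a crude stand-in for "Competitive Ion Update
# Inhibition", not a model of the biology).

def distal_depolarize(la_sdr: set[int], learned_map: dict[int, int]) -> set[int]:
    """Step 1 ({t-1}): A.6a's LA output depolarizes cells in A.4 via a
    learned translation from the previous LA to the predicted new LA."""
    return {learned_map[c] for c in la_sdr if c in learned_map}

def perceive_feature(proximal_sdr: set[int], depolarized: set[int]) -> set[int]:
    """Step 2 ({t}): the arriving proximal feature is sharpened by the
    current distal depolarizations; the surviving cells are the currently
    predicted allocentric feature (FA)."""
    winners = proximal_sdr & depolarized
    # If nothing was predicted, the raw input passes through unsharpened.
    return winners if winners else proximal_sdr
```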
question: are the predictions in mutual directions? for instance, does FA predict LA, and LA predict FA?
answer: Not quite. Each layer that works with FA or LA (let's use A.4 and A.6a, respectively) predicts its own future semantic activity ("semantic activity" meaning an SDR that represents an FA, or an SDR that represents an LA). Each layer makes its own predictions about its future activity using the previous activity of neighboring layers; in the case of layer 4, it uses the previous activity of 6a, which deals with LA.
question: do you have an FA at time {t} predicting an LA at time {t+1}?
answer: Yes, technically. If layer 4 produces an FA at {t}, then at {t} that FA arrives distally at 6a. This distal input can be thought of as PART of the prediction of the LA that will be produced at time {t+1}.
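A minimal toy loop showing that arrangement: each layer predicts its own next SDR using the neighbouring layer's previous activity, so FA at {t} feeds the LA prediction for {t+1}, and vice versa. The transition table here is a stand-in for learned dendritic associations; all names and structure are my own illustration:

```python
# Toy two-layer loop (structure and names are mine, for illustration only):
# each layer predicts its OWN future SDR using the neighbouring layer's
# *previous* activity as context. Neither layer outputs the other's
# representation directly.

def step(prev_fa: frozenset, prev_la: frozenset, transition) -> tuple:
    new_fa = transition("A.4", prev_la)   # layer 4 uses 6a's previous LA
    new_la = transition("A.6a", prev_fa)  # layer 6a uses 4's previous FA
    return new_fa, new_la

# A stand-in for learned dendritic associations: a plain lookup table.
table = {
    ("A.4", frozenset({1})): frozenset({10}),
    ("A.6a", frozenset({0})): frozenset({2}),
}

def transition(layer, context):
    return table.get((layer, context), frozenset())
```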
Moving on to your observation about Numenta's temporal pooler: you are completely correct. The temporal pooling algorithm has no sequence memory/prediction functionality; it is unable to determine what the user plans to touch next, and can only build a cohesive model of what is currently being touched.
I think the confusion here stems from a misunderstanding of the place of the temporal pooler functionality in my theory: it is not used to predict the new (allocentric) location of an object; that is done by an inference layer, specifically A.6a or A.5b-alpha. Temporal pooling layers, on the other hand, are simply in charge of smushing all the allocentric locations that have ever been felt on an object into a single model: the orientation of the object.
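Under that reading, the pooling role can be sketched as a running union of LA SDRs into a single stable representation, m(LA). This is my simplification for illustration, not the actual algorithm:

```python
# Toy sketch of the pooling role described above (my simplification): the
# pooling layer does no sequence prediction; it simply accumulates every
# allocentric location (LA) ever felt on the object into one stable
# representation, the model of the object's orientation, m(LA).

class OrientationPooler:
    def __init__(self):
        self.m_la: set[int] = set()

    def pool(self, la_sdr: set[int]) -> set[int]:
        """Smush each newly sensed LA into the stable model."""
        self.m_la |= la_sdr
        return self.m_la

    def reset(self):
        """A new object starts a fresh model."""
        self.m_la.clear()
```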
I hope that helps in understanding this part of the theory. I'd be happy to answer additional questions if it still doesn't make sense.
It's funny you say that people probably can't build a software model based on this theory, actually. I am currently finishing the architecture and code for a piece of software that will simulate every aspect of my theory in detail. I will be posting it to the forums quite soon.
Very nice Feynman quote at the end.