I outline a way for an agent to learn the dispositions of a particular individual through inverse reinforcement learning, where the state space at time t includes an fMRI scan of the individual to represent his brain state at that time. The fundamental assumption is that the information shown on an fMRI scan of an individual is conditioned on his thoughts and thought processes. The system models both long- and short-term memory, as well as any internal dynamics of the human brain we may not be aware of. The human expert will wear a sensor suit for a set duration; the sensor data will be used to train a policy network, while a generative model will be trained to produce the next fMRI scan conditioned on the present one and the state of the environment. During operation, the humanoid robot's actions will be conditioned on this evolving fMRI state and the environment it is in.
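To make the deployment-time control loop concrete, here is a minimal sketch of the data flow: a policy conditioned on the current fMRI state plus the environment, and a generative dynamics model that rolls the fMRI state forward. All dimensions, function names, and the plain-NumPy two-layer networks are illustrative assumptions of mine, not part of the proposal; a real system would use learned embeddings of raw scans and trained deep models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not from the proposal):
FMRI_DIM = 64    # embedded/flattened fMRI scan
ENV_DIM = 16     # environment observation
ACTION_DIM = 8   # robot actuator commands

def mlp(in_dim, out_dim, hidden=32):
    """Random weights for a tiny two-layer perceptron (untrained stand-in)."""
    return (rng.normal(0, 0.1, (in_dim, hidden)),
            rng.normal(0, 0.1, (hidden, out_dim)))

def forward(params, x):
    w1, w2 = params
    return np.tanh(x @ w1) @ w2

# Policy network: action conditioned on (current fMRI state, environment).
policy = mlp(FMRI_DIM + ENV_DIM, ACTION_DIM)

# Generative model: next fMRI state conditioned on (current fMRI, environment).
dynamics = mlp(FMRI_DIM + ENV_DIM, FMRI_DIM)

# One step of the deployment loop:
fmri = rng.normal(size=FMRI_DIM)   # stand-in for an embedded scan
env = rng.normal(size=ENV_DIM)

state = np.concatenate([fmri, env])
action = forward(policy, state)        # robot action for this step
fmri_next = forward(dynamics, state)   # predicted next brain state

print(action.shape, fmri_next.shape)
```

At each subsequent step `fmri_next` replaces `fmri`, so the robot's behaviour is driven by a simulated, evolving brain state rather than a live scan.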
These assumptions about determinism might prove problematic.
Could you give an example scenario? If any part of the process is random, then it cannot yield new information, because the entropy is maxed out.
I don’t like to argue about determinism. If you would like to find out the positions of other forum members, you can read the thread I linked above.