Sorry for the delay in my response. This speculation is a work in progress with many gaps:
To have an artificially intelligent agent with emergent behaviors appropriate to all of its natural environmental circumstances, rather than having one cost function, as in ML, it would likely need dozens, hundreds, or thousands of cost functions. Those cost functions could in part be supplied by an analog to the amygdala, routed through the locus coeruleus's (LC) targeted release of norepinephrine.
I understand norepinephrine as having a major role in focusing attention, with the effect of priming neurons in regions that are relevant to the situation that aroused the amygdala in the first place. Those regions would then tend to win neural competitions (Gerald Edelman's neural Darwinism), and the agent focuses on the problem at hand. In short, norepinephrine could be the "on" switch.
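To make the idea concrete, here is a toy sketch of norepinephrine as a gain signal that primes task-relevant regions so they win a winner-take-all competition. Everything here (the region names, activation values, and gain numbers) is invented for illustration; it is only meant to show the shape of the mechanism, not any actual circuit.

```python
def winner_with_ne_gain(activations, ne_gain):
    """Multiply each region's activation by its NE gain (default 1.0,
    i.e. no priming); the highest-gained region wins the competition."""
    gained = {region: act * ne_gain.get(region, 1.0)
              for region, act in activations.items()}
    return max(gained, key=gained.get)

# Baseline activity: the "visual" region happens to be loudest.
activations = {"visual": 0.9, "motor": 0.6, "planning": 0.7}

# Without NE release, the competition goes to the loudest region.
assert winner_with_ne_gain(activations, {}) == "visual"

# Something aroused the amygdala; the LC releases NE targeted at the
# planning region, which now wins despite its lower baseline activity.
assert winner_with_ne_gain(activations, {"planning": 2.0}) == "planning"
```

The point of the sketch is that NE never needs to compute the answer itself; it only biases which region wins an ongoing competition, which is what "focusing attention" means in this picture.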
For this to work, the norepinephrine delivery would have to be precise enough to target specific cortical areas (something I don't know at this point).
On the flip side, dopamine release ultimately leads to a signal that says "you've arrived; you can stop that energy-intensive activity now," i.e. the "off" switch. Since the ventral tegmental area can release dopamine at intermediate stages of progress toward a goal, there must be gaps in this story; the release of dopamine alone doesn't appear to be sufficient to end the focus of attention.
The general idea is that models of these two neurotransmitters could serve the role of a cost function to direct an artificial organism to its goals, with feedback loops providing an analog to gradient descent (as in the neocortex, nucleus accumbens, … back to the LC and the amygdala).
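The on/off loop described above can be sketched as a minimal control loop. All thresholds and step sizes here are invented: an NE-like signal switches focus on when a cost function's error is large, the error-driven state update stands in for the gradient-descent analog in the feedback loop, and a dopamine-like "you've arrived" signal switches focus off once the error falls below a goal threshold.

```python
def pursue_goal(state, goal, step=0.25, dopamine_threshold=0.05, max_steps=100):
    """Toy one-dimensional agent pursuing a single goal.

    NE "on" switch: focus engages while the cost (error) is above threshold.
    Dopamine "off" switch: focus releases once the goal is (nearly) reached.
    """
    focused = abs(goal - state) > dopamine_threshold  # NE engages attention
    steps = 0
    while focused and steps < max_steps:
        error = goal - state                 # one cost function's signal
        if abs(error) < dopamine_threshold:  # dopamine: "you've arrived"
            focused = False                  # "off" switch: release attention
            break
        state += step * error                # feedback update (gradient-descent analog)
        steps += 1
    return state, steps

final_state, steps = pursue_goal(state=0.0, goal=1.0)
assert abs(final_state - 1.0) < 0.05  # attention released near the goal
```

A real agent would run many such loops at once, one per cost function, with the NE-style gating deciding which loop currently owns the agent's attention; that arbitration is exactly the part this sketch leaves out.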
For me, some of the interesting questions are:
- Could this be a key, possibly necessary, approach to the creation of a fully autonomous agent, such that it appears to be fully "alive" within its problem domain, with behaviors that are (almost) always appropriate to its environment (starting, say, at the level of complexity of a honey bee)?
- If so, how simple could a neurotransmitter model get and still provide this behavior? Are two neurotransmitters enough, or do you need six? Would you have to model most of the regions seen in biological organisms, or could you get away with a dozen?
- Is something like this currently part of Numenta’s architecture (my guess, probably “yes”)?