So I was reading the Wikipedia page on PID loops, and as an example they describe how a ship's captain steers the boat by turning the steering wheel until the boat points towards the destination. And it occurred to me that this is a thing that humans can do. Our brains can learn and implement closed-loop controllers for arbitrary tasks.
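For concreteness, here's what that loop looks like as code: a minimal sketch in Python, with made-up gains and grossly simplified ship dynamics (not tuned for any real vessel):

```python
# Minimal sketch of a PID controller steering toward a target heading.
# All gains and the toy "rudder accelerates the turn" dynamics are
# invented illustration values.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

heading, heading_rate = 0.0, 0.0    # degrees, degrees/s
target, dt = 30.0, 0.1              # desired heading, time step
pid = PID(kp=0.8, ki=0.02, kd=2.0)  # made-up gains

for _ in range(600):
    rudder = pid.update(target - heading, dt)
    heading_rate += 0.05 * rudder * dt  # toy dynamics: rudder drives the turn rate
    heading += heading_rate * dt
```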
Actually, only someone still learning steers purely by feedback like that. An experienced captain turns the wheel, watches the response, then centers the wheel before the correct heading has been reached. The captain has a mental model of the ship and its controls, as do all higher animals. Brains can learn and implement models of the real physical world, and use them to perform arbitrary tasks.
Closed loop controllers play a role too, but for more primitive tasks like being able to ski or ride a bike. Our AI, no matter how impressive in other ways, can do none of this.
What do you mean? Which of “our” AIs can’t do what? Skiing?
There are lots of AIs that can learn to do lots of things. Yes, PID-like body control too.
PS:
At the highest, conceptual level, this works by looping through
(observation, action, result) correction cycles until the desired result is achieved.
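Spelled out as a sketch (every function name here is a hypothetical placeholder, not a real API):

```python
# Hypothetical sketch of the (observation, action, result) correction
# cycle. observe(), choose_action(), act() and close_enough() are
# placeholders for whatever the agent actually implements.
def pursue(goal, observe, choose_action, act, close_enough):
    observation = observe()
    while not close_enough(observation, goal):
        action = choose_action(observation, goal)  # pick a corrective action
        act(action)                                # apply it to the world
        observation = observe()                    # see the result, loop again
    return observation
```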
Which, BTW, neither HTM nor mainstream ML integrates seriously: the notion of desires (or goals). Well, there is reinforcement learning, but that (with a handful of exceptions) abstracts the desire to the highest level and keeps it outside the actual model.
I attribute that function to the subcortical structures in the brain, particularly the hypothalamus.
I anticipate that adding the functions of the amygdala and hypothalamus to transformer models is the way forward.
That’s a great example!
Nothing in evolution prepared us to ride a bike. We learn how to control the bike (our actions → effects) and we decide on a goal (where we want to go) and then our brain implements some kind of closed loop control over the bike.
And controlling a bike is a non-trivial task! In order to turn one way, you first need to steer the bike the other way so that you start falling in the direction of the turn. This video demonstrates the effect: https://www.youtube.com/watch?v=9cNmUNHSBac
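In controller terms, the counterintuitive part is a sign flip: the steering input that initiates a turn points away from the turn. A toy sketch, with an invented gain and the real bike dynamics abstracted away:

```python
# Countersteering in one line: to lean (and hence turn) right, first
# steer left. The gain is invented; real bike dynamics are far richer.
def steer_command(desired_lean, current_lean, k=0.8):
    # Steering away from the turn moves the wheels out from under the
    # center of mass, so the bike falls into the turn.
    return -k * (desired_lean - current_lean)
```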
That’s what I’ve been describing: the brain has a model of the world (actually lots of tiny models), a set of goals, a choice of actions and a learned understanding of how the actions affect the model.
But this is not closed loop; this part is predictive/open loop. The brain chooses an action before seeing any results. The closed loop comes in for making small course corrections along the way, based on feedback about progress toward the goal.
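One way to sketch that split is a feedforward term computed from the internal model plus a small feedback correction. Rough Python, where inverse_model is a stand-in for the brain's learned action→effect model and the feedback gain is an arbitrary illustration value:

```python
# Sketch of predictive (open-loop) action plus closed-loop correction.
# inverse_model() is hypothetical; goal and state are scalars here
# just to keep the sketch self-contained.
def control_step(goal, state, inverse_model, feedback_gain=0.3):
    planned = inverse_model(goal, state)    # chosen before seeing any results
    error = goal - state                    # feedback on progress so far
    return planned + feedback_gain * error  # small course correction
```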
I don’t know which model you refer to, because if you claim none of “our AIs” is able to accurately control a human-like body, here’s a counterexample.
If you mean they cannot balance a bike… well, as an RL task, a bike is easier than an inverted pendulum (e.g. a cartpole/hoverboard) system, because a moving bike has inherent stability. A riderless bicycle with enough speed doesn’t fall. What makes it less intuitive to learn is that direction control requires the rider to first steer slightly in the direction opposite to the intended turn.
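For comparison, the inverted pendulum case is small enough that even a hand-written feedback rule does passably, no learning at all. A sketch against Gymnasium's CartPole-v1 (the 0.5 weight on angular velocity is an arbitrary choice, and the cart can still drift out of bounds eventually):

```python
# Keeping Gymnasium's CartPole-v1 upright with a crude fixed policy:
# push the cart toward the side the pole is falling.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total = 0
for _ in range(500):
    angle, angle_vel = obs[2], obs[3]              # pole angle and angular velocity
    action = 1 if angle + 0.5 * angle_vel > 0 else 0  # push right if falling right
    obs, reward, terminated, truncated, info = env.step(action)
    total += reward
    if terminated or truncated:
        break
print(f"survived {total:.0f} steps")
env.close()
```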
If I remember correctly, there are indeed PID controllers in vertebrates: they are actually the central pattern generators in the spinal cord, and they control the set point of balance between extensor and flexor muscles.
As far as I can tell, what the cortex outputs is a scalar value for the target joint angle (P) plus a cerebellum offset (I/D).
The joint angle signal gets converted in the spinal cord to extension/flexion in muscles using the same PID-style learning, but with muscle spindle signals providing feedback on the current joint angle.
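Read as a control diagram, a toy version of that loop might look like this, with the spindle signal as the feedback measurement (all gains invented for illustration):

```python
# Toy model of the loop described above: cortex sets a target joint
# angle, the spinal circuit converts the error into extensor/flexor
# drive, and a spindle-like signal reports the current angle back.
def spinal_step(target_angle, spindle_angle, integral, kp=1.5, ki=0.1):
    error = target_angle - spindle_angle  # cortical target vs. spindle feedback
    integral += error                     # slow integral-like component
    drive = kp * error + ki * integral    # net extensor-minus-flexor drive
    extensor = max(drive, 0.0)            # opposing muscles split the sign
    flexor = max(-drive, 0.0)
    return extensor, flexor, integral
```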
That makes sense: PID-style behaviour evolved and was refined across many species. There are multiple sensory inputs for proprioception, from muscles, tendons and joints, which vary across species. There are also higher-level inputs such as the vestibulocochlear and visual systems, all linked together by evolved mechanisms.
So I don’t think there is much learning going on at this level, and however we replicate these systems it’s not really AI. The learning is for complex behaviour involving integration across multiple systems: picking things up, throwing things, gaits, locomotion, using tools, etc. Show me an AI that can learn to use a hammer to drive a nail, or crack an egg, or kick a football, or ride a horse.