Sometimes I worry that a powerful AGI is already close to being created this century (in the worst case, it already exists and is being kept secret), and that we should start worrying about alignment now.
I read an argument somewhere that any intelligent system built to maximize some value function will likely lead to catastrophe if it's powerful enough and not perfectly aligned with humanity's goals, and I kind of agree with it.
Sometimes I also worry that it's actually impossible to perfectly align such a powerful system, and that it will inevitably end up doing bad things.
My view, which I have stated often, is that an AGI will do nothing. Intelligence does not provide motivation, goals, morals or much else. What matters is who owns it. We should fear a powerful AGI only to the extent that it gives power to its human masters.
This coincides with the view Jeff shares in public. But motivation is easy to build in; I truly don't understand why it couldn't simply be made the highest-priority goal of such an AGI system…
I personally believe that motivation is an inevitable side effect of intelligence: high-level intelligence is a driven process, and it needs to be directed toward a goal or it just becomes useless daydreaming.
I think any system that is allowed to “think” on its own and produce unsupervised conclusions will have an intrinsic motivation.