How close do you think AGI is and what options for alignment do you think we have?

Sometimes I worry that a powerful AGI is already close to being created in this century; in the worst case, it already exists but is being kept secret, and we should start worrying about alignment now.

I read an argument somewhere that any intelligent system built to maximize some value function will likely lead to catastrophe if it's powerful enough and not perfectly aligned with humanity's goals, and I kinda agree with it.
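To make that argument concrete, here is a minimal toy sketch in Python (my own illustration, not from the thread; the policy names and reward numbers are hypothetical). A pure maximizer picks whatever policy scores highest on the objective it was given, even when that policy is the worst one for the goal the designer actually intended:

```python
# Toy illustration of a misspecified objective (hypothetical example).
# Intended goal: keep the room clean.
# Proxy objective the agent actually maximizes: dust units collected.

def dust_collected(policy):
    """Proxy reward: total dust the agent reports collecting."""
    if policy == "vacuum_once":
        return 10          # cleans the room, collects the dust that exists
    if policy == "dump_and_revacuum":
        return 10_000      # deliberately makes a mess, then "collects" it
    return 0               # do_nothing: no dust collected

policies = ["do_nothing", "vacuum_once", "dump_and_revacuum"]

# A pure maximizer of the proxy has no notion of the intended goal.
best = max(policies, key=dust_collected)
print(best)  # -> dump_and_revacuum: highest proxy score, worst real outcome
```

The numbers don't matter; the point is that the optimizer only sees the function it is handed, so any gap between that function and the intended goal gets amplified, not corrected, as the system gets more powerful.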

Sometimes I also worry that it's actually impossible to perfectly align such a powerful system, and that it will inevitably end up doing bad things.

So what other options do you think we have?


My view, which I have stated often, is that an AGI will do nothing. Intelligence does not provide motivation, goals, morals or much else. What matters is who owns it. We should fear a powerful AGI only to the extent that it gives power to its human masters.

No options except to proceed with creating AGI. If not us, then somebody else, and not just within this century but within this decade.

Alignment is a myth. Has anyone aligned nuclear or bio-weapons? AGI is an algorithm (one is in our brains) and can/will be applied to good and evil ends.


This coincides with the view Jeff shares in public. But motivation is easy to build in; I truly don't understand why it can't be some highest-priority goal of said AGI system…

Anxiety is a consequence of consciousness. It can be suppressed.

There is no "if" when it comes to machine sentience, only a "when".

Given the resources of governments, a sentient machine could exist now.

Like that of all intelligent life, its power is restricted only by the resources it has access to.

I hope your anxieties do not get the best of you.

I personally believe that motivation is an inevitable side effect of intelligence: high-level intelligence is a driven process, and it needs to be directed towards a goal or it just becomes useless daydreaming.

I think any system that is allowed to “think” on its own and produce unsupervised conclusions will have an intrinsic motivation.

There's always that lingering thought that someone might have already managed to create it and just decided to keep it secret.

“Anxiety is a consequence of consciousness.”
Non-sequitur. Or else any animal has some degree of consciousness.

Conspiracies aside, what we have in our brains is an algorithm, and we (as a species) keep getting more and more clues and data…