Politics in AI Research

I’ll try to keep this brief because this is kind of off topic.

It does matter, because AI doesn't have to be an agent, though most of what I've read about AI ethics assumes it does. An AI doesn't need goals or any behavior that changes the world. Building an agent that won't hurt people in pursuit of its goals is a different problem from preventing a tool from causing harm.
You seem to assume AI has to be an agent.

I don't see anything moral about drawing conclusions about the world; an AI with no goals is amoral. What matters is how people use the tool (or what people could do with a tool you're creating), or the AI's own goals if it is an agent.

Be nice. I don't like having memories of being rude or arrogant, which is why I spent half an hour writing these two sentences.
