Politics in AI Research

Planning to bounce back and forth to make this thread go on forever? You could at least find a link or something.
You claimed your earlier rudeness was for the sake of exchanging information, but it only served to drag out the thread.

What Paul Lamb wrote sounds a lot like a summary of what you’re saying. Maybe you could give your own summary? Rather than making a specific claim, are you reacting to what you perceive as people dismissing philosophy (or some other topic)? Is that what you meant by our unconscious incompetence?

Alright, I’ll explain why I’m not super interested in complex morals.

First of all, I don’t care much about them with regard to modern AI, because I’m not interested in modern AI. It’s other people’s problem to solve. Yes, it’s a huge problem.

With regard to general AI, I think complex morals are important, but not all people and countries will care. That’s the bigger issue, and philosophy is just a distraction if that problem goes unsolved.

Expanding your quote a bit:

I basically said “you don’t just build whatever is possible” directly after the part you quoted, in the parentheses. I was saying the tool itself has no responsibility unless it’s an agent. In that sense, non-agent AI is amoral. This was in response to something from before the thread was split.

You later contradicted yourself, denying you assume AI has to be an agent (despite that being an implicit assumption in the above quote):

Because of that contradiction, it’s hard to understand what you think beyond “you’re wrong,” but it seems like you just misunderstood me and there’s no real disagreement here.

Great response, very informative. You know I agree AI will have impacts with moral consequences? I’m saying something very specific: if intelligence is defined in a way which doesn’t include intentions, it is amoral in and of itself. That’s all. It’s impossible to make intelligence have good intentions if it has no intentions.

Do you get why that’s wrong? Maybe you were saying there’s an assumption that intelligence has no moral impacts on the world. I doubt anyone argued that. Also, you say any agent which acts will have moral consequences, but it needn’t act. Works I’ve seen on AI ethics assume general AI will have desires and act to produce desired world states, and from that they conclude AI will be problematic, e.g. the paperclip maximizer (where a superintelligence has the goal of making paperclips and turns everything into paperclips). Perhaps what you read was responding to that assumption, and you misunderstood it. I haven’t yet finished the Thousand Brains book, if that’s what you were responding to.

Here’s why I don’t think complex morals are the most important thing for general AI (they’re still important, though). First, not all countries or people will bother with them if the world stays the way it is right now. We need a world where there’s no reason to compete with other countries.

Second, general AI will potentially cause our extinction before it does much besides process data. It’s easy to underestimate how big a deal processing data (e.g. for science) is, compared to things like AI warfare. Let’s say we’re at a point in time where general AI is fairly young and only has dog-level intelligence. Imagine a dog-level intelligence which is entirely directed towards something specific, e.g. math. Its entire world is math. There’s no need to translate from our messy world to an abstract concept. I think that dog would be at least as good as a human child at math. Except silicon is a million times faster, and it never loses focus. So imagine something as smart as a human child, 100% focused on math, doing over 2,700 years of math per day. That’s just one AI device, and you can bet we’ll make at least a billion.

2.7 trillion years of child-level thought per day.
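To spell out that arithmetic, here’s a quick back-of-the-envelope sketch in Python. The 1,000,000x speedup and the billion devices are the assumptions above; the exact figures are only illustrative.

```python
# Back-of-the-envelope sketch of the numbers above. The 1,000,000x speedup and
# the 1 billion devices are the assumptions stated in the post; the rest is
# just unit conversion.

speedup = 1_000_000        # assumed speedup of silicon over human-speed thought
devices = 1_000_000_000    # assumed number of AI devices
days_per_year = 365.25

# One device: each real day holds `speedup` days of subjective thought.
years_per_device_per_day = speedup / days_per_year
print(f"{years_per_device_per_day:,.0f} years of thought per device per day")
# -> about 2,738 years per day

# All devices combined, per real day.
total_years_per_day = years_per_device_per_day * devices
print(f"{total_years_per_day:,.0f} years of thought per day")
# -> about 2.7 trillion years per day
```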

Yeah, everything’s gonna be solved in the first hundred thousand years of that. So this’ll happen before we even have dog-level intelligence (maybe even before general AI is fully complete, which might be ideal because that’s slower). All of science, engineering, etc. will be solved or reach their limits, possibly in the first few years of general AI. That might sound extreme, but in modern times we have 1000x as many people as we did until like 10,000 years ago. A millennium of thought per year. Another 1000x multiplier isn’t unreasonable, especially compared to 2.7 trillion years per day. Except this time progress will happen in days, not a lifetime and not a thousand lifetimes. We better be as ready as we can be. And probably have a completely separate society on the other side of the galaxy, out of communication range, because otherwise we’ll probably go extinct.
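And a similar sketch for the population comparison. The ~8 million (for most of prehistory) and ~8 billion (today) figures are my own rough illustrative numbers, not anything precise.

```python
# Rough check of the population comparison. The ~8 million (most of prehistory)
# and ~8 billion (today) population figures are illustrative assumptions.
ancient_population = 8_000_000
modern_population = 8_000_000_000

multiplier = modern_population / ancient_population
print(f"population multiplier: about {multiplier:,.0f}x")

# 1000x the people means each calendar year now holds roughly a thousand
# "old-population years" of human thought: a millennium of thought per year.
# Another 1000x jump would be a million years of thought per year, which is
# still tiny next to the ~2.7 trillion years *per day* estimated above.
next_jump_years_per_year = multiplier * 1_000
ai_years_per_day = 2.7e12
print(f"another 1000x jump: {next_jump_years_per_year:,.0f} years of thought per year")
print(f"AI estimate:        {ai_years_per_day:,.0f} years of thought per day")
```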

Do you see why I don’t care much about complex morals compared to the world’s political state? If we don’t have a peaceful world, we’re dead. If we don’t have equality, most people are dead. If we don’t live spread across space, and we don’t have extreme surveillance, like on a molecular level, we’re dead. (I wouldn’t mind surveillance by something like deep learning but we better have fancy math to keep it limited to that.)

Complex morals are a distraction if they’re the main focus. Numenta is transparent and connected to science, which would help reduce the harm caused by countries clashing over general AI. The economic benefits from solving all of science are massive, so if there’s collaboration, everyone will join or get left behind. If all countries get general AI at the same time, that will reduce the odds of our extinction. Although maybe not by much.

Things look grim, but AI can’t explain the Fermi paradox (if the universe is so big and old, where are all the aliens?). Odds are some of the AI would spread across space (it takes only like a million years to colonize the galaxy), but we don’t see that. So while we’re worrying about AI, we better figure out the Fermi paradox too, because we might need AI to have a shot at surviving. Unless inflation theory is correct, in which case we’re the first species capable of space travel. (Because there’d be like 10^10^70 times more universes each second, almost all species are the first of their kind.) We probably won’t need AI to survive the filter(s) between non-life and galaxy colonization. So I think it’s more likely AI will kill everyone, not by robot wars or superintelligent gods, but by solving all of science in a fraction of a lifetime, releasing who knows what. Probably things on the level of black holes and super plagues, but less flashy. If we don’t get ready, we better hope science has low limits.
