Politics in AI Research

Clear does not mean public. Hiring is not a public event. I mentioned that a public political statement might have advantages; that is very different from claiming that organisations have to do that. An educated prospective employee will ask the question, or may infer the answer and not go to the interview.

Yes, but that does not mean your or my moral political goals. I’m sure the American empire is full of people busily doing their best to serve God and country.

Technically correct, the best kind of correct. I know the scope is bigger (there are morals in how it’s used, or in what releasing it into the world could do, even if the AI doesn’t have morals of its own). Do you agree intelligence need not have agency? Do you understand how that means intelligence can be amoral, if you’re only talking about the intelligence in and of itself?

AI without agency is different from AI with agency. For one of them, you’ve got to make the AI have morals, like literally code morals in or whatever. Both of them have moral implications like any technology (though one difference being that AI can cause our extinction).

No I don’t; it’s useful to break problems into pieces. I also think about the broader scope. Here’s what I already wrote that I think will happen: AI will drive us extinct, more likely than not. Got it? And just so you know, I know it’ll (and already does) wreak havoc in other ways before that.
My definition does not matter. I was simply trying to explain why something you claimed was wrong actually isn’t. It seemed to use the same definition I did. If so, you misunderstood what it was saying. You need to read it with its definitions; you can’t just force your definitions on it and then call it wrong.

Ok, so anything goes and we just let the evolutionary process play itself out. I think I understand your perspective now, but give me a while to think it over before I try to re-summarize it. Full disclosure, I do not agree with you at this point, I’m just trying to understand what specific actions you are proposing need to be taken by organizations which are researching AI.

Again you keep reading more into statements than is there. Where did I say anything goes? It just means that it might be you and me who are wrong. It does not mean anything goes.

Who or what decides the boundary between what is an acceptable position and what is not?

Certainly machine intelligence does not need agency. That does not mean it is amoral; that is where you do not understand ethics well enough to see the problem, and I do not understand ethics well enough to spell it out for you in a single comment.

Probably the only hope here is that you realize you might be wrong in your understanding of what morality means in contemporary philosophy. Your attachment to agency is a hangover from Cartesian dualism, something that is no longer central in most contemporary philosophy (since at least the mid-20th century).

Okay so a saw isn’t amoral.

Do you or do you not still think this:

And don’t force the thing you’re responding to to use your own definitions.

That is a long conversation and I think you would get to a point by studying ethics where you would not frame the question in that way. The question is full of assumptions that would need to be deconstructed. This is why I think it needs ethicists and not engineers to come up with a good answer - and even then the result will probably be unsatisfactory. It is a very hard problem - easily as hard as the AI problem.


That is one of the reasons it is not a good position. It is not the only reason it is not a good position.

There is a fundamental difference between a “saw” and an AI. The AI needs to learn or be trained and is intended to interact in a social context that will have moral consequences.

You do not have an intelligent machine until it has been trained or has learned. You cannot train it in an amoral way if it is going to do anything useful in society.

Okay, so AI must have agency. It must make decisions and have behaviors (e.g. move arm left) directed to change the world’s state.
Unless you count something like “hey there’s a black hole in this data” as interacting in a social context.

I’m not excluding AI like that. Some, perhaps most, will interact in a social context. Keep in mind the thing I asked you whether you still think is wrong, using its own definitions.

The same way a saw needs to be safe. The only difference being that, for some things, the data chosen for learning can cause biases with moral implications (or, e.g., not caring about running someone over). That still seems amoral, because you’re just using it immorally if you allow it to gain those biases (or stupidly executed behaviours which kill people), or if it’s built in a way which produces those biases or behaviours. An unsafe saw.

No, you can train an AI without it having agency; instead it participates in an agent. Something like deciding what you see on your newsfeed can be trained.
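As a rough illustration (a minimal sketch, not anyone’s actual system; scikit-learn and the made-up engagement features are assumptions for the example), the trainable piece can be a pure scoring function, while the surrounding product is the thing that acts on the scores:

```python
# Minimal sketch: a model that only scores newsfeed items. It has no goals and
# takes no actions; the surrounding product is the "agent" that decides what
# to actually show. Features and data here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [post_age (scaled), author_followed, past_clicks_on_topic]
X = rng.random((1000, 3))
y = (X[:, 1] + X[:, 2] > 1.0).astype(int)  # stand-in label for "user engaged"

ranker = LogisticRegression().fit(X, y)

# At serving time the model just emits scores; selecting and displaying posts
# (the part with moral consequences) happens outside the model.
candidates = rng.random((5, 3))
scores = ranker.predict_proba(candidates)[:, 1]
print(np.argsort(scores)[::-1])  # ranking, highest predicted engagement first
```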

This is just not well thought through. If you can’t see the difference between AI and a saw, then I feel safe in regards to any AI work you are doing :slight_smile:

I figured that was the answer after I posted the question. Simplistically (warning: including my usual humorous hyperbole), anything under the umbrella of ethics (which is apparently broad enough to include things like imperialism and willful environmental destruction) would be acceptable, but things outside of that umbrella would not be. :grin:

“Hey, there’s a black hole in this data” does not require an agent but is produced by intelligence. It doesn’t even need to tell you that, if you classify its perceptual states.
By “not require an agent” I mean not require it to have goals which modify the world. If it lacks those, I’d say it’s not an agent. Maybe agency requires more; it doesn’t matter here.
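To make that concrete (a minimal sketch on synthetic data, assuming scikit-learn; nothing here is anyone’s real pipeline), a detector can flag the oddity in its perceptual states without having any goals that modify the world:

```python
# Minimal sketch: a detector that only reports "something odd is in this data".
# It changes nothing in the world; it just classifies what it "sees".
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
readings = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
readings[:5] += 8.0  # a few wildly unusual points: the "black hole" in the data

detector = IsolationForest(random_state=1).fit(readings)
flags = detector.predict(readings)  # -1 = anomaly, 1 = normal

print("anomalous rows:", np.where(flags == -1)[0][:10])
```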

Alright, we can drop it if you have no response which, ya know, conveys information.

Gonna parachute in and say a few things here:

  • Can we not do the philosophy name-drop thing? Talking in such high terms doesn’t actually help; maybe in the AI Ethics papers, where there’s context and a more focused point, but not in general discussion. If it helps, everyone has a feeling for how the world should work, and a theory of mind of others’ feelings in that regard as well (if you don’t, please don’t work with anything that uses chemistry or electricity). We can talk in layman’s terms about ethics easily enough via politics (which is what politics is for).

  • For example, to the above: Do you want AI to get you shot? Of course not. However, a man in Chicago was shot twice because he was being followed so much by the police that drug dealers thought he was an informant and tried to kill him twice. Why was he being followed? The police are using a system that classifies people as likely to be involved in crime, and it flagged this guy for some reason likely not explainable outside of a black-box algorithm. So: part of how to fix this is that it should not have been done in the first place. And that’s not only because AI is not going to get this sort of thing correct (not until you get to some kind of human-level social/cultural/emotional AI that can grok humans at those levels, not via some graph as we do now). It’s also because the police are trying to automate away their own cognitive efforts in policing areas. Instead of beat cops and trying other ways to engage with a city that distrusts them, they’ve opted for a tool that classifies people without their knowledge and without a way to dispute it.

  • Concrete things to do: check your datasets against the ethics papers (see the dataset-audit sketch at the end of this post). Some of these image datasets are imbalanced, have non-consensual photos in them, contain images you’re not training on, or are sourced via problematic methods (Clearview AI’s scraping of the internet to build a worldwide facial recognition engine, for example). Algorithms need to be advanced and combined with GOFAI rule-based methods when needed. Yes, this is done a lot, but when it’s not, when “end-to-end” is treated as a goal instead of addressing an issue as best as it can be done, then you run the risk of using a technique for the sake of it instead of using the best tool for the job, and ML/DL/HTM isn’t always that best tool.

  • Sometimes, you need to walk away when you know it’s wrong. I almost did that this year on a contract, until I reviewed the practices and discussed the issues with my spouse and came to the conclusion that it’s not doing the harm I thought it would at first glance. It’s not “wokeness”, but not wanting to do harm. It’s also not working against your own society, or pacifism; we indeed need to advance AI “here” in the States in response to other countries, but we can do better than just advancing the way others are advancing.

  • The best inventions come from having heavy constraints, not from blank checks on what to build. Sometimes I read these threads and see what kind of amounts to survivorship bias: everyone is pointing at the planes coming back and wanting to protect the areas hit by bullets, rather than seeing that the returning planes have no bullet holes in the other areas because the planes hit there don’t come back. Talking about censorship of social media feeds on social media is kind of funny. But you don’t hear from the people getting killed due to misinfo on social media feeds because… well, you know.

  • What survivorship bias looks like within AI: “If you’re not doing anything wrong, you don’t need to worry” vs the guy shot twice because some classifier flagged him and made him a target for both cops and drug dealers. “We’re going to make the flight boarding process easier” vs people getting chucked into the concrete room because the dataset confused them with some terrorist. “I’m being censored on social media,” I type on Twitter for my thousands of followers to retweet, vs some poor bastard getting hacked to death in another country because a retweet of some offense that never happened was never censored, but rather trended via hashtag or was pushed by a YouTube engagement algorithm.

I think it’s useful to read the articles about how AI fails, wrongs people, and breaks, and make a mental list, the way you watch other people in your life and encounters doing wrong or stupid stuff and make a note in your head not to do things like that. Instead of worrying about terminators and paper-clip maximizers, worry about the stalker using an AI tool to murder their ex. Worry about some populist pushing genocidal messages somewhere. Worry first about who can’t use your tool at all. Worry about who MIGHT use your tool, and what that means, and whether there’s a way to put friction into that use. Worry about what cognitive tools we take away in making AI tools.
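Since the dataset point above is the most directly actionable one, here is the audit idea as code (a minimal sketch; the `data/faces` directory layout, the label-as-folder convention, and the 10x threshold are all made-up assumptions for illustration):

```python
# Minimal sketch: audit an image dataset's class balance before training.
# Assumes a hypothetical layout of data/faces/<label>/<image>.jpg.
from collections import Counter
from pathlib import Path

DATASET_DIR = Path("data/faces")

counts = Counter(p.parent.name for p in DATASET_DIR.rglob("*.jpg"))
total = sum(counts.values())

for label, n in counts.most_common():
    print(f"{label:>20}: {n:6d}  ({n / total:.1%})")

# A heavily skewed distribution here is exactly the kind of imbalance the
# ethics papers warn about; fix the data (or the sampling) before training.
if counts and max(counts.values()) > 10 * min(counts.values()):
    print("WARNING: worst class imbalance exceeds 10x")
```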


Lol wut? Is there not some deterministic way to convey information, like with words? Do philosophers not write things down? Is it not possible to teach “modern ethics” to a person who has lived too long in this world? I should just get reincarnated so I can get myself a “proper” education?

I think you’re just salty, perhaps because you spent all of your time thinking about obscure philosophy which is largely irrelevant to the world?


I’m guessing you are making the common (i.e. not embarrassing) mistake of assuming that if there is no universal moral code then we must be left with relativism. As you might imagine, you are not the first person to raise that concern. Perhaps more surprisingly, the debate moved on from that by the early 20th century :wink: In conclusion, your worries about relativism are a way of securing your attachment to whatever your preferred foundation is. Studying ethics could free you of that condition :slight_smile:

Of course it is. But they need to make some effort. It will not be learnt through a one-comment summary or a single tweet. Or maybe I have the secret: if you give me the one-tweet answer to AGI, I will reply with the one-tweet answer to your moral dilemma.

Yes that must be it. You are fine, you made the right choices. You can ignore this thread knowing that you are not missing anything.

There is a very difficult problem that would need to be solved before I would ever consider recommending a societal jump onto that train (not saying it hasn’t been solved, I just haven’t seen that solution yet). The problem is filling one of the evolutionary purposes of religion, which is the belief that my actions are consequential to me even after I am dead. Without that control in place, people at large are free to behave as though their actions are only consequential up to the point they die, and as a result they will likely act quite selfishly from a long-term perspective, having a broadly negative impact from a lineage-survival perspective.

In general I like what you posted. For the “cogs” in the industry, what you are suggesting is probably best practice. I think there are also ethical issues that AI raises that need serious work, and that work is, in general, undervalued. Technologists are notorious for avoiding their responsibility and simply following moral norms, rather than actually developing new moral norms when needed. When playing with some technologies (like AI) we may not have the luxury of learning from mistakes. AI engineers are now nearly all aware of the racial bias in their image data, but only because it became a public problem and society pressured them into caring about it. They released that technology without any thought to the implications. What I see, in general, is 18th-century morality being tested by 21st-century technology, and it is not headed in a good direction.

Or haven’t looked for the solution yet? Nietzsche killed god in 1882 :wink: