Politics in AI Research

Considering how many of the people on that list have been fired, harassed, or threatened, maybe it’s not the nicotine-company-shill situation you make it out to be. Then again, I don’t think you want to be told about sources so much as to be the source yourself, or else you’d have done some reading and seen the personal toll the work has taken on many of the people on that list.

I think what you want is more of a philosophical discussion than one that actually takes the cultures and sub-cultures of users into account, and that’s of limited use (not useless, but when making tools for people, it’s best to deal with people even more than with abstract concepts about people).

2 Likes

Wow, all those people skills have turned you into a mind reader. You are literally one of the workers on the cigarette production line, so keep smoking what they’re selling you :slight_smile: I agree with you that this thread is a waste of time.

Are they on Twitter?

1 Like

I do think the thread is worthwhile, especially since there’s the possibility that someone who reads it comes across a viewpoint or bit of knowledge that changes how they’re doing something for the better. As for me, I’m a game dev dabbling in AI. I don’t use ML/DL, and I do my best to avoid filter bubbles and smart devices in favor of solutions more centered on the user without being tethered to the cloud (not that much of that exists, but that’s part of what my game AI rabbit hole has led me here for).

The only reason why you’re sparring with me is because I offered a list of people doing AI ethics work, and you dismissed them out of hand, without even looking at their work, because they don’t hold a degree you think is worthwhile. You’ve also made a number of statements about me that are similarly out of hand, and I see that pattern in your posts with others as well.

Simply put, you seem arrogant, and I think this back and forth is useless, so I’m out.

If there are others in this thread who’d like to talk ethics or concepts for AI that do less harm, I’d love to talk as well as listen.

1 Like

Ok, I’ll keep that line as is.

So I’ll remove this line. Sounds like they can just keep their current political goals, no need to update them after acquiring a deep understanding of ethics.

Ok, I’ll remove that line. They don’t need to broadcast whatever their political goals are.

Fine, this line can go then.

So scratch that line too.

Unfortunately that just leaves me with:

Is it really correct to summarize your point as, “Get yourself up to date on ethics, and everything else will become obvious”? That seems a rather unsatisfactory answer to the question that a lot of us are trying to distill out of the discussion: “what, from your perspective, should an organization doing AI research like Numenta be doing differently?”

I think more likely, I have just missed your point entirely, so sorry about that. Anyway, I won’t press you further since you feel the topic has become a waste of time. I’m sure we all have a lot more interesting topics to discuss :slight_smile:

2 Likes

The sad trifecta: offensive, unhelpful and wrong. I see no point in pursuing this.

The point is they would be deconstructing what they currently have and constructing something new. That does not have to be a public process.

A prescriptive morality where I would tell you or Numenta what to do is not of value to you or Numenta. Obviously I am not a moral expert, but even if I were, that would not, imo, be a good solution.

What people want, because they’ve been taught to want it, are easy answers. Ethics is not easy. The discussion itself is more valuable than the piece of paper stating conclusions.

Remember, this topic comes from a criticism of a criticism of a book that focuses largely on ethical questions without demonstrating the research that topic deserves. I have not checked the bibliography, but I guess there are very few (if any) references to works on ethics. Should we believe the topic doesn’t need to be studied before making bold recommendations?

You want to turn it into having me come up with a plan for Numenta, but that is not the topic.

Many people don’t think there is something to think about. That is a problem.

I’m not trying to tell you what to think, I’m trying to tell you to think. From my perspective, buying into the moral conclusions of TBT would be a bad idea. That does not mean I have some pre-wrapped solution ready and waiting for my new role as the arrogant world dictator :wink:

1 Like

Considering the length of this thread, even subtracting miscommunication and bickering, people are very much thinking about ethics. I don’t think anyone’s just here to win an argument, especially not the person you replied to.

No need to prescribe. Suggest.

Before I give my opinions (in replies to this quote and others), don’t mistake my well-reasoned contextual opposition to philosophy for part of some forum hive mind against you.

You seem to think straightforward answers are easy. Well, they’re not, not if they’re good ones. One step in objective logic is much harder than a dozen steps in a discussion. Besides, it really doesn’t matter whether something is easy or hard, unless you think people want easy answers to avoid responsibility. I can’t speak for others, but I sure don’t. I don’t make AI.

I haven’t finished the book yet, so maybe my comment on this doesn’t make sense. Keep in mind it’s probably addressing the ethics of general AI, not modern AI.

From what I’ve seen, the book argues intelligence can be amoral, which is a good thing compared to having human desires. I think amoral intelligence means it doesn’t have any desires, or those desires don’t change the world beyond e.g. moving a camera (and not moving it in a way which somehow manipulates people or hacks the internet or whatnot). I agree with that to the extent it provides a solution to coding AI ethics: don’t make it something which requires coding ethics in the first place. I don’t agree that’s a real solution (e.g. governments will still build paperclip maximizers and science will progress absurdly quickly), but planning how to code AI ethics wouldn’t be a solution either.

I really gotta go read the rest of that book. Will do.

If anyone thinks this, please say so.

The problem is you haven’t given anything to think about. “I can’t think of anything particularly relevant, go do your own reading about a vast topic, any starting point will do” isn’t gonna get people motivated to learn about philosophy. It’s just gonna reinforce how vague and highfalutin philosophy looks.

That’s not motivated by “heh, humanities folk, they’re always going on about some English class essay idea.” It’s because this is an objective topic, and an esoteric one, especially in the case of HTM. I don’t care about what school of thought people fit in. If you can’t tell people what’s objectively wrong with what they think, dismissing a school of thought holds little water.

I took a philosophy class and loved it. It’s thought in slow motion. The professor gave all 300-ish students chocolate to help us think (ever seen the original Death Note?). That Sherlock attitude is great for thinking creatively, but at the end of the day, you gotta check yourself or it’s just arrogance (or a purely creative project). Most ideas are wrong, or don’t ask realistic questions, leaving you to pick whatever sounds coolest or most convenient.

The problem with philosophy is that it’s very hard to get feedback from reality on whether it’s right or wrong. Most of the feedback is just whether other people think your ideas are cool or convenient. For example, the trolley problem is a big deal in philosophy. Do you flip a switch to save two people while killing one? When we discussed it in my philosophy class, it was about a 50/50 split. But when it was tested in a YouTube video, most people flipped it (I think everyone, but I’m not sure. Also it was just a Vsauce video; I don’t know if there are studies, seeing as it nearly caused someone PTSD). There’s a big difference between believing it’s best to do no harm and watching two people about to die, knowing that if you don’t kill someone, you are responsible for two people dying. That’s what I think true ethics/morals are. They aren’t fanciful ideas about what’s right in theory. They’re revealed when people actually feel things (but only when people are selfless and fully aware).

That makes philosophical ethics dangerous for something as slow as inventing AI. The people building the atom bomb or inventing computers didn’t need to learn about philosophy. They needed to take responsibility for what their inventions might do, and be ready to throw away their life’s work or make drastic changes if they came to fear they’d cause great harm. There are no guarantees, so don’t take comfort in fantasy, because fantasy only offers false guarantees.

That’s not to say philosophical ethics are useless. They’re good to know when there are grey areas. They just can’t take priority over what people feel, expect they will feel, or would feel if they had full awareness. Instead, they should inform what one expects to feel, or would feel if selfless and fully aware.

I don’t see any grey areas with Numenta. The questions are just what’s gonna happen, not whether each possibility is good or bad. Those are the questions to identify, create multiple answers for each question, and plan for the possibilities.

I also think anyone working on AI should prepare to make objective decisions based on ethics, because such a decision will be very painful, one you’ll have imagined making differently hundreds of times over the years. One thing which may help is acknowledging that day-to-day work on AI isn’t motivated by changing the world; it’s just fun and a beautiful topic, and it’s okay to drop it because you weren’t trying to be a hero in the first place. Science doesn’t make the world better through heroes, it does so through people loving what they work on, except when it doesn’t. But there’s no way to predict that from the start.

1 Like

They are not going to say it. They are not going to think that they are not thinking.

You are quoting something I did not write. I have given explicit references to key concepts, a framework for approaching philosophy, and a link to a philosopher. You have chosen to ignore that and not to think, but instead to write your predictable and ill-informed opinion.

I am sure you have no idea what your own school of thought is. Where are the ideas you are parroting coming from? What assumptions are they built on? Why were those assumptions made and not others? What happened in cultures where other assumptions were made?

I guess (and very much hope) that you are young. You have been duped about your education. There is plenty of time to fix that but don’t waste your time trying to stop others from actually doing some thinking.

This is not even close to a very, very, very basic understanding of what the word ethics means. It does not mean morality. At least read the wiki page for god’s sake.

Oh, and there I was thinking you did not know about ethics. It turns out you have the definitive universal answer.

Well said. Thoughtful, but I think you overstate the role of ‘anyone working on AI’. Any sufficiently powerful technology can be used to do great harm, and it is those in power who control how it will be used. The guys at Los Alamos had no say in Hiroshima or Mutually Assured Destruction.

The decisions on how AI is used should be a matter for public discussion and ultimately politics, and not dictated by commerce, surveillance or warfare. Beyond that, it’s hard to see where philosophy fits in. But I’m happy to be educated.

In 2018, The Economist ran this series of articles on Liberalism’s greatest thinkers.

https://drive.google.com/drive/folders/1CxUuWHbYs95BZ7LPcMXYVZqCECfLMrXt?usp=sharing

An example of a good conversation about AI & ethics: Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI - Future of Life Institute

Another good conversation, perhaps easier to listen to for some: https://complexity.simplecast.com/episodes/21

I think the inventors of something have a lot of responsibility for what it does. They need to drop out if it’s gonna do harm, or mitigate that harm. In the case of HTM, no one else is going to make HTM anytime soon, but they also have ways to mitigate the harm.

Maybe my country was in the right to make the atom bomb because other countries would eventually create them too. Given the possibility of the Nazis making them, I think it was justified.

What more clearly wasn’t justified is how the bombs were used. The scientists knew they existed, and they could’ve revealed their existence to put public pressure on the government. The scientists certainly thought about ways the bombs could be used, and they could’ve told everyone how they thought they should be used.

We didn’t have to kill children.

We could’ve warned cities we were gonna nuke. That was an option on the table. I forget why it was rejected, but I’m guessing it was to protect bombers. I’m sure there was a way around that, like telling every single city to evacuate and sending decoys.

We could’ve dropped them nearby, letting a large chunk of the population see firsthand what the bombs were.

We could’ve deployed them against non-civilian targets, for example an aircraft carrier fleet.

As far as I’m concerned, the scientists, engineers, etc. who worked on the bomb are all responsible for killing children. I would hope everyone would try to avoid that. I looked it up real quick and there were many leaks, so I guess some tried at least.

I agree, but some countries’ governments don’t listen to citizens, which may force all governments to use it for warfare. I think AI should be part of commerce, not because that’d be good in and of itself, but because commerce and warfare are incompatible. The public / politics should decide how AI is part of commerce, though.

I don’t see where philosophy fits in either. Ethicists probably do a lot of good for modern AI, once you get down to the details like self-driving cars. Maybe they’ll do the same for general AI.

Workers have no such power. Right now there are armies of highly skilled developers working on creating malware for criminal purposes. They know it does harm, but they have families to feed.

All responsibility for the use of technology to do good or harm rests with those who have power. There are no moral, ethical or philosophical issues, there is only politics and power. It is up to us to choose leaders who make decisions in our best interests and use their power to our benefit. Others may not have that choice.

The only role I see for ethics (philosophy) is as a way for politicians to persuade the people that the decision they made was the right one.

Too cynical? As before, if I’m wrong and I’ve overlooked something important, I’m here to learn.

1 Like

I agree with the second sentence, but I think many workers have power, or at least the financial wellbeing to have a choice. But yeah, workers who need to feed their families may not have a choice. If it rises to the level of killing people, and they wouldn’t literally starve by dropping it, then I think some of the blame still falls on them.

Yeah I honestly don’t see philosophical ethics being too important, especially compared to practical matters. The only thing I can think of is how a self-driving car chooses who to kill or whatnot.

That is comical.

I have thoughts on this, but you seem to be trying to extend this thread as long as possible, not have a constructive conversation. You’re either a very angry person or messing with me. Either way I shouldn’t reply to you anymore if you don’t show I’m wrong.

1 Like

I am closing this topic as the conversation no longer appears to be productive.
I will remind forum members to be polite and respectful in forum discussions.

1 Like