Politics in AI Research

I don’t think people need to believe their actions will affect them after death in order to care about what happens after they die. Religion helps with that, though. We’re a tribal species, so part of our survival instinct is for the tribe. Family definitely, but I think it extends to everyone somewhat, depending on who each person considers their tribe. Our tribes are probably getting larger as social media connects the world. Maybe social media is the replacement for religion.


I have, and I have not found any satisfactory alternative. If you truly believe there is nothingness for you after your death (you become as though you never even existed in the first place), that all other humans await the same fate, and that the universe itself is fated to end, where do you find any comparable incentive to take its place? The alternatives seem to be heavier-handed government control, which of course causes many other negative outcomes.

People who believe in religions with concepts of eternity, reincarnation, etc. do. Atheism is much more widespread than perhaps ever before, but I would argue (perhaps that warrants a separate thread) that some of the selfish impulses we see in modern society (environmental destruction, lack of respect for basic human dignity, etc.) are an unintended consequence of people losing the belief that one’s actions in life affect them personally after they die.

One option would be to move beyond a self-centric philosophy. Take Kegan’s stages of development, for example: the later stages, which our culture is not really fostering, seem to carry an inherent solution to that concern.

Sorry, Kegan’s stages of development do not address the problem with nearly the same potency as a religious belief in eternity. @Casey does make a great point about some other redundant instincts we have to fall back on, but they again do not hold the same weight. There is a reason religion has been such a central feature of humanity for as far back as we can observe, and replacing the purpose it fills from an evolutionary perspective is not as simple as it sounds.

I’m more optimistic about modern selfishness. I think most people have no incentive to harm others, because most people live such comfortable lives compared to before that they have more to lose. The problems aren’t so much between people. When people are selfish, it’s mostly indirect, e.g. consumerism, government actions, or environmental change. Those can be solved one at a time. People aren’t going to give up their hedonistic lifestyle, but I don’t think that’s necessary, depending on how technology develops. For example, if we get fusion we can do carbon capture, and solar is quickly becoming cheaper. To some degree, I think we might need a replacement for religion which emphasizes those large-scale indirect forms of greed rather than interpersonal greed.


Unless you have explored those paradigms, I don’t think you could know that. It is not like you read the Wiki and suddenly you’re over god :slight_smile: If you put in the work, then by the time those later stages arrive, you would not be the same person, not have the same fears, and not need the same religion.

By the way, I love the Freudian slip “same impotence” :slight_smile:


LOL, nice :rofl:

Again, I am approaching this from a societal perspective, not a personal one. A broadly held belief in eternity is quite effective at the level of societies. If the alternative requires the majority of people in society to travel some deep philosophical journey, that is going to be much more difficult to enact in reality.

It is not that it requires a deep philosophical journey. For example, in the 18th century, you would have needed to be a philosopher and dedicate significant effort to develop the self-paradigm we take for granted today. It is more a question of having the social institutions in place that allow for that development. Of course, in a god-fearing nation like the USA it will not happen any time soon. So you are right that at the moment it does require individual effort. However, at the scale of society it is a viable alternative, and I don’t think we have much choice. Either we will shift to a more sophisticated paradigm or we will destroy our “selves”.


Um, on the whole religious issue and its prevalence in most (all?) human cultures …

As long as humans are ruled by the subcortical structures, and as long as genetics keeps stamping our innate behaviors as herd critters with instincts to both lead and follow, we will continue to accept that “those guys” are our leaders and we shall follow. That is how our brains are wired, and there is very little we can do to override that.

BTW: This is the underlying mechanism of religious beliefs. You don’t get a bigger leader than god, and we are all programmed to be followers of what we perceive as a valid leader.

Yes, belief in an afterlife is often identified as a reason to do good, and fear of eternal punishment may be part of the moral calculation of some people, but that is not a great reason to believe in fairy tales. If you are going to build an ethical or moral code, then base it on reality.

‘If it can be destroyed by the truth, it deserves to be destroyed by the truth’ - P.C. Hodgell, Seeker’s Mask, Kirien


I was just pointing out that there is an evolutionary purpose being filled by religion (beyond a simple love of fairy tales or follow-the-leader instincts), and an alternative for that purpose is needed before removing the “fairy tales” or (I believe) you start to see broadly negative selfish behaviors emerge in society.


Fortnite memes. [great now I get a million fortnite ads on youtube]

If I’m giving a serious response: boredom, love, and pride. Also, I might as well be a different person in 5 years. For all I know I won’t even exist, because there’s no way I’d know if I were a new entity every 5 years, with the same memories and such. Heck, sleep might even be death and someone else wakes up, or general anesthesia (maybe I was born a few months ago when I got my wisdom teeth out, because your brain stops working, interrupting the stream of consciousness which, if you’re an atheist, is all that distinguishes you from a duplicate, because atoms aren’t special).

I preferred to do general anesthesia because that’s what they recommended, even though I thought there was like a 10% chance someone else wakes up. That’s because there would be a duplicate of myself.

To me the sad thing about death isn’t that I won’t experience what’s after it, but rather that it’s an ending. I was dead until I was born, and that wasn’t all that bad. Nor was being under anesthesia. It’s just a time skip. Before and after I die, it’s just a time skip. The one after life goes on forever, but so did the one before, for all I know. If I acted like the future doesn’t matter if I’m not there, that’d just be sad. I don’t want to fully acknowledge the void. I don’t care if I won’t be around in a quadrillion years when all stars are gone and black holes provide a fraction of a watt powering simulated realities in slow motion lasting 10^some crazy number years. That’s still fun to think about, so if I could make that more likely to happen I would enjoy doing so. Likewise, if I knew my actions would harm the world in the future, that’d be sad.


For the serious work, I recommend following people such as Abeba Birhane, Timnit Gebru, Meredith Whittaker, Gary Marcus, Joy Buolamwini, Joanna Bryson, and others. You’ll know how good their work is by the amount of flak and harassment some of them get in response.

For philosophy on how to use AI, I usually fall back on sci-fi-like attitudes about droids in Star Wars, or how Asgardian “magic” is just really intuitive tech in the Marvel universe (and how Tony Stark, as the tech bro, continually screwed up on some major decisions). Also, cultural uses of magic, familiars, and companion animals/robots, and things like grimoires instead of the current crop of “smart” tech (because siloed AI is safer, and cloud-based AI will always generalize peoples’ individual traits out and try to get them to act in that generalized manner, so siloed and customizable AI is truly going to be the stand-out tech for those who have it, leaving all the “bubbled” people behind if the cloud is left unchecked).

Edit: All of this is a basis for how people treat and anthropomorphize things, not that AI is actually magic.

Also been reading a ton of navigation/wayfinding stuff lately, and am convinced that NLP needs embodiment (and that game AI has a chance to reasonably fake the funk in more domains than its own if some concepts like sparsity and grid cells were integrated into some of its algorithms, without even touching on ML/DL).


A potential issue is the lack of diversity of perspective in that group:

Abeba Birhane: cognitive science
Timnit Gebru: artificial intelligence
Meredith Whittaker: degree in rhetoric
Gary Marcus: cognitive science
Joy Buolamwini: computer science
Joanna Bryson: artificial intelligence

Of course, they are all more diverse than a single description of their studies. But apart from Meredith Whittaker, I would guess there is a bias toward a traditional modern view of morality. I would also like to see people who have a humanities background and have specialized in ethics. They can then pick up the particulars about AI rather than having it as a central theme.

Modern individualism has overrun the natural sciences. The human sciences need to play a bigger role: philosophers, historians, sociologists, anthropologists, etc.

Someone like Bernard Stiegler has a more contemporary take on the philosophy of technology. Solutions are far more likely to be found by those who are not conditioned by an education as a technologist and/or in the natural sciences (they have been instrumental in creating the problems).


I gave you a list of people who have perspectives from the female gender, as well as African and African-American perspectives, which deviates from the traditional viewpoints of western white men, and you’re going on about degrees?

Let me put it to you, and others that come across this thread, this way: stop talking about degrees and philosophy, and start listening to people. The first on your list brings an African perspective to her work, and takes a terrible amount of racist flak for doing so. The second has done a huge amount of work on algorithmic and dataset bias, and was fired from Google at the beginning of the year in a row so big it made national headlines. The third was also fired from Google, though I believe it was over tech labor practices (I might be wrong there). If you think Gary Marcus is a run-of-the-mill AI person, then you haven’t pulled out the popcorn for his debates with many famous AI people. Joy founded the Algorithmic Justice League, which probably tells you just a bit about what she’s up to, and Joanna Bryson talks about AI, regulations, and politics regularly.

I led you to water. Drink or don’t.


Also, another bullet point that came to me while I was in the car this morning:

  • When I make tools, I just love to expose settings to my users. So much so that when I was working on an NPC conversational AI system some years back (which is how I found Jeff’s work again, and solved a question via sparse representations improving GOFAI, but that’s neither here nor there), part of the system was personalities. While none of the models out there really capture humans’ personality dimensions, I took the 16PF, modified it, and parameterized it so you could customize your NPC personalities. It’s a ton of knobs and buttons, but it gives the user control and access to creativity I may not see.

Long story short, I’d rather expose the settings and give tooltip explanations of what stuff does instead of black-boxing things. It adds some complexity on your side (which is in our job description, honestly), but gives users a lot more power and control. And that control, when it comes with informing the user rather than hiding the settings, results in better use of the AI.
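To make that concrete, here is a minimal sketch of what a parameterized NPC personality with exposed, tooltipped settings could look like. The trait names, ranges, and `trait` helper are hypothetical illustrations, not the actual 16PF-derived system described above:

```python
from dataclasses import dataclass, field, fields

def trait(default: float, tooltip: str):
    """Declare one personality knob with a default value and tooltip text."""
    return field(default=default, metadata={"tooltip": tooltip})

@dataclass
class NPCPersonality:
    # A handful of illustrative 16PF-style factors, each in [0.0, 1.0].
    warmth: float = trait(0.5, "Reserved (0.0) to outgoing (1.0)")
    dominance: float = trait(0.5, "Deferential (0.0) to assertive (1.0)")
    liveliness: float = trait(0.5, "Serious (0.0) to spontaneous (1.0)")
    vigilance: float = trait(0.5, "Trusting (0.0) to suspicious (1.0)")

    def __post_init__(self):
        # Clamp every knob into [0, 1] so out-of-range input can't break the NPC.
        for f in fields(self):
            setattr(self, f.name, min(1.0, max(0.0, getattr(self, f.name))))

    def tooltips(self) -> dict[str, str]:
        """Expose each setting's explanation, e.g. for hover text in an editor."""
        return {f.name: f.metadata["tooltip"] for f in fields(self)}

# A suspicious guard: out-of-range vigilance (1.4) is clamped to 1.0.
guard = NPCPersonality(dominance=0.9, vigilance=1.4)
```

Keeping the tooltip text next to the parameter definition is one way to guarantee the UI explanation never drifts out of sync with the knob it describes.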


You are confusing diversity of philosophy with diversity of identity. Your personally being woke does not make the education system’s influence any weaker.

There are people trying to make a positive impact, many of them with diverse identities. I am not saying those people should stop what they are doing. I am saying you should look to more diverse sources.

The point, that you seem to have missed, is that you are not aware of the biases in the technical education you and others are receiving. When I point that out you get offended, rather than showing some interest in the underlying dynamics.

You have not led me to water. You have shown me that you are drinking from your own hose. You are wrapping yourself in a comfortable moral blanket stitched together from black and brown skins.

That’s quite the spicy take, lol. Identity does inform philosophy, especially when that identity was raised in a non-Western culture, and so has a different philosophy from whatever Western source you like to point out.

I suppose I could have looked more for some Asian perspectives, or ones from South America, the Middle East, or more indigenous ones, but for two things:

  1. You’re looking to trade philosophical arguments, whereas I’m trading in people’s perspectives and cultural viewpoints, which are far more relevant, because AI tools are used by far more people who are not philosophers, and philosophy is not culture.
  2. It’s not my job to read off papers that talk about Indigenous and African perspectives on dealing with AI, or why it’s more important to deal with those than with whatever some philosopher has to say. If you can’t see the difference, and why cultural perspectives should inform more than some philosophical framework, then that is also an ethics problem.

And I’m not offended so much as a bit puzzled by your dismissal on the grounds of degrees earned instead of personal perspectives and non-Western viewpoints, which are indispensable to consider when making tools used worldwide.

The last sentence is interesting, though, and I’m interested in hearing elaboration on that.

Perhaps think of it like this: if you were worried about smoking, would you consult an engineer hired by a nicotine company to explain the moral situation?

Diversity about ethics is certainly related to different cultural paradigms. A technical degree from a technical school at the heart of the American empire does not place someone in a good position to criticize the system that has just given them a very lucrative stock-option plan, and probably a large golden handshake once they made some criticism.

It is not that those people are dismissed; they are, of course, each one voice in a conversation. But if the conversation is about ethics, then get ethicists involved (from diverse philosophical perspectives).

One of the biggest problems in AI is that there is so little independent research into the harmful effects of AI. This is exactly the same routine that the nicotine industry, the pharmaceutical industry, etc. have run in the past. Same playbook, same players, perhaps different colors, all so you will buy the same arguments.


Do you, by any chance, subscribe to critical theory?