Ethics of AI Research

There is a slight problem in this being the first post to create the ‘ethics’ tag - the topic would be worthy of a section somewhere in the forum. Or maybe one exists and I missed it?

To kick off a discussion: you might see concerns raised at https://futureoflife.org/ai-principles that are not well addressed in TBT, or principles from TBT that are missing from https://futureoflife.org/ai-principles.

I’m concerned that Jeff’s conception of amoral intelligence is in conflict with the first point: 1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

3 Likes

One big problem with an intelligent machine’s ethics is that, as long as we don’t have an intelligent machine, there’s no way of figuring out how its ethics should be implemented.

2 Likes

I think it is telling that most of the names associated with the FLI institute have little to no background in neuroscience. I looked through the bios for all of the “Founders”, “Scientific Advisory Board”, “Core Team”, and “Top Donors”; there appears to be a single person with a background in studying the brain: Christof Koch, Chief Scientific Officer, Allen Institute for Brain Science. There are also a couple of “Past Members” with some background in the field.

2 Likes

Really? We are so far from AGI, and we have to think about ethics too?
Here’s one - why don’t we discuss the ethics of inter-dimensional travel too?

My point is just that unless we have seen (or see) a substantial amount of research and capability in a system, it is useless to even comment on something that does not exist (and, as far as we know, may never). Everything has a proper time, and right now is not it - it’s empty talk about something we don’t yet know needs a discussion about ethics.

For example, a few years ago there was little to no research on biases in neural networks. It was only after testing that the need for balanced datasets and bias-reduction methods became apparent.

Again, it’s useless to talk about some aspect of something that does not exist as of right now - because we might not know its full potential.

4 Likes

I am reminded of Asimov’s three laws of robotics.
These were formulated without any real understanding of what is possible; more as a plot device.
I can see so many ways that a robot smart enough to do half of what was in his books would be acting to prevent injury, to the point where humans would not allow them to be around.
Consider the list of self-destructive activities it would have to stop: Smoking? Skateboarding? Eating too-large portions of food? Sitting too much? Watching the wrong programs?
Where do you stop?

1 Like

For me, bias is the biggest worry in what I do.

A few years ago, I trained an image segmentation network to extract people out of images for use further down the pipeline to make various health calculations.

What I was horrified to discover was that while my model did really well with white/pale-skinned people, it would regularly exclude the heads/arms/legs of people with darker skin tones. Since it was more a proof-of-concept project than something headed for production, I was able to flag to my client that we needed a more balanced training dataset or problems like this would continue to occur.
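
For anyone wondering what that kind of check can look like in practice, here is a minimal sketch (not my actual pipeline - the group labels, arrays, and metric choice are all illustrative assumptions). It simply computes mask IoU separately per skin-tone group, so a large gap between groups becomes visible before anything ships:

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union between two boolean person masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter) / float(union) if union else 1.0

def per_group_iou(preds, truths, groups):
    """Mean IoU keyed by a (hypothetical) per-image skin-tone group label."""
    scores = {}
    for p, t, g in zip(preds, truths, groups):
        scores.setdefault(g, []).append(iou(p, t))
    return {g: float(np.mean(v)) for g, v in scores.items()}

# Toy usage; in practice `preds` would come from the segmentation model.
preds  = [np.ones((4, 4), bool), np.zeros((4, 4), bool)]
truths = [np.ones((4, 4), bool), np.ones((4, 4), bool)]
groups = ["lighter", "darker"]
print(per_group_iou(preds, truths, groups))  # e.g. {'lighter': 1.0, 'darker': 0.0}
```

The point isn’t the specific metric; it’s that the evaluation is sliced by subgroup at all, instead of hiding the disparity inside one aggregate score.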

But my main worry was, and continues to be on a daily basis, what happens when people are creating systems and NOT checking for issues like this? Or, when they do see these problems, simply dismiss them outright as “Not my problem.”

I can’t blame inanimate collections of algorithms for the wrong they might do. That would be stupid. But I can and will blame AI engineers who willy-nilly push out biased models without concern for their environmental and societal impact.

Any research that doesn’t consider the broader implications of its applied work shouldn’t be allowed to be published, in my humble opinion. If you can’t state or consider the potential harms of the technology or algorithms you’re creating and ways to mitigate that harm, you probably shouldn’t be making that technology. In my mind, it’s about being responsible entities (individuals, groups, corporations, etc.).

4 Likes

Utility isn’t a moral category. Whether a screwdriver is “beneficial” has no bearing on whether it’s moral.

Broader impact statements read like the copy-paste busywork required in an undergraduate class assignment. “Hm, what are the potential impacts of this new learning rate scheduler, or of fiddling with the dropout rate, or of CNN / transformer variant #16872? I guess I’ll just make something up, or scribble down whatever everyone else is writing.” The intentions may be good, but it’s unnecessary for 99%+ of published AI research.

Yes, you included more dark-skinned people in your training set; kudos.
This stuff is really hard to get right.
In all states of dress in the clothing of all seasons and cultures?
Nudists in all skin colors and poses and angles of filming?
Makeup and tattoos?
Ornamental scarification?
Skin conditions such as Vitiligo?
Freckles?
Physical impairments such as missing body parts?
Face coverings?
Lighting conditions and directions?
Some people hate being misidentified as critters:
Did you also include near humans like apes and monkeys?
Did your training set include near humans in clothing?

In so many applications, the downside does not turn up until you unleash it on large numbers of people.

We try very hard to test our products to make sure that they work as expected in a wide variety of cases. Then our customers show us all the edge cases we never thought of.
“What? You drove a tank on it? It smashed the diamond plate flat?”
“What? You put a 2-ton safe on it and the foot punched right through the platform?”
You should see the stuff that comes into our service department.

2 Likes

I like this thread, though some of the replies treating an ethics take as unnecessary may be missing the actual harms that AI algorithms are causing right now: arrests of people of color due to dataset problems and higher error rates for some groups than others; recommendations of longer prison sentences or denial of parole based on datasets that often boil down to race; and even medical screening based on old, outdated science or hearsay (that some races simply don’t feel pain like others, downgrading the importance of their own reported pain).

Sure, you can use a screwdriver for good and bad, and it has no morals because it’s a chunk of metal and plastic, but the crucial difference is that a screwdriver isn’t giving feedback, whereas AI is. And AI is trusted as if it’s unbiased and has little discernible error. So as AI advances, the need for scrutiny only increases.

6 Likes

I think the solution is very simple: create not undirected humans, but only beneficial humans … or perhaps only nihilistic humans, so they don’t care about evil AI :wink:

I don’t see how you can research and/or build ONLY beneficial intelligence.

Now I can’t stop thinking about creating a community to create only evil AI :smiley:
Perhaps that’s exactly what’s needed: people trying to make evil AI as a sort of white-hat hacking, so others are forced to defend against it by making evil-resistant (beneficial) AI?

1 Like

That is burying one’s head in the sand.

Look who participated in formulating the criteria. FLI is a much broader organisation than just AI. Neuroscientists are not ethicists and are typically not working in AI. Numenta is not doing neuroscience either - they look to the results of neuroscience for inspiration; their experiments are with machine learning algorithms, not brains.

Ethics is already a major concern for AI. The ethical questions have been important in a great deal of AI research - consider how GPT3 is not made available to the general public because of ethical concerns.

Rather than proposing another solution by a fiction author, why not take seriously the work of leading experts in the field of AI who took significant time to collaborate on these questions and share the results?

It certainly seems worrying. There are other aspects that worry me too, for example the use of machine learning in social media, news article generation, individually targeted political marketing, reverse engineering from multiple sources to breach privacy, and the massive monitoring of individuals in the Western population by the largest user of computing power in the world - the NSA…

Contemporary ethics does not see these distinctions as being so independent. Most people who have not studied ethics somehow think they know all about it. Perhaps a similar attitude leads to not studying the brain and trying to develop AI.

This is not a reasonable approach for technologies with huge potential negative impact. For example, the USA dropping nuclear weapons on civilian populations multiple times to understand the short- and long-term impacts of radiation poisoning is not a good solution. Taking ethics seriously and getting ethicists involved would be a first step. Based on this thread, it seems that just acknowledging ethicists exist would be a first step! Many people somehow think that their general knowledge of ethics is up to date - yet they would never believe the general knowledge of the average citizen provides a useful understanding of AI. And, arguably, ethics is much harder than AI.

I agree, and AI systems are already making decisions such as whether someone should be released from jail. There are typically humans still in the loop. But there are already autonomous drones on the market for autonomously killing people. This has already led to things like a school bus being destroyed. To think that people on this forum don’t consider AI ethics an issue is mind-boggling.

Yes, this would, for example, be the intention of most state actors. You don’t need to invent this job; you just need to be smart enough to get hired to do it. Sadly the competition is probably quite tough, so you might need to accept working on beneficial AI.

2 Likes

That generalization is not completely true. I’ve seen several posts where people expressed their worries, or at least reacted to others’ dismissal of the dangers. That said, you’re right that a thread on the subject was probably missing and is duly warranted.

Maybe it would be interesting, in this or another thread, to investigate how Numenta’s approach differs from other AI research, and how the typical arguments for dangers of AI change, or don’t.

1 Like

The definition of a generalization is that it is not completely true! But I was not trying to generalize - it is mind-boggling to me that anyone interested in AI does not consider ethics an important concern. Hopefully those people are a minority on this forum.

It seems to me that studying ethics in order to know what is just, is like studying aesthetics in order to know what is beautiful. I’m skeptical that there can be advances in either field, such that “contemporary” flavors merit special authority or attention.

1 Like

Or just like studying intelligence to know what intelligence is. Clearly we are still living with exactly the same moral code as chimpanzees, and changes in ethical standards are completely imaginary. No doubt you have slaves and own your wife. Of course there is nothing interesting going on in contemporary philosophy; we are fine with the philosophical foundations we have been taught (or, more likely, not taught) - they are only a few hundred years old, so what could possibly go wrong?

IMO, you’ll have better luck discussing ethics from this angle. The problem you have already encountered here is that when the topic of ethics is brought up in the context of AI/ML, the conversation typically winds up in the realm of wildly hypothetical assumptions about the nature of superhuman AGI. We’ve had so many of those philosophical conversations that have led to nothing actionable that people are not easily motivated when the next animated newbie starts yet another thread on the subject.

If you are able to focus the conversation on immediate ethical concerns with current or very near-term technologies (i.e. not AGI), and relate that to HTM, you’ll probably have better luck getting useful feedback from folks here. Otherwise, you’ll only elicit eye-rolling and “here we go again with the Skynet garbage…”

3 Likes

To pick an ethical topic related to HTM in the not-too-distant future: one way HTM’s SMI research could be used unethically, once it is worked out and implemented, would be to leverage its “one-shot learning” capability to create “smart computer viruses”. For example, they might be able to explore infected systems to find novel vulnerabilities on the fly for spreading (reducing some of the tedious technical human effort, and increasing the pool of potential hackers and the damage they could cause).

Considering this potential issue, what could/should be done ahead of time to prepare for it, reduce the potential damage, improve detection of it, etc.?

2 Likes

I hope you see the contradiction in telling me this when I never raised the topic of AGI and you did. The examples I have given are already current. When the “has-beens” start these types of defensive moves, it says something.

The topic is “AI research”, not “AGI research”.

You don’t need to invent scenarios of the future. The 2016 US elections were supposedly manipulated using the machine learning algorithms deployed by Cambridge Analytica.

3 Likes

No, I mean others will interpret it that way (see posts above to my point), and that you as the author of the OP will need to keep redirecting the conversation back away from AGI. I didn’t mean to direct it in that direction myself.

Let’s start with this one then. Any thoughts on how HTM might contribute nefariously to something like that in the future, and what could/should be done to prepare for it, reduce the potential damage, increase detection of it, etc.?

BTW, this particular topic is fairly polarizing, so discussing it is likely to turn off about half of the audience :stuck_out_tongue_winking_eye:

4 Likes