Ethics of AI Research

Indeed. I raised the issue in the original post but nobody has deemed it worthy of an answer.

Sorry, I am not sure how you have interpreted the conversation so far as the community not deeming the topic of ethics worthy of an answer. Despite the off-topic parts, it seems that we are discussing it, aren’t we?

Or are you referring to this specifically:

1 Like

This can be particularly dangerous; consider the impact of the stupid “like” button. Focusing on implementing small, specific, dumb things in large complex systems (I mean “complex” in its technical sense, i.e. not merely complicated) can lead to disastrous results. There are many cogs, and we would probably all be better off if each cog were held accountable for the wheel (at least to itself).

I doubt you will find a single AI engineer who says “Hey, let’s treat people poorly, or let other people be treated poorly”, and yet many of them are working on systems that will have exactly that side effect.

Morality works in such a way that you nearly always feel morally justified for whatever behavior you engaged in. The limits of the behavior you engage in are basically your morals. The idea that moral decisions occur when social moral norms are broken is an old idea from Nietzsche. That is a really interesting idea: that you need to behave in a way judged immoral in order to shift moral norms, e.g. refusing to sit at the back of the bus.

The question was

My original post contains:

How did you not see that?

I did see it, and I would argue that most likely others who have posted here did as well (for example the very first reply mentioned why such a goal is difficult in practice). From my perspective, there are two other sentences in the OP as well, so it certainly wasn’t clear to me that this was the most important sentence to you and the one you wanted to discuss.

So focusing on that then, what would you consider a successful discussion? A petition to Jeff that he should update Numenta’s stated goal to be the creation of beneficial intelligence, and not merely to understand biological intelligence and implement it in software?

I have said it is a concern. Maybe Jeff is right. What do you think?

The question in regard to TBT is about the direction that Jeff is pushing for. This seems to assume that amoral intelligence is possible, and I have not seen any compelling philosophical argument to that effect. Maybe someone here can point to a source.

I’m not sure if Numenta has an official take on AI ethics. I am assuming that the two-thirds of Jeff’s book that focuses on these aspects is the unofficial official take.

Politics is crystallized ethics in action. As it stands, our political systems are the counterbalance to, and regulation of, unfettered capitalism. What is currently acceptable falls within the Overton window, and the position of the Overton window is given by the zeitgeist of the times. What the ruling class thinks is normal and correct is disseminated through policy and the media to shape our norms. The window forms around those norms.

The step from feudalism to our current plutocracy has involved some job title changes and there is always a cakewalk to see who will pull the strings from the top, but very little has changed in the last 1000 years. The ruling class stays the same. Might still makes right.

As long as humans are ruled by the subcortical structures, and as long as genetics keeps stamping in our innate behaviors as herd critters with instincts to both lead and follow, we will continue to accept that “those guys” are our leaders and we shall follow. That is how our brains are wired, and there is very little we can do to override that.

BTW: This is the underlying mechanism of religious beliefs. You don’t get a bigger leader than god, and we are all programmed to be followers of what we perceive as a valid leader.

Currently, ethicists live in their own little bubble mostly removed from the mainstream. They may get interviewed on NPR or BBC, put out position papers that a few people read, hold conferences that are mostly attended by other ethicists, get quoted by woke people, but mostly they have very little direct input into the political process.

Example: The company I work for makes traffic weight enforcement systems with license plate readers and driver cameras. When a vehicle is overweight and passes over our system, we capture the pertinent data, and local law enforcement uses that information to issue tickets. We worry about the data privacy of what we collect, and that is about as far as any ethical concerns go. While we have cameras pointing at every vehicle that passes, we only capture the details of vehicles that are breaking the local weight laws. Local law enforcement has performance specifications that we must meet to sell the systems, and we comply with international standards for capturing that data in a way that is accurate, to avoid false accusations.
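For the curious, here is a minimal sketch of that capture policy. All names and the weight limit are hypothetical; real deployments use jurisdiction-specific limits per axle group and vehicle class, plus their own data-retention rules.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical gross weight limit, for illustration only.
GROSS_WEIGHT_LIMIT_KG = 40_000

@dataclass
class VehiclePass:
    """One vehicle passing over the weigh-in-motion sensor."""
    measured_weight_kg: float
    plate_image: bytes
    driver_image: bytes

@dataclass
class ViolationRecord:
    """The only data that is ever persisted and forwarded to law enforcement."""
    measured_weight_kg: float
    plate_image: bytes
    driver_image: bytes

def process_pass(event: VehiclePass) -> Optional[ViolationRecord]:
    # Compliant vehicles are discarded immediately: the cameras see everyone,
    # but data is retained only for vehicles over the limit.
    if event.measured_weight_kg <= GROSS_WEIGHT_LIMIT_KG:
        return None
    return ViolationRecord(
        measured_weight_kg=event.measured_weight_kg,
        plate_image=event.plate_image,
        driver_image=event.driver_image,
    )
```

The point of the design is that the privacy decision is baked into the capture step itself, not handled after the fact.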

We did not consult any ethicists, nor did we attempt to discern whether anything we are doing has ethical issues. Broad statements about what is right or wrong simply are not useful to us. They might be slightly useful to our customers who employ the technology, but I don’t see much outreach to provide useful guidance in that arena.

The current George Floyd trial is an example of law enforcement guidance policies being tested and refined. Many more cops will be informed by the outcome of this trial than by reams of musings from ivory-tower ethicists.

4 Likes

I don’t think purely beneficial intelligence, which cannot be used nefariously, is possible (the same is true of any other innovation). That doesn’t mean I support “degrowth” or burying our collective heads in the sand either.

I believe the way you avoid the negative consequences of new technology is through transparency (so society can prepare for what is coming, and so that the innovators can be called out if they are making bad decisions) and regulation (to limit the damage that can be done).

2 Likes

The link I pointed to and quoted says: “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.” This is not the same thing as claiming that “purely beneficial intelligence, which cannot be used nefariously,” is possible.

You are adopting a strawman tactic by presenting the AI Principles from the Future of Life Institute as if they were making outlandish demands.

I like transparency - Numenta is leading the way on that. We can see that ethical concerns are not being discussed often. I am trying to call out Jeff and point out that amoral intelligence is not a credible ethical position. Ideally this forum could filter out issues so that Jeff does not need to waste his time. I am all for someone making the case as to why Jeff has this right.

1 Like

I’m certainly not trying to make a strawman argument. I read through those principles and I agree with many of them (there is a lot of emphasis on transparency and regulation in there, BTW…). I am simply stating that a goal of “beneficial intelligence” is (I believe) literally not possible. I’m not talking about how I’d like the world to be, but how it is.

I can concede that purely undirected intelligence is not a good idea either, but what should the direction be? It cannot be “beneficial intelligence”, as that is not possible – any technology can be used for both good and bad. Something more realistic might be, “we’ve covered all the obvious loopholes we can think of here.”

Or do you mean that the innovators themselves should always state their opinion about how their technology should be used, and warn how it should not be used? I think the reality is that however pure an innovator’s intentions are, they ultimately have little control over how their creation will be used. That is the job of the regulators.

3 Likes

If I told you that you cannot always be good, would you stop trying to be good?

There are many advantages to striving for beneficial AI in a transparent manner. Firstly, it means you need to define what “beneficial” means to you. Then the people who help you at least know what you want others to think you are doing :slight_smile:

Don’t let perfection be the enemy of the better.

Of course, we should always strive for perfection in every field, AI no less. But there is a level of expectation to be considered to make any goal realistically plausible. Utopia is our ever-present goal, but practically we all know it’s not happening anytime soon - unless we fundamentally change humans.

I am not saying we should stop trying to make our tools better, but I agree that there is no technology without a bad side. The best we can do is make it as good as we can - nothing more.

“I think the reality is that however pure an innovator’s intentions are, they ultimately have little control over how their creation will be used. That is the job of the regulators.”

Perfectly summarizes it.

Do you see any indication that Numenta is not trying to be good? It isn’t like they are building Little Boy here – they are simply trying to figure out how the neocortex works. Because of Numenta’s transparency, the community here is absolutely an integral part of this effort. So if, to you, HTM needs more focus on ethics, then let’s do it. We can certainly have people working out how to apply cortical processes for good, and discussing how they might be used for evil.

WRT setting up a section of the forum for that effort, I think it is a good idea if we can get enough content. I challenge you to spearhead that effort (I’ll help with tagging relevant content). If the amount of content warrants it, we can discuss reorganizing the forum a bit to support it.

2 Likes

You both keep coming back to the obvious statement that perfection is not possible. Get over it. Nobody is claiming it is. But if you believe intelligence is amoral then you are leaving the door wide open for abuse. The question is fairly simple: is Jeff right or wrong in this?

Yes. The PI just published a book that assumes amoral intelligence is a thing.

The only reason GPT-3 is not available to the general public is that OpenAI is now a for-profit organization (contrary to what it started as) and needs to maintain a business - releasing their flagship model would render their source of income useless.

It is not due to any ethical issues posed by GPT-3; that is just a cover story to soften the hard truth.

3 Likes

Is this the same person who is worried about biased data? Do you think OpenAI believes the internet is not biased, or do you think they don’t care about biased training the way you do?

No, I haven’t made any remarks about biased data.

OpenAI certainly might care about biased data (they may even have employed their own techniques to counter it), but multiple tests have determined that GPT-3 is pretty biased, produces objectionable NSFW content, and furthermore can be probed fairly easily to reveal data used to train it, including sensitive pieces of information which should not be publicly available.
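To make that last point concrete, here is a rough sketch of the published training-data-extraction idea. Since GPT-3’s weights are not public, it uses the openly downloadable GPT-2 as a stand-in, and the prompt is only an illustrative guess at the kind of prefix that precedes memorized text.

```python
# Rough sketch of a training-data-extraction probe (not OpenAI's API).
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Seed the model with a prefix that might precede memorized text in its
# training data (e.g. contact-info boilerplate), then decode greedily.
prompt = "For more information, please contact"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,  # greedy decoding favours memorized continuations
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Long, low-entropy continuations that reproduce verbatim strings from the web are candidate memorized training samples - that is the mechanism behind the “reveal training data” concern.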

I am simply offering a straightforward reply to your comment about GPT-3 being withheld for ethical reasons. The current state of research is such that anyone can hand-deliver you a pre-trained Megatron 13B that could beat GPT-3 at generating fake news (I certainly have trained it, just not on fake news).

OpenAI obviously knows this - whatever tool can be used for bad purposes will be used for bad purposes (google DeepNu** and the first thing you get is a hosted service that can be used by anyone for free). Their refusal to share their model is purely for business reasons, nothing else.

1 Like

That is an oversimplification. The book merely hypothesizes that the neocortex (which HTM is attempting to model in software) is amoral and has no goals or values of its own – it simply learns models of the world. Thus, algorithms based on the neocortex would be neither inherently good nor inherently evil (that would depend on how they are used).

You have to take that observation along with much of the rest of the book, which extensively lays out the risks and negative aspects of human intelligence. With that in mind, I think it is clear that Jeff is advocating the use of this technology to improve our situation. That is far from an amoral goal.

3 Likes

The business reasons are based on what? The moral outrage that letting GPT-3 loose would cause. You can’t make such black-and-white distinctions. The question is: on what ethical basis is OpenAI going to find the solution to its ethical dilemma?