Ethics of AI Research

I am not sure if they used ML to learn the clustering in this specific case. It depends on how the “algorithm of psychoanalysis” is implemented. But even without that, the event depended on AI.


Well, there’s your answer:

Most people don’t understand ethics.
Those who do, disagree about it.
And those in charge don’t seem to care.

Or realising there’s most likely nothing one can do about it, and hoping for the best.

“Provocative art can help push useful dialogue about the role of technology in our daily lives,” Boston Dynamics said. “This art, however, fundamentally misrepresents Spot and how it is being used to benefit our daily lives.”


Which one of those AI users will be affected in any way, shape, or form by the ethics papers put out by yet another organization? More importantly, even if you are able to make some positive change in the USA, will that have any influence in the slightest in China?

I can just hear the giggles coming out of the NSA now. Why do you think there is such a huge concern about Chinese telecom equipment? Automated telecom keyword spying is already a huge thing in the USA, and there is huge concern that China would use the same techniques against the USA.

Look at what happened to Snowden for trying to get anyone to be aware of the automated spying going on. Did it change anything? Not really. Much is said about the “Snowden effect” but in reality, very little has changed.

Is it a win if we make our autonomous weapons play nice and our opponent does not? They would end up kicking our ass because of the difference in rules of engagement. See: machine guns, for how this has gone down in the past. Next up: automated sentry towers. Hint - they are in use now.

Cambridge Analytica was caught with their hand in the cookie jar but you can be sure that many political operatives have similar companies on speed dial. I assume that most people have read about the mostly manual media manipulation from Russian troll farms. AI-automation can’t be far behind.

Ethics papers are a lot like locks on doors: they keep honest people honest. If anything, ethics papers will give the bad actors new ideas!

The message so far is that AI is being used for unethical things, and some readers are appalled that Jeff’s latest book is naively optimistic about the future of AI and doesn’t take the subject seriously.

Because bad actors will be bad actors regardless of recommendations, any technology comes with the potential for both good and evil. AFAICT, the only effective way to combat the latter is through transparency and regulation (anticipate what is coming, and try to prepare society for it). I feel that Numenta is very strong on the transparency front, and as for regulation, I don’t see any immediate regulation that would apply specifically to HTM as opposed to any other AI/ML technology.

A bit further in the future, I could see agents with the ability to learn on far less training data becoming a game changer that would come with new potential dangers. That is certainly worth discussing in detail. Or perhaps we should start with the low-hanging fruit of the big problems we already have today. What measures are needed to curtail the rampant wholesale collection, storage, and distribution of personal data? How should we organize to place limits on our governments’ authority to surveil us?


Yeah. It’s not like we have restrictions on nuclear weapons or biological weapons or chemical weapons. Let the autonomous weapons loose! We all know the Americans can be trusted. It’s not like they would drop a nuclear bomb on civilian populations - well at least not once. There’s nothing we can do and there is no point having people work on this topic. It is obvious - the answers are already known and the brilliant technicians have them all. If only those silly humanities students were smart enough to program!

Imagine how impossible it would be to regulate monopolies like Google and Facebook. Never in the history of capitalism has the state interfered. It’s only the communists who support the world’s wealthiest families with state subsidies.

Due to the previous paragraph, I can’t tell whether this sentence is serious or not. If it is serious, of course they could be regulated. In the US, we are a nation of political will – we just have to want it enough, and not be afraid to organize and be vocal about it. Our system of government is designed to place the most authority in the hands of our local legislatures, who are the most accountable to the people. I think a lot of the current generation has forgotten just how much power we have in the US to effect change.

Face recognition on every corner?
Black Mirror dystopia -or- present-day China?

NSA running automated keyword searches on the telecom backbones? Yup.
What are we doing about it now? Nada.
Even blowing the whistle on this gets you a room in Leavenworth.
Apparently the people don’t care enough to get this stuff changed.

Autonomous weapons?
Already a thing. I’ll let you google it for yourself lest I be accused of pointing fingers at any particular state.

Gas weapons? Ask the Kurds. Who sold these weapons to be used? Several countries. Who provided satellite weather reports to plan for maximum effectiveness of deployment? Yup, the USA.
Why does the USA still hold an inventory of these weapons?

Setting MAD aside, several countries hold an inventory of nukes, and the club gets larger every so often. We all agree that we are not going to use them … unless XYZ happens or there is some stupid accident.
How is that ethics thing working out there?

We can try to make the world agree to some kinds of limits, but I feel that this is going to be of limited utility. Countries agree to these sorts of things until it is no longer convenient.
Then we can all be SHOCKED, shocked I tell you, that these agreements are breached.

So yes, you can do virtue signalling by stating what should happen in a perfect world. I am not sure how much actual utility is obtained. I suppose it helps some people sleep better at night.


In my opinion we’re way too early for ethical AI and almost too late for ethical use of AI.

AI as it stands is decades off the kind of intelligence required to assess, impose or comply with ethical standards. All we really have is some really good pattern matching and a lot of engineered software on top of it.

But as it stands it’s massively helpful for rich powerful people in their quest to impose their power on others. There is no AI for the common people, it’s totally owned and controlled by large corporations and governments, and they are not here to help us.

I remain deeply pessimistic.


Says more about you. Just a way of avoiding discussion and living in a bubble. Yes the world changes even if you don’t want it to.

For those who are so enamored by science, they might reflect on where modern science comes from and the role of philosophy. For those enamored by logic, note the same story. Will the humanities - just words on paper - continue to change the meaning of science? Yes. The scientists are slowly learning about what they are actually doing when they read the sociology of science.

Even though many of the general population would like to have science stop now and pretend that it is sufficient, actual scientists will expand science in the future to address questions that only philosophers discuss now.

Is AI a technology that will reshape society - obviously yes. Should the current power brokers be the people deciding on the power arrangements in that new society - absolutely not. Will that however most likely happen? Yes because of the sheep like attitude you are espousing.

That implies you don’t see the sarcasm in the rest of the second paragraph. You have been watching too much MSNBC! :slight_smile: Ever hear of food stamps and the living wage?

Now if only we were developing a technology that could organize millions of people. Hmmm, maybe we could call it artificial intelligence and put it in the service of individuals rather than state actors and multinational corporations. Oh no, wait, that doesn’t make sense; my news feed tells me I should be worrying about the health of the stock market.

Are you saying you understand ethics and are telling us that ethicists don’t agree about ethics?

How many in AI agree on what the right approach to AI is? I don’t think there is even agreement on what intelligence means. This does not mean that AI researchers don’t have anything of value to add to AI and/or society.

I think there is a huge amount of agreement in ethics - there is a long history (much longer than AIs) of working to define terms and methodologies etc. There are of course lots of disagreements too and that is normal in every discipline.

If you are looking for ethicists to agree on one particular normative ethics then this would be like trying to get all AI researchers to adopt HTM. If you ask ethicists whether slavery is a good idea then I think you’ll get quite a large consensus.

Politicians are not concerned with ethics but they are very concerned with morality. They reflect the morality of the population that elects them. The morality of Donald Trump (who was accused of rape) or the morality of Joe Biden (who was accused of rape with more compelling evidence than Donald Trump) says a lot.

Sounds unreasonable - once I become aware of things I don’t understand then I ignore them.

Really? You think we forgot already who “you” guys elected only a couple of years ago? And this morning I woke up with the news that Amazon’s Alabama warehouse workers “chose” not to be unionised. What the eff? You know better, Paul. And don’t think I believe this couldn’t happen over here. Of course it can. Of course it does.

Public opinion is as hackable as a rotten egg.

Mark, we’re not the sheep. We’re Chicken Little in front of the stampeding herd.

I’m agreeing with you when you say I’m an amateur. I lack the abilities. (You know that).

This.

And this.

Where are you disagreeing with me, Mark?

Who says I ignore them? I’m just realistic enough that I won’t be able to do anything about it. I could curl up in a ball and wait for the end, or I could have a somewhat optimistic attitude.

Why not do it the old-fashioned way, rather than from the couch? :wink: Ultimately it comes down to what people want, and sadly the issue of collecting personal data is just not important to people. It is on those concerned to craft a convincing enough message, and if they can’t make it important to even the people in their own local community, then maybe they don’t actually have a winning argument.

It is called democracy. If the opposing side isn’t able to convince their base to get out and vote, that’s a “them” problem. Again, people have forgotten that the real power in the US system of government is at the local level. It is just too easy to blame everything on the guy at the top, and try to have your way with sweeping authoritarianism, rather than doing the legwork to start grassroots movements.


I’m not blaming anyone. (Might as well blame an erupting volcano). And I’m certainly not advocating for authoritarianism.

It’s not that I’m giving up. I still go out and vote. I still watch the news. But I’m not naïve about any of it either.


I thought this whole discussion was about AI ethics, but now it is just becoming too political.

Agree so much with @Paul_Lamb. If Mark indeed has a very specific ethical dilemma regarding HTM (or AGI in general), then discussing that would be much more productive than just useless speculation and wasted brainpower.


Sorry, that was probably my fault (this last year has turned me a lot more political than I used to be). I’ll try to stay on topic.

Getting back to thinking about the potential impacts of algorithms which can learn on far less training data than today: the main thing which comes to mind is that capabilities currently limited to those with a lot of resources would become much more widely available. While this might be a good thing WRT the current monopolies, it might bring with it the danger of rogue actors who wouldn’t need a lot of resources to do damage.


I think it’s fair to say that it’s been a trying year for most of us… something for us all to keep in mind, realizing that we have this global shared experience which has spared almost nobody in its touch. To me, it re-emphasizes that we’re all much more connected, and share much more in common in our daily lives than we have differences. We breathe the same air, drink the same water, get sick, suffer heartache, experience joy from little things daily, and we’re all working towards different goals to keep ourselves busy/nourished/fed/housed/safe/etc…

Being from the U.S.A., growing up in relative poverty, and benefiting as a child from the marginal social safety net that we had, after launching out of that situation, I cannot help but want to help others who may be stuck for systemic reasons (lack of access to education, redlining, information bubbles, discrimination, etc.). So for me, where there is any overlap between politics and AI, it’s around regulation and checking against bias of production-deployed models, or the abuse of clustering populations to create manipulative information bubbles for the sake of profit (something I’d class as antisocial corporate behavior).
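To make “checking against bias of production deployed models” concrete: in its simplest form, one such check compares a model’s positive-prediction rates across groups. A minimal sketch, assuming nothing about any particular framework (the function name and the toy data here are invented for illustration):

```python
# Hypothetical sketch: the demographic parity difference, one of the
# simplest bias metrics for a deployed model's predictions.
# predictions: 1 = favorable outcome; groups: group label per prediction.

def demographic_parity_diff(predictions, groups, group_a, group_b):
    """Absolute difference in positive-prediction rates between two groups."""
    def positive_rate(g):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(selected) / len(selected)
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Made-up illustration data: group "a" gets the favorable outcome 3/4
# of the time, group "b" only 1/4 of the time.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_diff(preds, groups, "a", "b"))  # -> 0.5
```

A gap near 0 suggests the model treats the groups similarly on this one axis; a large gap is a flag to investigate, not a verdict. Real audits use several metrics, since different fairness criteria can conflict.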

I try hard not to hew too closely with any tribe, unless that tribe says “Hey, let’s not treat or let other people be treated poorly.” Whoever shares that belief, regardless of political/party/religious/sports affiliation, that’s my people. To the extent that AI can help/hurt that, as pointed out by others here, really comes down to people and their use of AI systems for various means.

A(G)I isn’t a threat right now… The ethics of its use, or neglect of those ethics, is. It matters to me, personally, because it’s my day job to build, maintain, and guide these production and exploration projects that are employing the family of algorithms collectively known as “A.I.”, and I have to keep in mind, in a very real manner, the potential impacts, both positive and negative, of the systems that I’m developing, and to act as a voice of conscience in the organization where I work.

I agree here. So I try to make dumb, narrow, and specific AI systems that do only one thing well. I’m not putting a lot of work into connecting it all together, but I do have my projects that explore combining HTM-like ideas with spiking neural network concepts, while reducing the amount of math/ops required to reach a result. The more we can push AI to the edge, without having to transmit any of that info across the wire, the better. So I’m working on network pruning, uncertainty, encoding compression, etc…, all for the purpose of trying to get these sometimes very useful and helpful systems out of the data center and into devices.
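Of the techniques mentioned above, network pruning is the easiest to sketch. As a rough illustration (not any specific framework’s API, and not the poster’s actual code), magnitude-based pruning zeroes out the smallest weights so the model becomes sparse enough to ship to an edge device:

```python
import numpy as np

# Illustrative sketch of magnitude-based weight pruning: zero out the
# smallest-magnitude fraction `sparsity` of a layer's weights. Sparse
# layers compress well and need fewer multiply-accumulates at inference.

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy with the smallest-|w| fraction `sparsity` set to 0."""
    threshold = np.percentile(np.abs(weights), sparsity * 100)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))          # stand-in for one layer's weights
pruned = prune_by_magnitude(w, 0.5)  # drop the smallest half
print(np.count_nonzero(w), np.count_nonzero(pruned))  # -> 16 8
```

In practice this is done iteratively (prune a little, fine-tune, repeat) rather than in one shot, but the core operation is just this thresholding step.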


Not that my goal is to disagree with you:

  1. We are the sheep - when you see sheep all around, you are part of the herd :slight_smile:
  2. If you don’t know whether ethicists are in agreement or not, then “Those who do, disagree about it.” seems like a guess. I think there is more agreement in contemporary ethics than people imagine; the idea that there is one right normative ethics has largely been set aside, and the focus is much more on applied ethics and mixed normative approaches.
  3. Needing to believe that it is only worth acting when success is assured is a sheep’s game. Getting to a place where near-assured failure is not scary is another option. Hope is not the same as optimism - people in depression hope they will come out of it; that does not make them optimistic. Hope is not a strategy.

I hope you are happy now :slight_smile: