Politics in AI Research

In case you don’t know what I was referring to: the paperclip maximizer is a thought experiment in which a superintelligent AI is given the goal of making paperclips, so it turns everything into paperclips. I think that’s a valid concern, if the AI is superintelligent and has goals.

1 Like

The desire to have someone tell you what to do is not easy to get over. The desire to tell people what to do is perhaps just as difficult to get over. I suspect this is particularly the case in our Judeo-Christian society, which has trained us over centuries in the capitalist model to become either a capitalist telling others what to do or a worker doing what someone tells you to do. God is alive and well in the form of the USD.

Engineers - like you and me - did not get a good broad education. The only way to deal with that is to take responsibility for improving our own education. Assuming that you have enough knowledge, when you have not studied a topic, means you are letting the current social norms guide you. If you look at where those norms sit in fields you do care about, this might provide enough motivation to become more educated. Engineers have been among the workers who benefitted most from globalisation and financialisation.

The strategy you adopted above is to assume that you already know the answer and that there are only two options: do nothing or compete to die. This is seriously misinformed.

Evolution is understood in our individualistic, utilitarian society as competition. If you stop making the self the center of the universe and projecting yourself onto each suffering organism, then the synergies in nature are abundant. Look out the window into nature and you can see just as much cooperation as competition.

Concretely, a major step I think engineers need to make is to stop with the early 20th century positivism and accept that they do have a social, political, moral role to play in society. This means getting clear on what sort of world you are building and how you are going to contribute to that, rather than assuming someone else is looking after your (i.e. our) interests.

The goal of an engineer should (here comes the moral judgement) not be simply to build a technology but to participate in building a society. It seems that most technologists limit their children’s screen time, yet I would bet that some of those same technologists are putting all their professional energy into making that as difficult as possible for other parents. The ideas of individual choice and technological destiny are in contradiction. We each should develop (and continue to develop) a clear moral/political position, and that needs to be supported by the company you work for - or you need to change companies.

For me that means AI needs to serve the majority of citizens by dispersing concentrations of power rather than increasing them. This could be one goal of beneficial AI. Certain types of AI will make that impossible, while other types can enable it. This lets me choose where to put my energy.

You can see this play out to some extent in the rift between DeepMind and Google. DeepMind recently failed to extract itself from Google, and a number of employees have left or will leave because of moral concerns about the AI they are developing for one of the most powerful companies in the world, with strong connections to the US military.

Believing that AI could be amoral is not a useful position. Perhaps having thought leaders in the AI industry aware of that would be some small gain.

As for the new fad in the USA of being afraid of China - it seems Russia now only scares half the population. China might be able to reunite the Americans in their competitive struggle to run off the environmental cliff. The most dangerous and destructive country on the planet is obviously the one waging the most wars, dropping the most bombs, funding the most gain-of-function biological research, and building the most ego-centric AIs in the world. God help us all :wink:

1 Like

No, I do not - you assume that. It should, if you read what I wrote, be clear that I am saying the morality is not “inside” the AI. I’m not sure where you read about AI ethics, but most AI ethics is not concerned with superintelligence; most of the work is on regulating today’s AI. It is obviously an industry that is creating social harm and it is unregulated - that is the main focus of AI ethics today. The stuff that gets the headlines is of course not the average daily grind of work in AI ethics. The same goes for AI research in general.

That attitude is from another century - literally over 100 years old. There is no magical line dividing tools and morality. You don’t just build whatever is possible; that is why the USA paused funding for gain-of-function research into coronaviruses in 2014. The morality of people like Fauci is why they turned to funding that work in dual-use Chinese research labs.

That is exactly the unconscious incompetence I was referring to.

No :slight_smile: Ethics is not something that should be discussed nicely - it is offensive to anyone when someone points out they are immoral, and that is how morality shifts. The first step from unconscious incompetence is conscious incompetence, and that is not a nice place to be.

See, I’ll regret every possible way I could respond or not respond. No hard feelings.

1 Like

Don’t worry about being nice. Water off a duck’s back. If you point out my unconscious incompetence then I will appreciate your nastiness :slight_smile:

From the other thread, I gathered that @markNZed believes the missing piece is a publicly displayed set of ethical guiding principles from Numenta (something like Future of Life’s AI Principles), and a commitment by all researchers and developers at the company (and presumably also enforced by management) to follow them. Another goal also appears to be acknowledging any philosophical mistakes (from the perspective of the aforementioned principles) in “A Thousand Brains”.

2 Likes

Where did I suggest that?

I don’t recall suggesting that either.

It is not the acknowledgement that matters; the book is already out. That it leads to a debate rather than people buying into the position it promotes seems valuable.

2 Likes

You didn’t (hence “I gathered that”). It seems I still do not understand your point, so this cog will shut up now. :wink:

I mentioned the Future of Life’s AI Principles as an example of people doing serious work on ethics and AI. I’m not sure how Numenta should deal with this. Being aware of that work and mentioning it in the book would have been good but that ship has sailed.

One problem I see is that the young researchers who join do not show interest in the topic, i.e. it is too late by the time they join Numenta. Consider how the “fireside chat” avoided the topic: one of the most popular questions (by audience votes) was about ethics, but it was not asked. The question was then rephrased and answered in the forum, but the answer showed little understanding of the topic.

But I do suspect the best researchers in the future will be more educated on this and will look to join companies with a clear political goal for their research. It might be a good recruitment strategy!

1 Like

This seems to fall in line with a couple of my earlier conclusions (such companies, to establish “clear political goals”, would want to publish those goals – and proof that those goals are valued would require enforcement by management).

“clear political goals” is very different from “ethical guiding principles”

I get to that idea because you pushed me to try, not because that was what I had in mind in the other thread :slight_smile:

Mind you, you could be a mind reader…

And yet, the shining example provided (Future of Life’s AI Principles) is literally the latter (notwithstanding a possible misunderstanding of the word “ethical”).

But it is not a shining example for Numenta: FLI is not doing AI research; they are providing a framework that could help companies/people become aware of the issues.

1 Like

Got it. So back to @bitking’s question:

From the latest discussions, I gather that @markNZed believes the missing piece is a publicly displayed set of political goals from Numenta (and presumably also enforcement of those goals by management, otherwise they would just be words on paper). Another requirement would be, at a minimum, a reference to those goals in future books like “A Thousand Brains”.

Let me know if I am still misinterpreting you :wink:

BTW, I would argue that this mindset (companies orienting themselves toward political goals) is why we are seeing rampant censorship on social media platforms against viewpoints that oppose the gated institutional narrative…

2 Likes

This is a technological solution to a social problem. Inventing an AI is not going to make US anti-trust laws more enforceable. Companies have already captured 99% of the world, and then they all merged into about three big corporations. There are powerful and malign interests holding our world, and they’re not going to go away unless an equally powerful entity forces them to. The only solutions I know of are political, and so I vote.

Inventing an AI could increase the power of the average citizens, but that rising tide will lift all ships, including the ships of your enemies. This “rising tide” principle applies to almost every technology.

2 Likes

funny video

1 Like

I think you are still misinterpreting me :wink: It is not “the missing piece.” This would be one sign that it is being taken seriously - the unavoidable role of politics and morality would at least be recognized.

Without a sophisticated view of ethics this will be a waste of time. It is not the writing it down that is important; the work required to get to the point where they can write something useful is what would be most valuable.

Yes, it was much more comfortable when it was being done and you couldn’t see it. That the social media platforms are leveraging AI to do this more effectively is a good example of why there needs to be regulation ASAP.

How is that working out for you?

It depends on how and what technology gets developed. It would require thinking outside the box. If you already believe it is impossible then you will not be able to think outside of the box you put yourself into.

If we actually lived in a democracy - where the will of the people is respected - then it would be possible to regulate in very different ways. History tells us that organised groups can be more powerful than the oligarchy. Of course, most AI research is currently owned by people who have done extremely well out of the current situation, so I would not bet on them fixing it. However, they have been relatively unsuccessful in “cracking” the problem so far, so maybe there is some divine justice that leaves space for an alternative approach.

1 Like

Ok, so tweaking my answer again (hopefully I am getting closer…): everyone working for Numenta should study and become up to date on philosophy. Presumably this would involve the company investing in their education (or alternatively firing the current workers and hiring new ones who do have the necessary education). This would be a necessary prerequisite to establishing the company’s political goals, enforcing those goals, and evangelizing those goals in all future works of literature.

2 Likes

Pretty well actually. Despite all of the political turmoil, there are no past historical periods which I would rather live in. And I certainly wouldn’t want to move to a country which does not hold elections.

2 Likes