Politics in AI Research

Ok, I’ll bite - unless you are just carping from the cheap seats - in your view, what is it that those of us who have not read the same books as you should be doing differently? Should I use a different data set than MNIST to test my HTM models? Should I even be looking at HTM, or is there some better model that is not morally suspect?

If you ever do start coding the subsystems that could eventually become part of a functional AI, what is it that you will do differently than us poor souls afflicted by “unconscious incompetence”? What exactly is the woke way to do AI research at this very, very early stage of the art?

Rather than telling us how ignorant we are, what do you suggest (in positive, concrete terms that are directly usable) we should all be doing, and how does it differ from what we are doing now?

As far as “going slow” goes, China comes to mind. For centuries they were the center of advanced civilization. They were made into Luddites by royal decree (in the pursuit of stability), and the only real result was that when the rest of the world came banging on their doorstep, it arrived with war machines and steam engines far in advance of anything China could muster. Progress will continue no matter what some ethicist considers morally right. I have trouble thinking that it is the moral high ground to bring a knife to a gunfight, but that is what walking away from technology with significant war-fighting potential amounts to. This is not some future problem: as I write this, the eastern United States is still recovering from an act of cyber war - in this case, ransomware shutting down critical energy infrastructure. Add in AI and the stakes get higher.

Research in these areas is necessary for survival in a competitive world. See: Darwinism.

Being totally ignorant by virtue of not having read the books you have, I still suspect that allowing yourself to be wiped from the face of the earth by not defending yourself has some moral issues involved. With this in mind, how should research in these areas continue? Again, concrete “do this” and “don’t do that” recommendations - not vague platitudes. I will add that recommendations like Asimov’s Three Laws are utterly worthless - nobody knows how to do that level of programming! Real recommendations have to be reducible to practice.


To play the devil’s advocate: are companies like paperclip gods? Publicly traded companies are non-human entities which exist for the purpose of maximizing shareholder returns.


I’ll try to keep this brief because this is kind of off topic.

It does matter, because AI doesn’t have to be an agent, which is an assumption in most of what I’ve read about AI ethics. It doesn’t have to have any goals or any behaviors which change the world. Building something to not hurt people in pursuit of goals is different from preventing a tool from causing harm.
You seem to assume AI has to be an agent:

I don’t see anything moral about concluding things about the world. AI is amoral if it has no goals. It’s how people use the tool which matters (or what people could do with a tool you’re creating), or the AI’s goals if it is an agent.

Be nice. I don’t like having memories of me being rude or arrogant, which is why I spent half an hour writing these two sentences.


In case you don’t know what I was referring to, the idea of a paperclip maximizer is that a superintelligent AI with the goal of making paperclips turns everything into paperclips. I think that’s a valid concern, if the AI is superintelligent and has goals.


The desire to have someone tell you what to do is not easy to get over. The desire to tell people what to do is perhaps just as difficult to get over. I suspect this is particularly the case in our Judeo-Christian society, which has trained us over centuries in the capitalist model to become either a capitalist telling others what to do or a worker doing what someone tells you to do. God is alive and well in the form of the USD.

Engineers - like you and me - did not get a good broad education. The only way to deal with that is taking responsibility for improving our education. Assuming that you have enough knowledge when you have not studied a topic means you are letting the current social norms guide you. If you look at where the current social norms are in fields you do care about, this might provide enough motivation to become more educated. Engineers have been some of the workers who most benefitted from globalisation and financialisation.

The strategy you adopted above is to assume that you already know the answer, and that there are only two options: do nothing, or compete or die. This is seriously misinformed.

Evolution is understood in our individualistic, utilitarian society as competition. If you stop making the self the center of the universe and projecting yourself onto each suffering organism, then the synergies in nature are abundant. Look out the window into nature and you can see just as much cooperation as competition.

Concretely, a major step I think engineers need to take is to stop with the early-20th-century positivism and accept that they do have a social, political, and moral role to play in society. This means getting clear on what sort of world you are building and how you are going to contribute to it, rather than assuming someone else is looking after your (i.e. our) interests.

The goal of an engineer should (here comes the moral judgement) not be simply to build a technology but to participate in building a society. It seems that most technologists limit their children’s screen time; I would bet that some of those technologists are putting all their professional energy into making that as difficult as possible for other parents. The ideas of individual choice and technological destiny are in contradiction. We each should develop (and continue to develop) a clear moral/political position, and that position needs to be supported by the company we work for - or we need to change companies.

For me that means AI needs to serve the majority of citizens by dispersing concentrations of power rather than increasing them. This could be one goal of beneficial AI. Certain types of AI will make that impossible while other types can enable it. This lets me choose where to put my energy.

You can see this play out to some extent in the rift between DeepMind and Google. DeepMind recently failed to extract itself from Google, and a number of employees have left or will leave because of moral concerns about the AI they are developing for one of the most powerful companies in the world, with strong connections to the US military.

Believing that AI could be amoral is not a useful position. Perhaps having thought leaders in the AI industry aware of that would be some small gain.

As for the new fad in the USA of being afraid of China - it seems Russia now only scares half the population. China might be able to reunite the Americans in their competitive struggle to run off the environmental cliff. The most dangerous and destructive country on the planet is obviously the one waging the most wars, dropping the most bombs, funding the most gain-of-function biological research, and building the most ego-centric AIs in the world. God help us all :wink:


No, I do not - you assume that. It should be clear, if you read what I wrote, that I am saying the morality is not “inside” the AI. I’m not sure where you read about AI ethics, but most AI ethics is not concerned with superintelligence; most of the work is on regulating today’s AI. It is obviously an industry that is creating social harm while remaining unregulated, and that is the main focus of AI ethics today. The stuff that gets the headlines is of course not the average daily grind of work in AI ethics. The same goes for AI research in general.

That attitude is from another century - literally over 100 years old. There is no magical line dividing tools from morality. You don’t just build whatever is possible; that is why the USA banned gain-of-function research into coronaviruses in the early 2010s. The morality of people like Fauci is why they turned to funding that work in dual-use Chinese research labs.

That is exactly the unconscious incompetence I was referring to.

No :slight_smile: Ethics is not something that should be discussed nicely - it is offensive to anyone when someone points out that they are immoral; that is how morality shifts. The first step from unconscious incompetence is conscious incompetence, and that is not a nice place to be.

See, I’ll regret every possible way I could respond or not respond. No hard feelings.


Don’t worry about being nice. Water off a duck’s back. If you point out my unconscious incompetence, I will appreciate your nastiness :slight_smile:

From the other thread, I gathered that @markNZed believes the missing piece is a publicly displayed set of ethical guiding principles from Numenta (something like Future of Life’s AI Principles), and a commitment by all researchers and developers at the company (and presumably also enforced by management) to follow them. Another goal also appears to be to acknowledge any philosophical mistakes (from the perspective of the aforementioned principles) in “A Thousand Brains”.


Where did I suggest that?

I don’t recall suggesting that either.

It is not the acknowledgement that matters; the book is already out. That it leads to a debate rather than people buying into the position it promotes seems valuable.


You didn’t (hence “I gathered that”). It seems I still do not understand your point, so this cog will shut up now. :wink:

I mentioned the Future of Life’s AI Principles as an example of people doing serious work on ethics and AI. I’m not sure how Numenta should deal with this. Being aware of that work and mentioning it in the book would have been good but that ship has sailed.

One problem I see is that the young researchers who join do not show interest in the topic, i.e. it is too late by the time they join Numenta. Consider how the “fireside chat” avoided the topic: one of the most popular questions (by audience votes) was about ethics but was not asked. The question was then rephrased and answered in the forum, but the answer showed little understanding of the topic.

But I do suspect the best researchers in the future will be more educated on this and will look to join companies with a clear political goal for their research. It might be a good recruitment strategy!


This seems to fall in line with a couple of my earlier conclusions (such companies, to establish “clear political goals”, would want to publish those goals – and proof that those goals are valued would require enforcement by management).

“clear political goals” is very different from “ethical guiding principles”

I get to that idea because you pushed me to try, not because that was what I had in mind in the other thread :slight_smile:

Mind you, you could be a mind reader…

And yet, the shining example provided (Future of Life’s AI Principles) is literally the latter (notwithstanding a possible misunderstanding of the word “ethical”)

But it is not a shining example for Numenta: FLI is not doing AI research; they are providing a framework that could help companies and people become aware of the issues.


Got it. So back to @bitking’s question:

From the latest discussions, I gather that @markNZed believes the missing piece is a publicly displayed set of political goals from Numenta (and presumably also enforcement of those goals by management, otherwise they would just be words on paper). Another requirement would be, at a minimum, a reference to those goals in future books like “A Thousand Brains”.

Let me know if I am still misinterpreting you :wink:

BTW, I would argue that this mindset (companies orienting themselves toward political goals) is why we are seeing rampant censorship in social media platforms against viewpoints which oppose the gated institutional narrative…


This is a technological solution to a social problem. Inventing an AI is not going to make the US anti-trust laws more enforceable. Companies have already captured 99% of the world, and then they all merged into about 3 big corporations. There are powerful and malign interests holding our world, and they’re not going to go away unless an equally powerful entity forces them to. The only solutions I know of are political, and so I vote.

Inventing an AI could increase the power of average citizens, but that rising tide will lift all ships, including the ships of your enemies. This “rising tide” principle applies to almost every technology.


funny video
