Politics in AI Research

I think you are still misinterpreting me :wink: It is not “the missing piece.” This would be one sign that it is being taken seriously - the unavoidable role of politics and morality would at least be recognized.

Without a sophisticated view of ethics, this will be a waste of time. It is not the writing down that is important; the most valuable part would be the work required to get to a point where they can write something useful.

Yes, it was much more comfortable when it was being done and you couldn’t see it. That the social media platforms are leveraging AI to do this more effectively is a good example of why there needs to be regulation ASAP.

How is that working out for you?

It depends on how and what technology gets developed. It would require thinking outside the box. If you already believe it is impossible then you will not be able to think outside of the box you put yourself into.

If we actually lived in a democracy - where the will of the people is respected - then it would be possible to regulate in very different ways. History tells us that organised groups can be more powerful than the oligarchy. Of course most of the AI research is currently owned by people who have done extremely well out of the current situation. So I would not bet on them fixing it. However, they have been relatively unsuccessful in “cracking” the problem so far, so maybe there is some divine justice that leaves the space for an alternative approach.


Ok, so tweaking my answer again (hopefully I am getting closer…) Everyone working for Numenta should study and become up-to-date on philosophy. Presumably this would involve the company investing in their education (or, alternatively, firing the current workers and hiring new ones who do have the necessary education). This would be a necessary prerequisite before establishing the company’s political goals, enforcing them, and evangelizing them in all future works of literature.


Pretty well actually. Despite all of the political turmoil, there are no past historical periods which I would rather live in. And I certainly wouldn’t want to move to a country which does not hold elections.


Do you really expect that to lead to a reasonable conversation? I am giving up at this point.

Yeah, you probably wouldn’t want to move to many countries where you’ve financed the creation of elections either :slight_smile:

I am just repeating what I have heard thus far. You mentioned that a knowledge of philosophy is necessary before the other things. You also mentioned that organizations should consist of people who are well-rounded in their education (including their education in philosophy). There are only two ways to reach that goal – either the people get themselves to that level (which, I would argue, a company that were serious about this would invest in), or you change the people. Of course firing and re-hiring are overly extreme (I only mentioned that to point out where this line of reasoning could end up going wrong in practice).


If you don’t mind sharing your experience with the ethics material, I want to get a feel for how I should be looking at what I am doing. At this point, I am not looking for a do this / don’t do that kind of an answer, but more of an exploration of my ignorance in this area that you have spent time learning.

Please don’t take this as any sort of attack, I am asking you to help me understand something that I know little about.

As you may recall, I feel I understand the basic mechanism of consciousness, and the technology to implement this mechanism is starting to be widely available at the level of amateur AI researchers.
Let’s assume that I am successful in creating an agent that can read, talk, carry on, and actually understand the contents of a conversation. I should be able to give it basic drives and motivations to act.

As I understand it, the parts that Jeff H. has minimized in importance (the subcortical structures and the way the connections are made between processing maps) are the missing parts. As the work with HTM is expanded to include the H of HTM, these relationships will start to become obvious to everyone, so it is just a matter of time until this is widely discovered - if not by me, then certainly by others.

The abilities and limits are unknown as this is new technology.

What should I be thinking that I may not have considered?

Are there any particular harms that I should consider to guide my actions in releasing this research to the wider community?

How should I (as an individual that may not think of everything) evaluate harms vs benefits without necessarily putting this tech in the hands of bad actors by accident? Is it even possible to limit the dissemination of this technology? As I said, I think that this is mostly just changing the way to look at how the parts are assembled and not really any new parts. Once this viewpoint is understood most researchers will be able to duplicate it without access to my code base.

The inventors of email did not think of spam or email cons; do the current benefits of email outweigh the harms? How can I see all the evil uses my technology might be put to when I really don’t even know how it will be developed at this stage?

You are repeating what you are imagining I think. Where did I mention that “knowledge of philosophy is necessary before the other things?”

Where did I state that organizations should only consist of people who are well-rounded in their education?

Because you are not listening and thinking, but instead trying to create a strawman, I gave up.

That seems a good question. I think we should assume we have not considered important points and get people involved who have diverse perspectives on the issues. That probably means discussing the issues with people you normally would not, and somehow learning to understand their perspective from their perspective.

I’m not sure we could come up with a list. It depends largely on your politics and morality as to what would be on the list and what would be considered harmful. If you are releasing something that will have a major impact then you need to consider perspectives more diverse than your own. Perhaps get someone involved who you respect as equally brilliant in ethics as you are in AI.

I think there have been cases of people deciding not to do certain work or to publish certain work because it would be harmful. It is probably not easy if the publication would lead to fortune and fame.

I think the way you frame the problem will lead to different solutions. You probably bought into a modern philosophical paradigm in understanding the problem. That will limit the range of solutions. Modernity is hell-bent on misusing technology, so I guess unless the foundations of that paradigm are questioned, you will go ahead.

The approach taken by the FLI is to prioritize the beneficial development of AI. This could be taken to mean not considering AI as a “raw” technology that has no moral values “built into it”. The challenge then becomes much greater, for example, the AI may need to be explainable. To be explainable it may need to be well understood through formal theory and that raises a different set of challenges for the research than hacking together a sort of Frankenstein system for better predicting clicks or stock prices.

If, at the outset, the goal is to build something that can be used for both good and ill, then it will be used for both. The moral development of individuals needs to be in line with the tools at their disposal. A similar situation exists for society and technology. For example, we can engineer highly deadly and highly contagious viruses; this does not mean that we should. It also means we should not invest time and energy trying to make that sort of technology cheap, easy, and widely available.

Most technologists don’t want to understand their role in political terms; it is not easy, and it is not comfortable. It is hard to become responsible for the suffering of others. It is probably also essential that people who are building the tools for the future start behaving differently.

Having our best researchers’ careers partly conditioned by their ongoing personal development, broadening sensitivity, and engagement in society would probably lead to better results in both technical and social terms. The current pigeonholing of researchers and research is counter-productive in many ways.

So if modern philosophy and ethics have advanced, then could you point out some of those advancements? Are there any articles or literature which I could read in order to learn about the new ethics/philosophy? Or even a few search terms which I could follow up on?

Philosophy is a vast field, and even describing philosophy assumes some philosophical framework. Given that you have no idea where to start, I would start with a history of philosophy. That will probably get you up to the mid 20th century without completely misleading you, but it will probably be very light on non-Western philosophy. When it comes to contemporary work it gets much harder. This is similar in any huge field, like neuroscience for example: at some point you start having a favorite angle and reading in that direction.

Getting to the bleeding edge is very hard work in nearly any field, so you probably can’t understand contemporary work in more than one broad domain. But what you can probably do is learn enough philosophy to identify philosophers who are contemporary and align with your preferences, and then trust those people to translate the issues into more general texts - much as you might follow a preferred neuroscientist’s work.

A general approach would be to start with very simple texts, e.g. general Wikipedia entries, and https://plato.stanford.edu/ if you run into the limits of Wikipedia. After reading a broad history (you could start with a novel like Sophie’s World), move on to some secondary texts; once you have some comfort with the technical vocabulary, try some primary texts.

Some people like to distinguish analytic philosophy from continental philosophy. I think continental philosophy is more useful but less structured. Analytic philosophy has been professionalized to a point where the work is very suspect, in my opinion. I want to see my favorite philosopher embodying a way of life that aligns with their philosophy. The number of citations is a whole other game that professional philosophers are playing.

Because it is more structured, analytic philosophy can help you get some clarity if you don’t read too deeply. They divide ethics into meta-ethics, normative ethics, and applied ethics. A first step would be to go one level deeper and understand the main concepts within each of those sub-domains. Wikipedia has reasonable articles nowadays, I think.


Pardon my butting in: I’ve been following this thread in the hope it would lead somewhere, but this takes the cake.

If the only way you can get a useful understanding of philosophy is to first do so much study as to become a philosopher then you’re describing a religion or a craft guild, not a modern discipline that can be applied to solve interesting problems. If the only way to get a grasp of recent advances in philosophy is to start with a history that goes back centuries you’re describing something akin to studying modern medicine based on the myths and fables of Olympus or Valhalla. Spare me.

The distinguishing characteristic of the true expert in a field is the ability to express and explain key points in their own field using language that can be understood by experts in others. If you can do that, I’m all ears. If not, a pointer to someone who can would be much appreciated.


This is a sad reflection of the state of our education system. What you want is something that helps you be more effective within the current philosophical framework that you are assuming, without understanding what it is or where it came from. In that regard, there are a bunch of people who will sell you that knowledge, so just keep doing what you’re doing.

I am not a “true expert in the field” - that should be obvious - and I have never claimed to be such an expert. I know enough to know I don’t know enough.

Philosophy, if studied well, is not an extra string to your bow. It will change who you are and change the world from your perspective. It is not some set of facts that you pick up by reading a book; it is a process that does not stop - the more you learn, the less you know.

But independent of that, if you think you can understand a topic well enough to publish about it by listening to someone do a good job of making you think you understand the subject, then I have a bridge in Brooklyn you might like too.

You have made several comments which strongly imply this. Two recent comments which, taken together, make the implication:

The second quote there was in response to my summarizing what I understood of your position up to that point – that a company engaged in AI research should formulate, publish, enforce, etc. its political goals. You responded that such efforts are a waste of time without a sophisticated view of ethics, implying that such a sophisticated view is a prerequisite to those efforts. Now one observation here is that I am interchanging philosophy and ethics, but that is because your understanding of ethics is highly philosophical (I don’t see someone having a sophisticated view of ethics independently of philosophy, do you?)

Perhaps where I misinterpreted you is in the fully sophisticated understanding being a prerequisite? Still, there presumably is some level of understanding (not yet achieved by the relevant parties) prior to which the formulation of political goals etc. is still a waste of time (and the relevant parties need to educate themselves to reach that level first, to avoid wasting their time). I would still stand by my point that if a company is serious about this, it will invest in that education. Human nature is such that if one’s boss were to say “here is a subject you should invest time and treasure in”, and provide virtually no support or incentive (or disincentive) to follow through, most people will simply shrug their shoulders and move on.

You have made many comments across multiple threads which imply this (such as references to the sorry state of the education system, turning out “cogs in the machine”, etc). Perhaps those comments are not relevant in this context though? Do you mean that cogs are not a problem, so long as the leadership of an organization has the sophisticated understanding of philosophy/ethics?

No, actually I was trying to summarize your viewpoint, but it is proving to be a moving target for me. I did throw in some hyperbole with the firing/re-hiring comment, but that wasn’t meant as a strawman argument, but to highlight where the strategy I thought you were proposing could go wrong in practice. The rest is actually a summary of what I understand your argument to be (though clearly I am not there yet…)

That’s ok (you have tried before and had to give up as well) I think we probably just have incompatible models of the world, in which it is difficult to reach areas of common understanding. Hopefully I haven’t offended you.

What is offensive is having you tell me what I think rather than you thinking for yourself.

Look at your argument:

I write that there needs to be some profound understanding of ethics before an organisation writes up a political agenda and that this is not easy.

You somehow convert that into a recruitment strategy that means everyone in the organisation needs to be fired, all research needs to stop, everyone needs to go back to university and get a PhD in philosophy before they can do anything. That is just plain stupid.

Would you suggest that an organisation embarked on an AI project have an AI expert work on the plan before launching into the publication of a research agenda? Would you recommend that the firm’s accountant needs a PhD in AI before they can do accounting work for the company?

In this particular instance, it is the boss who needs to do more work to learn about the topic before publishing a book that is intended to tell the world how to manage the topic. Another option would be to write about one’s field of expertise and hire experts who are already educated in the topics one does not have time to master.

There is, obviously, space for cogs and given that most people have been educated to be cogs, that is naturally how it will go. That does not excuse leaders from leading people astray. Most people want to be cogs, they will probably not be reading this thread to this point.

That the education system would ideally be more complete seems essential. That people who have been miseducated at least get to hear one person, once, tell them that, seems fair. That they will nearly all ignore that point is also par for the course.

Rather than trying to reverse engineer what I think, go and do some reading and form your own opinion, or ignore the topic and write some code :slight_smile:

How else can I convey to you that I actually understand your perspective, if I do not try to summarize it? If I hadn’t summarized your point the few times I tried above, I might have said “yeh, I totally get you”, and then proceeded with a completely distorted understanding that doesn’t actually match your perspective at all. This wasn’t an attempt to tell you what you think, but to tell you what I think you think (with the understanding that you would come back and make corrections). My hope was that by repeating an iterative process like that, I might eventually reach a better understanding of your perspective (though in practice, it seems that we just ended up circling a local minimum…)

Now who is using hyperbole :wink: Ok, I get your point – if you read the rest of my earlier summary without the parenthetical statement about recruitment, hopefully you can see what I understood your argument to be.

Got it. So to re-summarize my understanding of your perspective (please do not take this as a personal attack or me trying to tell you what you think, etc.):

The leadership at Numenta should study and gain a sophisticated understanding of ethics (and the requisite philosophy). This is a necessary prerequisite. Once complete, they should then establish political goals for the company, and make those goals publicly available. They should also enforce those goals in their organization (otherwise they would just be words on paper). Additionally, in future works of literature by the leadership, those political goals should at a minimum be referenced.

Yes, particularly if they are writing books on the topic to educate a general audience.

They already have political goals, they may or may not be aware of what they are and who set up those goals in the first place.

I don’t see any particular need for that. This is starting to get into moral prescription. It seems inappropriate to guess what the outcome might be.

This seems to be your particular preference. Very authoritarian of you. I would let them figure out what works for them.

Again it depends on what the political goals are. Maybe they want to further the American empire and accelerate environmental disaster, so just staying mum would do it.

Planning to bounce back and forth to make this thread go infinite? You could at least find yourself a link or something.
You claimed your earlier rudeness was for exchanging information, but it only served to drag out the thread.

What Paul Lamb wrote sounds a lot like a summary of what you’re saying. Maybe you could give your own summary? Rather than claiming anything specific, are you reacting to what you perceive as people dismissing philosophy (or some topic)? Is that what you meant by our unconscious incompetence?

Alright, I’ll explain why I’m not super interested in complex morals.

First of all, I don’t care much about it in regards to modern AI, because I’m not interested in modern AI. It’s other people’s problem to solve. Yes, it’s a huge problem.

In regards to general AI, I think complex morals are important, but not all people and countries will care. That’s the bigger issue, and philosophy is just a distraction if that problem goes unsolved.

Expanding your quote a bit:

I basically said “you don’t just build whatever is possible” directly after the part you quoted, in the parenthesis. I was saying the tool itself has no responsibility unless it’s an agent. In that sense, non-agent AI is amoral. This was in response to something from before the thread was split.

You later contradicted yourself, denying you assume AI has to be an agent (despite that being an implicit assumption in the above quote):

Because of that contradiction, it’s hard to understand what you think beyond “you’re wrong” but it seems like you just misunderstood me and there’s no disagreement here.

Great response, very informative. You know I agree AI will have impacts with moral consequences? I’m saying something very specific: if intelligence is defined in a way which doesn’t include intentions, it is amoral in and of itself. That’s all. It’s impossible to make intelligence have good intentions if it has no intentions.

Do you get why that’s wrong? Maybe you were saying there’s an assumption that intelligence has no moral impacts on the world. I doubt anyone argued that. Also, you say any agent which acts will have moral consequences, but it needn’t act. Works I’ve seen on AI ethics assume general AI will have desires and act to produce desired world states, and from that they conclude AI will be problematic, e.g. paperclip maximizer (where a superintelligence has the goal to make paperclips and turns everything into paperclips). Perhaps what you read was responding to that assumption, and you misunderstood it. I haven’t yet finished the thousand brains book if that’s what you were responding to.

Here’s why I don’t think complex morals are the most important thing for general AI (still important though). First, not all countries or people will do that, if the world is like it is right now. We need a world where there’s no reason to compete with other countries.

Second, general AI will potentially cause our extinction before it does much besides process data. It’s easy to underestimate how big a deal processing data (e.g. for science) is, compared to things like AI warfare. Let’s say we’re at a point in time where general AI is fairly young and only has dog-level intelligence. Imagine a dog-level intelligence which is entirely directed towards something specific, e.g. math. Its entire world is math. There’s no need to translate from our messy world to an abstract concept. I think that dog would be at least as good as a human child at math. Except silicon is a million times faster, and it never loses focus. So imagine something as smart as a human child, 100% focused on math, doing over 2,700 years of math per day. That’s just one AI device, and you can bet we’ll make at least a billion.

2.7 trillion years of child-level thought per day.
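For anyone who wants to sanity-check those numbers, here is a rough sketch of the arithmetic; the million-fold speedup and the one billion devices are the assumptions stated above, not established figures:

```python
# Back-of-the-envelope check of the numbers above.
# Both inputs are assumptions from the post, not established figures.
speedup = 1_000_000        # assumed speed advantage of silicon over a human child
devices = 1_000_000_000    # assumed number of AI devices ("at least a billion")

years_per_day_per_device = speedup / 365          # ~2,740 subjective years per device per day
total_years_per_day = years_per_day_per_device * devices

print(f"{years_per_day_per_device:,.0f} child-years of thought per device per day")
print(f"{total_years_per_day:,.0f} child-years of thought per day in total")  # ~2.7 trillion
```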

Yeah, everything’s gonna be solved in the first hundred thousand years of that. So this’ll happen before we even have dog-level intelligence (maybe even before general AI is fully complete, which might be ideal because that’s slower). All of science, engineering, etc. will be solved or reach its limits, possibly in the first few years of general AI. That might sound extreme, but in modern times we have 1000x as many people as we did until about 10,000 years ago - a millennium of thought per year. Another 1000x multiplier isn’t unreasonable, especially compared to 2.7 trillion years per day. Except this time progress will happen in days, not a lifetime and not a thousand lifetimes. We better be as ready as we can be. And probably have a completely separate society on the other side of the galaxy, out of communication range, because otherwise we’ll probably go extinct.

Do you see why I don’t care much about complex morals compared to the world’s political state? If we don’t have a peaceful world, we’re dead. If we don’t have equality, most people are dead. If we don’t live spread across space, and we don’t have extreme surveillance, like on a molecular level, we’re dead. (I wouldn’t mind surveillance by something like deep learning but we better have fancy math to keep it limited to that.)

Complex morals are a distraction if they’re the main focus. Numenta is transparent and connected to science, which would help reduce harm caused by countries clashing from general AI. The economic benefits from solving all of science are massive, so if there’s collaboration, everyone will join or get left behind. If all countries get general AI at the same time, that will reduce the odds of our extinction. Although, maybe not by much.

Things look grim, but AI can’t explain the Fermi paradox (if the universe is so big and old, where are all the aliens?). Odds are some of the AI would spread across space (it takes only about a million years to colonize the galaxy), but we don’t see that. So while we’re worrying about AI, we better figure out the Fermi paradox too, because we might need AI to have a shot at surviving. Unless inflation theory is correct, in which case we’re the first species capable of space travel (because there’d be something like 10^10^70 times more universes each second, almost all species are the first of their kind). We probably won’t need AI to survive the filter(s) between non-life and galaxy colonization. So I think it’s more likely AI will kill everyone, not by robot wars or superintelligent gods, but by solving all of science in a fraction of a lifetime, releasing who knows what. Probably things on the level of black holes and super plagues, but less flashy. If we don’t get ready, we better hope science has low limits.


The parenthetical statement explains why I am adding this point to the discussion – what good are stated goals by leadership, if the people working under you are not actually working to achieve those goals? The leadership can publish and discuss and evangelize their goals all day, but if the people actually doing the work are not striving to achieve those goals, then ultimately they mean nothing.

That is of course from a more generalized perspective – the specific folks at Numenta may not actually work against their leadership’s political goals once established, so it may not apply to them. I’ll back away from this a bit and say the enforcement aspect is a fail-safe in the strategy that may not always be required. Basically it is a card that might need to be played in the case of a dysfunctional organization.

But you have stated that there should be “clear political goals”, and that you envisioned workers in the future seeking out companies which have such clear political goals. It seems that the best way to make them clear is to make them publicly available (how else would the hypothetical future workers come to know about them?). This also ties into the point about mentioning those political goals in works of literature (which is also a form of making them publicly available).

Sorry, now I am totally confused again. I thought the point of all this was moral political goals?

Any agent that acts includes the agent that builds/uses an AI whether that AI has agency or not. Again you need to broaden your understanding of the scope of the question.

There are moral dilemmas in either case, and in both cases.

I was using the term agent in a very loose fashion. For example I would consider a company to be an agent. AI is typically part of a larger agent but that does not remove it from the moral calculus.

Again, you need to broaden the scope. It is not your definition that matters. It does not require an autonomous AI to wreak havoc. From a more sophisticated view of ethics than a Judeo-Christian, ego-centric, who-goes-to-heaven-first perspective, it is the system and the broader consequences that also need to be taken into account.