Politics in AI Research

You have made several comments which strongly imply this. Two recent comments, taken together, make the implication:

The second quote there was in response to my summarizing what I understood of your position up to that point – that a company engaged in AI research should formulate, publish, enforce, etc. its political goals. You responded that such efforts are a waste of time without a sophisticated view of ethics, implying that such a view is a prerequisite for those efforts. One observation here is that I am using philosophy and ethics interchangeably, but that is because your understanding of ethics is highly philosophical (I don’t see how someone could have a sophisticated view of ethics independently of philosophy, do you?).

Perhaps where I misinterpreted you is in taking a fully sophisticated understanding to be the prerequisite? Still, there is presumably some level of understanding (not yet achieved by the relevant parties) below which formulating political goals, etc. is still a waste of time, and the relevant parties would need to educate themselves to reach that level first, to avoid wasting their time. I would still stand by my earlier point: if a company is serious about this, it would invest in that education. Human nature is such that if one’s boss were to say “here is a subject you should invest time and treasure in”, while providing virtually no support, incentive, or disincentive to follow through, most people will simply shrug their shoulders and move on.

You have made many comments across multiple threads which imply this (such as references to the sorry state of the education system, turning out “cogs in the machine”, etc.). Perhaps those comments are not relevant in this context, though? Do you mean that cogs are not a problem, so long as the leadership of an organization has a sophisticated understanding of philosophy/ethics?

No, I was actually trying to summarize your viewpoint, but it is proving to be a moving target for me. I did throw in some hyperbole with the firing/re-hiring comment, but that wasn’t meant as a strawman argument; it was meant to highlight where the strategy I thought you were proposing could go wrong in practice. The rest is actually a summary of what I understand your argument to be (though clearly I am not there yet…).

That’s OK (you have tried before and had to give up as well). I think we probably just have incompatible models of the world, which makes it difficult to reach areas of common understanding. Hopefully I haven’t offended you.