Ethics of AI Research

One way HTM might contribute positively is through anomaly detection: identifying in real time when manipulation may be occurring. We could explore that idea with current technology. Are you aware of any documentation supporting the hypothesis about Cambridge Analytica’s ML algorithms, or any details about the algorithms themselves?

2 Likes

What is sorely lacking in current (and most likely also future) human affairs, hence also in AGI-augmented anthropic affairs, is a subjectivity-snubbing definition of maximal “Absolute Life Quality” (ALQ), supplied as part of “ÆPT” (≈ ‘Atheistic Enlightenment Potently Textualised’). This definition can be used as an ÆPT motto and as a cultural-relativism-crushing but ALQholism- and ALQwholesomeness-promoting ‘measuring stick’/rod for the neural Actention Selection Serving System of people with inEPT (inÆPT) attitudes or beliefs. :expressionless:
The ÆPT motto mentioned was conceived in a squeaky-clean spirit of ‘antiseptic’ humor, so don’t worry too much. :sunglasses:

The greater the dearth of ALQ-holes in a neural ASSS, the higher the ALQwholesomeness of the individual whose ASSS it is.
An ALQ-hole is maintained by post- or pre-synaptic inhibition that blocks the relay of excitatory messages on their way to create a paid-for (primarily in the currency of neurometabolic resources) “actention” of the distress type.

The circumstantial cause of an ALQ-hole is a “Specific ‘Hibernation’ Imploring Threat”, and the conditioned-in aftermath of surviving such a predicament or ordeal is aptly (ÆPTly) denoted with the “MAD”-inspired (and shortest possible while still ÆPT) acronym CURSES, short for ≈ “Conditioned-in kept Unconscious Reverberating State/s Effecting (EAVASIVE as well as somatic) Symptoms”.

One way HTM might contribute positively is through anomaly detection: identifying in real time when manipulation may be occurring.

I think this is a great example. However, my own concern with a lot of algorithms these days is mostly in how they’re trained. Take anomaly detection: the data you train it on defines what counts as an anomaly, and if the data omits (mistakenly… or not, if you’re evil and sly…) certain cases, then those cases become anomalous and exposed to mitigation by the algorithm.

So if you’re looking to automagically weed out bot interference in a social feed, for instance, and you haven’t trained your anomaly detector on certain uses of language, then you can weed out sub-cultures without meaning to. People can report and complain, sure, but that all happens after their messages are stripped out of the feed, allowing other messages to dominate.
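To make that concrete, here’s a toy sketch of the failure mode (plain Python, not HTM; the data, features, and threshold are all made up): a detector calibrated only on the majority’s posting style flags a legitimate sub-culture while a well-tuned bot slips through.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Training" data: message lengths (in words) from the majority user base.
majority_lengths = rng.normal(loc=20, scale=5, size=10_000)
mu, sigma = majority_lengths.mean(), majority_lengths.std()

def anomaly_score(length: float) -> float:
    """Absolute z-score relative to the training distribution."""
    return abs(length - mu) / sigma

THRESHOLD = 3.0  # flag anything more than 3 sigma from the training mean

# A community that habitually writes very short posts was never in the
# training set, so its perfectly normal behaviour looks anomalous...
subculture_post = 4   # 4-word post, typical for that community
bot_post = 21         # a bot tuned to mimic the majority's style

print(anomaly_score(subculture_post) > THRESHOLD)  # True  -> wrongly flagged
print(anomaly_score(bot_post) > THRESHOLD)         # False -> slips through
```

The point isn’t the particular detector; any anomaly model inherits the same blind spots from whatever its training data happened to cover.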

Using anomaly detection on language alone is actually insufficient, as you’ll need quite a bit of context to tell a bot from one of those motivated Twitter grannies retweeting stuff all day.

I hope you too can relieve yourselves of being corralled, by Freudian convention, into using the tacky term “trauma” for thinking and writing about how we are, and evolved to be, a uniquely EAVASIVE species of fauna.

If anyone would dare to know the ÆPT meaning of EAVASIVE, then feel free to ask me. :blush:

Focusing on particular algorithms is important. There is also a broader consideration if we treat the entire domain of AI as an intelligent system. A small actor like Numenta or a smaller actor still like a single researcher is participating in the advancement of AI through the “evolution” of improved systems.

For example, it might not be a full-blown HTM algorithm that dominates a particular service, but it might be ideas from TBT that influence the next algorithm that does.

From this perspective, an issue with TBT is that it reads like an engineer’s opinion on morality and does not treat the topic seriously. For example, if Jeff thinks the AI principles outlined in the initial post are not valid, then TBT would have been the place to explain that. Instead, TBT looks like one isolated opinion about morality, and the bigger picture of coordinated, industry-wide policy and practices is avoided.

For example, the whole “intelligence is just like a screwdriver” style of argumentation is completely flawed from an ethical perspective. TBT is two-thirds about morality, and that is clearly far from Jeff’s area of expertise. These types of texts downplay the importance of ethicists and of taking philosophy seriously. It comes across as “nothing to worry about here, keep moving along” when there are already serious problems, such as autonomous killer robots in the marketplace, and those products can leverage any research released by any researcher.

It is not reasonable to just leave this up to others if you are working in this domain. And yes, it will slow down progress, and yes, that can be a good thing. We are not lacking technical solutions for nearly all of the major issues in society. Yet in the richest nation in the history of the world, you have people living in cardboard boxes on the street, no free healthcare, massive drug-addiction problems… The most likely outcome in our current moral climate is that AI serves to further separate the haves from the have-nots.

2 Likes

This is true. AI be damned, we could have fixed such issues decades ago without any fancy computers. We are living in a post-scarcity world. This is a fact: there is enough food in the world to feed every human currently alive. The scarcity that does exist today is mostly either superficial (things we want but don’t need) or artificial (we intentionally make scarcity). Alas, these are political issues and they probably won’t be helped simply by inventing an “ethical” AI. Although, I concede that an “unethical” AI could always make problems worse!

4 Likes

I am not sure. I’m in the middle of slowly exploring that topic; from my notes: Another related prediction is that AI would play the role of a cognitive prosthesis for humans (Ford et al. 1997; Hoffman et al. 2001). The prosthesis view sees AI as a “great equalizer” that would lead to less stratification in society, perhaps similar to how the Hindu-Arabic numeral system made arithmetic available to the masses, and to how the Gutenberg press contributed to literacy becoming more universal.

3 Likes

I don’t suggest this… but I do suggest that when people are creating things, they take a moment, step back, consider the negative ways their creation may be used, and sincerely propose mitigation steps… essentially, take some real, honest-to-goodness responsibility for the creation you have wrought. I’ve personally found the exercise to be humbling, even walking away from opportunities because I couldn’t come up with honest mitigations for the misuse.

I’m suggesting that’s simply not happening enough in AI or tech.

3 Likes

This is a good start but without an education in ethics (and that requires a broader education in the humanities) it is of little value because you will use the morality of a past age while creating the dilemmas of tomorrow.

Just to reply to this: I think the replies/comments here suggest that, as a community, we are in fact considering ethics. I’d suggest that ethics in AI is the low-hanging fruit of participation in AI. You don’t need to know how the systems work to be concerned and to put those concerns forward, and those of us who actually engineer these solutions need people continually watching over our shoulders.

I also think of this like medical devices (for which I’ve written a few firmwares)… all medical devices in the U.S. go through a pretty tough review process that weighs safety and the mitigation of errors in programming or analog circuitry. If you can’t demonstrate how you handle the cases where the device may cause harm, you can’t get approval for your device, and you can’t sell it… maybe there should be an equivalent for any commercially deployed AI system?

3 Likes

This is the type of thinking that drives me nuts :slight_smile: Would you expect to get a decent AI system from people bringing average knowledge of biology and computing to the table? If not, then why would you expect an average understanding of morality to be effective in dealing with issues that are much more difficult than building an AI? Most people do not have ANY education in ethics and assume that morality is the same as ethics.

This drives me nuts!

Weak AI is not Strong AI, and people don’t understand the difference. Arguments about one don’t necessarily apply to the other, because they are different things. What’s more, the people who do understand aren’t effective at communicating such nuances; they tend to just lump everything into two categories, “pros and cons” style.

I imagine this is what it’s like to be in the field of nuclear energy and have every other conversation derailed by “but Chernobyl? What about Chernobyl?” (which, by the way, I’m definitely guilty of doing).

3 Likes

Same here :smiley: , until I actually learned about nuclear energy and became a fan.

1 Like

I suspect that most people don’t care; for them, we have AI now and it is doing harm via the attention economy, information bubbles, autonomous weapons, political manipulation, huge spying networks in the USA and China (the Stasi could only dream of this!), etc… It is probably only people who are passionate about AI who think ethics is only relevant in relation to AGI. It is a nice excuse: AGI is far away, so I don’t need to do anything about it. The person building detonators for “Little Boy” probably thought the same way!

2 Likes

There is this infamous presentation Alexander Nix gave a few years before the scandal broke. (It’s kind of surreal that they were so overt about it).

According to Cambridge Analytica whistleblower Christopher Wylie, the system was based on the research of Dr. Michal Kosinski, who developed the algorithm for personality analysis. This paper, co-authored by Dr. Kosinski, explains the analysis of individuals’ personalities based on Facebook Likes.
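For what it’s worth, the modelling that paper describes is fairly conventional: reduce the sparse user × Like matrix with SVD, then fit a linear model from the resulting components to survey-measured traits. Here’s a minimal sketch of that kind of pipeline with entirely synthetic data (the dimensions and model choices are illustrative assumptions, not the paper’s exact setup):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for a sparse user x Like matrix (1 = user liked page).
n_users, n_likes = 5_000, 20_000
likes = sparse_random(n_users, n_likes, density=0.002, format="csr",
                      random_state=0, data_rvs=lambda n: np.ones(n))

# Compress each user's Likes into a low-dimensional profile.
svd = TruncatedSVD(n_components=100, random_state=0)
user_factors = svd.fit_transform(likes)

# Fit components -> trait on users who also took a personality survey.
openness_scores = rng.normal(size=n_users)  # fake survey results
model = LinearRegression().fit(user_factors, openness_scores)

# Any user's Likes can now be turned into a predicted trait score.
print(model.predict(user_factors[:1]))
```

Whether you call that “ML” or just regression is part of why it’s hard to say where the ML begins in this story.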

It’s not clear to me where ML is used to produce this algorithm or the resulting data. It’s also important to note that Dr. Kosinski never worked for or with Cambridge Analytica.

[Edit] I have heard reporters say that the authors of the app that collected the FB data later, after the study, passed the data on to Cambridge Analytica. If this is true, then Dr. Kosinski might not be totally innocent in this matter. But I don’t know if it is true.

3 Likes

In terms of AI, there is a case to be made that it requires intelligence to manipulate an election, and those algorithms could not work at scale without machines, so it is artificial (as in non-biological) intelligence. Maybe not Skynet :slight_smile: The data represented by the Likes is certainly curated by ML, because Facebook is using ML to orient the Likes. I have not looked into the details.

1 Like

Never underestimate the power of clustering algorithms… All the data points on you and millions of others are quite enough to identify groups of thinkers who share the same persuasions and interests, and for whom certain topics seem to reign supreme. :frowning:
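A toy illustration of that (everything here is made up): given per-user interest vectors, even plain k-means recovers the groups, and the centroids tell you exactly which topic to push at each cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Each row: one user's engagement with [politics, sports, parenting, crypto].
group_a = rng.normal([0.9, 0.1, 0.1, 0.1], 0.05, size=(500, 4))
group_b = rng.normal([0.1, 0.1, 0.9, 0.1], 0.05, size=(500, 4))
users = np.vstack([group_a, group_b])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)

# The centroids recover each group's dominant interest, which is all a
# targeting pipeline needs to pick messages for each cluster.
print(clusters.cluster_centers_.round(2))
```

Scale that up to millions of users and thousands of behavioural features, and you get exactly the kind of audience segmentation described above.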