Unfortunately I think that contemporary ethics does not give us any hope of an ultimate fitness function. The assumption that one exists misleads the technologist. This does not mean that the alternative is moral relativism; that is the other side of the same coin. The history of morality is not one of monotonic progress toward a predictable outcome. It is the process that matters, and the process is hidden by the immoral outcomes that our current moral stance permits. The first crime is to treat ethics as a simple topic that can be separated from the tools we are building. Technologists who are not well educated in the humanities should be restricted to working on simple machines.
Screw contemporary ethics. There are trade-offs in everything we do, and we can’t make rational decisions without a common denominator to measure them in.
Education is not enough; you have to make sense out of it. More sense than anyone ever made. Especially in the humanities, which are mostly heat and very little light. I was a social science major in a past life, so I know exactly what kind of swamp that is.
I’m hearing “I can’t understand it, so simplify it”. This is a symptom of our morality, not a cure. Simple models of complex systems can be useful, but they are also guaranteed to have unpredictable side-effects.
Sure. It should be clear that I’m not claiming “read a book and you’re done”.
Certainly “social science” is an oxymoron, given the limited scope of current science. It leads to the same sort of “simplify it” mentality that we see in economics and psychology when they try to attach themselves to the science department. Maybe you learnt more than you realize in your social science major.
Personally I see great value in education outside of the universities. Having your world model radically shifted as an adult is perhaps the most valuable experience of a lifetime. But the moral straitjacket of wanting to live in a simple world is hard to break out of.
Education in the humanities suits you for nothing other than gratification and the teaching of others of similar persuasion. It solves no problems, generates no wealth, and feeds, houses and clothes no-one.
Science extracts knowledge from the things around us, engineering shows how to put it to use, and technologists create the tools that feed, house and clothe us all. It’s been that way since the dawn of time.
There is no ‘ultimate fitness function’, by definition. An AGI is a machine: a piece of software executing an algorithm that will let it do things that for now are the sole province of animals (including ourselves). Either AGI will be created by science, engineering and technology, or we shall try and fail, but either way ethics and morals will play no part in this process.
But ethics, morals, laws and similar human constructs are critically important for those of us who may in the future create, own or control these machines. Guns have no morals, but people use them to kill other people, and that is a serious issue. AGI likewise.
This is a great approach for building simple machines. Unfortunately, repeating ideas that were already irrelevant in the 20th century does not make them relevant in the 21st. But you are of course free to do so - or are you? Maybe the belief that there is nothing to gain by questioning your assumptions is the very problem rather than the answer. Now I wonder who could have put that idea there; certainly they had your best interests at heart - right?
No, you can’t understand it. By your own admission. Understanding is simplifying, AKA compression. You are not saying anything constructive; any bum on the street can moralize until the sun burns out.
I found the notion of Pascal’s wager applied to AI interesting. Leben concluded that “…Pascalian reasoning will lead to the conclusion that creating AI is always a better choice for humans.” Frankly, I don’t see how you could keep anyone from building an A(G)I even if you wanted to. I keep wondering if DARPA has one sequestered in a hidden, black ops server farm somewhere.
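To make the Pascalian reasoning concrete, here is a toy expected-value sketch in Python. The probabilities and payoffs are invented for illustration, and the “infinite upside” is my reading of the wager’s structure, not a claim about Leben’s actual argument or numbers:

```python
# Toy sketch of Pascalian reasoning applied to building AI.
# All numbers are hypothetical; the "infinite upside" assumption
# is the characteristic Pascalian move, not Leben's data.

P_SUCCESS = 0.5  # hypothetical probability that AI turns out well

# Payoff for each (action, outcome) pair.
payoff = {
    ("build", "good"): float("inf"),  # unbounded benefit (the wager's key assumption)
    ("build", "bad"): -1e6,           # large but finite catastrophe
    ("abstain", "good"): 0.0,         # status quo either way
    ("abstain", "bad"): 0.0,
}

def expected_utility(action: str) -> float:
    """Probability-weighted payoff of an action."""
    return (P_SUCCESS * payoff[(action, "good")]
            + (1 - P_SUCCESS) * payoff[(action, "bad")])

for action in ("build", "abstain"):
    print(action, expected_utility(action))  # build -> inf, abstain -> 0.0

# As long as the upside is infinite and P_SUCCESS > 0, "build" dominates
# for any finite downside -- which is also why the argument is suspect:
# the conclusion is insensitive to how bad the bad outcome actually is.
```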
Morality has impacted the development of technology.
For example, https://www.newscientist.com/article/mg13518370-300-heisenbergs-principles-kept-bomb-from-nazis “physicists themselves ‘had consciously striven to keep control of the project’ and avoided work on a bomb, preferring to work on reactors and cyclotrons.” Of course the same can’t be said for the Americans: they couldn’t build a bomb fast enough and leapt at the opportunity to test it on civilians.
An example in America was the ban on gain-of-function research (“US government lifts ban on risky pathogen research”). The ban only lasted three years, and Fauci appears to have funded overseas work that would have been banned in the US.
On December 4, 2014, the General Assembly of the UN passed two resolutions on preventing an arms race in outer space:[22]
The first resolution, Prevention of an arms race in outer space, “call[s] on all States, in particular those with major space capabilities, to contribute actively to the peaceful use of outer space, prevent an arms race there, and refrain from actions contrary to that objective.”[22] There were 178 countries that voted in favour to none against, with 2 abstentions (Israel, United States).[22]
The second resolution, No first placement of weapons in outer space, emphasises the prevention of an arms race in space and states that “other measures could contribute to ensuring that weapons were not placed in outer space.”[22] 126 countries voted in favour to 4 against (Georgia, Israel, Ukraine, United States), with 46 abstentions (EU member States abstained on the resolution).[22]
At least the USA is consistent.
There could be shifts in moral values. Not so long ago it was illegal to be gay in many countries. Until 1965, a woman in France could not open a bank account.
A general trend toward caring about the environment might start if things get bad enough. When people realize what technology from the industrial revolution has done to their life expectancy, they may look differently at the engineers busily building ever more destructive technologies.
If, for example, the USA’s AI engineers could find the moral strength of the German physicists under the Nazis, then maybe we could avoid many problems.
Adopting a morality of care (see contemporary feminism) would lead to a very different sort of AI. Most people just follow the dominant mainstream view (as we see in this thread), so if the mainstream view tells them autonomous AI is evil they will simply do as they are told. I mean, it’s not like they can think critically or ask questions.
Oh💩! Getting close to the end and my work remains unfinished. I guess if I succeeded in creating a sentient machine it could find a way to extend my life. Of course, it could decide to eliminate its creator. What an existential dilemma.
Not exactly; I’m thinking more about ecosystem collapse, nuclear destruction, mechanized genocide, and autonomous weapons. For sure there are aspects of modernity to keep; no point throwing out the baby with the bathwater. However, why you would want to continue bathing in a cesspool is beyond me.
Obviously there are contemporary discussions in ethics that point to different (arguably much better) moral frameworks. The desire to ignore this is just one side effect of the moral bypassing that the current system permits.
People who believe they understand human nature, and that human nature does not change, are only limiting themselves (fortunately).
The naive belief in technology as inevitable is wrong, or we would end up with every person having a “destroy the world” button, and that is obviously not going to happen. The naivety of technologists works to put that button in the hands of someone else (who is often sociopathic, but that is not the technologist’s concern). It seems technologists are kept dumb so they’ll keep doing that. That might work well for simple systems, but not for complex systems (see complexity science).