My "Thousand Brains" book review

I assume you think I am simply anthropomorphizing AI, which I am not. Morality is a human concept; it is humans who decide whether an AI is good or bad. That does not make the AI human. Humans can only judge the morality of an AI based on human conceptions of morality.

Your “What if it just thinks?” is an anthropomorphic statement about AI. Whether the AI thinks, has a mind, or has free will is irrelevant to the moral consequences.

Nobody I know is preparing for a “paperclip god,” and I have not seen anyone on this thread preparing for one. That is a distraction from the discussion; there are already concrete ethical dilemmas regarding the use of AI. It would be outright stupid to wait for AGI before worrying about them. AI is already having major impacts on society, and the ethical issues were not considered in advance.

The debate is not about what a superintelligence will or won’t do. That is a relatively insignificant issue compared with the major moral concerns that are already here, for example, what to do about autonomous weapons. AI engineers need to be educated more broadly than previous generations of engineers.

This thread [edit: actually I was thinking of another thread on ethics, not this thread] is full of unconscious incompetence - people who know so little about the topic that they don’t realize they know nearly nothing about it. Their own opinion seems just as informed as anyone else’s because they do not even understand the problem. To put this in perspective: asking a philosophy student to implement an AI without first learning any engineering would produce an AI about as effective as an engineering student’s effort to develop moral practices without first learning any ethics. The unfortunate difference is that we are protected from the incompetent engineer’s AI, but we are not protected from the incompetent ethicist’s morality.