That is burying one’s head in the sand.
Look at who participated in formulating the criteria. FLI is a much broader organisation than just AI. Neuroscientists are not ethicists and are typically not working in AI. Numenta is not doing neuroscience either - they look to the results of neuroscience for inspiration, and their experiments are with machine learning algorithms, not brains.
Ethics is already a major concern for AI. Ethical questions have been important in a great deal of AI research - consider how GPT-3 was not made available to the general public because of ethical concerns.
Rather than proposing another solution by a fiction author, why not take seriously the work of leading experts in the field of AI who took significant time to collaborate on these questions and share the results?
It certainly seems worrying. There are other aspects that worry me too: for example, the use of machine learning in social media, news article generation, individually targeted political marketing, reverse engineering of multiple data sources to compromise privacy, and the massive monitoring of individuals across the Western population by the largest user of computing power in the world - the NSA…
Contemporary ethics does not see these distinctions as being so independent. Most people who have not studied ethics somehow think they know all about it. Perhaps a similar attitude leads to trying to develop AI without studying the brain.
This is not a reasonable approach for technologies with huge potential negative impact. For example, the USA dropping nuclear weapons on civilian populations multiple times to understand the short- and long-term impacts of radiation poisoning was not a good solution. Taking ethics seriously and getting ethicists involved would be a first step. Based on this thread, it seems that just acknowledging ethicists exist would be a first step! Many people somehow think that their general knowledge of ethics is up to date - yet they would never believe the general knowledge of the average citizen provides a useful understanding of AI. And, arguably, ethics is much harder than AI.
I agree, and AI systems are already making decisions such as whether someone should be released from jail. There are typically still humans in the loop. But there are already drones on the market designed to kill people autonomously. This has already led to incidents such as a school bus being destroyed. To think that people on this forum don’t consider AI ethics an issue is mind-boggling.
Yes, this would, for example, be the intention of most state actors. You don’t need to invent this job; you just need to be smart enough to get hired to do it. Sadly, the competition is probably quite tough, so you might need to accept working on beneficial AI instead.