OK, I’ll bite - unless you are just carping from the cheap seats - in your view, what is it that those of us who have not read the same books as you should be doing differently? Should I use a different data set than MNIST to test my HTM models? Should I even be looking at HTM, or is there some better model that is not morally suspect?
If you ever do start coding the subsystems that could eventually become part of a functional AI, what is it that you will do differently from us poor souls afflicted by “unconscious incompetence”? What exactly is the woke way to do AI research at this very, very early stage of the art?
Rather than telling us how ignorant we are, what do you suggest (in positive, concrete terms that are directly usable) we should all be doing, and how does it differ from what we are doing now?
As far as “going slow” goes, China comes to mind. For centuries it was the center of advanced civilization. Its people were made into Luddites by royal decree (in the pursuit of stability), and the only real result was that when the rest of the world came banging on China’s doorstep, it brought war machines and steam engines far in advance of anything China could muster. Progress will continue no matter what some ethicist considers morally right. I have trouble seeing the moral high ground in bringing a knife to a gunfight, but that is what is called for by walking away from technology with significant war-fighting potential. This is not some future problem: as I write this, the eastern United States is still recovering from an act of cyber war - in this case, ransomware shutting down critical energy infrastructure. Add in AI and the stakes get higher.
Research in these areas is necessary for survival in a competitive world. See: Darwinism
Being totally ignorant by virtue of not having read the books you have, I still suspect that allowing yourself to be wiped from the face of the earth by not defending yourself has some moral issues of its own. With this in mind, how should research in these areas continue? Again, give concrete “do this” and “don’t do that” recommendations - not vague platitudes. I will add that recommendations like Asimov’s Three Laws are utterly worthless: nobody knows how to do that level of programming! Real recommendations have to be reducible to practice.