I’m curious whether HTM theorists believe that value alignment will be necessary for an AGI built on HTM principles. Much has been written in recent years about the potential necessity of such precautions for AGI, yet there’s very little discussion of the topic on this forum. I originally wanted to study neuroscience so I could contribute to HTM, but I became apprehensive when I recalled the writings of Eliezer Yudkowsky and others from some years ago. What do you all think?
Thank you,
-Joel