AI Safety and Value Alignment

I’m curious whether HTM theorists believe that value alignment will be necessary in an AGI designed from HTM principles. Much has been written in recent years about the potential necessity of such precautions for AGI, yet there’s very little discussion of the matter on this forum. At first I wanted to study neuroscience to help contribute to HTM, but I became apprehensive when I recalled the writings of Eliezer Yudkowsky and others from some years ago. What do you all think?

Thank you,


All the more reason for you to continue.

The work will continue no matter what - there is no turning away at this point. The technology is being driven by Moore’s law, and as a result, computers and theory are becoming capable of larger things.

We need people doing the development who understand and appreciate the larger picture of what they are doing.

How To Make A Nice Artificial General Intelligence: