I agree we won’t learn how intelligence works by simulating details and seeing what pops out. Not for at least a few decades.
I still think neuroscience is very important. Ideas about brain-based AI can easily be partially wrong, and a lot of the progress in HTM has come from finding and fixing that partial wrongness. It's an inevitable bottleneck, and the neuroscience literature is always there as a resource for working through it.
It’s more of an analogy than a metaphor. Similar comparisons can be made for sense organs, eyes, and muscles. In each case we have created a science-based, manufactured analogue of a biological capability, and in each case the two have little in common other than serving a similar purpose.
Indeed, the opposite is true: it’s hard to think of any instance where a useful outcome has come from closely adhering to a biological model. If AI does go down that route, it will be the first time ever.