That is kind of Numenta’s thing!
All roads lead to Rome!
Or in this case, lizard brains.
Don’t be too hasty in throwing out a system that has proven its worth in creating HGI.
The built-in programming had to include fear of the lethal aspects of the environment, and a healthy fear of things with poisonous bites and stings, of strangers and deformed members of your own group (collectively, “the other”), of heights, and the like makes a great deal of sense.
You can retain the instincts that would work to implement the three laws, curiosity, and social interactions, and perhaps throw in a few new ones for good measure.
But the pallium is birds’ cortical-like structure, even though it’s old.
Again, I’m not saying these functions are useless or aren’t worth studying. I’m just saying that reproducing the old brain would be an inefficient (and dangerous) way to get these functions.
We need to get cortical-like processing in any case, and because it’s capable of working with any patterns, why would we make this project 100 times more complicated by also reengineering the legacy part?
Out of curiosity: how do you intend to manufacture judgement embedded in the data?
Do you intend for your AGI to figure everything out from some sort of first principles? If so, I would be further intrigued by how you come up with them.
By the way, here is some more info about them being well equipped in that area.
Neuronal densities and relative distribution of neurons in birds and mammals. (A–C) Neuronal densities in the pallium (A), cerebellum (B), and rest of the brain (C). Note that neuronal densities are higher in parrots and songbirds than in mammals (for statistics, see SI Results). (D–F) Average proportions of neurons contained in the pallium (D), cerebellum (E), and rest of the brain (F). Note that increasing proportions of brain neurons in the rest of the brain in parrots are attributable specifically to increasing numbers of neurons in the subpallium (Fig. 5). Data points representing noncorvid songbirds are light green, and data points representing corvid songbirds are dark green. The fitted lines represent RMA regressions and are shown only for correlations that are significant (r² ranges between 0.389 and 0.956; P ≤ 0.033 in all cases). (G) Brains of corvids (jay and raven), parrots (macaw), and primates (monkeys) are drawn at the same scale. Numbers under each brain represent the mass of the pallium (in grams) and total numbers of pallial/cortical neurons (in millions). Circular graphs show proportions of neurons contained in the pallium (green), cerebellum (red), and rest of the brain (yellow). Notice that the brains of these highly intelligent birds harbor absolute numbers of neurons that are comparable to, or even larger than, those of primates with much larger brains. (Scale bar: 10 mm.) Data for mammals are from published reports (for details, see Methods). CL, pigeon; DN, emu; GG, red junglefowl; TA, barn owl.
We can consciously reflect on any known subconscious process. That’s a big part of what therapists help their patients do; we can also describe such processes with language, draw them as charts, and make predictions about our own or others’ behaviour. At the end of the day, developed empathy is an important evolutionary achievement, and we use empathy’s inferences as a behaviour driver.
So, it’s easy to imagine a cortical-based agent that has all the behaviour patterns required for supporting homeostasis, handled the same way as any other patterns.
They can be hardcoded (but using cortical-like representation), learned by observing examples, or derived from the given constraints and abilities. In a real system, some combination of all three approaches is the most practical direction.
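The idea of keeping homeostatic drives in the same representation as every other pattern can be shown with a toy sketch. All names here are hypothetical; it only illustrates one uniform pattern store holding hardcoded, learned, and derived values side by side, not any actual Numenta or cortical-model API.

```python
# Toy sketch (hypothetical names): homeostatic drives represented as
# ordinary patterns in one uniform store, alongside learned and derived ones.
from dataclasses import dataclass


@dataclass
class Pattern:
    name: str      # e.g. "avoid_overheating"
    value: float   # how strongly the agent is driven toward (+) or away (-)
    source: str    # "hardcoded" | "learned" | "derived"


class PatternStore:
    """One uniform store: homeostatic drives are not a separate subsystem."""

    def __init__(self):
        self.patterns = {}

    def add(self, p: Pattern):
        self.patterns[p.name] = p

    def strongest(self) -> Pattern:
        # The agent acts on whichever pattern currently carries the most
        # value, regardless of how that pattern got into the store.
        return max(self.patterns.values(), key=lambda p: abs(p.value))


store = PatternStore()
store.add(Pattern("avoid_overheating", value=-0.9, source="hardcoded"))
store.add(Pattern("seek_social_contact", value=0.4, source="learned"))
store.add(Pattern("conserve_energy", value=0.6, source="derived"))

print(store.strongest().name)  # avoid_overheating
```

The point of the sketch is that `strongest()` never inspects `source`: once a drive is expressed in the common representation, the selection machinery treats it like any other pattern.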
I think empathy is a more or less direct conversion of self-values to other-values, secondary to (cortical) recognition of self-patterns in other-patterns. The problem is that the human “self” is not set in stone, even if AGI could (cortically) recognize it as stable within some time frame. Over a longer time frame, it will only recognize a pattern in which such a self changes, AKA conditioning. That may lead to empathy with something you don’t yet recognize as self, although you will in the future.
I wouldn’t say conversion: you just add something that is valuable to somebody else at the moment to your own space of reflection and attribute some value to it, to get a perspective on that person’s potential behaviour at the next moment. Perhaps that’s not how we feel it, but it is the purpose of its evolution and its mechanics.
We and other animals (including birds) also have general reflection, which is how we form our own behaviour patterns based on observation of others. In this case, we not only pretend to have somebody else’s goal/value, but are also pushed to adopt it, because it’s part of the behaviour pattern. However, because cortical representation is abstract, with some extra effort we can replace the adopted value or goal with another one.
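The value-adoption mechanics described above can be sketched in a few lines. This is a hedged toy model with invented names: empathy as temporarily adding another agent’s value into your own value space, then using the augmented space to anticipate (or choose) the next action.

```python
# Toy sketch (hypothetical names): empathy as temporarily adopting another
# agent's value into your own value space, without overwriting your own.
def predict_next_action(own_values: dict, observed_other_value: str,
                        candidate_actions: dict) -> str:
    # Copy our value space and add the other's value with some weight.
    values = dict(own_values)
    values[observed_other_value] = values.get(observed_other_value, 0.0) + 1.0
    # Pick the action that best serves the augmented value space.
    return max(candidate_actions,
               key=lambda a: sum(values.get(v, 0.0) for v in candidate_actions[a]))


# Each candidate action is tagged with the values it serves.
actions = {"offer_food": ["satiation"], "keep_food": ["own_satiation"]}
choice = predict_next_action({"own_satiation": 0.5}, "satiation", actions)
print(choice)  # offer_food
```

The adopted value is added to a copy of the agent’s own space, which mirrors the point above: the substitution is reversible precisely because the representation is abstract and the original values are untouched.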
I think the purpose is propagating your genes, even if they happen to be in someone else’s body. The question is how you decide whom to be empathetic to, because we are obviously very selective about that. Empathy is a recognition and promotion of affinity; evolution is about the selfish gene, not the selfish you. It’s a distinct component of motivation, although always mixed with instrumental values in specific cases.
Propagating genes is a driver in this context; evolution itself doesn’t have a purpose, it’s a self-organized phenomenon.
I talked about purpose as the reason why this pattern became evolutionarily entrenched.
Not always. For instance, when somebody hits his or her finger with a hammer, you spontaneously empathize with the pain.
But in general, sure, it’s placed in the context of the mind’s different mechanics, and there is a range of reactions in different contexts.