I think you actually have to remove pressure to evolve general intelligence (GI).
Here’s my thinking. It’s a bit rough at this stage, so please be gentle.
TLDR: Give it a generalized reward mechanism for resolving a problem, and a “motivation” to treat certain environmental states as problems, e.g., threats, rivals, lack of food, lack of a mate.
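To make that concrete, here is a minimal Python sketch of the idea. Everything in it is a stand-in I made up (the state fields, the thresholds, the detector set); it just shows one generalized reward firing whenever any flagged problem state is resolved.

```python
from typing import Callable, Dict

# A "problem detector" is a predicate over the environment state
# that marks the state as a problem. These stand in for the primal
# threat/rival/food/mate motivations (all fields are hypothetical).
ProblemDetector = Callable[[dict], bool]

PROBLEMS: Dict[str, ProblemDetector] = {
    "threat":  lambda s: s["threat_distance"] < 5.0,
    "hunger":  lambda s: s["energy"] < 0.2,
    "no_mate": lambda s: not s["mate_nearby"],
}

def generalized_reward(prev_state: dict, state: dict) -> float:
    """One 'Aha!' reward channel: +1 for each flagged problem that was
    present in prev_state and is no longer present in state."""
    resolved = sum(
        1 for detect in PROBLEMS.values()
        if detect(prev_state) and not detect(state)
    )
    return float(resolved)

# Example: the agent moves away from a threat, so exactly one
# problem is resolved and the single reward channel pays out.
before = {"threat_distance": 2.0, "energy": 0.8, "mate_nearby": True}
after  = {"threat_distance": 9.0, "energy": 0.8, "mate_nearby": True}
print(generalized_reward(before, after))  # 1.0
```

The key property is that the reward is problem-agnostic: escaping a threat and finding food pay out through the same channel, which is what lets the mechanism generalize later.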
Longer version:
It seems to me that we achieved human GI by first learning to solve lots of Specific Intelligence problems. Then a few individuals, with plenty of time in which they did not need to flee, fight, or pursue food or sex, became habituated to the evolutionarily provided positive-reinforcement mechanism that delivers the Aha! reward when we resolve a query, answer a question, or solve a problem.
I think this signal probably arises from the mechanism for recognizing when we have met those basic threat/rival/food/sex needs, but I suspect evolution has further optimized the original signal wherever there was plenty of time free of primal motivation. Staying motivated and active when all basic needs are satisfied can be difficult, at least for some people, which is why we see people pursuing adrenaline rushes, eating past satiety, political power, sex addiction, substance addiction, and even abstract learning. All of these goal-seeking behaviors must be physiologically motivated.
Becoming junkies for that reward signal (which must exist, or we would never know when a goal had been achieved) created both the motivation and the evolutionary reinforcement to extend goal-seeking, and thus problem-solving, behavior from specific contexts to as many contexts as possible; solving and avoiding more problems is, after all, evolutionarily advantageous. We generalized the behavior so widely that we have come to speak of it as General Intelligence.
Some might ask how we get an AGI to recognize new problems to apply itself to. Well, our human GI recognizes new problems to try to solve because the evolutionarily enhanced reward signal has selected for exactly that behavior. Presumably, a reward signal for problem-solving in an AGI could perform a similar role.
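As a hedged sketch of how that might look, building on the reward mechanism sketched earlier: once no primal problem is active (i.e., the pressure has been removed), the agent invents new problems, self-set goals over its environment, and resolving them pays out through the very same reward channel. Again, every name and rule here is hypothetical, an illustration rather than a design.

```python
import random

class ProblemJunkieAgent:
    """Toy agent that treats any active 'problem' as the thing to fix
    and collects one generalized reward per problem resolved."""

    def __init__(self, primal_problems):
        self.problems = dict(primal_problems)  # name -> detector(state) -> bool
        self._counter = 0

    def maybe_invent_problem(self, state: dict) -> None:
        """If no problem is active (pressure removed), register an
        arbitrary self-chosen goal, with 'goal not yet met' as a
        brand-new problem."""
        if any(detect(state) for detect in self.problems.values()):
            return  # still under pressure; no idle time to get curious
        numeric_keys = [k for k, v in state.items()
                        if isinstance(v, (int, float))]
        if not numeric_keys:
            return
        key = random.choice(numeric_keys)
        target = state[key] + 1.0  # arbitrary self-set target
        name = f"goal_{key}_{self._counter}"
        self._counter += 1
        # The problem is "active" while the target is unmet; resolving
        # it pays out through the same channel as escaping a threat.
        self.problems[name] = lambda s, k=key, t=target: s.get(k, 0.0) < t

    def step_reward(self, prev_state: dict, state: dict) -> float:
        """Same generalized reward as before: +1 per problem resolved."""
        return float(sum(
            1 for detect in self.problems.values()
            if detect(prev_state) and not detect(state)
        ))
```

The agent needs no separate curiosity module; it is simply a junkie for the one reward signal and keeps manufacturing new problems to feed it, which is exactly the generalization step the argument depends on.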