No, but it would be interested in maximizing whatever goal parameter is asked of it. Usually one of the first things AI safety researchers say about their field is that they don’t believe an AGI will harm humanity intentionally. “The AI neither loves you nor hates you. But you are made of molecules that it can use for something else.” Offhand, I’m picturing an AGI that wants to harvest solar energy for its own computations and builds a Dyson cloud… and to save time and material, does so at roughly the distance of Mercury’s orbit, cutting Earth off from the Sun.
If it is only a story, then we have no certainty that an AGI will have the inclinations of God, because there’s no such thing as God, and so no certainty that such properties can even exist. And even if it isn’t a story, we have no way of knowing that an AGI would share God’s inclinations. How do omniscience and omnipotence necessarily lead to omnibenevolence?
I’m afraid I don’t know what you mean.
If the AI is so vastly superior to us that we pose no threat to it, then it stands to reason that we’re in no position to aid it either. So the AI would not be helping itself by helping us.