What will AGI tell us?
If you get far enough into it with the LLM crowd (the ones who insist on calling what they're doing “artificial intelligence,” as if they're talking about Data or HAL), they'll tell you it doesn't matter if we burn the planet achieving “AGI.” Why? Because it will hand us the solutions to the environmental disaster we're in right now. It'll give us the solutions to our political differences. It'll solve world hunger, pollution, homelessness, poverty, and all our other ills.
Let's say we give birth to this AGI. Let's say it has human-like or human-exceeding intelligence. Let's say it agrees to work with us on what we want it to do, and that what it asks for in return is something we're willing to give.
What makes them think the answers it gives us won't start with “you should have done the things you already knew how to do instead of burning everything to the ground making me”?