EnbySpacePerson

LLM

A white-bodied quadcopter floating in front of a line of trees. The copter's camera is pointed at the viewer.

Image by anne773 from Pixabay

If you get far enough into it with the LLM crowd (the ones who insist on calling what they're doing “artificial intelligence” as if they're talking about Data or HAL), they'll tell you it doesn't matter if we burn the planet achieving “AGI.” Why? Because it will give us the solutions to the environmental disaster we're in right now. It'll give us the solutions to our political differences. It'll solve world hunger, pollution, homelessness, poverty, and all our other ills.

Let's say we give birth to this AGI. Let's say that it has human-like or human-exceeding intelligence. Let's say it agrees to work with us on what we want it to do, and what it asks for in return is something we're willing to give.

What makes them think that the answers it gives us won't start with “you should have done the things you already knew how to do instead of burning everything into the ground making me”?


An image of five robot toys on a white background.

Photo by Eri Krull on Unsplash

This article isn't about the ethics of how models are trained or the ecological consequences of their use. DeepSeek shows that models can be generated on lesser hardware and that massive data centers with huge ecological consequences aren't required to operate them. DeepSeek doesn't demonstrate the viability of ethically producing a model.

DeepSeek also doesn't build a tool that actually does the kinds of things real people would need it to do in order to be effective.
