For most people, LLMs are useful for two things: as a search engine, or as a way to work out your own ideas. Beyond those, they can be actively harmful; think of using them for medical advice or therapy. And everyone seems to hate the half-baked AI features infecting everything they use.
The people who do like it are executives. These ghouls hope it will somehow let them fire and replace people. The technology clearly isn’t at that level yet, but in the short term it lets them pretend it is. They’ll fire people anyway. They’ll make everyone left work harder to make up for it.
What the rest of us mostly hear is that AI is going to take our jobs. Are people supposed to be excited by that? OpenAI and Anthropic don’t want to make the world a better place; they want to sell a product. They don’t care about the damage it causes to the rest of us. Executives care about ROI. Maybe they’ll write a letter or shed some crocodile tears on a call before rushing off to a yacht trip in Monaco.
My question is this. What if we treated each other with empathy? What if our goal as humans was not acquisition and expansion, but treating each other kindly and with respect? What if our packages showed up in three days instead of two, but no one was pissing in bottles? What if our number one priority was each other, with speed and efficiency coming second?
I think that world might be better for everyone — including all the little Jack Welches running the world.