Of LLMs and Men
19 April, 2025
When new technology comes along, people usually overreact: either they think it will change everything overnight, or they dismiss it as pointless. The truth usually lands somewhere in the middle.
Take CAD (computer-aided design) as an example. When it first appeared in the 1960s, engineers didn't stop using their fundamental skills. Instead, they used CAD to handle repetitive drafting tasks and to visualize complex designs more clearly. There was some pushback at first, especially from managers who didn't really understand how it worked. But as companies figured out how to fit CAD into their processes, it became just another tool in the workflow. And it didn't end up costing people their jobs: there's little evidence that CAD or CAM (computer-aided manufacturing) hurt employment.
This happens with almost every new tech trend (though AI itself isn't exactly new). Crypto is a good example. There's something real there (Bitcoin actually does something novel), but it gets lost in the copycat coins, scams, and pointless NFTs. For a while every startup was "blockchain-enabled," just as everything before that had to be "cloud-based." Now it's "AI-powered," and everyone's LinkedIn title says something about GenAI. The hype is even louder this time, the marketing is everywhere, and there's a ton of money behind it, so it's hard to avoid. It leaves a lot of people confused, or worried they're missing out if they're not part of it.
The talk around LLMs and AI today follows the same pattern. Some people call it revolutionary; others dismiss it as "glorified autocomplete." The real value is probably in the middle. The best software fades into the background and simply adds value. When TurboTax auto-fills your forms, nobody calls it "AI": it's just a tool that saves time and helps avoid mistakes so we can focus on other things. People talk about "agents" as if we'll soon have robot butlers, as if your laptop will somehow instruct your fridge to make you a sandwich by listening in on your stomach's activity or measuring your blood glucose as you type. I think agents will end up as something more practical: a tool that helps new parents find the right preschool for their child by pulling together reviews, safety reports, and comparison tables from the web, stripping out the fluff, and presenting the result clearly. The LLM isn't "thinking"; it's processing language really well, sorting through a lot of text and surfacing what's relevant. We still have to decide what's important.
The best tools don't get rid of complexity; they hide it. With CAD, engineers still had to make design decisions, but the software made the drawing itself faster and cleaner. LLMs will probably do the same for language: handling large volumes of text, removing fluff, and surfacing connections that might otherwise be missed. They aren't deciding what matters; they're just good at handling words and information.
This shift isn't really about agents, chatbots, or clever prompts. It's about software that fits into specific tasks and makes them better by being good at language. Eventually, nobody will ask whether "AI" helped with something, just as nobody asks whether a calculator did. It'll just be another tool: useful, sometimes buggy, but quietly making things a little better. Convenience might win the battle, but usefulness will win the war.
Just as CAD grew from a simple drafting tool into something engineers rely on every day, I think LLMs will move from basic text generation to being quietly built into workflows that make use of language processing behind the scenes.
Not a miracle, not a waste—just another step along the way.