Sometime in June or July of 2023, a friend asked me whether or not, as someone who writes, I'm worried about large language models like ChatGPT. Specifically, whether they'll make my work and my somewhat creative efforts obsolete or redundant.
I didn't have an answer to the question then. I still don't. However, that question did spur me to give ChatGPT a go. That was easier than expected, since one of the homegrown tools at The Day Job™ has ChatGPT baked into it — the product teams that I work with, in case you're wondering, use ChatGPT to try to refine the titles and the text of software release notes.
Admittedly, the bar isn't exactly set at a lofty height when it comes to improving those release notes. ChatGPT's output is often better than what the product teams hammer out. It's not, however, at the level of what a professional technical communicator can craft.
Using that as my jumping-off point, I decided to see what ChatGPT could do with some of my (then) recent ideas for blog posts. I crafted fairly decent (in my opinion anyway) initial prompts. I even took time to carefully cobble together follow-up prompts to refine ChatGPT's initial output. Iterating over the results and all that.
For personal, opinion-type posts, what ChatGPT spat out was a failure. When I modified the prompts to tell ChatGPT to write a post in my style (pointing it to this blog, in case you're wondering), I wound up with what was essentially a weak pastiche.
That output brought to mind what the actor Fyvush Finkel said to writer Harlan Ellison on a panel show in the early 1990s. After Ellison finished one of his rants, Finkel looked at Ellison and bluntly said: "You just spoke for a minute and a half, and said absolutely nothing."
While ChatGPT's output is readable, that output is also pretty generic. It's homogeneous. It's vapid. That's especially true when those results aren't massaged by humans.
I also tried ChatGPT with more concrete, fact-based ideas for my blog covering free and open source software. The results were, again, bland and homogeneous. Plus, there were more than a couple of factual errors — like ChatGPT describing functions that a piece of software didn't actually pack.
The words that ChatGPT spits out are, for the most part, adequate. They're OK. They're just good enough. And that's a problem. More than a few people don't notice or just don't care about anything better when it comes to the words that pass before their eyes. They've become so inured to reading content that just good enough has become their norm.
When it comes to the written word, do we have to settle for what's just good enough? If I'm coming across as a bit of an elitist by writing that, so be it. But is it elitist to want something better than the just good enough? Is it elitist to prefer writing that someone sweated and poured some of themselves into? I don't believe it is. I'd rather read something that packs a certain personal style, a certain warmth, a certain verve and panache than consume content.
While I don't consider myself to be much of a prose stylist, I do think that I have a fairly unique voice and style. One that separates my work from that of other writers. I just don't get that from the output of ChatGPT or tools like it. From my experiments, I learned that writing from scratch is a lot easier than reworking what a large language model spits out. At least it is for me.
Writing online doesn't need to be letter perfect. It doesn't need to be finely hewn. It doesn't need to be exquisitely polished. But it should have the unique voice of the person crafting it. And you don't get that from a tool like ChatGPT.
I'm under no illusion that this state of affairs won't change. I'm under no illusion that the output from large language models like ChatGPT won't improve. And that change, that improvement will probably come around sooner than I expect. But for now, and well into the future, I'll do my own writing (personal and professional) rather than outsourcing it to a large language model.