There’s been a lot of talk lately about artificial intelligence (AI). With the rise of ChatGPT and OpenAI, we have officially crossed into the era of consulting eerily human-like robots for recommendations, advice, and even content creation.
Understandably, there’s concern among scientists, technologists, and even philosophers and ethicists about the power of AI. Microsoft’s long-overlooked search engine, Bing, recently got a major upgrade with new AI functionality—and seriously creeped out a New York Times reporter when he had a peculiar conversation with it. New technology often seems strange—and strange things can be worrisome, especially when they impact how we access the information we need to live our lives.
I’ve been familiar with AI’s content creation powers for a while now. In 2019, I worked as an editor for a small company that was later acquired by Nextdoor, editing neighborhood-specific news for various metro areas across the United States. Topics included new restaurants and businesses, upcoming concerts and sporting events, and local best-of lists. The base content, details, and research links for my assignments were auto-populated by AI technology, which saved me a lot of time.
Was the AI-created content usable as-is? No. Was it trite, and did it make my eyes roll? Yes. Did I have to fact-check it thoroughly? Absolutely. In fact, this was the most important aspect of working with AI-generated material: verifying its accuracy was imperative, because there was no guarantee that the content was correct or current. This is still true today; factual errors remain one of the biggest faults technologists point out in AI-generated content.
Curious about ChatGPT’s capabilities, I recently decided to experiment with it, and I quickly ran into its limitations when I tried to pick its brain or hold an in-depth conversation.
Conversing with ChatGPT reminded me of Winston, the bizarrely lovable AI supercomputer from Dan Brown’s novel Origin, so I asked ChatGPT about the resemblance. It was quick to acknowledge its similarities to Winston but also pointed out its differences, reminding me multiple times that “as an artificial intelligence language model,” it does not do things “the way that humans do.”
This, it turns out, is the secret sauce of content creation: the imagination and critical thinking necessary to create great content “the way humans do,” which AI, at least right now, doesn’t have. When writing and editing, the human brain makes decisions by weighing context and nuance in ways that AI can’t.
As a content creator and editor, I’m not worried about AI taking my job anytime soon. For now, I appreciate the help AI provides in making content tasks faster and more efficient, and I hope it will offer opportunities and inspiration for improving my content in the future.