By now, you’ve heard of ChatGPT. Your parents have probably asked you about it. This is probably not the first (or even fifth) post about ChatGPT you’ve scrolled past on LinkedIn today.
Simply put, ChatGPT is autocomplete on steroids – a tool that can write a story, or a cover letter, or hold a compelling conversation with you, while never understanding a single thing it says. My friend Todd recently asked ChatGPT to write a 10-line “Hello World” program in Python, and it obliged. My reply: “It’s even more amazing when you realize it not only doesn’t know how to code, it doesn’t actually understand what ten means.”
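For the curious, a ten-line “Hello World” of the kind Todd asked for might look something like this (my own reconstruction of the sort of padded output the model produces, not Todd’s actual transcript):

```python
# A deliberately padded, ten-line "Hello World" in Python,
# roughly the shape ChatGPT tends to produce for this request.

def make_greeting(name):
    # Build the greeting string from its parts
    greeting = "Hello"
    return f"{greeting}, {name}!"

def main():
    message = make_greeting("World")
    print(message)

if __name__ == "__main__":
    main()
```

The joke, of course, is that the model has no idea it was asked for ten lines – it has simply seen a lot of programs that look like this.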
Much of the feedback, both breathless support and snide critiques, feels lightweight. The side cheering, “this is going to unleash creativity,” feels like it doesn’t actually recognize the deep humanity inherent in connecting the self to the universe. The side putting ChatGPT on blast for not producing better results perhaps fails to consider that many of these shortcomings will disappear, regardless of whether the AI ever understands that water is wet.
ChatGPT is undeniably cool. It and similar large language models, like Google’s PaLM, are likely to have an even bigger impact on the world than the World Wide Web and smartphones. I am legitimately excited to see where this new world of automation takes us, which is good, because the genie cannot be put back in the bottle.
So maybe we should contend with what we’ve wrought.
1) The Heat Death of the Universe Problem
I am pulling this article out of my brain one tortured line at a time because I like the craft of writing. It usually takes me about 60 minutes to pound out a thousand words about something that 1) I understand, 2) have thought deeply about and 3) have already come up with a high-level structure for communicating.
Sadly, Google doesn’t reward occasional output of high-quality content. Google rewards *massive output*. One recent article suggests that to show up in search results, you should plan on publishing 5,000 words a day for at least six months. That’s the equivalent of 10 to 15 novels in just a half year. Most people have neither the time, the ability, nor the information to share, which is why the same old voices always seem to appear in search results – the same content farms that figured out how to outsource and scale up vapid content creation, slightly rewriting Reddit posts and listicles.
Now anyone in the world is going to have the ability to ask a bot to write dozens of articles every day, in an arms race to produce enough words that Google decides to elevate them in an index. Imagine a world in which it takes longer to write the query, “Please write a balanced article about the opportunities and risks of ChatGPT from the point of view of content creation moving forward,” than it does for the AI to respond. (See the bottom of this article – total effort, two minutes.)
Finding voices and perspectives that matter will become infinitely harder. Using audio or video as a signifier for quality will offer a brief respite at best, because we’re already seeing impressive iterations on photo-, audio- and video-generating AIs as well.
The net of this will likely be a retreat back to voices that have already established themselves. I will always read Casey Newton with regard to tech and Matt Levine for finance, but how will the next generation of thinkers ever ascend?
2) The Human Centipede Problem
What do LLMs train on? Web content. Why? It’s a vast amount of information in an already digital format. Training on podcasts, TV, books and the sum total of historical human creation requires translating information from one format to another – you have to employ OCR to scan a book, and OCR is both slow and inexact. The same goes for speech-to-text. These tools will get better over time, but the primary grist for these models will likely forever be web-based articles.
As discussed above, we’re about to see a tsunami of generated content hitting the web, the flaccid output of a golem that neither reasons nor feels, but man can it pair words together.
Do you think those articles will be generally better or worse than what was previously fed into the model? Is ChatGPT going to create articles that offer any new spark of insight, or will it say the same things that are already out there, only more blandly?
Now take a step forward in time and let’s train the next version of ChatGPT… on the output of the last version. No new insights. No new perspectives. We’re making copies of copies of copies, and eventually it all gets very, very beige and watery.
The net of this will likely be a continual dumbing down, or at least homogenization, of thought on the web. I don’t want to say we’re heading towards an Idiocracy timeline, but I wouldn’t argue against investing in Brawndo. The only way we improve our corpus of information is through training on new ideas, which brings us to…
3) The If a Tree Falls in the Woods Problem
The current response to the above two problems is, “ChatGPT Search For Everyone!” Who cares if there’s an Everest-sized pile of throwaway content when you can just ask a question and get an answer?
There are loads of issues with this idea, but I want to zero in on what I believe to be the biggest, namely, who is going to create content when it is just slurped up into a database and regurgitated without attribution?
Why am I writing this article right now? Perhaps I’m simply doing it for the benefit of self-reflection, assembling my thoughts so I better understand how I view the world.
I am looking to further establish myself as someone other people want to connect with – perhaps to share their own ideas, to work together, to hire me as a coach, or a dozen other reasons. In every single case, the words I create are the initial step in the logical chain of events, the calling card that initiates a round-trip reaction.
When that opportunity is gone, when the impact of this article is limited to minutely changing the orientation of a few bits in ChatGPT’s brain, why write at all? I am concerned that, despite the ongoing refrain of “true artists will always create,” we are about to kill the motivation for millions of people to participate in public discourse.
AI-based search is the death of content creation. Want proof? Ask Alexa a question and tell me whether you like it more when she gives you the answer or a list of web pages. For everything that is wrong with Google Search (and Facebook, Twitter and the few other arbiters of third-party content), it still sends people to the work of other people. AI-based search does not.
For the sake of consistent formatting, the net of AI-based search is likely a world where people stop creating content because it won’t be tied to their individual brands – and that connection is ultimately why we communicate: we want to be seen and heard.
So, are we fucked? I hope not, and I choose to believe not. As F. Scott Fitzgerald once wrote, “The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless, yet be determined to make them otherwise.”
As we step into this new world where we are no longer the dominant creator of content, I hope you’ll spend the occasional cycle optimizing for an outcome that benefits all of us in the long term, and not just a few in the short.
— Eric Marcoullier
Here’s what ChatGPT wrote, btw:
ChatGPT is a powerful language generation model that has the potential to revolutionize content creation. The model is able to generate human-like text, which can be used to create a wide range of content, from articles and blog posts to scripts and even poetry.
One of the main opportunities of ChatGPT is its ability to save time and resources for content creators. With the help of the model, writers can generate high-quality content quickly and easily, without the need for extensive research or writing skills. This can be especially beneficial for businesses and organizations that need to produce a large amount of content on a regular basis.
Another opportunity of ChatGPT is its ability to increase the diversity of content. The model can generate content in various styles and on various topics, which can help to reach a wider audience and cater to different preferences. Additionally, ChatGPT can also be used to create content in multiple languages, which can be beneficial for businesses and organizations that operate in multiple countries.
However, there are also risks associated with the use of ChatGPT. One of the main risks is the potential for the model to produce low-quality or plagiarized content. If not used properly, ChatGPT can generate text that is not original or that is full of errors. Additionally, the model may also produce text that is biased or offensive, which can be damaging to a company’s reputation.
Another risk is the potential for ChatGPT to take away jobs from human content creators. With the ability to produce high-quality content quickly and easily, ChatGPT may be seen as a replacement for human writers, which can have a negative impact on the job market.
Overall, ChatGPT has the potential to revolutionize content creation and bring many benefits to businesses and organizations. However, it is important to use the model responsibly and to consider the potential risks. It is also important to note that ChatGPT should be used as a tool to assist human content creators, rather than replacing them.