OPINION: OpenAI’s GPT-3 text generator is impressive, but we should be wary of such technology

(Illustration of a text-generating robot taking over human jobs. Megan Li • The Student Life)

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

The excerpt above, from an op-ed published in The Guardian last month, was not written by a human being but by a robot: OpenAI’s GPT-3 text generator, a language model with 175 billion parameters. Although editors at The Guardian pieced the op-ed together from eight GPT-3-generated essays, they stated that editing it was no different from editing a typical op-ed written by a human, and that it took less time.

Since the May publication of a research paper about GPT-3, the text generator has produced articles indistinguishable from those written by human beings, along with poems, songs and other creative works. The largest language model to date, GPT-3 produces new text by estimating how likely each word is to appear next in a given context. Additionally, GPT-3 needs no task-specific fine-tuning, meaning it can perform a wide variety of tasks simply by taking in different prompts.
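To make that mechanism concrete, here is a deliberately tiny sketch in Python of the core loop behind text generators like GPT-3: count how often each word follows a given context, then repeatedly sample a likely next word. Everything here is a toy stand-in; the three-sentence corpus is invented for illustration, and GPT-3 replaces these simple word-pair counts with a neural network of 175 billion parameters conditioned on long stretches of text.

import random
from collections import Counter, defaultdict

# Invented toy corpus; GPT-3's real training data (e.g., Common Crawl)
# spans hundreds of billions of words.
corpus = (
    "the robot writes the article . "
    "the robot reads the internet . "
    "the journalist writes the article ."
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for context, nxt in zip(corpus, corpus[1:]):
    following[context][nxt] += 1

def next_word(context):
    # Sample the next word in proportion to how often it followed `context`.
    counts = following[context]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate text one word at a time: the same loop a language model runs,
# just with far cruder likelihood estimates.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))

The same loop, handed a different opening word, produces a different continuation, which is a miniature version of why GPT-3 can switch between tasks just by being given a different prompt.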

While GPT-3’s accomplishments are impressive, there’s a lot to consider with respect to the societal implications of such technology — despite the AI’s quite convincing claims in The Guardian that we have nothing to worry about.

As an amateur journalist, when I first heard about GPT-3’s near-human writing abilities, I immediately thought about what that might mean for the future of journalism. With the ever-shortening news cycle, journalists must break the news minutes after important events occur. But what if AI could report the news faster? Rather than relying on a living, breathing journalist, it might be more efficient to feed a text generator a prompt and some facts and then have it spit out a clean article in just a few seconds.

Sure, journalists could still write long-form pieces. But even then, we could just feed GPT-3 a collection of facts and quotes to weave together into a coherent story.

And it’s not just writers who have reason to fear — it’s personal assistants who draft emails that could easily be pieced together with text generators, technical writers who put together documentation that could be written by AI, even software engineers who write code that could be generated with technology like GPT-3.

What’s more, GPT-3 is just one of many computational tools that employ natural language processing to interpret and produce written and spoken communication. I’ve recently noticed that Google’s Smart Compose tool has become remarkably good at suggesting what I should type next in my emails, essays, articles, you name it.

I’m no Luddite — though some of my friends might believe otherwise — but perhaps we should think a little bit more before indulging ourselves in the convenience of technologies like GPT-3 and Smart Compose.

Language models are trained on existing datasets, like Common Crawl’s web archive for GPT-3 and collections of phrases and sentences for Google’s Smart Compose. Because Smart Compose suggests the next words to type based on how it has seen words used before, it could detract from the originality of work created in G Suite, which is especially concerning given that G Suite has more than 2 billion monthly active users.

Tools like Smart Compose may also inadvertently cause pieces of writing to be flagged as plagiarized, because they offer every user suggestions drawn from the same word associations learned from past data. If two students in the same English class are writing about the same novel, for instance, their essays may end up containing a plethora of phrases in common thanks to the suggestions of technologies like Smart Compose, as the sketch below illustrates.
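To see why shared suggestions nudge different writers toward the same wording, consider this hypothetical Python sketch of a Smart Compose-style completer. The phrase table and counts are invented for illustration; Google has not published Smart Compose’s internals, which rely on learned language models rather than a fixed list. The point is only that any suggester that ranks completions the same way for everyone will hand every user identical words.

# Invented usage counts standing in for patterns learned from data.
SEEN_PHRASES = {
    "indistinguishable from": 120,
    "indistinct outline": 3,
    "in the novel, the protagonist": 45,
    "in the end": 80,
}

def suggest(prefix):
    # Return the most frequently seen phrase that starts with `prefix`.
    matches = [(count, phrase) for phrase, count in SEEN_PHRASES.items()
               if phrase.startswith(prefix)]
    return max(matches)[1] if matches else None

# Two students typing the same five letters get the same completion,
# so their essays drift toward the same phrasing.
print(suggest("indis"))   # -> indistinguishable from
print(suggest("in the"))  # -> in the end (the higher count wins)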

I’m not saying that we shouldn’t use these tools at all, and I also have to admit that while writing this piece, I often accepted the suggestions Smart Compose offered. Most of the time, Google’s suggestions were spot on, and it was just so convenient; I only had to type five letters before Google realized that I wanted to write the phrase “indistinguishable from.”

Furthermore, natural language processing can help foster connections that would not otherwise exist. Google Translate helps break down language barriers, and the SignAll sign language translation platform facilitates communication between people who are deaf and those without knowledge of sign language. AI can also be incredibly useful for mundane tasks like scheduling, form auto-population and spell-checking.

However, we should think about where to draw the line. As we’ve seen with companies like Facebook, technology can grow too big to handle. If a new AI makes our lives easier without taking away jobs and livelihoods or eroding innate human creativity and originality, then I’m all for it. But as with any new technology, we should be wary of the consequences of AI like GPT-3.

Plus, I think there’s a lot of value in weaving words together and piecing together a meaningful story without extra computational aid, even if the end result has a few mistakes. Think about the last time you smiled at a typo-filled late-night text from a friend, or the most recent “aha” moment you had when you finally stumbled upon a word you couldn’t quite put your finger on. I’d say these experiences are more meaningful than some AI stringing together a sequence of words in the hope that it makes sense to humans.

There is beauty in the creative process and in human imperfection and originality — things that GPT-3 cannot claim to possess.

Michelle Lum HM ’23 is an intended computer science major who sometimes wonders if we would be better off without some technologies. She did not turn off Smart Compose while writing this piece.
