
On Nov. 12, Emily Bender voiced skepticism about the purported benefits of large language models (LLMs) in her lecture “Don’t Try to Get Answers from a Stochastic Parrot,” the third and final lecture in Harvey Mudd College’s Nelson Distinguished Speaker Series.
“We have to keep in mind that when [LLM] output is correct, that is just by chance,” Bender said. “You might as well be asking a Magic 8 Ball, right? People think that retrieval-augmented generation is going to make it better, but … papier-mâché of good data is still papier-mâché.”
Bender, a computational linguist and linguistics professor at the University of Washington, explored the limitations and ethical concerns surrounding LLMs, machine learning models designed to generate human-like text from vast amounts of training data. Bender, recognized on TIME Magazine’s inaugural list of the 100 Most Influential People in Artificial Intelligence (AI), is the author of the upcoming book “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.”
Kyle Thompson, a member of the faculty organizing committee for the lecture series, spoke in conversation with Bender. He said Bender brings a distinct perspective often neglected in conversations about LLMs, which are typically dominated by computer scientists.
“If we think about what LLMs are, it’s about language and how we use language and whether these machines can use it the way we can,” Thompson said. “[Bender] can pinpoint the exact problem, clarify it very concisely and do it with some pizzazz.”
Bender emphasized the difference between natural language and the text generated by large language models, which is why she calls an LLM a “stochastic parrot.” LLMs stitch together linguistic forms based on statistical patterns, much as a parrot imitates speech, Bender said. Natural language, by contrast, is symbolic, pairing form with meaning.
“When we are using language, we usually can’t experience the form without the meaning, because we interpret it so fast,” Bender said. “On the other hand, the only thing a large language model can learn is about form, sequences of letters and punctuation, and what’s likely to go next.”
Bender critiqued the large corporations that promote LLMs, arguing that potential future benefits do not offset the environmental and social costs. She said the speculative mentality of AI leaders directly contradicts the scientific process.
“Are there any other areas of technology or science where we evaluate what’s happening now based on an imagined future?” Bender said. “[It] seems like AI somehow gets this pass … they can cite ‘the future,’ and therefore any limitations now are moot. It’s distressing.”
Bender highlighted a fundamental distinction in information retrieval between platforms like Google and Wikipedia and LLMs, focusing on the importance of information provenance. “Google directs its users to sources, so we can place its origin in the information landscape,” Bender said. LLMs, by contrast, lack such identifiable sources.
“With [LLMs], otherwise known as synthetic text extruding machines, you get a sequence of words that you interpret and make information out of, but you have no idea where they came from, and they didn’t actually come from anybody,” Bender said.
Beyond ethical and informational concerns, Bender discussed the hidden environmental costs of LLMs, particularly the heavy resource demands of cloud infrastructure. “The Cloud” refers to the practice of storing data in off-site data centers.
“The Cloud sounds light and fluffy and clean and far away,” Bender said. “[They] are actually enormous data centers that have a lot of hardware in them, that run on a lot of electricity and that require water for cooling, clean drinking water that can’t go back into the water supply.”
Throughout the talk, Bender used provocative language about AI, saying that ChatGPT exhibits only “ersatz fluency.” In her view, even the phrase “artificial intelligence” gives the technology too much credit.
Attendee Mira Kaniyur HM ’26 was surprised that Bender framed her concerns about AI as a matter of information literacy.
“[The talk] was a lot about how [we’re] losing out on certain skills by relying on AI, not just writing skills but information literacy,” Kaniyur said.
Attendee Gautam Agarwal, a neuroscience professor at the Keck Science Center, took away several cautionary lessons from the lecture.
“I learned to be much more careful about my use of these tools and how excited I am in promoting these to my students,” Agarwal said.
Bender believes that language is one of the most powerful weapons humans have against the corporate grip on AI.
“There’s a lot of commercial interest behind wrapping [AI] up as shiny and exciting,” she said. “One of the tools that people who don’t have billions of dollars have … against that [commercial interest] is descriptive, evocative language.”