We Don’t Need the Common Core

Many of you may be familiar with the Common Core State Standards (CCSS), which have gone into effect in all but four states over this year and last. They are a nationwide standardization of educational requirements for public schools, replacing requirements that previously varied from state to state. If you didn't realize this was happening, don't feel bad.

The CCSS was not federally mandated: various corporate interests brought the changes about by lobbying state governments. For such a vast overhaul of American public education, the CCSS has received surprisingly little public scrutiny. One of its most controversial aspects is the eventual goal of automated essay grading. Artificial intelligence has not quite caught up yet, but teams of linguists and computer programmers are vying to create a successful automated grading system, and their efforts are projected to reach implementation within the next five years.

The automated grading is supervised, meaning that the computerized system learns its grades from, and cross-tests them against, grades assigned by human readers. No actual reading comprehension takes place. When this system evaluates your essay, it doesn't recognize themes, developing arguments, or really any facet of your content. It looks only at words: how long the words you use are, how many of them appear in a sentence, whether they are transitional or argumentative words. It sees how these statistics correlate with the grade an essay gets from a human, then extrapolates to other essays based on those associations. The crazy thing is, it works really well.
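To make the mechanics concrete, here is a minimal sketch of that kind of system: a regression over a handful of surface statistics, fit to human-assigned grades. Everything in it is hypothetical, the feature choices, the toy essays, and the scores are invented for illustration, and it assumes the scikit-learn library; real grading engines use far richer feature sets, but the principle is the same: the content of the words is never interpreted.

```python
# Minimal sketch of supervised essay scoring on surface features only.
# Hypothetical example: features, essays, and grades are invented.
import re
from sklearn.linear_model import LinearRegression

TRANSITIONS = {"however", "therefore", "moreover", "furthermore", "consequently"}

def surface_features(essay: str):
    """Return word-level statistics; the meaning of the text is never read."""
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    transition_count = sum(w in TRANSITIONS for w in words)
    return [avg_word_len, avg_sentence_len, transition_count]

# Toy training data: essays paired with grades assigned by human readers.
essays = [
    "The economy grew. However, wages stagnated. Therefore, inequality widened.",
    "Dogs are nice. I like dogs. Dogs are fun.",
    "Industrialization transformed labor; consequently, urban populations expanded.",
]
human_grades = [85, 60, 92]

model = LinearRegression()
model.fit([surface_features(e) for e in essays], human_grades)

# "Grade" a new essay by extrapolating from the learned correlations.
new_essay = "Climate policy matters. Moreover, coordinated action is essential."
print(model.predict([surface_features(new_essay)]))
```

Notice that a student could raise the predicted score by padding sentences with longer words and stock transitions without improving the argument at all, which is precisely the worry.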

It turns out that statistics such as the average number of words in a sentence really are good indicators of an essay's quality. Of course, "quality" is defined subjectively by inconsistent human graders, meaning our biases, such as a negative bias toward vocabulary used more frequently in non-white communities, get ingrained right into this supposedly impersonal method of grading. But if those biases are going to affect grades no matter what, does it really matter whether a computer or a person evaluates your standardized test essay?

This semester I am a peer tutor for first-year writers. As I pore over their first college papers, I cannot help noticing how many of them share the same structural problems. Most of these problems stem from over-reliance on formulas the students learned to lean on in high school, such as the dreaded five-paragraph essay. Their introductory paragraphs are almost interchangeable, starting with a generic description of the topic and ending with a thesis that is more an observation than a claim.

Most of us probably remember the moment we realized our college professors actually expected an original argument when they assigned papers, one requiring critical thought, creative use of evidence, and deft argumentation. It is a demand that most students fresh out of our nation's high schools are not prepared to meet. American education suffers from a focus on writing products over writing process, and the notion of a preconceived product stifles students' creativity in developing anything like a novel argument. Students simply try to produce what their teachers will reward.

With the commencement of automated essay grading, students will shift from trying to please a teacher to trying to please an impersonal letter-counter. To institutionalize a system for grading essays that does not even evaluate the content of their arguments sends a powerful message. Do we want to teach the next generation that what really matters in the end is meeting arbitrary, prescriptive word counts rather than what they are actually trying to say?

To me, this future feels dystopian. It moves us further in the wrong direction from an already flawed pedagogy of writing, and we should not sacrifice the basic tenets of academic originality just to reduce labor costs for the corporate powers behind the CCSS.
