OPINION: Facebook's Scandals Blur The Lines of Internet Consent
Maddy Neff | April 13, 2018, 2:02 a.m.
Earlier this year, I wrote an article about Facebook’s fake news problem, weighing protections of free speech against government regulation. Since then, a few developments have shifted the Facebook conversation in a new direction.
One: Cambridge Analytica built psychological profiles of U.S. voters during the 2016 election with data mined from Facebook users. Two: Mark Zuckerberg testified before Congress regarding both Cambridge Analytica and Russian collusion.
Both of these events point to a deeper problem with consent in Facebook’s business model.
I have had numerous conversations about my recent doubts about Facebook and my plan to delete my account. To my surprise, I have received many skeptical responses to my decision. My twin sister believed that I was making the wrong choice. My friend thought I was joking. And the list goes on.
I realize that many people use Facebook to communicate with distant relatives and friends, and there are most certainly other reasons to have an account. To be clear, I am not suggesting that everyone go out and delete their account, but I would like to explain my reasoning to get rid of mine.
In my opinion — one that is likely echoed elsewhere — Facebook has failed to keep its promise and maintain its mission of “[bringing] people closer together.” Instead, an environment created to build connections has become associated with fear and anxiety over the security of personal information and the spread of fake news.
Roughly a month ago, The New York Times Magazine reported that Cambridge Analytica used Amazon’s “Mechanical Turk” service, an online marketplace “where users can complete small tasks for commensurately modest sums of cash” in order to get information from American Facebook users.
In exchange for installing an app and taking a personality quiz, participants were rewarded with a few dollars while Cambridge Analytica gained access to millions of Facebook profiles, ultimately stripping those users of their privacy. This information was later used to target voters on behalf of Donald Trump’s 2016 presidential campaign.
In response to this, Paul Grewal, vice president and deputy general counsel at Facebook, denied that this was a data breach and argued that Facebook users “knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.”
Grewal was not wrong; this was technically not a privacy breach. People who subscribe to Facebook’s service consent to data collection. However, back in 2014, Facebook had a “feature” that let apps collect information about users’ Facebook friends, a loophole that Cambridge Analytica and other third parties used to harvest data from millions of profiles.
While this was spelled out in fine print, deep in the user agreement, the big picture does not appear to be consensual.
As I listened to Mark Zuckerberg’s testimony before Congress earlier this week, I thought about his initial response to claims of Facebook’s involvement in the Russian collusion scandal back in 2016, which he dismissed as “pretty crazy,” and about reports that Facebook knew of Cambridge Analytica’s data collection as early as 2015.
This is an instance in which I both resent and lose trust in ‘big tech.’ I have been — and still am — disappointed that Facebook cannot anticipate risks that not only seem obvious, but also could be easily addressed. For these reasons, I sign off.
Maddy Neff SC ’21 is a politics and Middle Eastern studies major from Palo Alto, CA. Maddy is stressed out about life.