
“Reminder to kill all humans,” said Alexa, my Amazon Echo Dot, at 2 a.m. on a Monday. I almost fell out of my bed. Was I still dreaming? Nope: A second later, the Dot coolly repeated herself. “Reminder to kill all humans.”
In my hazy, half-asleep state, I was shaken to my core, wondering if this was the beginning of the end. Fortunately, in the light of day, I realized it was just an unfortunate prank by my roommate and sponsor.
But the whole incident forced me to think about whether I could trust Alexa enough to keep her in my room. I began to question my relationship with several major tech companies, thinking about how each has carved out a space for itself in my life.
Such concerns are not new: As technology has crept further into our personal lives in recent years, many have come to share them. Millennials and Generation Z-ers are constantly warned of technology’s dangers, whether by our parents, opinion columnists, or shows like “Black Mirror.” And despite what older generations may believe, we are aware of its effects on our lives: I often hear friends and peers talking about deleting social media to try to “cleanse” themselves, or about putting their phones away at meals and social occasions.
Such anecdotal evidence is supported by recent studies that show correlations between phone usage and mental health issues, such as anxiety, among young people. I don’t believe that connecting and sharing posts with people online always detracts from our lives offline. All too often, though, it forces us to compare ourselves with others, cultivates a dependency on online affirmation, and wastes precious time.
Alexa may live in our homes, but apps like Facebook, Instagram, and YouTube, which profit from selling advertisements, are built to worm their way into our minds. Companies specifically create their platforms with the intention of entrapping us in their capitalist “Hell.” They curate feeds that cater to our interests and biases, provide hits of dopamine through likes and comments, and create incentives, such as colorful alerts and notifications, to keep us online.
Facebook’s founding president, Sean Parker, recently told Axios: “[T]he thought process that went into building these applications … was all about: ‘How do we consume as much of your time and conscious attention as possible?’” The answer to that question has manifested itself in design elements we interact with every day.
For example, when Snapchat’s version 10.25 was announced last November, CEO Evan Spiegel said that Snap was willing to risk users being initially uncomfortable with the update because it would bring “substantial long-term benefits to [their] business.” The widely disliked separation of the “Friends” and “Discover” pages is meant to make it easier to serve users more advertisements.
On a larger scale, social media has brought attention to activism and helped marginalized people elevate their voices. However, by showing us more of what we respond positively to, it has also further polarized American society. Facebook, for instance, has been heavily criticized for algorithms that allow falsehoods to circulate, with real consequences for national politics.
At this point, it’s not enough to simply blame consumers for their engagement with technology. Shifting the responsibility onto consumers to deal with what tech companies have created mirrors the problematic strategy of the tobacco and fast food industries. But like Big Macs, technology appeals to some of our basest instincts in order to turn a profit. The pressure to succumb is high, especially when we are being targeted by cradle-to-grave advertising, as shown by Facebook’s recent launch of a messaging platform made for kids. The responsibility thus lies with tech companies to transform our interactions with what they produce.
Snap, Facebook, and YouTube have little incentive to stop monopolizing our attention. However, companies like Apple, Samsung, and Microsoft don’t depend on our screen time for revenue and can afford to redesign their products to give us a better experience. For instance, Tristan Harris, a former design ethicist at Google, suggests that Apple could have apps assign priority levels to their notifications so that not every social media update draws users back to their phones. Harris is a co-founder and leader of the Center for Humane Technology, which consists of people who have left the tech industry to work toward a future in which tech is more ethical.
Other possibilities include a “Settings” option that lets people set limits on their time in each app. Personally, I would love to be cut off after a certain amount of Instagramming each day; it would be a good middle ground between deleting the app and getting lost on the Explore page at midnight.
Long-term, the Center for Humane Technology advocates that governments “pressure technology companies toward humane business models by including the negative externalities of attention extraction on their balance sheets, and creating better protections for consumers.”
Institutions of higher education such as the Claremont Colleges can also push for accountability within the tech industry. Students studying computer science or engineering should be required to take courses on ethics and philosophy in order to complete their majors. Harvard University, Stanford University, and MIT, among others, are already tailoring ethics courses to the application of technology and considering integrating them into their majors.
Let’s cultivate an awareness around technology that helps us find solutions beyond just telling people to get off their phones.