A Social Media Experiment

One of the hottest topics on social media these days is…social media.  Recent court rulings against Meta and Google’s YouTube have placed these two online heavyweights at the top of our search results – both were found liable for deliberately engineering their platforms to “maximize compulsive use.”  The addictive design was ruled a significant factor in causing anxiety and depression in a young plaintiff.  Furthermore, the jury found that company leaders had prior knowledge of the harm their platforms could cause to young users’ mental health.  These and other similar rulings call our attention to the need for accountability and ethical design in social media platforms.

Social media hasn’t always been like this.  Beginning in the 1960s, early Internet optimism promised connection, shared knowledge, and a global sense of community.  With Tim Berners-Lee’s invention of the World Wide Web in 1989, the general public gained access, and early online user groups focused on shared interests, discussion and curiosity.  I once attended a workshop about the many ways in which the Internet could enhance our lives.  Rather than just show PowerPoint slides of a Brazilian coffee farmer’s use of Internet technology, the presenter interviewed him live online during one segment of the talk.

The focus shifted in the early 2000s toward more user-generated content and interaction.  This is when companies like YouTube and Facebook (now Meta) helped transform users from content consumers to content creators.  Building on the assumption that connectivity itself is inherently positive, these platforms let people across cultures strengthen relationships while amplifying the voices of the heretofore marginalized.  We eagerly anticipated greater accountability, transparency and empathy.

Facebook now has over 3 billion users worldwide, and the shift from community-driven spaces to advertising-fueled businesses is glaring.  Algorithms chew up data on clicks, likes, shares and eyeball time, then spit out content intended to elicit strong emotional responses.  We are drawn to information that confirms what we already believe, and the algorithms are happy to oblige.

Large-scale research warns that social media intensifies political polarization, social divisions and the propagation of misinformation.  The barrier to entry for responsible fact-checking is high – most of us just don’t bother.  The original emotion-charged posts always seem to spread much faster than the follow-up corrections.  Topics related to finances and public health, along with political and cultural subjects, quickly grow wings.  As social media has evolved, its promise of connection has been overshadowed by systems that reward division, outrage, and emotional manipulation.

My wife and I are frequent flyers, and it has always been my practice to check the MyTSA app on my phone for estimated wait times so we can plan accordingly.  This was especially important recently, given the government shutdown that had cut off TSA funding.  As I soon discovered, tsa.gov was also not being updated.  Social media was focused on the airports with 3-4 hour waiting lines, while airports with no lines were not considered click-worthy.  In the end, we arrived at Denver International Airport much too early but passed through security in less than 5 minutes and comfortably made our flight.

All of this made me think of a social media experiment.  Note that I am not a social scientist and can only describe the experiment and share the basic results.  I began by posting the following on Facebook.  (Link to Facebook post).  It features a cartoon chosen to evoke an emotional response (typical social media ploy), accompanied by a short paragraph with some factual, albeit incomplete, information.  It concludes with a request for clarification from readers. 

Parsing the entire response stream with an AI agent helped resolve the ambiguity of the emojis by matching them to accompanying comments.  The post drew 3,250 clicks, with 556 comments, 522 emoji reactions and 175 shares.  The responses fell into four categories: Pro-ICE (from simple ICE support to making a case for ICE being more capable or better trained), Anti-ICE (general opposition to ICE anywhere), Informative (adding useful information on funding, training, and role differences between TSA and ICE), and general political noise unrelated to the topic (e.g. one side thread that debated the allocation of state lottery money).  There were also 22 attacks on me personally.
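For readers curious about the mechanics, the categorization step can be sketched in a few lines of Python.  This is only a toy illustration under my stated assumptions – the category labels come from the post above, but the keyword rules and sample data are hypothetical stand-ins for what the AI agent actually did.

```python
from collections import Counter

# Category labels from the experiment; the keyword rules below are
# hypothetical illustrations, not the actual AI agent's logic.
CATEGORIES = ["pro_ice", "anti_ice", "informative", "noise"]

def classify(comment: str, emoji: str = "") -> str:
    """Toy stand-in for the AI agent: pair each emoji reaction with
    its accompanying comment text, then bucket it by keywords."""
    text = f"{comment} {emoji}".lower()
    if "funding" in text or "training" in text or "tsa" in text:
        return "informative"
    if "abolish" in text or "👎" in text:
        return "anti_ice"
    if "support" in text or "👍" in text:
        return "pro_ice"
    return "noise"

def tally(responses):
    """Count how many responses land in each category."""
    return Counter(classify(comment, emoji) for comment, emoji in responses)
```

In practice the real agent reads each comment in context rather than matching keywords, but the overall shape – classify each (comment, emoji) pair, then tally per category – is the same.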

The majority of the responses attempted to clarify or explain the situation (as requested in the OP), although many of the posts were misleading or incorrect.  Some responders correctly pointed out that airline ticket surcharges pay only a portion of TSA salaries, and that the money goes into a general government fund, not directly to the TSA.  Some justified higher ICE salaries by citing the lengthier training and the risks that ICE agents face.  On the other hand, a few suggested that having ICE take over US airports and dispensing with the TSA altogether is part of the master plan, citing the “Mandate for Leadership” (aka “Project 2025”) from the Heritage Foundation.  While Project 2025 calls for modernizing airport security, privatizing the TSA, and dismantling portions of the DHS, it does not specifically propose putting ICE in charge.

In summary, the feed was politically polarized and divisive, directed at “correcting” the original premise, loaded with misinformation and political talking points, and refreshingly light on personal hostility.

Social media began as an optimistic experiment in global connection but has evolved into a profit-driven communication network that intensifies polarization, emotional conflict, and social divisiveness. Technology ultimately exemplifies the values embedded in its design and incentives – though it can still foster community, it often rewards acrimonious behavior. Whether social media deepens societal rifts or supports healthier public discourse depends on our ability to align technology with human values, rather than simply allowing it to exploit human psychology.