“What we are seeing is death by a thousand cuts.” -Maria Ressa
In 2013, the Philippines became a trial run for Facebook’s “Free Facebook” campaign, an effort to subsidize internet access for smartphone users. The campaign was heralded as a success; nearly 69 million Filipinos use Facebook today, compared to 29 million in 2012. Yet as Facebook displaced declining newspaper readership, Filipinos became more vulnerable to misinformation. Rodrigo Duterte rose to the presidency in 2016, using Facebook to mobilize his base with the support of social media superstars like Mocha Uson, R.J. Nieto, and Sass Sasot. Duterte and his allies spread disinformation about political opponents like Senator Leila de Lima, including a deepfake video of her supposedly having sex with her chauffeur. “De Lima is not only screwing her driver, she is also screwing the nation,” said Duterte. In the Philippines, whose citizens spend more time on social media per day than any other nationality (nearly twice as much as Americans), stories like these matter.
Misinformation undermines democracy. Maria Ressa, the CEO of Rappler, an independent media outlet in the Philippines, says that “What we are seeing is death by a thousand cuts of our democracy.” Each fake news story shared on Facebook erodes people’s ability to distinguish fact from fiction, gradually wearing down democratic institutions. In the case of the Philippines, Duterte’s administration has been accused of “threatening the Philippine justice system by ousting the chief justice of the Supreme Court; attacking press freedom; upending international relations by cozying up to China; cleansing a former Philippine dictator of his crimes; and sanctioning the extrajudicial executions of more than 12,000 Filipinos suspected of selling or using drugs in the country.” Ressa has been the target of a misinformation campaign herself. R.J. Nieto, a social media star with 1.7 million followers, charged that Ressa had made the Philippines a target of the North Korean nuclear program after she wrote a story about a conversation between Trump and Duterte. Would-be authoritarians know that if you control the media, you control the narrative, and the media now include the social platforms where a growing number of people get their news. Misinformation can thus be used both to erode trust in the press and to sway voters.
Unfortunately, misinformation is tempting. Stories about Senator de Lima pole-dancing for a convict, using government funds to buy a $6 million mansion in New York, and being a political target of the Queen of England are as exciting as they are false. As Yuval Noah Harari says, “People prefer power over truth.” But to cultivate civic virtue, misinformation must be rejected, starting with regulation of the platforms that allow it to spread.
Outside the U.S., Facebook often lacks the cultural understanding needed to intervene. The company employs a team of 7,500 content reviewers evaluating content in 50 languages, but that is clearly not enough. In the Philippines, as in many other countries, the tech giant has become a tool of the populist far right because its platform is conducive to the hate speech and misinformation deployed against political opponents. I witnessed this myself when I visited the Philippines in 2019 to do research on the Catholic Church for a Sheffer Grant. Some members of the clergy spoke out against the administration and were harassed on Facebook by Duterte supporters as a result. Facebook can be used for grassroots organizing and empowering communities, as Maria Ressa has noted, but the tool is too often misused.
Another problem is that Section 230 of the 1996 Communications Decency Act is not functioning as intended. When figures as wide-ranging as Mark Zuckerberg, Kevin McCarthy (R-CA 23rd District), and Ron Wyden (D-OR) agree that the law should be reviewed (albeit for varying reasons), then that law should probably be reviewed. Originally written with the expectation that internet companies would self-regulate, Section 230 as interpreted today instead gives companies like Facebook broad protection for what is said on their platforms. But if it were reinterpreted as originally intended, what content should social media companies ban? Conservatives worry that their voices will be silenced, while liberals fear that not enough content is being taken down. When Facebook finally removed 200 fake accounts in the Philippines, Rodrigo Duterte lashed out at the company, saying, “You cannot bar or prevent me from espousing the objectives of government.”
The Philippines isn’t alone. Racist posts directed at Rohingya Muslims spread rampantly in Myanmar, fueling an ethnic cleansing carried out by the military. In the lead-up to the 2018 Brazilian presidential election, 48% of viral messages on WhatsApp contained “externally verified falsehoods mentioning a fictional plot to fraudulently manipulate the electronic ballot system.” We are all too familiar with misinformation in the U.S., where the Stop the Steal Facebook page, which alleged that Donald Trump had won the 2020 election, amassed over 300,000 members in less than a day. Journalists Bill Kovach and Tom Rosenstiel write that “Information created democracy.” But can misinformation just as easily destroy it?
Learning about misinformation in an American classroom (a Zoom classroom, to be specific) is a privilege because, as Dr. Juan Lindau writes, “From the earliest days of the internet, the U.S. government claimed the right to police the world wide web.” We have the capacity to do something about it beyond pleading with Facebook to ban misinformation, as the government of Myanmar repeatedly did in the lead-up to the ethnic cleansing of the Rohingya. Since the U.S. claims the power of regulation, we must wield it when necessary to prevent misinformation from undermining democracy. But there isn’t an easy fix. Just as misinformation can leave a thousand cuts in a democracy, healing those wounds may take a thousand remedies.