Two years ago, the social media giant Facebook finally banned then-President Donald Trump from its platforms. The decision came after he had repeatedly violated Facebook’s policies by spreading election disinformation and inciting the violence that ultimately led to the January 6 Capitol insurrection.
Facebook’s decision was too little, too late. The platform had allowed the former president to violate its policies, spread misinformation, and stoke division and distrust for so long that the damage was already done. Until his ultimate removal, Trump barely suffered any consequences for his racist, conspiratorial, and antisemitic activity on the platform.
Facebook’s parent company, Meta, is now weighing one of its most consequential content moderation decisions of the last two years: whether to reinstate Trump. This decision could have serious ramifications for the future of online hate, harassment, and our very democracy. The real question we should be asking, however, is why Facebook allowed Trump to violate its policies for so long to begin with. Had Facebook held President Trump to the same standard as the rest of us, it might never have arrived at this dramatic moment.
Shortly after I joined Facebook in June 2018 as its “head of Global Elections Integrity Ops” for political advertising, it became clear to me that users with the largest platforms—including political leaders who themselves often spread lies and hateful content—were being given special treatment. When I questioned why we were not fact-checking political ads, including those that spread lies about the elections ahead of the U.S. midterms, I was sidelined and then pushed out. Since then, there has been continued evidence that Meta’s policies allow millions of high-profile users’ violative content to be amplified and spread.
I have long stated that Mark Zuckerberg’s exemptions from fact-checking for politicians and the so-called “newsworthiness exception” that grew out of them are two of the most dangerous elections-related decisions the company has made. These exemptions, granted to the powerful at the expense of the rest of us and coupled with a business optimized for frictionless virality, harmed our democracy. By refusing to fact-check politicians while also providing them sophisticated tools to grow their audiences and make their content go viral, Facebook tilted the scales, pushing more people to believe conspiracy theories about the election. And let’s be clear: Meta is not implementing these policies in the name of free speech. Policies that allow corrosive content to flourish are good for business. Even Meta’s Oversight Board found that the cross-check program is primarily implemented to serve business needs.
Meta has an opportunity now to do the right thing by upholding its decision. But I’m deeply concerned that they are already looking for reasons to backtrack.
Meta has said that this decision will rest on whether Trump still poses a risk to public safety. Ultimately, though, Meta’s policy decisions thus far have been driven by a ruthless business model that optimizes for engagement at any cost, including the cost of our democracy. Judging by Trump’s continued peddling of the false narrative that the 2020 election was “stolen,” the right choice seems obvious. One need look no further than the bipartisan report from the January 6 Committee, which directly identifies Trump as the key instigator of the violent coup attempt.
And what about the risk Trump poses to those he targets? Any one post on Facebook or other mainstream platforms from Trump can trigger an entire ecosystem that spreads hate, harassment, and even incitement to violence. Throughout his presidency, Trump used social media platforms to spread hate and incite violence, both directly and indirectly. My team at ADL has documented how the online harassment ecosystem actually works, and the dangerous impact it can have.
There is no reason to believe Trump will behave differently now if allowed back on Facebook.
In fact, Trump’s recent posts on Truth Social—a platform he owns, but which has a fraction of Facebook’s user numbers—reflect a continuation of his reckless behavior, including making direct threats against Jews, slinging racist anti-Asian slurs at his political enemies, embracing QAnon, and singling out elections officials for his followers to harass.
To be clear: I understand that when such a large and influential company with unchecked power over the public conversation suspends a world leader, it sets a potentially dangerous precedent for free speech. But Meta’s decision about Trump doesn’t exist in a vacuum. The political climate in the U.S. remains incredibly vulnerable to violent provocations: extremist incidents in 2022 have been on track to eclipse their 2021 levels, and antisemitism—historically a harbinger for other forms of hate and violence—is once again on the rise.
Moreover, Meta has repeatedly failed to protect its users from hate and harassment; in fact, internal documents leaked by whistleblower Frances Haugen in 2021 show that Facebook removes only 3 to 5 percent of the hate speech on its platform.
For the leaders of Meta to assume Trump will suddenly abide by their terms of service would be a serious miscalculation at best, and a callous continuation of irresponsible corporate behavior at worst.
Facebook remains the largest social media platform in the world and has a track record as a damaging influence on democratic elections worldwide. Facebook should not be complicit in furthering the damage Trump and his followers have already done to our democracy.