Big Tech's head-spinning rules for the 2020 election

(CNN) Late Wednesday, Twitter made waves by temporarily restricting a Trump campaign account's ability to tweet because it shared a video containing false claims President Trump had made about the coronavirus. But Twitter took no action on President Trump's personal account, which re-shared the video.

The next day, Facebook cracked down on a pro-Trump PAC's ability to advertise after the group spread falsehoods, even though the platform has repeatedly said it will continue to allow politicians to lie openly in advertising.

If all that makes your head spin, you're not alone.

These examples are indicative of a much wider problem for the big tech platforms: At a time when Facebook (FB) and Twitter (TWTR) have put a greater spotlight on themselves by taking action against posts from Trump and accounts linked to him, the companies' rules for handling content generally, and political misinformation specifically, remain so confusing and so ad hoc that even the staff at these companies sometimes struggle to comprehend them.

Both Facebook and Twitter allow politicians to lie in posts on their platforms; misinformation sent to tens of millions of followers from the accounts of the most powerful people in the world is not against their rules. As Mark Zuckerberg has put it previously, "Facebook shouldn't be the arbiters of truth."

Except they are. Every hour of every day Facebook and Twitter make judgments about the veracity of information shared on their platforms — they are arbiters of truth.

Both companies have explicit policies preventing users from sharing dangerous Covid-19 misinformation. Both companies also say they do not allow voter misinformation, including voter suppression efforts. By setting those rules and enforcing them at all, they are deciding between what is true and untrue.

The issue isn't only whether the companies are willing to arbitrate truth, but whether the rules they use to do so are applied consistently and clearly.

While Zuckerberg has defended the company's policy of giving politicians the power to pay Facebook to target voters with lies, Facebook takes a very different approach with other political groups, including political action committees.

On Thursday, for example, the company announced it had banned The Committee to Defend the President, a pro-Trump PAC, from running ads because it had repeatedly shared misinformation. CNN has reached out to the PAC for comment.

Twitter was lauded by some earlier this summer for fact-checking (with a tiny label) President Trump's false claims about mail-in ballots in California. The company suggested it would not tolerate this kind of voter misinformation.

But in the weeks since, it has become clear that Twitter will act only when Trump makes false claims about specific voting practices in specific states. His almost daily declarations that the entire election will be rigged do not break its rules.

Internal confusion — and frustration — over how tech platforms apply their policies spilled into the open in late May when Facebook and Zuckerberg decided not to take action on Trump's post saying "when the looting starts, the shooting starts" amid demonstrations in Minneapolis.

Facebook employees staged a virtual walkout and publicly expressed disappointment and disagreement with the decision. Zuckerberg himself had previously told Congress: "If anyone, including a politician, is saying things that can cause, that is calling for violence or could risk imminent physical harm ... we will take that content down."

Twitter, by comparison, placed a label on an identical post from Trump on its platform, saying it glorified violence. The episode highlighted another key cause of confusion: The big tech platforms sometimes diverge from one another in their approach to handling political misinformation and incendiary speech. This is particularly problematic when the same content is often shared across multiple platforms, whether posts, ads or video clips.

Last weekend, a fake video of House Speaker Nancy Pelosi went viral on Facebook (yes, again; a different fake video of Pelosi went viral last year). Copies of the video were also posted to TikTok, Twitter, and YouTube, and those three platforms took them down. Facebook left the video up, applying a fact-check label only after it had been viewed millions of times, enraging Democrats who insist a false video of the Speaker, second in line to the presidency, should be removed.

Inconsistent rules within and across companies, combined with lax enforcement of those rules, mean peddlers of misinformation have a better chance of success if they post something false or incendiary to multiple platforms. Sure, it may be removed from some of them, but perhaps not all.

Four years after online misinformation was weaponized in the 2016 election, Big Tech is still on the back foot.