Pro-Trump supporters storm the US Capitol following a rally with President Donald Trump on January 6, 2021, in Washington, DC. (Samuel Corum/Getty Images)

Editor’s Note: Andrea Hailey is CEO of Vote.org, a nonprofit, nonpartisan voter registration platform. The opinions expressed in this commentary are her own. Read more opinion on CNN.

CNN — 

After the attack on the US Capitol on January 6, 2021, tech companies cracked down on election disinformation. Two and a half years later, as voters start to look toward the 2024 presidential election, many of those policies have been reversed.

Last month, YouTube announced a reversal of its stance on election disinformation, stating that it would no longer remove content falsely claiming that the 2020 election was stolen. Twitter and Facebook reinstated former President Donald Trump’s accounts after initially banning him in the wake of the Capitol insurrection.

CNN recently reported that this spring and last fall, Meta, the parent company of Facebook, made cuts to teams that counter misinformation and disinformation. And Twitter has laid off employees who evaluate information about elections.

These moves are dangerous for our democracy. Social media companies often set the parameters for political discourse. Information and disinformation can spread quickly, especially since algorithms prize virality.

Algorithms, together with automated and human moderation, determine what content is and is not allowed, and they can keep feeding users more of the same type of political content. In the case of YouTube especially, this can push users into ever deeper rabbit holes of conspiracy theories and radicalization.

We’ve already witnessed what radicalization and conspiracy theories can do: they can quite literally mobilize an angry mob and lead to the deaths of innocent people. Letting disinformation drown out the truth is not supporting free speech, but rather a perversion of its promise.

This is especially true when the charge to stifle conversations about disinformation is led by politicians. Indeed, in the wake of the January 6, 2021, insurrection, many politicians continue to argue that the 2020 election was stolen, and some have even drafted bills to curtail election-related “fraud” that isn’t occurring.

Since the 2020 election, a wave of voter suppression bills has swept state legislatures: at least 322 such bills have been introduced in 45 states so far in 2023. Most of these bills seek to make voting more complicated rather than to bring more people into the democratic process. (Vote.org has challenged several of these voter suppression bills in the courts.)

Some bills even empower police or other law enforcement to target voters, election administrators or poll workers for small or benign errors. In addition, some lawmakers are leading a charge to crack down on researchers and academics who study disinformation.

Tech companies can make effective policy changes that de-platform disinformation. In a recent statement to CNN, a Meta spokesperson said, “Protecting the US 2024 elections is one of our top priorities, and our integrity efforts continue to lead the industry.” In its blog post about its updated election misinformation policy, YouTube said, “All of our election misinformation policies remain in place, including those that disallow content aiming to mislead voters about the time, place, means, or eligibility requirements for voting; false claims that could materially discourage voting, including those disputing the validity of voting by mail; and content that encourages others to interfere with democratic processes.” And under Twitter’s synthetic media policy, “voter suppression or intimidation” and “targeted content that aims to harass, intimidate, or silence someone else’s voice” could be removed.

Last year, Vote.org worked with Nextdoor to mitigate the spread of misinformation and provide trusted resources to inform neighbors on its platform. Tech companies can and should also be transparent with their plans and policies related to election integrity and disinformation.

Vote.org has asked Meta to do just that with its new Twitter competitor, Threads. If tech companies have systems for moderating or promoting certain types of posts, they should explain those systems to voters, who often use these platforms to find reliable information.

The problem here is clear: Tech platforms must take more ownership of the content they host when that content directly harms the future of our democracy. They often set the agenda for political conversations. Elections and voting are not political acts, but rather expressions of the core principle that underlies our democracy.