Three years ago, major internet platforms including Meta, Twitter and YouTube responded to the January 6, 2021, Capitol riots with decisive action — suspending thousands of accounts that had spread election lies and removing posts glorifying the attack on US democracy.
Their efforts weren’t perfect, certainly; groups promoting baseless allegations of election fraud hid in plain sight even after some platforms announced a crackdown.
But since 2021, the social media industry has undergone a dramatic transformation, retreating from many of the commitments, policies and tools it once embraced to help safeguard the peaceful transfer of democratic power.
The public got a taste of the new normal this summer, when social media was flooded with misinformation following the attempted assassination of former President Donald Trump and the platforms said nothing.
Platforms still maintain pages describing the election safeguards they do support, such as specific bans on content suppressing the vote or promoting violence near polling places. But many who have worked with those companies to contain misinformation in the past report an overall decline in their engagement with the issue.
“The last few years have been challenging for the knowledge community working with platforms,” said Baybars Orsek, managing director at the fact-checking organization Logically Facts. “The impact of layoffs, budget cuts in journalism programs, and the crackdown on trust and safety teams at X (formerly Twitter) and other major platforms have set troubling precedents as we approach the upcoming elections.”
The shift took place against the backdrop of a yearslong intimidation campaign led by Republican attorneys general and state and federal lawmakers aimed at forcing social media companies to platform falsehoods and hate speech and thwarting those working to study or limit the spread of that destabilizing content.
Those efforts coincided with the rise of a vocal cadre of elite Silicon Valley reactionaries, an increasingly ideological group that bristles at notions of corporate social responsibility. The people involved are among the world’s wealthiest and most influential, with the power to shape the products and services used by billions. And they are growing more politically assertive — warning government leaders to back off or face millions of dollars in campaign contributions to their opponents and laying down political manifestos that serve as litmus tests for startup founders who need funding.
In Elon Musk’s first appearance with Trump on the campaign trail this month, the tech billionaire called on Republicans to get out the vote, warning of dire consequences if they failed. Musk’s role in reshaping Twitter into X has been well documented: by axing its trust and safety teams and watering down its content policies, he turned the world’s foremost social media platform for real-time news into a hotbed of conspiracy theories and misinformation.
The effects of that shift weren’t limited to Twitter itself, however. Musk has played an undeniable role in reducing the social and political costs of tech platforms walking back their earlier investments and commitments, said David Karpf, an associate professor in the School of Media and Public Affairs at George Washington University.
Just as Twitter’s decision to be the first to remove Trump’s account in 2021 quickly led YouTube and Facebook owner Meta to follow suit, its decision to be the first to restore that account gave the other platforms that much more justification to do the same.
The industry retrenchment continued as YouTube and Meta relaxed their rules and chose to permit, once again, false claims that the 2020 election had been stolen.
“The platforms only ever took this as seriously as they felt like they needed to,” Karpf said.
“If you want serious trust and safety from these companies,” he added, “then either it needs to be demanded the way it is in the European Union, which means actual regulation, or they need to be doing a cost-benefit analysis that says, ‘This is important, not just for democracy but for our own bottom line in the near term,’ because that’s the only thing that has worked.”
Over the past several years, that cost-benefit analysis has increasingly tilted in favor of dismantling the infrastructure that social media companies built in response to Russia’s meddling attempts surrounding the 2016 US election.
The pullback is most apparent in widespread layoffs that started with X but have since hit ethics and trust and safety teams across Silicon Valley. Often announced in the name of efficiency, the job cuts revealed how many tech companies interpreted these programs as a drag on revenue rather than a necessary product function.
Cutting off monitors
Tech companies have made it far more difficult for outsiders to monitor the platforms, creating blind spots in which false, viral claims can thrive.
Last year, X announced it would begin charging steep fees for access to its firehose of posts and other data. The change immediately triggered concerns that transit agencies and the National Weather Service would have to stop posting real-time updates that millions depend on. Musk quickly gave those organizations an exemption, but the paywall has affected civil society groups and academics who need large volumes of posts to study how false claims traverse networks.
Misinformation researchers decried the “outrageously expensive” fees for accessing Twitter’s firehose, but the complaints went nowhere. Before Musk’s takeover, Twitter had provided its data to researchers for free or at minimal cost. After the change, they were asked to pay as much as $2.5 million a year for less data than was available before — a significant new barrier to transparency and accountability.
X has touted its crowdsourced fact-checking feature, Community Notes, as a solution to counter misinformation, but independent analysts have widely criticized the tool as slow and inconsistently applied.
In a similar move, Meta shut down CrowdTangle, a monitoring platform for Facebook and Instagram that it once promoted to election officials in all 50 states “to help them quickly identify misinformation, voter interference and suppression.”
CrowdTangle’s data had shown that right-wing content performs exceptionally well on Meta’s platforms, contrary to conservative allegations. Though the company said a successor tool would be even better, research published by the Columbia Journalism Review found the replacement had fewer features and was less accessible.
Pressure campaign from Republicans
These corporate pivots didn’t happen in a vacuum. They coincided with two other significant shifts. The first was a political and legal effort by conservative politicians to restrict truth-telling and information-sharing by social media companies.
The second was the renewal of a 1990s-era strain of thinking that takes the industry’s “move fast and break things” mantra and cranks it to the extreme, demonizing doubters as enemies of progress who must be defeated.
For years, Republicans have alleged that because companies including Meta and Google are based in liberal strongholds such as California, the platforms must be discriminating against right-wing viewpoints. Social media companies have insisted their technology is politically neutral, a claim their conservative critics have turned against them to devastating effect.
Conservatives have accused platforms of violating their own self-professed neutrality since before the 2016 election. The critique has prompted conciliatory changes, with some platforms going to great lengths to accommodate right-wing figures.
That dynamic has intensified in recent years. As platforms ramped up enforcement against conspiracy theories, hate speech and election lies, conservative politicians increasingly protested what they claimed was censorship of right-wing views on social media. (Liberals promote misinformation too, as a study from New York University found in 2021, but misinformation from right-wing sources tends to attract far more engagement.)
This has culminated in several Republican-led efforts at the state and federal levels to hamstring content moderation by private platforms.
In Texas and Florida, Republican lawmakers passed legislation in 2021 that would restrict the ability of social media companies to moderate their own websites. Officials from both states explicitly said the laws were intended to keep social media from unfairly silencing conservatives. Amid a legal challenge by the tech industry, more than a dozen Republican attorneys general backed the Texas and Florida laws.
Meanwhile, Republican officials in Missouri and Louisiana, along with several private plaintiffs, sued the Biden administration over its decision to lean on platforms in recent years to remove content related to Covid-19 and the election that the government viewed as mis- and disinformation. That case, Murthy v. Missouri, also wound up before the Supreme Court this past term.
Both initiatives sought to keep platforms from enforcing their terms of service, on the grounds that the companies were violating Americans’ free-speech rights. But they eventually ran into a major hurdle in the courts: The First Amendment binds the government, not private businesses.
The Supreme Court largely punted on both cases this year with procedural decisions, but not before expressing doubts about the state laws’ scope as well as skepticism of the idea that the First Amendment prevents the White House from warning companies about perceived threats to public health or election integrity. Notably, the court left the Texas and Florida laws blocked for now and tacitly allowed the Biden administration to keep communicating with social media companies.
“The Supreme Court said that the social media platforms have, sort of, First Amendment rights as speakers, and so they have the right to censor,” said Jenin Younes, an attorney with the conservative-leaning New Civil Liberties Alliance. “So, the Texas and Florida laws prohibiting censorship were likely to be struck down.”
But Younes said that even setting aside government involvement in moderating online content, she sees any content removals by the platforms as detrimental, given the conflicting views on what might amount to election misinformation.
“I tend to err on the side of: Instead of censoring, more speech is better,” she said, pointing to X’s Community Notes tool as a better way to approach the issue.
But, she added, “the companies are entitled to do what they want — even if I think, from a philosophical perspective, censorship is not the right approach.”
Congressional investigations
Republican officials also used congressional subpoenas and hearings to increase the political costs of investing in anti-misinformation initiatives. (Democrats put their own pressure on social media too, but for the opposite reason: They wanted platforms to moderate more, not less.)
“All they keep getting is criticized,” Katie Harbath, a former policy director at Facebook, said of the tech platforms in an interview with CNN last year. “I’m not saying they should get a pat on the back … but there comes a point in time where I think (Meta CEO Mark Zuckerberg) and other CEOs are like, is this worth the investment?”
One of the social media industry’s chief antagonists was House Judiciary Committee Chairman Jim Jordan, who led a charge to prove social media platforms’ liberal bias, sending subpoenas to Big Tech companies and demanding their testimony about content moderation decisions.
The Ohio Republican defended an ideological ally in Musk by attacking the Federal Trade Commission for its investigation into Twitter, a probe that stemmed not from content moderation but from a bombshell whistleblower disclosure by a senior security executive alleging violations of user privacy.
At one point, Jordan even threatened to try to hold Google and Meta in contempt of Congress for failing to hand over enough documents.
In August, Zuckerberg extended an olive branch to Jordan with a letter acknowledging that the Biden administration sometimes “pressured” Meta to remove Covid-19 content and that Zuckerberg regretted not pushing back harder.
House Republicans declared victory, saying the letter “admitted” the White House was out to censor Americans.
Other House Republicans have hauled tech leaders before Congress for uncomfortable hearings, further sending the message that well-intentioned efforts to protect America’s information spaces would be interpreted as bad-faith censorship.
In February 2023, House Oversight Committee Chairman James Comer, a Kentucky Republican, summoned former Twitter officials to testify about their role in suppressing a New York Post article about Hunter Biden in the heat of the 2020 election.
Pointing to internal Twitter communications that Musk had selectively released to a sympathetic reporter, Comer alleged a “coordinated campaign” by social media and the US government to suppress the Hunter Biden story.
But testimony from the former Twitter officials, along with Musk’s leaks, his own lawyers and other court records, showed that what was made out to be a conspiracy to silence the New York Post was little more than internal confusion at Twitter.
Stanford Internet Observatory ends election program
Republican officials did not just cast doubts on the motivations of tech platforms or the US government. They also sowed doubt about the intentions of the misinformation research community. As with the tech CEOs, they targeted academics with subpoenas and demands for information about their work, which included identifying foreign influence operations and studying election rumors.
Amid the scrutiny, some centers for this type of research have shut down or redirected their focus. One such organization was the Stanford Internet Observatory, whose election-related work came under fire as an alleged censorship plot. The organization later terminated the election research program and cut ties with some of its staff. It claimed that its shift in mission was not “a result of outside pressure,” but Republicans took credit anyway.
“After the House flipped to Republican control in 2022, the investigations began,” Renée DiResta, the observatory’s research director who was among those let go, wrote in a June New York Times op-ed. “The investigations have led to threats and sustained harassment for researchers who find themselves the focus of congressional attention.”
After Stanford’s changes were announced, House Judiciary Committee Republicans “reacted … by saying their ‘robust oversight’ over the center had resulted in a ‘big win’ for free speech,” DiResta added. “This is an alarming statement for government officials to make about a private research institution with First Amendment rights.”
Misinformation researchers say they are adapting to a changed social media landscape and, despite the challenges they face, are continuing to shine a light on conspiracy theories and false claims.
One conspiracy narrative that stands out this election cycle is a heightened focus on claims about voting by noncitizens, said Danielle Lee Tomson, research manager for the Center for an Informed Public at the University of Washington.
“Our ability to look at Facebook has been curtailed with CrowdTangle being shut down, so we don’t use that as much for our discovery work,” Tomson added, but “we study ads, we study TikTok, we study Telegram, we study the alt-platforms. … Changes breed creativity, and changes also create new research questions.”
Even though researchers are shifting their own tactics, Karpf said, the GOP pressure campaign still achieved its primary goal, which was to create the space for tech companies to give up on something they viewed as an inconvenience from the start.
“If Jim Jordan makes a lot of noise, then the platforms will decide, ‘Hey, why are we spending all this money just to get in trouble? Let’s not spend money.’ And that’s pretty much exactly what happened,” he said.
In Silicon Valley
In October 2023, a few months after Jordan sent subpoenas to Meta, Google and other tech platforms, the longtime venture capitalist and Netscape co-founder Marc Andreessen published what he called “The Techno-Optimist Manifesto.”
The essay’s many short, declarative sentences gave off a punchy, defiant vibe. It set out to argue that society had lost its way and that the tech industry would lead the world into a bright new future, if only naysayers and regulations would step aside.
“We believe that there is no material problem — whether created by nature or by technology — that cannot be solved with more technology,” Andreessen wrote.
Blocking that progress are a range of “enemies,” he added, ticking them off one by one: Fears of existential risk (a likely reference to runaway artificial intelligence). Sustainability. Social responsibility. Trust and safety. Tech ethics. Risk management. Credentialed experts. Central planning.
By itself, the manifesto was little more than a restatement of the libertarian ethos that has long pervaded some corners of Silicon Valley. But the post’s true impact is reflected in its timing. It came at a moment when the public was more skeptical than ever of the tech industry’s promises, and when a growing number of tech billionaires had tired of being blamed for the country’s problems.
Andreessen exists on a spectrum of wealthy Silicon Valley financiers who have, to greater and lesser degrees, spurned the left, if not thrown in with the right. It includes well-known conservatives such as PayPal co-founder Peter Thiel, who directly funded the rise of Trump’s running mate, Ohio Sen. JD Vance. It includes the venture investor David Sacks, a friend of Musk’s who went on X to help launch Florida Gov. Ron DeSantis’ presidential campaign and who later endorsed Trump. And it includes others that The Atlantic has collectively described as “Trumpy VCs.” A few have poured their souls out on X, describing journeys of political awakening to explain why they were endorsing Trump or could no longer support Democrats.
A couple of months after Andreessen’s manifesto, Ben Horowitz — the other half of the famed VC firm Andreessen Horowitz — published an ominous-sounding companion post. It announced that the firm would, for the first time, be donating to political candidates who met a simple test.
“We are non-partisan, one issue voters: If a candidate supports an optimistic technology-enabled future, we are for them. If they want to choke off important technologies, we are against them,” Horowitz wrote. “Every penny we donate will go to support like-minded candidates and oppose candidates who aim to kill America’s advanced technological future.”
The pledge seemed innocent enough on the surface. But paired with Andreessen’s earlier manifesto, it could not be read as anything other than a veiled threat against the regulatory state.
“It is interconnected,” said Alicia Wanless, director of the Information Environment Project at the Carnegie Endowment for International Peace. “We are connected by the people we know, the ideas we hold, the groups we belong to, the places we go to, the technology we use, the content we consume. We’re part of communities, and these communities overlap. … They feed off of each other and react to each other.”