The Supreme Court on Wednesday issued a decision that could have enormous consequences for the 2024 election, saying the US government can – for now – keep warning social media companies about mis- and disinformation threats it’s seeing online.
Although the case was decided narrowly on procedural grounds rather than on the merits, the decision is nonetheless among the most consequential of the court’s current term.
Here’s everything you need to know about this critical case affecting online speech and the democratic process.
What can the US government tell social media companies to do?
As a result of Wednesday’s ruling in Murthy v. Missouri, agencies such as the FBI and the Department of Homeland Security will continue to be able to contact social media companies about posts they view as mis- and disinformation.
Examples could include false claims about Covid-19, unfounded allegations about election fraud or other statements that in some situations may violate the platforms’ own policies.
Republican-led states, including Missouri and Louisiana, along with five social media users, claimed in 2022 that those contacts with social media companies were in fact part of an unconstitutional government campaign to silence free speech.
The US government has been flagging this type of content to social media companies for years, at times using extremely forceful language to demand the content be removed. At oral arguments, justices spent more than 90 minutes trying to discern the line between government persuasion on one hand and undue government coercion on the other.
Why is the government talking to social media companies?
The government’s outreach to social media platforms has been happening since the 2016 election and in direct response to Russia’s attempts to meddle in US politics. In 2020, a congressional inquiry faulted the US government and tech platforms for not working together more to respond to those types of informational threats, which could baselessly sow division among voters and weaken the United States on the world stage.
The ties between Washington and Silicon Valley involved routine, recurring meetings between government and company officials that were often publicly announced and included platforms such as Meta and Twitter (now known as X), as well as input from outside academics and researchers.
Those lines of communication became important ways for the government and social media companies to trade information about fake accounts or malicious foreign actors looking to destabilize the United States, said Laura Edelson, an assistant professor of computer science at Northeastern University and the co-director of Cybersecurity for Democracy, a research group focused on digital misinformation.
The relationships provided everyone involved with vital situational awareness about a rapidly evolving misinformation landscape, one whose danger has only grown with the advent of generative artificial intelligence, Edelson said.
“Moving into [this] election season, we’ve already started to see influence campaigns starting to come out of the woodwork, be identified and reported on,” Edelson said. “That’s not news that those exist, and something that the government used to be a really good conduit for was making sure that when something was reported or identified in one platform, it was identified and shared with others.”
What does the Supreme Court think?
The court said Wednesday that the plaintiffs in the case – the states and private citizens – didn’t have the legal right, also known as standing, to bring their suit.
Writing for a 6-3 majority, Justice Amy Coney Barrett said the plaintiffs didn’t do enough to prove that it was government pressure on social media companies that directly led to their past posts being censored, let alone that the plaintiffs were in any imminent danger of the government somehow censoring their future posts.
Social media sites clearly make their own decisions about how to moderate their platforms, regardless of what the US government may call for them to do, Barrett wrote. In fact, as evidence presented in the case showed, online platforms sometimes did the opposite of what the government wanted.
“The platforms continued to exercise their independent judgment even after communications with the defendants began,” Barrett wrote. “For example, on several occasions, various platforms explained that White House officials had flagged content that did not violate company policy.”
Barrett added: “The plaintiffs, without any concrete link between their injuries and the defendants’ conduct, ask us to conduct a review of the years-long communications between dozens of federal officials, across different agencies, with different social-media platforms, about different topics.”
Rather than delve into all that, the court punted. It avoided ruling on whether the government’s communications with social media companies violated the First Amendment. But in so doing, the court didn’t say that they were unconstitutional, either, which effectively means they can continue for now.
The decision doesn’t preclude a future group of plaintiffs from bringing similar claims under different circumstances, which could give the court another opportunity to weigh in on the issue, perhaps even on the merits.
Legal experts said Wednesday that the court’s decision carefully threads the needle.
“The Supreme Court’s decision is a sensible response to a difficult question,” said James Grimmelmann, professor of digital and information law at Cornell University. “It recognizes that platforms are free to set their own content-moderation policies against harmful and deceptive posts. It protects them from government coercion of their moderation decisions, but it also allows them to listen to the government’s views.”
What does this mean for the 2024 elections?
It is up to the Biden administration whether to fully revive the lines of communication it had temporarily paused during the litigation.
The FBI resumed sharing some threat information with social media companies earlier this year, prior to the Supreme Court’s decision, CNN has previously reported. In light of Wednesday’s ruling, however, the administration could potentially restore much more of its information-sharing infrastructure.
“The Supreme Court’s decision is the right one, and it helps ensure the Biden administration can continue our important work with technology companies to protect the safety and security of the American people, after years of extreme and unfounded Republican attacks on public officials who engaged in critical work to keep Americans safe,” White House press secretary Karine Jean-Pierre said in a statement following the court’s decision.
Despite the ruling’s narrow nature, some legal experts said it is a significant win for those trying to ensure the 2024 race isn’t disrupted by false claims.
“There are essential moments when our government should be allowed, even encouraged, to contact private companies like social-media platforms and provide factual information to them, especially when issues of foreign interference, election integrity, national security and encouragement of violence crop up online and pose real-world threats,” said Nora Benavidez, senior counsel at the civil rights and consumer advocacy group Free Press.
“Of course, we should be wary of government intrusions into private speech,” Benavidez added. “But the Biden administration’s efforts to fight misinformation do not amount to censorship; rather, they are efforts to make platforms aware of the potential public harms that could result from the unvetted spread of falsehoods via their networks.”