A public feeding frenzy over artificial intelligence may encourage some companies to make hyped-up claims about their use of AI or what the technology can deliver — but they do so at their own peril, according to the chair of the Securities and Exchange Commission.
Publicly traded companies that misleadingly or untruthfully promote their use of artificial intelligence risk engaging in “AI-washing” that can harm investors and run afoul of US securities law, said SEC Chair Gary Gensler in a speech on Tuesday.
“We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims,” Gensler told an audience at Yale Law School. “If a company is raising money from the public, though, it needs to be truthful about its use of AI and associated risk.”
Instead of disclosing those risks using “boilerplate” language about AI, Gensler said, executives should consider whether artificial intelligence plays a significant part in a company’s business, including its internal operations, and craft specific disclosures that speak to those risks.
They also shouldn’t lie about whether they use an AI model or how they use AI in specific applications, Gensler added.
Gensler’s warnings about AI-washing reflect a growing push by federal agencies to stress that many of the country’s existing laws already apply to artificial intelligence, even as many policy experts call for new regulations on the technology.
The Federal Trade Commission, for example, has repeatedly warned that artificial intelligence stands to “turbocharge” scams and fraud, while stressing that the agency stands ready to apply US consumer protection and antitrust law to guard against some AI-related harms.
“Our staff has been consistently saying our unfair and deceptive practices authority applies, our civil rights laws, fair credit, Equal Credit Opportunity Act, those apply,” FTC Commissioner Alvaro Bedoya told House lawmakers last year. “There is law, and companies will need to abide by it.”
In a similar fashion, the SEC has ample authority to go after certain financial crimes linked to AI. One such crime would be the intentional use of AI to facilitate securities fraud, Gensler said Tuesday.
The SEC could target those who deploy AI with reckless or knowing disregard for the risks it poses to investors, Gensler said. He said the SEC could also investigate those who place fake orders in violation of securities law, or investment advisers who place their own interests ahead of their clients’.