Google is officially set to confront OpenAI’s ChatGPT — and soon.
The tech titan, which has had a stranglehold on internet search for as long as most web users can remember, formally announced Monday that it will roll out Bard, its experimental conversational AI service, in the “coming weeks.”
The announcement comes just a day before Microsoft (MSFT), which is working to integrate ChatGPT-like technology into its products, including its search engine Bing, is set to hold an event with OpenAI at its Washington state headquarters.
“The internet search wars are back,” wrote the Financial Times’ Richard Waters in a piece published Monday, noting that AI has “opened the first new front in the battle for search dominance since Google fended off a concerted challenge from Microsoft’s Bing more than a decade ago.”
A version of this article first appeared in the “Reliable Sources” newsletter.
But the rapid emergence of the technology has also raised serious ethical questions, especially since it is being taken to market at a breakneck speed.
“We are reliving the social media era,” said Beena Ammanath, who leads Trustworthy Tech Ethics at Deloitte and is the executive director of the Global Deloitte AI Institute.
Ammanath said that “unintended consequences” accompany every new technology, and she fully expects the same with AI chatbots unless significant precautions are taken. For now, she doesn’t see the guardrails in place to rein in the nascent technology. Instead, Ammanath compared the swift deployment of AI to companies “building Jurassic Park, putting some danger signs on the fences, but leaving all the gates open.” Yes, there is some acknowledgment of the dangers the technology poses. But it’s not enough, given the risks.
Ammanath stressed that computer scientists working on AI have yet to solve for bias, a years-long problem, as well as other worrisome issues that plague the technology. One major problem is that AI bots cannot separate truth from fantasy.
“The challenge with new language models is they blend fact and fiction,” Ammanath told me. “It spreads misinformation effectively. It cannot understand the content. So it can spout out completely logical sounding content, but incorrect. And it delivers it with complete confidence.”
That’s effectively what happened last month when CNET was forced to issue corrections on a number of articles, including some it described as “substantial,” after using an AI-powered tool to help the news outlet write dozens of stories. And in its wake, other outlets like BuzzFeed are already embracing the robot-writing technology to help them generate content and quizzes.
“This is a new dimension that generative AI has brought in,” Ammanath added.
In announcing that Google will roll out its AI soon, chief executive Sundar Pichai stressed that “it’s critical that we bring experiences rooted in these models to the world in a bold and responsible way.” And Pichai underscored that Google is “committed to developing AI responsibly.”
But it’s hard to deny that the company, under tremendous pressure from investors after ChatGPT stormed onto the scene, is rushing to deploy its product to the market as quickly as possible. In an internal note to staff, Pichai himself said all hands are on deck and that the company will be “enlisting every Googler to help shape Bard and contribute through a special company-wide” event he said will have “the spirit of an internal hackathon.”
“We’ve been approaching this effort with an intensity and focus that reminds me of early Google,” Pichai wrote, “so thanks to everyone who has contributed.”
But it’s clear that both Google and Microsoft, two of the most valuable and pioneering companies on the web, understand well that AI technology has the power to reshape the world as we know it. The only question is whether they will follow Silicon Valley’s “move fast and break things” maxim that has caused so much turmoil in the past.