Editor’s Note: Linda Thomas-Greenfield is the US ambassador to the United Nations. She served as the assistant secretary of state for African affairs, where she led the bureau focused on the development and management of US policy toward sub-Saharan Africa, and as director general of the Foreign Service and director of human resources. The views expressed in this commentary are hers. View more opinion at CNN.
In the summer of 1959, Eleanor Roosevelt found herself at a research center just south of Poughkeepsie. There, she met a scientist eager to introduce her to his latest innovation. “The scientist,” Roosevelt wrote in her column, “My Day,” “has taught a machine to play checkers.”
But the machine wasn’t just winning, Roosevelt explained: it was “learning.”
She went on to note that, while the technology may not be used for decades to come, it held the potential to be applied “in the solution of real social and economic problems.” And the questions it raised — about what it means to interact with automation, and what it is that makes us human — “will be answered by the coming generations.”
It’s unsurprising that Roosevelt was interested in the defining traits of humanity, and how to solve the challenges that plague it. After all, a decade before visiting that research center, she had led the drafting process of the Universal Declaration of Human Rights, a document that reaffirmed the fundamental dignity and freedoms of every person, everywhere.
And yet, 76 years after this historic document was adopted, and 65 years after her trip to Poughkeepsie, questions about humanity, machine learning and the intersection of the two still remain.
In recent years, we’ve seen a wave of efforts to understand, leverage and govern the technological descendant of that checkers-playing automaton: artificial intelligence.
From the G7 Hiroshima AI Process initiated in May 2023, to President Joe Biden’s Executive Order on AI issued that October, to the United Kingdom government’s first AI safety summit at Bletchley Park held that November — at which Vice President Kamala Harris outlined the state of AI today, and the promise it holds when used to benefit ordinary people — there have been numerous initiatives to design and deploy technology that not only pushes us forward, but pushes us in the right direction.
Yet in order for AI to truly advance sustainable development and affirm human rights, we need a global approach to AI agreed upon by every single country — not just major global powers.
That is why, a few months ago, the United States embarked upon an ambitious undertaking: a resolution in the United Nations General Assembly — a body where 193 member states are represented — to, for the very first time, create a global approach to AI. Over the course of several months, the resolution picked up 122 co-sponsors, including many from the global south.
And today, it was not only adopted, but adopted by consensus — meaning all member states agreed to adopt the resolution, without going to a vote.
In a moment in which the world seems to agree on little, finding consensus on a common-sense approach to safe, secure and trustworthy AI is a mammoth achievement. But this is also exactly the kind of task the United Nations was designed for, as the body charged with maintaining global peace, upholding human rights and fundamental freedoms, and preserving the planet we all call home.
After all — just as Roosevelt predicted — the benefits of AI are already impacting people across the globe.
Today, the technology is being used to predict earthquakes and hurricanes, helping vulnerable countries prepare for and respond to natural disasters. AI is able to detect and diagnose disease earlier, while telemedicine and virtual assistants can provide health education to those in remote areas. And innovations from plant identification apps to soil monitoring tools help farmers in Africa and around the globe produce more food, more sustainably, for more people.
On the flip side, of course, AI poses challenges that affect us all.
Mis- and disinformation, turbo-charged by AI, threaten to undermine the integrity of democratic processes in a year when countries home to more than half of the world’s population will elect their leaders. Algorithmic bias can deepen societal fissures and enable discrimination against marginalized communities. And even the wonders of generative AI, realized by more and more people since the launch of tools like ChatGPT — to automate tedious tasks, explain complex topics and even create original works — could disrupt the labor force in nearly every industry.
The resolution passed Thursday provides a framework to address these challenges head-on, with a focus on capacity-building to ensure equitable access to the benefits of AI, and equitable protection from its harms. It lays out the steps countries can take to ensure responsible governance, and protect all individuals — including vulnerable individuals — from discrimination, as well as the ways in which the United Nations itself can use AI to advance human rights and sustainable development.
These principles and practices are rooted in the UN’s founding charter and the Universal Declaration of Human Rights. And indeed, there were echoes of Roosevelt’s mission in the one we undertook: Once again, countries big and small, at all levels of development, spoke in one unified voice to reaffirm that AI will be created and leveraged through the lens of freedom and dignity — and reflective of our collective fate.
Now that this resolution has received overwhelming support from member states, our hope is that civil society, local governments, tech companies and academics throw their support behind it, too.
Doing so would reflect the clear global consensus that people should have equitable access to the benefits of AI, including in regions still striving to overcome digital divides; that no entity should use AI to undermine peace or repress human rights, and that even the most well-intentioned of actors need help catching and rooting out vulnerabilities and bias; that private companies driving the rapid spread and evolution of this technology must be responsible when it comes to designing and launching new capabilities.
And so, so much more: that privacy, intellectual property and copyright should be respected; that the risks of disinformation could be mitigated in part through the development of tools, standards and practices to help people authenticate content; and that artificial intelligence systems should be human-centric.
Ultimately, this resolution was a massive first step — but it was also just that: a first step. Now comes the hard work of putting those principles not only to paper, but into practice.
In that research center just south of Poughkeepsie, six and a half decades ago, Eleanor Roosevelt asked “coming generations” to consider what makes us human, and what it means to interact with new and powerful technology. Today, as we celebrate a milestone in realizing the potential of artificial intelligence, it is not on “coming generations,” but on this generation, to continue answering her call — together.