

CNN  — 

The emergence of ChatGPT and now GPT-4, the artificial intelligence interface from OpenAI that will chat with you, answer questions and passably write a high school term paper, is both a quirky diversion and a harbinger of how technology is changing the way we live in the world.

After reading a report in The New York Times by a writer who said a Microsoft chatbot professed its love for him and suggested he leave his wife, I wanted to learn more about how AI works and what, if anything, is being done to give it a moral compass.

I talked to Reid Blackman, who has advised companies and governments on digital ethics and wrote the book “Ethical Machines.” Our conversation focuses on the flaws in AI but also recognizes how it will change people’s lives in remarkable ways. Excerpts are below.

What is AI?

WOLF: What is the definition of artificial intelligence, and how do we interact with it every day?

BLACKMAN: It’s super simple. … It goes by a fancy name: machine learning. All it means is software that learns by example.

Everyone knows what software is; we use it all the time. Any website you go on, you’re interacting with software. And we all know what it is to learn by example, right?

We do interact with it every day. One common way is in your photos app. It can recognize when it’s a picture of you or your dog or your daughter or your son or your spouse, whatever. And that’s because you’ve given it a bunch of examples of what those people or that animal looks like.

So it learns, oh, that’s Pepe the dog, because you’ve given it all these examples, that is to say, photos. And then when you upload or take a new picture of your dog, it “recognizes” that it’s Pepe. It puts it in the Pepe folder automatically.
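
To make “learning by example” concrete, here is a minimal sketch of that photos feature. It is not how any real photos app works; it assumes each photo has already been reduced to a short list of numbers (a feature vector, invented here) so a simple model can learn from tagged examples.

```python
# A toy "learn by example" photo classifier. This is NOT how a real photos
# app works; it assumes each photo has already been reduced to a small
# numeric feature vector (the numbers below are invented).
from sklearn.neighbors import KNeighborsClassifier

# Labeled examples: feature vectors for photos we've already tagged.
example_photos = [
    [0.9, 0.1, 0.2],  # tagged "Pepe" (the dog)
    [0.8, 0.2, 0.1],  # tagged "Pepe"
    [0.1, 0.9, 0.8],  # tagged "daughter"
    [0.2, 0.8, 0.9],  # tagged "daughter"
]
labels = ["Pepe", "Pepe", "daughter", "daughter"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(example_photos, labels)      # the "learning by example" step

new_photo = [0.85, 0.15, 0.2]          # a new, untagged picture
print(model.predict([new_photo])[0])   # -> "Pepe": filed automatically
```

With only labeled examples to consult, the model files the new photo with whichever example it most resembles, which is the learning-by-example idea in miniature.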

Your phone knows a lot about you

WOLF: I’m glad you brought up the photos example. It is actually kind of frightening the first time you search for a person’s name in your photos and your phone has learned everybody’s name without you telling it.

BLACKMAN: Yeah. It can learn a lot. It pulls information from all over the place. In many cases, we’ve tagged photos, or you may have at one point tagged a photo of yourself or someone else, and it just goes from there.

Self-driving cars. AI?

WOLF: OK, I’m going to list some things and I want you to tell me if you feel like that’s an example of AI or not. Self-driving cars.

BLACKMAN: It’s an example of an application of AI or machine learning. It’s using lots of different technologies so that it can “learn” what a pedestrian looks like when they’re crossing the street. It can “learn” what the yellow lines in the street are, or where they are. …

When Google asks you to verify that you’re a human and you’re clicking on all those images – yes, these are all the traffic lights, these are all the stop signs in the pictures – what you’re doing is training an AI.

You’re taking part in it; you’re telling it that these are the things you need to look out for – this is what a stop sign looks like. And then they use that stuff for self-driving cars to recognize that’s a stop sign, that’s a pedestrian, that’s a fire hydrant, etc.
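
Those clicks become labels for training data. Here is a hedged sketch of one plausible aggregation step, majority-voting many users’ answers for the same image tile; the rule and the data are hypothetical, since Google’s actual pipeline is not public.

```python
# A sketch of how many users' CAPTCHA clicks could become training labels.
# The aggregation rule (a simple majority vote) and the data are
# hypothetical; Google's actual pipeline is not public.
from collections import Counter

# Each entry: the labels different users assigned to the same image tile.
user_clicks = {
    "tile_01": ["stop sign", "stop sign", "stop sign", "fire hydrant"],
    "tile_02": ["pedestrian", "pedestrian", "stop sign"],
}

training_labels = {
    tile: Counter(votes).most_common(1)[0][0]
    for tile, votes in user_clicks.items()
}
print(training_labels)
# {'tile_01': 'stop sign', 'tile_02': 'pedestrian'}
# Majority labels like these could then train a vision model by example.
```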

Social media algorithms. AI?

WOLF: How about the algorithm, say, for Twitter or Facebook? It’s learning what I want and reinforcing that, sending me things that it thinks I want. Is that an AI?

BLACKMAN: I don’t know exactly how their algorithm works. But what it’s probably doing is noticing a certain pattern in your behavior.

You spend a particular amount of time watching sports videos or clips of stand-up comedians or whatever it is, and it “sees” what you’re doing and recognizes a pattern. And then it starts feeding you similar stuff.

So it’s definitely engaging in pattern recognition. I don’t know whether it’s strictly speaking a machine learning algorithm that they’re using.
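
The pattern-recognition loop Blackman describes can be sketched in a few lines. This toy recommender just tallies watch time per category and serves more of the favorite; no real platform’s algorithm is this simple, and the data here is invented.

```python
# A toy recommender built on the pattern described above: tally how long a
# user spends on each category, then serve more of the top one. Entirely
# hypothetical; no real platform's algorithm is this simple.
from collections import defaultdict

watch_history = [            # (category, seconds watched), invented data
    ("sports", 120), ("comedy", 45), ("sports", 300), ("news", 30),
]

seconds_by_category = defaultdict(int)
for category, seconds in watch_history:
    seconds_by_category[category] += seconds

favorite = max(seconds_by_category, key=seconds_by_category.get)
print(f"Feed more: {favorite}")  # -> "Feed more: sports"
```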

Are the creepy stories something to worry about?

WOLF: We’ve heard a lot in recent weeks about ChatGPT and about Sydney, the AI that essentially tried to get a New York Times writer to leave his wife. These kinds of strange things are happening when AI is allowed out into the wild. What are your thoughts when you read stories like that?

BLACKMAN: They feel a little bit creepy. I guess The New York Times journalist was unsettled. Those things could just be creepy and relatively harmless. The question is whether there are applications, accidental or not, in which the output turned out to be dangerous in some way or other.

For instance, not Microsoft Bing, which is what The New York Times journalist was talking to, but another chatbot once responded to the question, “Should I kill myself?” with (essentially), “Yes, you should kill yourself.”

So, if people go to this thing and ask for life advice, you can get pretty harmful advice from it. … It could turn out to be really bad financial advice. Especially because these chatbots are notorious – I think that’s the right word – for outputting false information.

In fact, its developers, OpenAI, just say: This thing will make things up sometimes. If you are using it in certain kinds of high-stakes situations, you can get misinformation easily. You can use it to autogenerate misinformation, and then you can start spreading that around the internet as much as you can. So, there are harmful applications of it.

What can we guess AI will look like in the future?

WOLF: We’re at the beginning of interacting with AI. What’s it going to look like in 10 years? How ingrained in our lives is it going to be in some number of years?

BLACKMAN: It already is ingrained in our lives. We just don’t always see it, like the photo example. … It’s already spreading like wildfire. … The question is, how many cases will there be of harm or wronging people? And what will be the severity of those wrongs? That we don’t know yet. …

Most people, certainly the average person, didn’t see ChatGPT around the corner. Data scientists? They saw it a while back, but the rest of us didn’t see it until something like November, I think, when it was released.

We don’t know what’s gonna come out next year, or the year after that, or the year after that. Not only will there be more advanced generative AI, there’s also going to be AI for which we don’t even have names yet. So, there’s a tremendous amount of uncertainty.

What kinds of human jobs will AI displace?

WOLF: Everybody had always assumed that the robots would come for blue-collar jobs, but the recent iterations of AI suggest maybe they’re going to come for the white-collar jobs – journalists, lawyers, writers. Do you agree with that?

BLACKMAN: It’s really hard to say. I think there are going to be use cases where, yeah, maybe you don’t need that more junior writer. It’s not at the level of being an expert. At best, it performs as a novice performs.

So you’ll get maybe a really good freshman English essay, but you’re not gonna get an essay written by, you know, a proper scholar or a proper writer – someone who’s properly trained and has a ton of experience. …

It’s sort of the rough-draft stuff that will probably get replaced. Not in every case, but in many. Certainly in things like marketing, where businesses are going to be looking to save some money by not hiring that junior marketing person or that junior copywriter.

No concept of or interest in truth?

WOLF: AI can also reinforce racism and sexism. It doesn’t have the sensitivity that people have. How can you improve the ethics of a machine that doesn’t know better?

BLACKMAN: When we’re talking about things like chatbots and misinformation or just false information, these things have no concept of the truth, let alone respect for the truth.

They are just outputting things based on certain statistical probabilities of what word or series of words is most likely to come next in a way that makes sense. That’s the core of it. It’s not truth tracking. It doesn’t pay attention to the truth. It doesn’t know what the truth is. … So, that’s one thing.
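
That “most likely next word” idea can be shown with a toy sampler. Real language models are vastly larger, but the core move is the same, and nothing in it checks the output against reality. The probabilities below are made up for illustration.

```python
# A toy next-word sampler. Real language models are vastly larger, but the
# core move is the same: pick the next word by learned probability, with no
# check against reality. These probabilities are made up for illustration.
import random

next_word_probs = {
    ("capital", "of"): {"France": 0.5, "Atlantis": 0.3, "Mars": 0.2},
}

def sample_next(prev_two):
    dist = next_word_probs[prev_two]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# Nothing here "knows" Atlantis isn't real; it is just a likely-enough word.
print("the capital of", sample_next(("capital", "of")))
```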

Why is AI biased, racist and sexist? Because it gets data that tells it to be that way

BLACKMAN: The bias issue, or discriminatory AI, is a separate issue. … Remember: AI is just software that learns by example. So if you give it examples that contain or reflect certain kinds of biases or discriminatory attitudes … you’re going to get outputs that resemble that.

Somewhat infamously, Amazon created AI resume-reading software. They get tens of thousands of applications every day. Getting a human, or a series of humans, to look at all these applications is phenomenally time consuming and expensive.

So why don’t we just give the AI all these examples of successful resumes? This is a resume that some human judged to be worthy of an interview. Let’s get the resumes from the past 10 years.

And they gave those to the AI so it could learn by example … what the interview-worthy resumes look like versus the non-interview-worthy ones. What it learned from those examples – contrary to the intentions of the developers, by the way – is: we don’t hire women around here.

When you uploaded a resume from a woman, it would red-light it, as opposed to green-lighting the same resume from a man, all else being equal.

That’s a classic case of biased or discriminatory AI. It’s not an easy problem to solve. In fact, Amazon worked on this project for two years, trying various kinds of bias-mitigation techniques. And at the end of the day, they couldn’t sufficiently de-bias it, and so they threw it out. (Here’s a 2018 Reuters report on this.)
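
A toy model makes the mechanism plain: train on biased historical decisions and the bias comes out the other side. The features and data below are invented for illustration, not Amazon’s actual system.

```python
# A toy demonstration of a model inheriting bias from its examples.
# Features: [years_experience, gender_linked_signal]. The historical labels
# are deliberately biased; data and features are invented, not Amazon's.
from sklearn.linear_model import LogisticRegression

resumes = [
    [5, 0], [7, 0], [6, 0],  # no gender-linked signal: interviewed
    [5, 1], [7, 1], [6, 1],  # identical credentials with it: rejected
]
past_decisions = [1, 1, 1, 0, 0, 0]  # 1 = interview, 0 = reject

model = LogisticRegression().fit(resumes, past_decisions)

# Two candidates identical except for the gender-linked feature:
print(model.predict([[6, 0], [6, 1]]))  # -> [1 0], the learned "rule"
```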

This is actually a success story, in some sense, because Amazon had the good sense not to release the AI. … There are many other companies who have released biased AIs and haven’t even done the investigation to figure out whether it’s biased. …

The work that I do is helping companies figure out how to systematically look for bias in their models and how to mitigate it. You can’t just leave it to the data scientists or the developers alone. They need organizational support to do this, because sufficiently de-biasing an AI requires a diverse range of experts to be involved.

Yes, you need data scientists and data engineers. You need those tech people. You also need people like sociologists, attorneys, especially civil rights attorneys, and people from risk. You need that cross-functional expertise because solving or mitigating bias in AI is not something that can just be left in the technologists’ hands.
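
One concrete piece of that systematic work can be shown in code: comparing a model’s selection rates across groups, a demographic-parity-style check. Real audits involve many metrics plus legal and domain review; the numbers here are hypothetical.

```python
# One concrete audit step: compare the model's selection rates across
# groups (a demographic-parity-style check). The decisions are hypothetical.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = the model recommended an interview, 0 = it did not.
decisions_men = [1, 1, 0, 1, 1, 0, 1, 1]
decisions_women = [0, 1, 0, 0, 1, 0, 0, 0]

rate_m = selection_rate(decisions_men)    # 0.75
rate_w = selection_rate(decisions_women)  # 0.25

# The "four-fifths rule" from US employment guidance is one common threshold.
if rate_w / rate_m < 0.8:
    print(f"Possible adverse impact: {rate_w:.2f} vs. {rate_m:.2f}")
```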

What should the government do? Protect us from the worst things AI can do at scale

WOLF: What is the government role, then? You pointed to Amazon as an ethics success story. I think there aren’t a lot of people out there who would hold up Amazon as the absolute most ethical company in the world.

BLACKMAN: Nor would I. I think they clearly did the right thing in that case. But that might be against the backdrop of a bunch of not-so-good cases.

I don’t think there’s any question that we need regulation. In fact, I wrote an op-ed in The New York Times … where I highlighted Microsoft as being historically one of the biggest supporters of AI ethics. They’ve been very vocal about it, taking it very seriously.

They have been internally integrating an AI ethical risk program in a variety of ways, with senior executives involved. But still, in my estimation, they rolled out their Bing chatbot way too quickly, in a way that completely flouts five of the six principles they say they live by.

The reason, of course, is that they wanted market share. They saw an opportunity to really get ahead in the search game, which they’ve been trying to do for many years with Bing and failing against Google. They saw an opportunity with a potentially large financial windfall for them. And so they took it. …

What this shows us, among other things, is that businesses can’t self-regulate. When there are massive dollar signs around, they’re not going to do it.

And even if one company does have the moral backbone to refrain from doing ethically dangerous things, hoping that most companies, let alone all companies, will do this is a terrible strategy at scale.

We need government to be able to at least protect us from the worst kinds of things that AI can do.

For instance, discriminating against people of color at scale, or discriminating against women at scale, people of a certain ethnicity or a certain religion. We need the government to say certain kinds of controls, certain kinds of processes and policies need to be put in place. It needs to be auditable by a third party. We need government to require this kind of thing. …

You mentioned self-driving cars. What are the risks there? Well, bias and discrimination aren’t the main ones; it’s killing and maiming pedestrians. That’s high on my list of ethical risks with regard to self-driving cars.

And then there are all sorts of use cases. We’re talking about using AI to deny or approve mortgage applications or other kinds of loan applications; using AI, as in the Amazon case, to decide whom to interview; using AI to serve people ads.

Facebook served ads for houses to buy to White people and houses to rent to Black people. That’s discriminatory. It’s part and parcel of having White people own the capital and Black people rent from White people who own the capital. (ProPublica has investigated this.)

The government’s role is to help protect us from, at a minimum, the biggest ethical nightmares that can result from the irresponsible development and deployment of AI.

Is regulation happening?

WOLF: What would the structure of that be in the US or the European government? How can it happen?

BLACKMAN: The US government is doing very little around this. There’s talk of various attorneys looking into potentially discriminatory or biased AI.

Relatively recently, the attorney general of California asked all hospitals to provide an inventory of where they’re using AI. This is the result of it being fairly widely reported that an algorithm used in health care recommended that doctors and nurses pay more attention to White patients than to sicker Black patients.

So it’s bubbling up. It’s mostly at the state-by-state level at this point, and it’s barely there.

Currently in the US government, there’s a bigger focus on data privacy. There’s a bill floating around that may or may not be passed that is supposed to protect the data privacy of American citizens. It’s not clear whether that’s gonna go through, and if it does, when it will.

We are way behind the European Union … (which) has what’s called the GDPR, the General Data Protection Regulation. That’s about making sure that the data privacy of European citizens is respected.

They also have, or it looks like they’re about to have, what’s called the AI Act. … That has been going around, through the legislative procedure of the EU, for several years now. It looks like it’s on the cusp of being passed.

Their approach is similar to the one that I articulated earlier, which is they are looking out for the high-risk applications of AI.

These things also have potential to change lives in incredible ways

WOLF: Should people be more excited about or more afraid of machines and software that learn by example right now?

BLACKMAN: There’s reason for excitement. There’s reason for concern.

I’m not a Luddite. I think there are potentially tremendous benefits from AI. Even though it standardly, or at least often, produces discriminatory, biased outputs, there’s the potential, with increased awareness, for bias to be an easier problem to surface and solve in AI than it is with human hiring managers. There are lots of potential benefits to businesses, to citizens, etc.

You can be excited and concerned at the same time. You can think that this is great. We don’t want to completely hamper innovation. I don’t think regulation should say no one do AI, no one develop AI. That would be ridiculous.

We also have to do it, if we’re going to stay economically competitive. China is certainly pouring tons of money into artificial intelligence. …

That said, you can do it, if you like, recklessly or you can do it responsibly. People should be excited, but also equally passionate about urging government to put in the appropriate regulations to protect citizens.