
Preet speaks with Rebecca Heilweil, a tech reporter at Recode by Vox, about the new artificial intelligence model ChatGPT. What will be the implications of conversational AI for students, professionals, and the future of writing and researching?

Stay Tuned in Brief is presented by CAFE and the Vox Media Podcast Network. Please let us know what you think! Email us at letters@cafe.com, or leave a voicemail at 669-247-7338.

References & Supplemental Materials:

  • Rebecca Heilweil, “AI is finally good at stuff, and that’s a problem,” Vox, 12/7/22
  • Frank Bruni, “Will ChatGPT Make Me Irrelevant?” NYT, 12/15/22

Preet Bharara:

From Cafe and the Vox Media Podcast Network, this is Stay Tuned In Brief. I’m Preet Bharara. If you’ve watched the news over the past few weeks, you’ve likely heard about a new chatbot called ChatGPT. It’s an artificial intelligence tool that can answer questions, tell jokes, and even write complete essays. For some, new AI models like ChatGPT represent progress. For others, they are cause for alarm. To get a better sense of how these advances in artificial intelligence will impact our lives, I’m joined by Rebecca Heilweil. She’s a reporter at Recode by Vox who covers emerging technologies and AI. Rebecca, welcome to the show.

Rebecca Heilweil:

Thanks for having me.

Preet Bharara:

So you’re a real person, right? You’re not AI.

Rebecca Heilweil:

I hope so.

Preet Bharara:

How would I know? How would I even be able to confirm that?

Rebecca Heilweil:

That’s a good question.

Preet Bharara:

Okay, so first let’s start with for folks who don’t know and are not following, what is ChatGPT and how in the hell does it work?

Rebecca Heilweil:

So on its face ChatGPT is a chatbot. You go on the internet, you go on the website that hosts it, and you type it questions, you give it instructions, any sort of prompt and it’ll just interact with you and provide a text-based response. So at its core it works like any other chatbot you might encounter on the internet. But unlike a really dense customer service bot that might drive you crazy, ChatGPT gives really, really good and even convincing answers.

Preet Bharara:

And how does it do that? How does it have the information to be able to do that?

Rebecca Heilweil:

So this is the product of years and years of research into artificial intelligence and specifically a kind of artificial intelligence called machine learning. Essentially the way to think about it is, imagine if you fed a machine that was trying to imitate humans lots and lots and lots of data, a lot of text from the internet and you said, “Study this a lot and try to imitate how humans talk, how humans write.” And this is what results from that AI training process. So tons of data collected from the internet turned into a kind of predictive system that basically makes guesses at what humans ought to sound like. So when you’re getting a response from this chatbot, what it’s doing is making a prediction about what it thinks a human would say in response to the question that it’s being asked or the prompt that it’s being told.

Preet Bharara:

So does it work kind of like Siri, except it’s much more intelligent?

Rebecca Heilweil:

Siri is possibly one comparison you could make. Another way to think about it is a really, really smart version or a really powerful version of the Google Smart Compose feature you might have, I don’t know if you’ve ever been typing an email and you see Google will make a guess at how you should finish the email or finish the greeting, it’s sort of like that. Inside the technology, each thing works its own way, but it’s essentially just making guesses based on what it’s seen from other humans.
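
To make that guess-the-next-word idea concrete, here is a minimal, purely illustrative sketch in Python: a toy bigram model that “studies” a few sentences, counts which word tends to follow which, and then predicts the most common follower. The tiny corpus and the names in the code are invented for illustration; ChatGPT itself relies on large neural networks trained on far more text, but the underlying predict-what-comes-next idea is similar.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that predicts the next word by
# counting what followed each word in a tiny, made-up corpus. Real systems
# like ChatGPT use neural networks trained on vastly more data, but the
# core idea of guessing the most likely continuation is the same.

corpus = (
    "the court will hear the case today "
    "the court will issue the ruling today "
    "the jury will hear the evidence today"
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("court"))  # -> "will"
print(predict_next("the"))    # -> "court"
```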

Preet Bharara:

Okay. So who’s responsible for, who developed, who funded ChatGPT?

Rebecca Heilweil:

So ChatGPT was created by this research firm called OpenAI, which is a leading artificial intelligence research organization. They primarily focus on building really advanced artificial intelligence models that could be used for something else and made available for all sorts of different applications. The company itself was created by some of the biggest names in Silicon Valley who have given support to it. That includes Sam Altman, Elon Musk, Peter Thiel, Reid Hoffman. And it was the product of a big debate among a lot of leaders in Silicon Valley over what the purpose of AI should be and whether it’s good or bad and how it might be used to advance humanity basically.

Preet Bharara:

So are they monetizing it in some way or no, because it’s free?

Rebecca Heilweil:

Yes. So the tool that you could use right now is free in the sense that you don’t have to pay for it, but when you are using that tool it’s worth noting you are helping this company make this AI better. They say on their website that the answers you’re providing are helping them improve their artificial intelligence for future editions, essentially.

At the same time they are trying to monetize versions of this. So Microsoft, for example, is using GPT-3, a slightly earlier version of what we just saw released, to improve the coding process and automate aspects of it. So there are certainly plans to monetize it. The owner of OpenAI is predicting that they’re going to have about $1 billion in revenue by 2024 from the AI projects that they’re developing.

Preet Bharara:

Can you say what the goal of all this is? What’s the problem that they’re trying to solve?

Rebecca Heilweil:

I think the way to think about artificial intelligence, or at least this kind of research into artificial intelligence, is that it’s not really aimed at solving one particular problem that we humans encounter in everyday life. It’s more of a goal of building AI that can increasingly do things that humans currently do, and once that model is created, figuring out applications after it exists. So now that ChatGPT exists, people are talking about using it as the next iteration of a search engine and replacing Google with this kind of technology, since it’s much more intelligent and maybe more able to understand the nuances of a question. There is a goal to maybe use it to automate more of computer programming, and that potentially would save development costs for software companies. You could use it to produce texts, produce advertisements. So it’s more thinking about it as we’ve reached the stage where we have this super powerful tool, and now there’s a race to look at ways to apply that tool to different applications in everyday life.

Preet Bharara:

So let’s talk about some mundane or everyday uses. If I went into the chat box and typed in the query, “Can you write a commencement address in the style of Preet Bharara,” would it be able to do that, or is that too obscure?

Rebecca Heilweil:

So before you interviewed me I figured I’d plug in a few questions like that and it was able to imitate you for a different prompt.

Preet Bharara:

It was?

Rebecca Heilweil:

Yeah.

Preet Bharara:

Can you send that to me? Because that could be very handy.

Rebecca Heilweil:

I will send it to you. And I was told by one of your producers that it included some Preet-isms, so you should take a look.

Preet Bharara:

The ChatGPT, has it basically swallowed everything on the internet or some subset? What’s the selection process?

Rebecca Heilweil:

It’s swallowed a lot of the internet. I think OpenAI has given the caveat that I don’t think it’s updated past 2021. So there are certain things it doesn’t know about. It also has some amount of guardrails built in that it’s not just repeating everything it ever saw on the internet with no sense of holding its own tongue, if you will. So it knows that certain things are offensive, it doesn’t know that everything’s offensive, it knows that… You know what I mean? It has some sort of guardrail. It’s not just spitting back everything that it’s ever seen on the internet. But if you are someone who is more well known on the internet, it’s more likely to know you. It’s more likely to understand the quirks of your voice or how you might interact with other people. It is very good at impressions. If you ask it to do Trump impressions or things about Trump, it’ll throw a lot of biglys in there, and it’s not perfect but it gets pretty close. It understands what’s going on in terms of when you ask it to do an impression. So for less famous people it probably isn’t as effective with that, but it certainly knows something.

Preet Bharara:

So if I didn’t do this, but if I had asked it to draft some questions for my interview with you, would that have come out okay?

Rebecca Heilweil:

Yeah, we also tried that one to prep for this interview, just to see what would happen, and it came up with questions, it did not come up with exactly the same questions.

Preet Bharara:

I hope not as good, because that’s depressing.

Rebecca Heilweil:

No, they were more specific to AI. So you can tell this AI doesn’t understand, oh, there’s a podcast for a general audience. It thinks, oh, I’ve been asked to write questions about AI and I’m going to do that, but it’ll use terms that maybe I would not use in everyday conversations with my friends who are not interested in artificial intelligence, if that makes sense. So it showed some amount of not getting it, but it certainly came up with questions. Some of them were like, “How do you see ChatGPT fitting into the larger landscape of AI research and development, and what do you think the future holds for this technology?” It sounds like a question someone might ask.

Preet Bharara:

So a lot of the internet, as I understand it, is full of garbage and bad information. So how does ChatGPT separate true stuff from the fake stuff? So for example, here’s another application that I want to get into with you that has been somewhat controversial. So let’s say I’m a high school sophomore and I have to write a paper on the Civil War, and what high school sophomore has not had to do that? Describe the causes for the Civil War, the war between the states and the consequences, and I query ChatGPT, A, will it do a reasonable job, and B, will it be accurate?

Rebecca Heilweil:

So this is one of the scenarios where it depends on the prompt that you ask it. If you ask it a pretty general question like that you’ll probably get a pretty reasonable response. What I heard from professors that I spoke to about this is, yeah, you may not get an A if you submitted it, but you might get a B and you might get a B+ on that assignment. Where it gets-

Preet Bharara:

For no effort, for no effort.

Rebecca Heilweil:

For no effort. And for what it’s worth, the anti-plagiarism software like Turnitin doesn’t catch any of this. So you don’t have to worry about getting flagged for plagiarism, at least in the short term. But where it gets trickier is if you ask it about subjects that are a little more niche and where it might get a little bit more confused. So the Civil War, there’s probably a lot of stuff about that on the internet. If you’re asking about a symbol in a novel that not many people have read, or its importance to a character in a very niche novel that there’s not much written about on the internet, it might have a lot more trouble. And I think this brings up the second point that you mentioned: this chatbot is not humble, in the sense that it won’t just say, “I don’t know.” Sometimes it will, but sometimes it’ll just make stuff up. And it sounds really confident, really earnest when it says it, and could be pretty convincing. There’s no guarantee that what it’s giving you, especially on more niche topics, is correct, and you might fail your class if you do that.

Preet Bharara:

You recently wrote a piece entitled “AI Is Finally Good at Stuff, and That’s a Problem.” Is the problem what we’re talking about now, the possibility of people cheating on school assignments, or something broader than that?

Rebecca Heilweil:

I think what I was trying to get at is that we’re in this kind of bullshitter era phase for AI, where AI is-

Preet Bharara:

And also politics, probably.

Rebecca Heilweil:

Yeah, yeah, AI is pretty good, pretty convincing, not perfect by any means. Are we prepared as a society to deal with that? I would say my guess is probably most teachers in the United States are not particularly ready or prepared for this new type of cheating. I don’t even know if they would know about this chatbot, depending on how much they’re following the tech news. It’s not clear that we’re necessarily ready for this era of AI. It’s certainly really impressive, and it stands to do a lot of stuff that humans are used to feeling special about being able to do.

Preet Bharara:

Will AI reasonably be able to replace lawyers who draft contracts and agreements?

Rebecca Heilweil:

This is just my own personal take on it, but I feel like there’s a reasonable chance that a lot of that becomes automated, and maybe your job is not writing the contract but looking over it for mistakes or nuances or things like that. Today I asked ChatGPT to write me a public records request, which is really annoying to write, as anyone who’s written one knows. And I was just like, “Write me one for this agency about this topic.” And what it came up with was pretty good. I would still read over it, but it didn’t seem like it was this really badly written document. And I suspect the same is probably true for a lot of professions that have that kind of form-based writing.

Preet Bharara:

So can it replace local reporting?

Rebecca Heilweil:

I think this is where it gets tricky, in that this AI is not particularly good at finding out new information or reporting a story. Journalism, at least the best journalism, finds out things that were not previously known. So if the story that you’re looking for was not freely available on the internet, it’s not clear that an AI like this would be able to produce that reporting. Maybe, to take an optimistic view, you have more time if you’re a local journalist to do more of that original reporting, and then you just input what you found out into an AI that can draft it up in a nice way for you. So I think it certainly stands to automate a good amount of that, depending on how it’s used. But of course that raises real questions about authorship and authenticity and veracity. So there are a lot of questions ahead of us.

Preet Bharara:

Here’s another use that I’ve heard suggested, that some might even find preferable to humans, and that is therapy. And it’s been suggested in various places that young people in particular, if it was reasonably decent, might prefer an artificial intelligence therapist because it’s more private, you’re not talking to an actual human being, so maybe you’ll be more forthcoming, and you can also have a session with an AI therapist any time of day or night, weekend or holiday. It doesn’t matter. Anything in your reporting and in your travels that tells you anything about AI therapy?

Rebecca Heilweil:

I think one thing to keep in mind, and this is on the OpenAI website, is they say, “Do not put personal information in this chatbot.” It is being used for improving that AI, and I think that’s something that with any system like this you really have to be careful, because sure you’re not directly chatting with a human, but it may be worth pondering a little bit more about whether that privacy is actually there and to what extent your very personal thoughts are being fed into a system that is being used to profit in the end.

I think there are certainly applications already of chatbots that perform a little bit of that kind of therapy work. I think like a lot of things, there’s probably a future where you mix two things together, but there are also real questions to be asked about who is liable if the AI doesn’t give particularly good therapy advice or whatever kind of therapy someone’s seeking, who’s accountable if it doesn’t do a good job. I think those are questions that are worth asking.

Preet Bharara:

So we’re at this level of sophistication at the end of 2022. Can you make a prediction about what we’ll be talking about in a year, and the quality and performance of AI in five to 10 years?

Rebecca Heilweil:

Five to 10 sounds pretty tricky, but I think in the next year people will be talking about the next iteration of this OpenAI model, GPT-4, and whether that’s coming and what that’s going to look like. Maybe we’ll see improvements with tone and that little bit of the answers that makes you feel like, oh wait, this might not actually be a human, but I’m not sure, maybe it’ll overcome that. I think generally we’re going to see a lot of conversations about this category of artificial intelligence, which is generative AI, which will be producing not just text, but art and other types of content that we’re used to being produced by humans. And it’s going to raise a lot of questions and challenges about what humans can offload to technology and what they can’t. And I think we’re going to start to see a lot more AI-produced things in our lived environment, whether that’s what we’re reading online or visually looking at or things like that.

Preet Bharara:

Can we expect, given the advance of this technology, politicians and/or government regulators to get involved in some way, or no?

Rebecca Heilweil:

There have been proposals for regulating certain forms of artificial intelligence like facial recognition, but right now it’s kind of a moment for even asking what would be regulated, if anything. Are we asking for transparency and rules that say an AI must reveal itself as AI? Are we asking for privacy rules? I think we’re still working out what regulators should even do about this, or what the problem is that regulators would even be trying to rein in, if that makes sense.

Preet Bharara:

Final question, somewhat esoteric and philosophical. Does something about continuing to develop sophisticated AI erase something about human creativity, or our humanity in general? Some people have raised that issue. Are we at that point, or is that overstated?

Rebecca Heilweil:

I think this is a question a lot of people really disagree on. One of the sources I spoke to for my story, who was an expert in cheating on exams, pointed out that for a while people would always say to you, “Don’t use a calculator in class because you can’t have a calculator in the real world.” And obviously that’s not true. You do have a calculator in the real world, and humans may have gotten less adept at doing calculations in their head, but math hasn’t gone away and computations haven’t gone away. And with this kind of cognitive offloading, being able to put things that we would normally do onto technology, I think the question is whether this is going to take away aspects of human authenticity and uniqueness, or whether it’s going to create new ways of being creative that we previously wouldn’t have been able to pursue because we were busy writing out text. So maybe it’ll make human creativity even better, or maybe it’ll make things worse, but it’s going to go in one of those directions. The world is certainly changing. It’s not going to stay as it was.

Preet Bharara:

I’m going to let you go and I’m going to go ask ChatGPT this question, see what it says about its own future. Rebecca Heilweil, thank you so much for joining us.

Rebecca Heilweil:

Thanks for having me.

Preet Bharara:

For more analysis of legal and political issues making the headlines, become a member of the Cafe Insider. Members get access to exclusive content, including the weekly podcast I co-host with former U.S. Attorney Joyce Vance. Head to cafe.com/insider to sign up for a trial. That’s cafe.com/insider.

If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics and justice. Tweet them to me @PreetBharara with the hashtag #AskPreet. Or you can call and leave me a message at (669) 247-7338. That’s (669) 24-PREET. Or you can send an email to letters@cafe.com. Stay Tuned is presented by Cafe and the Vox Media Podcast Network. The executive producer is Tamara Sepper. The technical director is David Tatasciore. The senior producer is Adam Waller. The editorial producers are Sam Ozer-Staton and Noa Azulai. The audio producer is Nat Weiner, and the Cafe team is Matthew Billy, David Kurlander, Jake Kaplan, Nama Tasha and Claudia Hernandez. Our music is by Andrew Dost. I’m your host, Preet Bharara. Stay Tuned.