
David Autor, economics professor at MIT, is widely regarded as one of the top labor economists in the world. Preet and Autor discuss what artificial intelligence tools will mean for jobs, and how technological innovations have impacted the labor market from the Industrial Revolution until today.

Plus, a U.S. Attorney resigns after investigations by multiple federal watchdog agencies found evidence of ethics violations.

Don’t miss the Insider bonus, where Preet and Autor discuss the value of engineering as a profession in the age of AI. To listen, try the membership for just $1 for one month: cafe.com/insider.

Tweet your questions to @PreetBharara with the hashtag #AskPreet, email us your questions and comments at staytuned@cafe.com, or call 669-247-7338 to leave a voicemail.

Listen to the new season of Up Against The Mob with Elie Honig. 

Stay Tuned with Preet is brought to you by CAFE and the Vox Media Podcast Network.

Executive Producer: Tamara Sepper; Senior Editorial Producer: Adam Waller; Technical Director: David Tatasciore; Audio Producer: Matthew Billy; Editorial Producers: Noa Azulai, Sam Ozer-Staton.

 

REFERENCES & SUPPLEMENTAL MATERIALS: 

Q&A:

  • “Massachusetts US Attorney Rachael Rollins formally resigns in wake of ethics probes,” AP, 5/19/23

INTERVIEW:

  • David Autor’s faculty page at MIT
  • “What if AI could rebuild the middle class?” NPR, 5/9/23
  • “Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society,’” NYT, 3/29/23
  • “ChatGPT is about to revolutionize the economy. We need to decide what that looks like,” MIT Technology Review, 3/25/23
  • “Beltway sniper attacks of 2002,” Britannica

BUTTON: 

  • “Independent bookselling expanded again in 2022, with new and diverse stores opening nationwide,” AP, 5/22/23
  • “Banned in the USA: State Laws Supercharge Book Suppression in Schools,” Pen America, 4/20/23

Preet Bharara:

From CAFE and the Vox Media Podcast Network, welcome to Stay Tuned. I’m Preet Bharara.

David Autor:

If you say, well, what are the big technological transitions of the last couple hundred years, right? There’s the move out of agriculture. There’s the rise of mass production, the computer era, and now the AI era that we’re in.

Preet Bharara:

That’s David Autor. He’s the Ford Professor of Economics at MIT and widely regarded as one of the leading labor economists in the world. His scholarship focuses on the ways in which advances in technology have impacted the labor market from the industrial revolution to the present. Autor’s attention these days is squarely on artificial intelligence and what tools like ChatGPT will mean for jobs up and down the income scale. Autor joins me to discuss which jobs are most at risk from new technologies, why he’s optimistic that AI can help revitalize the middle class, and whether the very notion of human expertise will have value going forward. That’s coming up. Stay tuned. Now let’s get to your questions.

Steve:

Hi, this is Steve in Indiana. I’ve just read two very dramatic stories, one about Ms. Rollins, the US attorney in Boston, who’s about to be brought up on charges, I think for quite a list of violations, taking money, taking gifts, and fixing some things for a preferred political candidate. I’m looking forward to hearing you and maybe Ms. Vance talk about these things. It’s pretty rare, so thank you much. Love your shows.

Preet Bharara:

Well, Steve, thanks for your question. Joyce Vance and I do, on the Insider podcast this week, talk at some length about the transgressions that are set forth in an inspector general report with respect to the US attorney in Massachusetts, who just resigned as of last week because of the seriousness of the allegations. I’m going to correct you on one thing: she’s not being brought up on charges. I’m not sure any laws were broken of a criminal nature, but other laws were, according to the report, including Hatch Act violations with respect to a political fundraiser that Ms. Rollins should not have attended in her capacity as a sitting United States attorney. More egregious are allegations that she favored a political adversary of one of her successors as district attorney in the local community by revealing and disclosing confidential information about the Justice Department’s potential investigation to the local press.

And then, most egregiously, in interviews with investigators, lying about those things. There are also questions that have been raised about her accepting tickets to a Celtics game and some other things as well. And as Joyce and I discussed, from time to time a high-level official at the Justice Department or elsewhere in the government commits violations. At least in this case, she seems to have taken responsibility, and the administration as a whole seems to have taken it very seriously. And I’m glad to see that that office will have new leadership, because those transgressions are inexcusable and cause people to question whether or not justice is being administered and enforced in a fair manner in a very, very important US Attorney’s office that covers the entire state of Massachusetts.

Now, I’ll make one final point, and that is I think it’s worth contrasting the seriousness with which both that person and the Justice Department and the administration took credible allegations of violations of the Hatch Act as compared to Hatch Act violations that were ignored, scoffed at, and laughed at by high-level officials in the Trump administration, including by Kellyanne Conway and others. You’ll recall that part of the Republican National Convention back in the election cycle of 2020 was conducted on the White House lawn, and there had been some disputes about whether or not that was a technical Hatch Act violation. There are reams of evidence that multiple members of the Trump administration committed ethics violations for which there was no accountability, no responsibility taken, and which were, in fact, somewhat celebrated. So it’s sad and unfortunate to see what happened to the US attorney in Massachusetts and the conduct she engaged in, but it’s refreshing to see the accountability.

I’ll be right back with my conversation with David Autor.

THE INTERVIEW:

The AI era has arrived, and it is likely to affect a wide swath of jobs and professions in the coming years. The labor economist David Autor has spent much of his career thinking about the consequences of technological change on society. David Autor, welcome to the show.

David Autor:

Thank you very much.

Preet Bharara:

Before we started recording, I heard the clicking of a keyboard and I thought it was a member of my team, and you said, no, it was you. And I said, if you need to finish some work, you may go ahead, and you said you were communicating with your kids. And I said, seeking some approval: do you text your family, including your children, from within your home when everyone is within the home?

David Autor:

Yes, I definitely do.

Preet Bharara:

And that’s approved. That’s approved and that’s okay conduct?

David Autor:

Oh, no, no texting. I mean, you-

Preet Bharara:

You don’t have to walk upstairs to have the conversation?

David Autor:

It’s so much less intrusive, I mean, than calling, than knocking on the door. It’s the lowest-overhead way of communicating. And I think kids appreciate it. Yeah, I think there’s so many ways that technology has actually made being a parent easier. For example, I bought my kids smartphones when they were, I don’t remember, 10, 12, and I said, I’m going to pay for this for as long as you want, but you will always leave Find My Friends on and I’ll always know where you are, and then I’ll be a less annoying parent because I won’t call you, I won’t be worried, I just need to know where you are. And similarly, I don’t interrupt them in the middle of things. I don’t knock on the door very much. I don’t ring and demand to speak, I just text.

Preet Bharara:

So we have a lot to talk about with respect to technology generally, and with AI specifically. I think it’s right that we are spending so much time talking about it: how it affects the world, how it might affect our humanity, how it affects the arts, how it affects, as you and I will discuss, employment and labor markets. But since we’re talking about kids, I don’t know how old your children are, if you want to share that. Are you noticing some trends in adoption of AI and some of this very, very emergent technology among young people versus older people?

David Autor:

Well, I have one college-aged kid. Two of my kids are done with college, and certainly ChatGPT and generative AI are ubiquitous in what they do, and they’re integrating it in real time, and their instructors are trying to deal with it in real time. I would say actually one thing I’ve noticed generationally in terms of dealing with technology and information is that I graduated high school in 1982, so I’m considerably older than my kids. And I grew up in what I would call an information-scarce environment, where a lot of the goal in school was to find information, memorize information, and track it down. I would go to the library to look something up, and then I would read an article, and then I would look in the Social Sciences Citation Index to figure out who else had cited this article.

Then I would find those articles, and so on. And so it was always about finding authoritative information. But in the present period, we are inundated with information, but it’s of incredibly heterogeneous quality. You don’t really know what’s reliable and what’s fictitious. But I think kids, or young people, have a sense that information is not to be trusted. And so, for example, whenever my kids send me a video of something or some news story, they always source it. They say, oh, it’s from here, but I’ve checked it, it’s probably not BS, because I’ve looked in three other places and it seems to be true. So they know to be skeptical. Whereas I really think older adults grew up in a time when, if something was on TV, it was probably somewhat authoritative; all the networks said the same thing.

And so they were in some sense less skeptical and felt less need to cross-check everything because it was assumed that information was coming from authorities. So I do think people generationally have learned that the scarcity is not information anymore, but it’s reliable, verifiable, factual information. And that’s a real change in the milieu in which we live.

Preet Bharara:

But the quality of information, or its reliability, is going to get less and less with the advent of AI and deep fakes and the like. You just described circumstances in which your children were skeptical and then they could do some research and figure out if they were right to be skeptical or wrong to be skeptical. Are we rapidly getting to the point where confirming that something is accurate and genuine is going to be almost impossible?

David Autor:

I think it’s going to be very difficult.

Preet Bharara:

But not impossible. You have resisted my word impossible.

David Autor:

Well, I think we’ll develop defenses against it, but I think it’s very challenging. So here’s the thought experiment: imagine the infamous Access Hollywood video surfaced today, with Donald Trump saying-

Preet Bharara:

He would never concede that that was him. Right?

David Autor:

Exactly. He would say, oh, it’s a deep fake, it’s a deep fake. And people would have no way to confirm or deny that. So they would believe him, or at least people who wanted to believe him would believe him. So yeah, and that’s even without AI, you don’t need any AI for him to say that, but the existence of AI makes it credible that it could be true.

Preet Bharara:

Right. But it’s even worse than that, isn’t it? Because let’s say you had an expert, let’s say it was you or one of your colleagues at MIT and they had what the consensus would consider a good technology for detecting the accuracy or genuine nature of a video or a photograph or an audio clip, who cares? All Donald Trump has to say is that guy’s fake news. And so even if you have a general consensus that there’s a detection capability, why do people have to believe that, isn’t that part of the problem going forward?

David Autor:

Lots of people dispute what’s true, what’s false, but nobody wants to admit that what they believe is false. And so everyone’s invested in saying that what they’re telling you is true, whether or not it’s really true. And so I do think if you could disprove it, many people would accept that disproof. The question is, how do you do it? And that’s the challenge. It used to be, well, look at a photograph. A photograph is proof, or look at the video or listen to the audio. But now we know, of course, that can be simulated. So we lose that. So how do we get it back? So I have an idea about this, it’s pretty speculative, which is, if I were a public figure, or more of a public figure, I would essentially always have cameras and microphones around me, with an NFT that verified that whatever I said or did was being recorded and in some sense could be reliably authenticated as having occurred in the presence of my non-fungible token, which proved uniquely that it was me.

And therefore anything that showed video or audio with me that didn’t have that, I could say, well, that didn’t happen, that’s false. This is provably false. And so you would have to develop a defensive posture.

Preet Bharara:

Right. But there are going to be times, it’s like the defensive aspect of a police body cam, right?

David Autor:

Exactly right.

Preet Bharara:

But there are going to be times I imagine, I don’t know how private a person you are.

David Autor:

Exactly. Yeah. It’s not that I want everything recorded.

Preet Bharara:

There are going to be times when you don’t want to be recorded, and if there are times that you’re not going to be recorded, doesn’t that defeat the protection you’ve described?

David Autor:

It depends on the circumstances. If I’m shown on a television camera appearing to say something I never said, presumably I would’ve done that while I was being recorded. But I agree, this is not simple. But I do think with technologies, we often are using technologies for defense against the technologies we create for offense. I mean, that’s the nature of warfare. Constantly you have offensive technologies, then you have defensive technologies that are built to thwart them. We have missiles, we have things that shoot down missiles. We have malware, we have software that attempts to detect malware. So I think looking forward, we’re going to be using a lot of AI to protect ourselves against AI, and I think that’s the only tool that will be suitable for that purpose.

Preet Bharara:

Yeah. Can regulation accomplish some part of maintaining authenticity? Can we require people, or at least corporations and other significant stakeholders in the economy to make declarations about things to say that they’re fake or not fake? Or is that unworkable?

David Autor:

I think that’s pretty difficult because any entity can create fake information and they don’t need to promulgate it, they don’t need a return address. So I think you can regulate corporations and organizations, certainly you could say that to Google, you can say that to Facebook and you could get them to comply. But there are so many other actors with access to these technologies, and in some sense, they’re not bound, they’re not licensed. Yeah, you can come after them if you can, it’s illegal to shout fire in a crowded theater, anyone could do it, but presumably they’d be punished. But in many cases, many of these things don’t have a return address. So I think that’s going to be quite difficult. So I think this is going to be a pretty decentralized problem because bad information will emanate from many, many sources. We’ll get better at being skeptical of it.

So I don’t think people would react to it in the same way now as they would 10 years ago, when they didn’t know deep fakes existed. But I do think we’re going to have to develop defensive technologies that allow us to thwart misinformation and fake information. And I don’t think there’s a simple solution to that. But as with many such things, we’re going to have to figure out a defensive posture that, if not an equal of the offensive posture we have created with these technologies, at least offers some protection against it.

Preet Bharara:

What do you think of the people, and maybe you’re among this group, who have suggested there should be a pause in the development of AI technology until we figure out what the hell’s going on and at least consider what regulations are appropriate? Is that the right way to go about it? And even if it is, is it even possible?

David Autor:

Yeah. So I don’t think it’s even possible.

Preet Bharara:

Finally you’re endorsing and embracing the idea of impossibility.

David Autor:

I think we need to regulate it, for sure. And the people who signed the letter saying we should have a six-month pause, I am with them in terms of the concerns that they raised, which are extremely valid and scary. The ability to actually pause, it’s not clear what that means. In a competitive environment, if everyone says, okay, everybody go home, we’re all going to take the weekend off, nobody practice for the track meet on Monday morning, well, you can be sure people will be practicing in the privacy of their backyard. There’s no way you can stop them from advancing during the so-called hiatus. And of course, even if we in the US all agreed to do that, we still have adversaries who would not be doing that.

Preet Bharara:

You don’t think North Korea would abide by the pause?

David Autor:

I’m going to put my money on them not abiding, nor the People’s Republic of China. And this is the unfortunate and scary competitive logic of these things. Even if… I think Google was actually much more hesitant than OpenAI in releasing powerful chatbots.

Preet Bharara:

But then they had no choice, competitively they had no choice.

David Autor:

OpenAI did it, they had to do it too. And then so did Facebook, and so on. So yeah, I don’t like the phrase the genie is out of the bottle, but I think it applies in this case.

Preet Bharara:

But there you’ve used it.

David Autor:

I have used it, exactly. And unfortunately, this is where you say, well, how have we dealt with other dangerous technologies that we have created? And there are a variety of answers. One, of course, is the case of nuclear weapons. That was actually, as it turns out, not as hard a problem as some others, because they’re extremely expensive, they’re extremely detectable, they take a long time. And so we can figure out where they are. No one can just build a nuclear weapon at home and deploy it. So that was somewhat controllable. But of course, AI doesn’t have that feature; it’s not at all centralizable. It costs a lot of money to train an AI model, but once it’s trained-

Preet Bharara:

And it’s made available-

David Autor:

Then it can be run on a lightweight computer. So it may cost dozens or hundreds of millions of dollars to build, but then to actually operate takes a few thousand dollars. So this is a thought puzzle I give myself: imagine today is September 12th, 2001, and there has just been this horrific terrorist attack on the US homeland. I know you were in New York at that time. And it was done through asymmetric actors using inexpensive technology. And if we were sitting there then, having conversations, what is the likelihood that the next 22 years would elapse without something similarly catastrophic happening in the United States? I know I would’ve said, oh, zero, right?

Preet Bharara:

I would’ve said zero too. I’ve thought about this and talked about this for years and years. I would’ve said zero.

David Autor:

Oh, okay, good.

Preet Bharara:

Yeah.

David Autor:

So it’s an interesting question. How did we do that, right? Because terrorism is a lot like AI: it’s inexpensive, it’s decentralized, and there’s just an infinite set of potential actors. And yet somehow we’ve managed to contain it over two decades, even though it’s very, very different from nuclear weaponry, where that would be, in theory, much easier to do. So the fact that we have done that seemingly should give us some hope that we are better at this than we recognize.

Preet Bharara:

That’s super interesting. I just want to pause on that for one second, because of the way I’ve thought about 9/11 and the lack of a repeat. If we change the question a little bit, or maybe I’ll change my answer: what’s the likelihood that over the next 22 years there will be something as catastrophic as 9/11 perpetrated on American soil? My answer wouldn’t be 100%, or, if you changed the way you asked the question, 0%; it would be some medium-level number. But what I’ve thought about is, what’s the likelihood that we won’t have one-off attacks, a backpack bomb on the subway, an Al-Qaeda sniper loose at the train station, the kinds of things that can’t bring a city to the standstill it was brought to on 9/11, but whose random terror and fear can paralyze it all the same.

I remember the Malvo father and son who were snipers in the DC area, and I didn’t want to go fill my car up with gas in New York because there were these snipers 200 miles to the south, and that’s not as hard as taking down two iconic buildings. And we saw none of that either. And I’ve never understood the reason for that.

David Autor:

Yeah. And most of the terrorism we face in the United States is domestic terrorism that we essentially condone or don’t attempt to contain. So we are definitely facing a lot of risk from people with dangerous weapons, but they’re homegrown and we’ve made it legal for them to be that dangerous.

Preet Bharara:

I mean, I’d like to think part of the reason it hasn’t happened again is law enforcement intel capability. We ratcheted it up, and I was part of that infrastructure for a while.

David Autor:

Yeah, massively. I mean, yes.

Preet Bharara:

And also, we’re a little bit far afield, but it is super interesting. People who are experts on terrorism will tell you that when it comes to America, and I’m glad about this “lack of imagination” on the part of people who hate America and want to kill Americans. Some experts say this, and it may or may not be true: in the wake of 9/11, people didn’t want to do something small. If you want to bring death to America, the standard now became 9/11, and a couple of backpack bombs, if you had, like some of these people who wage jihad against America do, delusions of grandeur, seemed small and not the kind of thing to aspire to. So there’s a certain terrorist psychology or mentality that some people refer to and maybe have relied on for there not to be the same kind of attacks that, even if lesser than 9/11, would still have been devastating.

David Autor:

That’s an interesting point.

Preet Bharara:

So you’ve studied labor markets and the effect of technology on the economies of the world generally, and on labor markets in particular. So before we get to AI and what this moment means for us, can we go back to the industrial revolution? We don’t talk about the industrial revolution enough.

David Autor:

Absolutely.

Preet Bharara:

On the podcast, certainly, and in general in the world. Just quickly give us an historian’s account of the ways in which the industrial revolution changed economies generally and disrupted and upended labor markets in particular.

David Autor:

Very happy to do that. Thanks for the question. So prior to the industrial revolution, or prior to mass production, most things were made by artisans. Artisans were people who had a specific skillset that allowed them to make a product essentially from end to end, right? You’d be a shoemaker, or a wheelwright who made wheels for carriages, or a blacksmith. And so this was a broad set of expertise. It took years to acquire, there were very few people who did it, and it was of course a slow, labor-intensive process, which meant that manufactured goods were expensive. The industrial revolution, and mass production is what many people think of when they think of the industrial revolution, was a way of producing things in which, instead of using skilled artisans, you used capital, i.e., machines, managers, and often cheap, unskilled, and exploited labor.

So many of the workers in the early British textile factories were indentured children who were given into indenture at, say, age 10 and kept there until age 18. And so it was a really different way of making things. It was extremely disruptive. So people often deride the so-called Luddites, the people who rose up against the power looms, the textile-making machines, but they had every reason to be concerned. In fact, their livelihoods were wiped out; many artisans could not compete with mass production. And as a result, it led to a decline in the material standard of living of people who did that type of work.

Preet Bharara:

Can we pause on that for a moment, because this is a phrase that we’ll come to when we come to modern-day expertise. Now, the artisans in the pre-industrial revolution era were experts, were they not? And people who had substantial skills. And so the disruption that technology wrought with the industrial revolution worked against people who had expertise.

David Autor:

Had that particular form of expertise.

Preet Bharara:

Right. Which is not how we, in the interim, have often thought about what is, for some people, the curse of technology, which puts unskilled people out of work.

David Autor:

That’s exactly how I think about it, actually: expertise. So, I mean, I think you put your finger exactly on it. What the industrial revolution did initially was displace artisan expertise. And of course, that artisan expertise was the source of livelihood for many people who had invested effectively their lives in it. And it created adversity. And initially, the work that replaced it was non-expert work. It was work that required basically generic skills: physical dexterity, attentiveness, and a willingness to work in grueling conditions, often in loud, dirty, and dangerous places at low pay. And in fact, one reason children were preferred in the early textile mills is because you often had to change threads or bobbins while the machines were still operating, without being able to stop them, because stopping them would slow the line. And so if you didn’t have quick reflexes, you would lose a finger.

And if you did in fact lose a finger, it wasn’t uncommon, and that was considered an acceptable price for indentured children. But not to end on the [inaudible 00:24:37] note: as the industrial revolution rolled forward, the complexity of products dramatically increased. The weight given to quality and consistency of those products increased. And as a result, the skill demands rose for people who were actually doing that work. It was not sufficient to just show up and do the thing immediately in front of you; people had to have what I would call mass expertise. They had to be able to learn rules and master tools. And that was facilitated in the United States in particular by the high school movement, which sent most US young people through school until about age 18. That started in the late 1800s and continued into the early 20th century, so that before the Second World War, essentially the entire US young population was being sent through high school.

Not all of them were completing it, but they were extremely well-educated by any world standard at that time. And so mass production eventually became much more skill-intensive, but it was a different set of skills. It was not artisanal skills, it was skill at executing well-defined tasks, either physical tasks like repetitive production tasks or cognitive tasks like bookkeeping, copying, typing, spell checking, even phone answering, and so on. And so these were what I call the mass expertise skills of the 20th century, up until the computer revolution in some sense, which was this ability to carry out codified procedures using technology and literacy and numeracy and some judgment, for sure, but not a ton of discretion.

But that helped create the US middle class and the middle class in many industrialized countries because those were productive jobs that made use of skills that people had acquired in schools. And they took expertise because you couldn’t be immediately good at them the day you started. You really needed experience and training and on the job learning to become good at it, and so your skills were valuable. They were different skills from what artisans had, but they were used well in a productive setting. And so they led to relatively high earnings. Again, if you were white and male, let’s be clear, this was not egalitarian in that period.

Preet Bharara:

So there was a certain skillset that was valuable among broad populations. Those people, because of the advent of technology, mass production, and other things, suffered as a result; their expertise became devalued.

David Autor:

The artisanal.

Preet Bharara:

Yeah. But an entirely new expertise became necessary and important and developed over time thereafter. Is that how big changes in technology normally make themselves felt economically?

David Autor:

We have not undergone enough technological transitions of that scale to generalize very well. I think if you say, well, what are the big technological transitions of the last couple of hundred years? There’s the move out of agriculture. There’s the rise of mass production, the computer era, and now the AI era that we’re in. So I would say each of them, each time, is different. Yeah.

Preet Bharara:

So let’s talk about, before we get to the computer era, just to preview a little bit, that’s really saying something: that we’re talking about mass production, the computer era, and this thing that many, many, many people in the world and in America are just getting their hands around for the first time. AI, artificial intelligence, you put on the same list as those other two things.

David Autor:

I do. I do.

Preet Bharara:

And does everyone?

David Autor:

No, but more and more people do. And I’ve been watching this for a long time.

Preet Bharara:

Because you’ve been studying it and watching it at MIT. How many years ago, if you were thinking about it, would you have put AI on the shelf with the industrial revolution and with the computer era?

David Autor:

Only in the last couple of years. But I have put it on that list.

Preet Bharara:

So how does something go from not being on that list to being on that list by someone like you with the studying you’re doing and the exposure you have and the research you do? I mean, just what are we to take from the fact that AI came from back in the pack to being on a par with the computer era and the industrial revolution? I find that remarkable.

David Autor:

Yeah, I find it remarkable as well. So let me say, my surprise is downstream of the surprise of my colleagues in computer science and artificial intelligence. To my good fortune, being at MIT, I’ve spent the last two decades talking with people who work on and develop these things. And two decades ago, we were in the middle of what people would’ve called the AI winter, the period in which there had first been this initial optimism, and then it was really a flop. And so people were aware that AI had really gone nowhere, and it had not in any sense fulfilled its promise. And then 10, maybe 12 years ago, people in the AI community at MIT were talking about neural nets and backpropagation and the stuff that had come out from Geoffrey Hinton and Yann LeCun and these early AI pioneers.

And they're saying, oh wow, this stuff is actually getting pretty good at recognizing things in photographs and creating sentences and observing patterns and drawing inferences. And some of the folks in the room were like, yeah, this is actually really going somewhere. And others were like, nah, it's going to get it right on average and miss every important case; it's not sophisticated enough, it's not just a question of power, it just doesn't have enough of a conceptual model of what the world is to do anything that's really smart. And so this was a wide open debate, and some people were very skeptical that this technology could in some sense get very far, that the idea would hit the flat of the curve very quickly. And others were skeptical but impressed, and still others were pretty gung-ho. And I would say at this point, most everybody is pretty impressed and has come around to the idea that the potential of this idea is much greater than was recognized at the time.

And it hasn't fundamentally changed since the ideas of Hinton and LeCun and others. It's really that the scale of it has improved so much, the processing power, the size of the hardware, the speed. It turned out to be such a powerful idea that scaling it to that magnitude allowed it to realize a lot of that potential. So my assessment, as I said, comes not from my direct expertise in the technology, but from my direct observation of people who are at the frontier of the technology. The computer revolution, you can think of it as starting in the 1980s. Computers have been around since the Second World War, but they didn't become cheap and commonplace until the 1980s. And what is a computer? What distinguishes it from prior technology? Well, a computer is a symbolic processor, meaning it can act upon stored representations of information. It can take data, it can analyze data, it can do calculations.

And it does so in a flexible way. Symbolic processing is a very, very general notion. Computers are no longer purpose-built to do one thing; anything that can be represented as a series of logical instructions and steps can be executed by a computer. The computer won't use judgment, it won't solve problems, it won't improvise, but it'll do many of what I would call the mass expertise tasks that many workers were doing from the 1920s forward, which is mastering rules and using tools. And so computers had the effect of displacing a lot of that middle-skilled work that was done in offices, that was done in factories, by essentially taking over the work that required literacy and numeracy or analytical reasoning and processing and following a set of well-understood rules and procedures. And instead of having people with high school educations do that, computers could do that.

And so this was very unequal for the labor market, because it really hollowed out the middle, it hollowed out this middle tier of jobs that were done by people with high school and sometimes college educations, jobs that really provided the backbone of many middle class families, and pushed people into one of two categories. For people who went on to college, it was very complementary to professional and technical and managerial work. Why? Because those workers need to make decisions. Having lots of information and processing power makes that much easier, whether you're doing law, whether you're doing medicine, whether you're doing computer science, whether you're doing marketing, whether you're managing a large organization. Having all that information readily available, and all the calculation and the processing power and the view into what's going on, allows you to then make a better decision, write a better legal brief, do more data analysis, et cetera.

So that was great. However, if you weren't in that realm, and a good blue collar production job or a good white collar office job was no longer available to you, a lot of people moved into personal services: food service, cleaning, security, entertainment, recreation. And that's socially valuable work, but it pays poorly. And the reason it pays poorly is because the skills required are not expert. Most people can be productive in that type of work within a few weeks or a few months of training, even though that work is valuable. So think about being a daycare worker or a crossing guard or a security guard. These are life and death occupations, people's lives are at stake in the work that you do, and yet it's poorly paid. And the reason is because it doesn't require much expertise in the way we think of it, just like the term you used so aptly a few minutes ago.

So that is the situation we've found ourselves in up till recently. And people with rarefied expertise in the professions, people with BAs and MBAs and JDs and MDs and PhDs and so on, have become in some sense scarcer and scarcer. Now you might say, well, how could they become scarcer? There are more of them than there used to be. It's true, but they're paid more and they work more hours. They're the bottleneck for so many things that need to happen, because all the easy information processing and calculation has been done. And now someone has to make these high stakes decisions: how do we care for this patient, how do we architect this building, how do we design this piece of software, et cetera. So now, what does AI do that traditional computers did not? Well, traditional computers could only follow rules and procedures that we could write down, only things that we could codify, that we knew the steps of.

The irony is that there are many, many things that we do that we don't actually know how we do them. The philosopher Michael Polanyi said, we know more than we can tell. You know how to ride a bicycle, but you couldn't teach a class on how to ride a bicycle. You know how to cook a meal or set a table, but you could never write a piece of software that would do that work, because it requires all kinds of what we call tacit knowledge.

Preet Bharara :

Can I give an example from your work?

David Autor:

Absolutely.

Preet Bharara :

In some of your writing, which I think is fascinating and explains, maybe to lay people who are new to the subject, what AI is about, you talk about how difficult it is to explain what a chair is, and you write: “It is extraordinarily challenging to explicitly define what makes a chair a chair. Must it have legs? And if so, how many? Must it have a back? What range of heights is acceptable? Must it be comfortable? And what makes a chair comfortable anyway? Writing the rules for this problem is maddening. If written too narrowly, they will exclude stools and rocking chairs; if written too broadly, they will include tables and countertops.” So take us from that example, as you've just been describing, and explain how AI handles that issue that computers generally were not able to.

David Autor:

Right. So the way AI handles that problem is we never write down the rules for what makes a chair a chair. Instead, we show AI pictures of chairs and of things that are not chairs. And we say, this is a chair and this is not a chair. And somehow it learns. The machine figures it out, it generalizes from those examples. And you might say, well, how's that possible? Well, you and I and our children do this all the time. You could show a bicycle to a six-year-old child and say, hey, this is a bicycle. Then you could show them a picture of a tricycle and a bicycle wrapped around a tree, and they would immediately recognize, ooh, those are all bicycles. We don't even know how they do it, it's quite mysterious in fact. But they generalize and they make inferences from a collection of data and fact, and AI is now capable of doing that thing. Does it do it exactly like we do? We don't actually know. Does it do it as efficiently as we do?

Almost surely not. You might take a million images to train an AI on what a chair is, and a kid could figure it out from a few photographs. But nevertheless, it does it. And so that ability to do this inferential learning means that we no longer have to write down the rules. AI can learn to do something by example, and if it gets the right feedback, it can do it better than we can, because it can learn from its own mistakes, often very rapidly. Two computerized Go players can even play against one another, effectively the system playing against itself, and learn from that self-play.
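To make the learning-by-example idea concrete, here is a minimal sketch in Python. It never states a rule for "chairness"; it only sees labeled examples and generalizes by proximity. The features (leg count, has a back, seat height) and the toy data are invented for illustration, and a real system would of course learn from raw pixels rather than hand-picked features:

```python
# Learning by example: classify "chair" vs "not chair" from labeled
# examples alone -- no explicit rules for what makes a chair a chair.
# Hypothetical features: [number of legs, has a back (0 or 1), seat height in cm]

def centroid(rows):
    """Average feature vector of a list of examples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(examples):
    """examples: list of (features, label). Returns one centroid per label."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is nearest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Toy training set: a few chairs and a few non-chairs.
examples = [
    ([4, 1, 45], "chair"),       # dining chair
    ([3, 1, 42], "chair"),       # three-legged chair
    ([4, 1, 48], "chair"),       # office chair
    ([4, 0, 75], "not chair"),   # table
    ([1, 0, 110], "not chair"),  # countertop on a pedestal
]

model = train(examples)
print(predict(model, [4, 1, 44]))  # a chair-like object the model never saw
```

The point of the sketch matches the transcript: nowhere in `train` is there an if-statement encoding "must have a back" or "must be sittable height". The "rules" exist only implicitly, as averaged numbers, which is also why they are hard to read back out.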

Preet Bharara :

What's interesting about the chair example, and how we classify things, and how AI develops without being told written rules, it learns through this process of being given examples. There's an area of study and of profession that doesn't work that way and can't work that way. And that's the profession I practice, which is law. This is just reminding me of an example that was given to me early in law school about the importance of definitions and rules and principles. There's a sign at a park and it says, no vehicles allowed in the park. It's like your chair issue. Well, what's a vehicle? Obviously a station wagon is a vehicle, but is a tricycle a vehicle? Is a bicycle a vehicle? Is a car a vehicle?

And if you had AI being the judge deciding whether or not there was an infraction of the no-vehicle rule, maybe I'm going off on a crazy tangent, but it just occurs to me that you would have a different way of evaluating questions of justice and punishment and enforcement if you had AI trying to figure out what is or is not a vehicle, what is or is not a chair, because the law demands black and white rules and principles.

David Autor:

Right. But then again, a lot of legal work is figuring out what principles apply and coming up with a fresh argument to define an issue.

Preet Bharara :

My question to you is, when you train AI on being able to recognize a chair, that all makes sense to me. And as you write, nowhere in the learning process does AI formally codify or reveal the underlying features, i.e., rules, that constitute chairness. But if you queried AI, and I guess I could have done this, if you queried AI after it's gone through the process of learning what a chair is and recognizing chairs, and said, what is the definition of a chair? What constitutes a chair? What are the features of an object that give it chairness? Would it defy your question? Would it not answer your question? Because that's not how it operates?

David Autor:

It would answer your question, but no better than you would answer it. I'm sure you'd give a great answer. But it would say something like, well, it's a thing that's good for sitting in, right? And then you'd say, well, what does that mean? And then we would be back into this infinite regress. So this is actually a great challenge with AI: it can't explain to us what it is doing. Its representation of information and rules is highly abstract, just like our own. You could peel my head open and say, well, what is Autor thinking right now? And it wouldn't be obvious from looking at my neurons. And similarly, when AI learns, there is no codification. It's a set of associations and weights among different pieces of information that is opaque to us. And this is actually a major challenge. So the point I made earlier, that we know more than we can tell, this is what I refer to as Polanyi's paradox.

The paradox that we understand things that we don’t know how to explain, but when it comes to AI, we end up in the converse situation that I call Polanyi’s revenge, which is that AI now knows things that it can’t explain to us. And so neither are we good at codifying tacit information, nor is AI good at explaining what it has learned tacitly. And this makes it very challenging to predict, even though AI is following the rules of physics and quantum mechanics and so on, from our perspective, it’s in some sense stochastic. It’s not fully predictable what it will do in any given situation. And that provides a challenge because how do we develop confidence in it and how do we know why it decided what it decided or said what it said? And as you know in law, intention matters. It’s not just what you do, it’s why you did it.

So there are different categories of killing. There's intentional homicide, there's involuntary manslaughter, and so on, and they all result in a person being killed, but it matters why you did it, and that would have an enormous effect on whether you are set free or sent to jail for the rest of your life. And so the fact that AI can do things and yet we won't know why it's doing what it's doing is actually quite problematic from our point of view of having intention behind action.

Preet Bharara :

I'll be right back with David Autor after this. So one thing you've written about AI, and we've talked about this already, is that you are worried about the devaluation of expertise. On the other hand, you express hope and optimism that AI, if used in certain ways, can expand the middle class and give opportunities to people. And so I want you to address that dichotomy. But if you might, do it in the context of an experiment done by some of your colleagues at MIT, which showed that the use of an AI tool within a group helped the least skilled and accomplished workers the most, decreasing the performance gap between employees. In other words, the poor writers got much better, the good writers simply got a little faster. So should we be hopeful about the expansion of the middle class and the lifting up of people who are less skilled, or bemoan the focus on expertise and the devaluation of expertise, or both?

David Autor:

That's a terrific question. So the study you're speaking of is by my students, [inaudible 00:43:03] and Whitney Zhang. And essentially they did an experiment with college-educated professionals who do writing, and had them do tasks that involve either writing advertising copy or creating a business report. And just repeating in some sense what you summarized: everybody got faster. So they had a treatment and a control group. The treatment group was given access to ChatGPT in the second round, the control group just did two rounds using paper and pencil or whatever word processor, and everybody got faster using the chatbot. So the average time spent on the task fell from 20 minutes to about 12 minutes, so a 40% savings. The average quality of the output, evaluated by another set of college graduates who judged the work on precision, brevity, originality and accuracy, that average quality rose.

But the most striking result was that the great writers were great at the beginning and great at the end; the not-so-good writers, not incompetent, but not so good, improved. And the degree of improvement was inversely related to where you started: the less good you were initially, the better you got. Now, it didn't fully close the gap, the best writers were still the best at the end, but it made the less good people better. Now, to come back to your question, does that mean it's going to make all of us more expert, or none of us an expert? If you've seen the movie The Incredibles, there's a famous line: if everybody's special, nobody's special. Well, if everybody's an expert, nobody's an expert. Expertise refers to the notion that you know how to do something that others don't, and it's something that needs to be done.
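The shape of the result described above can be sanity-checked with a quick calculation. Only the 20-to-12-minute average comes from the conversation; the per-writer quality scores below are invented purely to illustrate what "improvement inversely related to starting quality" looks like:

```python
# Sanity check on the timing result, plus a toy illustration of the
# gap-narrowing effect: weaker initial writers improve more, but the
# ranking of writers is preserved.

before, after = 20, 12  # average minutes per task, from the conversation
savings = (before - after) / before
print(f"time savings: {savings:.0%}")  # prints "time savings: 40%"

# Hypothetical quality scores (0-100) before chatbot access.
writers = {"strong": 90, "middling": 70, "weak": 50}

# Toy improvement rule: each writer closes half the distance to a
# ceiling of 100, so the lower your starting score, the bigger the gain.
improved = {name: score + 0.5 * (100 - score) for name, score in writers.items()}
# strong: 95, middling: 85, weak: 75.
# The strong-to-weak gap shrinks from 40 points to 20, yet the best
# writers are still the best, matching the study's pattern.
```

The "close half the distance to the ceiling" rule is just one simple function with that inverse property, not the study's actual estimate.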

Preet Bharara :

So who needs expertise if everyone’s an expert? And why is that a bad thing?

David Autor:

Exactly. So, well, expertise is what differentiates labor. Without expertise, we're all waiting tables, in some sense doing valuable work that is poorly paid because everybody else can do it. So the question for the writing example, and this is not something they tested, is: let's say you had given this task to people who were high school graduates and didn't have any college, didn't have any experience writing. Could they also have done as well? In essence, if any person without any training or experience could do this job equally well, then expertise is no longer required.

Preet Bharara :

And the consequence of that economically is you can pay them a lot less.

David Autor:

You can pay them a lot less.

Preet Bharara :

You have oversupply.

David Autor:

Exactly. It's not that it doesn't make anyone better off. And look, sometimes, if you're buying that service, it saves you money. But it creates a challenge for income distribution. But I submit that most things will not become so devoid of expertise. Expertise, you can think of as having multiple components. One component is sort of specific knowledge: do I know punctuation? Do I know spelling? Do I know grammar? Another is judgment about what goes with what, and when you do what. So you could imagine a future where, oh look, I don't know much about medicine, but I have an AI in the room, it'll tell me what to do in each case. Well, you would never allow someone to perform medical procedures on you not knowing what they're doing just because AI was in the room, right? Because what if something went wrong?

If it wasn't anticipated, they would have no judgment to react appropriately in short order when the circumstances arose. But you can easily imagine a situation where people who have some foundational expertise in healthcare could do more tasks when enabled by technology that provides, in some sense, knowledge, guidance, and some guardrails, right? Not only telling you what to do, but telling you what not to do, as in, don't put these two medications together, for example. And I think there's a lot of potential there. The situation we face now in the current labor market is that elite experts, people with high levels of education, are too scarce, and they are the bottleneck for a lot of work. If you could make some of that elite expertise more available and less expensive, as in the case of writing, you could enable more people to do valuable work with a foundation of expertise.

So let me give you two examples to motivate that, one simple and one a little more in depth. Imagine I'm free this coming Sunday, and I decide I'm going home and I'm going to rewire my basement. I'm going to pull out my old fuse box and replace it with a breaker box, and I'm going to do that by going to YouTube. Well, let's assume I don't know anything about electrical work. I'm almost surely going to set my house on fire or I'm going to electrocute myself that day, right? This is not going to end well.

Preet Bharara :

We don’t want that.

David Autor:

But now take another case. Let's say, oh look, actually as a kid I learned how to solder. I know how to cut and strip wire. I know how to wear insulated gloves. I know how to measure voltage and impedance. I don't know how to change a fuse box, but I could do it safely with the right instructions. So YouTube would actually be helpful to me, right? It's a complement to the expertise I have. Now, let's take another example that's higher stakes: nurse practitioners. Nurse practitioners are registered nurses with an additional master's degree in the NP field and certification. And they do many tasks that used to be exclusively limited to MDs. They diagnose, they treat, and they prescribe. And they do that based on expertise and judgment. And they have a lot of technical supports.

They have access to electronic patient records, they have software that warns them about drug interactions, and they have a lot of tools that make that more feasible. And in some sense, this is a job that has become more prevalent. So there were almost no NPs two decades ago; now there are about 300,000 of them in the United States. They earn about $130,000 a year at the median, which is quite a good income. And they make medical care more available, more convenient, and less expensive than it would otherwise be for many of us. And this has created a really good middle skill job. I mean, they're educated professionals, let's be clear, but they have five fewer years of education than does a medical doctor. So I think of that as an archetypal example of what we could potentially do using these tools more effectively: we can allow people to have foundational expertise. Again, these are medical professionals, they have judgment, they've been trained, they've apprenticed, but they don't have to know everything at the frontier of medicine.

In fact, no one can possibly do that, right? But supported by the right tools, they can use that judgment to carry out more expert tasks. And you could imagine the same thing occurring in software development, right? People are going to need to know less coding going forward, but they still need to know computer architecture and software architecture to design a good app. You may need to know less engineering to make a building stand, but to be a good architect, there's still a lot that goes into that. To make a legal case, there are many boilerplate things that will be done by software. But to make the right argument and to surface the right body of law and decisions, that's going to require expertise.

So I think there's a potential for AI to reduce this bottleneck that prevents many people from using their skills well and causes many things to be hogged by the highly educated, and to enable more people to do skilled work with a combination of foundational knowledge, judgment that comes from experience, that comes from apprenticeship, that comes from training, and then tools that enable them to do a broader range of things. Because those tools provide knowledge, they provide coaching, and they provide guardrails so people stay within the bounds.

Preet Bharara :

So are you saying that at least in part, if this goes the way you’re describing and could go and people are wise about how AI gets integrated into workplaces and in the economy generally, that this would reduce income inequality? Or is that too optimistic?

David Autor:

No, I think that's possible. Or at least income inequality, not necessarily in what goes to the top 1%, which is a very complicated phenomenon, but income inequality between college graduates and high school graduates, which exploded over the last four decades and is now actually coming down. And even more so than inequality, what I lament about what has happened over the last four decades is that many skilled people who did expert work in production and in offices have been pushed downward into non-expert work where they cannot use the same level of expertise and therefore are poorly rewarded. I would like to see the reinstatement of a new middle class of artisan workers, or maybe artisan is the wrong word, of workers who, using better tools and foundational expertise, can carry out more expert work, whether it's in healthcare, whether it's in repair, whether it's in the trades, whether it's in software, whether it's in engineering, in design. Those elite skills will become more accessible, and what will matter is the judgment to use that skillset well.

And that will require training, it will require expertise, but maybe not as much, maybe not as many years of schooling, and maybe not as few workers delivering vital medical care as we have right now.

Preet Bharara :

Let me ask you a little bit of a meta question. You're talking about the ways in which AI will benefit people, or be less consequential or more consequential with respect to education level, and how education level may become less important. What do you think AI means for education itself, and schooling itself, and teachers and professors?

David Autor:

Well, I think there’s hope there as well. So let me emphasize what I’m saying. I’m presenting not what will happen, but what I think could be made to happen.

Preet Bharara :

Yes.

David Autor:

And let me emphasize again, or let me say, this is the sum of it: the future is not a prediction problem, the future is a creation problem. We are creating the future, and we can use these tools in a variety of ways. So China is very effective at using AI for surveillance, for real-time content filtering. And they're extremely good at that, better than we are, and they couldn't do it without AI. But that's not because that's what AI does, that's because that's where China has put its AI dollars. We have a choice about where we invest and what capabilities we develop. AI is a very malleable technology, it can be used all kinds of ways, and we can use it for good things or for bad things, and it's a choice.

In the case of education, my hope is education can be made more customized, more immersive, more accessible, and less expensive. And you can see that this could occur at all levels. But one area in particular where it's so crucially needed is adult education. We are constantly talking about retraining adults, but adults do not like to go back to school, and on average they don't learn very well in classrooms; they've grown out of that. Adults learn much better in actual hands-on work. But we could simulate more of that, just like pilots spend hundreds or thousands of hours per year in flight simulators, right? Well, you could have construction simulators, you could have medical simulators, you could have software development simulators. So AI could be used for a lot of virtual learning, using augmented reality, virtual reality, and generated environments.

So I think there's enormous potential there. The other thing that I have hope for, but I'm less certain about, is how we use it more effectively in the classroom. So it's easy to say, well, AI could just teach, and in theory that's true. But we've had all kinds of technologies, for decades or arguably centuries, that present information to people. If presenting information were the limitation, libraries would've solved our public ignorance problems centuries ago. So there is something uniquely motivational about a teacher that makes people somehow pay attention, tune in, and exert effort. And I don't know how quickly we can learn how to get machines to carry out that same role. I'm not sure.

But I am confident that we can use machines for tutoring, for supporting learning, and have teachers play more of a coaching role, where they're supplemented by technology that provides a lot more customized information and customized exercises and so on. So I think there is real hope that the technology can allow us to become more effective at education. And partly that means making education more interesting for many more people.

Preet Bharara :

Are there whole classes of jobs that you expect will go away, as has happened before? I mean, I don't know if there are, anywhere in the country, human tollbooth collectors anymore, because you don't need that. And they used to be everywhere. I'm sure that job was occupied by tens of thousands of people as recently as when I was young. Any whole classes of jobs you think are potentially headed towards extinction because of AI?

David Autor:

Yeah. And the toll collector example is a good one. Another one is telephone operators, right? AT&T used to have several hundred thousand women who worked as telephone operators, and now it has none. So I think the first order effect is much more changing the jobs that we do, or causing them to grow or slowly erode, than wholly eliminating them. But there will be some. So for example, there's a lot of time spent in document formatting and creating slides and so on out of other sources. I think a lot of that work can be automated. There are a lot of people who do coding whose job is to translate from old COBOL code, which is six decades old, to some modern language, to Java or to Python or something. And that work is actually fairly automatable now. There's also, unfortunately, and I'm very concerned about this, a lot of graphic design and even creation of music where we now have machines that can do a startlingly good job of, in some sense, recycling what has been done in the past and putting it together in new ways.

That's a big intellectual property issue, so I'm very concerned about that activity. Translation, right? Language translation is shrinking as a field. Now, the people who still do it are really, really good at it, but machine translation has proven a really apt student, and it can do it in real time. Even sign language translation. I am concerned that eventually you'll just basically have a computer monitor that will sign what is happening in the room. So yes, I do think there are categories of work that will be almost fully replaced. I don't think that's the most substantial effect of this technology, but it will occur.

And the concern for people who are put in that position is not whether they will find work. At least in industrialized economies, we are essentially at full employment; we're not short on jobs. But if they do less expert work, their standard of living will fall, their pay will fall. So if you were a software coder and you end up doing security or driving or food prep, it's very unlikely that you'll make as much money, and it's because your expertise is no longer as valuable. And that's the concern.

Preet Bharara :

But what's interesting about what you just said, one example in particular, jobs aren't the only thing. And without disparaging or minimizing the disruption to people who do certain kinds of jobs, you talked about people who can sign. Well, most places that I go, and most places where there are conversations, including in theaters and at lectures and speeches and everywhere else, no one's signing, because there aren't enough people. And if you have a technology that can allow literally every conversation, live or otherwise, to be signed through software, notwithstanding the injury to the people who used to do that job, overall there's a great benefit to people who have hearing issues.

David Autor:

Absolutely.

Preet Bharara :

Because you can make it universal. So I don’t know how you weigh each of those considerations.

David Autor:

This is the paradox, right? So almost all of these technologies raise aggregate productivity. They make things cheaper and more convenient for us as consumers, but we are also workers, and we can't buy those things as consumers unless we have a decent income. And so the challenge is not whether AI will make society wealthier. Barring that we kill ourselves with it, it almost surely will make us more productive and therefore more affluent in net. But the problem, the challenge, and the one we've faced so severely for the last four decades, is income distribution. And if it devalues many, many people's skills, then a lot of the benefits of that technology will go to the owners of the technology rather than to the workers. And so then we have wealth, but we have maldistribution.

Preet Bharara :

David Autor, thank you so much for being with us.

David Autor:

Thank you. This was a fabulous conversation.

Preet Bharara :

My conversation with David Autor continues for members of the Cafe Insider Community. To try out the membership for just $1 for a month, head to Cafe.com/insider. Again, that’s cafe.com/insider.

THE BUTTON:

I want to end the show this week with some news that made me happy. We've talked a lot about bookstores and libraries on this show, from book bans to library initiatives to young publishers. And this week, as reported by the AP, the American Booksellers Association, the nonprofit which promotes independent bookstores in the US, reported its highest membership level in 20 years. Even as the pandemic threatened so many storefront booksellers, they have persevered, so much so that membership is actually higher this year than it was in 2019, before the pandemic. And even more optimistic, many booksellers reported a higher number of young readers than they'd seen in years past.

One bookstore owner hypothesized that young people are “rediscovering the bookstore and the importance of community after being locked down.” Like so many of you, I care deeply about the health of libraries and bookstores in this country, and I feared a combination of the pandemic and online behemoths like Amazon might threaten these businesses. And I'm so glad to find out that many of these bookstores have been so resilient. According to the AP, the booksellers organization also reported an increase in diversity among their bookstore owners, a profession that's long been white-dominated. Many sellers feel that their work is mission driven. And in today's political environment, in which we've seen a 28% increase in book bans across the country just this school year, I couldn't agree more.

To me, this story is one of optimism and hope for the future of a crucial part of our society. And let it also be a reminder to support your local and independent booksellers whenever you can. I love the local bookstores where I live. When my book Doing Justice came out, I'd go every few weeks and sign copies and talk with the owners. And of course, as you know, I recorded a recent podcast from the storied Strand Bookstore in Manhattan. So to all the booksellers out there, I commend you. Keep doing the good work and we'll keep supporting you and your work. Well, that's it for this episode of Stay Tuned. Thanks again to my guest, David Autor.

If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics, and justice. Tweet them to me @PreetBharara with the hashtag #AskPreet. Or you can call and leave me a message at 669-247-7338. That's 669-24-PREET. Or you can send an email to letters@cafe.com. Stay Tuned is presented by Cafe and the Vox Media Podcast Network. The executive producer is Tamara Sepper. The technical director is David Tatasciore. The senior producers are Adam Waller and Matthew Billy. The Cafe team is David Kurlander, Sam Ozer-Staton, Noa Azulai, Nat Wiener, Jake Kaplan, Namita Shah and Claudia Hernandez. Our music is by Andrew Dost. I'm your host, Preet Bharara. Stay tuned.