• Show Notes
  • Transcript

President Biden recently issued a sweeping executive order on the safety and security of artificial intelligence. What will that mean for the developing technology? Former FCC Chairman Tom Wheeler joins Preet to discuss the implications of the order and the future of AI regulation. 

Take the CAFE survey to help us plan for our future!

REFERENCES & SUPPLEMENTAL MATERIALS:

Stay Tuned in Brief is presented by CAFE and the Vox Media Podcast Network. Please write to us with your thoughts and questions at letters@cafe.com, or leave a voicemail at 669-247-7338.

For analysis of recent legal news, join the CAFE Insider community. Head to cafe.com/insider to join for just $1 for the first month. 

Preet Bharara:

From Cafe and the Vox Media Podcast Network, this is Stay Tuned in Brief, I’m Preet Bharara.

Artificial intelligence is undoubtedly a huge part of modern life and it will continue to be. That’s why President Biden recently signed a 111-page executive order outlining potential regulations for the safety and security of AI. The sweeping order was the first of its kind from the president and covered many different aspects of the developing field. Some advocates say the regulation is merely a good start, while others from the tech industry say it risks stifling innovation. So what’s in the order and what are the implications of it? Joining me to discuss it all is Tom Wheeler, the former chairman of the FCC, the Federal Communications Commission, under President Obama. He’s a visiting fellow at the Brookings Institution. Mr. Wheeler, Tom, welcome to the show.

Tom Wheeler:

Preet, thank you. It’s great to be with you.

Preet Bharara:

So is this a big deal or not?

Tom Wheeler:

You bet it’s a big deal.

Preet Bharara:

It is. Is that because it’s 111 pages long?

Tom Wheeler:

Exactly, just by sheer weight. I mean, it’s a big deal in so many different ways. AI itself is a big deal and holds the potential to be transformational, but for the last year, the Biden administration has been moving incrementally on the question of how we should be overseeing artificial intelligence. And then it dropped this 111-page executive order, moving with such speed and completeness as to make your head spin. You and I have been hanging around the government for some time, and to watch this kind of completeness and this kind of depth move as fast as it did through the government processes was no small task in and of itself.

Preet Bharara:

Do we understand who in the government is responsible for this document?

Tom Wheeler:

Well, Bruce Reed, who’s the Deputy Chief of Staff at the White House, is going to be heading the new AI Council in the White House, which is going to be a very high-level council of cabinet members, the chairman of the Joint Chiefs, and others. But it was a team inside OSTP, the Office of Science and Technology Policy, that did the real legwork on this.

Preet Bharara:

So the executive order runs the gamut of issues from intellectual property to national security. Can we talk about the latter for a moment? On national security, what stands out to you in the executive order?

Tom Wheeler:

Well, let’s just back up and reflect on the fact that the executive order used the national security and economic security of the nation as the basis to trigger the Defense Production Act, a Korean War-era piece of legislation that gives the president specific authority to act and make mandatory requirements. The challenge of an executive order, typically, is that despite all our talk about the power of the presidency, the executive branch’s authority in areas such as this is actually rather constrained. As a matter of fact, after the president released the executive order, he had a meeting with some senators where he said to them, “And you guys have to understand, I understand the limits of presidential authority, and we need legislation, and you folks need to pass legislation.” The other thing that is constraining about an executive order is the fact that the next person who comes in can undo it. And again, that’s why you need legislation.

Preet Bharara:

So one of the national security aspects of this that I want to ask you about: the order basically provides that companies deploying the most advanced AI tools are supposed to test their systems to ensure, and this is kind of interesting, that they cannot be used to produce biological or nuclear weapons. So on the one hand, I guess it’s reassuring that you have that in there. But the fact that you have to worry about AI producing biological and nuclear weapons is not reassuring at all. What do you make of that?

Tom Wheeler:

I agree with you that it’s not the kind of thing that makes you say, “Oh, good, we can go out and produce bioweapons or nuclear weapons.” Obviously, it is terribly important that the so-called foundation models, which are the most advanced, most complex models and form the foundation for AI activities, have those kinds of guardrails on them. The challenge is that the models themselves, the large language models, are now in the wild and have moved beyond the handful of big AI companies with the release of Facebook’s large language model, and can be in the hands of anybody and will be much more difficult to deal with. They won’t be as cutting edge as the foundation models from the big companies like Google or OpenAI or Anthropic or others, but they will be out there for anyone to do with as they wish. And that is probably the biggest sleeper challenge of AI, and one that is beyond the scope of the EO, because how in the world are you going to be able to get to two guys and a dog in a garage in Estonia?

Preet Bharara:

What are some other things in the EO that strike you as important?

Tom Wheeler:

Well, I think it’s terribly important that the president took these kinds of steps insofar as national security and economic security, but I think it’s also important to recognize that there are an awful lot of things that were hortatory. We’re going to start, we’re going to study, we’re going to standardize, we’re going to have guidance, we’re going to use federal purchasing and funding activities. And it goes back to the point that the president was making that I was reinforcing a moment ago that we need legislation to put these policies in statute and grant the appropriate level of authority to enforce these kinds of concepts.

The president and the executive order did an excellent job of balancing the need for oversight regulation with not wanting to get in the way of innovation and the stimulus for individuals to go out and innovate. But we need to focus that, as I said a moment ago, not just on the big dominant companies but on the other smaller companies, and we have to do that through legislation that creates a new oversight structure. It was today that Senator Chuck Schumer, the majority leader in the Senate, surfaced the idea, which you would be very familiar with from English common law, of a duty of care: how do we get into legislation the expectation that those who are creating and using AI models have a duty of care to anticipate what the untoward results may be and mitigate them upfront? That is the next step in this process, and I think the president realized that, given his limited authority, he could not go that far.

Preet Bharara:

Is there any worry that as the United States rolls out regulation, or suggestions for regulation, that it will be at odds with or there’ll be discrepancies with what Europe is doing for example? Or should everyone just do as much as they can and work that out later?

Tom Wheeler:

So Europe and China are significantly ahead of us. China, it’s a simple authoritarian process of saying this is the way it’s going to be. The Western European liberal democracies are going through the more convoluted democratic process, but are still significantly ahead of us. The EU, it looks like, is going to pass the AI Act before the end of the year, establishing policies inside the EU. What worries me, Preet, is that in an interconnected world, the decisions that are made by others end up affecting everybody, number one. And number two is, if we want to have a seat at the table in terms of developing international norms about artificial intelligence, we’ve got to know where we stand. We’ve got to have our own policies. You can’t be existing in Never Never Land, as good as the EO is, and I want to keep emphasizing that, but you can’t be operating under a policy that can be overturned tomorrow in a new administration and be a successful participant in the creation of international coordinated oversight.

Preet Bharara:

Now, ordinarily, when the government talks about regulation, the industries and the companies that are sought to be regulated get up in arms, and I guess there’s some of that here. But according to you and other people who comment on these issues, there’s a lot of relief on the part of industry that the government is getting involved. A, how peculiar is that? And B, how would you describe the reaction within industry?

Tom Wheeler:

So yes, normally industry’s response to government regulation is, “I’ll meet you at the fence with my shotgun.” But I think big AI, the principal companies, are being responsible in recognizing, “Hey, we need to do something.” The CEO of Google had a great way of putting it. He said that AI is too important not to regulate and too important not to regulate well. And that leads to the question of, okay, what is regulating well? And there, I think, is where things start to fall apart. When I was chairman of the FCC, nobody ever came in to me and said, “Hey, I want you to do this because of the fact it’s going to benefit me.” They always came in and said, “Hey, I want you to do this, and this is why it’s in the public interest.” Even though you maybe had to stand on your head in the corner and squint your eyes before you could see how it was truly in the public interest.

And so what the companies have done, I think, has been aggressive in saying, “We need to do something.” But the proposals that have been put forth early on, such as licensing, end up benefiting the big companies themselves. And so what we need to be going through is a process that says, “All right, we have crossed the Rubicon that there is going to be regulation. Now let’s get serious about what the tactical implementation of that regulation is.” I mean, hooray, for the companies who are, as you know, principally the same companies who over the last 20 years have been saying, “Don’t touch what we’re doing on the internet.” Hooray for the fact that they’ve come through and said, “We need to do something here.” Now we need to get real serious as to what are the public interests in that oversight, not just what are the private interests.

Preet Bharara:

You’ve said, and others have said, and it seems clear from the executive order and its unenforceability in various respects that Congress needs to act. If you had to say the one or two most important things for Congress to do as a concrete matter or that experts say Congress must do as a concrete matter, what would they be?

Tom Wheeler:

I think Senator Schumer has started down that path, talking about the duty of care and responsibility that needs to be enshrined. Then I think you need to have an agency that has AI and digital expertise and is focused on that. And that agency, while the headline is a new agency, needs to depart from the old style of government regulation that was created during the industrial era and embrace the kind of management concepts that the digital companies use themselves: how to be transparent, how to be risk-based, how to be agile, which is everything that existing industrial-era agencies are not. We’re going to need that kind of new structure in order to keep pace with the changes that AI is going to be bringing to us.

Preet Bharara:

There are some things that this order does not do, and I wonder what you think of it. So for example, there is litigation and controversy surrounding the practice of training AI, large language models, on vast quantities of data available on the internet, some of which or much of which is copyrighted. And this order doesn’t get into that. Is that wise for the president?

Tom Wheeler:

Yeah. Well, I mean, I think that, as I said at the outset, this covers an awful lot of ground and it was put together awfully quickly. There are privacy issues that aren’t addressed in here. Let’s go back one step from your intellectual property point. You and I put things on the internet. We are in those databases whether or not we want to be and whether or not we ever gave permission. What you have written and what I have written, in terms of our own intellectual property, is in those databases without our permission. We need to be establishing not only the rights of individuals and their privacy, but the rights of the owners of copyrights. Because what has happened is that we have a situation where my personal information and your personal information and my intellectual property and your intellectual property have become the asset, the corporate asset, of another entity beyond my control.

Preet Bharara:

Are you optimistic, pessimistic, or somewhere in between on the reasonable likelihood that we’ll be able to regulate AI well?

Tom Wheeler:

You know, “well” is of course the key word in the sentence. They-

Preet Bharara:

Yeah. No, I’m referring back to your quoting the Google CEO.

Tom Wheeler:

Yeah. Let’s start with the basic issue. We need to have oversight. That oversight needs to continually evolve as the technology is evolving. If our oversight of AI is based on statutes and structures created for oversight of the industrial era, then the answer is no, we will not be regulating well. But if we say we are going to clone the management techniques of the digital era to create a new kind of agile oversight, just as we cloned the management techniques of the industrial era to create the structure of industrial regulation, then I think we have a real chance of doing it.

Preet Bharara:

Well, that’s good. I’m glad there’s some optimism here because there’s a lot of work to do. Tom Wheeler, thank you so much for joining the show.

Tom Wheeler:

Preet, thank you very much.

Preet Bharara:

For more analysis of legal and political issues making the headlines, become a member of the Cafe Insider. Members get access to exclusive content, including the weekly podcast I host with former US attorney, Joyce Vance. Head to cafe.com/insider to sign up for a trial. That’s cafe.com/insider.

If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics, and justice. Tweet them to me @PreetBharara with the hashtag #AskPreet. You can also now reach me on Threads. Or you can call and leave me a message at 669-247-7338. That’s 669-24-PREET. Or you can send an email to letters@cafe.com.

Stay Tuned in Brief is presented by Cafe and the Vox Media Podcast Network. The executive producer is Tamara Sepper. The technical director is David Tatasciore. The senior producer is Matthew Billy. The audio producer is Nat Weiner. The editorial producers are David Kurlander, Noa Azulai, and Jake Kaplan. The production coordinator is Claudia Hernández. And the email marketing manager is Namita Shah. Our music is by Andrew Dost. I’m your host, Preet Bharara. Stay tuned.