
On this week’s episode of Stay Tuned, “Misinformation Apocalypse,” Preet answers listener questions about: 

— Grand jury rules, and speculation that a grand jury declined to indict former FBI Deputy Director Andrew McCabe

— The D.C. Circuit Court of Appeals’ ruling preventing the House Judiciary Committee from enforcing its subpoena for Don McGahn’s testimony 

— The Trump campaign’s defamation lawsuit against the New York Times Company

— Super Tuesday predictions and punditry

The guest is Hany Farid, a digital forensics expert and a professor at the University of California, Berkeley. Farid is at the forefront of the race to develop technology that can detect visual, audio, and video manipulation. With the recent rise of AI-based fakes, also known as “deepfakes,” Farid is part of a small team of analysts creating new techniques to assist with the identification of fake content.  

As always, tweet your questions to @PreetBharara with hashtag #askpreet, email us at staytuned@cafe.com, or call 669-247-7338 to leave a voicemail.

 To listen to Stay Tuned bonus content, become a member of CAFE Insider.

 Sign up to receive the CAFE Brief, a weekly newsletter featuring analysis of politically charged legal news, and updates from Preet.

 THE Q&A 

ANDREW MCCABE

MCGAHN DECISION

  • The decision in House Committee on the Judiciary v. Donald F. McGahn, 2/28/20
  • “Court of Appeals Decision Amounts to a Constitutional Earthquake,” CNN, 3/2/20
  • “Dems tread cautiously on investigations after impeachment,” Politico, 3/1/20

NYT LAWSUIT

THE INTERVIEW

  • “Deepfakes are Getting Better, But They’re Still Easy to Spot,” Wired, 5/26/19
  • “Fake-porn videos are being weaponized to harass and humiliate women: ‘Everybody is a potential target,’” Washington Post, 12/30/18
  • Faux Rogan

 PELOSI “SHALLOWFAKE”

  •  “Speaker Pelosi at CAP Ideas Conference,” C-SPAN, 5/22/19
  • “Distorted Videos of Nancy Pelosi Spread on Facebook and Twitter, Helped by Trump,” New York Times, 5/24/19

LEE HARVEY OSWALD

  • Srivamshi Pittala, Emily Whiting, Hany Farid: A 3-D Stability Analysis of Lee Harvey Oswald in the Backyard Photo (2015)
  • “Settling the Controversy Over Photo of Lee Harvey Oswald,” Dartmouth News, 10/19/15

 YOUTUBE & CONSPIRACY VIDEOS

  • Marc Faddoul, Guillaume Chaslot, and Hany Farid: A longitudinal analysis of YouTube’s promotion of conspiracy videos (2020)
  • “Can YouTube Quiet Its Conspiracy Theorists?,” New York Times, 3/2/20
  • “Pizzagate: From rumor, to hashtag, to gunfire in D.C.,” Washington Post, 12/6/16
  • “5 Theories About Conspiracy Theories,” New York Mag, 2/6/20

 REGULATING TECH COMPANIES

  • Watch: House Science, Space, and Technology Committee Hearing on Online Imposters and Disinformation, 9/26/19
  • 47 U.S. Code § 230, a provision of the Communications Decency Act: Protection for private blocking and screening of offensive material
  • Telecommunications Act of 1996
  • “What is GDPR, the EU’s new data protection law?,” GDPR.EU
  • “Silicon Valley Heads to Europe, Nervous About New Rules,” New York Times, 2/16/20
  • “Europe Is Toughest on Big Tech, Yet Big Tech Still Reigns,” New York Times, 11/11/19
  • “Two New California Laws Tackle Deepfake Videos in Politics and Porn,” Davis Wright Tremaine Blog, 10/11/19

THE BUTTON

  • “James Lipton, ‘Inside the Actors Studio’ Host, Dies at 93,” New York Times, 3/2/20
  • Inside the Actors Studio, Bravo TV

Preet Bharara:              From CAFE, welcome to Stay Tuned. I’m Preet Bharara.

Hany Farid:                   “It’s fake news” is becoming a mantra, and that in some ways is the real danger here: we are getting to where we simply are going to struggle to believe what we see, hear, and read online, and I don’t know how you have a democracy in that situation.

Preet Bharara:              That’s Hany Farid. He’s a professor at the University of California, Berkeley and an expert in digital forensics. Farid stumbled onto the field of computer science while he was in college and quickly rose to be one of the leading analysts tackling the tricky world of image, audio, and video manipulation. In the last few years, a new iteration of this fakery has come onto the scene. You might recognize the term deep fakes, which refers to a computerized algorithm that can make people say and do things that they’ve never actually said or done.

Preet Bharara:              Well, that’s why Farid joins me to get into the nitty gritty of this terrifying tech. We also talk about how to get trusted information, whether Congress will place checks and balances on social media companies, and the rising power of plausible deniability. That’s coming up, stay tuned.

Robin Friedman:           Hi, Preet. This is Robin Friedman from the Chicago area. There has been speculation that the government chose not to continue the prosecution of Andrew McCabe because the grand jury declined to indict him. Doesn’t it seem likely in this day and age that if a grand jury had been convened, the fact of its existence would have leaked in a more definitive way? What are the legal restrictions on a grand juror both during the pendency of any proceeding and after its conclusion to disclose the fact that they sat on a grand jury, the identity of the accused, and the decision reached by the grand jury? Thanks. My week isn’t complete until I’ve listened to your show. Bye, bye.

Preet Bharara:              Hey, Robin. Thanks for your question. There’s been a lot of speculation in the last few months about whether or not the former Deputy Director of the FBI, Andrew McCabe, was going to be charged by the DC US Attorney’s office. Although you talk about whether or not there should have been or could have been a more definitive leak, the fact that there has been some reporting that prosecutors went to a sitting grand jury to seek an indictment of Andrew McCabe and that the grand jury rejected the indictment, that’s still a pretty extraordinary thing. There are very strict grand jury secrecy rules. They are governed by something that’s called Rule 6(e) that you might’ve heard of from time to time because it gets litigated a bit, and grand jurors are absolutely barred from talking about the proceedings that take place. They can’t talk about the testimony they heard. They can’t talk about the ways in which they voted.

Preet Bharara:              I don’t know that anything prevents them from disclosing the fact that they were on a grand jury. In fact, you have to be able to do that because if you don’t show up for work for six months, people are going to get suspicious. “What happened to Robin from Chicago?” The fact of serving on a grand jury, fine; everything that goes on in the grand jury, not fine. That also applies to Assistant US Attorneys and law enforcement agents who go into the grand jury.

Preet Bharara:              The one party that it does not apply to is witnesses who appear in the grand jury. They can come out of the grand jury and talk about what they said in the grand jury, and that’s why you sometimes get leaks of information about what did or did not happen, and maybe that’s one of the reasons we know about it here. But it’s still an extraordinary thing: A, for it to leak out that the indictment may have been presented; B, that it was rejected; and C, the fact of a rejection in and of itself, as we’ve discussed before, is a very, very unusual thing and probably is the reason not to pursue a second shot at Andrew McCabe.

Peter:                           Hi, Preet. This is Peter from Natick, Massachusetts. Could you please explain the decision by the DC Court of Appeals that it does not have jurisdiction over the McGahn subpoena from the House of Representatives, and how is that possibly consistent with things like the decision in U.S. v. Nixon and everything having to do with congressional oversight? Thanks very much, been a listener since your very first podcast and intend to continue to be. Thanks very much, bye.

Preet Bharara:              Peter, thanks for your question. Lots of people have been talking about this decision from the DC Court of Appeals that basically says, “The court is going to sidestep the question of whether or not Don McGahn, former White House Counsel, can be compelled to testify based on an inquiry undertaken by the House of Representatives.” Anne Milgram and I actually talk about this at some length on the CAFE Insider podcast and go into the details of the opinion. Let me give a shortened version here. It was a two-to-one decision. There’s a very striking and strident dissent in the case by Judge Judith Rogers, but the majority opinion basically says, “Look, when there is a dispute of a significant nature between the other two branches of government, the executive and the legislature, and it doesn’t involve harm outside of those two branches of government, that’s something for them to resolve on their own.

Preet Bharara:              Congress has various authorities like spending power and other public pressure mechanisms and tactics that they can apply to get the testimony that they want and so we’re not going to decide the question and let the status quo stand under a doctrine that is commonly known as the Political Question Doctrine. It’s politics. It’s not for the courts. It’s not for us to play mommy and decide between the two parties so we are going to stay out of it.” That, of course, has some logic, but historically, there is a period of negotiation and compromise between the legislative branch, the Congress and the executive branch when they’re seeking information in connection with oversight responsibilities and I was involved in that from time to time during my four and a half years in the Senate.

Preet Bharara:              Because there’s a possibility down the line that the court is going to resolve it in favor of the Congress or not in favor of the Congress, both parties have some incentive to negotiate and compromise. That’s what happened in the US Attorney Firings Investigation that I helped to lead back in 2007. That’s what’s happened in other investigations too. This decision in the DC Circuit Court of Appeals seems to take away that possibility and take away all incentive for the executive branch ever to comply with anything, and that is the foundation for the criticism put forward by Judge Judith Rogers in the dissent. She basically says, “Look, there’s always some kind of negotiation and compromise. If we say as an automatic matter that things like these are a political question and we’re not going to get involved, any administration can decide we’re just going to be scorched-earth opposed to any kind of service of process. We’re not going to participate at all. We’re just going to blow it off completely and they can get away with it.”

Preet Bharara:              My view, and the view of a lot of other folks, is that’s not a good position to be in. The likelihood is it will not end with the DC Circuit Court of Appeals. It will go to the Supreme Court, and then, as someone is fond of saying, “We’ll see what happens.”

Preet Bharara:              This next question comes from Twitter user @Treneau who says, “@PreetBharara, with Trump’s libel suit against the New York times, do they now have the right to discovery? #AskPreet #staytuned.” You’re referring to this lawsuit brought by the Trump campaign against the New York Times, which is rare. As Anne and I also discussed in the CAFE Insider podcast, Trump often threatens to sue. That’s true with respect to news publications. That’s true with respect to people who have accused him of sexual assault. Usually, he does not make good on that threat. He yells a lot, he tweets a bit about it. In this case, he has filed suit.

Preet Bharara:              In fact, since the time of your tweet, I believe, the campaign has also sued the Washington Post. In all of these cases, both the Washington Post and the New York Times, the campaign has sued on the basis of what it calls a defamatory piece in their papers. All of these pieces have been opinion pieces. Before I get to your question about discovery, let me just quickly address the merits of the claim. Generally speaking, opinion made in good faith that doesn’t rely on known and knowable false facts, opinion especially about people who are in the public eye, public figures, that’s fair game. That’s what the First Amendment is all about. That’s what it’s meant to protect.

Preet Bharara:              By my reading and by the reading of almost every expert, the things on which Trump seeks recovery, both with the New York Times and the Washington Post, are the kind of normal, in-the-heartland-of-opinion pieces that don’t lend themselves to a libel suit. In fact, that’s not just people who oppose Donald Trump saying that; that’s people on the right as well. Anne and I talked about a piece by Andrew McCarthy at the National Review Online who says there’s absolutely no merit to this lawsuit whatsoever. That’s relevant to your question about discovery, because when you have a lawsuit that’s utterly lacking in merit, the defendant, in this case the New York Times and also the Washington Post, will make a motion to dismiss. That comes before discovery. The thrust of a motion to dismiss is that even if we assume all the allegations in the complaint are true, even if we assume that all the things you’re saying are correct, there’s still no basis to bring a lawsuit, so we dismiss the case and there’s no discovery. It’s only after that point that discovery takes place.

Preet Bharara:              Discovery of course is the exchange of documents and the taking of depositions and sometimes, and maybe this is the spirit in which you’re asking the question, it is something that is not to the benefit of the person filing the lawsuit. The plaintiff subjects himself, in this case Trump, to discovery and depositions and question asking and sometimes it can be bad for him. I think that the case is so meritless that we’ll never get there. But let me say this for the second time today, we’ll see what happens.

Preet Bharara:              I’m recording this mid-morning on Wednesday, March 4th, the day after the long-awaited Super Tuesday, when 14 states went to the polls to decide who might be the Democratic nominee. One thing that’s clear from Super Tuesday, no matter whose side you’re on: elections are not so predictable. There’s been a bit of good-natured and sometimes not so good-natured ribbing on Twitter of people who made strong predictions about what would happen on Super Tuesday, and that happens every election cycle. It happened in 2016 with respect to Donald Trump.

Preet Bharara:              Last week, Hugh Hewitt, the noted conservative commentator, said he was going to vote for Bernie Sanders himself, I think as a gambit and a ploy to make him the nominee because he viewed him as easier for Donald Trump to defeat. Whether that’s true or not, I don’t have a view. He had tweeted about an op-ed that he’d written in which he said, “Of course Joe Biden is taking a victory lap.” This is after South Carolina. “Of course, Joe Biden is taking a victory lap, but it’s not going to stop his losing Super Tuesday,” to which I responded, “LOL.”

Preet Bharara:              Twitter user Jim Driskell poses this question in his tweet in response to my tweet that said LOL. Jim Driskell asks, “Has he ever been right? #AskPreet.” People who speak about politics for a living and write about politics for a living, like Hugh Hewitt, make mistakes sometimes. They get things right sometimes. I think what’s more important than sort of dunking on people like Hugh Hewitt is to realize that all of us are fallible, that we have to be careful about making predictions of any sort whatsoever. I’ll point you to another tweet that I sent last night quoting the best screenwriter of all time, William Goldman, who very famously said in a book relating to screenwriting in Hollywood and predicting box office success, “Nobody knows anything.” That could apply to Hollywood and predictions about blockbuster success, but it also can apply to political pundits.

Preet Bharara:              I’m guilty of it sometimes myself. I try not to make many predictions lately because I tend to be wrong. My predictions tend to be in the area of what’s going to happen with respect to legal cases or pardons or that kind of thing. I’ve been wrong a bunch recently. But when it comes to elections, the reason it’s important for everyone to appreciate that nobody knows anything, that surprises can happen, that unforeseen events can take place, and that long-shot candidacies can win, is that your vote counts and you must go vote. Don’t let folks in the media, don’t let the sort of professional paid pundits tell you who’s going to win, who’s not, what matters and what doesn’t. Especially don’t let Twitter do that for you.

Preet Bharara:              Barack Obama’s candidacy was once considered an impossibility. Bernie Sanders’ candidacy was once considered an impossibility, and it has thrived. Same is true for Pete Buttigieg. He didn’t get there, but boy, he got a lot farther than people thought. Andrew Yang, same thing. A number of the people I just described may still become president in the future, and of course the mother of all bad predictions, the Donald Trump victory in 2016. Anything can happen. There were a lot of people saying five days ago that Joe Biden’s candidacy was dead. Many of those same people are now saying that Joe Biden’s nomination is all but assured. Neither of those things is true.

Preet Bharara:              There’s still a lot of states that have to vote. Things could yet change, and I’ll repeat what I’ve seen people say, I think wisely, on social media and on the airwaves for a while: vote for the candidate you believe in. Support the candidate you believe in. Pick the person who either aligns with your values or who you think has the best possibility of defeating the person you want to have defeated, and I’m speaking about Trump, and do it with all your heart and convince your friends and knock on doors and make phone calls, and don’t let one inevitability narrative replace another inevitability narrative, because there’s no such thing as inevitability, at least not yet.

Preet Bharara:              My guest this week is Hany Farid. He’s a professor and digital forensics expert working to develop the suite of tools that will help us detect deep fakes and other visual and auditory manipulations. As a heads up to our younger listeners at home, some of our discussion references explicit content, as Farid’s also been at the forefront of the fight to get non consensual pornography and child pornography off the internet. In our conversation, Farid explains that the incentive to make fakes isn’t new, but today’s technology and social platforms exacerbate the problem. For example, we discuss how a clever deep fake could upend the election. We’ll talk about the power of a shallow fake, why the democratization of information isn’t always a good thing, and whether we’re on the precipice of what Farid calls a misinformation apocalypse, plus how Farid became tied up in a conspiracy theory with none other than the alleged Kennedy assassin, Lee Harvey Oswald. That’s coming up, stay tuned.

Preet Bharara:              Professor Hany Farid, thanks so much for being on the show.

Hany Farid:                   It’s good to be here, Preet.

Preet Bharara:              I don’t know how to sort of introduce the topic other than to say we’re recording on Monday, March 2nd, and we thought we would take a break from the horror of the coronavirus and educate folks on yet a different horror that is looming in the country. It’s a thing that some folks have been hearing about, and we’ll talk generally about the weaponizing of technology, which is something that you’ve written about, and have studied, and have taught about. But in particular, this thing that people are beginning to hear about: deep fakes.

Hany Farid:                   Yeah.

Preet Bharara:              Now, you are a very smart guy and trained in a lot of different things. In fact, I understand that you have been called the father of digital forensics. Is that true?

Hany Farid:                   I have. I’m sort of grateful it’s not grandfather right now. I’ll stick with father for now.

Preet Bharara:              That’s not a thing to name your child. But anyway, come here digital. Can we start at the beginning on this issue?

Hany Farid:                   Good.

Preet Bharara:              What is a deep fake, what is meant by the term? Then we’ll get into how you do it, how you detect that something is fake, what the consequences are, how worried we should be, and all sorts of terrible things. But then, how are we going to fix it all? What’s a deep fake as the term is understood?

Hany Farid:                   Good. Let’s start with some definitions. First, let’s start by acknowledging that for a long time, since the inception of photography, we have been manipulating photographs, and videos, and audio. Typically, that has been done by a handful of very talented experts. Over the last 15 years, that has been democratized a little bit with programs like Photoshop that now allow somebody with maybe a little bit of skill to manipulate a photograph. What’s been happening over the years and the decades is we have been making the technology to manipulate digital media easier and easier to use.

Hany Farid:                   Today, the latest instantiation is so-called deep fakes. “Deep fakes” is a general term for using computerized algorithms, typically machine learning algorithms, to automatically, for example, synthesize audio in somebody else’s voice, synthesize images of people who have never existed, or create deep fake videos where you swap one person’s face for another person’s face inside of a video, or change their mouth to be consistent with a new audio track. Literally, you are putting words in their mouth.

Hany Farid:                   The important thing here is to understand that while the creation of fake content is not new, it is this automation, the fact that we are now using automatic algorithms: you simply point the algorithms at some data and you say, “Swap this person’s face for this person’s face. Create me an image of a person who doesn’t exist. Synthesize President Trump saying whatever I want him to say.” It is that democratization of access to technology that I think has many of us concerned about the misinformation apocalypse that is upon us.

Hany Farid:                   I think it’s important here. The tough term-

Preet Bharara:              True.

Hany Farid:                   I know we’re coming off the coronavirus and we’re still wrestling with that, and I don’t want to freak everybody out. But I do think that there is something bubbling up and you can’t just blame the deep fakes because we’ve been now seeing this unfold for years and it’s really a trifecta, if you will. We now have the ability to create fake content that is very convincing. We have the ability to publish that to the world through social media with essentially no filters and apparently very little oversight from the Facebooks, and the YouTubes, and the Twitters of the world.

Hany Farid:                   The third part of that is that we have a willing consumer, that we have become so partisan, both here in the US and around the world, that we are simply willing to believe the absolute worst about the people that we don’t like or we disagree with, and that’s the perfect storm. Create, disseminate, consume, amplify, rinse, repeat. I think that’s what we’ve been seeing over the last few years, and that’s sort of the landscape ahead of us right now.

Preet Bharara:              To use the shampoo analogy and continue it, it gets people kind of lathered up. But what’s interesting, you use the word democratization. You used that word multiple times a minute ago, and people usually think that’s a good thing. It has positive connotations, and yet you’re talking about it in the context of it being something dangerous and bad.

Hany Farid:                   Well, here’s another good example of that. For a long time we thought democratization of access to publish information was largely a good thing. Give everybody, seven billion plus people in the world, the ability to say and do anything they want online. That seemed like this sort of beautiful utopian idea. But it turns out when you do that, some pretty bad things happen because there are some bad people out there. When we don’t put checks and balances on the digital world the way we do in the offline world, well, some bad things happen. The same is true of technologies.

Hany Farid:                   For a long time in computer science and technology, the thinking was, “Develop technology and give it to the world and let’s see what happens.” But let me point out that if a biologist, speaking of the coronavirus, figured out how to create a deadly virus from ingredients in their kitchen, nobody would think it’s a good idea for them to put that recipe online and then ask, “Well, let’s see what happens next.” But that’s sort of what we do with technology on a routine basis. For a long time it’s been okay. But I think now, 20 years into the modern internet that we know, we have to start thinking a little bit more carefully about how technology is being weaponized.

Preet Bharara:              As you point out, the manipulation of images and moving images, video, has been around for a long time. In fact, it’s a staple of Hollywood, right? This thing that we’re talking about, the showing of people who are not really in the room. I think there have been some films where you have somebody who’s passed away and they use digital technology to have them appear in the film for entertainment purposes, and at a cost of millions of dollars.

Preet Bharara:              People don’t seem to have a problem with that. But if everyone with a laptop can do something that is false and have it pass as real, then I guess there’s a real danger of nefarious conduct. Before we get to how it’s done and how it’s fixed or detected, give us some examples of bad things that people can do.

Hany Farid:                   Let’s talk about what people are doing today. In fact, the term deep fake comes from the Reddit username of a guy who created the first deep fake non consensual pornography. They took the likeness of a woman and they put it into sexually explicit content and they distributed it online. The vast majority of deep fake content today is in that realm of non consensual pornography. It is yet the latest incarnation of how technology is being weaponized against women, and I think it’s pretty awful.

Hany Farid:                   Many states around the country and at the federal level are looking to figure out how they can regulate the space because it is, I think many people see it this way and I agree, yet another assault on women, particularly online, and it’s threatening and it’s awful.

Preet Bharara:              To understand what that is, you’re saying they’re not creating a video whole cloth. They’re taking an existing video and they’re superimposing some celebrity’s face on it.

Hany Farid:                   Not just a celebrity: journalists, people who participate in the Me Too movement, people who just attract unwanted attention. It can be anybody. Because what’s so interesting about this technology is it’s not just that you are vulnerable if you are the Scarlett Johanssons of the world; if you have your likeness on the internet, which we all do, you now are exposed and have a vulnerability, and we are seeing that type of weaponization against women.

Preet Bharara:              What’s another example?

Hany Farid:                   That’s one particularly problematic and troubling issue that we’re seeing. Here’s another, where we now start to get into some interesting landscape. Imagine I create a video of Mark Zuckerberg or Jeff Bezos saying, “Our profits are down 10%.” I leak that on Twitter or YouTube and that goes viral in 10 seconds. How much can I manipulate the stock market? How many billions of dollars can I move the stock market before anybody figures out that it’s not real? While that has not been done yet, we have the ability to create those types of videos. We have seen smaller scale versions of these videos being used for fraud.

Hany Farid:                   But in terms of economic security, I think there are real concerns about how information makes its way on the internet and how it can move markets to the tune of billions of dollars.

Preet Bharara:              I assume you can also affect politics in this way too. There’s a concern about, certainly, future elections, but maybe even this one too.

Hany Farid:                   Sure, sure. Here’s the nightmare situation for our democracies, which is that 24 or 48 hours before election night, somebody creates a video of a candidate and releases it online. Before anybody figures it out, we’ve moved hundreds of thousands of votes. If you don’t think that matters, I will remind you that in 2016, the difference between President Trump and Hillary Clinton was 80,000 votes in three states. When the margins are thin, you can move people very quickly.

Hany Farid:                   By the way, I’ll remind people too that Mark Zuckerberg has said, “We have no problem with the campaigns lying, outright lies, as long as you pay us, and then we will allow you to micro target those lies.” Imagine it’s not the Russian hackers. Imagine it’s the campaign themselves that create fraudulent videos, perfectly legal. Mark Zuckerberg has no problem with it and I’m going to micro target Ohio, and Florida, and Pennsylvania, and Michigan and we are going to change an election.

Preet Bharara:              Okay. These things are not widespread yet. We keep hearing the threat is on the horizon. So, what is possible today? Can somebody with a few hundred dollars, or you tell us how much it costs today, create the kind of video that you’re describing. Or is it still only in the realm of people who have a lot of money and very sophisticated technology?

Hany Farid:                   So, two things. One is misinformation is clearly here today. Whether it’s in the form of deep fakes or not, it is here. We have been seeing it for years, both here in the US, and Europe, and around the world. Now the question is, when do we get to the next stage where we start seeing full blown synthetic images and videos? What we have been seeing is the use of deep fake images to create fraudulent accounts on LinkedIn, on Twitter, on Facebook to promote fake news and mis- and disinformation. We’ve started to see this low level fraud happening. I agree with you, we have not seen the full blown video of Elizabeth Warren, Joe Biden, Bernie Sanders or Donald Trump saying something that’s offensive or crazy or illegal.

Hany Farid:                   I think most of us think that maybe if it’s not 2020, it’s going to come down the line.

Preet Bharara:              But is that because people are abstaining or is that because it’s still too difficult to do? Could you actually create a video of Joe Biden saying something that would be offensive to a large segment of the population?

Hany Farid:                   We can and we have. Not something offensive, but we have created video in my lab of politicians saying things that they’ve never said. To your question, where are we? The core code that we use to create these deep fakes, anybody can download from the internet. You go to GitHub. GitHub-

Preet Bharara:              Oh, don’t tell people.

Hany Farid:                   I’m not going to tell you where to find it. You know what? It’s too late.

Preet Bharara:              Okay guys, don’t try this at home.

Hany Farid:                   You can download the code. I will say that if you try to run it right out of the can, you’ll get some interesting results, but they won’t be great. They’re not going to fool anybody. But if you have a little bit of skill and a little bit of computing power and typically around a week or so of time, you can create a pretty compelling fake. But what’s important here is not where are we today, but what is the trajectory? What we’ve been seeing over the last 18, 12, 6 months is that software is getting better and better and better. It’s running faster and faster. It needs less and less data and it is just a matter of time before it’s going to be plug and play.

Preet Bharara:              How much does that cost-

Hany Farid:                   [crosstalk] press your button.

Preet Bharara:              What’s the cost?

Hany Farid:                   It depends on the quality of the video. We, for example, have a single computer instance sitting in Amazon’s cloud, and that costs us a couple of thousand dollars a month to run, and that’s it. If you’re a little more patient and you have a high end laptop, you can probably run that on your laptop too. The computing is not the rate limiting step anymore, and that is narrowing and narrowing. I think what’s going to be interesting to see is… I think two things are going to be interesting. One is when and if these deep fakes really start to penetrate, because the fact is that good old fashioned fake news and fake video work.

Hany Farid:                   If you saw the Speaker Pelosi video, where they simply slowed it down to make her sound drunk, that wasn’t a deep fake. It was what we are now calling a shallow fake, and it was really effective. It went viral online, people were outraged by it. Simple misinformation where people tweet out things that didn’t happen is incredibly effective. But I think here’s what I would argue is the larger underlying issue that is going to be difficult for us, because whether deep fakes get weaponized in 2020, 2022, or 2024 is sort of missing the broader point. As we enter a world where images, and video, and audio, and the news stories we read can be faked, well, then nothing is real. Everybody has plausible deniability, or the so-called Liar’s Dividend.

Hany Farid:                   Now, any politician, anybody who doesn’t like an image or a video or an audio of them can say, “Oh, it’s fake.”

Preet Bharara:              It’s an interesting point. As you say, we keep talking about the fake thing, but in the world of tomorrow, the real thing that’s offensive can be disclaimed as fake.

Hany Farid:                   Yeah. To give you a sense of how quickly the landscape has shifted, let’s go back to 2015, five years ago. Then-candidate Trump gets himself in a little bit of trouble for the Access Hollywood tape where he says things that are very offensive, and what did he do? He apologized and his campaign apologized, and they apologized, and they apologized. Does anybody today think that they would be apologizing if that audio came out? No. You would say it’s fake news. You can’t believe it. Not only would you say that, you would have plausible deniability because if you remember, in that tape, you never saw him. All you did was hear the voice.

Hany Farid:                   By the way, some people may have missed this: two years on, in 2017, now-President Trump says, “Oh, I think that that audio is fake. We don’t have to talk about it anymore.” That’s how quickly the landscape has shifted. I will tell you, on a weekly and sometimes daily basis, I get emails from people around the world saying, “There’s a video of me, there’s an image of me and it’s fake.” Whether that’s in a court of law, whether it’s just simply embarrassing, whether it’s a politician in trouble, “it’s fake, it’s fake news” is becoming a mantra. That in some ways is the real danger here, that we are getting to where we simply are going to struggle to believe what we see, hear, and read online. I don’t know how you have a democracy in that situation.

Preet Bharara:              You mentioned the trifecta. It’s interesting the fact that you can make fake stuff, you can publish it to the world, and there’s a willing consumer. Would you add to that or do you think it’s sort of an umbrella concern that we are more susceptible these days to conspiracy theory and to fake stuff than we ever have been before? Or have we always been this way?

Hany Farid:                   I would add that to it and I think it’s an important addition to it and I would say it’s a combination of things probably at play. One of them that we have to take a very hard look at is the filter bubble that is social media, that the underlying business model of Facebook, of YouTube, of Twitter, of social media is to engage you on their platform. That means that their incentives are to show you things that conform to your worldview because that’s what’s going to keep you clicking.

Hany Farid:                   Their incentive, as Mark Zuckerberg has admitted, is that sensational, outrageous, conspiratorial content engages you more. The algorithms that are being optimized to figure out how to keep users on the platform for as long as possible, so they can deliver ads and extract data from them, are learning to keep reinforcing your previously accepted views and to give you more sensational, more conspiratorial content. Now, I imagine there are other issues at play here, that we are a more polarized society, but you can’t ignore, we can’t ignore, this filter bubble of social media, where now the majority of Americans, and outside of the US the majority of people, get the majority of their news. That’s a very troubling landscape.

Preet Bharara:              Yeah. I think it sounds like it’s a little bit of a subset of point three in the trifecta. We have a willing consumer base because people are prepared to believe almost anything. For example, going back to prior elections, if you had a belief, because you didn’t like President Obama, that he had certain views, or he talked a certain way with his confidence, or that he was from Kenya, you’re going to be willing to believe manipulated images that show those things.

Hany Farid:                   That’s right. That’s exactly right. Part of that is we are more polarized, but part of that is, imagine that you get the majority of your news on Facebook and every news article keeps conforming to that narrative, and the micro targeted ads by the people who didn’t like President Obama conform to that narrative. Well, it’s hard to escape from that filter bubble, or that rabbit hole as it’s called. There’s two issues here with the social media platforms: not only do they allow this stuff on the platform, but they’re pushing it down your throat. They are not just neutral platforms.

Hany Farid:                   70% of content on YouTube that is viewed is promoted by YouTube. They are telling you what to watch next. 100% of material on Facebook is Facebook algorithmically telling you what to look at on your newsfeed. These are some of the most powerful editorial pages in the world because they’re doing this at the global scale, not just at the US scale.

Preet Bharara:              You mentioned a minute ago the Access Hollywood tape and the idea that real things are going to be accused of being fake, and that’s not a new thing. There’s an interesting story in your background that I was reading about over the weekend. There’s a famous photograph, I think, of Lee Harvey Oswald, who is alleged to be the person who shot President Kennedy, and he’s standing with a weapon in a photo that he claimed back then was fake, and you had some involvement in proving that it was real. Explain that.

Hany Farid:                   Yeah. First of all, this is a great story.

Preet Bharara:              That’s why I’m asking you about it.

Hany Farid:                   Yeah. This is actually really one of my favorite analyses that we’ve done, for a couple of reasons. One is that you are absolutely right: Lee Harvey Oswald, when shown that picture of him holding the rifle, the same type of rifle that was used to kill President Kennedy, said it’s fake, which is really amazing. If you think about it, pre-Photoshop, to be able to say that-

Preet Bharara:              1963.

Hany Farid:                   1963. Now, since 1963, I’m sure most people know there have been any number of conspiracies and theories about what actually happened in the assassination of President Kennedy, up to and including aliens, and time travelers, and the Cubans, and the Russians, and the FBI, and the CIA. There’s a long laundry list of conspiracies.

Hany Farid:                   One of the things that those conspiracy theorists point to is purported inconsistencies in the so-called backyard photo of Lee Harvey Oswald. For years, I was getting email from people telling me, “You have to look at this image, you have to look at this image, you’re going to blow the lid off of this thing.” One summer, a few years back, I got a particularly interesting email that was very coherent. They pointed to things in the image that I honestly couldn’t understand. The shadows and the lighting did look odd to me. I thought, “Well, this might be interesting.” I set off for a few months to analyze the photo and did a very careful three-dimensional reconstruction of the scene to understand the lighting, and the shadows, and the size of the gun, and Oswald’s stability.

Hany Farid:                   We did a full blown 3D analysis and we found that the image is completely consistent with the expected physics of lighting and gravity. Everything sort of just came together very nicely. We published this work and I was very excited because I thought, “Guys, I’ve got some good news for you. You can move on now.”

Preet Bharara:              Right.

Hany Farid:                   Which, looking back on it, was incredibly naive, because what happened is I became part of the conspiracy. The narrative that emerged was, ah-

Preet Bharara:              Right.

Hany Farid:                   He is working for the FBI. He’s part of the conspiracy. This is in some ways the brilliance of conspiracies, because there’s two types of data. There’s the data that supports your conspiracy and there’s the data that doesn’t, which is part of the conspiracy to cover it up.

Preet Bharara:              Right. You can’t win.

Hany Farid:                   I became part of the latter. You can’t win.

Preet Bharara:              You can’t win in the conspiracy theory.

Hany Farid:                   Nope. There was no discussion.

Preet Bharara:              Because whatever new information enters the dataset, you just reject it or you explain it away as biased.

Hany Farid:                   Exactly. Flat earth, September 11th didn’t happen. School shootings didn’t happen. The moon landing didn’t happen. That long laundry list of conspiracy theories, there is no debate to be had. I will say, by the way, just today, we released a study, a 15-month study of how YouTube promotes conspiracy videos. What we find is that at its peak in late 2018, some 10% of promoted videos on informational channels, so the CNNs, the NPRs, the New York Times of the world, were conspiratorial, and that number has gone down and has now been fluctuating between 3 and 5%.

Hany Farid:                   It’s been a really interesting study to see how YouTube has finally started to respond to this. But I would still argue somewhat anemically, because we are still seeing, again, not just neutral hosting. They are telling you to watch these videos, and that is a very dangerous landscape when those videos say things like, “Drink bleach and you won’t get the coronavirus.”

Preet Bharara:              Right.

Hany Farid:                   This is a little-

Preet Bharara:              [crosstalk] either.

Hany Farid:                   Exactly. Not only is YouTube allowing it on the platform, they’re telling people to watch this. It’s easy to sort of make fun of these conspiracies. The fact is that some of them are incredibly dangerous. The Pizzagate conspiracy, where a guy showed up at a pizza joint in DC because he was convinced that Hillary Clinton was running a child pornography ring. He showed up with an AR-15 and fired shots. What happens online doesn’t stay online. It bleeds over into the real world and that’s incredibly dangerous.

Preet Bharara:              It seems to me, and maybe tell me if this is too outlandish, that you’re taking a conventional weapon that’s existed for a long time, in the same way firearms have existed, that is very dangerous and can do damage to an adversary or an opponent. Now, you throw in very, very realistic video and sound, and you’ve taken something that was conventional and maybe made it more like nuclear power.

Hany Farid:                   I think that’s exactly the right way to think about it: we have this existing problem of mis- and disinformation, and fraud, and conspiracies, and harassment. Now, we’re injecting into this incredibly powerful deep fake technology that puts on steroids everything that we have seen before. I think there are two important things there: it’s not the deep fakes in and of themselves, but it’s the injection of this new powerful medium.

Hany Farid:                   Look, let’s be honest that up until fairly recently, when you saw video, you sort of believed it. I mean, images, we have come to sort of cast some doubt on them, all the images of sharks swimming down the streets after a hurricane. But video still held this sort of sacred spot, as did audio recordings, and that is starting to go away. What’s left now? Okay, so what I read, what I see, what I hear-

Preet Bharara:              There was nothing left.

Hany Farid:                   There’s nothing left, right?

Preet Bharara:              I have to physically be touched in order to confirm.

Hany Farid:                   Right. We have to touch it, right? Okay. Well, that’s a dangerous landscape. Now, what’s the answer to this? Well, I think part of the answer is the social media companies have to start getting more serious about how their platforms and services are being weaponized and they have been too slow to do that. Then I think we as the consumer have to start thinking about, how do we get trusted information?

Hany Farid:                   Honestly, what that means is, get off of Twitter, and get off of Facebook, and get off of YouTube and return to our trusted sources.

Preet Bharara:              Like podcasts.

Hany Farid:                   Like podcasts, exactly.

Preet Bharara:              How does the listener know right now that this is you and me talking?

Hany Farid:                   Good. They don’t, obviously. At the end of the day, look, this whole thing can be synthesized.

Preet Bharara:              It would save me a lot of work.

Hany Farid:                   It would, but there’s a difference between the incentives of you and the incentives of somebody on YouTube who’s trying to drive advertising dollars, right? Your credibility matters down the line. Maybe you can get away with it once, maybe twice. But your credibility as a journalist and as a serious thinker and as somebody who does podcasts is that your audience has to trust you. That’s important and that’s true of most mainstream outlets. That exchange that we have is what sort of keeps us honest in this conversation right now. The same cannot be said of Facebook, and Twitter, and YouTube where you are rewarded the more outrageous you are because that’s what drives activity and that’s what drives advertising dollars.

Preet Bharara:              With respect to what people believe, you’ve said, based on human psychology, that people are deeply visual beings, and that’s been true forever. There are jokes that people tell, like, “Who are you going to believe, me or your lying eyes?”

Hany Farid:                   Right.

Preet Bharara:              That may be changing. Let me ask you this question: what is harder to do a deep fake with, video of a person or audio?

Hany Farid:                   It’s a great question. The answer is, it depends. With the video, it depends on what you’re trying to do. If you want to create a video of a talking head, say 15 seconds of video staring directly at the camera and you want to simply change the words or map a face on, that’s relatively easy. I mean, you can do that and that technology is more or less open source and you can download that and run it.

Preet Bharara:              What if I want to do something? I’ll give you the example. What if you want to show someone you don’t like killing another person, stabbing someone to death?

Hany Farid:                   Yeah. Right now, that’s very hard because most of the deep fake video is from the neck up. We are manipulating what somebody is saying and their facial expressions. Think talking heads on television, politicians. Now, to do this, you could certainly do the following. You could fully reenact the scene that you want to put somebody into. Get a bunch of actors, studio, film that whole scene, and then do a so-called face swap, deep fake. But now you’re getting into a very, very high threshold, right? A very-

Preet Bharara:              For example, the movies do it all the time. There are people-

Hany Farid:                   The movies do it all the time, but with budgets of millions of dollars, right? Millions and millions of dollars. So, sure.

Preet Bharara:              But for the ordinary person, if you want to frame someone for a homicide with video proof, that’s really hard?

Hany Farid:                   Yeah, that’s really hard. Now, on the other hand, and this is why we worry more about the political realm, is if I want to put 10 seconds of audio into a candidate or a president’s mouth, that’s relatively easy to do, or a CEO’s mouth. The only exception to what I’m saying is in the non consensual porn, that’s pretty common because all of that material already exists. You simply download sexually explicit material and you map a woman’s face into it, and there’s also a different goal there.

Hany Farid:                   The goal is not to convince you that a famous actress or somebody that you don’t like is actually in there. It’s really more of a bullying and a terrorizing. Whereas if you’re trying to create digital evidence to put somebody in jail, that’s a different threshold and we’re not there yet, but-

Preet Bharara:              It seems like a little bit of an easier thing to do, based on what you’re saying, going back to my framing-for-a-homicide example: you might be able to get someone to look like they’ve confessed to the crime.

Hany Farid:                   Yes. That’s more likely to happen, right? Somebody saying, “Oh, I can’t believe I did that.” Five seconds of video done.

Preet Bharara:              Right. Somebody says that I came across this person who was my friend or my adversary or whatever and takes an iPhone video and it looks like that person has confessed and you give that to the police. That’s a much more realistic concern.

Hany Farid:                   That’s exactly right. It’s absolutely realistic today. Now imagine we go into the court knowing what we know. I’m going to come back to the plausible deniability question and defendants can say, “Well, that video evidence of me is fake. The police faked it, my friend faked it. The CCTV is fake.” Where is the jury now? Where’s the courts? When they read about deep fakes and know that things can be manipulated, how do we come to grips with authenticating content for everything from national security to dealing with political campaigns to the courts?

Preet Bharara:              Let’s take an easy example. Let’s say, on the confession example, which I’m hung up on, you don’t have video and you have a person who says, “I got the guy to confess to this.” You see this scenario play out in novels and movies all the time, and you have the person’s voice. If you just have to do voice and you try to get Preet Bharara confessing that he was responsible for some homicide that took place, that’s not that hard, right? I’m going to play you an example. We’re going to do a little test in a moment, but how hard is that to do?

Hany Farid:                   Here’s some bad news for you, Preet. You, somebody who has recorded a lot of podcasts, have a lot of exposure.

Preet Bharara:              Because I’ve said a lot of words.

Hany Farid:                   You’ve said a lot of words. I can download all those podcasts and I could train a deep neural network to synthesize speech in your voice. It’s so-called text to speech. I have it listen to hours and hours and hours of you, and then I type text into the computer and it will synthesize audio in your voice.

Preet Bharara:              Just to be clear, it is not doing what I sometimes see Jimmy Fallon do, taking words that I’ve actually said and-

Hany Farid:                   No.

Preet Bharara:              … editing them together.

Hany Farid:                   No, no.

Preet Bharara:              You’re talking about a pure synthesis.

Hany Farid:                   Full synthesis from the ground up. It’s not, “Take this word from this podcast and this word and then splice them together.” Because that never sounds that good, by the way. Although one of your other vulnerabilities is that all your podcasts are recorded in a very high quality studio setting, no background noise. But this is full on synthesis. I don’t need you to have said exactly that word, because what it learns is so-called phonemes, right? The building blocks of the words, and then I can get you to say those words entirely at my discretion.
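
To make the phoneme idea concrete, here is a minimal, illustrative sketch, not the system Farid’s lab uses, of the first stage of a typical text-to-speech pipeline: mapping arbitrary text onto the phoneme “building blocks” that a voice model trained on hours of a speaker’s audio would then render as speech. It assumes Python with the nltk package and its copy of the CMU Pronouncing Dictionary; the example sentence is arbitrary.

# Toy illustration of the text-to-phoneme step of a TTS pipeline.
# A trained voice model (not shown) would turn the phoneme sequence into audio
# in the target speaker's voice.
import nltk
from nltk.corpus import cmudict

nltk.download("cmudict", quiet=True)   # one-time download of the dictionary
pronunciations = cmudict.dict()        # word -> list of possible phoneme sequences

def to_phonemes(sentence: str) -> list[str]:
    """Flatten a sentence into its phoneme 'building blocks'."""
    phonemes = []
    for word in sentence.lower().split():
        word = word.strip(".,?!")
        if word in pronunciations:
            phonemes.extend(pronunciations[word][0])  # take the first pronunciation
        else:
            phonemes.append(f"<unk:{word}>")          # out-of-vocabulary word
    return phonemes

print(to_phonemes("I never said that."))
# e.g. ['AY1', 'N', 'EH1', 'V', 'ER0', 'S', 'EH1', 'D', 'DH', 'AE1', 'T']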

Preet Bharara:              Can we try this experiment?

Hany Farid:                   Sure.

Preet Bharara:              There’s a guy named Joe Rogan. I think he does some kind of podcast or something. I don’t know.

Hany Farid:                   Sure. I’ve heard of him.

Preet Bharara:              We have two clips and it is my understanding that one clip is taken actually from Joe Rogan, from the podcast, I believe and the other is a result of this audio manipulation, totally fake.

Hany Farid:                   Good.

Preet Bharara:              Okay. You’re the expert, so we’re going to play them both. They’re very short. Then we’ll have the listeners sort of think to themselves which one they think is the true one and which one is the false one. Then we’re going to come back and ask the professor. Can we play clip one?

Speaker 5:                    Fantastic old world craftsmanship that you just don’t see anymore.

Preet Bharara:              Okay, that’s clip one. Here’s clip two.

Speaker 6:                    What was the person thinking when they discovered cow’s milk was fine for human consumption and why did they do it in the first place?

Preet Bharara:              Okay, so before you answer, Professor, let’s have everyone who’s listening, and I’m betting a lot of people are familiar with Joe Rogan’s voice, and those two sounded very similar to me even if you’re not familiar with his voice. Before you say what you think, do you have any basis, do you have any ability that’s greater than the average person’s, to judge which one of those is fake and which is real?

Hany Farid:                   Absolutely not. Absolutely not.

Preet Bharara:              Without using equipment?

Hany Farid:                   Yeah. Without using equipment. I mean, honestly, and this is really what’s a little terrifying, because we have so much experience with audio, and video, and images, we’re very comfortable with it and we think we’re pretty good at it. But the truth is that we’re not that good at it and we’re now-

Preet Bharara:              Which one was the fake one?

Hany Farid:                   I don’t know. I would say this is close to a coin flip. First of all, the synthetic Joe Rogan voices were a game changer for me when this came out because that was really the first example of, “Wow, we really can do this now.” I think number two is the fake one, but it’s-

Preet Bharara:              Based on what? Why would you guess that?

Hany Farid:                   The cadence of his voice seemed a little bit faster than what I’m used to hearing. But I’m not entirely positive that that’s true. I certainly-

Preet Bharara:              This is why you have the academic accolades that you have. Clip number two was the fake one. Let’s play it one more time so people can hear it again.

Speaker 6:                    What was the person thinking when they discovered cow’s milk was fine for human consumption, and why did they do it in the first place?

Preet Bharara:              Now, if it mattered that you got this right, what would be the way that you would determine with equipment and methodology that it was the fake one?

Hany Farid:                   We have a couple of techniques for audio, and they're honestly not as well developed as for the video. One of them is that we've discovered, and I don't think this will surprise people, that when you synthesize something in a computer, in a very sort of distilled environment, there's no noise, there's no imperfections, there's no microphone variation. The statistics of the sound are different, just fundamentally slightly different. We have these techniques that can look at these very complex statistical properties of the signals and tell whether they are consistent with a person talking, going through a microphone, being recorded, and maybe being compressed, as opposed to being synthesized whole cloth on a computer. That's one technique that we've developed.
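
The technique described here, examining the low-level statistics of a recording, can be illustrated with a toy sketch. This is not Professor Farid's actual system; it simply compares per-frame spectral flatness and noise-floor variability between two audio files (the file names are hypothetical), a vastly simplified version of the "complex statistical properties" he alludes to.

```python
# Toy illustration (not the guest's actual method) of the idea that audio
# synthesized in a "distilled" environment, with no room noise or microphone
# variation, can have subtly different low-level statistics than a real
# recording. File names below are hypothetical.
import numpy as np
from scipy.io import wavfile

def frame_stats(path, frame_len=2048, hop=1024):
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:                        # mix stereo down to mono
        audio = audio.mean(axis=1)
    audio = audio.astype(np.float64)
    audio /= (np.abs(audio).max() + 1e-12)    # normalize amplitude
    flatness, floors = [], []
    for start in range(0, len(audio) - frame_len, hop):
        frame = audio[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
        # Spectral flatness: geometric mean / arithmetic mean of the spectrum.
        flatness.append(np.exp(np.mean(np.log(spectrum))) / spectrum.mean())
        # A crude "noise floor": the quietest 10% of frequency bins this frame.
        floors.append(np.percentile(spectrum, 10))
    return np.array(flatness), np.array(floors)

if __name__ == "__main__":
    for label, path in [("clip 1", "rogan_clip1.wav"), ("clip 2", "rogan_clip2.wav")]:
        flat, floor = frame_stats(path)
        print(f"{label}: mean flatness={flat.mean():.4f}  "
              f"noise-floor variability={floor.std():.6f}")
    # A real forensic system would feed many such statistics into a trained
    # classifier rather than eyeballing two numbers.
```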

Hany Farid:                   The next technique we're developing is a biometric technique where we actually look at something very similar to what I said, which is the cadence of the speech. Both of those clips are very, very good. They sound like Joe Rogan, and if you just played a fraction of a second of a clip, nobody would be able to tell. But when we talk, it's not just the sound of our voice, how deep or how high it is, but it's how we end sentences. It's the cadence of our speech. It's how we pause on certain words. That can be somewhat distinct.

Hany Farid:                   With people like Joe Rogan, who have this phenomenal volume of podcasts that you can draw from, you can build very complex statistical models of his speech patterns. Does he use certain words more than other words? In what order does he use the words? It's the same way we do author attribution.
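
The author-attribution idea can be sketched very simply. The following is a minimal illustration, not Professor Farid's model: it builds a word and word-pair profile from a large body of known transcripts of a speaker, then scores how typical a questioned transcript is under that profile. File names are hypothetical.

```python
# Minimal author-attribution-style sketch: profile a speaker's word and
# word-pair usage from known transcripts, then score a questioned transcript.
import math
from collections import Counter

def ngrams(text, n):
    words = text.lower().split()
    return zip(*(words[i:] for i in range(n)))

def build_profile(reference_text):
    """Unigram and bigram counts from known recordings of the speaker."""
    return Counter(ngrams(reference_text, 1)) + Counter(ngrams(reference_text, 2))

def avg_log_likelihood(profile, questioned_text):
    """Average log-probability of the questioned text under the profile,
    with add-one smoothing so unseen n-grams don't zero everything out."""
    total = sum(profile.values())
    vocab = len(profile)
    grams = list(ngrams(questioned_text, 1)) + list(ngrams(questioned_text, 2))
    score = sum(math.log((profile[g] + 1) / (total + vocab)) for g in grams)
    return score / max(len(grams), 1)

if __name__ == "__main__":
    known = open("rogan_reference_transcripts.txt").read()   # hours of real speech
    profile = build_profile(known)
    for name in ["questioned_clip1.txt", "questioned_clip2.txt"]:
        print(name, avg_log_likelihood(profile, open(name).read()))
    # A lower score means the text is less typical of the speaker's usual word
    # choice and ordering; the output is probabilistic, not definitive.
```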

Preet Bharara:              That would be probabilistic, right? That wouldn’t be definitive.

Hany Farid:                   It would. In fact, everything we do is probabilistic.

Preet Bharara:              Right.

Hany Farid:                   There is no certainty in this game of forensics.

Preet Bharara:              Then you're entering into a world of battle of the experts, in the same way you do with blood spatter and anything else in the courtroom.

Hany Farid:                   That's exactly right. This is where things get very complicated: it is rare that you can say, "With 100% certainty, I will swear on my life that this is real or this is fake." That has almost never happened in my experience.

Preet Bharara:              With respect to the second technique and trying to figure out the accuracy of a voice clip, if it’s Joe Rogan, I guess you can do a comparative analysis because as you said, there’s a lot of Joe Rogan out there. But going back to my homicide example, if you’re trying to frame somebody as a private citizen who doesn’t have a lot of that, would you be able to use the second technique on that person?

Hany Farid:                   Now, that's the right question to ask. The answer is no. This is true for both the audio and the video work that we do: we typically require, in some cases, hours of video and audio recording of the individual so we can build models of how they sound, how they move their facial expressions, how they move their head. That is very good for CEOs, political candidates, people who have a big footprint online, but not so much for some random person who's arrested and charged with murder. That's one of the challenges that we are facing, and we're sort of working from where we see the major threats.

Hany Farid:                   Please understand, I’m not saying it’s not a major threat that somebody is falsely accused, but we are looking at things like threats to democracy, threats to the stability of the stock market. The hope is that as we get better and better at this, we can start to protect more and more and detect more and more of these types of deep fakes.

Preet Bharara:              What about visual? You have different techniques to detect whether something that’s in a video or a photograph is fake or real.

Hany Farid:                   Let's talk about video for a second and we'll come back to the images. With video, we take fairly similar approaches. I'll describe two of my favorite techniques. For the first, we look typically at a couple of hours of video recording of, say, President Obama. What we noticed is that he has these really interesting behavioral tics or mannerisms. For example, with President Obama, when he delivers bad news or when he's sad, he tends to frown, the sides of his mouth go down a little bit, and he tilts his head forward ever so slightly, almost looking downward.

Hany Farid:                   He started almost every single one of the weekly addresses that he would record by saying, "Hi everybody," and tilting his head up and to the right. It was almost like this tic that he had. What we do is we look at hours of video of President Obama, President Trump, Senator Warren, Senator Sanders, and so on and so forth, and we learn these behavioral mannerisms. These mannerisms unfold over not just fractions of seconds but seconds, 10-second clips is typically what we look at, and then we build, if you will, a biometric.

Hany Farid:                   Then, when videos come in, think about the nature of what we call a face-swap deep fake, where you've replaced one person's face with another: fundamentally, the person you're looking at is not who they purport to be. The way they move their facial expressions, their head movements, are simply inconsistent, however good an impersonator they are, with the person that they are trying to imitate. That's one of the techniques that we have.
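
A highly simplified sketch of this "soft biometric" idea follows. It is an illustration of the approach, not the team's implementation: it assumes a face and head tracker has already produced per-frame measurements (head pose, expression intensities) saved as .npy files with hypothetical names, summarizes each roughly 10-second clip, learns the typical range from authentic footage, and measures how far a questioned clip falls from it.

```python
# Sketch of a behavioral-mannerism ("soft biometric") check for face swaps.
# Feature extraction (a face/head tracker) is assumed to have run already.
import numpy as np

def clip_signature(features):
    """features: (n_frames, n_measurements) array, e.g. head yaw/pitch/roll and
    expression intensities per frame. The signature is each measurement's mean
    and standard deviation over the clip, capturing characteristic mannerisms."""
    return np.concatenate([features.mean(axis=0), features.std(axis=0)])

def fit_reference(signatures):
    """Mean and inverse covariance of signatures from many authentic clips."""
    sigs = np.stack(signatures)
    mean = sigs.mean(axis=0)
    cov = np.cov(sigs, rowvar=False) + 1e-6 * np.eye(sigs.shape[1])
    return mean, np.linalg.inv(cov)

def mahalanobis(signature, mean, inv_cov):
    d = signature - mean
    return float(np.sqrt(d @ inv_cov @ d))

if __name__ == "__main__":
    # Authentic footage of the person, cut into ~10-second chunks (hypothetical files).
    real = [clip_signature(np.load(f"obama_real_clip_{i}.npy")) for i in range(200)]
    mean, inv_cov = fit_reference(real)
    questioned = clip_signature(np.load("questioned_clip.npy"))
    print("distance from reference behavior:", mahalanobis(questioned, mean, inv_cov))
    # A large distance suggests the face on screen is not moving the way the real
    # person does -- evidence of a face swap, but probabilistic, not proof.
```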

Hany Farid:                   The other one that I'll mention very briefly is that there's another type of deep fake called the lip-sync deep fake, where you take the person talking and you're now going to synthesize a new audio recording from them, for example using the technology that we just heard with the Joe Rogan clips. Then I'm going to synthesize their mouth to be consistent with that audio recording, right? So I'm literally putting words into their mouths.

Hany Farid:                   One of the things that we've noticed is that the shape of the mouth, although it looks really good when you watch the video at 30 frames a second, is not exactly physically correct. My favorite example of this, and your listeners can do this, is to try to say a word that starts with M, B or P. So, mother, brother, parent. When I'm doing it, and if you want to look in the mirror, you'll notice that your mouth has to completely close. On mother, my lips have to close, and if they don't, I can't quite form that phoneme, mother.

Preet Bharara:              Unless you’re a ventriloquist.

Hany Farid:                   Unless you’re a ventriloquist.

Preet Bharara:              We can talk about [crosstalk 00:50:17].

Hany Farid:                   Those are a huge threat to me. Okay, fine. I can deal with that.

Preet Bharara:              I’m sorry, I’ll be quiet now.

Hany Farid:                   Good. No, no, no. I was thinking the same thing by the way, Preet, so it's fine. What we noticed in the lip-sync deep fakes is they don't always get it right. The mouth doesn't always form the correct shape for different phonemes. My other favorite one is F and V, as in favor and Victor, where your lower lip tucks in a little bit and your teeth come down. The mapping of what we call visemes, the shape of your mouth, to phonemes, the sounds that you make, is not always perfectly preserved in the creation of these fakes. Well, at least for now. That's the second technique that we have to go after these types of deep fakes.
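
The viseme/phoneme check can be sketched as follows. This is a simplified illustration, not the actual detector: it assumes a landmark tracker has produced a per-frame lip-opening measurement and a forced aligner has produced phoneme timings (both hypothetical inputs), and it flags bilabial phonemes (M, B, P) during which the lips never close.

```python
# Sketch of a viseme/phoneme consistency check: during M/B/P sounds the lips
# should fully close, so compare a lip-opening signal against phoneme timings.
import numpy as np

BILABIALS = {"M", "B", "P"}

def closure_violations(lip_gap, fps, phonemes, closed_threshold=0.15):
    """lip_gap: per-frame lip opening, normalized so ~0 means closed.
    phonemes: list of (label, start_sec, end_sec) from a forced aligner.
    Returns bilabial intervals where the lips never came close to closing."""
    violations = []
    for label, start, end in phonemes:
        if label.upper() not in BILABIALS:
            continue
        lo = int(start * fps)
        hi = max(int(end * fps), lo + 1)
        gap_min = float(lip_gap[lo:hi].min())
        if gap_min > closed_threshold:
            violations.append((label, start, end, gap_min))
    return violations

if __name__ == "__main__":
    fps = 30
    lip_gap = np.load("questioned_video_lip_gap.npy")        # one value per frame
    phonemes = [("M", 1.20, 1.31), ("AH", 1.31, 1.45), ("P", 2.02, 2.10)]
    bad = closure_violations(lip_gap, fps, phonemes)
    print(f"{len(bad)} bilabial phoneme(s) with no lip closure:", bad)
    # In a lip-sync deep fake the synthesized mouth sometimes fails to close on
    # M/B/P, which is exactly what this flags -- one test among many.
```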

Preet Bharara:              These are also probabilistic, not definitive. Or can they be?

Hany Farid:                   Absolutely. Well, look, I mean, nothing is really definitive. The way I think about this is we don't have one or two tests that we do. We have dozens and dozens of them. If I can run dozens and dozens of tests and I have a hit on not one, not two, not three, but five of them, that there are these inconsistencies, then I think we can say with a reasonable degree of certainty that this is not real.

Hany Farid:                   Now, if all of them pass, that’s sort of an interesting question because when you find these inconsistencies, you can usually say something reasonably definitively. But when you don’t find inconsistencies, you’re left with one of two options. It’s real or it is a fake made by somebody who’s smarter than me and I can’t separate those two things out. Authenticating, in some ways, is harder than debunking.
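
One way to picture combining many tests is the sketch below. The test names and scores are hypothetical, and treating each test as an independent p-value combined with Fisher's method is a simplifying assumption, not a description of the actual forensic pipeline.

```python
# Sketch of combining many individual forensic tests. Each test reports a
# p-value-like score for "consistent with an authentic recording"; very small
# values are inconsistencies. Several strong inconsistencies support "fake";
# finding none only means "real, or a fake we can't detect."
from scipy.stats import combine_pvalues

tests = {
    "audio noise statistics":    0.002,
    "speech cadence model":      0.030,
    "head-movement biometric":   0.450,
    "viseme/phoneme check":      0.010,
    "eye specularity lighting":  0.600,
}

flagged = {name: p for name, p in tests.items() if p < 0.05}
# Fisher's method yields one overall p-value under the (strong) assumption
# that the individual tests are independent.
stat, combined_p = combine_pvalues(list(tests.values()), method="fisher")

print(f"{len(flagged)} of {len(tests)} tests flag an inconsistency: {sorted(flagged)}")
print(f"combined p-value (Fisher): {combined_p:.4g}")
# A small combined p-value says "fake" with reasonable confidence; a large one
# does NOT certify authenticity -- authentication is harder than debunking.
```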

Preet Bharara:              The other problem is, we've been talking a little bit, because I keep using the example of the homicide, as if the ultimate test of the fake will be in a court of law, where you will have experts, and rules of evidence, and a judge ruling. Whereas probably the most likely use of this kind of thing is not going to be to frame someone for a homicide. It's going to be in general political discourse, or to destroy people's reputations, about which there's never going to be an adjudication, and all that matters-

Hany Farid:                   Absolutely.

Preet Bharara:              All that matters is, do people in the public want to believe the thing that they saw or heard, and who the hell is Professor Hany Farid to say otherwise?

Hany Farid:                   That’s right.

Preet Bharara:              Even listening to what you're saying now, it's very complicated, and it all just becomes a bunch of jargon, when people saw these things with their own eyes and heard them with their own ears. There's not going to be a final disposition of the issue in a lot of contexts, right?

Hany Farid:                   I think you're absolutely right. At the end of the day, adjudicating these things in the court of public opinion is incredibly difficult. Who am I? I'm the guy who's part of the conspiracy to kill President Kennedy. Who are you going to believe? Right? These things are very complicated, and it's particularly complicated and messy when we come into this with our preconceived notions.

Hany Farid:                   I'll give you an example of this: when that video of Speaker Nancy Pelosi made the rounds of her purportedly drunk, and the video was just slowed down. This was an easy case because we can go back to C-SPAN and we can look at the original video, and you could see it was slowed down. There was no debate. This was a 100% case. Okay? I just came off saying we almost never say this. This was incredibly easy because the original video was there, game over, no more conversation. For four news cycles, over four days, we were debating this in the media. I still get hate mail from people saying, "You have no idea what you're talking about." I'm like, "Dude-

Preet Bharara:              Why are you covering [crosstalk 00:53:35]?

Hany Farid:                   … why am I having this conversation still?”

Preet Bharara:              This could have happened to me. Somebody tweeted last week that they were listening to my podcast on 0.5 speed, and they paid me the compliment of saying I sounded like a very coherent drunk. I listened to it myself at 0.5, and I don't know if you want to try this at home, folks. I don't know how coherent I sound, but I definitely sound dead drunk. You can put that out.

Hany Farid:                   This was the brilliance of slowing down that video and how effective it was at making her sound drunk. It was really well done.

Preet Bharara:              Photos. Let’s do photos quickly.

Hany Farid:                   Yeah, good.

Preet Bharara:              You have a couple of techniques, which I was reading about and find fascinating, to figure out whether or not an image of, say, two people together has been manipulated, because there might be some incentive for people to show that person A and person B knew each other and were involved with each other. What are some techniques you use there to determine fakery?

Hany Farid:                   That is probably one of the most common types of manipulation, where you want to damage somebody's reputation. People did this to President Obama for eight years, putting him next to other people because they wanted to say that he was involved in some nefarious things. Some of my favorite techniques for that have to do with lighting and shadows. If you imagine two people being photographed in different rooms, or one outdoors and one indoors, then the light that illuminates them is going to be different. We have these really nice techniques that can estimate what the surrounding lighting environment was like for different people in an image and then determine whether those are consistent or not with a single scene.

Hany Farid:                   My other favorite example of that, and this doesn't work in all cases, but when it does, it's very nice, is that if you are being photographed and there is a light in front of you, there will be a reflection of that light in your eye. It's called a specularity, those little white dots. You'll see this very commonly when you're taking a flash photograph: there's a white dot in your eye. The location, and the shape, and the color of those tell you something about the surrounding lighting. If those are inconsistent with two people standing right next to each other, then you have a little bit of a problem.
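
The specularity idea can be sketched geometrically. The code below is a simplified illustration, not the published method: it models the eye as a small sphere, uses the highlight's offset from the eye center to get the surface normal at the reflection, mirrors an assumed straight-on view direction about that normal to estimate a light direction, and compares the directions estimated for two people in the same photo. The pixel coordinates are hypothetical.

```python
# Sketch of estimating light direction from an eye specularity and comparing
# two people in the same photograph. Inputs (eye center, radius, highlight
# position in pixels) would come from manual or automatic annotation.
import numpy as np

def light_direction(eye_center, eye_radius, highlight):
    """Estimate a 3-D light direction from a specular highlight on the eye."""
    dx = (highlight[0] - eye_center[0]) / eye_radius
    dy = (highlight[1] - eye_center[1]) / eye_radius
    dz = np.sqrt(max(1.0 - dx * dx - dy * dy, 0.0))   # sphere: x^2 + y^2 + z^2 = 1
    n = np.array([dx, dy, dz])                         # surface normal at the highlight
    v = np.array([0.0, 0.0, 1.0])                      # view direction (toward camera)
    light = 2 * np.dot(n, v) * n - v                   # mirror reflection of v about n
    return light / np.linalg.norm(light)

def angle_deg(a, b):
    return float(np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))))

if __name__ == "__main__":
    # Person A and person B in the same questioned photograph (hypothetical values).
    la = light_direction(eye_center=(412, 305), eye_radius=9.0, highlight=(415, 302))
    lb = light_direction(eye_center=(780, 310), eye_radius=9.0, highlight=(774, 316))
    print(f"angle between estimated light directions: {angle_deg(la, lb):.1f} degrees")
    # A large angle suggests the two people were lit differently, i.e. possibly
    # photographed in different places -- but beware complex stage lighting.
```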

Preet Bharara:              That’s actually pretty. I don’t want to use the word definitive again, but that’s pretty good.

Hany Farid:                   It's pretty good. I'll give you an example of where you have to be very careful: if two people sitting next to each other have very complex stage lighting, where you have this very narrow lighting focused on the two people in very specific ways, it could look like an inconsistency. But again, when we do this, we don't look at one or two or three things. We find lots of inconsistencies, and together they combine to let us say something reasonably definitive.

Preet Bharara:              Right. Okay, so let's talk about how we solve all this. You have noted, as have others, that there's very little regulation here. Laws don't keep up with technology. That's something that I often say when we're talking about the cyber threat or AI and other things. I mean no disparagement to any particular member of Congress, but they're not so tech-savvy. Some of them are, but most of them aren't. Some of them have never sent an email, so I don't know how up to the task they are. Should any of the things we've been talking about in this conversation be straight-up outlawed?

Hany Farid:                   Let's start by saying there's tension here. There's tension between the first amendment, freedom of speech, and safety, security, and protecting people, and that tension is, of course, sort of what underlies all of this. There's another tension, which is that the core business model of Facebook, and YouTube, and Twitter is engagement: to drive users to the service, to keep them on there for as long as humanly possible, to have them provide data, and then to deliver ads to them. That's the economic tension here.

Hany Farid:                   I think in my mind, there’s no doubt we have to regulate, but we have to regulate lightly, and gently, and thoughtfully, and try to avoid unintended consequences. Let me give you a couple of thoughts.

Preet Bharara:              Oh, good luck. Good luck with that.

Hany Farid:                   Good luck with that.

Preet Bharara:              Lightly, I remember working in the Senate and we did a lot of regulating lightly and gently.

Hany Farid:                   Yeah, I know, I am still spectacularly naïve despite decades of evidence to the contrary.

Preet Bharara:              Give us an example of the Professor Farid law.

Hany Farid:                   Right now, what you should know is that Section 230 of the Communications Decency Act is the law of the land in the technology sector. It says that technology platforms, and that word is very important, are not liable for what their users do on their services. If you are the Facebooks of the world and somebody commits a crime and records it and puts it on Facebook, you're not responsible.

Hany Farid:                   If somebody tells somebody to go commit a crime, you're not responsible. If somebody posts a bomb-making video, and somebody goes off and makes a bomb and kills somebody, you're not responsible. If somebody creates non-consensual pornography to destroy a woman's reputation and posts it on Facebook and Facebook promotes it to you, they are not responsible. This is the gift of the gods to the technology sector. It is, in my opinion, where all the power is. Because if we had the ability in limited cases to sue the Facebooks, and the YouTubes, and the Googles, and the Twitters of the world for really outright malfeasance, well, then we would have a very different technology landscape.

Hany Farid:                   The conversation that we are having on Capitol Hill, in the Senate and on the House side, is: how do we rewrite clauses of Section 230 of the Communications Decency Act to say that this liability protection is a privilege, not a right? That if you knowingly allowed bad behavior on your platform, for example knowingly allowing trafficking in young children, or knowingly allowing child sexual abuse material on your service, you don't get the protection. You are going to be held liable in criminal and civil court.

Preet Bharara:              But knowingly is where the whole game-

Hany Farid:                   Knowingly is the hard part. Exactly, and what's reasonable. The language that is being discussed is that there is a reasonable duty of care, and this is the language that's also being used in the UK and in Brussels, which frankly are much further ahead of us in thinking about regulating tech. We saw the GDPR come out a few years ago on the privacy side. In many ways, I think the Western Europeans are leading on this front and we are playing catch-up. But I do think that it's coming. I testified before Congress a few months back on 230, and there have been many, many conversations since then. I think regulation is coming. There are a few concerns I have.

Hany Farid:                   One is that if we start regulating in this climate now, with the virtual monopolies of Google, Facebook, Twitter, and YouTube, then we are going to stifle innovation, because the little guys coming up are going to have a hard time competing in a regulatory landscape that the big guys didn't have to contend with. We have to figure out how to create carve-outs, and we have to figure out how to regulate just enough to get the companies to be more responsible without becoming overburdensome, and overreacting, and stifling an open and free exchange of ideas and stifling innovation. That's a delicate balance.

Preet Bharara:              Now, you are working with Facebook at the moment, correct?

Hany Farid:                   I have a grant from Facebook to help them contend with the mis- and disinformation problem that they're having on their services. Yeah.

Preet Bharara:              How’s that going?

Hany Farid:                   Here's my view: for a long time, what we heard from Facebook and from Google and from everybody else was, "There's no problem. There's no problem. We don't know what you're talking about. It's fine." Then we heard, "Wow, there's a little bit of a problem, but it's not as big as you think it is." Then we eventually heard, "Okay, there are really big problems, but they're so big, we don't know what to do about it."

Hany Farid:                   But in some ways, this is good news because at least we are now admitting that we have a problem. Then the question is, how do you start solving those problems, particularly when we have been negligent in addressing them for 10, 15 years and we have grown to a scale of billions of users, global services, and we don’t have the culture of regulating.

Preet Bharara:              Yeah. You have the most powerful companies on earth, led by some of the richest people on earth. You said a couple of things that I think are very sobering. One is, with respect to these near-monopolistic platforms, you've said you can't have it both ways. It's incredibly disingenuous for Facebook, and Twitter, and Google to say, "We respect our users' privacy," when their entire business model is violating our privacy. Then you've also said, "The problem I have with the tech companies is whenever they want to do something unpopular that's in their financial interest, they hide behind their terms of service. But when they don't want to do something, say screen for extremist content, they hide behind the first amendment." There's a lot of hypocrisy here.

Hany Farid:                   Yeah. Let me give you a really good example of that latter one that I think is important for people to understand. What you will hear from these big companies is first amendment free speech. First of all, the first amendment doesn’t protect you from Facebook or Google or YouTube. It protects you from the government. It was designed to say that you can say things that are unpopular and you will not get arrested and thrown in jail and killed. This is not a first amendment issue.

Hany Farid:                   I will remind people, by the way, that on Facebook and on YouTube, they routinely ban adult pornography, perfectly protected speech, and they have no problem doing that even though it violates your first amendment rights. Why did they do it? They do it because it’s bad for business, because they know that advertisers don’t want their ads running against sexually explicit content. They have no problem taking down huge amounts of protected speech when it’s in their financial interest. But then when you go to them and say, “Guys, there are images and videos of eight year olds, and four year olds, and two year olds, and one year olds being sexually assaulted on your servers.” They’re like, “Not our business. We respect the privacy of our customers.” You can’t have it both ways. I stand by that quote and I think it is incredibly disingenuous.

Hany Farid:                   The core tension here that you have to understand is that the business model of social media is engagement. It is user-generated content that they are going to monetize, and that runs at odds with the issues that we are talking about: mis- and disinformation, child abuse, terrorism, illegal drugs, illegal weapons, the sex trade, and so on and so forth. Once they open the gates of saying, "Well, we actually can remove this material," then they are going to be responsible.

Hany Farid:                   I'll also remind you, by the way, that YouTube is very good at taking down copyright-infringing material. You know this if you've gone to YouTube and tried to watch a John Oliver clip. And why? Well, because the government said that you can be sued for hosting copyrighted material, because the lobby for the music and movie industry is far more powerful than the lobby that protects children around the world.

Preet Bharara:              Right.

Hany Farid:                   We have mechanisms that are proven to be able to contend with this. The problem is it’s simply not in their interest.

Preet Bharara:              Putting aside the huge platforms, and I get that point, in some ways that's the easiest place to intervene, the intersection where things are being posted and people are consuming them. But for the people at their laptops who are creating some of this fake stuff, what should be the legal repercussions? For example, and I don't even know the answer to this question-

Hany Farid:                   Yeah, I don’t either.

Preet Bharara:              Putting Scarlett Johansson's face realistically onto a pornographic actor's body, is that unlawful? If not, should it be?

Hany Farid:                   Right. This is one of the easier ones. Let's do this one and then we'll get into the political realm in a second. I think you can have a debate about this. I'll tell you where I come down on it: I think it should be illegal. I think the threat to the individual far outweighs any first amendment free-expression interest. It just outweighs it for me, but we should have the debate. But I will tell you that there are several states, including California, that have banned it, but there's a little bit of a catch there, which is that they only ban it if the intent was malicious. There's this funny wording in the legislation that makes it… it's not clear to me that you can actually litigate these things, because you have to show intent. And that, of course, is nearly impossible.

Preet Bharara:              Let's make it an easier question. Let's make it not a celebrity. There's this issue that lots of DA's offices are grappling with: "revenge porn."

Hany Farid:                   Yeah. Right.

Preet Bharara:              I guess that would meet the standard of maliciousness.

Hany Farid:                   Right, exactly.

Preet Bharara:              If you take some non-public figure and use their likeness for some purpose because you have deep fake technology at your fingertips, generally speaking, do you think there should be a serious look at making all those things illegal?

Hany Farid:                   I think there should. You might be able to make a case, as the law carves out for public figures, that they are different than the average citizen. For the average citizen, we will have a different adjudication of these issues.

Preet Bharara:              You still want Jimmy Fallon and Stephen Colbert to do funny things at night?

Hany Farid:                   Absolutely.

Preet Bharara:              Public figures.

Hany Farid:                   We want satire, we want comedy. But that's not necessarily true of the reporter who is reporting on #MeToo, or something that is unpopular, or somebody who simply attracts unwanted attention. I think that we should have a serious debate about it. We should look at the pros and the cons, and we should make a decision as a society. I'll tell you where I come down: I think it crosses a line, and I can tell you, having talked to women who have been the victims of this, it is a very real threat to them. It is life-altering and, in many cases, devastating material. But let's have the debate. I'm open for that.

Hany Farid:                   Now, in the political realm, things get much more complicated, of course, because it's political speech. We want satire, we want to be able to make fun of and criticize and have satirical coverage of our politicians. But we also recognize that if you release a fake video 24 hours before an election, that is not satire. It is not comedy. It is intended to fool people. It is by definition fraudulent. That's very different. How do we define these things? Where the tension now is in the regulatory realm, both at the state and the federal level, at least with the people I've been talking to, is how you define harmful content in the political realm. That, I don't have good answers to, and I don't think anybody does. I think that's what we have to start thinking about.

Preet Bharara:              You’ve done a lot of work in this area. You have an academic and a tech background and you also work in the real world. Why are you so fascinated by this? What drew you into this?

Hany Farid:                   I was originally drawn into this way before I should have been, which was in 1999. This was in the very early days of the digital revolution. What drew me in was seeing, in those early days, how digital technology was evolving and how it was getting easier and easier to manipulate digital media. It's not that I could foresee the future, but we all knew what was coming. We all knew that the digital revolution was here. What I started thinking about was: what happens when we enter a world where everything is digital, everything is malleable, and I can manipulate reality?

Hany Farid:                   I was primarily interested in how this would be managed in the courts. That’s the thing I really worried about. Back to your example, I don’t think I could have predicted that 20 years later, we would be talking about existential threats to our democracy and society.

Preet Bharara:              Well, here we are.

Hany Farid:                   Yeah, but here we are. I don't think that that's an exaggeration. I think that if we can't believe what we hear and see and read online, we have a real problem with our democracy and our society and with being able to interact in a civilized way. I continue to be very interested in the underlying science and technology, but I'm growing more and more concerned about the real implications of a world where anything can be manipulated, which means everything can be fake or nothing is real. Where are we going to be as a society then? I don't know the answer to that, honestly.

Preet Bharara:              Professor Farid, thank you so much for joining us. Really, really-

Hany Farid:                   It’s great to be here, Preet.

Preet Bharara:              … helpful and informative and also entertaining.

Hany Farid:                   Very good to talk to you. Thank you, sir.

Preet Bharara:              Thank you, sir.

Preet Bharara:              The conversation continues for members of the CAFE Insider Community. To hear the Stay Tuned bonus with Hany Farid, and get the exclusive weekly CAFE Insider podcast and other exclusive content, head to cafe.com/insider. Right now, you can try a CAFE Insider Membership free for two weeks at cafe.com/insider.

Preet Bharara:              As you know, there’s so much significant news going on in the country and in the world. It’s hard to keep up. There’s a gathering storm of the coronavirus. There were the actual tornadoes in Tennessee that killed, I think, up to two dozen Americans. Then, of course, there’s all the drama surrounding the election peaking with Super Tuesday yesterday. But I want to end the show this week, instead of talking about any of those things to just mark the passing of a significant person who many of you have heard of, many of you may not have, and it’s a gentleman by the name of James Lipton who died on Monday at age 93 from bladder cancer.

Preet Bharara:              James Lipton was a lot of things. He worked as a scriptwriter, an author, an actor. According to his obituary, he also had a stint working as a pimp in France, colorful stuff. But what James Lipton was most known for, and the reason I knew him and admired him, was that he had a television talk show that was very particular in its focus, and maybe you've seen it. It was called Inside the Actors Studio. On that show, he would interview actors who would talk about their craft, and there wasn't a great, I think, that he didn't have the chance to interview. These were not short interviews. This was not Access Hollywood, this was not Entertainment Tonight. These were lengthy, in-depth, thoughtful, thought-provoking interviews with the most significant and talented acting legends of our time.

Preet Bharara:              There was no pomp or circumstance. The camera setup was simple. He sat with his guests at a table on a stage with no fanfare, and they talked. I mean, it was a little bit like the modern podcast. To give you an idea of how seriously Lipton took every interview, it's reported that each interview went for four or five hours and was then edited down to one hour for television. It's a far cry from the quick two- to three-minute red carpet interviews you see before the Oscars, and the way that we come to know the folks we see on the big screen and on the small screen. There's something about that attention, that affection for the material, that desire to get at the craft of these people who we often read about only in the gossip pages, that I think was so compelling.

Preet Bharara:              I used to watch the show like a religion. Probably in some way that I don’t fully appreciate and never fully thought about until reading of his passing, it probably affected how I decided to do this show. For every guest who appeared on Inside the Actors Studio, James Lipton prepared for two weeks watching all their films, including the ones they may have made in high school, and digging out obscure and unknown facts about their lives.

Preet Bharara:              As the New York Times wrote this week, "It is, in an age that often seems to disdain content, a show about ideas." Here's how Lipton himself described his show. He said, "It is not journalism. It is meant as an antidote to what is normally done with these people. I want to create an environment where people are willing to talk about the craft, not about themselves as people, but as artists." That left, I think, a lasting impression for all of us who watched Lipton, show after show after show. He got people like Dave Chappelle, Robin Williams, Ben Kingsley, Jack Lemmon, and others to open up in ways they had never done before.

Preet Bharara:              A hallmark of the show that I found especially interesting: he would conclude every interview with a series of questions that he borrowed from the French talk show host Bernard Pivot. I had never heard of that guy. But every week I would hear James Lipton say, "And now for the questions from Bernard Pivot: What is your favorite curse word? What is the profession you wouldn't have wanted to practice? If God exists, what would you like to hear him say after your death?" Don't worry, I'm not going to answer them.

Preet Bharara:              But there was an occasion where James Lipton appeared on Bernard Pivot's show and he gave his answers. Question: what is your favorite curse word? Answer: Jesus Christ. Question: what is the profession you wouldn't have wanted to practice? Answer: executioner. Question: if God exists, what would you like to hear him say after your death? Answer: "You see, Jim, you were wrong. I exist, but you may come in anyway." Lipton was an interesting character. He cut an interesting figure on the show interviewing folks. He was the subject of parodies. He had the honor and privilege, I'm sure, of being parodied by none other than Will Ferrell on SNL. But he always seemed to maintain a good sense of humor about it, not take himself too seriously.

Preet Bharara:              Sometimes we don’t fully appreciate the influence that somebody has had on us until we reflect on their passing. James Lipton, rest in peace. Well, that’s it for this episode of Stay Tuned. Thanks again to my guest, Professor Hany Farid.

Preet Bharara:              If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics, and justice. Tweet them to me @PreetBharara with the hashtag #AskPreet, or you can call and leave me a message at 669-247-7338. That's 669-24-Preet. Or you can send an email to staytuned@cafe.com. Stay Tuned is presented by CAFE. The Executive Producer is Tamara Sepper. The Senior Audio Producer is David Tatasciore, and the CAFE team is Julia Doyle, Matthew Billy, David Kurlander, Calvin Lord, Sam Ozer-Staton, and Jeff Eisenman. Our music is by Andrew Dost. I'm Preet Bharara. Stay tuned.