Preet Bharara:
From Cafe and the Vox Media Podcast Network, this is Stay Tuned in Brief. I’m Preet Bharara. Earlier this month I sat down with my friend Nita Farahany, who as longtime listeners will know, is a leading scholar on the ethical, legal, and social implications of emerging technologies. We discussed her new book, The Battle for Your Brain, which grapples with AI and neurotechnology, and how they will impact our rights to privacy, freedom of thought and self-determination. The event was hosted by City Arts and Lectures at the Sydney Goldstein Theater in San Francisco. I found our conversation so thought-provoking that I wanted to share some of it with all of you.
Preet Bharara:
How are you? All right. This is San Francisco, right?
Nita Farahany:
Yes. Yes. That’s where we’ve arrived.
Preet Bharara:
Nita, how are you?
Nita Farahany:
I’m doing well. How are you, Preet?
Preet Bharara:
It’s good to see you.
Nita Farahany:
Likewise.
Preet Bharara:
Congratulations on your book.
Nita Farahany:
Thank you.
Preet Bharara:
The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. Give folks a sense, with respect to the area that we focus on, the legal arena, what kinds of things technology has on the table that are going to cause us, and you in particular, a lot of consternation about how we have guardrails for the ethics of these things, both with respect to brain technology and artificial intelligence. What's possible now? And then we'll break it down.
Nita Farahany:
Yeah. So I'll back up and say what the book is about, just to give a kind of context for the technology that we're talking about. So anybody who isn't living under a rock right now realizes that AI has made extraordinary advances, and recently it's become the topic that most people are talking about. Right? So generative AI, GPT-4, ChatGPT, Bing, all of the different-
Preet Bharara:
Can you just explain what generative AI is? Because people throw that phrase around.
Nita Farahany:
Yeah. So what we have had up until now, other than these more recent large language models, is we’ve had pattern classification. So we’ve had very sophisticated AI that can do things like look at big data sets, and through machine learning algorithms can tell us patterns or identify patterns that we might not have easily seen before. So a good example of that is something like you have a bunch of different radiological images and you want to try to identify the earliest stages of breast cancer in those radiological images. And so you have a huge data set of many millions of images that have been marked by a human who’s gone through them and said, “This one has breast cancer, this one doesn’t have breast cancer.” And you train the AI on that data to be able to then take a new image, look at it and predict whether or not it has breast cancer or evidence of breast cancer.
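A toy version of the kind of supervised pattern classification described here can make the workflow concrete: fit a model on human-labeled examples, then predict a label for an unseen image. This is a minimal sketch with synthetic numbers standing in for real radiological scans; the data, the model choice, and the planted pattern are all illustrative assumptions, not the systems being discussed.

```python
# Train on human-labeled "images," then classify a new, unlabeled one.
# Synthetic 8x8 "scans" with a planted class difference stand in for
# real data; labels 0/1 mean "no cancer"/"cancer" only by analogy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X = rng.normal(size=(1000, 64))            # 1,000 flattened 8x8 "images"
y = rng.integers(0, 2, size=1000)          # human-assigned labels
X[y == 1, :8] += 0.75                      # the subtle pattern to be learned

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on held-out images:", clf.score(X_test, y_test))

new_image = rng.normal(size=(1, 64))       # a brand-new, unlabeled "scan"
print("predicted label:", clf.predict(new_image)[0])
```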
Generative AI is, instead of just doing that basic pattern classification, predicting the next step. And so the simplest way, as I try to describe this to my daughter, is if I say to her, 1, 3, 5, what comes next, she can say seven-
Preet Bharara:
Six.
Nita Farahany:
That’s right. Yeah. You’re very good at this Preet. We were working through that.
Preet Bharara:
It’s not six?
Nita Farahany:
Later we're going to do pattern recognition for you and we're going to put you through those. I understand that GPT-4 has passed the bar exam. Happily, there weren't number sequences on it, I guess, for you, and you made it through, but-
Preet Bharara:
I got that question wrong.
Nita Farahany:
Yeah. Tough.
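To make "predicting the next step" concrete, here is a toy generative model in the same spirit as the 1, 3, 5 example above: a bigram word model that generates text by repeatedly sampling whatever tends to come next. It is a deliberately tiny stand-in for what large language models do over tokens at vastly larger scale, and the corpus is invented for illustration.

```python
# A miniature generative model: count which word tends to follow which,
# then generate text by repeatedly sampling a plausible next word.
import random
from collections import Counter, defaultdict

corpus = ("the battle for your brain is a battle for "
          "your freedom of thought").split()

next_counts = defaultdict(Counter)          # bigram table: word -> next-word counts
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1

def generate(start, n=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = next_counts.get(out[-1])
        if not options:                     # no observed continuation: stop
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])  # sample the next step
    return " ".join(out)

print(generate("the"))
```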
Preet Bharara:
Not only. So just so people know, it didn't just pass the bar exam. What else has it passed?
Nita Farahany:
I think it passed the medical boards, and it passed the sommelier exam, I think. And for that one, I wonder, wait-
Preet Bharara:
Well how does that work?
Nita Farahany:
Exactly. I really wonder about that one because last I checked, it can't drink yet, and so-
Preet Bharara:
Yeah.
Nita Farahany:
I’m not sure.
Preet Bharara:
Well, you've got to be 21 years old, AI.
Nita Farahany:
It’s a written test, right? There you go. It’s got a while.
Preet Bharara:
AI is too young to be drinking.
Nita Farahany:
But it’s a written test I think, rather than actually drinking. And anyway, that one’s the one that surprised me the most. And I think when somebody told me that, I said, “Yes, but can it drink and enjoy the wine?” I mean, that’s the real question.
Preet Bharara:
So talk about some of the things that AI and neurotechnology can do.
Nita Farahany:
Well, so let me get to the neurotech part of it, right. So given that you can do this, which is, there's pattern classification, it's been used in a lot of different contexts that we're going to talk about. Generative AI enables everything from the generation of new text to new content: you can have it generate an email for you, generate a pitch for you. I've done a deep dive with GPT-4 where I've basically had a private tutor through every philosophical theory I wanted to go through. You can have it have these dialogues with you.
Preet Bharara:
How much of your book was written by generative AI?
Nita Farahany:
It was written before. I mean, maybe my next book, but this book was not. And okay, so generative AI. Neurotechnology for a very long time has been under development to try to decode what's happening in the human brain. And the most complex form of this is something like functional magnetic resonance imaging. These are giant MRI machines that somebody goes into, and you're trying to look at their brain activity and see, basically, blood flow through the brain from one region to the next while they're doing something like looking at images.
Most recently, an extraordinary pairing of generative AI plus fMRI: some researchers in Singapore put somebody into an fMRI machine, but trained the pattern classifier, the algorithm, to decode what the person was seeing using generative AI and Stable Diffusion. What they did was basically label things with both text as well as the images that a person was seeing. And what they showed was they could show a person, like, a picture of a puppy dog or a teddy bear, and then decode from their brain activity a puppy dog or a teddy bear. And it was much better than past decoding.
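The decoding pipeline described here can be sketched in miniature. This toy version learns a linear map from simulated "voxel" activity to an image-embedding space, then identifies a held-out stimulus by nearest neighbor among candidate embeddings. The real studies instead condition a generative model like Stable Diffusion on the predicted embedding; every number and label below is a made-up stand-in, not the published method.

```python
# Toy fMRI decoding: learn voxels -> image-embedding, then decode a
# held-out trial by nearest neighbor. All data are synthetic; the
# "puppy"/"teddy bear" labels just echo the example above.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_trials, n_voxels, emb_dim = 200, 500, 32

true_map = rng.normal(size=(n_voxels, emb_dim))        # unknown brain "encoding"
embeddings = rng.normal(size=(n_trials, emb_dim))      # embedding of each viewed image
fmri = embeddings @ true_map.T + rng.normal(scale=2.0, size=(n_trials, n_voxels))

decoder = Ridge(alpha=10.0).fit(fmri[:150], embeddings[:150])   # train on 150 trials

candidates = {"puppy": embeddings[150], "teddy bear": embeddings[151]}
pred = decoder.predict(fmri[150:151])[0]               # decode one held-out trial
guess = min(candidates, key=lambda k: np.linalg.norm(candidates[k] - pred))
print("decoded image:", guess)                         # expect: puppy
```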
Preet Bharara:
So no words spoken?
Nita Farahany:
Right.
Preet Bharara:
That’s reading a thought.
Nita Farahany:
Well, it’s reading. So when people say are we at mind reading?
Preet Bharara:
That’s a thought, isn’t it?
Nita Farahany:
Well. I mean I’m a philosopher too, so what is a thought, right? But-
Preet Bharara:
Oh my god, this is going to be a long evening.
Nita Farahany:
It's going to be… It is decoding things in your brain. And if you want to think of the images and the words, yeah, we can decode, I can't, but scientists can decode some of that activity. What this book is about is not about Neuralink, which is Elon Musk's company that is trying to implant electrodes into the brain to read brain activity and change brain activity. Or even Tom Oxley's company Synchron, that goes up through the jugular vein and puts neurotechnology inside the body. It's about everyday sensors that people are using, like heart rate sensors and Fitbits and Oura Rings that track their everyday activity, now applied to brain activity.
And what's coming, what's already here but hasn't come at scale yet, is the embedding of brain sensors, to pick up electrical activity in the brain, into earbuds, headphones, watches, small wearable tattoos, so that brain activity can be picked up as incidentally as all of the rest of the physical information from our body has been picked up, and then decoded using AI. And with those sensors, you can't pick up the puppy dog and the teddy bear, but you can pick up a lot from the brain already, and already it's being applied in a lot of different contexts.
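For a sense of what "picking up electrical activity" looks like in practice, here is a hedged sketch of the classic first step such devices take: reducing a short window of raw signal to power in the standard frequency bands, which downstream classifiers (attention, drowsiness, and so on) are then trained on. The signal is simulated, a 10 Hz alpha rhythm plus noise, and the sampling rate and band edges are common conventions, not any particular vendor's pipeline.

```python
# Summarize a 2-second window of simulated EEG as band power.
import numpy as np

fs = 256                                    # samples per second
t = np.arange(0, 2.0, 1 / fs)               # a 2-second window
rng = np.random.default_rng(2)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2    # power spectrum of the window
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    power = spectrum[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{name:>5}: {power:.3e}")        # alpha should dominate here
```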
Preet Bharara:
We've talked about one example of that… Which is, you put an object in front of someone, or a face or something else, and you can tell, with some degree of certainty, whether the subject recognizes the face, and what implications that has for, among other things, testimony at trial, criminal law enforcement, et cetera.
Nita Farahany:
So there are a number of examples in your book that are a lot of fun for me as somebody who’s been thinking about neurotechnology and you have a lot of lines you drop in the book about credibility determinations of witnesses or credibility determinations of defendants. Or you say things like, since we can’t really know what another person is thinking-
Preet Bharara:
Yeah, apparently we can’t.
Nita Farahany:
… what's really on their mind, right? So let's talk about the P300, and then I want to talk about the Menendez brothers case in your book.
Preet Bharara:
All right. We’ll talk about your thing first.
Nita Farahany:
All right. I'll talk about this first, which is, so there's a technique that a number of law enforcement agencies are using across the world, and it actually has an interesting… It all comes back to Trump, which is, there's a company called Brainwave Sciences, and on the board of Brainwave Sciences was Michael Flynn, and so was an ex-KGB spy, and that was one of the original Russia connections that-
Preet Bharara:
Should we invest in that company?
Nita Farahany:
No, I mean, I don't know. I'm not the person to ask for investment advice. I'm a law professor. Don't take investment advice from me. But anyway, so go back a couple of decades. There was a researcher by the name of Larry Farwell who was curious about this signal in the brain, this P300 signal, which signals certainty or uncertainty. At least that's how it had been developed, which is, you're certain that something is going to be flashed in front of you, a light or a sound is coming, and it registers in your brain before you're consciously aware of it.
So you can decode this recognition signal in your brain. And he wondered whether he could develop that technology for application to people who had committed a crime or who were involved in a crime. And so he developed what he called a series of probes, where he would work with law enforcement, figure out something in the file that had not been disclosed to the general public, and then would expose the person to an image or scene or words, and then measure the P300 response to see if it signaled recognition or not. And then the idea was, if it signaled recognition of details that they shouldn't know about, that could be relevant for using it in a criminal case. And it was tried out. I mean, it was funded and invested in by the CIA and by the FBI, and they did a big report here in the United States, and they decided it wasn't reliable enough to use in law enforcement in the US, even though it did end up getting used by a few criminal defendants who introduced it here.
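A simplified sketch of the probe logic described here: average many stimulus-locked EEG responses and compare the amplitude around 300 milliseconds for probe versus irrelevant items. Everything below is simulated, and real protocols involve far more (artifact rejection, target stimuli, statistical tests); this only illustrates the comparison at the heart of the technique.

```python
# P300 "concealed information" comparison on simulated epochs.
import numpy as np

fs, n_trials = 250, 60
t = np.arange(0, 0.8, 1 / fs)               # 0-800 ms after stimulus onset
rng = np.random.default_rng(3)

def simulated_epochs(recognized):
    # A recognized item adds a positive bump peaking near 300 ms.
    p300 = 4e-6 * np.exp(-((t - 0.3) ** 2) / 0.005) if recognized else 0.0
    return p300 + 8e-6 * rng.normal(size=(n_trials, t.size))

probe = simulated_epochs(True).mean(axis=0)        # avg response, crime detail
irrelevant = simulated_epochs(False).mean(axis=0)  # avg response, neutral item

window = (t >= 0.25) & (t <= 0.45)          # the P300 window
print("probe amplitude:     ", probe[window].mean())
print("irrelevant amplitude:", irrelevant[window].mean())
# A markedly larger probe amplitude is read as "recognition."
```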
Fast forward, Larry Farwell sells the IP to Brainwave Sciences. Brainwave Sciences, with Michael Flynn and his consulting companies, starts selling it to law enforcement worldwide. And a bunch of different law enforcement agencies worldwide have used this to interrogate criminal suspects, to see if they have recognition memory, and then have convicted them based on what their brain-based data has shown. And there's all sorts of reasons-
Preet Bharara:
Is that a good thing?
Nita Farahany:
No.
Preet Bharara:
Why not? But how reliable is it?
Nita Farahany:
That’s the problem, right? I mean, so you talk a lot in your book about the worries about unreliability of evidence and whether or not we can trust the data. And one of the biggest problems with it is that the development of the probes is more of an art than a science. And so it’s very difficult to replicate. And there have been challenges in replicating the development of those probes.
Like, how do you figure out something that truly… If I show somebody a knife and their brain signals recognition of that knife, how do we know that it's not similar to a knife that they have at home? What is it that really makes it unique, so that we can say that recognition is something salient and meaningful about the crime? There's a lot of problems like that. There's some replication that's occurred. There's some researchers in Australia who've been working on it. There are other signals in the brain that people are starting to develop. But in something like high-stakes decision making, and you hear this a lot with AI right now, applying tools like AI and neurotechnology to high-stakes decision making without fully understanding what it is that we're decoding, and over-trusting and over-relying on it, I think is deeply problematic.
Preet Bharara:
So it seems to me the easier situation to deal with is when you have technology that's not fully reliable, in the sense that courts will not allow it to be admitted. People will have a healthy skepticism of the results that are obtained. The harder question is when you develop technology that's foolproof, but that goes to this reading of brain activity, which in some circumstances you would say, well, that's better because it's a hundred percent foolproof, but it's really not. And this you talk about a bunch in your book, and you said you're a philosopher, so this is the philosopher in you. Let's say you develop a technology that scientists have a consensus about, experts have a consensus about, that it's a hundred percent on recognition. So when a witness comes and testifies, I was shown a photo array, that's what people do in police departments all over the country all the time.
Looked at six faces. The one that robbed me is face number two. And you can check that through this technology that I'm hypothesizing about. If you see the neurons fire on person two, it seems like you're corroborated. If it doesn't fire on two and it fires on four, maybe the witness is lying. How do you use that technology in court in a way that is fair? Because as we've also talked about before, the criminal justice system is not only about truth, it's about fairness. And sometimes it's the case that the truth is that someone has committed the crime, but if the investigation was not done properly or the confession was coerced, that person goes free. Because it's not only about the truth, it's about the fairness of the process. How do you think about the use of that kind of technology to put people in prison?
Nita Farahany:
It raises a lot of interesting questions. I haven't forgotten about the Menendez brothers. [inaudible 00:14:41] We're going to come back to the Menendez brothers, but it raises another example in your book. So let's back up for a moment and think about the perfect technology, which is what you're inviting me to do.
Preet Bharara:
The perfect technology is the scarier technology.
Nita Farahany:
Well, they’re both scary, right?
Preet Bharara:
Yeah.
Nita Farahany:
So there's the imperfect technology that we over-rely on and use for high-stakes decision making, because we have a tendency to trust or over-trust technology and think that it's the truth, and it happens in a black box. So there's a lot of AI that's being used already in the criminal justice system to give recommendations about the likelihood of a person committing a future crime, for example.
Preet Bharara:
Oh, we're going to talk about Minority Report too.
Nita Farahany:
We're going to talk about all these things. But wait, so we've got COMPAS and the other technologies, which are giving recommendations based on a set of factors that are opaque to a judge. And what we found is mostly happening is, when those recommendations are being given to a judge, if it aligns with their own perception of what they think about the defendant, they're following the recommendation. And when it doesn't align with their own perception, they're not following the recommendation. So it feeds into the confirmation bias that you talk about in your book. But if we imagine that it's perfect, okay, perfect prediction of the likelihood of committing a future crime, or a perfect… We're decoding a-
Preet Bharara:
Well, just use the example I gave you. Just recognition technology, a hundred percent.
Nita Farahany:
Yeah. But I mean, what are they recognizing? What does that mean? So you recognize a crime scene in detail, what does that mean?
Preet Bharara:
Well, the example I gave was you recognize the face of the person. The brain activity shows that the face you picked out is the one that the brainwave activity-
Nita Farahany:
But you recognize it, right?
Preet Bharara:
Yeah.
Nita Farahany:
But what does that mean that you recognize it?
Preet Bharara:
Maybe it’s in the contrary example, which is you picked the person out. I mean, again, I-
Nita Farahany:
Okay. Well, so I’m going to say there’s several problems with your example.
Preet Bharara:
Yeah, no, but-
Nita Farahany:
I’m going to resist the hypothetical as any good student would, right?
Preet Bharara:
Philosophers use hypotheticals a lot, I thought.
Nita Farahany:
And students resist them all the time.
Preet Bharara:
But you’re not the student here.
Nita Farahany:
Right. But I do have students, I’m used to what they do in response.
Preet Bharara:
I guess what I'm getting at is this, and this is something you talk about in the book. That is, suppose you have technology that can be probative in some… That can give you some probative evidence in some way that comes from your brain. Our legal system is not constructed at the moment to deal with that. And this is, I know, an issue that's on your mind as a philosopher and an ethicist and a lawyer. And if you have technology that can give you information from someone's brain, is that speech, or is that some other physical thing, like blood? So for example, a judge can order someone to be fingerprinted, or to take blood from them, to help incriminate them in a crime. A judge cannot, in this country, order someone to speak. So brain activity is which?
Nita Farahany:
I mean, it's a great question for which there is not a great answer. So this book is designed to do two things. One is to help people understand that neurotechnology is here, that it's about to go mainstream, that it's going to be part of our everyday lives, and it's going to raise these complex questions about, do you have any right to mental privacy? Do you have a right to cognitive liberty? Do you have a right to keep your thoughts private? And if your thoughts, or your recognition memory, or whatever we want to call it, could be perfectly reliable in cases like that, is there a societal interest that would justify intervening in your brain? And as to the question of what it is right now: a decade ago, I started with a series of law review articles that were looking at this question of, if you could use the brain-based reactions of a person as evidence of a crime, would you have a right against self-incrimination? Would you have a right under the Fourth Amendment, search and seizure? Would it be an unreasonable search and seizure?
Preet Bharara:
So where do you lean?
Nita Farahany:
And the answer, I believe, is no, you wouldn't have protection. That is, if you read current law and current-
Preet Bharara:
You wouldn’t or you shouldn’t? You wouldn’t.
Nita Farahany:
You wouldn’t but you should.
Preet Bharara:
Under current law you should?
Nita Farahany:
Yes.
Preet Bharara:
Okay.
Nita Farahany:
Yeah. So you wouldn't, because we treat real physical evidence from the body, like blood, or even functional things that we can make you do, like give a handwriting sample or give a voice exemplar. We treat those in law-
Preet Bharara:
All those things can be compelled under the Constitution at this moment in this country, everywhere.
Nita Farahany:
That’s right.
Preet Bharara:
So this is a new thing and as a general matter, do you think that the law keeps up with technology? I’m guessing the answer is no.
Nita Farahany:
No, no.
Preet Bharara:
And-
Nita Farahany:
Do you?
Preet Bharara:
I do not.
Nita Farahany:
Yeah.
Preet Bharara:
Whether you’re talking about the… I mean, we still can’t keep up with the internet.
Nita Farahany:
Right. Yeah.
Preet Bharara:
And that's been around for a while, much less AI and neurotechnology.
Nita Farahany:
But let’s compare it for a moment. Let’s try something fun here. I want to go back to the Menendez brothers.
Preet Bharara:
Oh, you're not having fun up to this point?
Nita Farahany:
I know. No, we're going to have fun now. So I bet some people here remember the Menendez brothers case. One of the reasons I found it striking in reading about it in your book was your personal connection to it, which makes you sound more guilty than you are.
Preet Bharara:
I had nothing to do with the-
Nita Farahany:
We’re going to get to your connection with it but it was-
Preet Bharara:
And don’t attach something to my brain to find out if I’m lying.
Nita Farahany:
So I raise it, and I want you to tell the personal story, but I raise it because one question that I think is really worth exploring with respect to neurotechnology and AI is, comparing it to what, right? And so comparing it to the human judgments we make in the criminal justice system about other people, is it better, is a question that a lot of people ask. And is that a justification for using it? So first, for people who don't know the Menendez brothers case, tell them about it, but also your connection, I think, is a really interesting one.
Preet Bharara:
Well, some people may… It’s the first chapter of my book Doing Justice. When I was thinking about writing the book, I was writing only about experiences that I’d had as the US attorney and as a prosecutor. And I remember at some meeting with publishers, we were trying to promote the idea of my writing a book. Someone said, “Is there some story from your life before you became a lawyer that informed how you think about your job or thought about your job?” And I remembered in the moment that when I was 19 years old, between my junior and senior year of college, I got a phone call out of the blue in the summer from my best friend from high school, Jessica, who for a long time had told me about her family’s best friends, Jose and Kitty Menendez, who they had lived with in New York City.
And then the Menendezes had gotten very successful and moved out to Beverly Hills and were involved in all sorts of activities. They had two handsome and really smart sons: Lyle, and I'm forgetting the other son's name.
Nita Farahany:
Erik.
Preet Bharara:
Erik. Lyle and Erik.
Nita Farahany:
Yeah.
Preet Bharara:
Jessica had a crush on one of them. And I'd heard about this family for a long time, and she called me and she's crying, and she's telling me that the parents had been killed, and not just killed, but massacred, brutally shot with shotguns, bodies mutilated by the gunfire. And there was a lot of speculation about who might have done it. And every few months I would get a call from her as there were more developments in the case. And she called me, I remember, a few days later to tell me about the moving ceremony at the funeral and how one or both of the brothers had spoken and gotten choked up.
And then I remember she called me when the brothers got arrested in the murder of their parents. And we spent the whole night talking. And she’s like, “It was not them. It couldn’t have been them. I’ve known them for my whole life. I’ve been around them. I’ve seen them. I was at the funeral.” And then of course, some months later, the defense that they decided to proffer was self-defense, claiming they had been abused, basically admitting that they had done the thing that my friend Jessica and others thought was inconceivable. And that’s one of the premises.
Nita Farahany:
And you say you even believed her.
Preet Bharara:
I believed her.
Nita Farahany:
Right.
Preet Bharara:
And so-
Nita Farahany:
You trusted her judgment for a while.
Preet Bharara:
Yeah. Because you think you know a person, and she knew them. I mean, she grew up with them, and she was wrong, and she was wrong in a really profound and dramatic and upsetting way, in the context of the most brutal thing that you can see, the most brutal crime you can imagine: the killing of members of your own family, something you would think you would detect. Now in retrospect, I didn't put this in the book, and maybe there's some distortion here because-
Nita Farahany:
They’d killed the puppies along the way.
Preet Bharara:
Well, she remembered things later.
Nita Farahany:
Like that. Oh Gosh.
Preet Bharara:
Not killing the puppies, not killing the puppies. But she then remembered things that seemed more significant, that suddenly seemed more significant. She never saw any actual abuse. She never saw any tendency towards patricide, et cetera. But she saw things and that’s part of the same story though, that-
Nita Farahany:
Later we try to fill in the gaps and see-
Preet Bharara:
You see what you see, and maybe the after-the-fact remembrance of weird moments was right and should have been paid more attention. Or maybe it's trying to explain post hoc the thing that she missed. I don't know. You're the philosopher.
Nita Farahany:
Well, so first of all, memory is far from perfect. And the idea that neurotechnology could somehow decode what's in your brain and what you remember presumes that memory is perfect and that our brains act like video recorders, which they don't. So when she's then later remembering things, that could be constructed to fit her new narrative. But one of the things that struck me about that case, besides the fact that you can be so wrong in your perceptions of another person, is that a lot of people who talk about AI tools or talk about neurotechnology and their use in the criminal justice system think that it will bring us objective truth.
And that seductive allure of the truth, of actually being able to know another person, justifies breaching what I think of as our last bastion of freedom, our last bastion of privacy, which is our brains, in order to know the truth. And I wonder, for you as somebody who writes a lot about truth, that what we're trying to find is both justice and truth: do you think that's compelling? Is that a good enough reason to peer into other people's brains?
Preet Bharara:
I don't think so, because I'll answer the question that you asked. I think a certain kind of information you get from the brain is speech and should be protected. In my law school seminar, I will often ask the question, what do you think is the most protective constitutional provision for defendants? And you could say it's the Fourth Amendment right against unreasonable search and seizure. You could say it's the Sixth Amendment right to counsel, speedy trial, trial by jury, indictment only by grand jury. There are all sorts of rights in the Constitution, in the Bill of Rights. I think the most protective is… And you know this is true if you have children who misbehave: the Fifth Amendment right against self-incrimination.
Nita Farahany:
Yeah. Right.
Preet Bharara:
Right. If you can compel someone to speak and to answer your questions, if they are lying, you don't even need a lie detector and you put them… Especially if you put them under oath and under pressure and you grill them about whether or not they took the cookie, you're going to get them, right? Lots and lots of people go unprosecuted because the system that we have is very protective of defendants' rights, as it should be, I think. Just because you might get to more truth by compelling cross-examination or examination of a person who's alleged to have committed a crime, to me that principle applies both in the trial context with respect to testifying and also in the brain context. Wouldn't you agree?
Nita Farahany:
It would. It reminds me of The Minority Report and also the example in your book. So are you a Tom Cruise fan?
Preet Bharara:
Yeah, he runs really well.
Nita Farahany:
Yeah. So I love the movie Minority Report. I also think it’s amazing that Tom Cruise hasn’t aged in the entire time that I have been aging. I think that’s amazing.
Preet Bharara:
He runs well in that movie too.
Nita Farahany:
Yes. But Minority Report is… The first day of criminal law every year, I tell my students, "If you haven't watched Minority Report…" At this point, most of them haven't watched it, and so I start by saying, "Look, it's pivotal to a lot of the conversations we're going to have in this class, because the premise of it was that there were these precogs, these beings, humans, that could see the future. And they were being used to inform this pre-crime unit about the future commission of crimes. Then people were being arrested for the future commission of violent crimes, and there were no longer any violent crimes within the district where they had deployed this." And I say, "You need to read that, both because it's technology that is relevant to all of my work with respect to neurotechnology, but it's also… It raises for us complex questions about when do we arrest a person? When do we arrest people? How good does our evidence need to be before we actually decide to intervene, especially if we could prevent significant harm?"
And so there was an example in your book that to me is just fascinating as an example of this kind of thought crime, and the extent to which… Like, how certain do you need to be? And could neurotechnology and AI make you that much more certain, so that you could more comfortably arrest people and put them away before they ever commit the crime? But you have to tell the story, I can't reconstruct it from your book. The gruesome details, especially, you can leave out some of them, but you have to give some of them.
Preet Bharara:
Well, Minority Report is different in the sense that as… And I rewatched the movie recently, given our conversations.
Nita Farahany:
Good.
Preet Bharara:
And by the way, the only thing that the Precogs can see, and I'd forgotten this, is homicide. So homicide in that district in six years goes to zero. So-
Nita Farahany:
They can see violent crime. They can’t see-
Preet Bharara:
They don’t see fraud.
Nita Farahany:
Right.
Preet Bharara:
No fraudsters go free.
Nita Farahany:
Yeah.
Preet Bharara:
You hear that, Silicon Valley? But in the Minority Report cases, they're not developing any other evidence and then getting confirmation from the Precogs. They just get a vision one day, the ball comes up, and it says, "This person is going to commit this crime against this victim." And they're identified. And then Tom Cruise runs and stops the crime. The case you're talking about gained a lot of notoriety in New York when I was the US attorney. A young woman, recently married to a man, was concerned about… was suspicious of her husband's activity. He would be downstairs till late at night on his computer, came up late. She thought that maybe he was going behind her back. She put surveillance software on his laptop-
Nita Farahany:
Let's pause there for a moment because it's about to get… Well, I think he's a real jerk. But I mean, she put surveillance software on his computer.
Preet Bharara:
Yeah, that’s wrong.
Nita Farahany:
Yeah. But-
Preet Bharara:
But she goes down to check when he goes to work one day, and as I write in the book, it's much worse than his having an affair. And what could be worse than that? Well, she sees all these chats and communications between him and other people about specific women, including her, whom he claims in the communications he wants to rape, kill, mutilate, and consume. The New York Post, when we end up making the arrest, referred to him as the Cannibal Cop. He was on the cover of the New York Post for four days.
Nita Farahany:
Because he was a cop, which is I mean-
Preet Bharara:
So the wife leaves, goes to Las Vegas to live with her father. They call the FBI. There's a connection, it lands in our office. I get told about the case. And remember, at that moment, he hasn't necessarily done anything. He hasn't raped anyone. He hasn't killed anyone. He hasn't stabbed anyone, and he hasn't done the other thing. And the question is, how do you lead the investigation? So we planned all sorts of surveillance. We did searches. We found that, among other things, he had actually searched for how chloroform works on the internet. We found out that he had staked out the homes of people that he mentions in the chats, to whom he wanted to do these heinous things. And the investigation proceeds just a little bit. And suddenly he puts in with his supervisor for vacation. He's going to go away for 10 days. And the question is, do we arrest him now, or do we continue the investigation and be assured that we have… that we can keep eyes on him?
And the FBI was very confident it could keep eyes on him, and we were less confident, not because we don't trust the FBI, but imagine, the title of the chapter is "God Forbid," that voice in your head if you're responsible for public safety: God forbid we lost eyes, and on day four of his vacation, he kills someone and does terrible things. So we arrested him prematurely. And I guess I understand why you led in with Minority Report. And so the question at trial became, was it a fantasy or had it graduated to reality? And at the end of the day, based on these other things, like the staking out and the nature of the conversations and the specificity of them and the chloroform search, it was not a ton, but it was sufficient to get 12 people to convict him beyond a reasonable doubt. And then the trial judge sometime later overturned the conviction. And I disagree with that ruling, but I get it. And so it's hard to know. If I had precogs, I would've been much more confident about the case.
Nita Farahany:
Well, and I wonder. So in my book, The Battle for Your Brain, I give examples of the ways in which neurotechnology is already being used in the workplace, and by governments, and by individuals who are using it for their own benefit. I wonder about the extent to which people are wearing everyday neurotechnology, right? In the same way that already data from Fitbits and data from Apple Watches, or if you watched any aspect of the Murdaugh case, so much of the evidence that was used was digital data from everyday devices. I wonder the extent to which people are going to become comfortable with the predictive capabilities, whether it's of thought crime, or of AI scanning everything and predictive algorithms interpreting and creating psychographic profiles of people. At what point do you decide that a person is dangerous enough, spending time on subreddits, spending time fantasizing about horrific things, and showing neurological patterns of violence and AI predictions of a likelihood of committing a crime, why wait? Why wait until they actually take the step? God forbid.
Preet Bharara:
Yeah.
Nita Farahany:
Right.
Preet Bharara:
Because you also believe in free will. And we also need a certain standard of proof, because you don't really know until you know. I want to get to a couple of things before we open it up for questions that have been on my mind to ask you, from the book and otherwise. So first, with respect to ChatGPT, GPT-4, AI, all of that, I'm really curious to know what you think about the recent call by a number of people, who are either experts or self-described experts, who are worried about where all this is going and how quickly it's getting there, and have called for a pause for some period of time. What is the great worry? Is it justified? And what do you think about the call for a pause?
Nita Farahany:
I think it’s right for us to have concerns about the rate at which AI is developing and being applied in high stakes settings without having good insights into what’s happening with the technology and where it can go. I don’t think a pause is the answer. First of all-
Preet Bharara:
Full stoppage?
Nita Farahany:
No. Mm-mm.
Preet Bharara:
No.
Nita Farahany:
No. I don’t think that’s right.
Preet Bharara:
Full speed ahead.
Nita Farahany:
No, no. There’s this idea of prudent vigilance-
Preet Bharara:
40 miles an hour?
Nita Farahany:
Yeah, about 35. 35. No, I mean, there's this idea of prudent vigilance, which is, we know what a lot of the immediate risks are. We've known about a lot of the immediate risks for a very long time.
Preet Bharara:
Like what?
Nita Farahany:
Like algorithmic bias. So with systems that are trained on data, garbage in, garbage out: biased data sets mean that you're going to have biased predictions.
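A tiny, assumption-laden demonstration of the garbage-in, garbage-out point: if the training labels under-record positive outcomes for one group, a model trained on them dutifully reproduces the skew. The data and group structure below are entirely synthetic and exist only to illustrate the mechanism.

```python
# Two groups with identical underlying "merit," but one group's positive
# outcomes were under-recorded in the training labels. The model learns
# the bias and reproduces it in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 4000
group = rng.integers(0, 2, size=n)          # two synthetic groups
merit = rng.normal(size=n)                  # what we actually want to predict

true_label = (merit > 0).astype(int)
recorded = true_label.copy()
flip = (group == 1) & (true_label == 1) & (rng.random(n) < 0.5)
recorded[flip] = 0                          # biased record-keeping for group 1

X = np.column_stack([merit, group])
clf = LogisticRegression().fit(X, recorded)

pred = clf.predict(X)
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: predicted positive rate = {rate:.2f}")
# Same merit distribution in, very different predictions out.
```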
Preet Bharara:
But that’s not what they’re talking about. Are they talking about an Armageddon?
Nita Farahany:
Well, they’re talking about existential risks of AI destroying humanity?
Preet Bharara:
Okay.
Nita Farahany:
Yeah.
Preet Bharara:
That seems like a big thing.
Nita Farahany:
Does seem like a big-
Preet Bharara:
You’re not worried about that?
Nita Farahany:
I mean, today, no. Do I think a six-month pause is going to prevent that from happening? No.
Preet Bharara:
Am I right that this arms race is happening because it's been unleashed? People have a lot of interest. There's a gigantic, accelerating rate of adoption. And there's also, dare I say it, lots of money to be made.
Nita Farahany:
All of those things are true. There’s also lots of benefits to be had and there’s lots of problems with the technology today. For example, you and I were playing with it and we decided to ask it questions about you.
Preet Bharara:
Oh, AI lies.
Nita Farahany:
Yes. It lies. Right? And it like-
Preet Bharara:
ChatGPT lies, it broke.
Nita Farahany:
Came up with… I don’t remember what it was, but it just came up with fake-
Preet Bharara:
It lied about me.
Nita Farahany:
Yeah, it did. It came up with fake things that people had said about you that they hadn’t said about you. And we looked into it, it didn’t exist.
Preet Bharara:
I think it said something like, Ben Bernanke wrote a book in which he said that Preet Bharara is an SOB.
Nita Farahany:
Right. Something like that.
Preet Bharara:
And I was very upset.
Nita Farahany:
Yes. And we looked into it, and it wasn't true.
Preet Bharara:
No, it’s totally made up.
Nita Farahany:
Yeah, totally made up.
Preet Bharara:
No, he thinks I’m great.
Nita Farahany:
We know about… Right. Maybe, I don’t know. But I mean, maybe it’s generative, right? So it’s telling you the next thing he’s going to say.
Preet Bharara:
It's so random, Ben Bernanke.
Nita Farahany:
Maybe that's like a book that he's actually going to write. It's the next thing-
Preet Bharara:
I guess.
Nita Farahany:
It's coming. But my point is, when it comes to… We know what a lot of the problems are, and we ought to be doing things to address those problems today, from the falsehoods to problems like that. I worry less about the existential destruction of humanity from AI than I do about manipulation, mental manipulation. And the risk is that so many technologies today are really designed to hack into our shortcuts in thinking, or to hack into our cognitive biases. For example, a clickbait headline plays to your brain by your… You look for novel stimuli, and so you're trying to figure out, amongst all the things that are coming at you, which one is the tiger. And so you make your headline look like the tiger, and your brain turns its selective attention to it. Or subtle things that I worry about with generative AI. A lot of people thought that was a real picture of the Pope that was floating around, or a politician being-
Preet Bharara:
With the jacket?
Nita Farahany:
With the jacket. Yeah.
Preet Bharara:
That wasn’t real?
Nita Farahany:
No, that wasn’t real. We’ll talk about it later. Right? But the things you can do with generative AI to subtly change images so that you trust the images. These are the things I worry about more.
Preet Bharara:
But there's a twofold worry about that. It's, A, that you're going to believe something that's fake is real. But there's also, and people have been talking about this, the danger that comes from being able to say that something that's real is fake. So the example given in recent times is, remember the Access Hollywood tape about Donald Trump a few years ago? He got elected anyway, but he admitted and conceded that it was him. He apologized. Today that comes out… Maybe not today, maybe in a few months, but I think maybe even today, something bad comes out about someone, you say that's a deepfake. Is that what-
Nita Farahany:
Right. Right now you can have other points of corroboration. You're a prosecutor, you need different corroborating evidence.
Preet Bharara:
But there’s going to come a point-
Nita Farahany:
There will be, I agree.
Preet Bharara:
… where nothing… You’re not going to be able to certify that some video of you or me or Barack Obama or someone else is true or false. Correct?
Nita Farahany:
True.
Preet Bharara:
So what is a world in which even what you see with your own eyes is not verifiable going to be like?
Nita Farahany:
That’s a good question. I mean-
Preet Bharara:
That’s why I asked it.
Nita Farahany:
Yeah. And I wish I had the right answer for you. But you know-
Preet Bharara:
You could ask ChatGPT.
Nita Farahany:
We can ask, but I don’t think we’re going to get-
Preet Bharara:
And just make some stuff up.
Nita Farahany:
… a verified answer. I have a chapter in the book called Mental Manipulation, and it focuses both on the ways in which our brains can be hacked, and on where we're going to need to draw the line, to think about what violates this concept of cognitive liberty, this right to freedom of thought, and where we're going to need to start to define things that cross the line. Because we're trying to persuade people and each other all the time. We're trying to read each other's minds all the time. We're even trying to manipulate people in some ways, in ways that we don't find problematic. I talk about my three-year-old, very good at manipulating me, but we mostly just laugh and smile when a child does it; there's something different that gives us concern. And I think what a lot of people worry about with AI is either the developers or the AI itself developing some evil intentionality and manipulating humans to do its bidding. And there are some reasons to be concerned about bad actors doing that, for sure, and we need to develop safeguards against that. But do I think a pause is the right answer? No.
Preet Bharara:
I want to ask you about our humanity. In a universe in which AI becomes so advanced that it has absorbed all of human… I've heard you say this, and so I'm throwing it back at you. All of human art, history, culture, learning, knowledge, mathematics, you name it, it's all in the AI brain, so to speak. And because it's generative, it can spout out not just articles, not just poems, but works of art, novels, architectural plans for the grandest buildings we've ever had. That all sounds lovely. But I'm going to ask you a version of the question I asked you a minute ago that you didn't want to answer. What kind of world is that, in which a machine is producing, creating, arguably at some point in the future, more and better than any human ever has?
Nita Farahany:
Yeah. So I mean, first-
Preet Bharara:
That’s a real thing, right?
Nita Farahany:
Yeah, it's a real thing. So there was a really nice opinion piece written by Yuval Harari and Tristan Harris and Aza Raskin in the New York Times, asking this question and calling for slowing down for this very reason, which is, it's going to transform humanity. But the transformation of humanity is already underway. I mean, from AI, from technology; there's almost nothing we have created that isn't with technology. I mean, animation: if you watch movies these days, rather than these giant sets, most of it is created with one small set and a lot of computer-generated imagery. And so this is part of a continuum of the transformation that's already occurred. Which isn't to say that I think it's all fine and we shouldn't be worried about it. We need to be having much more open dialogues about this, and figuring out what we want, and what we think is beauty, and what we believe is creativity, and the extent to which these tools can enhance human flourishing, and the extent to which they diminish human flourishing.
And then try to direct it in ways that are better for human flourishing, which requires us to figure out what we mean by human flourishing in the modern age. And I have a take on that in the book, about this idea of cognitive liberty, this right to self-determination over our brains and mental experiences. But the book is an invitation to a broader dialogue, a call to action for us all to be having the conversation about what it means to actually have liberty over our brains and mental experiences in an age in which much of what we're surrounded with is created by machines.
Preet Bharara:
Okay, so you talk a lot about liberty. In ballpark figures, in how many days, months, or years will AI be our overlord?
Nita Farahany:
I guess it depends on what you mean by overlord. That's a good lawyerly answer, right? I mean, if you mean how long until we're all plugged into the Matrix, I would invite you to read The Anomaly.
Preet Bharara:
You have a fascinating section of your book where you talk about something that I was not so familiar with, this idea of transhumanism. But at some point, and this is real too, not in the immediate future, if we can extract information from our brains and plug it into a digital format, we can have an existence outside of our physical bodies. And if you have an existence outside of your physical body, you have an existence outside of the physical limitation of death.
Nita Farahany:
Maybe.
Preet Bharara:
Maybe?
Nita Farahany:
Yeah. I mean so maybe-
Preet Bharara:
When’s that going to happen?
Nita Farahany:
I mean, there's a lot we need to learn about the brain to be able to ever fully upload it. And the extent to which your brain can actually exist apart from the rest of your sensory system, we have no idea about. I think the more limited form of that, that's to come much sooner, is things like brain-to-brain communication, or brain-to-device, which is what most manufacturers are really focused on: to figure out if we can have a more seamless integration between what's happening in our brains and what's happening with the rest of our technology.
Preet Bharara:
And that’s really on the horizon. And there have been advances-
Nita Farahany:
Yeah.
Preet Bharara:
For people-
Nita Farahany:
That’s already happening with… Yeah. I mean you-
Preet Bharara:
For people who don't have the power of speech or don't have the power of movement, they can communicate with their brain. And what's the most advanced technology of that sort? How fast? How many words per minute?
Nita Farahany:
I think it was 62 words per minute, the most recent. So there was an ALS patient who couldn't otherwise communicate who was able to communicate brain-to-text at a rate of 62 words per minute. Versus, just as a comparison, Stephen Hawking, who had the best technology possible trained on him at the end of his life, I think could only generate 15 words per minute. And for much of his life it was one word a minute, and at one point it was even one character a minute. And just think about that rate of advancement.
Preet Bharara:
So there's a lot of good and some danger, some challenges. I want to mention again, Nita, your amazing, excellent book, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.
Nita Farahany:
Thank you.
Preet Bharara:
For more analysis of legal and political issues making the headlines, become a member of the Cafe Insider. Members get access to exclusive content, including the weekly podcast I co-host with former US attorney Joyce Vance. Head to cafe.com/insider to sign up for a trial. That's cafe.com/insider.
If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics, and justice. Tweet them to me @PreetBharara with the hashtag #AskPreet. Or you can call and leave me a message at 669-247-7338. That's 669-24-PREET. Or you can send an email to letters@cafe.com. Stay Tuned is presented by Cafe and the Vox Media Podcast Network. The executive producer is Tamara Sepper. The technical director is David Tatasciore. The senior producer is Adam Waller. The editorial producers are Sam Ozer-Staton and Noah Azulai. The audio producer is Nat Wiener. And the Cafe team is Matthew Billy, David Kurlander, Jake Kaplan, Namita, and Claudia Hernandez. Our music is by Andrew Dost. I'm your host, Preet Bharara. Stay Tuned.