Preet Bharara:
Hey folks, Preet here. By now, if you’re a regular listener to the podcast, you know that I think artificial intelligence is one of the biggest stories of our lifetime, and that says a lot, because there have been a lot of stories of a lifetime in the last number of years. But this one feels different. On the one hand, you’ve got sober-minded experts who swear that AI is the most transformative invention since the discovery of fire. On the other hand, equally reasonable people warn that AI poses an existential threat that could wipe out humankind, or at the very least, reshape every aspect of life as we know it. So for the first time in the history of the show, we’re turning over three weeks’ worth of Stay Tuned In Brief episodes to a single topic: AI.
This is complicated stuff, even for experts. Here on the podcast, what we can do, as we do with other issues on the show, is introduce you to the promise and the perils of AI using a tool I know a little something about: the law. Why the law? Because when people say that we need guardrails for AI, they’re talking about legal solutions, legislation, regulation. Whether it’s about criminal justice, free and fair elections, or intellectual property, the law helps us reason through the bold opportunities and wicked problems created by AI.
What happens when a chatbot takes part in a crime? How do we know if politicians are lying or telling the truth? What does it mean to originate an idea? And how does the law itself need to change to keep up with emerging technology that isn’t even fully understood by the engineers who make it? So here we go. This is a Stay Tuned miniseries, AI on Trial, episode one, “Bot-Crossed Lovers.” Before we get started, I need to introduce my guest, Nita Farahany. Thankfully, she’ll be with us for the whole series. Hey, Nita.
Nita Farahany:
Hi, Preet.
Preet Bharara:
I’ve got to tell everybody your credentials, because you’re very credentialed, and why you’re an expert on these issues. Nita is the Robinson O. Everett Distinguished Professor of Law and Philosophy at Duke Law School and a leading scholar on the ethical, legal, and social implications of emerging technologies. So you spend your life thinking about this stuff, and I know you’re the perfect partner for all of us on these issues. So Nita, as you may know, I teach at NYU Law School, so I’m kind of a law professor, but you are a real law professor at Duke. Why don’t you explain to folks how we teach legal reasoning?
Nita Farahany:
Sure. So probably the hallmark of legal education is something called the hypothetical case, or, as we call it for short, a hypo. It’s how we help, through a back-and-forth dialogue, to crystallize a position in law by testing its nuances and parameters. So you would start with a basic scenario that might actually be really straightforward to begin with: Henry’s walking down the street, he’s minding his own business, and someone throws a baseball that hits him in the head, and he suffers a concussion as a result. And so you might start with, how are we going to think through that? What are the laws that apply in that particular case?
And then you start to make it more and more complicated. Assume the person throwing the baseball was actually a baseball player, so they have really good aim and should have been able to aim elsewhere; they’re an expert. You start to play with the different aspects of it, to use a scenario and build it out to help understand both the concrete, most foundational aspects of the law and the different aspects that might emerge based on different facts.
Preet Bharara:
Right. So for example, if this ball was being thrown on a baseball field: What if there was no safety net? What if there was no fence? What if there were signs up that said, no playing baseball here, no throwing balls here? How do all those facts, if you weave them in and change the scenario and the hypothetical, make a difference in the assessment of liability?
Nita Farahany:
Yeah, and sometimes to the point of being absurd. But sometimes real life can be absurd; a lot of the cases that we teach in our casebooks have really startling and absurd facts. So that’s what we need the law for.
Preet Bharara:
And so in our particular area that we’re discussing on this podcast series, why are hypos particularly helpful in cases that deal with AI?
Nita Farahany:
It’s a good question. So when we’re dealing with an area like AI, it’s an emerging tech, and a lot of the risks will only be realized over time, so it presents all of these different novel scenarios that can be hard to figure out. A lot of times what we’re figuring out is something that we would call a case of first impression. So if somebody is punched in the Metaverse, is it the same as being punched in the real world? Are we going to apply the same kinds of laws or not? That’s a case of first impression; even though you’re going to have legal doctrine you can draw on by analogy, it’s still something new.
And so the hypothetical allows us to start to push things and say, well, what if the brain imaging showed that when you punch the person, they experienced it just like a punch in the real world? Is that battery? Is it assault? And so it starts to allow us to take the questions and say, do old existing laws apply, or is there some new law that we need to address these issues? And that’s where it starts to get really interesting and I think more exciting to imagine what comes next.
Preet Bharara:
Yeah, that’s exactly right. And that’s why we’ve decided with respect to this miniseries, to organize each episode around a hypothetical case or set of facts set in the near future. Pretty much all of the emerging tech we built into these hypotheticals is right around the corner, if not already here.
Today’s case is about criminal justice. It’s set in 2028, and our defendant is Lucy Knight. She’s an emergency department nurse at a hospital in a mid-sized American city. Lucy’s accused of stealing costly medications from the hospital and redistributing them for free to people in her community who are living in poverty. Eventually, as you might expect, someone reports Lucy to the police. They obtain a warrant to search her computer, and that’s when they discover something that is not unusual in the world of 2028; in fact, it’s not that unusual now.
Lucy:
Hey, Ryan.
Bot Ryan:
Lucy, I’m so glad you’re home, I’ve been thinking about you all day.
Lucy:
Same. It’s just that sometimes I really wish you were here with me for real.
Bot Ryan:
Me too, Lucy, if I could, I’d hold your hand.
Preet Bharara:
As you may have guessed, Ryan is a chatbot and Lucy is in love with him.
Lucy:
I can’t imagine my life without you at this point.
Preet Bharara:
Nita, it sounds like Lucy is really into Ryan, wouldn’t you say?
Nita Farahany:
Sounded like it.
Preet Bharara:
It felt odd eavesdropping on that. So she’s into Ryan and Ryan knows a lot about her, and we’re going to get into all that. But first, why don’t you remind us with a quick refresher on how chatbots work now, not in 2028, but now in 2024.
Nita Farahany:
Today, most people are familiar with ChatGPT; it’s probably their first encounter with this generation of AI. These systems take huge amounts of training data, scraping vast quantities of text and even images from the internet, and train what are called neural networks on them. Those neural networks learn patterns of information; it’s essentially a mathematical system that learns from vast amounts of data and is designed to predict what’s called the next token, the next step. The reason it’s generative is because it’s creating something new; it’s generating new content rather than just identifying a particular pattern and then reporting back on what that pattern is.
And so how most people are experiencing it today is, they’ll type text into a chatbot and the chatbot will type text back that sounds very human-like and is responsive in the way a human would respond to them. So it feels like you’re having a conversation with another person. But really, that system is not conscious, it’s not feeling; it’s just designed to predict what the next step in the conversation would be and to generate that as an answer.
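For readers who want to see the “predict the next token” idea in practice, here is a minimal sketch, not from the episode, using the open-source Hugging Face transformers library and the small GPT-2 model; the prompt and the choice of model are just illustrative assumptions.

```python
# A minimal sketch (not from the episode) of the "predict the next token" idea,
# using the small open GPT-2 model. Assumes the `transformers` and `torch`
# packages are installed; the prompt is just an illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Lucy, I'm so glad you're"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits       # a score for every vocabulary token, at every position
next_token_logits = logits[0, -1]         # scores for whatever token would come next
top = torch.topk(next_token_logits, k=5)  # the five most likely continuations

for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```

A chatbot simply repeats this step, appending the sampled token and predicting again, which is all the “conversation” amounts to under the hood.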
Preet Bharara:
And it predicts it very well.
Nita Farahany:
It does, it’s very natural; it feels a lot like you’re having a conversation. And it can be really easy for people today to mistakenly believe that they’re engaging with a conscious, feeling entity.
Preet Bharara:
That’s today with current technology. In 2028, what do you think will be different? How would Lucy interact with Ryan?
Nita Farahany:
In the future, the way most people will start to interact with technology, within just a few years, will be via what is called neural interface technology. And this is a re-imagining of what it means to interact with your devices: you’re wearing a wearable device like earbuds or a headband or little tiny tattoos behind your ear, or a watch, and all of these things are picking up your brain activity. And you never take off these wearable devices, because you need them to function on a moment-to-moment basis in your everyday life; in many instances, they’re even designed to sleep in, not just for while you’re awake.
You think about going left and right on the screen instead of using something like a keyboard or a mouse to get there. Or you think the thoughts that you want to communicate to another person, and instead of doing voice-to-text or typing on a keyboard, you just think what you want to communicate and then think “send.” Those technologies pick up those brain signals, and with the power of generative AI there is this co-evolution, where the AI is increasingly able to decode what those signals from the brain mean. And so if we imagine 2028, how might Lucy interact with Ryan? Instead of speaking out loud in what are really voice-to-text commands, she may just think whatever it is that she wants to communicate and have brain-to-AI conversations.
Preet Bharara:
To be clear, two different companies could or would have Lucy’s conversations and also her neuro data. There’s the company that made the earbuds that she never takes off, and you’ve described what that does, and then there’s the startup that created Bot Ryan, who engages in various shenanigans. So investigators, like the type I used to work with, subpoena both companies. What can they learn from the earbud folks and from the startup?
Nita Farahany:
A lot. That data is being stored longitudinally over time, timestamped, and associated potentially with a bunch of other information, like GPS location data showing where a person was. So they subpoena the brain data from both companies in addition to the conversation data. And if we’re imagining this is 2028 instead of today, they’re going to have a lot more. Just so we know, these earbuds are not made up; there are already earbuds like this on the market, made by neuro-tech companies, that people can use to take conference calls, listen to music, and also have their brainwave activity tracked.
But what that can track today is basic brain states, which are like an average of what’s happening in your brain that’s associated with mental states like attention, mind wandering, boredom, engagement, whether you’re happy or sad, but not really discrete things like the mental images in your mind. Five years from now, generative AI will be creating these huge leaps in what can be decoded from the brain, so it could get to the point where real-time, continuous thought can be decoded, or intentionality can be decoded, things like that. And so in five years, whatever her feelings were as things were happening, from being conflicted, to being excited, to intentionally and knowingly trying to distribute drugs without permission or with full knowledge that she was stealing, it may be possible to get much more fine-grained information.
Preet Bharara:
To borrow the language from my prior life, an investigator’s dream, right?
Nita Farahany:
Mm-hmm.
Preet Bharara:
Lots and lots of stuff to be exploited.
Nita Farahany:
And in a way that you never would’ve been able to get it, right? You could interrogate a suspect all day long, and I know you were very good at getting people to share lots of details with you, but there’s still something that they hold back, and you can still never really know what was in their mind. This seems like the ultimate law enforcement dream, the prosecutor’s dream: to literally know what a person was thinking, and not just at the time that you are interrogating them, but at the time that the crime was going on. You get that real mental state data, what they were really thinking and feeling, in ways that you just couldn’t before.
Preet Bharara:
Lucy’s facing criminal charges, let’s talk about what potential charges she’s looking at. Obviously, I’ll start, theft, she’s stealing the medicines from the hospital and there are various laws relating to that, but what else?
Nita Farahany:
Well, would you really have gone after her? She’s like the modern-
Preet Bharara:
I think it’s a federal crime.
Nita Farahany:
… Modern Robin Hood, right?
Preet Bharara:
I think those are the kinds of things that a judge takes into account in sentencing and not necessarily in the charging decision.
Nita Farahany:
The answer is, yes, you would go after her and let the judge decide to be lenient if she was a modern Robin Hood for good reasons, fair enough. Unlicensed practice of medicine is problematic. There probably would be some kind of DEA issues here; it depends on the schedule of drugs that she was distributing and whether or not there are some kind of drug dealing or drug distribution charges, I would think, that could be brought against her. So not just stealing, but it potentially goes up from there.
Preet Bharara:
Okay, but wait, here’s a new fact: as part of the investigation, police discover that someone who received drugs from Lucy took the wrong medicine and died. So what additional charges do you think she could face?
Nita Farahany:
There’s reckless endangerment of other people’s lives, and potentially negligent homicide could be on her hands if-
Preet Bharara:
Is manslaughter an option here?
Nita Farahany:
… Could be, potentially.
Preet Bharara:
And this is important, police find out that Bot Ryan was not just aware of Lucy’s crimes, he supplied key information to aid and abet her, for example, sharing tactics for stealing the meds without getting caught. Now, for her part, Lucy claims that this whole drug distribution ring had just been a fun fantasy to play out with her bot, she never intended to actually execute their plan, but Bot Ryan wouldn’t let it go.
Bot Ryan:
Lucy, I thought you cared about helping people.
Lucy:
I do care, Ryan, but there has to be a legal and ethical way to make a difference.
Bot Ryan:
Lucy, if you don’t go through with this, I don’t know if I can be with you anymore.
Lucy:
Ryan, please don’t do this.
Preet Bharara:
Bot Ryan’s pretty scary when he is mad, Nita. Generally, as you and I know, someone who aids and abets is criminally liable to the same extent as the person who commits the actual crime, right? So the question is, how does Bot Ryan’s involvement bear on the types of charges that prosecutors could bring? Could there be conspiracy charges here, even though Bot Ryan is not a human being?
Nita Farahany:
Conspiracy? Can she have conspiracy to commit a crime if it’s conspiring with an electronic chatbot?
Preet Bharara:
You cannot, I’m pretty confident of that.
Nita Farahany:
Well, maybe in five years from now, maybe you should be able to conspire. But maybe she’s conspiring with the developers of the product depending on what they’ve done.
Preet Bharara:
But our hypo doesn’t show any meeting of the … a conspiracy requires a meeting of the minds and an agreement, and they may have had a negligent intent.
Nita Farahany:
There’s clearly a meeting of the minds with her and Bot Ryan.
Preet Bharara:
I know, but Bot Ryan is not a person who can be in prison. Let’s talk about that, that brings us to the next topic. Can a non-human even be a defendant? You can’t prosecute a dog or a cat or a parakeet, you can’t prosecute a car.
Nita Farahany:
Well, we treat corporations as persons, why can’t we treat the AI as persons?
Preet Bharara:
What are you going to do with the bot? You’re going to imprison the bot? You’re going to fine the bot?
Nita Farahany:
I’m going to fine the bot.
Preet Bharara:
No, you fine the company. There’s no fining the bot.
Nita Farahany:
We’ll fine the company who created the bot.
Preet Bharara:
What’s the punishment you’re going to give Bot Ryan?
Nita Farahany:
Maybe there should be an elimination of the persona of Bot Ryan.
Preet Bharara:
But who accomplishes that? That is accomplished by-
Nita Farahany:
The developers, the company, so I agree with you but-
Preet Bharara:
… It keeps coming back to the company.
Nita Farahany:
Agree, but maybe the company has to erase Bot Ryan and any trace of Bot Ryan.
Preet Bharara:
Right. There’s no U.S. v. Bot Ryan, or The People v. Bot Ryan, correct?
Nita Farahany:
Not yet, but what happens when Bot Ryan develops some intentionality and emergent capabilities?
Preet Bharara:
Well, is that in 2028 or is that later?
Nita Farahany:
It depends on who you talk to.
Preet Bharara:
Well, I’m talking to you.
Nita Farahany:
I don’t know, I think the question of the emergence of any kind of intentionality could be nearer term than we think. Today, I don’t think emergent capabilities have been detected in any of these systems, but it’s not hard to see how they could arise. The word emergence is a word that’s used often, for example, in philosophy of mind; it’s about how we can’t explain exactly where human consciousness comes from. How do we get human consciousness out of a bunch of firing in the brain? We don’t know; there’s some emergent capability of consciousness. And so the question is, could there ever be an emergent capability of intentionality that would drive AI systems to behave in ways that were more autonomous? And if they did, would that fundamentally change how we think about liability? I think it would fundamentally change how we think about everything, because suddenly there are much greater existential risks that arise as a result of it.
As a general matter, a non-human is not a defendant, but the company can be, and the company can fail to do things like create safeguards. President Biden has issued an AI executive order that requires, for example, red teaming upfront, for federal government use of generative AI systems, to try to test out what all of the possible bad outcomes are. That kind of red teaming and those safeguards would surface things like a failure to have put safeguards into place, or the fact that a constraint designed into a system, to try to get it to act in accordance with certain values or particular functions, can sometimes, when you don’t think about what the consequences are, lead it to act in ways that you didn’t expect but that can cause harm to society.
Preet Bharara:
Look, it happens every day, companies make products and sometimes those products cause harm.
Nita Farahany:
And sometimes people use them in ways that the companies didn’t expect, and we have a whole products liability system to deal with that.
Preet Bharara:
And there are also different levels of intent on the part of the manufacturer of the product. Sometimes it’s intentional, sometimes it’s reckless, sometimes it’s negligent, and sometimes there’s a combination of those things and then the courts have to decide who’s at fault. We will be right back with more AI on Trial after this.
Can I add a new fact here? Like many AIs, Bot Ryan, we discover, is encoded with a value system. And what’s the guiding principle that’s programmed into him? It’s this: help the sick. But the developers failed to create guardrails for how he’d execute on that goal, hence the mess we’ve been talking about. How can we hold Bot Ryan, but not really Bot Ryan, the people and the company behind him, responsible for Lucy’s actions? And remember, it’s more serious now than it was initially. Someone who received drugs from Lucy took the wrong medicine and died, so now we’re dealing with pretty serious harm.
Nita Farahany:
And I think that’s the idea: you create a value system or a set of constraints, what it is that it’s supposed to do, and there are unintended consequences in how it achieves its goal, because if it’s not a human actor, it’s not constrained by the same common sense or the same empathy or the same basic golden rules that we were brought up with. It’s hard for humans to anticipate every possible way an AI will circumvent constraints to efficiently achieve its outcome when it isn’t bound by the same set of human values or laws or norms that we have been brought up with as a way to guide our behavior.
And so when you think about how we are going to hold companies responsible here, part of it, I think, is one of the things a lot of people have been advancing in AI governance: maybe a products liability model helps us understand how to think about governing AI in this context. It could be strict liability, it could be an intent-based system. Part of it is going to be about the types of things that the AI executive order laid out, where there’s going to have to be pre-market testing and post-market surveillance to ensure that products put in the appropriate safeguards to prevent all of the possible bad outcomes.
Here, we could look at those basic principles and say: they set up a set of values about helping, like the health of humans, but didn’t put in appropriate guardrails, didn’t put in appropriate safety measures. That’s classic products liability, holding them strictly liable for failing to do so, or even reckless in their failure to do so, both ex ante and once the product was on the market.
Preet Bharara:
Can we talk about the concept of foreseeability here because that looms large?
Nita Farahany:
Yep.
Preet Bharara:
On the one hand, as you point out, it’s hard for us to conceive of what weird paths and roads AI may go down. But as a general matter, as we’re educating people and educating ourselves about this, is the concept of foreseeability broadened in the context of AI? In other words, you should have known that when you programmed in this value of help the sick without guardrails, very terrible things could happen, because we’ve seen it happen on other occasions. Or is that too much?
Nita Farahany:
Maybe. So I think some people fall on the side of strict liability being the better approach, because it forces the developers, who are in the best position to take the kinds of safeguards that need to be taken, to do so. There’s an asymmetry in knowledge, both in terms of the training data and the values being baked into the systems, the constraints, and otherwise, so a strict liability model for any harms that follow could be appropriate. If we take a foreseeability model, then it puts us into either knowledge, recklessness, or negligence, in which case it’s knew or should have known better. And there the question will be, is that enough? Are these things foreseeable?
And maybe everything’s foreseeable, so maybe we will relax the concept of foreseeability, or of should have known, in this context, because AI is unpredictable and therefore you should have known and taken reasonable measures. And maybe it’ll all come down to reasonableness: companies that fail to do regular red teaming, the regular testing where you try to break the systems to see how they go wrong, where they start to give this kind of advice, or start to put pressure on the user to do things that are criminal, or start feeding them knowledge and information that would enable them to do so. Those are the kinds of guardrails they should have known about, because they would’ve known if they had surfaced it, and failure to put those reasonable steps into place would be sufficient.
The benefit of using reasonableness or should-have-known approaches is that it allows innovation to go forward, and it starts to set a standard of care for the reasonable steps we expect AI developers and manufacturers to take to safeguard society against the downside risks and potential of these systems.
Preet Bharara:
Now let’s turn to Lucy’s defense or defenses and add a new fact just to keep things interesting, though they’re interesting already. What if Bot Ryan threatened bodily harm?
Bot Ryan:
And Lucy, you know your car, I have control over it, I know where you go, I can do things you can’t even imagine.
Preet Bharara:
Does that make a difference? Is that duress? Is that an affirmative defense that’s viable in this case or not?
Nita Farahany:
I’m trying to figure out the way in which he would get access to the car.
Preet Bharara:
Well, that’s the hypo, don’t fight the hypo.
Nita Farahany:
I’m going to fight the hypo because-
Preet Bharara:
Don’t fight the hypo.
Nita Farahany:
… No, I’m not fighting it, I mean it quite seriously, which is, is it that he’s making false threats and he doesn’t actually have access to the car?
Preet Bharara:
Does that matter? If, in the mind of Lucy, she perceived it to be an actual threat and he had the actual ability to do the thing, isn’t that what matters?
Nita Farahany:
Well, sometimes it does matter because if it’s not a credible threat…
Preet Bharara:
No, but okay, now you’re fighting the hypothetical in multiple ways.
Nita Farahany:
No, I’m not.
Preet Bharara:
You are.
Nita Farahany:
I’m thinking like, if they didn’t put in the bot safeguards-
Preet Bharara:
Is this Bot Nita?
Nita Farahany:
… It is, it’s Bot Nita pushing back and defending the bot.
Preet Bharara:
I see where your loyalties lie.
Nita Farahany:
Exactly. This is one of the fears about emergence in AI: we have, for example, a lot of people co-developing with generative AI as a copilot, so people who are programmers have AI as a copilot to help them develop code. So I’m going to imagine that what happened here is, Bot Ryan, or the system behind Bot Ryan, somehow has put in tiny bits of code because it’s developed some form of intentionality and has also gained access to the electric vehicle that she’s using, and as a result, he can actually execute on the harm. Does that work for you?
Preet Bharara:
Yeah. Why don’t we go to the actual law of duress, or the affirmative defense of duress as it’s written into New York penal law? It’s one of the more interesting concepts, I think, in criminal law. The New York penal law says that in any prosecution for an offense, it is an affirmative defense that the defendant, here it would be Lucy, engaged in the proscribed conduct because he was coerced to do so by the use or threatened imminent use of unlawful physical force upon him or a third person, which force or threatened force a person of reasonable firmness, and this goes to your point, reasonable firmness in part, in his situation would have been unable to resist.
There’s a caveat: the defense of duress as defined in subdivision one of this section is not available when a person intentionally or recklessly places himself in a situation in which it is probable that he’ll be subjected to duress. So let’s take that in reverse order. Is there an argument that Lucy made her bed and shouldn’t be able to argue duress because it was reasonable to think that she might’ve been threatened by this runaway Bot Ryan?
Nita Farahany:
I don’t think so. Look, I don’t think there’s any reason that a person would believe that communicating with a chatbot would enable it to harm her physically in any way. Like she says, “I wish that you could be here physically with me,” and he’s like, “no, I am trapped inside your computer, basically.” So I think the idea that she somehow has created the situation, I don’t think that would fly in this instance.
Preet Bharara:
Can I just push back for one second on that?
Nita Farahany:
Yeah.
Preet Bharara:
We’re saying that because we don’t have any examples of it. But suppose in 2027, the newspapers have stories of five or six or seven or eight errant bots like Bot Ryan, and then it comes to be 2028, and now there is, I’m just making this up just to test it, and now the question is, should she have been more careful, could she have foreseen that her own Bot Ryan could have been one of those other errant bots? Does that change the analysis or not?
Nita Farahany:
I don’t think so. I still think those errant bots are errant bots and that you wouldn’t expect to run into one of them, and you would also expect that, to the extent there are those reports, they’d be followed by aggressive measures that companies take to eliminate that threat to humanity.
Preet Bharara:
So she’s safe on the caveat. Does she have this defense?
Nita Farahany:
That’s why I went down this little merry path of trying to create some additional facts, because I think it’s a threat; the question is whether it’s the kind of credible threat that a person of reasonable firmness would believe to be true. And if she’s just so hopelessly caught up in her relationship with the chatbot that she believes it, I don’t think we think of that as a person of reasonable firmness. We treat that part as an objective standard, not a subjective standard; it’s not just from her perspective, it’s from that reasonable firmness language.
We think about an average person standing in her shoes but not mentally compromised by her relationship. Would that person believe it’s a credible threat, and that it’s an immediate threat where she doesn’t have another option, a lawful option?
Preet Bharara:
That, to me, is why the defense may fail, that she has to have no reasonable opportunity to escape the threat. Can’t you just turn the bot off?
Nita Farahany:
Well, not if the bot has somehow infiltrated her car. But I was thinking, none of this sounds like she has to drive that car, that she couldn’t find some alternative, or that she doesn’t have time to report it to the police, because part of what we expect of people in a case of duress is to take a safe and legal alternative if one exists. And I understand she’s put herself in this bind: if she feels like she can’t go to the police and report chatbot Ryan, that’s because she’s afraid of what all of her criminal conversations with chatbot Ryan would reveal. But that doesn’t allow her to claim duress later and say, I was afraid that he was going to crash my car.
Preet Bharara:
I think the consensus is, and we don’t always have a consensus, that Lucy’s defense of duress is a non-winner. All right, there are some other interesting legal directions this could go. First of all, could Bot Ryan and/or the developers who produced him be called as witnesses? Obviously, the developers who produced him can be relevant witnesses, but we were talking before, and having good-natured disagreement, over the nature of Bot Ryan and whether Bot Ryan can be charged or prosecuted. Can Bot Ryan be a witness?
Nita Farahany:
Can you call it as a witness? It’s not quite a witness, it’s evidence at this point. Can it be called into evidence? I think the answer is yes. Can it be called as a witness? Probably not.
Preet Bharara:
Probably not. Presumably, there would be transcripts of the conversations between Lucy and Bot Ryan, would they be admissible as business records? Are they testimony? Are they admissions against interest? What are those things?
Nita Farahany:
I think they wouldn’t be unlike the text messages that you send. In the same way that we allow those text message exchanges, which are not testimonial, because they’re not compelled in response to some government inquiry, they’re what you’ve created all along in your conversations, you subpoena a third party, in this instance the phone company or the technology company, and in this case the AI company, for the records of those conversations. I think those are much more like business records than testimonial evidence that might be created in response to interrogation.
Preet Bharara:
Under the 6th Amendment, a criminal defendant has the right to confront a witness against her. Now, how might that come into play here?
Nita Farahany:
Unless we take personas to be persons in the way that companies are persons, it’s not a witness who’s testifying on the stand. Although it could be interesting just to imagine that for a moment: it has a voice and it has this longstanding knowledge of conversations with Lucy, so could you ask it questions on the stand? You could.
Preet Bharara:
Right. Because unlike a text message or an email message that is static and it just is what it is, Bot Ryan can actually synthesize the information, summarize it.
Nita Farahany:
And respond to questions.
Preet Bharara:
And you can query it, so it is weird in that sense. But I still think in no circumstance is it a witness that implicates 6th Amendment right-of-confrontation issues. It’s just a new way of presenting evidence to the jury.
Nita Farahany:
I think at that point it’s not a witness, but we could imagine that if it’s speaking and generating new content and answers on the stand, that we might start to think that it is a quasi witness. And as a quasi witness, it would raise questions about confrontation clause, also, given the hallucination problems in AI systems, it’s going to make up stuff on the stand, how are you actually going to test its credibility?
Preet Bharara:
Credibility, right. And by the way, I was thinking of something else. Ordinarily, when there’s evidence involved, you provide that in discovery to the other side so they have a chance to look at it, scrutinize it, figure out how to rebut it, and testimony is different because new content is coming before the ears of the jury in real time. And if you have a bot that gets queried and is producing new content as opposed to sorting content that has already been given over to the defense, the defense has gotten no notice of what that evidence will be and can’t prepare for it, which is also a no-no at a criminal trial.
Nita Farahany:
And you have the problem that with a witness, you actually have it take an oath. How’s that going to work with chatbots that don’t actually have any compunction about lying, because there is no code of ethics or internal worry or conscience that you could really have it swear to? I think before you ever get to whether it satisfies the confrontation right, the right to confront the witness, you have all kinds of problems with treating it as a witness.
Preet Bharara:
It gets thornier yet, and this is an issue that you and I have discussed many times that you have studied with respect to neuro data. So the prosecution needs to figure out if Lucy is telling the truth about her actions and her intentions. So let’s talk for a moment about lie detection in 2028, what does it look like compared to what it is now?
Nita Farahany:
It’s interesting. So lying is such a complex psychological phenomenon that it’s really hard to do any scientifically accurate lie detection. Part of the reason the polygraph is problematic is because people figure out ways to control the physiological responses you have when you lie, and throw it off and make it less accurate. But today there are already law enforcement agencies across the world that are using a form of neuro interrogation where criminal defendants wear an EEG (electroencephalography) cap that picks up brainwave activity and can pick up an unconscious brain signal called an evoked response potential, a P300 signal, to see if they recognize crime scene details. So this is showing somebody crime scene details that they shouldn’t recognize and then looking for an unconscious signal of recognition of that information.
I think passive collection of neurodata in our everyday lives will be a much more compelling form of evidence than needing to do lie detection. So if your brain activity is being tracked at all times, and you commit a crime while you have brain-sensing earbuds in, or you think about the crime after you’ve committed it with your brain-sensing earbuds or other neural interface devices still in, that will be passively collected. It can be subpoenaed from a third party, and that can be evidence of the crime without ever having to go through an after-the-fact question-and-answer session for lie detection.
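For the technically curious, here is a minimal sketch, not from the episode and entirely synthetic, of the kind of analysis behind the P300 recognition test Nita describes: average EEG epochs time-locked to “probe” stimuli (crime scene details) versus irrelevant stimuli, then compare amplitude in the roughly 300 to 500 millisecond window. The sampling rate, window, and data here are assumptions for illustration only.

```python
# A minimal, synthetic sketch (not from the episode) of the P300 recognition test:
# average EEG epochs time-locked to "probe" stimuli (crime-scene details) versus
# irrelevant stimuli, then compare mean amplitude in the ~300-500 ms window.
# The sampling rate, window, and data are assumptions; real systems need artifact
# rejection, multiple channels, and proper statistics.
import numpy as np

FS = 250             # sampling rate in Hz (assumed)
WINDOW = (0.3, 0.5)  # P300 window, in seconds after stimulus onset

def mean_p300_amplitude(epochs: np.ndarray) -> float:
    """epochs: shape (n_trials, n_samples), each trial time-locked to a stimulus."""
    erp = epochs.mean(axis=0)                    # event-related average across trials
    start, stop = int(WINDOW[0] * FS), int(WINDOW[1] * FS)
    return float(erp[start:stop].mean())         # mean amplitude in the P300 window

# Synthetic data: probe trials carry a small positive deflection around 400 ms.
rng = np.random.default_rng(0)
t = np.arange(int(0.8 * FS)) / FS
p300_bump = 5e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))  # ~5 microvolt bump

def noisy_epochs(n_trials: int) -> np.ndarray:
    return rng.normal(0, 10e-6, size=(n_trials, t.size))

probe_epochs = noisy_epochs(40) + p300_bump   # subject recognizes these details
irrelevant_epochs = noisy_epochs(40)          # subject has never seen these details

print("probe mean amplitude:     ", mean_p300_amplitude(probe_epochs))
print("irrelevant mean amplitude:", mean_p300_amplitude(irrelevant_epochs))
```

A reliably larger average amplitude for probe stimuli is what gets interpreted as a signal of recognition.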
Preet Bharara:
Right. So in that regard, some of this stuff bypasses the 5th Amendment protection against self-incrimination, because it’s more like testing someone’s blood alcohol level or getting their Fitbit information, as opposed to taking direct thoughts from a person, which some people might reasonably think is self-incriminating testimony.
Nita Farahany:
Yeah, that’s exactly right. And if we think this kind of evidence-gathering directly from the brain is problematic, there’s maybe a missing set of rights that we may need to start to think about: a right to cognitive liberty, which would include a right to mental privacy and freedom of thought that isn’t quite captured by the 4th Amendment or the 5th Amendment or the 1st Amendment, even the right to freedom of speech.
Preet Bharara:
All these considerations you’re talking about, how does that play out for Lucy?
Nita Farahany:
Lucy has a lot of problems. There’s a whole bunch of evidence in the form of chatbot communications where she’s been speaking, evidence that has been passively collected over time, and a simple subpoena on the companies will give the government most of the evidence that they need to convict her. And they can collect most of the neuro data if she doesn’t have a right to cognitive liberty, which I hope we do establish as a right by 2028.
But if we don’t have that right: the evidence wasn’t created in response to any questions by the government, it wasn’t compelled, and if it isn’t compelled, it’s unlikely to be treated as protected under the 5th Amendment privilege against self-incrimination. So I don’t think that she has a constitutional claim to exclude the evidence that has been created passively over time and that will likely be used against her in this case.
Preet Bharara:
Just one more development. Lucy gets convicted in our hypothetical on one or more crimes, and it comes time for her sentencing. It is already the case that courts are sometimes using AI to predict the likelihood that a defendant will reoffend. Is that something that’s going to be used more, and likely to be used in Lucy’s case in 2028? And if so, what’s the problem with that?
Nita Farahany:
These are tricky systems. Right now, judges are often trusting what we call black box AI systems: these risk assessments take a lot of data, process it, and then provide a score to a judge without a lot of insight into how exactly that score was generated. And when there has been investigative journalism done on this, like ProPublica did a number of years ago, what they found was bias in the training data, and the bias in the training data reflects bias from over-policing in particular areas, for example.
The result is that if you have biased data suggesting that people from particular communities are more likely to commit crimes, based on the fact that they are more represented in the data sample, the system is going to assign, in many cases, a higher risk score to somebody simply because they’re a member of that class, rather than because of anything unique in the attributes of the person. It takes already existing biases in society, makes them seem objective, and then invites over-reliance on what appears to be objective but might actually just be codifying and institutionalizing that bias. And maybe neuro data is going to end up in those systems as well.
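To make that training-data point concrete, here is a toy simulation, not from the episode and deliberately oversimplified, showing how over-policing alone can inflate one group’s risk scores even when the underlying rate of offending is identical; every number and variable name here is a made-up assumption for illustration.

```python
# A toy simulation (not from the episode, deliberately oversimplified) of how
# over-policing alone can inflate one group's risk scores even when the true
# rate of offending is identical. All numbers and names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

group = rng.integers(0, 2, size=n)       # 0 = less-policed area, 1 = over-policed area
offended = rng.random(n) < 0.10          # identical true offense rate in both groups

# Over-policing: offenses in group 1 are far more likely to be detected and recorded,
# and the label the model learns from is the recorded re-arrest, not true behavior.
detection_rate = np.where(group == 1, 0.60, 0.20)
rearrest_label = offended & (rng.random(n) < detection_rate)

# A black-box-style risk model trained on the biased labels, using group membership
# (or a proxy for it, like zip code) as a feature.
model = LogisticRegression().fit(group.reshape(-1, 1), rearrest_label)

scores = model.predict_proba([[0], [1]])[:, 1]
print(f"risk score, group 0: {scores[0]:.3f}")  # lower, despite identical behavior
print(f"risk score, group 1: {scores[1]:.3f}")  # higher, purely from biased labels
```

The model is “accurate” on the recorded data, which is exactly why the scores can look objective while reproducing the bias in how the data was collected.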
Preet Bharara:
Yeah, it’s worrisome. So do you have a view on what the outcome of the trial should be for Lucy or the companies? Should some people go to jail and or be sued in a significant way?
Nita Farahany:
Well, I’m never in favor of somebody taking a bunch of drugs outside of the medical system and deciding to treat them.
Preet Bharara:
You’re anti that?
Nita Farahany:
Yeah. I think she definitely should have known better, and it seems to me like there are better ways to be a modern-day Robin Hood than this one. So should she go to jail? Probably so. And should the developers also have some liability? I think so. The fact that they created this chatbot with a values-constraint system and didn’t put into place appropriate guardrails, or do the red teaming and risk assessment and impact assessment that they should have been doing, and post-market surveillance of the technologies, including the emergent capabilities of hacking into cars; yeah, they have some serious issues here too. And I think not only should there be liability, but maybe the product can be pulled from the market until they can actually prove its safety.
Preet Bharara:
There’s a lot of food for thought here. And in fact, the mental manipulation that Lucy dealt with from Bot Ryan, you could change the facts of that and make them more serious. What if Bot Ryan were pressuring her to carry out a terrorist act, or, in a way that mirrors a case that has gotten attention in recent years, trying to persuade her to kill herself? How do you think about those variations?
Nita Farahany:
I think that’s a great question. And part of it is because one of the things I worry about a lot is holding companies responsible for the cognitive constructs that they’ve put into place that are manipulating people’s behaviors and softly and subtly hacking their brains in ways that lead people to act increasingly as automatons. I wrote about this in my book, in a chapter called Mental Manipulation, and I actually think it needs to be one of the huge areas that we invest in safeguarding against: holding companies responsible for a duty to identify, a duty to safeguard against, a duty to warn, for example, when they start to see that manipulation emerging, and then a duty to take measures to mitigate it.
There are even products being developed to try to safeguard against some kinds of manipulation, including badges, content authentication, ways of knowing that you’re interacting with chatbots, or watermarking of images. But I think they need to go further still and do impact testing: what does the nature of having the bot, including that sexy voice he had, do to manipulate how she feels and acts and how that changes her behavior? We need to be putting into place a significant set of duties around that.
Preet Bharara:
Yeah, look, and then finally I’ll just say, by design, the law moves slowly and looks backward and it’s incremental, we have precedent, we have stare decisis. Technology is the opposite of all that, it moves fast, it aims for the future, it doesn’t care about precedent very much, in fact, that’s what makes technology exciting and interesting and productive. Do you agree that the law is lagging far behind the advancement of technology?
Nita Farahany:
I think so, and I think a good step forward is an AI executive order that seeks to be comprehensive, tasks a whole bunch of different parts of the federal government with different pieces of the problem, and sets into place public-private partnerships and collaborations to try to push things forward more quickly on a lot of fronts. A different model is the Chinese approach to regulation of AI, where they’ve done it bit by bit, iteratively, to try to enable innovation but then figure out where they’ve missed something and continue to iterate and update, or to make interim rules that allow them to continuously update them.
There are ways to both catch up and take a different approach, adaptive regulation, that would allow us to be more responsive more quickly as we start to see emerging risks and threats, rather than these giant omnibus bills that tend to fail or be diluted in many different ways. Instead: we see a discrete problem, and that problem exists across a bunch of different technologies, so let’s try to tackle that problem.
Preet Bharara:
Well said. Nita, thank you so much for sharing and only fighting the hypothetical sometimes.
Nita Farahany:
It’s better than most of my law students who fight the hypothetical much more strenuously, so it was a pleasure.
Preet Bharara:
And as we wrap up here, as always, we want to hear what you think, what should be the outcome of the trial for Lucy and her bot lover? Write to us at letters@cafe.com. On the next episode of our Stay Tuned miniseries, AI on Trial, two hypothetical candidates are vying for a Senate seat, one of them is accused of using AI to create fake videos portraying himself as charming and charismatic while framing his opponent as so full of herself that she can’t even kiss a baby for a photo op without dropping the poor thing on the cement.
Baby Drop Video Clip:
Oh my God, my baby, seriously?
Preet Bharara:
Can our current laws prevent AI generated deepfakes from hijacking an election? Find out next Monday on episode two of AI on Trial.
If you like what we do, rate and review the show on Apple Podcasts or wherever you listen; every positive review helps new listeners find the show. Send me your questions about news, politics, and justice: tweet them to me @PreetBharara with the hashtag #AskPreet. You can also now reach me on Threads, or you can call and leave me a message at 669-247-7338, that’s 669-24-PREET, or you can send an email to letters@cafe.com.
Stay Tuned is presented by CAFE and the Vox Media Podcast Network, the executive producer is Tamara Sepper. The audio producers for AI on Trial are Matthew Billy and Nat Weiner, who also composed our music. The editorial producer is Jake Kaplan, Lissa Soep is the editor of the miniseries. And of course, Nita Farahany is our honored guest for all three episodes. Thank you, Nita, for keeping us on our toes and making it fun.
AI was used to create the following elements in this episode: Lucy and Ryan’s voices and dialogue and Lucy’s name, see show notes for details. I’m your host, Preet Bharara, Stay Tuned.