
Reid Hoffman is the co-founder of LinkedIn and a leading voice in AI innovation. He’s out with a new book, Superagency: What Could Possibly Go Right with Our AI Future? Hoffman joins Preet to discuss how he thinks AI can help us be more human, why DeepSeek had Silicon Valley in a frenzy, and Hoffman’s latest venture: using AI to cure cancer. 

Plus, Preet breaks down why over a dozen FBI agents are suing the Trump administration, how to be a public servant in the Trump era, and why going to law school is still a good idea.

You can now watch this episode! Head to CAFE’s YouTube channel and subscribe.

Have a question for Preet? Ask @PreetBharara on Threads, or Twitter with the hashtag #AskPreet. Email us at staytuned@cafe.com, or call 669-247-7338 to leave a voicemail. 

Stay Tuned with Preet is brought to you by CAFE and the Vox Media Podcast Network.

Executive Producer: Tamara Sepper; Editorial Producer: Noa Azulai; Associate Producer: Claudia Hernández; Deputy Editor: Celine Rohr; Technical Director: David Tatasciore; Audio Producers: Matthew Billy and Nat Weiner.

REFERENCES & SUPPLEMENTAL MATERIALS: 

THE INTERVIEW:

  • Reid Hoffman, “Superagency: What Could Possibly Go Right with Our AI Future?,” Authors Equity, 1/28/25
  • “Reid Hoffman Raises $24.6 Million for AI Cancer-Research Startup,” WSJ, 1/27/25

Q&A:

  • “F.B.I. Agents Ask Court to Bar Trump Team From Disclosing Their Names,” NYT, 2/4/25

Preet Bharara:

From CAFE and the Vox Media Podcast Network, welcome to Stay Tuned. I’m Preet Bharara. “There’s a number of existential risks that confront human beings: pandemics, asteroids, climate change, nuclear war. I think AI just being developed reduces the overall existential risk characteristics.” That’s Reid Hoffman. He’s the co-founder of LinkedIn and one of the leading voices in AI innovation. He’s also the host of the popular podcast Masters of Scale. He’s now out with a new book, Superagency: What Could Possibly Go Right with Our AI Future? Reid joined me this week to discuss why he thinks AI can make us more human. We also get into DeepSeek, TikTok, and Reid’s latest venture using AI to cure cancer. That’s coming up. Stay tuned. AI is here to stay, whether we like it or not. Reid Hoffman explains why.

THE INTERVIEW

Reid Hoffman, welcome to the show.

Reid Hoffman:

It’s great to be here.

Preet Bharara:

So, we’ve wanted you on for a long time, but we have a great occasion to have you on. Congratulations on the new book. For those of you who are watching on video, you’re lucky because you get to see the actual book. A lot of colors on the cover.

Reid Hoffman:

Yeah, well, it’s trying to be glowy, rainbow-ish about the future.

Preet Bharara:

Well, it’s a very optimistic cover. The book is called Superagency, which you wrote with your co-author. Superagency: What Could Possibly Go Right with Our AI Future? I see what you’re doing there. And I will tell you, if I walked by this in a bookstore, given my background, I would think this was a pro-CIA book, super agency. But obviously it’s not about the Central Intelligence Agency; it’s about our AI future. So, what the heck do you mean by superagency?

Reid Hoffman:

Well, as you know, a lot of the dialogue around AI is kind of doomer and gloomer. It’s like, “Oh my God, how is AI going to wreck lots of things?” And when you look at all the different things, it can range from, “What’s it going to do with my data, privacy? What’s it going to do with my information flow for democracy, misinformation? What’s it going to do for my job?” Even on the doomer side, “Are we going to get Terminator robots?” All of that is like massive reductions in human agency. And yet, when you look at the history of major technology adoptions, the dialogue’s always like that. It’s always a new technology, and it always comes out of that. And the end result is we actually get what we call superagency, which is, as opposed to reduced human agency, it increases your agency. And what’s more, when a bunch of us all get it together, most of us, many of us get it together, we get superagency.

So, a simple kind of parallel is, “Hey, when I got a car, I got mobility to go places. But by the way, because my friends got mobility, they can come visit; the doctor got mobility, they could come make a house call for a kid or parent, that kind of thing.” And so my agency increases not just with my access to a car, but with other people having cars too. And the thesis is AI will give us that kind of superagency, a massively increased human agency. Doesn’t mean there aren’t transition issues. Those are very serious. Doesn’t mean there aren’t some navigation issues, but it’s a “what could possibly go right” vision of the future.

Preet Bharara:

It’s interesting you mentioned cars and some other technologies. Often it’s the case with important technologies like cars or telephones, and with less important advances like flat-screen TVs, that at the beginning they cost a lot of money, and that’s why we have the term early adopter. And I remember when a flat-screen TV cost $5,000 and now it’s like 200 bucks. So for AI, how long is it going to be before not just you get AI but your neighbors get AI too at scale, or is that already here?

Reid Hoffman:

It’s already here. I mean, there’s hundreds of millions of people using ChatGPT and others. One of the funny examples I heard recently from a friend who was in a taxi in Morocco is that all the taxis are using it, the free ChatGPT service, as a translation service. So when they don’t speak the language… in this case it was English… they can use it for directions, and where you’re going, and all the rest. So, if you have a smartphone or better and a data connection, you can use it.

Preet Bharara:

Do you put out your podcast in other languages? Why don’t we all do that?

Reid Hoffman:

Actually, I don’t yet. I did when I gave a speech in Perugia last year; I had Reid AI give it in Hindi and Chinese and all that. So, it wasn’t just audio, it was me.

Preet Bharara:

How’s your Hindi accent, Reid?

Reid Hoffman:

Oh, you’ll have to tell me. I have no idea. I don’t speak Hindi.

Preet Bharara:

I will have to listen, but that’s the future for this hat that you wear and I wear as part of the gigs that we do. It’s just occurring to me now, and I’m going to have to talk to my team, why aren’t our podcasts put out in 20 countries, because the effort with which it can be done is low, right? I mean, the ease is high-

Reid Hoffman:

Yeah, totally easy. And what’s more, if you do a little bit of work and train it on your voice, as I have, you can get a custom model of your voice. It’s interesting. I mean, you’ll have to tell me how the Hindi is and how it sounds, but it’s my voice speaking Hindi. It’s my voice speaking Chinese. It’s my voice speaking Japanese.

Preet Bharara:

So, what’s the right analogy for technology? You mentioned the printing press, and there was a lot of opposition. People thought bad things would happen, that it would destabilize the world such as it was at the time. People thought different things about different forms of power. None of the doomsday scenarios took place. Now, the one example where a doomsday scenario could have taken place, and could yet still take place, is nuclear power. That’s actually something that’s qualitatively different from the printing press and radio transmission, not so different from space travel and rocket power because they’re related to each other. Is the right analogy nuclear power? Because even though you’re not a doom-and-gloomer, you’re sort of in the middle, but tending to be optimistic. What is, in fact… let’s give a fair shake to the doomsday people… what is the doomsday scenario that’s realistic?

Reid Hoffman:

They obviously use nuclear kind of as… It’s almost like the Far Side cartoon: “Now the Hendersons have the bomb.” Everyone has a bomb on their own thing.

Preet Bharara:

Well, you said neighbors.

Reid Hoffman:

Yes, yes. Well, there’s a classic Far Side cartoon, which is about the nuclear arms race, and it has a person looking out the window. And there’s a little ICBM on a trailer, and he’s like, “Oh, now the Hendersons have the bomb.”

Preet Bharara:

But that didn’t happen because it’s way expensive.

Reid Hoffman:

Yeah, exactly. And look, I think there’s some places where AI can be used for some extraordinary things, whether it’s crime or terrorism or other kinds of things that have some-

Preet Bharara:

Wait. To commit or prevent?

Reid Hoffman:

Both. It’s one of the reasons why some navigation, some safety work, really is important to do. Rogue states, terrorists, criminals: it gives superpowers, and those are three entities we don’t really want to give superpowers to, or as little or as focused as possible. And I think that kind of thing is somewhat parallel. For example, using AI as a cyber attack vehicle to take down a grid is the kind of thing that could happen. That’s part of the reason why I don’t even think the nuclear analogy… Now, a little bit, if you’re talking to the doomers, the doomers say, “Well, you’re going to make an AI that’s so intelligent that I think it could become something like a Terminator. And I’m worried about that.” And I tend to think, well, two things. One thing, which I think is the important thing, is you say, “Hey, look. We’re worried about what are called existential risks.” And there’s a number of existential risks that confront human beings: pandemics, asteroids, climate change, nuclear war.

I think AI just being developed reduces the overall existential risk characteristics. I think it’s the only way I can figure out to really… I think more pandemics in the future are a natural occurrence, let alone potentially man-made ones, and AI is the best defense. And so I think it helps in all that. That’s one. And then two is, I can tell you the Terminator story, but it’s not at all clear that that’s inevitable. Even if we create a very intelligent AI, it’s not clear that we can’t steer it some. There’s a set of different things in that. And then it’s, by the way, not at all clear that superintelligence is what’s going to emerge out of this. Now, there are technologists who argue that it will, and I’m not saying there’s a 0% chance it won’t. I’m just saying the blasé “Hey, it all goes to Terminator” is not thinking I agree with.

Preet Bharara:

Is the advent of AI maybe a little bit more mundane, like the advent of the microprocessor or the advent of the Internet, which, at its starting stage, had both promise and peril, but it sort of depended how people would… The Internet today is very different than it was, and it’s pretty new. My kids still don’t understand how we operated and lived in the world 30 years ago without Google and without the Internet. Is that a fairer parallel? And then we can chart our own course?

Reid Hoffman:

It’s that and more, because the question is, like the Internet, obviously there’s still stuff that we’re kind of working through; how truth and information and a bunch of other stuff works is still a more complicated space. It’s not only because of the Internet… cable news, talk radio, a bunch of other things… but there are some complications. But there’s a massive amount of good that comes from the Internet. I think with AI, it’s more, in part because the Internet had job transformation, but relatively, call it, modest job transformation. And I think AI in the transition period will lead to much larger job transformation, because if you basically think, “Hey, within a small number of years, every professional will deploy with one or more AI co-pilots, agents in the work they’re doing, and probably will be more, and that that’s the skillset and toolset that you’re going to need to bring to bear not just at the beginning of your career, but generally,” I think that’s an amplified kind of transition from the kind of stuff the Internet’s been so far.

Preet Bharara:

You have many hats and many skills. You’re, as they say, a polymath. Among other things, I assume that you are basically a tech historian. And in that regard, when you talk about all these technological advances over the centuries, what is your view on whether, as societies mature and get more and more modern, their receptivity to new technology changes? Can you make a broad generalization about how more or less welcoming we are, or every time there’s something new are we like, “Oh my God, it’s a printing press,” once again?

Reid Hoffman:

I think it’s, “Oh my God, there’s something new,” because part of-

Preet Bharara:

We haven’t adjusted?

Reid Hoffman:

No. And this is part of the reason… Learning from the transitions is one of the reasons I wrote Superagency, and I’m kind of getting the message out there, because the short answer is, part of the reason I’ve described us in a couple of different books as Homo techne more than Homo sapiens is because we first start with this technology being really alien. Remember, the Internet was cyberspace, and you don’t know who it is and what’s going to go on, and, “Oh my God, would you put your credit card into cyberspace?” It’s the same stuff each time because it’s new and alien when it arrives. And then when it gets integrated, just like your kids, it’s like, “No, this is who we are. This is how we roll. This is what we do.”

Preet Bharara:

And so a related question, and I think I know the answer to this because I used to work in the Congress, as societies age and mature, are the elected officials and representatives of the people becoming more able to deal with regulatory issues relating to new technology or less or the same?

Reid Hoffman:

Well, you know that answer, and the broad answer is less.

Preet Bharara:

Yeah. It’s a bad answer.

Reid Hoffman:

Yeah. It’s learning and adjusting to the accelerating rate of technology. The learning and adjusting rate is just a very slow linear speed and the pickup of technology is super linear.

Preet Bharara:

So, what do we do about that? And I guess we could spend a while talking about… I mean, presumably part of the reason you are more sanguine than other people and you refer to the fact that there are dramatics on both sides, the doom and gloom people and the euphoric utopia people. I like the place you are. That’s where I tend to be, whether… If I had to pick a place to be without knowing anything, I’d pick the place where you are outside of the two extremes. But presumably, you’re there because I think you have some confidence that people will rally around regulation and there… Can you talk first about the United States and regulation in the rest of the world and how those things intersect?

Reid Hoffman:

Well, one of the things that I go into in some depth in the book is that this kind of technology is best done with what we call iterative development and deployment, which is, as opposed to trying to get it all mostly right before you get out, you actually, just like the car, put it on the road, and then you go, “Oh, we need bumpers. Oh, we need seat belts. Oh, we need windshield washers. Oh, we need airbags,” as you’re kind of going through it, because that’s the way you focus on which of the specific things you should be adding in. And part of how you get, call it, broad, inclusive feedback comes from having a lot of people exposed to it, engaged with it, using it, et cetera, which is part of-

Preet Bharara:

Yeah. And by the way, just to pause on that, it’s actually a startling analogy that you use, because before each of those improvements to cars, it wasn’t just that more people got exposed; more people were dead.

Reid Hoffman:

Yeah. Of course.

Preet Bharara:

We had to suffer… I mean, there are thousands and thousands, hundreds of thousands, if not millions of people over the course of the last century whose lives were lost because the newer technology and safety mechanisms were not yet in place. And we totally accepted that.

Reid Hoffman:

And by the way, I think for good reasons, which is otherwise no cars, because there is no way of doing this tech development saying, “We’re going to get the complete punch list by which this will be completely safe.” Because, by the way, we’d have thought, “Well, we should have six-foot bumpers around it and we should make sure that it’s only one person per car.” And you could imagine this kind of massive safety list that would be disastrous. And even today, I think we have about 40,000 fatalities in automobile accidents per year. It’s one of the most dangerous things most people do. And yet modern society essentially functions because of this.

Preet Bharara:

Right. But since we’re talking about technology, and you and I have been at places where this has been discussed, we keep getting told that the self-driving automobile is soon going to be ubiquitous. I remember hearing that seven or eight years ago. In four years, it’ll be ubiquitous. And because I have a lawyer’s background and bias, I’ve always assumed that one obstacle to that is even though we accept and abide 30,000 or 40,000 deaths even today with all the safety mechanisms and the rear-view cameras and all this stuff, the beeping and the emergency brakes and everything else, and the better bumpers, it’s still tens of thousands of people die, and we accept that. I’ve always assumed that the tolerance for self-driving car death is going to be much, much closer to zero. Is that your assessment or not?

Reid Hoffman:

Well, I think that’s the typical political, lawyerly-

Preet Bharara:

Political? But it’s a psychological assessment.

Reid Hoffman:

Yeah, but it’s a psychological assessment that also comes from a general discourse with, “Look at this heinous thing where a person could die because of it.” And already the technology is broadly here, that if you were-

Preet Bharara:

You got to start with trucks though. I keep hearing people say… This is maybe to your point about the cars and developing advances one by one that there has to be a socialization of the technology, which is not what’s happening with AI, I don’t think. But with self-driving vehicles, once people get used to the fact that there are buses and cars and mass transportation that are automatic, you live a couple of years with that, it’s not as weird to get a self-driving Volvo, right?

Reid Hoffman:

No, exactly. And by the way, in the places where Waymo is deployed, people go, “Oh my God, I prefer using that.” And then for trucks, it’ll be Aurora and other things. So, already today, if all we had was self-driving, we’d actually be much safer and we’d be saving a bunch of lives. Part of the reason for the development time is that it’s not going to start that way. It’s going to be mixed, because, for example, I did a rough back-of-the-envelope calculation: take trucks. If every single truck manufacturer started building AV trucks today, in 10 years is maybe when you get to about 50% of the trucks on the road being AV.

Preet Bharara:

So, one of the issues, to pivot back to AI, one of the issues with respect to mass adoption and utilization of EVs is the lack of infrastructure, charging stations and the like, right? You can have all the oil in the world, but if you don’t have barrels or pipes, good luck. What is the analogy there for AI? Is it computing power? Is it energy? What is the thing that will hold back AI in its most useful and best form going forward, if anything?

Reid Hoffman:

Probably mostly what the industry terms inference compute, which is compute for serving the use cases, to the consumer or the business and all that. That’s probably the thing that we [inaudible 00:19:40]. Now, even today, a lot of people talk about AI… it’s in discussion all over the place… and still relatively few people actually really engage it very seriously. And part of the thing that I am trying to get people into is kind of why “Humanity Enters the Chat” is the first chapter, and Superagency is like, “No, go start using it, and try it for things that are really serious.” For example, for me, and Preet, this may be something that, if you haven’t used it, you should, when I have interesting technical subjects that I really want to understand more in-depth, say quantum computing, the difference between photon and electron quantum computing, which most people are like, “I didn’t even know those two things were two things”-

Preet Bharara:

But why not just use a regular search engine?

Reid Hoffman:

Oh, infinite… Let me just finish this example, which is take a technical paper or even not and then say, “Explain this to me like I’m 12,” and it will do that, right? And so you go-

Preet Bharara:

I would say, “Explain it to me like I’m eight.”

Reid Hoffman:

Yeah, that works too.

Preet Bharara:

I wouldn’t go 12.

Reid Hoffman:

And part of what’s amazing is you can start with eight. You go, “Okay, I got that. Okay, 12. Okay, I got that. Okay, 18. Okay, I got that. Okay, 25, like a professor. Okay. Huh. Now this is interesting.”

Preet Bharara:

Yeah, you can tell them to do it in iambic pentameter, right? And it can do that.

Reid Hoffman:

Yes. Or in hip-hop rhyme.

Preet Bharara:

All right, so I interrupted your answer in part because I asked you a multi-part question. Which countries, and we’re going to talk about DeepSeek in a moment. Everybody wants to know about DeepSeek, but on the regulatory front, like a lot of things, like global warming for example, it’s a totally different issue and problem. How much value is there if one country does a decent job because we’re so interconnected, particularly via the Internet and computers and communication systems and economically and in a hundred other ways that I haven’t mentioned? Who’s leading on this? Everyone that I talked to says the EU is far ahead, but isn’t the whole system going to be governed by the lowest common denominator or not?

Reid Hoffman:

EU is far ahead in being behind? I’m not quite sure what you meant by the EU is far ahead.

Preet Bharara:

I’ve heard people say the EU… You tell me. You’re the expert. Some other experts, and maybe they’re idiots and they’re going to be corrected by Reid Hoffman right now, that the EU has shown a greater appetite for focusing on and trying to adopt sensible regulations of AI. True or false?

Reid Hoffman:

Well, I think they’ve adopted a leading position on regulation, but I think it’s actually causing them to be the absolute laggards in actual deployment of AI, development of AI, et cetera. Let me give you the metaphor that I think [inaudible 00:22:27]-

Preet Bharara:

Oh, I see. Yeah.

Reid Hoffman:

… by our European friends, which is: if you look at AI as a World Cup football match between the US and China, and Europe is trying to play the referee, there’s at least two problems. One, the referee never wins, and two, no one really likes the referee. And so sure, they’re ahead in, “We got GDPR out. We got our AI Act out.” And you’re like, “Okay, you already have essentially next to no AI industry.” And, speaking from the various technology companies that are kind of building and racing ahead on this, if you went to the industry as a whole and said, “Well, you can only deploy…”

Matter of fact, I heard an example… I can’t name the company because it was in a Chatham House discussion… but literally I know of a publicly traded internet company, one I don’t have anything to do with, that deploys much worse AI models in Europe because they’re required to get approval for each one, so their product and service to the European audience is at least 18 months behind because of EU regulation. And that’s just the beginning of it. When we’re out there building these really amazing things, if you say, “Hey, you can’t use it without an extensive EU regulatory licensing, refactor, process, everything else,” they’ll go, “Great. We’ll get to that years later as it’s relevant, and we just won’t deploy in Europe.”

Preet Bharara:

I learned something from your answer that you probably didn’t intend to convey, and that is that there are internet companies that you don’t have anything to do with.

Reid Hoffman:

One or two.

Preet Bharara:

I thought you were involved with all of them.

Reid Hoffman:

No, definitely not.

Preet Bharara:

You must be proliferating an amazing… So, if you were advising the EU, and maybe you do sometimes, should they just sit around, twiddle their thumbs? Is everything they’re doing quixotic because they’re such tiny players?

Reid Hoffman:

Well, I think everything they’re doing… most of what they’re doing, most of what the EU is doing… is essentially guaranteeing that the EU will lose out on the AI cognitive industrial revolution.

Preet Bharara:

The upside, they’re going to get no upside.

Reid Hoffman:

Yeah. Both in development, which obviously is important to them, but also even in the deployment for the capabilities of their companies, and part of that… And actually, that’s bad for the world. It’s not just bad for them, it’s bad for the world. It’s part of the reason I try to [inaudible 00:24:53]-

Preet Bharara:

But why is it bad for the world? Because they can just, eventually, they’ll just use American AI technology, no? Won’t they get the benefit of America’s dominance? They just won’t get the profit.

Reid Hoffman:

Well, certainly not the profit. But part of the argument in Superagency is that part of how we create these technologies to be really good, to have the right kind of human agency and humanist results, is we bring humanity at scale into the loop. And I value European culture, the European voices. I think they should be in the loop too. But if they’re unintentionally completely excluding themselves, then it’s like…

This is part of the reason, when I used to go to Davos, which I think I stopped going to 15 years ago or something, part of what I would do on the stage is I’d say… trying to wake them up… “Look, you guys should keep passing these regulations, because then all the interesting technologies will be built in Silicon Valley. And then after they’re scaled, we’ll figure out how to retrofit to your regulations, and you’re handing the whole industry to us, whereas…” And it was really kind of trying to be provocatively sarcastic, so they’d go, “No, no, no. We should have these kinds of startups and scale-ups and technology too,” and get them to do that. Now, the Brits do that, which is great, but the EU especially tends to be massively laggard on this.

Preet Bharara:

Is there such a thing as a consensus within Silicon Valley/the tech world/the subset of it that’s the AI world? Is there a consensus? There’s not a consensus about whether it’s going to be doom, gloom, or utopia, but is there a consensus about what, if any, regulation there should be? I’m guessing not? And how would you describe sentiment?

Reid Hoffman:

Well, what I would say is, so I do these four categories in the book, doomers, gloomers, zoomers, and bloomers. And I would say the consensus-

Preet Bharara:

You’re lucky they all rhyme. Look at you.

Reid Hoffman:

Lucky, deliberate… sometimes you try to add a little pizzazz, give a little bit of entertainment to people’s reading. And so, what I think the consensus in the Valley… Well, no, there’s three threads in the Valley. The doomer thread tends to be not Valley; gloomer, zoomer, and bloomer are actually, in fact, the three threads. The gloomers are like, “Okay, this is going to have this massive impact,” mostly focused on the economic job transition, and, “Oh my god, government has to be in the loop, because how do you have the right will of the people?” And part of the reason why a bunch of my book is directed at this is to say, “Look, part of actually how you serve the will of the people is to get hundreds of millions of people actually engaging with you.”

And actually, in fact, this is one of the things a lot of people don’t realize: classic businesses are actually, in fact, highly customer-focused… especially when they’re dealing with a scale of hundreds of millions of people or more. But the next group, the zoomers, is, “There should be zero regulation. We should go absolutely as fast as possible; regulation is terrible.” And then the bloomers, which is the category I’m in, kind of more closely resembles the Biden executive order, which is like, “Okay, very focused, specific, limited regulations to deal with extreme cases, rogue nations, terrorists, things that could go very, very wrong, and then allow the rest of the iterative development to happen,” but to have discussion, like demand that there’s red teaming for safety purposes, that you have a safety plan, that if the government comes and asks you about it, “Here’s how we’re trying to make sure that the AI we’re doing is well-aligned with human wellbeing at scale, and here’s the negative cases that we’re navigating.” So, those are the three.

Preet Bharara:

I’ll be right back with Reid Hoffman after this. We’ve got to talk about DeepSeek before we run out of time. So lots of people, and you talked about this in the book as well, look at things including quantum computing and economic power and trade and tariff wars as a battle between the United States and China. And there’s a lot of bipartisan support. I’ve heard people say, and I tend to agree with this, the one bipartisan thing that everyone seems to be able to agree on is no one can be hard enough on China. And China in the AI battle, as you alluded to already, is a significant player. So, tell us what DeepSeek is and why it’s freaking everybody the F out.

Reid Hoffman:

So, DeepSeek is an open-source model that was released by a Chinese development group that has characteristics that are close in performance to a leading release from OpenAI called o1. And the thing that freaked everyone out is not just that it came from China… it’s like, “Oh look, they’re fully in the game, and it’s from China”… but that the claim around it was, “We did this for $6 million.”

Preet Bharara:

Right, can you just put that in perspective for people? If that is true, how does that compare to what some other folks that you mentioned have spent?

Reid Hoffman:

Well, roughly speaking, what people are familiar with… what Google, Microsoft, OpenAI, Anthropic, and others are doing… is putting billions of dollars into the computers, and many millions, tens of millions, even a hundred million, 200 million into the compute runs.

Preet Bharara:

So, compared to six million, that’s more, right?

Reid Hoffman:

Yes. A lot more, tons more. So, it’s kind of like claiming, “We have figured out how to make an EV that goes 10,000 miles per recharge, and the batteries are very cheap.” It’s like, “Oh my god.”

Preet Bharara:

All right, so is that bullshit that it was $6 million?

Reid Hoffman:

Yeah, it’s basically BS. Now, it’s BS in two important ways. One is that, as the story is scratched at, there’s clearly a massive-scale compute engagement in the creation of it, which is either a lot of access to ChatGPT, which is a large-scale model, to distill it, or even very big computers to actually run it. Because, by the way, when I talk to experts from multiple AI companies, they say, “Look, yes, we could all do this kind of thing for $6 million as the last compute run on a hundred million dollars of iterative compute. Yeah, we could do this too.” And there were some good innovations. I don’t want to dismiss the efficiency innovations. That was one of the things a lot of people saw, because every single lab downloaded it, started playing with it, started to understand it, said, “What are the things… Oh, we could use this, and oh, we could use this,” as parts of what they’re doing.

And so that was great, but the whole fiction that big-scale compute doesn’t matter… no, it’s like, “No, it proves that it does.” And the whole “It’s massively cheaper”… it’s like, “No, actually, in fact, there’s some good innovations here that add into it.” So, by the way, the technology competition with China is well and truly on, which I think is one of the good derivative points of this, but not that scale is irrelevant and American companies are misspending in profligate ways.

Preet Bharara:

So, we shouldn’t be too worried about this or we should?

Reid Hoffman:

Well, my point of view only changed in that DeepSeek is six to 12 months earlier than I thought it was going to be. There’s no other update to my point of view on this. I think large models training small models is something we’ve known for at least 18 months, if not maybe 24. It’s one of those things-

Preet Bharara:

But that was going to happen. I mean, tell me if this is a very stupid analogy for a layperson like myself to understand. The Sony Pictures movie library, whatever it consists of… let’s say hundreds and hundreds of movies… you could say that cost them billions of dollars because they filmed them, they stored them, there were directors, there were actors, and then North Korea comes in and they somehow hack the library, but that still takes energy and time. Maybe it takes $6 million. And they say, “Look, we have a picture library too.” Is this like that or not?

Reid Hoffman:

There may be elements like that. We don’t know the full story, and I’m not meaning to cast aspersions. It could be like that, but it’s also that there’s a lot of smart technology people… This is one of the things that I’ve been going around telling everyone. It’s like, “Look, the game’s on with China.” And they say, “Oh no, you’re using…” I literally was told today at a class at Columbia, “You’re using China as the bugbear.” It’s like, “No, it’s there. The competition is real. This is not self-justifying for why it’s important that we continue aggressive and directed development of this stuff.” And so, I think the Chinese are enormously talented; there are a lot of tech entrepreneurs, a lot of deeply competent tech people. There are innovations in DeepSeek that the AI labs here are learning from, but what is not the case is, “We figured out how to make the 5,000-mile EV.” That’s just not the case.

Preet Bharara:

So, we’ve just been through, and we’re not done yet with, an enormous controversy with TikTok and how it’s a security threat, and our current president wanted to get rid of it. Now he wants to save it. It’s unclear what’s going to happen, but there’s a broad consensus among American politicians, Republicans and Democrats, that there’s a huge security issue if a Chinese government-backed and overseen app like TikTok, which seems innocuous on its face, is gathering information about everyone and can spread disinformation. What is the lesson of TikTok and those concerns for DeepSeek?

Reid Hoffman:

Well, actually in that case, very similar, because if a lot of people started using DeepSeek as a service, all of that data and usage flows into the Chinese ecosystem.

Preet Bharara:

So, is that bad?

Reid Hoffman:

Well, given that the Chinese are probably one of the world leaders in corporate espionage, if you’re putting in-

Preet Bharara:

Well, I think they’re number one.

Reid Hoffman:

Yes, exactly.

Preet Bharara:

You’re being charitable.

Reid Hoffman:

Yes. And so you’re like, “Well, if you’re using it for serious cases and all the rest, it’s a direct channel with a potential risk line.” And so, it’s one of the reasons why I think it’s quite important to have companies with pretty solid governance that follow data privacy restrictions, such as our companies. And so, I think that’s at least one example of something that’s a substantial threat line.

Preet Bharara:

What about misinformation? I was going to ask you about it generally, but I’ve seen people post these examples with respect to queries of DeepSeek. People have done it with ChatGPT and other things. It seems to be a parlor game for people to try to query one of these chatbots to find the flaw or the bias in how it answers a prompt. And I saw people post the prompt, “What was Tiananmen Square all about? What happened to the kid in front of the tank?” And it refuses to answer, says it can’t answer those questions. Is there anything to those examples?

Reid Hoffman:

Well, 100%. I mean, this is actually part of the reason why it’s important that artificial intelligence is not just amplification intelligence, but American intelligence, because the question around autocratic nations’ censorship being baked into these things is not good.

Preet Bharara:

I want to talk about good things, because I want to think about the positive, and I want to get to your cancer project, because I think that’s incredibly important and a big deal, and it affects so many people. But before we get to cancer, are there other big-ticket… You talk about some of these in the book. Are there other big-ticket… Think big: climate change, space exploration, eradication of disease. Paint the most optimistic, but reasonable and plausible, middle-term future for AI aiding humans.

Reid Hoffman:

So, a medical assistant on every smartphone that runs under $5 an hour, that is better than today’s average high-quality western doctor. And that, by the way, doesn’t put western doctors out of jobs. There’s all kinds of… You can imagine that you’re talking to your… your medical agent says, “Oh, you should go schedule an appointment. Hey, do you mind if I give this summary of information of our chat to the doctor to make things better?”

Preet Bharara:

Fitbits didn’t put gyms out of business.

Reid Hoffman:

Yes, exactly. And so that; a tutor on every subject for every age, for learning anything from fun stuff to skill stuff, all the rest; and then agents that help you do your work a lot better. And all of this stuff is just line-of-sight. Like, for example, as an investor, I can put a business plan into ChatGPT or Copilot and say, “What are the elements of due diligence? What are the different places where the technology is current? What are the areas where this describes future technology,” et cetera, and you can actually get really useful stuff as a research analyst right then. And that’s just for me as an investor; this is true for everything, sales, marketing, legal, all the rest. And I think that’s all visible line-of-sight. And that doesn’t even begin to get to the questions around, “Okay, so I start deploying with a set of agents in what I’m doing,” and essentially all of us become conductors, in some of our jobs, of agents doing work really, really fast and becoming very productive in that.

Preet Bharara:

Well, you talked earlier about… I made some glib remark when you said something about terrorism and crime, and I said in committing them or in preventing them, and you said both. Now, I’ve seen this in my experience, the Securities and Exchange Commission in the narrow area of insider trading has developed all sorts of functions using technology such as it is to spot… When there’s a market moving event, they don’t wait for a tipster to come in. And even since the time I was prosecuting insider trading cases, that technology has gotten better. And they’ve done a pretty good job in that area, I think, of being ahead of crime. How do you think the balance is going to be in other areas where criminals will use AI to rob people and commit other malfeasance?

Reid Hoffman:

Well, I think we’re going to really need to up our defense game. I think already people are getting phishing attacks and texts and emails that are much more sophisticated probably because of using AI in various ways. So, I’d say because it’s open and general access, I think the criminal offense game may be leading some important parts of the defense game, and we need to up our game.

Preet Bharara:

And AI should be able to help us with that.

Reid Hoffman:

Of course.

Preet Bharara:

AI can help us up our game with AI. Is that too metaphysical?

Reid Hoffman:

Yes, exactly. Look, whenever you think, “Hey, there’s a challenge”… with technology at all, but with AI especially… think about how AI or technology can be the solution. So, you think about an agent that is with you on your email or your texts and goes, “Oh wait, this might be phishing. You should really confirm these kinds of elements. This is something you should be much more careful about.” Just on that one, AI could add a whole lot to this.

Preet Bharara:

I made a reference to your cancer project. It’s a new project. I’m very happy about it. Cancer is just… You can’t say enough awful things about it. It has hurt pretty much any family you come across, including my own. Can you tell us what that is and what the prospects are?

Reid Hoffman:

So, Dr. Mukherjee… a celebrated oncology researcher, author, and entrepreneur who has actually been very successful with a number of anti-cancer drugs already… and I were talking about how you put the best of science and the best of AI together to massively expand and accelerate the possibilities of various cancer solutions. And we realized that this is one of the things where, if you do both the science and the AI, you could have a great acceleration both in the number and in the speed at which you’re discovering and deploying possible antibodies and other kinds of things to make this work. And so, we’ve just announced it. We’re kicking it off. We’re very happy with our very early deployments and validations of the technology to say, “Hey, we think this is really going to work,” and we’re running ahead.

Preet Bharara:

These may be premature questions, but I’m curious, would you adopt a strategy of, as an initial matter, focusing on the much more intractable, difficult to cure and solve cancers like pancreatic, or would you look to the greatest value for the greatest number, which might be a less deadly or lethal or intractable cancer, or are you not thinking about it in that way at all?

Reid Hoffman:

Well, Sid is the person who makes these kind of targets-

Preet Bharara:

He’s the scientist.

Reid Hoffman:

He’s the scientist, exactly.

Preet Bharara:

He wears the lab coat, as they say.

Reid Hoffman:

Yes. And I think his fitness function for identifying the targets is a combination of both of those. It’s like, “Well, look, this one has been really unsolved and is actually really important.” Now, if it’s only a couple of people, it’s like, “Okay, we’ll get to it later.” We will get to it. But it’s a combination of, “Okay, this one kills a large number of people and there aren’t really good solutions.” That’s a good reason. And then, “Oh, this one, we could make much better in a much more general case. And even though there may be some solutions that work for 10, 20, 30, 40% of the people, we might add something.”

Preet Bharara:

But is it about new treatments? I’m sure it is.

Reid Hoffman:

Yes.

Preet Bharara:

But is it also about… I mean, these are debates that happen in lots and lots of families… somebody gets cancer and only finds out about it incidentally because they’ve gone to the doctor for something else. And by the time it’s discovered… and let’s assume these are people who are not doctor-phobic. They go to doctors all the time. They’re not afraid of doctors. And let’s assume, for the purposes of this discussion, they have means and resources. They have the ability to get tested. They’re the kind of people who get colonoscopies when they turn 50 or whatever the recommended age is. But by the time it’s discovered, it’s stage four or something terrible. Does AI, can AI teach us something about screening, and when that should happen, and for which populations? Because I feel there’s a lot of debate among laypeople about the protocols for getting tested and screened for various kinds of cancer. Fair?

Reid Hoffman:

So, I think AI can do a bunch, but that’s not our initial focus. Our focus is on this much harder problem of getting cures. But I do think that the notion of monitoring, early detecting, which is obviously super important relative to your ability to-

Preet Bharara:

It’s a little easier, right?

Reid Hoffman:

Yeah.

Preet Bharara:

If you could figure out how to get to the percentage of people for whom early detection would save their life, that, it would seem to me, is a lot easier than a cure for people who have stage four something.

Reid Hoffman:

Yeah. So, that’s another very good case in a number of circumstances. Maybe we’ll get there. Maybe other people will get there. Ours was essentially this: what has otherwise been heroically too difficult, where we can actually find a cure, because there’s a number of cancers where you discover it, and if it’s metastasized or moved already, like blood cancers, anything else, you’re kind of… There’s relatively little that can be done for you.

Preet Bharara:

Reid Hoffman, congratulations on the book. I’m going to show it for the people on YouTube. It’s a very colorful cover. The book is Superagency: What Could Possibly Go Right with Our AI Future? I appreciate not just your optimism, but your balance. Thanks for your insight, Reid Hoffman. Thanks so much.

Reid Hoffman:

Preet, always a pleasure.

Preet Bharara:

My conversation with Reid Hoffman continues for members of the CAFE Insider community. Will AI ever become sentient? We dive into the possibility in this bonus for insiders. To become a member, head to cafe.com/insider. Again, that’s cafe.com/insider. Stay tuned. After the break, I’ll answer your questions.

Q&A

Now, let’s get to your questions. This question comes in an email from Rufus who asks, “Preet, what do you make of the FBI agents’ lawsuits filed this week against the Trump administration?” So, as Joyce Vance and I discussed in the CAFE Insider podcast, this is a big deal, what’s going on with the FBI.

The reporting has been that, after the Trump administration fired about eight or so of the top FBI officials in the entire agency, it looks like they’re coming for more. Among other things, they have distributed to thousands of agents an invasive questionnaire that seeks to find out any connection any agent may have had to any one of the 1,500 or so January 6th prosecutions: whether they did any part of any investigation, whether they signed any affidavits, whether they were responsible for any of the arrests. All of this, by the way, is in anticipation of what people expect is coming next, a mass firing, or in the words of some people, a purge of potentially thousands of agents for having been related, in any way, to the January 6th prosecutions. By some accounts, and according to the lawsuits, the number of agents affected or involved could be as high as 6,000, which is a substantial and material percentage of the entire agent force at the FBI.

So these lawsuits themselves, unsurprisingly, arise from what is claimed to be a violation of the Civil Service Reform Act, which provides various protections for non-political appointees in government service. That includes FBI agents. You’re supposed to get due process according to the law. You’re supposed to get notice and an opportunity to be heard if there’s some allegation of malfeasance or misconduct on your part. Now, the interesting issue here is, with respect to these two lawsuits, those firings, though anticipated, have not yet happened. The purge is not yet here. So, the suits obviously can’t address firings that haven’t yet happened, and they’re not seeking to prevent firings. What they are seeking to do is to prevent the release of the identities of the people who were involved in the January 6th investigations. Now, some of that is a matter of public record if the agent was an affiant in a court proceeding or testified in a court proceeding.

But lots and lots of agents’ names are not in the public domain. And the argument is, given that there’s a Civil Service Reform Act violation either happening or imminent, that in order to shield FBI agents from perceived or expected harassment, and potentially even violence, from members of the public, their identities should remain secret. Again, it’s not yet 100% clear what the Trump administration plans to do with the answers to the questionnaires by thousands of FBI agents, but these lawsuits seek to prevent publication of the list of those who have responded, who were involved in the January 6th investigations and prosecutions. So, these are interesting lawsuits. I think they are more a harbinger of what’s to come. They were a shot across the bow, making it clear that, if and when employment actions are taken against some of these agents, or perhaps all of these agents, there’s going to be a battle in the courtroom, or in many courtrooms around the country.

This question was posted on Bluesky by Richard. He writes, “As a former government official who famously did not quit, and now as a media figure, what guidance do you have for those whose office’s integrity could be questioned, SDNY, 60 Minutes, CBS, Treasury, etc.? Should they quit and protest if their office or employer caves to Trump or stay? #askpreet.” So, you’re talking about a couple of different things, as far as I can tell from your question. There’s a category of people who are asked to resign, or to whom it’s been communicated, either publicly or on social media or in some other manner, that their services are no longer needed. And the question with respect to those people, as it was for me, is, “Do you resign? Do you quit? Or do you force the other party to fire you?” And I made the decision, for various reasons that I’ve discussed over the years many, many times, that when asked to resign, I didn’t.

I insisted upon being fired to make a point and to make a record and to make sure that everyone was clear as to what was going on. And in that category, I also put the now former FBI director, Chris Wray, the now former special counsel, Jack Smith, and a few other people who understood that their time was coming to an end, who had a basis for believing that the way they were put into office and the structure of their office and the protections of their office meant that they should be able to serve beyond one administration as independent and nonpartisan and apolitical law enforcement officials, Chris Wray, as the FBI director, chief among them. In my view, to answer, I think, part of your question, and I’ve said this before, Chris Wray should have insisted on being fired. It’s a personal decision, and I get it, and I respect it, but I think that would’ve been the better course, same with Jack Smith.

Now, there’s a separate category of person that you seem to be talking about also in your question, and that is not people who have been asked to resign or whose jobs are in jeopardy, but people who are safely in their jobs, but who work at organizations where something bad has happened, or their supervisors or principals acquiesce in something that they think is not honorable, not correct, not proper, not constitutional, not legal, not ethical, pick your poison. And your question, “Should they quit in protest if their office or employer caves, or stay?” I think that’s a complicated one. I think it depends on the circumstances. Obviously, people need to put food on the table; they need to pay their bills. It’s not so easy to just cavalierly and blithely say, “If you don’t like something that’s going on, quit and protest.” You have to have the wherewithal and the means to be able to take care of yourself and your family if you choose to leave your job.

But if that’s so, I’m a fan of people voting with their feet, not just speaking up with their voice, and making clear that they oppose something that’s happening at the top of their organization. I’m also a supporter of people who quit in protest when asked to do something that they would find to be unethical, or inappropriate, or against the values of the agency for which they serve. But on the other hand, and this is not going to be satisfactory to anybody because it sounds like I’m a little bit talking about two different options here, if everybody who is a good person, who has good values, who understands loyalty to the Constitution, not to any particular person, but loyalty to the Constitution and loyalty to the public, to the people of the United States of America, if they’re in a public service job, if all of those good people left those agencies, even in understandable protest, you would have an agency full of people who were, I think, too easily bending to the will of someone who is trying to do something bad.

So on the one hand, I understand the value of a protest move, like quitting a job and exiting loudly. On the other hand, I kind of don’t want all the good people and the principled people who believe in the values of the Constitution and democracy and the institutions they serve to walk out the door, because that’s sort of giving the bad actor what the bad actor wants. So at the end of the day, I think it depends on the particular facts and circumstances relating to the person and the facts and circumstances relating to the agency and what the caving is about, what the conduct or misconduct is about. So, I guess my answer, which I already warned you would be unsatisfactory, is it depends, but I think this is a question I’d like to hear answered by some listeners. What do you think? In what circumstances do you stay and in what circumstances do you go? I think it’s not just a relevant question for the past couple of weeks, but it’s going to be a recurring and relevant question for weeks and months and perhaps the next four years.

This question comes in a tweet from Liz, who writes, “Is it worth going to law school at this point?” It’s very expensive, so I think you have to want to go to law school. So Liz, I’m not sure of the spirit in which you’ve asked your question, but I’m going to assume it has to do with the things that are going on, both lawfully and unlawfully, in the country, the kinds of issues we’ve been talking about on the Stay Tuned podcast and on the Insider podcast week after week after week, even though it’s only been two weeks since Trump was inaugurated. And if your question is a version of a question I’ve gotten a lot, which is, “How do you stay hopeful? How do you think about how you can make things better in the country?” then absolutely. If you have an interest in the legal process and legal profession and learning the craft, absolutely, wholeheartedly, this is as good a time to go to law school as any.

I still believe, quaintly, that there are many, many ways to help your fellow citizens, to improve the world around you, to improve public safety, to ensure democracy persists and thrives in the best traditions of the United States of America. So, I don’t think that going to law school is the only way you can do that, but I think it’s an important way and an impactful way. If you ever have the privilege and the power of access to our judicial system as a member of the bar of any state, you have an inordinate amount of power and responsibility to effect the kinds of change that, in good faith, you want to achieve. That was true 10 years ago. That was true four years ago. That’s true today, and it’ll continue to be true. And by the way, that’s not just me saying it. I think I’ve mentioned on the podcast before, my oldest child, my daughter, is at this moment receiving acceptances from various law schools and plans to go to law school in the fall with my full encouragement and support.

And not to put too fine a point on it, but I think a lot of battles that are going to be really important about what this country is about and the direction that it’s going to go in and the values that we hold dear, those battles are going to be fought at ballot boxes, sure, they’re going to be fought in rhetorical battles, sure, they’re going to be fought on the airwaves and in the media, sure, but they’re really going to be fought in courtrooms around the country, whether it’s birthright citizenship or mass deportations or other immigration laws or trampling on civil service laws or the rights of trans people or other civil rights violations. Fight after fight after fight on things of deep importance will be taking place in courtrooms, and there’ll be lawyers on each side. And if you can be in the ranks of lawyers that are fighting those battles on the side of the right and the good and the Constitution, is it worth going to law school at this point? 100%.

Well, that’s it for this episode of Stay Tuned. Thanks again to my guest, Reid Hoffman. If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics, and justice. Tweet them to me at @PreetBharara with the hashtag #AskPreet. You can also now reach me on Threads, or you can call and leave me a message at (669) 247-7338. That’s (669) 24-PREET, or you can send an email to letters@cafe.com. Stay Tuned is presented by CAFE and the Vox Media Podcast Network. The executive producer is Tamara Sepper. The technical director is David Tatasciore. The deputy editor is Celine Rohr. The editorial producers are Noa Azulai and Jake Kaplan. The associate producer is Claudia Hernández, and the CAFE team is Matthew Billy, Nat Weiner and Liana Greenway. Our music is by Andrew Dost. I’m your host, Preet Bharara. As always, stay tuned.
