• Show Notes
  • Transcript

What does the recent chaos at OpenAI say about the state of—and fears about—the development of artificial intelligence? Kara Swisher is the host of the popular podcasts On with Kara Swisher and Pivot and has been a leading voice on tech and business for three decades. Kara joins Preet to break down the OpenAI saga. 

Plus, the constitutional “oopsie” of the QAnon Shaman’s congressional campaign, disbarring Supreme Court Justices, and federal judges asking each other for advice. 

For exclusive weekly content, become a member of CAFE Insider with the biggest discount of the year – 50% off the annual membership price from now until December 3rd. Head to cafe.com/informed

Have a question for Preet? Ask @PreetBharara on Threads, or Twitter with the hashtag #AskPreet. Email us at staytuned@cafe.com, or call 669-247-7338 to leave a voicemail.

Take the CAFE survey to help us plan for our future!

Stay Tuned with Preet is brought to you by CAFE and the Vox Media Podcast Network.

Executive Producer: Tamara Sepper; Editorial Producers: Noa Azulai, David Kurlander; Technical Director: David Tatasciore; Audio Producers: Matthew Billy and Nat Weiner.

REFERENCES & SUPPLEMENTAL MATERIALS: 

Q&A:

  • “Jan. 6 rioter dubbed ‘QAnon Shaman’ plans to run for U.S. Congress,” Axios, 11/12/23
  • U.S. Constitution, Article II, Sec. 4: Overview of Impeachment Clause
  • “Impeachment and Removal of Judges: An Explainer,” Brennan Center, 5/6/22

INTERVIEW:

  • On with Kara Swisher podcast
  • Pivot podcast with Scott Galloway
  • “What Happened in the World of Artificial Intelligence?” NYT, 11/19/23
  • “Cruise’s C.E.O. Quits as the Driverless Carmaker Aims to Rebuild Trust,” NYT, 11/19/23
  • Aswath Damodaran on Stay Tuned, 12/1/22
  • President Biden’s Executive Order on AI safety

Preet Bharara:

From CAFE and the Vox Media Podcast Network, welcome to Stay Tuned. I’m Preet Bharara.

Kara Swisher:

This isn’t stopping, so how do you want to do it? AI is not stopping. There’s too much money, too much power. The tech industry has a lot of control of this stuff and they’re not stopping, and so how do we want to deal with that? It is not stopping, you have to accept that.

Preet Bharara:

That’s Kara Swisher. She’s a legendary tech journalist who’s been covering Silicon Valley and its discontents for over three decades. The co-founder of Recode, Kara now hosts the popular podcasts On with Kara Swisher and Pivot, which she co-hosts with NYU Professor Scott Galloway. Kara has interviewed the biggest names in tech and has her finger on the pulse of the industry’s most compelling stories. That’s why she’s joining me to talk about the recent chaos at OpenAI, the company behind ChatGPT. After a five-day span in which the company’s CEO Sam Altman was fired and then rehired, the stability and direction of artificial intelligence are in question. Kara and I talk about what happened, what it means for the future of AI, and how much AI’s potential dangers should impact its development. That’s coming up, stay tuned.

Q&A

Now, let’s get to your questions. This question comes in an email from Silas who writes, “Hello Preet. I hope life finds you well and may you live forever. The Q Shaman is already out of jail and running for Congress. As a convicted felon, is he allowed to run for office but not vote for himself? That would be a constitutional oopsie if ever there was one.” Well, Silas, thank you for the good wishes, thank you for saying what you said. I don’t know that I want to live forever, but I’d like to live for a long time still, but thank you. So Silas is obviously referring to a man by the name of Jacob Chansley, who became famous after the January 6th riot and insurrection and became known as the QAnon Shaman. He’s the guy, as you may remember, who wore the distinctive horned headdress and face paint as he stormed the Capitol back on January 6th of 2021.

Chansley, the QAnon Shaman, was convicted of a crime, a felony of obstructing an official proceeding related to the January 6th attack. He pled guilty to that charge and was sentenced to 41 months in prison a couple of years ago. He was released to a halfway house last March, and he’s supposed to serve a three-year term of supervised release following the conclusion of his prison sentence. So if you look at Arizona law, it’s my understanding that once you’re convicted of a felony, your voting rights are automatically restored only upon completion of all supervised release. And so since that condition has not yet been satisfied, my understanding is that under Arizona law, he wouldn’t be able to vote. But as you point out, Silas, the QAnon Shaman has announced his candidacy as a libertarian for Arizona’s 8th Congressional District. There is nothing in Arizona law that I’m aware of, or under federal law, or in the Constitution, that bars someone who has been convicted from running for office.

So, I congratulate you on coining a very good new legal term, constitutional oopsie, which I think captures quite well this idea that you can run for office but can’t vote for yourself. The more famous example of this will likely be, of course, the Donald Trump situation, where he’s possibly going to be convicted of multiple crimes in multiple jurisdictions, and depending on where he’s convicted and what the local laws are like in Florida, he may not be able to vote for himself. Though, as with Congress, so too with respect to the presidency: Donald Trump, even if convicted, will have no constitutional, legal, or statutory bar against running for and winning the presidency once again. Constitutional oopsie indeed.

This question comes in a tweet from Twitter user @jamesalancaster, and yes, I’m not yet used to calling it X, I will continue to call it Twitter indefinitely. The question is, “Can SCOTUS justices be disbarred for unethical conduct? If so, would that prevent them from hearing cases?” So as far as I know, judges who are lawyers are in every instance subject to the bar rules like every other lawyer, and if they engage in conduct that’s extremely unethical or criminal, or crosses lines the various bars have put in place, they should be subject to disbarment proceedings just like anyone else. And of course, yes, if you are disbarred, I believe you’d be prevented from hearing cases. But I think the more likely result or consequence of crossing serious ethical or criminal lines for a sitting judge, including Supreme Court justices, is impeachment. And we’ve talked about this before. That’s set forth in the Constitution at Article II, Section 4, which says the president, vice president and all civil officers of the United States shall be removed from office on impeachment for, and conviction of, treason, bribery, or other high crimes and misdemeanors.

And the way that the constitutional provision has been interpreted for a very long time, federal judges clearly qualify as civil officers of the United States. So it happens from time to time that a federal judge is impeached under this provision. According to a report by the Brennan Center from 2022, “With respect to federal judges since 1803, the House of Representatives has impeached only 15 judges, an average of one every 14 years. And only eight of those impeachments were followed by convictions in the Senate.” And you may be wondering, what about Supreme Court justices as well? The Brennan Center report goes on to say, “Justice Samuel Chase is the only Supreme Court justice the House has impeached and he was acquitted by the Senate in 1805.” So there is a process in place that would cause a justice not to be able to hear cases, be removed from office, be removed from their seat, but as I just mentioned, it is exceedingly rare and unlikely to happen in the near future.

This question comes in an email from David who asks, “Do federal judges typically consult with other judges before making rulings on high-profile, complex cases, the Trump cases, for example? I’m wondering if Federal Judge Aileen Cannon relied on advice from other judges or academics whom she consulted regarding her controversial decision to appoint a special master to review evidence obtained in the Mar-a-Lago search, which was eventually overturned by a federal appeals court. And more recently, Judge Tanya Chutkan’s partial gag order, which the judge just decided to restore unless a higher court intervenes. I’m a longtime faithful listener and CAFE member, thanks for the great lineup of guests and outstanding show.” These are very, very smart questions and insightful ones.

Now with respect to the particular examples that you mentioned, Aileen Cannon and Tanya Chutkan, I don’t know, have no idea, and have no way of knowing if they relied on advice from other judges or folks. As an initial matter, judges rely principally on their own study, experience, and understanding. Secondarily, they rely very heavily also on their law clerks. In my experience, people I know who are judges, friends and colleagues, and people I have appeared before, do from time to time call upon a colleague, particularly if they’re a new or novice judge, and may present a hypothetical scenario or maybe even talk about what they’re thinking about in terms of a sentence for someone, because other judges have a lot of experience and they bounce ideas off of them. There’s nothing improper about that, nothing inappropriate about that. My sense though is it doesn’t happen all that often, and as judges become more mature and more seasoned, they consult with colleagues minimally. I may be wrong about that, and if I have judge friends who are listening, send in a note and correct me.

It’s an odd kind of thing. I make this point in a similar context in the book I wrote a few years ago. It’s unlike many other professions, including specifically the legal profession, where most people have the opportunity to learn and grow from watching their colleagues, and not only watching their colleagues, but also watching their adversaries, and also calling upon them with questions they have about how they should do their job. I, for example, as a partner in a law firm, now routinely talk about my cases with my fellow partners and lawyers and ask them questions. I will sometimes ask people outside of the law firm about their experiences with certain kinds of cases and certain judges, and how we might best proceed and what the best practices are. That’s a normal, everyday part of most workplaces.

What’s sort of interesting about judges is that, obviously, judges can learn and improve themselves by reading their colleagues’ opinions, because those are published for the public. But by and large, once you become a judge and you wear the robe, and particularly if you have life tenure in a federal court, you never again walk into another judge’s courtroom. So you don’t get the benefit of seeing how another judge may handle a witness, how another judge may engage in a sentencing proceeding, how another judge controls the courtroom. All of that learning, in some ways unfortunately, stops at the time you’re confirmed and sworn in as a federal judge.

In almost every other profession, as I mentioned, you get the benefit of watching other people do what you do to see how to do it better. If you’re a journalist, if you’re a TV anchor, if you’re an athlete, you get to watch other players, you get to watch other journalists, you get to watch other lawyers practice their craft, and you learn from that. And in fact, if you didn’t do those things, you’d be committing a form of malpractice. I don’t have a solution to the problem, it’s just sort of an odd predicament that judges are in that they become more insular and separated from other people who practice their profession after they become judges and have life tenure. But it’s an interesting question you ask, and maybe it’s the case that judges should do it more often. I want to think about that some more.

I’ll be right back with my conversation with Kara Swisher.

THE INTERVIEW

 

Preet Bharara:

The tech world was thrown for a loop when OpenAI’s CEO Sam Altman was fired suddenly and then rehired just days later. Renowned journalist Kara Swisher joins me to break down what happened and what it means. Kara Swisher, welcome back to the show.

Kara Swisher:

Thank you. It’s great to be here, Preet.

Preet Bharara:

I’m very delighted to be talking to you for a lot of reasons, but primarily this week I thought no one better in America or in the world or frankly in the universe to talk to us about all the ups and downs and crazy happenings in AI generally, but then specifically at OpenAI. And people may be following the story a little bit, that one of the founders, the CEO of the company, Sam Altman, was fired then he was unfired, questions about the board, about the structure, large questions looming with respect to that company and the whole industry at large about AI and the future of AI. And I know you’ve been tracking it very closely.

Kara Swisher:

Yes, there is no one better. I broke quite a few of the stories, so happy to help.

Preet Bharara:

Well, thank you for your modesty.

Kara Swisher:

Well, when you won a case, did you go, “Oh, it wasn’t anything”?

Preet Bharara:

I don’t remember now. It’s been a long time.

Kara Swisher:

True, that’s true.

Preet Bharara:

Can we just remind everyone, OpenAI known mostly to the general public for one of its products, ChatGPT, there are other products on the horizon. They kind of burst on the scene, although people in the know like you who have been covering the industry and covering tech for a long time have known about the company and have understood the company. So let’s go back to the beginning. OpenAI, a nonprofit?

Kara Swisher:

It was initially, it was started by people like Elon Musk and Sam Altman and many others because they thought there were too many big companies that were going to dominate AI and they had concerns about its development and the commercialization of a product that could impact humanity. So it was founded in good thoughts, I guess, in terms of the tech industry.

Preet Bharara:

In about 2015, is that right?

Kara Swisher:

Right, yeah.

Preet Bharara:

So, in the universe, a fairly young company, nonprofit or not?

Kara Swisher:

It is.

Preet Bharara:

But then in 2019 it started a for-profit arm?

Kara Swisher:

Yes, it had to.

Preet Bharara:

Why did it have to?

Kara Swisher:

Because it costs so much to do the compute. The reason why big companies dominate here is the compute is really expensive, and so they thought they would rely on the donations of people, and that wasn’t going to be the case. I think Elon gave $100 million, but these are billion-dollar computer systems, billions and billions of dollars that it takes to do the compute here, and so only big companies could compete. And so they needed to have a for-profit arm to take their products… It was a little like what happened to Mozilla, which created a for-profit arm, although not as big as this one, but a very similar thing. It started off with these lofty purposes and then realized pretty quick that someone has to pay the bills.

Preet Bharara:

So we went, obviously, from a $0 valuation in 2015, before it existed, up to, I guess as of about last week, something on the order of $90 billion.

Kara Swisher:

Yeah, somewhere in there, in the fundings, because they also brought in funding. Microsoft was the principal one, but some venture firms too.

Preet Bharara:

Microsoft to the tune of, what, 12, 13 billion?

Kara Swisher:

Well, it included compute. It wasn’t precisely just cash, it was use of compute, and it hadn’t given everything over. And that was one of the things Microsoft was holding over the head of this company: it didn’t have to keep helping, essentially.

Preet Bharara:

By the way, just while we’re on it, and we’ll get back to the drama in a moment, the release of ChatGPT, was that timely? Should that have happened earlier? Did that happen too soon?

Kara Swisher:

Well, no, it was when the product was ready, right? So they wanted to get ahead because everybody else was working on this. The minute a Google comes out with it or an Amazon or Meta or whatever, Meta/Facebook, they were going to be in trouble. And of course, Microsoft’s addition of the money, they had lost out on search, Bing had lost out really badly to Google, and so they really needed to get a product out there to be the leader. It was really important.

Preet Bharara:

So let’s set the stage for the weirdness of last week. It’s founded as a nonprofit, it then starts a for-profit arm, as you said, because it had to, increases in valuation to the tune of tens and tens of billions of dollars. But what was the corporate structure and who was on the board? It still had the structure of a nonprofit even though it was a highly profitable company?

Kara Swisher:

Right, yes, exactly. And so what it did is the profit arm was under the nonprofit, so the nonprofit board kept monitoring it. Again, this happened at several other companies, Mozilla is the one that comes to mind most, which was the Mozilla browser, and it licensed the use of its browser to Google. And so the profit arm was under the nonprofit arm and it was controlled by the board of the nonprofit company, and there was no board for the profit company, it was just the CEO, Sam Altman.

Preet Bharara:

And the board of the nonprofit that ran the for-profit part, fairly small and somewhat academic oriented?

Kara Swisher:

Very much so. Well, it was a bigger board until recently. They had Reid Hoffman on it, there was a woman named Shivon Zilis, and Will Hurd of all people was on it. And so I think it was a nine-person board, so that’s a little better from a board perspective. And then they left for various reasons. Reid had started his own competing company, for example, and I don’t know why Will left. So that was a much smaller board, which included Sam and people that he had worked with. And then there were these three other board members, the people who were sort of, they’re called decel people, the ones that were more concerned with the problems of AI, the P(doom) people, who had gained control. And they couldn’t decide on new board members, is the problem. And so, someone compared it to the Supreme Court: it was even, and then it wasn’t even, and so they got to pass some laws that the others didn’t like, essentially.

Preet Bharara:

Let’s fast-forward to about November 17th and out of nowhere, the board fires the CEO, who by the way, maybe you should mention for a moment what his reputation was within the company and within the industry the moment before he found out he was being fired.

Kara Swisher:

He was beloved. I mean, he had his issues, like everyone has their issues, Preet, but he certainly was-

Preet Bharara:

But you and I don’t have our issues.

Kara Swisher:

Yeah, but we have some people that are critics of ours. So he was well-known. I met him when he was a kid, when he was doing this company called Loopt. It was a location-based kind of thing, it didn’t make it. Nice guy. Then he went on to Y Combinator. He’s very peripatetic, he’s got a lot of entrepreneurial interests, including in energy, et cetera, and makes lots of investments everywhere. And he had a good reputation. He became the face of ChatGPT, going to Congress; he spent a lot of time talking about the problems, which a lot of internet people did not do or haven’t done. And so he became the famous face of it. And so he outshone this board, obviously, which probably was an issue in it, and it became, in their minds, too much of a profit-making engine that got too much of the weight. And of course they flexed their power then.

Preet Bharara:

Was there any warning at all to anyone, to investors, to the public, to Microsoft?

Kara Swisher:

No. Well, we probably should have been paying attention because the board was so small. But I think what happened is this one board member, and I’m going to mispronounce his name, Ilya Sutskever, I think is the correct way to pronounce it. He was the chief scientist, and really he’s the inventor of this in a lot of ways. And I mean, there’s a team obviously, but he’s sort of the inventor. Think of him like a Steve Wozniak, and Sam Altman is Steve Jobs, if you want to make that comparison. And he changed his mind on Sam Altman. He had been demoted a little bit within the organization, and I think he was worried, he became worried, and he sided with the doomers essentially. And there were other things at play, it wasn’t just the doom. I think the two women board members certainly had a point of view on the dangers of AI. I’m not so sure about Adam D’Angelo, who’s from Facebook and has run Quora, and he had his own AI company, so there might’ve been some self-interested issues there for him too.

Preet Bharara:

I thought that the firing was accompanied by a statement alleging that Sam Altman had not been candid with the board.

Kara Swisher:

Yes, they did say that.

Preet Bharara:

What was that about?

Kara Swisher:

I have no idea. And we don’t know to this day.

Preet Bharara:

We still don’t know, right?

Kara Swisher:

No, we still don’t know. I think he was trying to get rid of one board member, Helen Toner, for writing this piece that really insulted the company. You really can’t write a piece insulting the company if you’re a board member; you should get off the board, really, pretty much.

Preet Bharara:

It’s a good rule of thumb.

Kara Swisher:

Yeah, and she was very much aligned with this other company called Anthropic, which was a bunch of people who left OpenAI, which is fine, that happens all the time, who had a different point of view on the safety issues. And she wrote this piece, and he was trying to get her removed from the board, and I suspect he might’ve been sneaky in that, which is, what a shock, sneaky board-member things, but they never came up with anything. Were they upset about investments that he made? Were they upset that he visited the Gulf States to get more money for other projects, including a hardware device or a chip? They were thinking of doing chips. He was doing his job, looking around at things, and I think they thought he was acting high-handedly, and I guess they didn’t feel dialed in. Now, this is not the board’s job, by the way. The board’s job is to hire and fire a CEO, that is the board’s job. It is not to run the company, but they felt like they should; they felt that they had more of an ability to do so. And so that’s really what happened in a lot of ways.

Preet Bharara:

To your knowledge, was there any back and forth with the CEO?

Kara Swisher:

No, he didn’t know. I think he was shocked. When I texted him he’s like, “I don’t know what to say.” I think he must’ve understood tensions certainly, but if you have a nonprofit board sitting on a profit board, guess what? You’re going to have tensions.

Preet Bharara:

You have problems.

Kara Swisher:

And so I think he probably understood the tension part, but not the fire. He didn’t think they’d go that far. I think he was probably, “I can’t believe they did this,” although impressed, probably impressed.

Preet Bharara:

So the reaction, first of all, I guess there’s lots of stakeholders here and constituencies, there’s the general public, there’s the investing public, there’s the particular investing folks in the company itself, and then there’s the employees. Let’s take those one at a time. The public reaction to the sudden firing of Sam Altman was what?

Kara Swisher:

“What?”

Preet Bharara:

“WTF?”

Kara Swisher:

“I can’t believe it. WTF, what? What the heck?” I think everybody. I called around that night and that day, and literally very well-known people didn’t know what was going on, including Microsoft. Previous board members, they were all caught unaware. Everyone was aware of tension, but that’s, again, any given day of any board has a tense situation. And so even Microsoft had not found out about it until about a minute before, which shows how bad this board is. You can’t really pull off… It’s not a coup because they’re the board, so you can’t really say it’s a coup, but it kind of felt like one. It was certainly a surprise.

Preet Bharara:

And then the employees did something interesting within days. What did they do?

Kara Swisher:

What they did is, and I had been talking to employees who were also surprised, and so they got together and they said, “First of all, we had a meeting with you,” including several of the top executives under Altman, said, “We had a meeting and we asked for specifics and you wouldn’t give us any.” And they were very unspecific, the board was, and one of the board members allegedly said, “Even if we have to ruin this company, it’s for the best,” which is not a thing you want to hear from a board member, right? Well then get off the board, same person.

So anyway, they met with the board, and then were unhappy with that meeting, and then got together and wrote an open letter. By this time it had been announced that he was going to Microsoft. By the way, Microsoft had offered Sam an entire division to run, to work on their version of ChatGPT essentially, or this technology that they licensed from ChatGPT. And so they said, “We’ll even go to Microsoft,” which is like, “I’ll even go to Jersey City for this guy.” Jersey City’s nice actually now, right?

Preet Bharara:

Yeah, watch it. That’s my state.

Kara Swisher:

Yeah, I know. Sorry, apologies. I hear it because my son loves Jersey City. So going somewhere we don’t want to go, right? Nobody wants to become a vice president at Microsoft if you’re part of an entrepreneurial culture, even though it’s a very good company, but you know what I mean? It was unprecedented and most of the company signed it. I think they had 700 or 770 workers, and one of the signatories was the guy, this chief scientist, who had fired Sam Altman. So that was a little-

Preet Bharara:

Yeah, so explain that, the change of heart on the part of Ilya?

Kara Swisher:

I think he realized he’d made it worse, he had now put power into the hands of Microsoft. I think he realized he had done more damage by his action. I suspect he was the only one who really had pure motives here, although I suspect he was demoted, so probably was unhappy with the situation. So he changed his mind. And so he was essentially saying to himself, “You suck,” looking in the mirror and saying, “You suck.”

Preet Bharara:

And who hasn’t done that on occasion?

Kara Swisher:

Yes. Not me, but go ahead.

Preet Bharara:

Well, I’ve never done it publicly. Privately that’s for another time.

Kara Swisher:

Yeah.

Preet Bharara:

So how did the board react to this reaction? Did they totally not see this coming?

Kara Swisher:

I don’t know how they couldn’t have read the room, you know what I mean? I guess they thought they had the room, but Sam had the room, right? They didn’t. I think they probably didn’t have much interaction with the employees and I suspect that meeting was like the first time they had been exposed to these people. And so they said, “Well, we’re leaving. We don’t like you. We like him, and we’re going to go over to Microsoft and put you out of business essentially.”

Preet Bharara:

Was this a great opportunity for Microsoft?

Kara Swisher:

Oh, 100%. Well, it was a bad thing for Microsoft to be put in this position, and it recovered really well. Satya Nadella was very much part of these discussions, but I think he was surprised that he had no purview into what was going on here, any view into what was going on here. And he immediately moved to fix it. And they didn’t have a board seat, they didn’t have observer status. And I suspect now they will. It’s still-

Preet Bharara:

That’ll probably change.

Kara Swisher:

Absolutely. If I were them, 100%.

Preet Bharara:

So how does someone go from being, because we have not said there was anything illegal or unlawful about the firing, they had the power, there was this odd board structure because of the hybrid nature of the company, nonprofit and for-profit. How does somebody go from being fired one day and five days later rehired?

Kara Swisher:

Well, it’s called a circus. It’s kind of ridiculous, a Silicon Valley dramatic circus. I mean, any given Tuesday at Twitter feels like this. So it’s this bad board, right? This is an inexperienced board that didn’t have what it thought it had. It only had righteousness, their righteousness, which everyone thought was not really a good enough reason for doing what they did the way they did it. If they had a difference of opinion, there were lots of ways to deal with this, right? But instead they decided to try to kill the king, I guess, and missed. And so it put a lot more power in the hands of Microsoft for sure. And now there’ll be a bigger board.

And one board member, Adam D’Angelo, stayed. He was very stubborn and refused to leave. And eventually, in the interest of getting this settled, because you saw all this value pull away from the company immediately, they settled it. There will be a much bigger board, and a much more even-keeled board, with safety issues addressed more prominently, I suppose, and better governance, probably professional board members, people who know how to run a board.

Preet Bharara:

This is a counterfactual I guess, but what would’ve been the fate of OpenAI had Sam Altman not been able to be unfired?

Kara Swisher:

Oh, it would’ve been killed. They brought in a CEO, a perfectly nice guy from Twitch, but I think everyone, all the employees, would’ve left, and then they would’ve been stuck with a contract with Microsoft that Microsoft could have gotten out of, because they wouldn’t have been able to deliver the promises they made to Microsoft. So I think the value would’ve gone out, the investors would’ve left, and all of that would’ve flown elsewhere. They didn’t want this ridiculous clown car board in charge, nobody wanted it.

Preet Bharara:

So I’m going to set you up with softball right down the middle. So people have been saying, and I am on board with the conclusion that AI in all its forms, and we’ll talk about future forms as well, is revolutionary.

Kara Swisher:

Yes.

Preet Bharara:

One of the most transformative technologies in the history of humankind.

Kara Swisher:

Could be.

Preet Bharara:

We believe that to be true. And you have one of the largest players, if not the most famous player, in the industry, in the area, OpenAI, that at the drop of a hat can go from being in that position, in this most transformative technology of our lifetimes and perhaps of our century, to going down the toilet. So here’s the softball: what does that say about the maturity of this incredibly important and somewhat dangerous industry?

Kara Swisher:

Well, imagine if this had happened to Google early on, and there were certainly a lot of people worried about Larry and Sergey at the time, let me say. I mean, they wanted to get rid of them many times. They brought in Eric Schmidt as the solution in that case, and that was a smart thing for the board to do. Imagine if they had thrown them out the window, right? There wouldn’t be Google. There wouldn’t, there wouldn’t. And everyone was loyal to them. And so you could have seen it happening. I think these companies are in early stages, and there are so many companies now, so much funding going on, it’s anybody’s ball, right? It just happens to be-

Preet Bharara:

So it doesn’t really matter, right? Nobody misses MySpace.

Kara Swisher:

No, exactly right. That’s exactly right. MySpace was innovative at the time, but then Facebook supplanted it. It happens all the time. There was a company even before that, what was the name of it? It was even better. It had been offered to be bought by Google and the guy turned it down, turned down the offer to start a social network.

Preet Bharara:

Mistake, yeah.

Kara Swisher:

I know. I remember saying to him, “You are an idiot.” He would’ve been a billionaire.

Preet Bharara:

So is that all to say that we should be sanguine about this because there are enough players in the industry that people fall and people rise and it’s okay?

Kara Swisher:

Well, the worry is there’s too many powerful players, right? It’s now to the victor go the spoils, and the spoils go to the big companies. And I think that consolidation is the issue. Is there enough innovation? Is there a company like a Google to come out of it? Google came out of the Microsoft fight, because they had an opening, they had a lane because of the trial that was happening. And without Google, or without Apple having recovered, there wouldn’t be an app economy. So is there a lane for innovative companies to thrive here? The problem is this takes so much money and computing power. Can there be lots of businesses built off of these businesses the way Apple built the App Store and then there’s Uber or whatever, Airbnb? I don’t know. I worry that there’s not going to be a bunch of independent players. That’s my worry in general about the entire industry.

Preet Bharara:

So you have a worry about at some point over-consolidation and monopolization?

Kara Swisher:

I hate to say that was Elon Musk’s worry and Sam Altman’s worry when they started this company, right? It’s called OpenAI. And I did a very good interview with Elon when he was speaking sensibly about this, worries about big companies, we talked about it all the time, and he was correct about big companies dominating the next age of computing. So, credit where credit’s due, he was talking about it early and often, and so was Sam. And that’s, I think, the worry, and that’s where they broke too: he took money from Microsoft, he was moving in a different direction than Elon wanted. And of course Elon started his own company, which makes Grok. So I do agree there should be lots of people in this industry. I do worry about consolidation, that’s probably my biggest worry, one of them.

Preet Bharara:

Do you think that the government is capable of and is ready to police that kind of thing from the antitrust perspective?

Kara Swisher:

Yes, absolutely, 100%. They’re capable of it, whether they do it or not is another question.

Preet Bharara:

That would’ve been a better question.

Kara Swisher:

They’re doing a lawsuit over search. You heard about that? That was 20 years ago, right? So that’s what they’re on now. So I suspect they’ll get to ChatGPT when we are dead, I guess.

Preet Bharara:

Maybe if they use AI.

Kara Swisher:

Yeah, yeah.

Preet Bharara:

You were saying a moment ago imagine if this happened at Google or some other company, and you’ve already mentioned Steve Jobs.

Kara Swisher:

Oh, it could have happened at Google. They thought they were crazy, they thought those founders were crazy. And Mark Zuckerberg had an opportunity to sell Facebook to Yahoo at one point, and he turned it down, and only because he had the power to do so; the venture capitalists would’ve sold them out in seconds. Trust me.

Preet Bharara:

It’s hard to judge, at those important inflection points, whether it was the right thing to sell or not sell. It’s easier in hindsight for a couple of podcasters.

Kara Swisher:

Sure. But I was there, I was there. I remember them wanting to get rid of Larry and Sergey. Now it’s all sunshine and roses, but there was a cover of Fortune back then called Chaos at Google because of the way it was run. It was crazy, there was all kinds of stuff going on there.

Preet Bharara:

So I mentioned Steve Jobs because some people, and maybe you did this, have also analogized the Sam Altman situation to the Steve Jobs situation. There’s obviously one big difference: Steve Jobs was fired by his board at Apple and then rehired 11 years later; Sam Altman was rehired in about half a Scaramucci, if you use that metric of time.

Kara Swisher:

Yes.

Preet Bharara:

Are there parallels there, or is that a silly analogy?

Kara Swisher:

Well, I think we’re not clear if Sam deserves that title, the Steve Jobs title. That’s a big title. He certainly does a lot of things that look like Steve Jobs, right? He does the praying hands thing. He does the-

Preet Bharara:

Black turtleneck?

Kara Swisher:

Yeah, he does. Well, he’s not down on the Elizabeth Holmes road, but he’s definitely-

Preet Bharara:

That’s a bad analogy.

Kara Swisher:

Well, that’s what she was doing. I think it’s interesting. I think the accomplishments aren’t there yet, right? Steve came back and Apple was almost dead when he took over, after they had run it into the ground after he left. And you don’t remember any of those board members, you just remember his triumphant return. Silicon Valley’s in love with that story, by the way, they love that idea of resurrection and the Jesus-like characters too. So this is a Silicon Valley story. I’m not so sure Sam has proven it yet; Steve Jobs had a record, right? Had a bit of a record. So we’ll see.

Preet Bharara:

It takes some time.

Kara Swisher:

Yeah.

Preet Bharara:

I’ll be right back with Kara Swisher after this. We’ve had on the show a guest who I know you know well and Scott Galloway knows well, and who has been a speaker at Code: NYU Stern professor Aswath Damodaran.

Kara Swisher:

Yeah, he’s fantastic, yeah.

Preet Bharara:

He’s fantastic. And in some of the conversations we had in preparing for this interview, I was thinking about some of the things he said about how at different stages companies need a different kind of CEO.

Kara Swisher:

They do.

Preet Bharara:

Can you address that generally, and then maybe specifically in the AI world? What are the kinds of CEOs we need and how do they need to be different in the future?

Kara Swisher:

Well, some CEOs do make it through, Bill Gates, right? Mark Zuckerberg. But they’re definitely doubted early on. Mark, well, he had control, so it didn’t matter, he just decided to stay. Or Pierre Omidyar left eBay and Meg Whitman came in. So that was a good thing, she did a great job in building it up. And so it just depends on the person, it depends on the control of the board, it depends on the influence of the VCs. And so again, I don’t know what makes a good one, because Mark managed to be a good one, Bill Gates managed to be a good one. Others did not, they stayed in and they were way over their skis and that happens a lot. And again, they often bring in people like they brought in Sheryl Sandberg to help Mark Zuckerberg, right? That was critical. I think Facebook would’ve had a very different story without her there for sure.

Preet Bharara:

There’s the piece by David Streitfeld in the New York Times making this analogy between Steve Jobs and Sam Altman, and one of the things he says is, “One of the most persistent cliches is that of the visionary founder.” Do you agree with that?

Kara Swisher:

I do sometimes. I think that, again, it’s the Jesus coming, although Jobs was very visionary, right? So I don’t know what to say. I’ve just recently been looking back at a lot of interviews I did with him, and he really had a lot of stuff right from the very beginning, and was very prescient. But they liked that idea. Elon was going for that mantle for a while and has sort of taken a different alleyway. But yeah, I think they like that idea of the great leader. I don’t think that’s a new and fresh thing. I think it just doesn’t happen in other industries because they’re not new. And this is a new industry. These are the actual founders, and I suspect Henry Ford, who turned out to be not a great person, and others were much touted at the time they were founding their companies. The founders of anything get that attention, and then later the professionals take over and then it’s not quite the same thing. Although some CEOs, like Lee Iacocca, are remembered, but that was a lot of PR if you really want to be honest.

Preet Bharara:

Was it?

Kara Swisher:

Yeah, I mean, he did some good things, but I always think there’s a lot of people involved in why things succeed. And one time Steve Jobs was mad about the idea that he did everything. Offhand to me, he was like, “What do they think? I’m Willy Wonka and all my staff are the Oompa Loompas?” And of course you can see that now, it’s surpassed him. Tim Cook and the others who were there have built it to, I think, 10 times the value of the company. Everyone was like, “It’s over because Steve Jobs is gone.” It wasn’t over by a long stretch. If you had had that point of view, you would’ve lost a lot of money as an investor.

So I think it’s nice to have that idea, and Steve understood PR in that regard a little bit, all those photos he took hugging a computer, et cetera. He was a master at marketing. But I think there’s a little bit too much in that. I don’t think anyone’s irreplaceable in that regard. And sometimes it’s a problem; Mark Zuckerberg really needs a lot more input from other people, right? That’s been clear for a long time.

Preet Bharara:

I saw someone post on social media recently the following bit of advice to corporate leaders. It was something like, “If you have two candidates you’re looking at for a leadership position and they’re otherwise equal and one is very charismatic and one is not, choose the non-charismatic one.”

Kara Swisher:

Yeah, possibly.

Preet Bharara:

A, because maybe that person has other qualities, but B, that person got as far as the other candidate without charisma.

Kara Swisher:

Oh, that’s interesting.

Preet Bharara:

What do you think of that?

Kara Swisher:

I think charisma is important. I think a feeling of purpose is important. Think about it in government: it’s never really about policy, it’s about charisma, right? There’s a charisma to our candidates. Same thing in legal stuff, I would imagine, right? Isn’t the more charismatic lawyer more successful?

Preet Bharara:

Well, the problem with charisma is-

Kara Swisher:

Rizz, it’s called rizz. The young people call it rizz.

Preet Bharara:

Yeah, my kids had to explain both that to me and also-

Kara Swisher:

You sound so mid.

Preet Bharara:

Drip.

Kara Swisher:

Sorry. Yeah, no, that’s old. That’s old.

Preet Bharara:

Yeah, well, so am I. If you have a lot of charisma, you can also cheat and defraud and fool people.

Kara Swisher:

That’s right.

Preet Bharara:

We mentioned Elizabeth Holmes; there were various junctures in her saga, as I’m sure you know better than anybody, where it should have fallen apart and it should have been discovered that the emperor had no clothes. And then by force of personality and enthusiasm and charisma, as I understand it, she convinced a very distinguished board at her company, but not a board very knowledgeable about the product they were allegedly making. Their doubts were overcome by charisma. And I think that’s also dangerous in politics. Look, Donald Trump-

Kara Swisher:

Charismatic.

Preet Bharara:

I’m going to get a lot of mail about this, whatever you think of him, he has charisma.

Kara Swisher:

He does.

Preet Bharara:

With tens and tens of millions of people, and it can be used for dangerous purposes. One of the reasons we founded our country the way we did, with checks and balances, was to guard against the overly charismatic populist fraud.

Kara Swisher:

That is correct.

Preet Bharara:

So good and bad. I want to go back to this debate and talk about whether it’s a good faith debate, and I think it is, the way we began this conversation: the concerns people have about AI being good or bad, effective or destructive. Is AI dangerous and does it need to be regulated? Is it going too fast? Because my sense is that some of these academics on the board at OpenAI had a good faith concern that maybe too much was happening too fast. Is that right?

Kara Swisher:

I think they weren’t technical. They should have had more technical expertise if they were going to make comments like that; I feel the same way about myself. And people I trust feel like it’s a little more in the middle. I have a lot of respect for, say, Geoffrey Hinton and others, and he’s the one from Google who left saying, “I didn’t realize how dangerous it would be.” I think he was just calling out the real risks; he wants some level of accountability and regulation. And so that’s what he would say, right?

I think most people I talk to think the issues are much more in the middle. And that’s where I would go, that’s where I would go: there are some immediate-term risks around jobs, around social… I don’t want to say social justice, but discrimination. There’s obviously those that we could mitigate right now, that mitigation and regulation would probably bring to heel. Then there are some longer-term issues that everyone on the globe has to decide on. Killer robots, development of pandemics using AI; those have to be global decisions, where everybody agrees. Now, not everyone’s going to agree, but in general, the world has to get behind certain rules of the road for AI. And the idea that it’s going to become self-aware, or that it’s reached this status, this AGI, artificial general intelligence, and it can think for itself: people I have talked to who I have relied on seem to think that’s overblown at this point, at this moment.

Preet Bharara:

It’s overblown that that’s dangerous or it’s overblown that we’ll get there?

Kara Swisher:

Oh, that’s the hope, right? We’re going to get to the moon. Nobody thought we could get to the moon, right? We got to the moon. I think they think they can get these computing systems there, it just depends on how it manifests itself.

I think about it a lot. I was at a dinner party a decade ago where this was the discussion: p(doom), which is the probability of doom, right? It’s p and then doom in parentheses. What is your p(doom)? They used to talk about this, which was of course a typical thing to happen at a Silicon Valley party. And a lot of them thought doom was where it was headed, but these people have been steeped in sci-fi and Terminator and gaming, and they do have that attitude that this is where it inevitably heads. Why not think it inevitably heads to good things, right? Why not focus in on cancer research and healthcare and education and that kind of stuff? And so you immediately do go to the doom scenarios versus the positive ones. So I do think with good regulation, good transparency, and a lack of centralization and consolidation, we can mitigate a lot of this stuff. And anticipate it, by the way, anticipate it.

Preet Bharara:

Yeah. I’ve been thinking about this, and this is not a fully formed thought, but I’m a lawyer, I’ve practiced law, I’ve overseen cases, I comment on the law, I’ve written a book, all of that stuff, right? And one of the themes of my work is that it is very, very difficult to have a perfectly balanced system of regulations and rules and laws, because life is complicated and people are complicated, right? And that’s with a system of human beings who have fairly good reasoning skills and discretion. It is hard to make up a rule that in every circumstance causes the proper outcome. And that’s for humans. Now, think about how difficult it is to come up with a set of rules that governs the behavior of AI entities.

Kara Swisher:

Well, they don’t know. We don’t know.

Preet Bharara:

I’m just worried that it’s very, very difficult. So if you have an AI entity whose goal is to help people, that’s terrific. But you have to balance that against the means and methods of helping people. Are you allowed to cheat and steal? Are you allowed to take from other countries? Are you allowed to do all sorts of other things to help people? It becomes very complicated to construct.

Kara Swisher:

Sure, but that’s how humans operate too, right?

Preet Bharara:

Yeah, I mean, we kind of suck often at it.

Kara Swisher:

I mean, hello?

Preet Bharara:

Maybe the AI will be better. I don’t know, you’re very optimistic.

Kara Swisher:

I’m not optimistic. I’m not a techno optimist and I’m not a doom scroller is what it is. I can see the positives in it-

Preet Bharara:

You’re in the middle?

Kara Swisher:

You can clearly see the positives in it. And I felt that way about the early internet. But what about the car? All right, say with the car or the plane we’d said, let’s not make it, because in 100 years there’s going to be a real climate crisis. What would you do, right? Well, oh, maybe not. But then we’d be stuck not having cars, and whatever you think of cars-

Preet Bharara:

I thought you were going to go in a different direction about cars, not about climate, but humanity was prepared to endure a shocking number of car deaths.

Kara Swisher:

Yes, they were.

Preet Bharara:

And it’s interesting to me, I don’t know what the number is now, 25, 30,000 people in America die every year.

Kara Swisher:

That’s right.

Preet Bharara:

In car accidents. And most people don’t say, “I’m not getting in a car.” You have two plane crashes in 2024, lots of people stop flying. Can you explain that?

Kara Swisher:

No, I can’t.

Preet Bharara:

AI would be more logical about it.

Kara Swisher:

I don’t know why people are scared of so many of the things they’re scared of. The way I look at it, it’s something I’ve spent a lot of time anticipating, and I think one of the problems with tech people is they don’t anticipate anything. I was in rooms with Facebook people around their live feeds and I’m like, “Oh, someone’s going to put a GoPro on their head and murder people, obviously.” And they were like, “Kara, you’re just terrible.” And I was like, “What? Hello?” That’s just an easy one, that’s the low-hanging fruit, the terrible fruit of this tree. A couple of years ago, in 2019, I sketched out a scenario where Donald Trump lost the election, said it was a fraud, and then went on social media to gin up all kinds of anger and ask his people to do something about it. And I remember getting a call from top brass at Twitter saying I was irresponsible for saying that. And I was like, really? Seems like it could happen.

I think that’s really what it is: anticipation of the problems, mitigation of them, figuring that out and being honest about it, forcing tech to be honest about the problems that could happen. And at the same time, really having government bring in experts of all kinds, not just the top leaders in tech, but real academics, researchers, and others with other points of view, so they can at least make a decent decision about what should be regulated and what shouldn’t be. I mean, that’s supposed to be the way government works, right?

Preet Bharara:

Yeah.

Kara Swisher:

That’s the hope.

Preet Bharara:

Let me ask you this, I haven’t asked you this question but I’ve asked other people this question.

Kara Swisher:

Sure.

Preet Bharara:

So as I said a minute ago, we tolerate tens of thousands of deaths due to automobile accidents and we don’t give up the automobile. We tolerate far fewer in the air, for various reasons that we don’t really understand.

Kara Swisher:

Yeah, they seem more horrific, right?

Preet Bharara:

There’s a certain psychology operating there. So in the yet unconquered area of driverless cars, automatically driving cars, do you have a prediction of how many casualties-

Kara Swisher:

A lot.

Preet Bharara:

We would be willing to accept to have that?

Kara Swisher:

A lot.

Preet Bharara:

You think so?

Kara Swisher:

Yeah, I do. I do.

Preet Bharara:

You don’t think it’s more like airplanes?

Kara Swisher:

No, I don’t. Because I think once people start using them, they get it. Once you start using, it’s just like-

Preet Bharara:

Have you been in one?

Kara Swisher:

Oh, many times, all the time. When I’m in San Francisco, I love it, I love it.

Preet Bharara:

How is it?

Kara Swisher:

Great. There’s been one accident, of course it was terrible, but the original accident, the one that caused Cruise to pull off the road, was caused by a person: a person hit a pedestrian with a car, and then the pedestrian fell into the path of this thing and it didn’t know what to do, right? And so the problem is the intersection of humans with machines. That’s the real issue here: there are humans driving and cars driving. And humans are unpredictable, you don’t know what they’re going to do, right? So you can’t anticipate every one of their little situations. You can eventually, but not today.

I find them wonderful. I really like them. I think they’ll save a lot of lives, but I think people can’t think of that in the bigger picture. They’re like, “What do you mean, save lives?” Right? Not everybody knows someone who died in a car accident, but the numbers are staggering. I just don’t see any other way to solve a problem like this besides getting people out of cars they drive themselves. And I know they like to drive cars, that’s very different, and then we have to make a societal decision. But in general, when I ride in them, I feel very good about them. I’ll tell you one thing, I’ve been riding in them for 20 years, they keep dragging me into one, and a lot of that was in test areas, parking lots and things like that, and I ride in them on the streets of San Francisco now. And you can see the development has gotten so good. It’s certainly not perfect, but it’s gotten good.

And I am nervous to say so because of this terrible accident in San Francisco, but while that accident happened, hundreds of people were killed on the road by people in car accidents. And you just can’t visualize that, right? Instead, you focus on this one horrific accident. You’re just not going to prevent every accident, it’s just not going to happen. Same thing with electricity, right? Lots of people get electrocuted, but would we turn off the lights? No, you just don’t. And that’s an easy call to make, correct? That’s an easy one. Or buses or trolleys. If you go back in history, all these people were run over by trolleys back in the day, and there was all kinds of hue and cry to stop them.

By the way, let me just make that last point, this isn’t stopping. So how do you want to do it? AI is not stopping. There’s too much money, too much power, the tech industry has a lot of control of this stuff and they’re not stopping. And so how do we want to deal with that? It is not stopping, you have to accept that.

Preet Bharara:

Well, that’s what makes the decision by the board, the initial decision by the board at OpenAI, kind of interesting, because it’s not like they were the only game in town. There’s this free rider problem, or some other psychological or technical term you can apply to it. If we put the brakes on ourselves, that doesn’t put the brakes on anyone else.

Kara Swisher:

That’s correct.

Preet Bharara:

And lots of other people are operating here too. And so just by force of technology, as you know better than most, of necessity, the technology always wins, right?

Kara Swisher:

It does. It always does. I just finished my memoir, and one of the examples I gave was: imagine we were on the beach at Kitty Hawk back in the day, when the Wright brothers were doing their tests. You watch it take off, and it didn’t fly very far and it wasn’t very high, it didn’t do a lot. I forget the amount of time it flew, but it was short. Are you going to be the person sitting on that beach saying, “Well, this is going to be dangerous,” or, “That wasn’t high enough, this sucks”? You just can’t do it.

Preet Bharara:

It’s so funny you say that. I think I wrote my first paper, in third grade, about Kitty Hawk, so I remember it was in 1903 with the Wright brothers, one of the few things I remember from third grade. And I’m reminded of that a little bit, so it’s fascinating that you mentioned it, when people are looking at what AI can do in terms of photography and film, and they’re saying, “Oh, that’s terrible, I can tell the difference. Why does that lady have six fingers?” That’s what it can do today.

Kara Swisher:

Right, oh no, this is very powerful.

Preet Bharara:

Give it a month, give it a year, give it three years.

Kara Swisher:

Yeah, it’s very powerful.

Preet Bharara:

It’s very, very powerful.

Kara Swisher:

But then it’s sort of like nuclear energy, right? Right now there’s a renaissance in nuclear energy, and, speaking of Sam Altman, he’s got a company that’s working on small nuclear devices to put in the home, where it would heat your home. It’s much safer than people have been led to think, but they conflate it with nuclear bombs and weaponry. And as Oppenheimer has so perfectly reminded us, the devastation is vast. So which way would you want it, to be a tool or a weapon? That’s the constant push-pull of technology. It is often both, right? It is often both. A knife, what’s a knife? I’m sure you’ve prosecuted these things; it’s an attack vehicle, it also cuts your bread. What is it? There’s a very famous line from Paul Virilio, I think his name is: “When you invent the ship, you invent the shipwreck.”

Preet Bharara:

Yeah, no, look, take something even simpler than a knife: water. I mean, now we’re getting a little, maybe, overly silly, but you need water to live, and water kills a lot of people and can be devastating.

Kara Swisher:

That’s right.

Preet Bharara:

You can have a tsunami that kills 220,000 people in the Pacific. So yes.

Kara Swisher:

Well that’s nature and she’s pissed. So that’s just the way it goes.

Preet Bharara:

Yeah. We should stop pissing her off. I want to ask you one more question about regulation. Did you have any reaction to or view of, or the people you speak to, do they have any strong reaction to Biden’s executive order relating to AI?

Kara Swisher:

They liked it.

Preet Bharara:

They did?

Kara Swisher:

Yeah.

Preet Bharara:

What’d they like about it?

Kara Swisher:

Well, first of all, Congress should be doing this, but of course it can’t agree on lunch; they’re busy with Hunter Biden, which is our most important issue of the day. I think it was very even-handed: it was open to smaller innovation and research, it was talking about responsibility and safety. It’s certainly not enough, but it was all steps in the right direction. And it established that at least the US was doing something about it, as opposed to having never passed any legislation during the internet era, where Europe took the lead, which is not enough. So I think most people thought it was very even-handed and directionally correct, and that it celebrated the importance of it.

While I don’t always go along with tech people on this issue, it’s important for it to be a democratic, open process versus what’s happening in China. They’re investing a lot in this, and they will dominate if need be, and they’re very good. And so what kind of internet do you want? What kind of future of AI do you want? Do you want it to be a surveillance one? Do you want it to be one with all kinds of ways to protect ourselves? So I think most people liked it. Again, the only issue is, is it enough? An executive order doesn’t really make anything happen, it’s just a suggestion.

Preet Bharara:

Do you want to give us a preview of your memoir and why you chose to write one?

Kara Swisher:

Well, I was paid an enormous amount of money. I wasn’t going to write a book and I’ve turned down books for decades, like, “Write a Yahoo book. Write a Google book.” I’m like, “Ugh, I’d rather not.”

Preet Bharara:

You don’t want to write the Sam Altman book?

Kara Swisher:

I’d rather go and watch Law and Order, right? Yeah, exactly.

Preet Bharara:

You’re going to let Walter Isaacson write that one?

Kara Swisher:

Oh, no, no, no. I didn’t get the chance to do the Elon book, and I’m so glad I didn’t. He’s in the book though, for sure. It’s my take on 30 years of covering these people, and I was there at the beginning, so I’m going to tell you what they were like. It’s called Burn Book, which gives you an idea of what I think. But the subhead is “A Tech Love Story,” because I love tech and I’m very disappointed with what the boys have done with the place, as they say, and with the opportunity. So it’s about that. And I think you’ll like it. I maintain a love of tech, while at the same time saying power concentrated in the hands of certain crazy people is a problem for our world, and it always has been. And propaganda, narcissism, et cetera, have always been a problem in our history, and with these tools and this power and this money, they need to be held to account. So that’s that.

Preet Bharara:

Well, we’re looking forward to it. When is that out?

Kara Swisher:

It’s also funny, Preet. It’s very funny, you’ll laugh.

Preet Bharara:

I was hoping it would be funny.

Kara Swisher:

It is funny. There are parts, there’s one Google baby shower, that are going to make you laugh out loud. February.

Preet Bharara:

Do you settle scores?

Kara Swisher:

I think you’ll see.

Preet Bharara:

That noise is hard to put in the transcript.

Kara Swisher:

Yeah. Yes, yes I do. No, I have my opinions, they can have their opinions. How’s that? It’s out in late February. February 27th, I think.

Preet Bharara:

Okay.

Kara Swisher:

Cool.

Preet Bharara:

Do you think we can regulate Scott Galloway or is the government not capable of doing that?

Kara Swisher:

Never. I’ve tried so hard and I’m about to go do that right now actually. And we’ll see how many penis jokes I can stop.

Preet Bharara:

Oh boy.

Kara Swisher:

Can we have him arrested? Preet, can you get a big job again and put him in jail?

Preet Bharara:

I can try, I can try, I can try. There has to be evidence. We have to think about that.

Kara Swisher:

Well, there’s lots of evidence.

Preet Bharara:

Kara Swisher, my friend, thank you so much. It’s always a treat to have you on.

Kara Swisher:

Thanks, Preet.

Preet Bharara:

Well, that’s it for this episode of Stay Tuned. Thanks again to my guest, Kara Swisher. If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics, and justice. Tweet them to me @PreetBharara with the hashtag #AskPreet. You can also now reach me on Threads, or you can call and leave me a message at (669) 247-7338. That’s (669) 24-Preet. Or you can send an email to letters@cafe.com. Stay Tuned is presented by CAFE and the Vox Media Podcast Network. The executive producer is Tamara Sepper. The editorial producers are David Kurlander and Noa Azulai. The technical director is David Tatasciore. The audio producer is Nat Weiner, and the CAFE team is Matthew Billy, Jake Kaplan, and Claudia Hernández. I’m your host, Preet Bharara. Stay tuned.