
Cyber Space is the newest podcast for members of CAFE Insider. Every other Friday, host John Carlin, the former head of the Justice Department’s National Security Division, will explore issues at the intersection of technology, policy, and law with leaders who’ve made an impact in the world of cybersecurity. 

In this inaugural episode, Carlin speaks with Alex Stamos, Facebook’s former Chief Security Officer, who led the company’s investigation into Russia’s manipulation of the 2016 presidential election. Stamos was previously Chief Information Security Officer at Yahoo when the company dealt with a series of cyber attacks that resulted in the breach of billions of user accounts. Currently, he advises Zoom and leads Stanford University’s Internet Observatory. In the episode, Stamos discusses these experiences, addresses the debate over the role social media companies should play in fighting disinformation, and reacts to the biggest tech news, including the President’s intention to ban TikTok and WeChat.

Episode recorded on 8/7/20. 

Preet Bharara:              Hey folks, Preet here. I hope you enjoy this episode of Cyber Space with John Carlin in conversation with Alex Stamos.

John Carlin:                  From CAFE. Welcome to Cyber Space. I’m your host, John Carlin. Today marks the official launch of this podcast. Every other Friday we’ll be exploring the key issues at the intersection of tech, law and policy. I’ll be joined by a range of guests who’ve made an impact in the world of cybersecurity. My guest this week is Alex Stamos. He served as the chief security officer at Facebook, where he led an investigation into Russia’s interference in the 2016 election.

He was previously the chief information security officer at Yahoo, where he dealt with a number of major cyber attacks from nation state actors. Today he teaches at Stanford and leads the university’s Internet Observatory. He also recently took on a role with Zoom, helping the company with security challenges brought on by its exponential growth during the pandemic. There is much to discuss and I’m thrilled to have Alex Stamos on this program.

Welcome Alex. Great to be talking to you again. And I wanted to start really with your background a little bit. You’ve had a long career in tech, including time as a successful entrepreneur and executive at Yahoo and Facebook. Although as we’ll get into a little bit more, sometimes I wonder if hiring you means instant crisis, but let me just take you back a little bit. Is this the path you envisioned when you majored in electrical engineering and computer science at UC Berkeley?

Alex Stamos:                Not exactly. I was always interested in computer security, but like other security people of my generation, most of the training that we got was very unofficial, I think you would say. There was really no good way, if you were growing up in the eighties and early nineties, to learn about security in a way that was completely legal and allowed. So I did some things on a Commodore 64, then a PC, starting with a 300 baud modem, hanging out with friends and the like. Back then, security stuff was just kind of fun, right? Like going and breaking into BBSs and, you know.

So a BBS is pre-internet. If you wanted to hang out with your friends online, you dialed into what’s called a bulletin board system. So you dial in with your modem, and often people would be interacting asynchronously, because a lot of BBSs only had one or two phone lines. So you could go write messages while you’re dialed in, go off, and come back 24 hours later to see what people have done.

But also, what was big for kids in those days was trading video games and tips on how to crack them and such. And that’s actually how I first learned how to do some reverse engineering: breaking the copy protection on my Commodore 64 games and learning all that from BBSs.

John Carlin:                  Was there a particular game you remember cracking?

Alex Stamos:                Yes, actually. There’s a really fun game called Red Storm Rising, which was a submarine simulator based on the Tom Clancy book. It was the best Commodore 64 game and also very expensive. I got myself a copy of that. But at the time, again, it was just kind of this innocent time when doing this kind of stuff was not incredibly impactful on people’s lives. And technology was just this kind of side thing, right? Like, most of my life did not revolve around tech, even for a super nerd. It’s just kind of amazing to think what has happened since then, that now technology plays such a much more central part of everybody’s life. Especially for somebody who’s interested in it like I was.

John Carlin:                  WarGames. This is 1983, I think, the movie starring Matthew Broderick, about a kid who hacks into the Department of Defense and nearly starts a nuclear war, spoiler alert. But did you see it, and what impact did it have?

Alex Stamos:                I did, yeah. I mean, the two biggest, most impactful movies for me in the computer space were WarGames, which is exactly the kind of stuff that me and these kids who’d go to these 2600 meetups … So there’s a magazine called 2600: The Hacker Quarterly. I’m sure you’ve seen it. Do you know where the 2600 comes from? This is actually a quiz I give my students.

John Carlin:                  I do not.

Alex Stamos:                There was a man named Captain Crunch, actually, who I believe also went to Berkeley, back in the sixties, who figured out that the whistle in the Captain Crunch cereal box made a perfect 2600 Hertz tone, which turns out to be the tone by which the local AT&T office would signal the long distance switch. If you used that whistle, you could then steal long distance calls.

So anyway, this is exactly the kind of stuff that we’d talk about, and WarGames. He does what’s now called war dialing, which is, he just goes through and dials a bunch of phone numbers until he finds computers. And that’s exactly the kind of thing that a bunch of kids would do in those days. Again, with no malicious purpose, just to see what was out there and what was hanging out. And it was a pretty good time. The other movie that really had an impact on me was Sneakers, right? Which is the Robert Redford movie in which he and a bunch of his friends have a company of professional hackers who get paid to fix stuff, which effectively became a career goal of mine that I was able to fulfill a little bit later.
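
For readers curious about the mechanics Stamos is describing, here is a small illustrative sketch, not taken from the episode: a few lines of Python that generate the 2600 Hz tone he mentions and write it to a WAV file using only the standard library. It is purely historical color; in-band signaling of this kind was retired from the phone network decades ago, so today the tone does nothing beyond being audible.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # CD-quality sample rate, samples per second
FREQUENCY_HZ = 2600   # the supervisory tone Stamos describes
DURATION_S = 2.0      # seconds of audio to generate
AMPLITUDE = 0.5       # half of full scale, to avoid clipping

def write_tone(path):
    """Write a 2600 Hz sine wave to a 16-bit mono WAV file."""
    n_samples = int(SAMPLE_RATE * DURATION_S)
    frames = bytearray()
    for i in range(n_samples):
        sample = AMPLITUDE * math.sin(2 * math.pi * FREQUENCY_HZ * i / SAMPLE_RATE)
        frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)           # mono
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

if __name__ == "__main__":
    write_tone("tone_2600.wav")
```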

John Carlin:                  For those of you, you know, at home in pandemic time looking for something to watch, it’s a good time to rewatch both those movies. And Sneakers has an incredible cast, not just Robert Redford.

Alex Stamos:                Right, right. It’s Dan Aykroyd, David Strathairn, and James Earl Jones makes a cameo as, pretty much, you, right? As a high-up guy in the national security establishment.

John Carlin:                  It was pre-National Security Division, but I’ll take it, even though I’m not sure it’s the best messaging for it. WarGames I always find an interesting one. I know we’ve talked about this before, because in addition to influencing a young Alex Stamos, it also caused President Reagan to ask the question, “Could this happen to us?” Apparently it was really the beginning of the first government programs, once the answer came back, “Yes, it could happen to us and we’re not really prepared for it.”

Alex Stamos:                Yeah. It’s interesting, and looking back, that whole time, the relationship between teenage hackers and the government was very complicated and weird, right? Some guys I ended up working with later were part of a group called MOD, Masters of Deception, which was like a pro hacking group in New York that the Secret Service, I think, called the greatest cyber terrorists in the world or something. They turned out to be all these kids who went to the same school on the Upper East Side.

John Carlin:                  Which we’ll get to later. But with the recent Twitter hack, I think you still have that phenomenon going on.

Alex Stamos:                Right. Well, but I mean, this is the difference now, right? Like if you’re 17 and you have these skills and you don’t very carefully stay on the beaten path, you will end up in a very dark place. I think that’s actually … there’s so many more options for young people now. There’s good options. You can participate in online hacking clubs. Like just yesterday, I did a webcast with a thing called Hack Club, which is like this international group of teenagers that are interested in computer security and they all participate in capture the flag competitions. They participate in official bug bounties.

John Carlin:                  Oh, let’s take a step back. Explain what capture the flag competition is.

Alex Stamos:                A capture the flag or a CTF is a competition where you have an artificial network set up and you compete to break into computers and to get the flag, which is usually a file that sits on the computer. That’s kind of an artificial game and it’s something that I actually used to participate a lot in. I’m too old now. I’d have to be in the seniors division of hacking if I did that.

John Carlin:                  They should make that.

Alex Stamos:                They should.

John Carlin:                  They should make that.

Alex Stamos:                They should be, right. [crosstalk] Just like with swimming, they call it masters, right? Like that just means old. You know, if you’re a young person and you want to learn to hack, you can be part of a competition. Schools can have teams now, like at some high schools, it is a varsity sport to do hacking, which I think is incredible and awesome and a great opportunity.

Then if you want to go up against real systems and not in these artificial play lands, you can go up against companies that have announced, “You’re allowed to hack us and we will pay you.” And so there’s all these great opportunities for students, but there’s also a huge downside, which is if you fall into effectively the wrong crowd, you will get pulled into a criminal underground that didn’t really exist when I was coming up.
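
As a quick illustration of the capture the flag format Stamos describes, and not anything from the episode, here is a hypothetical sketch in Python: in most CTFs the flag is just a string in a conventional format, something like flag{...}, sitting in a file on the target machine, and “capturing” it means finding that string and submitting it to the scoreboard.

```python
import os
import re

# Conventional CTF flag format; the exact prefix varies by competition.
FLAG_PATTERN = re.compile(r"flag\{[^}]+\}")

def find_flags(root):
    """Walk a directory tree and return anything that looks like a flag."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    found.extend(FLAG_PATTERN.findall(handle.read()))
            except OSError:
                continue  # unreadable file, skip it
    return found

if __name__ == "__main__":
    # In a real CTF you would run something like this on the target
    # after getting access, then submit the flag to the scoreboard.
    print(find_flags("."))
```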

John Carlin:                  We’ve talked a little bit about the fork in the road for kids today, where on one path you can get paid by companies to find vulnerabilities so that they can fix them, essentially getting paid for gaming, which is the capture the flag or bug bounty programs. And then on the other hand, there’s this dark criminal underworld where you can also end up getting paid, and kids end up faced with a choice of exploring one path or the other. What would you say, which experiences helped shape your thinking about technology and policy and put you, I think, on the path of a law abiding citizen?

Alex Stamos:                A mostly law abiding citizen.

John Carlin:                  Mostly.

Alex Stamos:                Yeah. I mean, I was very fortunate in my childhood. I had very supportive parents, grew up in a very stable household in suburban Sacramento, went to a good public high school, was able to get good grades and go to Berkeley. So there were none of those things that maybe pushed other kids off the path. Plus, we’re an immigrant family, Stamatopoulos, and I had a grandfather, [Andreas Ruckus 00:09:24], from Cyprus, who had really kind of nailed into me the importance of education. That was his big thing.

He had left Cyprus with a fifth grade education, because he was the oldest boy in the family and had been pulled out of school to work on the family farm. So, when you’re the grandson of a literal goat herder, there are those kinds of immigrant expectations on your shoulders. That helped, and I had that opportunity. I could go to Cal and I could take real classes.

There wasn’t a lot you could study in security. There was only one security class there in the late nineties. This is actually an interesting kind of mirror image of what’s going on right now around trust and safety, which has the same problem. But at the time you couldn’t go to a university and learn a lot about security. So I took a graduate class from [David Faulkner 00:10:07], a really famous professor now; at the time he was a brand new professor who had just gotten his PhD. And then I was able to get a good job and do this stuff professionally.

John Carlin:                  So you get out of Berkeley and you end up just three years later, co-founding iSEC Partners, a security consulting firm that ends up becoming a really big success. And as you’re saying, there was barely any coursework. It sounds like you went and found the one course that you could find at Berkeley. How’d you end up in the security space?

Alex Stamos:                I mean, it was always something I was interested in doing professionally. I mean the nice thing about going … and I do always recommend to young people, if you have the ability to go get a real computer science or electrical engineering education, it is still incredibly important for security. Now, some of the best hackers I know dropped out of school, didn’t go to school. But if it is an opportunity for you, it is still important because security is about breaking the layers of abstraction in a system, right? Like the computers are so incredibly complex that even the people that program them have basic assumptions about how they work. It turns out those basic assumptions are often false. They’re good enough to go build something that works. They’re not good enough to build something that works securely. And so studying computers down to the lowest level can often be a really good skill.

But yeah, I graduated in the dot bomb, right after the collapse of the dotcom industry. I had a job offer at a company called LoudCloud, which was Marc Andreessen’s company after Netscape. It was really the first cloud computing company, a cloud computing company that predated all the technology that makes cloud computing profitable now. It was just a little ahead of its time, and I graduated into that, and they ended up pulling back my job offer. Then I have a very distinct memory of doing this trip through Europe with my sister, my post-college trip, and being in the basement of the train station in Florence, Italy, and logging in at a web cafe.

So here I am now, looking back, I’m like, wait. I went to this random computer in this train station in Italy and typed in my password to get my email, but that’s how you did it. And then they said, “Oh, because we are legally required under our PCI, our credit card requirements, to have a security engineer, we’re going to have to give you your job,” which was a great email to get. It was like, “We are required to hire you, so we’re going to, roll eyes, actually follow through.” But that was a great experience, and I did some other stuff. I worked for a company called @stake, which was kind of a consulting company famously full of ex-hackers and a fascinating group of people.

John Carlin:                  Just to pause for a second, that’s an example of, essentially, public-private regulation: the PCI rules say there are certain ways you need to protect credit card information. And, but for that, you would not have gotten your first job in security?

Alex Stamos:                Yeah, I think so. It wasn’t called PCI at the time, just to be precise, or you’re going to get email about it. I forget exactly what it was called. Before the Payment Card Industry council existed, there was some other security standard from the credit card industry, but yes, it was only because they had a regulatory requirement to have somebody with this title that I eventually got the job.

John Carlin:                  So I expect you to be thankful throughout this for regulation.

Alex Stamos:                Oh yes, exactly. Well, I guess we’ll get to that part. I’m definitely pro-regulation, I just think it should be tied to reality. But yeah, and so I worked at @stake, which was a consultancy [inaudible 00:13:20].

John Carlin:                  What year was this roughly?

Alex Stamos:                I was 25. This is 2004. So @stake gets bought by Symantec, and a bunch of friends and I were like, we don’t want to go sell antivirus for Symantec. That was not our plan. So we ended up going and starting our own company, and it was an incredible experience. I mean, we bootstrapped it. We all threw a couple of grand in to buy our first laptops. The first data center was the closet. My wife and I had just gotten married. We moved into a place in the Sunset District of San Francisco, so it was nice and cool.

So I literally had a stack of computers in the closet and then just opened the window for ventilation. She called it the door of din, because we had these machines running continuously in the closet under her shoes. We started this company and it turned into something pretty great. We were just really well timed, because all of us were specialists in application security. And this is the time, in the 2004 timeframe, when the thrust of security interest inside of corporations was switching from what we call network security, which is mapping out networks, getting past firewalls, using vulnerabilities in software that you buy from somebody else, like an Oracle or Microsoft, to the security of the software that companies build themselves.

And so we were just pretty well positioned for that. Then we made some good bets, like we got into mobile security very early, and that turned out to be a really good bet. Being in the right place at the right time is a pretty important thing.

John Carlin:                  So you’re sitting there in your one bedroom with the floor shaking from the door of din, and you managed to pull off getting some heavy-hitting clients, which I think you explained, including Google, Microsoft, Facebook, Amazon. And it sounds like some of that is right place, right time, great expertise, but how’d you pull that off? How’d you get them to trust someone working out of essentially his basement?

Alex Stamos:                Well, all of us had worked as consultants before for @stake, so a couple of those relationships were preexisting. Probably the most important was Microsoft. I had done, at that point, a lot of work at Microsoft for @stake. This is right after the famous letter from Bill Gates out to the entire company, the Trustworthy Computing memo, where he said, “We as a company are going to pivot to build software that is more secure and more trustworthy.”

Speaker 1:                    Well, there’s a lot of invention that has to take place. Take, for example, the issue of privacy. That holds a lot of people back from using the internet. How do you describe to a user, say when they provide their credit card or they do a transaction, how do you describe to them in a simple way how that information is going to be used? So that, say when they come back to that site, it’ll be customized for their use that’s to their benefit.

But unless you can come up with a way of describing it and really assuring them, then people will always under-use what’s possible on the internet. Privacy is a perfect example of something that we need brilliant work, and it’s not just man hours. Take the idea of you get email that interrupts you. Some email is super important and you should be interrupted and other email you should get when you’re not busy-

Speaker 2:                    And some you shouldn’t get.

Speaker 1:                    … And some email you should never get.

Speaker 2:                    Exactly.

Speaker 1:                    And why can’t the computer do that? Well, it’s not just man hours to solve that problem. We have, I think, brilliant people on it, but it’s not like they can write down, “Okay, that’s 500 hours of work.” They’ve got to invent. And that’s why it’s a research topic.

Alex Stamos:                A big part of that was then Microsoft hiring outside hackers to come in and help them fix stuff. And I had done a bunch of that work already, and then the day of the announcement that Symantec was buying this company, Microsoft kicked all the consultants off because they were already in some kind of antitrust lawsuit or something with Symantec. And so there was this opportunity of they knew us, we gave them a huge discount. This is an opportunity for them to pick up some talent cheaply and it worked out really well.

So Microsoft was kind of our cornerstone client for a while, but then we built a reputation. The other thing we did is a lot of public research, and that’s, I think, still an important thing for new consultants and new security experts starting out: to get out there and do original research into new kinds of vulnerabilities, new classes of issues, and then go talk about them publicly, preferably in an ethical way, in a way that actually helps things. So we did a lot of that, and that was a really good marketing and sales strategy for us.

John Carlin:                  And our paths overlapped in a way, although we didn’t meet, with the Aurora attacks. Maybe we can just spend a moment reminding our listeners: could you explain the Aurora attacks and what happened?

Alex Stamos:                We’re talking about early 2010. Our company was doing lots of work for Google. They had originally brought us in to work on the software they shipped. So, Google had made that transition from being a web-only company to shipping operating systems, and they had known that we had done a lot of work for Microsoft on operating system security. And it’s actually quite a different model of the kinds of things you do.

So we were doing a lot of work for Google. We were also doing a decent amount of incident response work for companies that had been broken into, and I get a call from the Google folks and go over. They explained to me that they had uncovered a breach into Google that had lasted for quite a long time. In investigating that, they had found a command and control system that was behind a dynamic DNS name, and they had gone and taken that over and pointed it at their own computer.

When they did so, their goal was to find all of the infected machines inside of the Google network. But what they found was that there were over 20 other companies that had been hacked, too, by the same campaign, and, now that they had control of that intermediate system, they had traced it back to China. They believed that this was related to the Chinese military.

Speaker 3:                    Late last year a student group that criticizes China’s rule in Tibet, learned false emails were being sent in their name. And some of their email accounts provided by Google had been hacked.

Speaker 4:                    I was extremely startled and I couldn’t believe that someone, an unknown stranger could hack into my account so easily.

Speaker 3:                    Last week Google traced the sabotage back to China and says the break-ins were part of a pattern of cyber attacks on human rights activists who criticized China.

John Carlin:                  And pausing there just for a sec, can you recall another case prior to that where a company of the size, scale, and reputation of Google publicly accused China and its government of hacking?

Alex Stamos:                No, this was a landmark moment for a number of reasons. Like you said, one, Google comes out and they publicly admit that they’d been hacked, which was very avant-garde; it was seen as shameful at the time if you had been attacked, and people did not proactively say it. Two, it was against the entire tech industry, right? Everybody at this point had heard about Chinese attacks against defense contractors and government agencies and the defense industrial base, but to go against a huge chunk of Silicon Valley all at once, this was kind of the first public knowledge of that. And the fact that they provided attribution was a totally new thing.

John Carlin:                  And then they not only say it originates in China, they threaten to pull out of the country. What’s your view of both that threat and what actually happened?

Alex Stamos:                The relationship between the United States and China is obviously incredibly complex. From my perspective, the People’s Republic is the place that Silicon Valley ethics go to die. You have these companies that are very high minded in their corporate missions, in the statements they make and the way they relate to democracies, but then when it comes to China, the combination of the skill of the Chinese government in manipulating companies and gaining leverage, the amount of effort the Chinese put into hacking and infiltration, and then the overall economic importance of the country means that, for whatever reason, companies often forget all of the things they have said in other places, and they act in a way that’s completely contrary to their corporate mission.

At this time, Google had a search engine that was available in mainland China that was censored. Now at the time, Google made the argument, and it’s not a horrible argument, that it’s better for a censored Google to exist than nothing. I think this was seen by them as: if they’re going to cooperate with the Chinese on Chinese law openly, to have the PRC then turn its intelligence agencies against the inside network of Google was just a step too far, and they were going to cut that off officially.

My understanding is they turned off that censored version, and then the PRC blocked the rest of Google’s products via the Great Firewall. There’s been this uncomfortable relationship between Google and China since then, even as China is still incredibly important to Google because that’s where most Android phones are built. Google’s Android operating system is very popular there; it powers the vast majority of phones, although often in an unofficial capacity, in a way that’s not licensed from Google. There’s this real complicated back and forth that has continued since then. I think at the time it was a very big statement, and I thought it was also one of the last times I can think of a big tech company really standing up to China publicly.

John Carlin:                  Let me take a step back. We’ve covered a decent span of time now, and you had a quote when you were speaking at the Newton Lecture Series dividing our time into different eras, comparing 2001, when, compared to now, very few people had Internet access.

Alex Stamos:                Security’s gone from being a fun game to a basic life safety issue. In 2001, I don’t know what the number is, but somebody can probably look it up while we sit up here. One, in 2001, you couldn’t automatically solve bar bets like that by just looking at your phone; you’d look it up later. Also, maybe a billion people had Internet access, or 700 million, or something like that. I think at some point in 1999 there were like 100 million people with Internet access. For those people, the Internet was a fun thing where they could do some research, do some reading, and now the Internet is a critical part of the lives of close to three billion people. Security’s gone from something … when the Morris Worm happened and the entire Internet shut down, nobody died. That would not be true today if the Internet just stopped working, or if we had a worm that infected 90% of Internet-connected devices. People would die, or people would lose their jobs, or there’d be mass chaos.

So, security’s gone from something that’s just kind of fun to something that’s a responsibility. That doesn’t mean it can’t be fun when you do it, but you have to sometimes step back and be like, “Oh man, actually you are having a real impact.”

John Carlin:                  Then you referred to, for prosecutors and law enforcement, a case that’s always discussed: the Morris Worm, which was really the first time anyone attempted to apply criminal law to an intrusion. This was self-propagating code in 1988 that shut down the Internet. You said that when the Morris Worm happened, the entire Internet shut down and nobody died; that would not be true today if the Internet just stopped working, or if we had a worm that infected 90% of Internet connections. People would die, or people would lose their jobs, or there would be mass chaos.

Explain that quote a little bit, and where you think we are now compared to where we were when it comes to Internet-related threats.

Alex Stamos:                Yeah, in that speech I was kind of talking about the progression of our profession, of people who work in computer security, or cyber security as you guys say in DC, or information security as we say on the west coast: first a hobby, then a job just like any other IT job, important to support the things people are doing but not super critical, to effectively becoming a priesthood, in that security has become this thing that underlies a huge chunk of our lives. It’s because of the success of the insertion of technology into every aspect of people’s lives, top to bottom, that the same people might be doing the same things, but the importance of what we do around us has completely changed.

Yes, I was using the Morris Worm as a great example because that was a worm, as you pointed out, that was amazingly advanced for the time. It had multiple payloads, it could infect multiple different incompatible computer architectures, and it infected a big chunk of the then-nascent Internet. It was a story among IT people in universities, but nobody died. There was no actual impact. Because of the way we have inserted technology into everybody’s lives, that is not true anymore. From my perspective, the tech industry overall, we’re really, really good at making technology useful for people, to the point where they start to rely upon it, but then we’re not very good at making that technology trustworthy for them.

There are multiple levels to that. There’s the traditional security issue, there are kind of the privacy issues, which are about how you decide to gather up data and use it, and then there’s also what you might call the “trust and safety issues,” which is that we build these technologies where bad things can happen, and we often do so first: we get them out there first, and then we figure out how people can abuse them later. I think that’s a fundamental problem in the structure of how we build technology in Silicon Valley: all of the thinking about the downsides happens way too late in the process, and that makes it very difficult to fix it up.

John Carlin:                  After working at Yahoo, which had one of the biggest thefts ever, maybe the biggest theft ever, of email-related information, by Russian criminals linked to and taking taskings from the Russian state, you then moved to Facebook and were there confronting an unprecedented attempt to manipulate the way individuals think using social media. Now you’re at Stanford. Tell me a little bit about what you’re doing at Stanford, and how you’re working to tackle some of the problems that you observed and confronted firsthand in your different industry jobs.

Alex Stamos:                I’m trying to do a couple of things at Stanford. One is, our team is doing research into the misuse of the Internet, where I’m trying to hit the sweet spot between it being done in a timely manner and being quantitatively and qualitatively supportable enough to really inject a better level of accuracy into the discussion of these abuses. Specifically in the disinformation world, unfortunately since 2016 there has been a belief among people, and by “people” I mean really just kind of mostly the US media, that any kind of disinformation activity is immediately impactful and can have all of these downstream impacts that are maybe immeasurable, and therefore that you should do almost anything to stop it.

That’s not how we can handle any kind of abuse. We have to really understand how these abuses work and what kind of impact they have, so that we can calibrate what our responses to them are. There are a lot of good academic groups doing this kind of work. The problem is the majority of them are publish-or-perish kinds of models, where they have to be in peer review journals and the like, so you’re talking about maybe something coming out a year or two afterwards. We’ve built this team to be able to do, in the short term, what is much more journalistic work: here’s us exploring and uncovering a Russian disinformation campaign in Africa and getting it taken down, so we have impact immediately. That then can turn into a political science journal paper a year or two later by the PhDs who were on that team.

One of our goals is to try to just kind of inject a little more realism into the discussion of these abuses. The second is to expand the discussion of what should be considered the responsibility of tech companies beyond traditional information security into all of these other areas. There’s a parallel here, as I’ve talked about, to where we were in the late ’90s and early 2000s in security, when security was this super specialized field that was off in the corner, that was something you did last, that was not deeply integrated into the product life cycle, and that didn’t have an academic component and wasn’t training undergrads.

That’s where I feel like we are on the broader trust and safety issue. When I talk about trust and safety, we’re talking about abuses of technology, which is generally the technically correct use of a technology to cause harm without any hacking or violation of the rules of the system. Hacking is usually about making a computer do something it doesn’t want to do. Abuse is making the computer do exactly what it is built to do, but the outcome of that is somebody gets hurt.

One of the things we’re trying to do is make that part of undergraduate education, and we’re doing that through a class I’m teaching at Stanford called Trust and Safety Engineering, where my lectures are about all those other things that aren’t really hacking. We have a lecture on hate speech, bullying and harassment. We talk about suicide and self-harm, and suicide clusters online, and people encouraging each other to commit suicide. We have two lectures on child sexual abuse, which I know you’ve dealt with, but most people who haven’t worked in trust and safety or law enforcement have never been exposed to the true horror of what actually happens online every day for children. We talk about terrorism and the like.

We’re putting that class together. We’re going to make it free. We’re writing the textbook. The book will also be free; there will be a paper copy, but you’ll be able to get it free online. The goal is to kind of capture all of this stuff that we know about how the products we’ve built have been abused in the past, so that the next generation can at least make different mistakes than my generation.

John Carlin:                  This is so important. I know working on my book and going back to early criminal cases [inaudible], I was just shocked by how many mistakes were repeated because people were not learning a lesson. Some of the same tactics and techniques, going back to our earlier conversation, that were prevalent in the 80s are still successful here in 2020.

Alex Stamos:                Yeah, right. We had this problem in security. It’s less of a problem in traditional information security now, because you have enough people whose career is around security and who have studied this stuff in the past. What you really need is for companies that build these products to have somebody on staff who has some kind of tribal knowledge of all of the things that have happened before, and who can look at what they are doing right now and synthesize, “Oh, this is how it affects us.” That’s a real problem. One of the reasons I’m doing it at Stanford is that there’s probably no university in the world that has more responsibility for the current state of Silicon Valley than Stanford.

The university keeps on graduating these 22-year-olds, mostly men, who come out and they’re like, “Oh, I have an idea. I’m going to make an app where you can take photos and then anonymously send those photos to an infinite number of women. What could possibly go wrong?”

John Carlin:                  What could go wrong? Yes, I think-

Alex Stamos:                Right? Well, here’s the list.

John Carlin:                  Let me move to a real life case that you’re working on. Here we are. We’re in a pandemic. Everyone’s working from home. Schools are suddenly teaching classes from home, meaning kids are using some services to have video chats in a way they never have before, along with employees. And you have Zoom, which is the example: a company that was relatively small that just explodes in terms of usage. I know they’ve publicly brought you on as a consultant. I’m wondering, in terms of tribal knowledge, to use your phrase, it seems like Zoom is being exploited in many of the ways that one could predict. Although the explosion in its users from 10 million to over 200 million in a period of a couple of months, I think that was a little harder to predict, right?

Alex Stamos:                That’s right. Yeah, there’s really two totally different issues with Zoom. Like you said, they brought me on as a consultant, so full disclosure, I’m a paid consultant for the CEO of Zoom. I got that job because I was tweeting about… This is the first time Twitter’s been good for my career, but I was tweeting about the mistakes Zoom had made, and how I’d seen this pattern at companies before, and how effectively Zoom now needed to speed run in six months what Microsoft and Facebook did over years and years, that they had this need to build up.

I ended up the next day getting a call from the CEO. He had called around some friends, found my cellphone number from a joint friend, and we had this long discussion about all the things they could do. I sent him this nice big email, and then he ends up announcing that they’re going to do a bunch of the stuff that we had discussed. I thought, one, coming from my previous CISO jobs, it was somewhat unique to have a CEO who put security first. That was a unique experience for me, to see a CEO like that. It became clear that they saw this as both an existential issue as well as a potential form of long-term competitive advantage over other companies.

If you want a company to care about security, safety, trust and privacy, then you want them to see it as a positive competitive advantage and a moat they can build. That’s the way to get them to invest. Anyway, there are two classes of issues with Zoom. One is the traditional security problems. Zoom had a bunch of bugs in their [inaudible] and such. The truth is, generally most of the bugs are the kinds of things you’d find in mid-size enterprise companies all the time, from my consulting days. If we went into any successful but not household name enterprise company and looked at their software, that’s the exact stuff we’d find: local privilege escalation, some remote code execution, using old libraries and stuff like that.

John Carlin:                  Let me just walk through that a little bit for the audience, which is mixed in background. You’re saying this is what you should expect, essentially, from a lot of mid-sized companies, and the problems included things like the way that Zoom was installed: it took over admin privileges, which means the highest privilege that you could have on a user’s computer, so you could get root access. You could control the computer. That would allow someone who wanted to abuse it, to use your term, to put in bad programs without the user’s knowledge, including the type of scheme we’ve both seen a lot of, which is taking over webcams or microphones.

Then it also was found earlier that it was sending data to Facebook even if you weren’t logged into a Facebook account, and routing some traffic through China unbeknownst to users. Then there were encryption issues, where the default, and this is probably the more complicated one and we can talk about it, but the default was not to end-to-end encrypt, which means that messages in transit could be vulnerable. Finally, the one that I think got the most publicity, in part because it has a good name, was Zoom bombing, where people were finding open meetings to join and then flashing pornography or other things in front of kids, et cetera. That got a lot of media attention.

I focused on those and wanted to walk through what some of the issues were, because I think you made an important and provocative point, which is yeah, this got exposed because Zoom grew so quickly, but this is actually business as usual, if I’m hearing it right, for a lot of medium-sized companies.

Alex Stamos:                Right. That last part, Zoom bombing, is a totally different problem. That’s kind of my point. On the core security issues, yes, you said this is what we expect. Unfortunately, our expectations for the security of enterprise software are way too low. We should expect better. The truth is, there’s a massive disparity between the top 10 tech companies that people can name and the 10,000 other companies that build the software that all of our lives run on, perhaps unbeknownst to us.

Zoom used to be in that second category, in that the people who knew about Zoom before COVID were CIOs and enterprise video conferencing teams, the folks who would go out and do a bake off between Webex, and BlueJeans, and Microsoft Teams and Zoom, and then make a decision in a kind of enterprise bake off. It wasn’t a product that people were using every day. Yes, those kinds of products often have these problems if you’re not part of a Google, a Microsoft, a company that has thousands of security engineers. That generally comes, for those big companies, because they had an existential issue.

For Microsoft, that came in the early 2000s with the Trustworthy Computing memo. With Google, it came after Aurora. The Aurora attacks we were talking about caused a huge amount of investment by Google in security, but the vast majority of companies don’t do that. This doesn’t mean Zoom shouldn’t have done it. They should have invested more in security earlier, but solving those problems is a little more pedestrian, because what you have to do there is build up a good team, improve your software development life cycle so that security is built into multiple parts of it, and be able to handle outside bug reports more efficiently.

That’s all the kind of stuff that I’ve been working on with them. They have a new CISO, they have a new Head of Application Security, they’re building up their Application Security team, they’ve fixed up their bug [inaudible] in a bunch of ways. There’s still work to be done there, but that’s kind of traditional infosec. The other issue they’re facing is much closer to the one I was talking about with the class, the trust and safety issue, which is that, because of COVID, Zoom went from being mostly an enterprise product that was provisioned, and bought, and managed by IT professionals to something where Mrs. Smith would just go get a Zoom account for her fourth grade class and then all of a sudden move a bunch of children onto Zoom.

That is not what the product was built for. That is the safety issue. It turns out that with Zoom bombing, or the meeting disruption problem, almost everything you needed to prevent it from happening existed in Zoom at the beginning of this year. It’s just that people didn’t know it was there. Some of that stuff was buried in different IT settings. Some of it was available to consumers, but not in the normal interface.

John Carlin:                  We’ve talked about that a lot, in a lot of different contexts for companies, but this goes to whether or not you should have security by default, essentially, right?

Alex Stamos:                Right, and the core problem that Zoom faced on this was they offer a freemium product, they offer a self-service product where you can put in a credit card, and then they offer an enterprise product. Almost all of their money is made on the enterprise product. It’s made by selling a bank 100,000 seats. If you sell a bank 100,000 seats, that bank is going to use single sign-on, they’re going to have their IT team go and set all the little bits in the interface to make the defaults secure. They’re going to do all the right things.

One thing Zoom did was they gave all these free accounts to schools, so they kind of created this problem for themselves. All of a sudden, they gave all these accounts to schools where you might have one IT person, who might be a volunteer parent or one of the teachers working part-time, who was just totally slammed, did not know how to use the product, didn’t know any of this stuff. By default, it did not walk them through “This is what you should be thinking about.” The other issue for the schools especially, but also for a number of other institutions like churches and such, is they didn’t have communication links set up to securely communicate out when this incredible transition started happening in March.

This happened at our kids’ school, where they actually started just putting the links to Zoom meetings on the public website and the public calendar, because there was no other good way to get them in front of people. So you then have all of these people who are home and bored, and malicious, who would go and do things like scan the Internet, scan through Twitter and Facebook, and look for anything that looks like a Zoom link, and then go trade that in private groups on Discord, private groups on Facebook, WhatsApp channels and the like. They would trade these links back and forth, and they would disrupt the meetings: jump in and, at best, just do something annoying, and in the worst cases do things like show illegal child exploitation content.

This is kind of the core issue for Zoom: a lot of their product management was focused on the space where they made money, which is what you generally see from enterprise products, so the security features were built for enterprises, and becoming kind of a household consumer product overnight completely changed what their focus had to be. And so the work there, fortunately, because a lot of these features existed, was mostly about redesigning the interface and redesigning the user experience so that when you set up an account, by default, the settings are much more restrictive and you have to turn them down, and about putting things in the interface to make it much easier if something bad happens.

It used to be very hard to report that something bad happened. Now it’s much easier to say, “This person did something bad. I want them kicked out. I want them reported to Zoom.” And then they had to build up their trust and safety team with the people who investigate that stuff and then work with law enforcement.
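
To make the secure-by-default point concrete, here is a small hypothetical sketch in Python; the setting names below are invented for illustration and are not Zoom’s actual configuration or API. The idea is simply that the restrictive value ships as the default, and a host has to make an explicit decision to loosen it.

```python
from dataclasses import dataclass

@dataclass
class MeetingSettings:
    """Hypothetical meeting configuration, illustrating 'secure by default'.

    The field names are invented for this example; the point is that the
    protective value ships as the default and a host has to consciously
    turn a protection off.
    """
    require_passcode: bool = True         # uninvited people can't just guess a link
    waiting_room_enabled: bool = True     # the host admits participants explicitly
    screen_share_host_only: bool = True   # a drive-by guest can't flash content
    allow_join_before_host: bool = False  # nobody is in the room unsupervised
    listed_publicly: bool = False         # the meeting isn't discoverable by scanning

def loosen_for_trusted_group(settings):
    """Relaxing a default is an explicit decision the host has to make."""
    settings.screen_share_host_only = False
    return settings

if __name__ == "__main__":
    print(MeetingSettings())                            # safe out of the box
    print(loosen_for_trusted_group(MeetingSettings()))  # deliberately opened up
```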

John Carlin:                  To break it down a little bit, yeah, so number one, you have a product that kind of races to market, and a company’s building out, and it has security flaws in it. I mean, there was a Princeton professor who characterized Zoom and said, “Let’s make this simple: Zoom is malware.” And there he was referring, I think, to security flaws that would allow you to get access to things like the ability to use someone’s webcam or microphone without their permission, so that’s one bucket.

Alex Stamos:                So I know that professor. I think he massively overstated that. Things like local privilege escalation: there were something like three or four New York Times articles that mentioned Zoom’s installer having local privilege escalation on OS X. Every time there’s a dot release of Mac OS, there are a dozen local privilege escalation bugs. So yes, [crosstalk] bugs.

John Carlin:                  Well, that’s exactly what I wanted to push on a little bit: taking it up a level, what do we do in terms of policy? Because this is one example that got well-publicized because the company exploded overnight, and so got a lot of press attention, but to your point, there are thousands and thousands of other companies who have similar issues. What do we need to change so that security is the default, so that it’s there before the product arrives and needs to be fixed on the consumer’s computer?

Alex Stamos:                It’s really tough. I think, I mean, the first thing is, one of the problems we have in the computer security world is that you only hear about a handful of failures like this, the ones that are public because people know about them. The vast majority of security failures are secret.

This week, a bunch of companies are going to get broken into. The information that gets stolen is not going to touch the PII definitions of SB1386 and the other state laws that require the disclosure of breaches that touch personal information.

John Carlin:                  You’re saying the PII is personally identifiable information, and that a lot of the laws are really triggered around whether someone gets things like your name plus your social security number, but that’s not the issue here.

Alex Stamos:                That’s right. And you and I have both worked a bunch of cases where companies have had important information taken, but it’s not personal information, so therefore they don’t have a disclosure requirement. And kind of culturally and legally, nobody is encouraged to admit … like Google did back in 2010 … nobody’s encouraged to admit we had a breach, even if that breach was caught halfway or something. So I think the first thing we need to do is we need to build kind of a cultural and legal permission for people to be honest about these issues.

And the industry, I think, we should continue to look to … and I’m not the first person to say this, but the industry that does incredibly, technically complex things, and does so safely, and has a culture of continuous improvement, is the airline industry. And part of it is regulation; part of it is that they have a regulator that actually understands stuff.

So the National Transportation Safety Board has a level of technical knowledge of how airplanes work in a way that there is not a single institution in the US government, maybe other than the NSA or Cyber Command, that has the same level of technical knowledge of the InfoSec world. So they have a regulatory structure of people who are real experts, and then they have a culture of continuous improvement.

So obviously, if a plane crashes, there’s a massive NTSB investigation. There’s all this stuff that happens. But even on the close calls … if something breaks, if there’s a human mistake … there is a culture of that being filed, that being looked at, that being discussed; and the legal structures to allow that to happen exist in their regulatory structures. I think we need to move to a world much closer to that on the security side, where, when these mistakes happen, there is a discussion of what went wrong.

And I’ll use, actually, an example that you brought up. So when I was at Yahoo … actually, the biggest breach of Yahoo stuff happened way before I got there, but there was a separate attack that started a couple of months after I got there, by a group of hackers working for the FSB, for Russian intelligence, who had broken in and at first looked for a targeted set of people related to the near abroad, the Russian ex-Soviet states’ oil and gas industry.

But then once we found them, that turned into them trying to grab as many passwords as possible while we kicked them out of the network. So this was a small group of pretty good hackers who were able to break in, but there are a lot of reasons why they were first able to break into Yahoo, and there’s another set of complex reasons why they were able to stay. It took us weeks and weeks to get rid of them. A lot of those reasons go back basically to the age of the company and a malinvestment in security over a decade.

But after this breach comes out, there is a set of lawsuits against Yahoo, as is normal, and I end up, as you can imagine, getting subpoenaed for a bunch of these. And so I ended up going and having these very long discussions with tons of lawyers in the room, which, for people who haven’t been deposed, I can’t recommend, because you’re sitting there, and the video camera is looking at you, and everybody’s parsing every single word you said. And then there are 12 people in the room, and every other person in the room is getting paid to be there except you.

And so you’re sitting there, and they’re asking me all these super detailed questions. They’ve got thousands and thousands of emails and documents, and they’re putting these emails and these documents in front of me. And through this process of doing this with me and dozens of other people who were involved in these breaches, you can start to build out this idea of, “Okay, what were the root causes that allowed these people to break into Yahoo, and what made it so difficult to kick them out?”

Okay, that’s fantastic. That is really useful knowledge because these FSB hackers, a couple of them have been arrested, but the main guy’s actually still at large being protected by Russia.

John Carlin:                  Well, yeah, protected by … Exactly.

Alex Stamos:                Right, right. Yeah, we can talk about our joint friend Alexei. So it would be super useful for everybody else to understand what happens at a Yahoo. What happened to all of those transcripts, all of those documents, everything? Well, they do a deal where the class action lawyers get $45 million in lawyer fees; a bunch of Yahoo users end up getting free credit monitoring, which makes absolutely no sense for a Yahoo breach, plus a gift card, I think; and then all of that stuff gets sealed up by the court.

So as a society, if you have a failure like that, to make it just part of what is effectively a legal game to move money around with some class action attorneys in Florida, and then have all of the fact finding sealed up and made useless once the money is moved around, that is just the silliest way to try to address an incredibly complex issue. Imagine if the 737 Max crashes happened and that’s how we handled it, and we had no idea of what actually failed on the 737.

There’s no NTSB report or anything, because Boeing ended up paying off some lawyers in Florida. We would find that ridiculous, and airlines would be much less safe. So I think that’s one of the core regulatory things we’ve got to do: we’ve got to make the discussion of what went wrong public, and we also have to create a model where people are encouraged to come out when they have a close call, or when they have a breach that doesn’t touch PII but maybe touched some kind of intellectual property.

We need to have both a carrot of certain protections, and you’re in a better position to have an opinion on this, but maybe you have kind of statutory penalties for certain kinds of breaches and such, so it’s not a four-year class action lawsuit. We need to have the encouragement of a carrot, and then you have to have a stick if people keep it secret. And then what I would love to see is that there really should be a 400-page report of what happened at Yahoo, because then everybody else can read that and be like, “Oh man, now I see the things I have to do to prevent it.”

John Carlin:                  Yeah, and famously with Yahoo as well, and we see this, there was, according to reporting, this gap between what you were finding or recommending as Chief Information Security Officer and what the CEO, who was facing a lot of pressure about a business that was losing customers, did in terms of security. And I think you said earlier that in your career, you have not worked for a lot of CEOs for whom security was the top priority.

Alex Stamos:                Right, and there are also people who have talked about … and how exactly you calibrate this I think is a little complicated, but I think it’s a good direction … effectively a Sarb-Ox for security, where you make the board and you make the CEO liable in a personal way for security, which they aren’t right now.

John Carlin:                  I know you’re talking about the Sarbanes-Oxley law that essentially said that you have to personally sign as an officer of the company that you’re meeting certain financial controls, essentially. And you’re saying we need some type of law like that that makes CEOs and boards personally accountable for security.

Alex Stamos:                That’s right. Yeah, I think so, because what keeps on happening is that the compensation structure for CEOs is built only around financial metrics. This is just a truism for any industry: you get what you measure, to the detriment of everything you don’t measure and don’t bonus. And so if you’re only bonusing based upon short term financial metrics and not upon the longer term risks, then management is going to go all one way.

And there are companies where security is integral. It’s just extremely rare, honestly. At the vast majority of companies, the existence of a CSO is in some ways negative, because you’ve created this executive on whom you place all of this risk, but they don’t have the ability to make the decisions that actually bend the risk. And so, from my perspective, a CSO shouldn’t be the person who’s the risk manager.

The risk manager is really the CEO and the owners of the different product lines. In a big enough company, it’s not the CEO; it’s the people who own the business who make the real business decisions.

John Carlin:                  So former CSO says CSOs shouldn’t get fired after security incidents.

Alex Stamos:                Well, I didn’t say that. I’m just saying we’re in this weird place where you have this executive whose only job is to think about the downside, but they never have the ability to make kind of the big picture decisions of balancing kind of long term growth against risks.

John Carlin:                  That’s a really important point, in all seriousness, and it’s part of a big change we’ve seen in our lifetimes, just over the last 10, 15 years. First, there really wasn’t a Chief Information Security Officer. And then the C in CSO was supposed to indicate that they were in the C-Suite, and in most companies, they’re not really in the C-Suite.

And then even in the C-Suite, what you’re saying is they need to be empowered. And that might require creating by law, if I’m hearing you right, regulatory or other risk that forces a company when doing a rational calculus to prioritize security and give them the authority they need to reach the right risk decisions, not just for the company, but for society.

Alex Stamos:                Yeah. No, you put that much better than me. When a young person asks me, “Alex, how do I become a CSO one day?” my answer is, “Don’t.” The place you want to be is the senior director of security role, where you can run a large team and have a huge impact, but you’re not seen as having that responsibility, because being a CSO in 2020 is like, imagine if you’re a CFO, and Sarbanes-Oxley has passed, but you haven’t invented double entry accounting yet.

And everybody’s allowed to spend whatever money they want, and you can just advise. That’s not how companies work. The CFO moves all the money in a public corporation. You are not allowed to spend 10 cents without some kind of infrastructure they have put in place approving that, but the CSO sits over in the corner and is almost all reactive, and it’s very difficult to even know the decisions that are being made at any moment that are going to accrue a huge amount of security risk.

And that’s the kind of thing that we have to adjust, because you’re right. Every big company has this secret meeting … it’s not secret, but has this small meeting, usually on Monday mornings at the beginning of the week, usually in the conference room of the CEO, and those are the people who actually run the company. They’re not necessarily all the direct reports to the CEO, but it’s the inner circle, the cabinet of people who are making that decision.

And it is extremely rare to hear about a CSO or really any executive who handles downside risk being in that meeting. And if you’re not in that meeting, then in the end, you’re just there for the clean-up. You’re not able to actually bend the curve.

John Carlin:                  Let’s move actually to Facebook. And I want to do the same divide we did a little bit when talking about Zoom, which is you have a series of issues, and we’ve talked about them with Yahoo that really have to do with the security of a product; and then you have abuse of a product or it being used in ways that you didn’t anticipate.

At Facebook, you had more authority on paper. You had the Chief Security Officer, and while you’re there, you end up encountering, as we talked about a little bit, Russian interference in the 2016 election and seeing a type of meddling that I think we’d seen on smaller scales but really never on a scale like this before, in part because there was never a platform that had this type of impact like Facebook did.

So tell me a little bit about what your proposed approach would be to deal with Russian interference and how that differed from the approach of other executives. And also, just going back to our conversation, were you in the right meetings or at the table so they had a chance to hear from you?

Alex Stamos:                Yeah, so the vast majority of my job at Facebook was the traditional information security job. And that was actually, in some ways, much easier than it was at Yahoo, because Facebook had money. A lot of the core problems at Yahoo came from the fact that Yahoo was effectively a dying company by the time I joined. By the time Marissa took over as CEO, there’s an argument to be made that nobody could have turned Yahoo around, but whether that’s true or not, by that point the investment in technology had really stalled out for about a decade.

And that wasn’t true at Facebook. Facebook was at the height and continuing to grow, had all new technology, had built all this stuff internally. When I got to Yahoo, there was a server that hadn’t been rebooted for 10 years, which a bunch of people were really proud of what that meant about the quality of the data center and the fact that it hadn’t lost power in 10 years.

And to me, it was like, “Well, that just means you’re not enforcing any kind of patch policy if you’re not patching for 10 years,” and that was not true at Facebook. There was much more of a culture of core security, so that was most of my job. But where I got pulled into the Russia stuff was because one of the things that I inherited and then really grew was a threat intelligence team whose entire job it was to look for advanced attackers, first attacking the company itself, so just looking for exactly the kind of attack that the Chinese did against Google, but then also abusing the platform to cause harm.

And coming out of 2016, one of my core beliefs is as a society, we have really messed up by not having the equivalent of a 9/11 commission look at what happened in 2016, because there’s actually this fascinating kind of parallel-

John Carlin:                  Well, there’s this Mueller report.

Alex Stamos:                Right. Yes, I’ve heard of it. I’m in a couple of the footnotes. I think I’m not even in the footnotes. Yes, it’s amazing the things that we do that we think are not going to be that important and then are going to be enshrined in history forever. I’m sure you’re in the same place.

John Carlin:                  Do we need another report, or do we need to mandate that people read the Mueller report, or do you think there’s more information to be discovered?

Alex Stamos:                Well, I think it’s too late now. I think, as you and I know, Robert Mueller’s goal was to understand what happened to look for criminal behavior. The Mueller team did not do kind of a top to bottom analysis of, what are the root causes that allowed this to happen? And that was what the 9/11 commission did after September 11th.

John Carlin:                  And in fact, as a result … Actually, it wasn’t that commission. It was a different commission, the WMD commission, but the division I led at the Justice Department, National Security Division, was created as one of those.

Alex Stamos:                Oh, really? [crosstalk 00:58:23].

John Carlin:                  Yeah, one of those recommendations. So I take your point that you need a top to bottom review, and to think about how both the government is structured and how the private sector is structured, and we still haven’t done that.

Alex Stamos:                And we haven’t. And there’s this interesting parallel between 2016 and 9/11, which is the 9/11 report, a lot of it is about the failures of government to communicate internally. And there’s a whole section on the lack of institutionalized imagination, that there weren’t people who were thinking ahead of, what are all the bad things that can happen, and how should we preemptively think about the ways our adversaries might act?

And we had the exact same thing happen in 2016, except in 9/11, the responsibility there was almost completely in the public sector. Protecting our country from terrorists, keeping people from getting on planes with weapons, that was clearly a government responsibility, whereas in 2016, you have this much more distributed responsibility between folks in the government, but also the private tech platforms, but also the media and the campaigns themselves, who perhaps should report if somebody reaches out to them from the Russian embassy.

And there was this failure of institutionalized imagination. And part of it was that the belief of what a government was going to do in attacking technology was based upon what we had seen in the past, which was taking over accounts, sending malware, spear phishing, getting into private groups of dissidents. That kind of secret police slash Aurora-type attack was the focus of the threat intel team that we had inside of Facebook.

It was the focus of the entire kind of threat intelligence private sector … the [inaudible] and the CrowdStrikes and the like … and it was the focus of the US intelligence services, which makes sense because a lot of these people actually come from kind of the same pipeline of folks. And so that was kind of the core prompt at Facebook is we had people who were looking for all those kinds of things and were very successful.

We uncovered an Islamic Revolutionary Guard Corps attack against the State Department that we spotted and then helped wrap up in a bunch of places. We had a bunch of different attacks against power infrastructure that we discovered, by a number of different actors.

And so we were really good at that, but neither we nor anybody else was paying attention to the idea of subtly manipulating the conversation through completely non-technical means, just by creating fake accounts and creating memes and spicy conversation. And there was kind of a failure of institutionalized imagination, both within my team but then kind of in general as well.

John Carlin:                  Yeah. And on that, on incentives, there’s a memo purportedly written by you that was leaked … This is while you’re still at Facebook … that says, “We need to listen to people, including internally, when they tell us a feature is creepy or point out a negative impact we are having in the world. We need to deprioritize short term growth in revenue and to explain to Wall Street why that is okay. We need to be willing to pick sides when there are clear moral or humanitarian issues. And we need to be open, honest, and transparent about our challenges and what we are doing to fix them.”

And what I wonder is, on that quote, why would a private company do that? Going back to your 9/11 commission point, what changes do we need structurally to make that in the interest of a private company to do, or is it something that a private company will ever do? Do we need some other solution?

Alex Stamos:                Yeah. So first off, this is how you know Facebook’s becoming a government is that all of our private internal communications are leaking.

John Carlin:                  Welcome to the club. I remember you used to hit us on transparency, and I would tell you, “I think everything I ever say becomes public.”

Alex Stamos:                Right. Right. Exactly. Yes, one day there’ll be a Mark Zuckerberg presidential library. They’ll have all my email in it.

John Carlin:                  It’ll be online.

Alex Stamos:                Yes. Perfect. Yes. Yeah, you’re right. There’s a core problem at Facebook that is reflected throughout Silicon Valley, which is, again, you get what you measure. And what does a company like Facebook want? It wants to build products that people like. It wants to be impactful. So this is the term you hear in Silicon Valley all the time: impact.

“We’re having impact. We’re having impact.” And impact is generally measured in a product like this by how many people are using it, how often they’re using it, and then maybe some metrics about whether they’re enjoying it or not. It turns out none of those metrics measure whether or not your product is good for people, or whether you’re accruing risk: maybe it’s good for people for a while, and then all of a sudden some kind of Black Swan event happens, and you’re really bad for people, and you didn’t notice that risk had been accruing that entire time.

And I think that is a core problem. And at Facebook, that was embodied in the team called the growth team, which was the product team whose job it was to get people to want to use Facebook and also to get more use of the product around the world. Now, most of the US issues I think are less about growth, but that is the core of a lot of the international issues, such as violence in Southeast Asia and the like: the expansion of the product out into all of these use cases and languages that we were not ready for, well before we should have, and not predicting the kinds of ways people would use the product and understanding the geopolitical and cultural issues that already existed. But in the US, a lot of it was, we’re going to give people what they want. It turns out what people want is not necessarily what’s good for them.

John Carlin:                  Do you let your kids use some of these products unmonitored?

Alex Stamos:                Well, not unmonitored, right. On the kid issue, I think that’s a whole other thing, because, I don’t know how it is for you guys, but we have blown through our screen time allotment through 2027 at this point, right?

John Carlin:                  We yielded to the inevitable. Exactly.

Alex Stamos:                Yeah. I read some good stuff in the spring, which I totally agreed with, about how parents need to forgive themselves for not being fantastic parents right now, because we’re living through history and we have to do the best we can just to keep our families safe. If that means your kids are watching YouTube, that’s okay. But now we’re transitioning to maybe this is the new normal for quite a while, and we’re going to have to come up with better rules. Kids using technology, I think, is actually much more complicated. I’m not one of those people who thinks that if they’re in front of a screen, period, they’re rotting their brains. There are certainly positive things they can be doing, even if they are fun things, like playing a video game that actually stimulates your mind versus just media consumption. Right?

John Carlin:                  There’ve been a lot of stories about some of the designers not letting their kids use these products because they are designed to addict, that that’s how you build users. And it’s particularly powerful with children whose brains are still developing. What’s your view on this?

Alex Stamos:                We put limits on screen time. I think the most addictive thing that my kids are allowed to do, and that we try to limit, is YouTube, because it’s passive media consumption and YouTube is very, very good. Their machine learning is very good at putting media in front of you, in front of kids, that they want to see. And so that’s the thing where I think you absolutely have to set limits. When I was talking about Facebook and Russia, that was more about adults, about adults wanting information that reinforces their own beliefs, right? Creating an information environment where people are able to seek out and live within a bubble of information that only reinforces their own beliefs, that’s the thing that was not being measured and that was impactful.

John Carlin:                  How do you distinguish between disinformation, misinformation, fake news, as we’re using some of these terms, versus what I think you’re talking about, searching for things that confirm your own bias, which might be real news and real information, but you’re only seeing a piece of it? It’s like the old analogy of the blind men and the elephant. You’re only touching one piece, and it’s hard from that to determine what’s true.

Alex Stamos:                That’s right. Yes. And I think this is all a much more complicated philosophical discussion than people have generally given it credit for, which is that the vast majority of Russian activity in 2016 was not what you can reasonably call fake news. Right? So there’s really two big chunks to their information operation. One is the pure online operation by the Internet Research Agency and other related organizations, which was mostly not about the election. It was mostly about political topics. And the vast majority of their output is not falsifiable claims of fact, right? They’re doing things like creating fake accounts posing as part of an anti-immigration group, and then they’re making extreme political statements about immigration that, for the most part, are not things that are falsifiable, right? And so it is disinformation in that these people are working in a coordinated fashion to hide their identities and then to amplify their message well beyond what would normally be seen by people. But it is not fake news, and it is not a lie.

And then the other part of the big information operation was the GRU operation, the hack and leak, right? And again, the core facts that they were able to put out were true, right? They had real emails from John Podesta. They had real emails from Debbie Wasserman Schultz. One, the selective leaking of things to tell a story was part of the disinformation; even though they weren’t faking these documents, they were able to frame them up. And then they were also able to drive a level of coverage of those topics well beyond what they should have. But this is why I also talk about this as an all-of-society thing, because the real target of the GRU hack and leak was the mass media. It was not social media.

John Carlin:                  Just like North Korea with Sony, right?

Alex Stamos:                Yes, exactly. Exactly. And so in both cases, you can call it disinformation because of the inauthenticity of the identities of who’s pushing it and their ability to get the coverage. But in both cases, it was based upon an actual true fact. And this is why I think people over-discuss things like deep fakes and the creation of truly false pieces of evidence, because most of the most effective information operations that we have examples of are based upon a kernel of truth that is then spun and amplified and twisted in a way that is difficult to call fake.

John Carlin:                  You’ve said before that one of the fundamental issues that Facebook faces is that there is no law to tell them or any other social media platform what is or is not allowed, that there’s no fundamental privacy law in the United States. Would that extend to these content issues as well? And is it better if the government is setting rules in this area, which treads on freedom of speech, or should it be up to private social media platforms?

Alex Stamos:                I mean, that’s a complicated question. I think, just realistically in the United States, the vast majority of content that people don’t like on social media is first amendment protected, right? So we’re never realistically going to end up with direct government regulation in this space. I think the place where government regulation would be good would be to encourage the companies to be more thoughtful about the impact of certain abuses that are not about political speech, right? The misinformation and disinformation is the hardest place, because the spectrum from disinformation to political speech within the Overton window is incredibly blurry. And it is very dangerous for a government in a democratic society to say, this is where the line is.

John Carlin:                  Well, let me push on that just for a sec, because you’ve said Mark Zuckerberg is mistaken in his view that interfering with posts by politicians amounts to censorship. What do you think of Twitter’s policy, which has gone in a different direction to say, “Hey, there are facts. And we’re going to use a fact checking label on certain tweets”?

Alex Stamos:                Yeah. And I actually think Twitter’s policy here has been much smarter. I think Zuckerberg’s gone into a defensive crouch, and he’s been really way too stubborn about the core decision around labeling politician tweets. From my perspective, one, I do think we have to be careful about half trillion to trillion dollar corporations taking down speech by candidates in elections or democratically elected leaders. That’s a very dangerous place to go. But that doesn’t mean that the companies don’t have their own first amendment right to label speech as they see fit. And Twitter’s announced model, at least … I don’t think their enforcement is incredibly good … but at least their announced model is: we will allow a piece of speech generally to exist as long as it’s not causing direct harm. This is different if somebody is calling for a person to be harmed or something.

But if it’s a political statement that is a misstatement of fact, it can exist, but we’re going to reduce the product affordances that allow it to be amplified. So you turn off retweets and stuff like that, and we’re going to use our first amendment right to label it: we believe this is not true. You can separate out silencing the voice of people from adding your own voice because you think it’s wrong. And I think the companies need to think a lot more about that second option.

John Carlin:                  What about, you referenced the size of the companies? Just switching topics, but not entirely because I think it is driving some of the movement. I mean, is Facebook a monopoly? Is it too big? Should it be broken up?

Alex Stamos:                Yeah. As you very well know, I’m neither a lawyer nor an antitrust expert, so I’m not going to speak as to how you define a monopoly. What I will say is there are some platform abuses that scale with size and some that don’t. And so I think this is actually a really complicated analysis, because the bigger companies have more resources and the ability to spend that fixed cost on an investigations team, an ML team, an engineering team, and product management to fight abuses. And in a case where the abuse does not necessarily scale with the size of the network, that can be really effective. But then things like disinformation do scale. And so I don’t think there’s a simple answer here. I think one of the problems we have as a society is we’re only paying attention to three platforms, and there’s very little discussion of all the bad things that happen on all the other platforms. And part of this is because a lot of what we know about these abuses comes from the companies themselves.

John Carlin:                  But let me just push a little bit. Is the industry competitive? You’ve been inside it, and you had a quote saying, “You can’t solve climate change by breaking up ExxonMobil and making 10 ExxonMobils; you have to address the underlying issues,” which I thought was interesting. So I think in that quote, you’re suggesting that, again, we need some fundamental regulation about what’s allowed and what’s not allowed, and it doesn’t have to do with the size of companies. But putting that to one side, is there competition in the social media space? Is it healthy from what you can see?

Alex Stamos:                There is competition. But yes, it is true that it is very difficult to compete against Facebook. Right? And I think there are not a lot of venture capitalists on Sand Hill Road who are going to throw a bunch of money at a company that says, “We’re going to go direct at Facebook.” You will see them back Musical.ly, which became TikTok, and other companies that have a totally new direction. But those VCs are also thinking to themselves, one of the ways I get paid out is we make this threatening enough to Facebook that Zuckerberg writes us a check. But on the size issue, something I want to say is, you referenced the Mueller Report. Pretty much everything in the Mueller Report that is about Facebook came from our team, right? It didn’t come from Mueller’s people.

We found all this stuff, the GRU activity, the IRA activity, did this big investigation, sent a lawyer to go brief the special counsel’s office, swore out an affidavit, the special counsel’s office comes back with a warrant, and now we had this whole package of all the data that we were able to turn over to the special counsel’s office. And that becomes a big chunk of the Mueller Report. Now, obviously they did tons of work in other areas and tons of work exploring Concord Management and the like.

But of the actual things that happened on Facebook, I don’t think there’s a single fact in there that didn’t come from our team finding it and turning it over voluntarily. That is one of the challenges of having this discussion: when you read the Mueller Report, you only read about a handful of companies. It’s because those are the companies that have threat intelligence investigation teams that went and proactively turned stuff over. We just had this report last year of an attack called Secondary Infektion, with a K, which was a six-year Russian operation that hit 300 different platforms. Right? And so part of the problem with this discussion is we assume that it’s only on the big platforms, when it turns out that’s not true. We’re just only looking at the data from the big platforms, because a smaller company doesn’t have a team going and turning this stuff over proactively.

John Carlin:                  You have too big to fail, but when it comes to security and monitoring social media, you might be too small to succeed?

Alex Stamos:                Well, see, this is the problem. I think it’s a difficult analysis, because when people talk about breaking up Facebook, they talk about Instagram and WhatsApp being broken off, which I think is the most realistic. It’s not so realistic to break up Instagram itself or break up the Facebook app, but you can break the corporation up. A company the size of what would be an independent Instagram should be able to afford to have this capability, right? The problem is, there’s not a lot of encouragement to do so.

And in fact, the way things have moved since 2016 is there are a number of companies that have had massive disinformation problems and the like who’ve never said anything publicly, and they’ve never gotten any notice. And so it’s working out pretty well for them not to have people looking. And so I think we need an incentive structure where the smaller companies have an incentive to have teams that are proactively working on this. We probably need industry-led coalitions to make this easier. Right? This is something that I think Facebook and Google have not done well enough, is building the equivalent of the FS-ISAC for the tech industry, where you have-

John Carlin:                  That’s the Financial Services one, for sure.

Alex Stamos:                Information Sharing and Analysis Center. Yes. Acronym, I’m sorry.

John Carlin:                  And that’s been codified by law. There are legal protections for sharing information through the structure of an ISAC to prevent antitrust concerns. It’s actually an exemption from antitrust.

Alex Stamos:                Right. And actually, I think we probably need legislation here, because there is an ECPA/SCA issue. There are issues with different privacy laws around the companies sharing anything that could be user data. And so we probably need a carve-out for the ability of the companies to work together. But I would like to see a model where the big companies are much more aggressive about helping the smaller ones out with their work. And then we need to figure out a reasonable way for the government to get involved. Now, that’s actually really hard here, because it’s difficult to think of a way that a government agency can do these investigations without giving them a massive transfer of data that we have rejected as a society, especially post-Snowden. And so that, I think, is the much more difficult part.

John Carlin:                  And because there’s not enough to cover it all, as we’ve seen. But let’s also turn to the significant, when it comes to social media platforms extraordinarily significant, action that the president has taken, President Trump, in using an executive order to ban two of the largest competing foreign social media platforms, TikTok and WeChat. What do you think of that action?

Alex Stamos:                Yeah, there are absolutely legitimate concerns about Chinese companies making apps that are used by a wide variety of American consumers. There are a lot of legitimate issues around infrastructure companies and us buying things like Huawei routers. The problem is, I think, in a functioning democracy, we create rules, we have findings of fact, and then things come at the end. And it’s pretty clear here that the president just wanted TikTok gone, and everything else was a post hoc rationalization. And so I’m actually really dismayed by what I see coming out of the administration, because this is a legitimately important issue, and it’s being turned into a bit of a joke and something that is not going to be respected as setting international norms, because there’s this component of a clear personal bias of Trump against the people on TikTok who don’t like him. And my hope is that in the next administration, this is something that we can have a much more rational process around.

John Carlin:                  Yeah. It may ultimately have been a correct result, but process matters, fairness matters and you don’t see it here.

Alex Stamos:                Right. Process matters, if only because, for the last decade, the United States government, alongside our tech companies, has been fighting for the idea of an open internet where American companies are allowed to operate overseas. And so, if we’re going to have data protection rules that we enforce, then that’s great, and that means there will be other countries with which we will negotiate how that works. But for us just to say we’re going to force the local sale of an American subsidiary, the European regulators who have been wanting to do this for years, the Indian regulators, they are loving this, right? Because the United States has pretty successfully pushed back on the idea that you have to nationalize every multinational that operates out of Silicon Valley, and Trump has turned a decade of work on the open internet on its head.

John Carlin:                  Yeah, it’s fascinating. So if you think about it as the big strategic battle, you have the US championing the open internet, that data should be free, and you have China championing a model that says it needs to stay within your nation’s boundaries and that data is more secure if you don’t allow it to travel. And in some ways, we have this tactical move here, but strategically, it’s moving the world towards the Chinese model. And I’m curious, just as we wrap on this: this happens roughly two weeks after the Schrems II decision. This is a landmark case in the EU about the rules by which companies can transfer data. And the European Union has said you can’t transfer data to the United States because the legal regime in the United States is not sufficiently protective of privacy concerns, which also seems to be an opinion that moves towards a world of walls and data localization.

Alex Stamos:                The timing of this with the Schrems II decision, I think, is amazing, and it demonstrates that there’s an alternate universe in which the United States and Europe let go of the narcissism of small differences and came up with a unified data protection ideal in the free world, and then enforced that against countries like China that have a completely different view of free expression and freedom from surveillance. Instead, we’re doing this slipshod movement against China, we’re not getting the Europeans to follow us, and so we’re getting the worst of both worlds. And I think it is just a really bad way for us to try to maintain the competitiveness of our industry and to make an internet that reflects the democratic norms that we like.

John Carlin:                  Alex Stamos, it’s been great having you with us today. We’ve covered a wide range of issues, but I think it’s become critically clear why we need people like you, who are both fluent in policy and also understand the technology. Thank you.

Alex Stamos:                Thanks, John, and I feel the same about you. I’m looking forward to having a discussion like this again.

John Carlin:                  Cyber Space is presented by CAFE. Your host is John Carlin. The executive producer is Tamara Sepper. The senior audio producer is David Tatasciore. And the CAFE team is Adam Wahler, Matthew Billy, Sam Ozer-Staton, David Kurlander, Noa Azuali, Calvin Lord, Geoff Eisenman, Chris Boylan, Sean Walsh, and Margot Maley. The theme music is by Breakmaster Cylinder. Today’s episode was brought to you in collaboration with Brooklyn Law School’s BLIP Clinic. And I’d like to thank Amanda Kadish, Isabella Russo-Tiesi, Alice Abel, Megan Smith, James Sanderson, and Ryan Baum for their help with research.