Sundar Pichai is the CEO of Google and Alphabet. We spoke the day after Google I/O, the company’s big developer conference, where Sundar introduced new generative AI features in virtually all of the company’s products.
It’s an important moment for Google, which invented a lot of the core technology behind the current AI moment. The company is very quick to point out that the “T” in ChatGPT stands for transformer, the neural network architecture underlying large language models that was invented at Google, but OpenAI and others have been first to market with generative AI products, and OpenAI has partnered with Microsoft on a new version of Bing that feels like the first real competitor to Google Search in a long time. I wanted to know what Sundar thinks of this moment, and in particular, what he thinks of the future of Search, which is the heart of Google’s business.
Web search right now can be pretty hit or miss, right? There are a lot of weird content farms out there, and AI-based search might just be able to answer questions in a more natural way, but that means remaking the web and really remaking Google.
Sundar is already doing that. He just reorganized Google and Alphabet’s AI teams, moving DeepMind, previously an independent Alphabet subsidiary, into Google and merging it with the Google Brain AI group to form a new unit called Google DeepMind. Y’all know I can’t resist an org chart question, so we talked about why he made that decision and how he executed it.
We also talked about Sundar’s vision for Google, where he wants it to go, and what’s driving his ambition to take the company into the future.
This is a jam-packed episode. Sundar and I talked about a lot, and I didn’t even get to Google’s AI metadata plans or what’s going on with RCS and Android. Maybe next time.
Okay, Sundar Pichai, CEO of Alphabet and Google. Here we go.
The interview is excerpted below. A full transcript will be available soon.
So a few months ago, I was at the launch of Bing, powered by ChatGPT. I saw Satya Nadella there. And I’m sure you know this, but he said, “I have a lot of respect for Sundar and his team, but I want Google to dance.” And then he said, “I want people to know that Microsoft made them dance.” One, I just want to know how you felt when you heard him say that. And two, do you think you danced? Are you dancing?
Look, I’ve said I have a lot of respect for Satya, and the team as well, and I think he partly said that so that you would ask me this question.
I’m pretty sure that happened.
For me, maybe I’ll say it this way. We started working on this new Search Generative Experience last year. To me, it’s important in these moments to separate the signal from the noise. For me, the signal here is there is a new way to make search better and a way we can make our user experience better, but we had to get it right. And to me, that’s the North Star. That’s the signal. The rest is noise to me. So to me, it was just important to work and get it right, and that’s what we’ve been focused on.
There’s another challenge for Google inside of all this, right? If you believe it’s a platform shift, this might be the first platform shift that regulators understand because it’s very obvious what kind of labor will be displaced. Lawyers, mostly, is what I gather, right? They can see that a bunch of white-collar labor will go away: if an AI can write a C-plus email about a transaction, entire floors of those people can be reduced. And they seem very focused on that risk. And then there’s the general AI risk that we all talk about.
When Google first did Search, it was an underdog, right? And it won a lot of court cases along the way that built the internet: the Google Books case, the image search case with Perfect 10, the Viacom case with YouTube. It was an underdog, but it was obviously delivering a ton of value. Now, you’re at the White House having an AI summit. I’m confident you’re going to end up in government capitals around the world talking about AI. Do you think you’re in a different position now than that scrappy underdog inventing the internet? You’re the incumbent. Are you playing a different role?
Two parts to the question. On the first part, briefly, look, I think we can… Through 20 years of tech automation, people have predicted all kinds of jobs would go away. Movie theaters were supposed to end and–
Uh-huh. But movies are thriving more than ever before and–
There’s a writers strike, right? I mean, the labor cost paid to writers has dropped so precipitously, they’re on strike right now.
No, but there have been writer strikes before, and those things will continue, right?
There’s always going to be…
Unemployment over the last 20 years of tech automation hasn’t fully… Twenty years ago, people made very specific pronouncements about entire job categories that would go away. That hasn’t fully played out. So I think there’s a chance that AI may actually… Because I think the legal profession is a lot more than… You probably know more about being a lawyer than I do, which is why I can’t opine on it. But something tells me more people may become lawyers, because the underlying reasons why law and legal systems exist aren’t going to go away; those are humanity’s problems. And so, AI will make the profession better in certain ways, might have some unintended consequences, but I’m willing to almost bet that 10 years from now, maybe there are more lawyers.
I don’t know. I don’t know. So it’s not exactly clear to me how all this plays out. I think too often we think… there are new professions constantly getting created. I don’t mean to take it too lightly: I do think there are big societal labor market disruptions that will happen. Governments need to be involved. There need to be adaptations. Skilling is going to be important. But I think we shouldn’t underestimate the beneficial side of some of these things, too. And it’s complicated, is maybe how I would say it. On your second question, I think governments and legal systems will always have to grapple with the same set of problems. There’s a new technology. It has a chance to bring unprecedented benefits. It has downsides. I think you are right. With AI, people are trying to think ahead more than ever before, which gives me comfort, because of some of the potential downsides to this technology. I think we need to think about it. We need to anticipate as early as we can.
But I do think the answers to each of these are not always obvious to me. It’s not clear to me that simply holding back AI is the right answer; it has geopolitical implications. So it’s, again, a complex thing we will grapple with over time. From our standpoint, we are a bigger company, so I do think we will come to it in a more responsible way. There are places where we will engage and try to find what the right answers are. And so maybe our approach will be different, for sure, I think, as we go through it.
Decoder with Nilay Patel /
A podcast about big ideas and other problems.