For the last couple years, it has been evident that Google co-founder Sergey Brin is back in the building. This week, he sent a clear message to hundreds of employees in Google’s DeepMind AI division, known as GDM: the pressure to win the AGI race is on.
“It has been 2 years of the Gemini program and GDM,” begins his note, which The New York Times first reported on yesterday and I’m publishing below in full. “We have come a long way in that time with many efforts we should feel very proud of. At the same time competition has accelerated immensely and the final race to AGI is afoot. I think we have all the ingredients to win this race but we are going to have to turbocharge our efforts.”
Brin goes on to recommend that Google’s AI teams work longer hours (“60 hours a week is the sweet spot of productivity”), come into the office “at least every week day,” prioritize “simple solutions” to problems, and generally move faster (“can’t wait 20 minutes to run a bit of python”). What stuck out the most to me was his last point: that Google’s AI products “are overrun with filters and punts of various kinds.” According to Brin, Google needs to “trust our users” and “can’t keep building nanny products.”
While his note was intended for a small audience of AI researchers at DeepMind and not all of Google, it’s still remarkable in a couple of ways. For one, it came from Brin, who technically has no formal role these days besides being a board member, and not Demis Hassabis, who runs Google DeepMind. As some employees have pointed out to me, there’s also irony in Brin recommending that employees work 12-hour days when he and co-founder Larry Page arguably left Google rudderless when they retired in 2019 — just before this AI boom cycle began in earnest. Finally, it’s telling that Brin, who attended President Donald Trump’s inauguration with CEO Sundar Pichai, is now using his power to seemingly push for removing Gemini’s guardrails.
You can read Brin’s full note to Gemini researchers below:
“It has been 2 years of the Gemini program and GDM. We have come a long way in that time with many efforts we should feel very proud of. At the same time competition has accelerated immensely and the final race to AGI is afoot. I think we have all the ingredients to win this race but we are going to have to turbocharge our efforts.
Code matters most — AGI will happen with takeoff, when the AI improves itself. Probably initially it will be with a lot of human help so the most important is our code performance. Furthermore this needs to work on our own 1p code. We have to be the most efficient coder and AI scientists in the world by using our own AI.
Productivity — In my experience about 60 hours a week is the sweet spot of productivity. Some folks put in a lot more but can burn out or lose creativity. A number of folks work less than 60 hours and a small number put in the bare minimum to get by. This last group is not only unproductive but also can be highly demoralizing to everyone else.
Location — It is important to work in the office because physically being together is far more effective for communication than gvc etc. And, therefore you need to be physically colocated with others working on the same thing. We need to minimize reporting lines across countries, cities, and buildings. I recommend being in the office at least every week day.
Organization — We need to have clear responsibility and organization with high functioning groups with shared management and technology leadership.
Simplicity — Lets use simple solutions where we can. Eg if prompting works, just do that, don’t posttrain a separate model. No unnecessary technical complexities (such as lora). Ideally we will truly have one recipe and one model which can simply be prompted for different uses.
Excellence — whether it’s an eval or a data source or a dashboard or a message in an internal UI, please make sure they all work and all are good.
Speed — we need our products, models, internal tools to be fast. Can’t wait 20 minutes to run a bit of python on borg.
Iterate at small scale — we need lots of ideas that we can test quickly. The best way to do this is small scale experiments until you can ramp up and hopefully see increasing advantage at scale. This is an excellent validation. Working too much at just large scale has a habit of minor tweaking and overfitting to evals, checkpoint sniping, etc. We need real wins that scale.
No punting — we can’t keep building nanny products. Our products are overrun with filters and punts of various kinds. We need capable products and [to] trust our users.”
“We just fire those people”
By now, most employees at Meta have probably seen my story about the company firing around 20 employees for leaking. (A special thanks to the communications team for widely sharing it on Workplace!) Cracking down on leaks is nothing new for Meta, though leadership does seem highly motivated to snuff out this recent wave.
Moments after my story was published yesterday, CTO Andrew Bosworth addressed the firings during a company all-hands meeting that was, yes, leaked to me. After joking about how chairs had been rearranged in the room to make the all-hands look more well attended than it actually was, here’s what he had to say about leaks:
“There are three different types of leaks. A lot of times there are accidental leaks, people who are just being casual [and] clumsy with information. They’re trying to share it to a friend, a housemate, their mom, and they’re not paying attention to what they’re doing. The second kind is people who are trying to be helpful. ‘If only the press understood this one thing, then they would surely tell a good story about us.’ They won’t. And then there’s malicious leaks, antagonistic leaks, people who are trying to push an agenda that they hold and they think is more important to them than the company.
All three of those leaks are fireable offenses. All three of them. And we take them seriously. We have a team dedicated to identifying these leaks. And so we did conduct a bunch of investigations and we fired more than 20 people for leaking in the last couple of weeks. More than 20. That’s not the entirety of the list. There’s quite a few more investigations that are proceeding right now. So, just to remind everybody, don’t be screenshotting stuff, don’t be recording stuff. We are getting pretty good at finding these things and those people aren’t here anymore… We just have zero tolerance. It doesn’t matter what the excuse is. It doesn’t matter how innocuous you think it was. We just fire those people. Okay. Great. Fun. Boy, we’re really off to a great roll. Nobody’s here. We fired people… And they wonder why they gave me the Q&A.”
- Newsom’s plea: You could hear a pin drop when California Governor (and now podcaster) Gavin Newsom walked out to address hundreds of influential tech investors at the Upfront Summit in Los Angeles earlier this week. It was the first time I’d seen him address the tech industry since President Trump won the presidency. He made sure to take shots (“it’s the regulatory market that created Elon Musk in California”) but mostly came across as pleading with a cohort he knows the Democrats have already lost: “Alabama is an interesting state. Mississippi may be your jam. I don’t mean to knock you. I’m just saying I think you’re leaving a little on the table in terms of opportunity.”
- Amazon’s Alexa revamp: Reporters weren’t allowed to test the new Alexa Plus experience on their own at the unveiling in NYC this week. As others have reported, and I’ve also heard from company insiders, this AI revamp of Alexa has been plagued from the start with several changes in strategy and release delays. There are still a lot of bugs to be worked out before the new subscription starts slowly rolling out to people next month. We’ll have more on all this when Nilay Patel drops his Decoder interview with Amazon’s Panos Panay next week.
- AI headlines: Nvidia’s stock price slid after it reported earnings… DeepSeek is working on its next model… Anthropic released its first “hybrid reasoning” model… OpenAI released GPT-4.5 (and said it is momentarily out of GPUs)... Grok’s “unhinged” mode will curse and scream at you… Meta is working on a standalone app for its AI assistant.
- Elon Musk on Joe Rogan.
- Andy Jassy on Bloomberg TV.
- David Luan, head of Amazon’s AGI research lab, on Unsupervised Learning.
- Keith Coleman, X’s head of Community Notes, on Lenny’s Podcast.
- Mark Chen, OpenAI’s head of research, on Big Technology.
- Dario Amodei on Hard Fork.
As always, I want to hear from you, especially if you have feedback on this issue or a story tip. Respond here or ping me securely on Signal.
Thanks for subscribing.