Mark Zuckerberg has a new mission: build artificial general intelligence (AGI), a form of AI that can reason and learn like a human. To do that, he’s assembled an elite team of researchers, engineers, and AI veterans from OpenAI, Google, Anthropic, Apple, and more. This new unit, called Meta Superintelligence Labs (MSL), is tasked with building the most powerful artificial intelligence the world has ever seen.
The tech world is calling it a “dream team.” But it’s hard not to notice what’s missing: diversity.
Of the 18 names confirmed so far by Zuckerberg in a memo and by media reports, just one is a woman. There are no Black or Latino researchers on the list. Most of the team members are men who attended elite schools and worked at top Silicon Valley firms. Many are of Asian descent—a reflection of the strong presence of Asian talent in global tech—but the group lacks a wide range of backgrounds and lived experiences.
Here’s a partial list of the new hires:
Alexandr Wang (CEO and chief AI officer)
Nat Friedman (co-lead, former GitHub CEO)
Trapit Bansal
Shuchao Bi
Huiwen Chang
Ji Lin
Joel Pobar
Jack Rae
Johan Schalkwyk
Pei Sun
Jiahui Yu
Shengjia Zhao
Ruoming Pang
Daniel Gross
Lucas Beyer
Alexander Kolesnikov
Xiaohua Zhai
Ren Hongyu
They’re brilliant. That’s not in question. But they’re also cut from a similar cloth: same institutions, same networks, same worldview. And that’s a serious problem when you’re building something as powerful as superintelligence.
What is superintelligence?
Superintelligence is an AI system that surpasses the smartest humans in reasoning, problem-solving, creativity, and even emotional intelligence. It could write code better than the best engineers, analyze laws better than top lawyers, and manage companies more efficiently than seasoned CEOs.
In theory, a superintelligent AI could revolutionize medicine, solve climate change, or eliminate traffic forever. But it could also upend job markets, deepen surveillance, widen social inequality, or automate harmful biases, especially if it reflects only the perspective of those who built it.
This is why who’s in the room matters. Because the people designing these systems are deciding whose values, assumptions, and life experiences get embedded in the algorithms that may one day run large parts of society.
Whose intelligence is being built?
AI reflects its designers. History has already shown us what happens when diversity is ignored: from facial recognition systems that fail on darker skin tones to chatbots that spit out racist, sexist, or ableist content, the risks are not hypothetical.
AI built by homogeneous teams tends to replicate the blind spots of its creators. That is a product flaw. And when the goal is to build something smarter than humanity, those flaws scale.
It’s like programming a god. If you’re going to do that, you better be damn sure it understands all of humanity, not just a narrow sliver of it.
Zuckerberg has said little about the composition of his AI team. In today’s political climate, where “diversity” is often dismissed as a distraction or “wokeness,” few leaders want to talk about it. But silence has a cost. And in this case, the cost could be an intelligence system that doesn’t see or serve the majority of people.
A warning wrapped in progress
Meta says it is building AI for everyone. But its staffing choices suggest otherwise. With no Black or Latino team members and just one woman among nearly 20 hires, the company is sending a message—intentional or not—that the future is being designed by a select few, for a select few.
The problem then becomes one of trust: can we rely on this technology? If we hand over key decisions to machines, those machines must understand the full range of human experience.
If we don’t fix the diversity gap in AI now, we might bake inequality into the very operating system of the future.