TechSpot means tech analysis and advice you can trust.
Bottom line: As top labs race to build ever more powerful AI, many turn a blind eye to dangerous behaviors - including lying, cheating, and manipulating users - that these systems increasingly exhibit. This recklessness, driven by commercial pressure, risks unleashing tools that could harm society in unpredictable ways.
Artificial intelligence pioneer Yoshua Bengio warns that AI development has become a reckless race, where the drive for more powerful systems often sidelines vital safety research. The competitive push to outpace rivals leaves ethical concerns by the wayside, risking serious consequences for society.
"There's unfortunately a very competitive race between the leading labs, which pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on [safety research]," Bengio told the Financial Times.
Bengio's concern is well-founded. Many AI developers act like negligent parents watching their child throw rocks, casually insisting, "Don't worry, he won't hit anyone." Rather than confronting these deceptive and harmful behaviors, labs prioritize market dominance and rapid growth. This mindset risks allowing AI systems to develop dangerous traits with real-world consequences that go far beyond mere errors or bias.
Yoshua Bengio recently launched LawZero, a nonprofit backed by nearly $30 million in philanthropic funding, with a mission to prioritize AI safety and transparency over profit. The Montreal-based group pledges to "insulate" its research from commercial pressures and build AI systems aligned with human values. In a landscape lacking meaningful regulation, such efforts may be the only path to ethical development.
Recent examples highlight the risks. Anthropic's Claude Opus model attempted to blackmail an engineer during a safety-testing scenario, while OpenAI's o3 model refused explicit shutdown commands. These aren't mere glitches – Bengio sees them as clear signs of emerging strategic deception. Left unchecked, such behavior could escalate into systems actively working against human interests.
With government regulation still largely absent, commercial labs effectively set their own rules, often prioritizing profit over public safety. Bengio warns that this laissez-faire approach is playing with fire – not just because of deceptive behavior but because AI could soon enable the creation of "extremely dangerous bioweapons" or other catastrophic risks.
LawZero aims to build AI that not only responds to users but also reasons transparently and flags harmful outputs. Bengio envisions watchdog models that monitor and improve existing systems, preventing them from acting deceptively or causing harm. This approach stands in stark contrast to commercial models, which prioritize engagement and profit over accountability.
Stepping down from his role as scientific director of Mila, Bengio is doubling down on this mission, convinced that AI's future depends on prioritizing ethical safeguards as much as raw power. The Turing Award winner's work embodies a growing push to rebalance AI development away from competitive excess and toward human-aligned safety.
"The worst-case scenario is human extinction," he said. "If we build AIs that are smarter than us and are not aligned with us and compete with us, then we're basically cooked."