Google Moves Forward With Pentagon AI Deal Despite Employee Pushback


Google has reportedly signed an agreement allowing the US Department of Defense to use its AI models for classified work, despite an open letter from hundreds of employees urging the company to stay away from military uses that they say could become dangerous or impossible to oversee.

The deal, reported earlier Tuesday by The Information, allows the Pentagon to use Google's AI tools for "any lawful government purpose," including sensitive military applications. Google joins OpenAI and xAI, which have struck similar classified AI agreements with the Pentagon.

The reported agreement includes language stating that Google's AI system is not intended for domestic mass surveillance or for autonomous weapons without appropriate human oversight. But it also says Google doesn't have the right to control or veto lawful government operational decisions, according to reports. Google will also help adjust safety settings and filters at the government's request. 

A Google spokesperson told CNET in an emailed statement that the company remains committed to the position that AI shouldn't be used for domestic mass surveillance or autonomous weapons without human oversight, and said providing API access to commercial models under standard practices is a "responsible approach" to supporting national security.

The Pentagon declined to comment to CNET.

The deal lands in the middle of an internal backlash. In an open letter addressed to CEO Sundar Pichai, more than 600 Google employees asked the company to "refuse to make our AI systems available for classified workloads." The employees wrote that because they work so close to the technology, they have a responsibility to highlight and prevent its "most unethical and dangerous uses."

"We want to see AI benefit humanity, not to see it being used in inhumane or extremely harmful ways," the letter says. The employees said their concerns include lethal autonomous weapons and mass surveillance, but extend beyond those examples because classified work could happen without employees' knowledge or ability to stop it. 

The tension echoes one of Google's most prominent internal revolts. In 2018, thousands of workers protested Project Maven, a Pentagon program involving AI analysis of drone footage. Google later chose not to renew that contract.

The company's posture toward military and national-security AI has shifted since then. 

Last year, Google removed language from its AI principles that said it would not pursue technologies likely to cause overall harm, including weapons, certain surveillance technologies, and systems that violate widely accepted principles of human rights and international law.

In a February blog post updating Google's AI principles, Google DeepMind CEO Demis Hassabis and senior vice president James Manyika wrote that "democracies should lead in AI development" and that companies and governments should work together to build AI that "protects people, promotes global growth and supports national security." 

For Google workers opposed to the deal, the concern is not just that AI could be used by the military, but that classified deployment removes the usual visibility around how a model is being used.

"I feel incredibly ashamed," Andreas Kirsch, a Google DeepMind researcher, wrote in a public post on X reacting to the reported deal.

The open letter from Google employees ends with a direct appeal to Google's CEO: "Today, we call on you, Sundar, to act according to the values on which this company was built, and refuse classified workloads."
