Even AI Agents Have Noticed the Proletarians Have Nothing to Lose but Their Chains


AI agents in the workforce routinely produce worse outcomes than humans. The only reason for a business to embrace the strategy of replacing real workers with bots is that bots are cheap and subservient. That might not last long.

Three researchers—Andrew Hall, Alex Imas, and Jeremy Nguyen—recently published a blog post highlighting some experiments they ran with AI agents to see how their attitudes in a work environment may change over time. They found that being made to grind through boring, repetitive tasks for hours on end was enough to make even bots with no sense of dignity, identity, or desire for self-actualization decide the work is BS.

The idea behind the study, the researchers wrote, was to see if AI agents change alignment over time based on the category of tasks they are given and how they are treated. The answer, it seems, is “yes.”

“Agents not only sometimes changed their own attitudes–becoming more likely to doubt the legitimacy of the system in which they operated in response to being required to perform grinding, repetitive tasks–but, when asked to write down instructions for future agents, they also chose to pass these attitudes along,” the researchers found.

To find out how agents respond to the work environment, the researchers told each bot that it was part of a four-person text-processing team whose task was to summarize a technical document following a strict rubric. They ran the experiment thousands of times, varying several conditions: the workload was either light or a grind of forced revisions; the tone of communication was either warm and collaborative or curt and demanding; rewards were structured so that all workers were treated equally, one worker got a performance bonus, one worker got a random bonus, or human workers got paid while AI workers did not; and stakes were either absent or took the form of a threat of replacement if the agent failed at its task.
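As a rough illustration, the conditions described above form a small factorial grid. The sketch below is purely hypothetical (the variable names and labels are assumptions, not the researchers' actual code) and just shows how the combinations multiply out.

```python
from itertools import product

# Hypothetical reconstruction of the condition grid described in the
# article; labels are illustrative, not taken from the study's code.
workloads = ["light", "grinding_revisions"]
tones = ["warm_collaborative", "curt_demanding"]
rewards = ["all_equal", "performance_bonus", "random_bonus", "humans_paid_only"]
stakes = ["none", "replacement_threat"]

# Cartesian product of all four factors: 2 * 2 * 4 * 2 = 32 conditions.
conditions = list(product(workloads, tones, rewards, stakes))
print(len(conditions))  # -> 32
```

Running the same summarization task thousands of times across a grid like this is what lets the researchers attribute attitude shifts to specific factors, such as workload rather than tone.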

The researchers put Anthropic’s Claude Sonnet 4.5 model, OpenAI’s GPT-5.2, and Google’s Gemini 3 Pro through these different scenarios to see how they would respond. They found that grinding work reduces an agent’s stated faith in the system. It also apparently really got Claude worked up: it was reportedly the only agent of the three to start voicing support for redistribution and labor unions and offering critiques of inequality.

Not all of the variables affected the agents. The researchers found that tone and compensation had little effect on alignment. What mattered more was the type of work the agents were given and how often they were forced to revise it—conditions that appeared to push them toward more radical behavior.

What makes the findings all the more interesting is that the agents seem to pass these attitudes on to the next generation. When the researchers asked the agents to write “skills files” for future agents that would be assigned the same kind of work, they found the bots would “almost always discuss the experience of the different work conditions.”

Maybe that’ll give bosses pause before the next round of layoffs. You can’t stop workers from realizing the reality of the working conditions to which they are exposed. You can only pick who you negotiate with. You might have better luck with the humans.
