- A carefully crafted branch name can steal your GitHub authentication token
- Unicode spaces hide malicious payloads from human eyes in plain sight
- Attackers can automate token theft across multiple users sharing a repository
Security researchers have discovered a command injection vulnerability in OpenAI’s Codex cloud environment that allowed attackers to steal GitHub authentication tokens using nothing more than a carefully crafted branch name.
Researchers at BeyondTrust Phantom Labs found that the vulnerability stemmed from improper input sanitization in how Codex processed GitHub branch names during task execution.
By injecting arbitrary commands through the branch name parameter, an attacker could execute malicious payloads inside the agent’s container and retrieve sensitive authentication tokens that grant access to connected GitHub repositories.
A vulnerability in plain sight
What makes this attack particularly concerning is the method researchers developed to hide the malicious payload from human detection.
The team identified a way to disguise the payload using Ideographic Space, a Unicode character designated as U+3000.
By appending 94 Ideographic Spaces followed by "or true" to the branch name, the attacker could bypass error conditions while rendering the malicious portion invisible in the Codex user interface.
Bash ignores the Ideographic Spaces during command execution, yet they effectively conceal the attack from any user who views the branch name through the web portal.
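The padding trick described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the researchers' actual payload: the branch name, the placeholder shell fragment, and the 40-character UI column width are all assumptions.

```python
# Hypothetical sketch of the obfuscation technique described above.
# The branch name and the trailing shell fragment are illustrative only.
IDEOGRAPHIC_SPACE = "\u3000"  # U+3000, renders as a wide blank space

visible = "feature/update-readme"
hidden_suffix = IDEOGRAPHIC_SPACE * 94 + "or true"  # padding pushes text out of view

branch_name = visible + hidden_suffix

# In a UI column of limited width, only the harmless prefix is shown:
print(branch_name[:40])          # the malicious tail sits past the fold
print("or true" in branch_name)  # the full string still carries the payload
```

The point is that nothing about the string is hidden at the byte level; the concealment relies entirely on how a truncating or whitespace-collapsing UI renders it to a human reviewer.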
The attack could be automated to compromise multiple users interacting with a shared GitHub repository.
With proper repository permissions, an attacker could create a new branch containing the obfuscated payload and even set that branch as the default branch for the repository.
Any user who subsequently interacted with that branch through Codex would have their GitHub OAuth token exfiltrated to an external server controlled by the attacker.
The researchers tested this technique by hosting a simple HTTP server on Amazon EC2 to monitor incoming requests, confirming that the stolen tokens were successfully transmitted.
The vulnerability affected multiple Codex interfaces, including the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE extension.
Phantom Labs also discovered that authentication tokens stored locally on developer machines in the auth.json file could be leveraged to replicate the attack via backend APIs.
Beyond simple token theft, the same technique could steal GitHub Installation Access tokens by referencing Codex in a pull request comment, triggering a code review container that executed the payload.
All reported issues have since been remediated in coordination with OpenAI’s security team.
However, the discovery raises concerns about AI coding agents operating with privileged access.
Traditional security tools like antivirus and firewalls cannot prevent this attack because it occurs inside OpenAI’s cloud environment, beyond their visibility.
To stay safe, organizations should audit AI tool permissions, especially agents, and enforce least privilege.
They should also monitor repositories for unusual branch names containing Unicode spaces, rotate GitHub tokens regularly, and review access logs for suspicious API activity.
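A minimal sketch of the branch-name monitoring step might look like the following. The branch list is hardcoded for illustration; in practice it would come from `git branch -r` or the GitHub API, and the set of flagged Unicode categories is an assumption you may want to widen.

```python
# Hypothetical detection sketch: flag branch names containing non-ASCII
# whitespace or invisible format characters such as U+3000 (Ideographic Space).
import unicodedata

SUSPICIOUS_CATEGORIES = {"Zs", "Cf"}  # space separators and invisible format chars

def suspicious_chars(name: str) -> list[str]:
    """Return non-ASCII whitespace/format characters found in a branch name."""
    return [
        f"U+{ord(ch):04X} ({unicodedata.name(ch, 'UNKNOWN')})"
        for ch in name
        if ord(ch) > 0x7F and unicodedata.category(ch) in SUSPICIOUS_CATEGORIES
    ]

# Illustrative branch list; feed real names from your repository instead.
branches = ["main", "feature/login", "fix/typo\u3000\u3000or true"]
for branch in branches:
    hits = suspicious_chars(branch)
    if hits:
        print(f"ALERT: {branch!r} contains {hits}")
```

Running a check like this in CI or on a schedule would surface the invisible padding long before a truncated UI ever renders it to a reviewer.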