When Collins Dictionary announced its 2025 ‘Word of the Year’, many were surprised to see vibe coding take the top spot.
The term describes using AI tools to build software through prompts rather than traditional coding, a practice that has surged as large language models have become more accessible.
The rise of vibe coding brings real promise. It can open programming to a wider audience, build tech literacy and eliminate repetitive work. But it also comes with significant risks, particularly for users who do not fully understand the code being generated on their behalf.
The core issue is simple. Running untrusted or unvetted code can expose systems to serious security threats, whether through subtle vulnerabilities introduced without the user noticing or the accidental execution of malicious code.
The risks of unvetted code
Traditional coders, especially in a business context, bring not only a comprehensive knowledge of software development but also of the specific systems they are writing code for.
They understand the code they produce and exactly what it does on a machine. The traditional process also includes rigorous testing, code reviews and security checks before anything is deployed.
While the time and cost savings of vibe coding can be appealing, they often come at the expense of the expertise and oversight that traditional coding offers. AI-generated code, for example, is often generic, even when built from extensive prompts.
LLMs lack the context of a business's specific cybersecurity, identity management and data protection policies and protocols, and may inadvertently violate them.
In some cases, unvetted code can also expose sensitive credentials or open vulnerabilities in a system without an amateur developer even realizing.
In fact, according to recent research from Cornell University, 25-30% of 733 code snippets generated by a popular LLM contained serious security flaws, spanning 43 different Common Weakness Enumeration (CWE) categories that attackers could easily exploit.
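To make the point concrete, here is a hypothetical Python sketch of the kind of code an AI assistant could plausibly produce: it works, but it hard-codes a credential and builds a SQL query from raw user input, both of which map to well-known CWE categories. Every name in it is invented for illustration.

```python
import sqlite3

# Hypothetical snippet showing two common weaknesses in AI-generated code:
# a hard-coded credential (CWE-798) and SQL built by string interpolation
# (CWE-89, SQL injection).
API_KEY = "sk-live-example-do-not-ship"  # secret embedded directly in source

def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is pasted straight into the SQL text.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safely(conn: sqlite3.Connection, username: str):
    # Safer: a parameterised query keeps user input out of the SQL statement.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```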
Supply chain attacks and ‘poisoned’ code
While code generated by LLMs may not always contain vulnerabilities or malicious elements, it is not automatically safe. Many AI models are trained on public code repositories and can unknowingly draw on external functions from those sources.
Attackers are well aware of this. By targeting publicly accessible repositories that LLMs or other AI tools are likely to scrape, they can compromise vast numbers of AI-generated code snippets at once. Even projects that appear safe can be affected if their code libraries originate from manipulated or tampered-with sources.
If an AI model unknowingly sources ‘poisoned’ code, that code can be replicated across thousands of projects within seconds.
Depending on how widely the code has been deployed, the damage could be substantial, ranging from harvesting sensitive data to deploying malware such as Remote Access Tools or ransomware, or even lying dormant in systems until activated by an attacker.
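One way to blunt this kind of attack is to verify third-party code before it ever runs. The Python sketch below is a minimal illustration rather than a prescribed tool: it checks a downloaded artefact against a hash recorded out of band from a trusted source, so a library that has been tampered with upstream fails loudly instead of being silently imported. The file names in the usage comment are placeholders.

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> None:
    """Raise if the file at `path` does not match a hash recorded out of band
    from a trusted source (for example, the maintainer's signed release notes)."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"{path} failed integrity check: got {digest}")

# Example usage (placeholder names); the expected hash should come from your
# own records, not from the same place you downloaded the file:
# verify_artifact("third_party/somelib-1.2.3.tar.gz", EXPECTED_HASH_FROM_RECORDS)
```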
Can vibe coding ever be secure?
Vibe coding offers clear advantages, such as faster development and deployment, but businesses must still approach it with the same level of caution they would apply to any new technology.
Human oversight, for instance, remains essential, and boardrooms, compliance teams and IT leaders should require thorough reviews of all AI-generated code with no exceptions. Code produced by AI must be examined with the same rigor as human-written code, regardless of how complete or accurate the prompt may appear.
Data security is another critical consideration. Inputting confidential or proprietary information into AI tools, especially public ones, significantly increases the risk of exposure.
To minimize this, teams should rely on private, sandboxed LLMs trained on trusted internal data wherever possible. Code libraries should also be sourced internally or, when external options are required, drawn from official repositories that are actively monitored for unauthorized changes.
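As a minimal sketch of that policy, assuming a plain requirements.txt workflow, the check below rejects any dependency that is not on an internally approved list before anything is installed. The package names here are purely illustrative, and a real setup would more likely rely on a private package index or hash-pinned lockfiles.

```python
# Illustrative pre-install gate: reject any dependency not on an internally
# approved list, so a typosquatted or unknown package name suggested by an
# AI tool never reaches `pip install`.
APPROVED_PACKAGES = {"requests", "flask", "sqlalchemy"}  # example allowlist

def unapproved_dependencies(path: str = "requirements.txt") -> list[str]:
    rejected = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Take the bare package name, ignoring version pins and extras.
            name = line.split("==")[0].split("[")[0].strip().lower()
            if name not in APPROVED_PACKAGES:
                rejected.append(name)
    return rejected

if __name__ == "__main__":
    bad = unapproved_dependencies()
    if bad:
        raise SystemExit(f"Unapproved dependencies: {', '.join(bad)}")
```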
Access control provides an additional layer of protection. AI-generated code should be granted only the permissions necessary for it to function, and businesses should adopt modern identity management practices based on Zero Trust principles.
This includes explicit verification for every identity and the removal of access rights once they are no longer needed. By limiting permissions in this way, even if malicious code is deployed, its ability to move through systems or access sensitive data becomes significantly restricted.
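A small illustration of that principle, with invented table and function names: the sketch below hands AI-generated reporting code a read-only database handle, so even if the generated snippet were malicious it would have no ability to alter or delete data.

```python
import sqlite3

def open_readonly(db_path: str) -> sqlite3.Connection:
    # The "mode=ro" URI flag tells SQLite to refuse every write operation.
    return sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)

def run_generated_report(conn: sqlite3.Connection) -> int:
    # Imagine this body came from a prompt; any INSERT, UPDATE or DELETE it
    # attempted would fail because the handle is read-only.
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```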
Vibe coding is here to stay
Love it or loathe it, vibe coding is here to stay. It can speed up development, make coding more accessible to non-technical teams and deliver meaningful savings in time and cost. It is no surprise that many businesses want to take advantage of it.
But without care, vibe coding can also increase exposure to cyber risks. Organizations need to balance experimentation with strong oversight, policies and thorough review, understanding where vibe coding adds value and where the risks outweigh the reward.
AI can write code at remarkable speed, yet only humans can verify that the output is safe. In some situations, traditional coding or expert intervention will still be the smarter choice. Vibe coding may offer convenience, but it is not always worth the risk.