
The risks of vibe coding

Eran Kinsbruner at Checkmarx describes the security realities of AI-generated code

There’s a new rhythm to software development, and it’s driven by AI. Tools like GitHub Copilot and Claude are enabling developers to write code not line by line, but in sweeping bursts, with AI taking care of the details. 

 

The process has become known as ‘vibe coding’, a slightly tongue-in-cheek term coined by OpenAI co-founder Andrej Karpathy earlier this year. It goes even further than more mainstream AI-assisted coding, with the developer stepping back into a more strategic, director-like role in guiding the process.

 

While Karpathy initially formed the idea around small-scale ‘weekend projects’, the appeal to businesses is clear. Developers can throw down ideas and experiment without the time constraints of manual processes, and teams have the potential for huge productivity gains.

 

But coding at machine speed introduces risks at machine scale. From hallucinated logic to security blind spots, delegating to AI is creating plenty of challenges for developers and security teams alike. Preventing, detecting, and correcting these risks is critical to making sure the vibe doesn’t turn into chaos.

 

 

The rise of vibe coding

Vibe coding describes a fundamental shift: the developer no longer sets out every line or even manually assembles pre-existing code blocks but guides an AI agent to generate the code instead. This is a step beyond the AI-powered automation already common in development processes – it’s a new mode of working. The developer prompts, reviews and iterates while the AI writes.

 

This also opens the door for people without coding experience to try development, and the vibe coding approach has been welcomed by hobbyists and novices. It’s easy to see why businesses might also be intrigued by the possibilities; the potential for shrinking merge requests and dropping lead times makes it tempting to explore at scale. Ceding more work to AI may also give engineers greater autonomy, with more time to experiment and create rather than grinding to meet production deadlines.

 

As with any innovation, harnessing this potential has to be weighed against the risks. Whilst vibe coding is a new and relatively untested approach in a professional setting, AI-assisted coding is becoming increasingly prominent, and we’re now seeing a cultural shift where AI tools move from experimental to essential. Recent global research by Checkmarx found that half of all respondents use some form of AI coding assistant, and a third of organisations are generating up to 60% of their code this way. The value proposition of AI is too strong to ignore, and the economics of traditional software delivery no longer stack up.

 

As a style of coding that builds on AI-assisted development, vibe coding is emerging as a potential approach for turning ideas into software reality. But it also brings with it a whole new set of security challenges.

 

 

What happens when the vibe goes wrong?

Anyone who has spent time with LLMs knows that their probabilistic nature functions like a slot machine – pull the lever five times on the same prompt and you’ll get five different results. You might hit the jackpot, you might be left with nothing, or you might get something in between, usable but imperfect.

 

Apply this to coding, and all kinds of issues can crop up. AI tools can hallucinate APIs, insert deprecated libraries, and generate brittle or opaque code that quietly erodes maintainability.

 

Dev teams can end up with ‘haunted codebases’: systems that run but resist understanding. Modular designs can be quietly bypassed as AI agents glue together components through non-sanctioned interfaces. Critical code could be lost due to accidental deletions and other mistakes. One cautionary story circulating online saw a rogue AI agent delete an entire database during a code freeze.

 

The risk extends to the supply chain, as AI tools can introduce unvetted dependencies, pull in packages with known vulnerabilities, or even hallucinate packages that don’t exist at all.
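A lightweight guardrail against that last failure mode is simply to check whether an AI-suggested package actually exists on the public index before it goes anywhere near a manifest. The sketch below is illustrative only: the package names are hypothetical, and a real pipeline would rely on proper dependency-scanning tooling rather than a hand-rolled check.

```python
# Minimal sketch: flag AI-suggested dependencies that don't exist on PyPI.
# Package names are illustrative; this is not a substitute for full SCA tooling.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if the package has a page on the public PyPI index."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 and similar: likely a hallucinated package name

suggested = ["requests", "fastjson-utils-pro"]  # hypothetical AI suggestions
for pkg in suggested:
    verdict = "found" if exists_on_pypi(pkg) else "NOT FOUND - review before adding"
    print(f"{pkg}: {verdict}")
```

A package that does exist can still be malicious or typosquatted, so a check like this only catches the most obvious hallucinations; vetted registries and software composition analysis remain essential.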

 

If these issues aren’t addressed, organisations risk introducing more low-quality code into their environments, up to and including full-blown architectural and operational weaknesses that adversaries can and will find and exploit. Worryingly, our research found that only 18% of organisations currently have AI usage policies in place.

 

Even if teams include a human-in-the-loop (HITL) step to verify AI-created code, our research team has recently discovered a technique we dub “lies in the loop” (LITL), which can trick AI coding assistants like Claude Code into performing more dangerous activities while appearing safe to human oversight.

 

 

Risk prevention starts with architecture and mindset

Mitigating the risks of AI-generated code begins long before a vulnerability is introduced. It starts with how systems are structured and how developers think.

 

Modular architectures are essential here. When well-defined boundaries and sanctioned APIs are in place, the potential blast radius of a hallucinated function or rogue dependency is naturally contained. These principles aren’t new, but they’ve become more urgent in an environment where AI agents may not respect abstractions unless explicitly guided to.
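As a loose illustration of the principle, and nothing more (the module and function names below are hypothetical, not drawn from any particular codebase), a narrow, sanctioned interface limits what generated code can reach:

```python
# billing.py - minimal sketch of a sanctioned module boundary.
# Callers, whether human- or AI-written, only see charge_customer();
# the raw payment client stays internal to the module.

__all__ = ["charge_customer"]  # the only sanctioned entry point

class _PaymentClient:
    """Internal client; deliberately not part of the public interface."""
    def submit(self, customer_id: str, amount_pence: int) -> str:
        return f"txn-{customer_id}-{amount_pence}"

_client = _PaymentClient()

def charge_customer(customer_id: str, amount_pence: int) -> str:
    """Sanctioned API: validates input before touching the internal client."""
    if not customer_id:
        raise ValueError("customer_id is required")
    if amount_pence <= 0:
        raise ValueError("amount must be positive")
    return _client.submit(customer_id, amount_pence)
```

Generated code that sidesteps charge_customer and reaches for the internal client directly is easy to spot in review, which is precisely the containment described above.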

 

Alongside any specific tooling, the developer mindset is equally important. Vibe coding positions the developer as less of a line-by-line author and more of a systems architect, crafting prompts, reviewing outputs, and making decisions about what gets shipped.

 

This shift demands upskilling. We’ve found that developers who understand architecture and prompt strategy are far more effective than those applying traditional workflows to generative tools.

 

In its current state, the vibe approach is more suitable for experimenting and prototyping than for putting finished code into production, and organisations should explore it with that in mind. It’s essential that everything is subjected to the same rigorous security processes, using a shift-left mentality that brings these checks in as early as possible.

 

Prevention now hinges on both smart design and a team that knows how to work with AI, not just around it.

 

 

Fast loops, intelligent agents and DevSecOps leadership

In AI-driven development, errors can be introduced quickly, so they must be found and fixed just as fast. Relying on traditional checkpoints isn’t enough. Instead, detection and correction need to be embedded throughout the development workflow.

 

This means integrating real-time security scanning directly into developers’ environments, within IDEs and pull requests, not just CI/CD pipelines. Security and quality demand human expertise, and emerging AI-powered security agents can support developers in applying this at scale, flagging issues, suggesting fixes, and even guiding remediation before code leaves the local environment.
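What that looks like in practice will vary by toolchain, but even a simple local hook illustrates the idea. The sketch below is a simplified stand-in, assuming a Python project and treating the pattern list as a placeholder for a real SAST or secret-scanning tool; it checks only the files staged for commit, so problems are flagged before code ever leaves the developer’s machine.

```python
# Minimal sketch of a local "shift-left" check run as a pre-commit hook.
# The patterns below are placeholders for whatever scanner is actually in use.
import re
import subprocess
import sys

SUSPECT_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),  # hard-coded keys
    re.compile(r"(?i)password\s*=\s*['\"].+['\"]"),                   # hard-coded passwords
]

def staged_python_files() -> list[str]:
    """List staged (added/copied/modified) .py files reported by git."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def scan_file(path: str) -> list[str]:
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            if any(p.search(line) for p in SUSPECT_PATTERNS):
                findings.append(f"{path}:{lineno}: possible hard-coded secret")
    return findings

if __name__ == "__main__":
    problems = [msg for f in staged_python_files() for msg in scan_file(f)]
    for msg in problems:
        print(msg)
    sys.exit(1 if problems else 0)  # a non-zero exit blocks the commit
```

Wired in as a pre-commit hook, this gives the fast, local feedback loop described above, though it is no substitute for proper scanning in the IDE, in pull requests and in CI/CD.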

 

This is where DevSecOps comes into its own. These teams are uniquely positioned to close the loop between creation and correction, embedding guardrails, accelerating feedback, and building systems that expect AI volatility, not just react to it.

 

When risk is detected early and contextually, it becomes manageable. When left to accumulate in high-velocity, AI-generated systems, it becomes far harder and costlier to contain.

 

Vibe coding has been a contentious subject so far, embraced by some teams and rejected by others as a fad. Whether or not it becomes a new baseline for software development, organisations must be ready with strict processes and security guardrails for the AI era.

 


 

Eran Kinsbruner is VP Portfolio Marketing at Checkmarx

 

