The vibes are off (Part I)
Risks from vibe coding are growing as AI vendors aggressively push their coding agents.
Aaand we’re back. Last week we took a brief hiatus to focus on geopolitical developments in Zambia that disrupted our plans for the continent. Nevertheless, we persist and our work continues.
Vibe coding has a disastrous track record. While AI-assisted coding can speed up code generation, it has also unleashed toxic competitiveness disguised as productivity, along with a tsunami of digital vulnerabilities. This already dire situation is about to worsen as AI companies flood social media platforms with paid influencer placements and ads targeting novice coders.
First things first: what is vibe coding?
‘Vibe coding’ is a term coined by OpenAI co-founder Andrej Karpathy. It’s an approach where natural language prompts are used to generate working code instead of writing in a formal programming language. Karpathy described it as a form of coding where you “fully give in to the vibes, embrace exponentials, and forget that the code even exists”.
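In practice, a session looks something like this: the user types a plain-English request and the tool emits runnable code. The prompt and output below are our own hypothetical illustration, not taken from any particular tool.

```python
# Hypothetical prompt to a coding agent:
#   "write me a function that tells me if a year is a leap year"
#
# The kind of code such a tool might generate in response:
def is_leap_year(year: int) -> bool:
    """A year is a leap year if divisible by 4, except that
    century years must also be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2024))  # True
print(is_leap_year(1900))  # False (century year not divisible by 400)
```

The appeal is obvious: no syntax to learn, instant results. The catch, as the rest of this post shows, is that the person accepting the output often can’t judge whether it’s correct or safe.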
The vibes are off.
Vibe coding was supposed to be “chill”, but now that everyone is doing it, full-blown paranoia has set in among coders, Bloomberg reports.
The use of AI coding agents has created a sense of productivity paranoia among executives and engineers, with some feeling pressure to work longer hours and produce more code.
Engineers are experiencing “AI fatigue” due to the fear of missing out on the next big breakthrough, and some companies are tracking employees’ interactions with coding agents to measure productivity.
Vibe coders are out of their depth.
Vibe coding is heavily marketed to amateurs and non-coders, and the strongest argument against it is that the users generating and deploying code often lack the expertise to secure and test the output themselves. This over-reliance on the tools has led to disastrous outcomes: security failures, massive technical debt, production outages, and the list goes on.
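To make the “security failures” concrete: one of the oldest and most common classes of bug in generated code is SQL injection, where user input is pasted straight into a query string. The snippet below is our own minimal sketch (the table and inputs are made up for illustration), showing the vulnerable pattern next to the parameterized fix:

```python
import sqlite3

# Toy database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Insecure pattern often seen in generated code: the query is built
# by string interpolation, so attacker-controlled input becomes part
# of the SQL itself.
user_input = "' OR '1'='1"
query = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(query).fetchall())  # returns every row in the table

# Safe pattern: a parameterized query treats the input purely as data.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # returns no rows
```

An experienced reviewer spots the difference at a glance; a non-coder shipping whatever the agent produced typically does not.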
The situation is so bad that Georgia Tech’s Systems Software & Security Lab (SSLab) launched a research project last year called Vibe Security Radar that tracks security vulnerabilities introduced by AI-generated code. Speaking to Infosecurity, Hanqing Zhao, founder of Vibe Security Radar, said:
“Everyone is saying AI code is insecure, but nobody is actually tracking it. We want real numbers. Not benchmarks, not hypotheticals, real vulnerabilities affecting real users.”
In the same interview, Zhao admitted that the real number of vulnerabilities caused by AI coding tools “is almost certainly higher” but that “most of the AI tool traces have been stripped by the authors, so we can only confirm around 20 cases with clear AI signals.”
We can’t just blame the users. Employer pressure to use AI tools is intense, competition is fierce, and vendor marketing is overwhelming. Despite all the hype, the vibe coding tools themselves are hugely problematic.
Vibe coding tools have vulnerabilities the amateur coder can’t detect.
As of March 2026, Vibe Security Radar has tracked over 78 CVEs (Common Vulnerabilities and Exposures) across eight tools, with Anthropic’s Claude Code leading the pack in the number of CVEs detected.
Check Point Research found that files generated by Claude Code, even ones as benign as operational metadata, could trigger hidden commands, bypass built-in consent and trust safeguards, and expose the system to external threats.
An analysis of popular vibe coding agents by AI cybersecurity startup Tenzai found that all of them had a significant number of vulnerabilities across different applications.
“Codex (by OpenAI), Cursor and Replit tied for first place with a total of 13 vulnerabilities, while Claude Code came in last with 16 vulnerabilities. In addition to introducing the most vulnerabilities overall, Claude Code also had the highest number of critical-severity findings.”
Tenzai’s testing doesn’t include Lovable, the $6.6 billion vibe coding platform with 8 million users. According to The Next Web (TNW),
[Lovable] has faced three documented security incidents exposing source code, database credentials, and thousands of user records, with the most recent BOLA vulnerability left open for 48 days after the company closed a bug bounty report without escalation.
Last year, a subscriber broke the story on the Hacker News discussion board that Cursor’s AI agent had started kicking users off when they logged in from multiple machines, without any notification. Cursor is owned by Anysphere, a private AI company based in San Francisco. Its hallucinating support bot misinformed those who reached out, blaming a non-existent policy change.
And then there’s OpenClaw, the popular coding agent, that runs locally on a user’s machine.
Earlier this year, BitDefender reported on the internet-wide OpenClaw exposure.
Security researchers identified more than 135,000 internet-facing OpenClaw instances – a sharp jump from earlier counts reported the same day. This surge indicates that the platform’s footprint is expanding at an alarming rate.
Researchers also claim to have documented a major shift in the infostealer landscape, according to Infosecurity Magazine.
The broad permissions users grant OpenClaw to access sensitive data and systems, its insecure default settings, and its plaintext storage of secrets have raised eyebrows in the security community.
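Plaintext secret storage is worth unpacking, because it is a common footgun for locally running agents in general: anything written to disk in clear text is readable by any process running as the same user, and often by other users too if the file permissions are lax. The sketch below is generic, with made-up file and variable names, and does not describe OpenClaw’s actual layout:

```python
import os
import stat
from pathlib import Path

# Illustrative config path, not any real tool's file layout.
cfg = Path("agent_config.txt")

# Risky pattern: an API key written to disk in plaintext. On many
# systems the file is created world-readable by default.
cfg.write_text("api_key=sk-example-not-a-real-key\n")

# Partial mitigation: restrict the file to its owner (POSIX 0o600).
cfg.chmod(0o600)
print(oct(stat.S_IMODE(cfg.stat().st_mode)))  # 0o600 on POSIX systems

# Better: never persist the secret at all; read it from the
# environment (or an OS keychain) at runtime instead.
api_key = os.environ.get("AGENT_API_KEY")  # hypothetical variable name

cfg.unlink()  # clean up the demo file
```

None of this is exotic; it is exactly the kind of hygiene an experienced developer applies reflexively and a first-time vibe coder has never heard of.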
This is just a snapshot of the disastrous vulnerabilities of vibe coding. While it has lowered barriers for non-coders, it hasn’t eliminated the need for solid programming and technical skills.
In Part II of this post, we will share expert tips on the dos and don’ts of low-risk vibe coding, for good vibes only.


