AI-Generated Code Is Here to Stay. Are We Less Safe as a Result?


Coding in 2025 isn’t about toiling over fragments or spending long hours on debugging. It’s a whole ’nother vibe. AI-generated code stands to become the majority of the code in future products, and it is already an essential part of the modern developer’s toolkit. Known as “vibe coding”, the use of code generated by tools like GitHub Copilot, Amazon CodeWhisperer and ChatGPT will be the norm, not the exception, for reducing build time and increasing efficiency. But does the convenience of AI-generated code carry a darker risk? Does generative AI introduce vulnerabilities into security architecture, or are there ways for developers to “vibe code” safely?

“Security incidents as a result of vulnerabilities in AI-generated code are one of the least discussed topics today,” Sanket Saurav, founder of DeepSource, said. “There’s still a lot of code generated by platforms like Copilot or ChatGPT that doesn’t get human review, and security breaches can be catastrophic for companies that are affected.”

The developer of an open-source platform that employs static analysis for code quality and security, Saurav cited the 2020 SolarWinds hack as the kind of “extinction event” that companies could face if they haven’t installed the right security guardrails when using AI-generated code. “Static analysis enables identification of insecure code patterns and bad coding practices,” Saurav said.
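
For a rough sense of what such a tool catches, here is a minimal sketch (not DeepSource’s implementation) that uses Python’s built-in ast module to flag calls to eval() and exec(), two classic code-injection sinks. Production analyzers apply far richer rule sets, but the principle is the same: inspect the syntax tree for insecure patterns before the code ever runs.

```python
import ast

INSECURE_CALLS = {"eval", "exec"}  # classic code-injection sinks


class InsecurePatternVisitor(ast.NodeVisitor):
    """Walks a parsed syntax tree and records suspicious call sites."""

    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        # Flag direct calls to eval()/exec(), wherever they appear.
        if isinstance(node.func, ast.Name) and node.func.id in INSECURE_CALLS:
            self.findings.append(
                (node.lineno, f"call to {node.func.id}() on possibly untrusted input")
            )
        self.generic_visit(node)


def scan(source: str):
    visitor = InsecurePatternVisitor()
    visitor.visit(ast.parse(source))
    return visitor.findings


if __name__ == "__main__":
    sample = "user_input = input()\nresult = eval(user_input)\n"
    for lineno, message in scan(sample):
        print(f"line {lineno}: {message}")
```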

Attacked Through The Library

Security threats to AI-generated code can take inventive forms, and libraries are a favorite target. Libraries are reusable pieces of code that developers pull in to save time when writing.

They solve routine programming tasks, such as managing database interactions, and spare programmers from having to rewrite that code from scratch.

One such threat is known as “hallucinations”, where AI-generated code references fictional libraries that don’t actually exist. A more recent line of attack, called “slopsquatting”, builds directly on this: attackers register malicious packages under those hallucinated names, so developers who install them unknowingly pull attacker-controlled code into their projects.
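
To make the risk concrete, here is a minimal defensive sketch, with the package name “fastjsonx” invented purely for illustration, that checks whether each dependency in a generated list actually resolves on PyPI’s public JSON API before anything gets installed.

```python
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if the package name resolves on PyPI's JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown package: possibly hallucinated
            return False
        raise


# "fastjsonx" is a made-up stand-in for a hallucinated dependency.
for dep in ["requests", "fastjsonx"]:
    status = "found" if package_exists_on_pypi(dep) else "NOT FOUND: review before installing"
    print(f"{dep}: {status}")
```

Existence alone proves little, of course: a slopsquatter may already have registered the hallucinated name, so unfamiliar dependencies still deserve a manual look at their provenance and maintainers.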

Addressing these threats head-on might require more mindfulness than the term “vibe coding” suggests. Speaking from his office at the Université du Québec en Outaouais, Professor Rafael Khoury has been closely following developments in the security of AI-generated code, and he is confident that new techniques will improve its safety.

In a 2023 paper, Khoury investigated what happened when ChatGPT was asked to produce code without any further context or information, a practice that led to insecure code. Those were the early days of ChatGPT, and Khoury is now optimistic about the road ahead. “Since then, there’s been a lot of research under review right now, and the future is looking at a strategy for using the LLM that could lead to better results,” Khoury said, adding that “the security is getting better, but we’re not in a place where we can give a direct prompt and get secure code.”

Khoury went on to describe a promising study in which researchers generated code and then fed it to a tool that analyzes it for vulnerabilities. The tool’s method is referred to as Finding Line Anomalies with Generative AI, or FLAG for short.

“These tools send FLAGs that might identify a vulnerability in line 24, for example, which a developer can then send back to the LLM with the information and ask it to look into it and fix the problem,” he said. 

Khoury suggested that this back-and-forth might be crucial to fixing code that’s vulnerable to attack. “This study suggests that with five iterations, you can reduce the vulnerabilities to zero.”
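
The study’s exact pipeline isn’t reproduced here, but the loop Khoury describes might look roughly like the sketch below, in which find_vulnerabilities and llm_fix are deliberately toy stand-ins for a FLAG-style analyzer and a real LLM call.

```python
import re

MAX_ITERATIONS = 5  # the study Khoury cites reached zero findings within five passes


def find_vulnerabilities(code: str) -> list[tuple[int, str]]:
    """Toy stand-in for a FLAG-style analyzer: flags lines that call eval()."""
    return [
        (lineno, "call to eval() on possibly untrusted input")
        for lineno, line in enumerate(code.splitlines(), start=1)
        if re.search(r"\beval\(", line)
    ]


def llm_fix(code: str, findings: list[tuple[int, str]]) -> str:
    """Toy stand-in for the repair step; a real loop would prompt an LLM with
    the flagged line numbers and descriptions and ask it to fix them."""
    return code.replace("eval(", "ast.literal_eval(")


def iterative_repair(code: str) -> str:
    for _ in range(MAX_ITERATIONS):
        findings = find_vulnerabilities(code)
        if not findings:
            return code  # no flags left
        code = llm_fix(code, findings)  # feed the flags back, as Khoury describes
    return code  # may still contain findings after the iteration budget


print(iterative_repair("import ast\nvalue = eval(user_input)\n"))
```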

That said, the FLAG method isn’t without its problems: it can give rise to both false positives and false negatives. There are also limits on the length of code that LLMs can create in one pass, and stitching fragments together can add another layer of risk.

Keeping the Human in the Loop

Some players in the “vibe coding” space recommend fragmenting code and ensuring that humans stay front and center for the most important edits to a codebase. “When writing code, think in terms of commits,” said Kevin Hou, head of product engineering at Windsurf, extolling the wisdom of bite-sized pieces.

“Break up a large project into smaller chunks that would normally be commits or pull requests. Have the agent build the smaller scale, one isolated feature at a time. This can ensure the code output is well tested and well understood,” he added. 
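
Hou’s advice translates naturally into a simple harness. In the illustrative sketch below, agent_build_feature is a hypothetical stand-in for whatever coding agent is in use; the loop builds one isolated feature at a time, runs the test suite, and only commits when the tests are green.

```python
import subprocess


def agent_build_feature(description: str) -> None:
    """Hypothetical coding-agent call that edits files for one small feature."""
    raise NotImplementedError  # stand-in for Windsurf, Copilot, etc.


def tests_pass() -> bool:
    """Run the project's test suite; commit only on green."""
    return subprocess.run(["pytest", "-q"]).returncode == 0


features = [
    "add input validation to the signup form",
    "hash passwords before storing them",
]

for feature in features:
    agent_build_feature(feature)  # one isolated feature at a time
    if not tests_pass():
        raise SystemExit(f"tests failed after {feature!r}; review before continuing")
    # A human should review the diff here before it becomes a commit.
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", feature], check=True)
```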

At the time of writing, Windsurf has generated over 5 billion lines of AI-generated code (including under its previous name, Codeium). Hou said the most pressing question the company was answering was whether the developer was cognizant of the process.

“The AI is capable of making lots of edits across lots of files simultaneously, so how can we make sure that the developer is actually understanding and reviewing what is going on rather than just blindly accepting everything?” Hou asked, adding that they had invested heavily in Windsurf’s UX “with a ton of intuitive ways to stay fully in lock-step with what the AI is doing, and to keep the human fully in the loop.”

That is why, as “vibe coding” becomes more mainstream, the humans in the loop have to be more alert to its vulnerabilities. From “hallucinations” to “slopsquatting”, the threats are real, but so are the solutions.

Emerging tools like static analysis, iterative refinement methods like FLAG, and thoughtful UX design show that security and speed don't have to be mutually exclusive. 

The key lies in keeping developers engaged, informed, and in control. With the right guardrails and a “trust but verify” mindset, AI-assisted coding can be both revolutionary and responsible.
