Generative AI has emerged as the next big thing that will transform the way we build software. Its impact will be as significant as that of open source, mobile devices, and cloud computing; indeed, of the internet itself. We’re seeing Generative AI’s impacts already, and according to the recent Gartner Hype Cycle™ for Artificial Intelligence, AI may ultimately be able to automate as much as 30% of the work done by developers.
AI coding assistants like GitHub Copilot can be a considerable force multiplier for programmers. Early analysis by GitHub showed that use of Copilot could increase overall productivity by 50%, deployments by 25%, code commits by 45%, and merge requests by 35%. GitHub also found that use of Copilot improved quality through faster unit testing while reducing code errors and merge conflicts. It also increased overall developer satisfaction, and its conversational interface made coding more accessible.
That developers are eager to adopt AI coding assistants isn’t a huge surprise. They’ve been using IDEs with autocomplete for the last 20 years. Given that, who wouldn’t want to write a few lines of code and let AI finish the job?
While the potential productivity gains of AI coding assistants may be irresistible for developers, that doesn’t mean that teams get a free lunch. AI tools are improving rapidly, but a number of risks remain. The large language models (LLMs) these tools are built on are trained on millions of lines of code in the public domain. But what code? Good code? Bad code? The answer is both, and as a result, these tools are prone to reproducing the same quality and security defects found in the code they were trained on.
This doesn’t mean that AI can’t generate good code. Studies analyzing Copilot show that, in general, it did well at avoiding certain types of security weaknesses (CWEs).
These defects are often easier to detect because they are the result of flaws in the syntax of a programming language. Other, more complicated security defects are another story. Copilot was less effective at avoiding vulnerabilities that result from the way an application interacts with data and external inputs, such as injection flaws, where untrusted input changes the behavior of a query or command.
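To make that distinction concrete, here is a minimal, hypothetical Python sketch of the data-handling class of defect described above. The table schema and function names are illustrative, not drawn from any of the studies; the point is the contrast between a query assembled from raw input, a pattern an assistant can easily reproduce from its training data, and a parameterized version that treats input strictly as data.

```python
import sqlite3

# Vulnerable pattern (CWE-89, SQL injection): untrusted input is spliced
# directly into the SQL string, so a value like "x' OR '1'='1" rewrites
# the query's logic and can return every row in the table.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# Safer pattern: the input is passed as a bound parameter, so the driver
# treats it as a literal value, never as SQL syntax.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions return the same rows for benign input; only the second stays correct when the input is hostile. And because both versions are syntactically valid and nearly identical, this is exactly the kind of defect that depends on how data flows through the application rather than on the syntax of the language.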
In addition, studies such as the August 2021 paper “Asleep at the Keyboard? Assessing the Security of GitHub Copilot’s Code Contributions” showed that while AI coding assistants do indeed speed up development, approximately 40% of the programs they generated were found to contain vulnerabilities.
Another report, “Is GitHub’s Copilot as Bad as Humans at Introducing Vulnerabilities in Code?” from August 2023, took a different approach: it compared code generated by GitHub Copilot to code written by humans when both were given the same prompt. Here, GitHub Copilot produced vulnerable code approximately one-third of the time and avoided vulnerabilities approximately 25% of the time. Interestingly, the researchers observed that nearly half the time, Copilot generated code that differed significantly from what a human developer produced.
Finally, a third report, “Security Weaknesses of Copilot Generated Code in GitHub” from October 2023, found that approximately 35% of the Copilot-generated code in GitHub contained vulnerabilities.
Does this mean AI coding assistants are bad and your team should avoid them? Not at all. The reality is that the AI code genie is out of the bottle, and it’s not going back in. Besides, AI-generated code is probably no more buggy or vulnerable than the code many developers (especially less-experienced ones) produce.
And therein lies the key takeaway: AI-generated code can significantly speed up your development, but you still need to review and verify it as much as, if not more than, code written by your developers.
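One way to act on that takeaway is to encode the review step as an automated check. The sketch below reuses the hypothetical find_user_safe function from the earlier example and shows a small pytest-style test that feeds a classic injection payload to the lookup and fails if any rows leak; the schema and payload are illustrative assumptions, not a prescribed process.

```python
import sqlite3

# Assumes find_user_safe from the sketch above is in scope.

def make_test_db() -> sqlite3.Connection:
    # In-memory database seeded with two known users.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.executemany(
        "INSERT INTO users VALUES (?, ?, ?)",
        [(1, "alice", "a@example.com"), (2, "bob", "b@example.com")],
    )
    return conn

def test_lookup_rejects_injection():
    conn = make_test_db()
    # If the generated code builds SQL from raw input, this payload
    # matches every row in the table instead of none.
    hostile = "alice' OR '1'='1"
    assert find_user_safe(conn, hostile) == []
```

A test like this takes minutes to write, and it turns “review the AI’s output” from a one-time manual step into a guardrail that runs on every commit, whether the code came from a developer or an assistant.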
So, what should your organization do to get the benefits of AI-generated code while avoiding the security and quality risks? Don’t just let developers download and use whatever tool they read about on Stack Overflow. Instead, make a plan that addresses these three key areas.
As AI continues to reshape the landscape of software development, organizations must strike a delicate balance between innovation and risk mitigation. By adopting proactive governance measures and adhering to best practices, your organization can harness the power of AI-generated code while safeguarding your intellectual property and ensuring the integrity of your software projects. As we venture further into the realm of AI-driven development, vigilance and strategic planning will be key to navigating the evolving challenges and opportunities that lie ahead.
Synopsys is helping enterprises produce more-secure software at the speed their business demands by combining the power of our market-leading AppSec engines with generative AI, so developers and security teams can ship secure software faster and deliver the innovation the business needs.