AI has become indispensable to software development. Ninety percent[i] of organizations report using AI coding assistants such as Copilot and Claude Code. Over 96% of organizations use open source AI models to power core functions like data processing, computer vision, and process automation in the software they ship. And one-fifth of organizations prohibit AI tools but know their developers are using them anyway.
There’s no doubt that AI helps developers code faster. But AI coding tools simply generate code that mimics patterns observed in open source projects and other publicly available code. These generators are typically trained to prioritize functional code: security is often no more than a happy coincidence, and software license compliance is just a suggestion. So how can you get the best of AI without exposing your organization to the worst?
Ultimately, as with any DevSecOps initiative, you need contributors from both development and security aligned on strategy to achieve defined goals. If development wants speed, it must come with the security controls AppSec requires and the IP protections legal demands. That’s a lot of moving parts. What could go wrong?
[i] All statistics quoted in this blog post are from the 2024 Black Duck “Global State of DevSecOps” report.
AI code generators like Copilot are known to introduce vulnerabilities in one-third of the code they generate, according to a Cornell University study. Worse, when these tools are asked to fix the issues they created, the same study found they introduce new ones 42% of the time.
AI coding assistants can generate code very quickly and at great scale. This can flood pipelines with potentially vulnerable or weak code and accrue massive backlogs for AppSec review. Two issues that you absolutely DO NOT want AI tools to introduce at scale are improper input validation and OS command injection.
Avoiding these issues is a basic secure coding best practice, but it is not something AI coding assistants are built to consider.
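To make the risk concrete, here is a minimal Python sketch of the kind of OS command injection an assistant can produce when it skips input validation, alongside the safer pattern a reviewer should expect. The function names and the ping example are illustrative, not drawn from any particular codebase.

```python
import subprocess

# Risky pattern often seen in generated code: a user-supplied hostname is
# interpolated directly into a shell command (CWE-78, OS command injection).
def ping_host_unsafe(hostname: str) -> str:
    result = subprocess.run(f"ping -c 1 {hostname}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

# Safer pattern: validate the input against an allowlist of characters and
# pass arguments as a list so no shell ever interprets them.
def ping_host_safe(hostname: str) -> str:
    if not hostname or not all(c.isalnum() or c in ".-" for c in hostname):
        raise ValueError(f"invalid hostname: {hostname!r}")
    result = subprocess.run(["ping", "-c", "1", hostname],
                            capture_output=True, text=True)
    return result.stdout
```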
AI coding assistants have to be trained on something to produce functional code for a given project in a given language. More often than not, that training data comes from open source projects, which typically carry specific licensing obligations. While there are many licenses, some are potentially more detrimental to the business, such as those compelling the free release of any work derived from the included code or component.
If developers use AI-generated code without understanding the licensing terms associated with it, they run the risk of unintentionally “open sourcing” their proprietary code, devaluing intellectual property and exposing the organization to legal risk.
The problem isn’t AI. It’s how your developers are using it. If we implicitly trust something we don’t understand, we open ourselves to potentially devastating consequences. Not just for ourselves, but for our customers and partners. Four steps can help you prevent these consequences without taking AI away from your developers.
Automating security scans is essential for timely, consistent, and repeatable results. This is particularly important as AI coding assistants are increasingly being used semiautonomously and pushing code through pipelines quickly. Automation should balance security coverage with pipeline speed, and trigger only necessary tests based on pipeline actions.
Set up automated security scans to trigger on pipeline events such as code commits, pull requests, and builds, running only the tests each change actually requires.
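As one possible shape for that automation, the sketch below maps changed files to the scans they warrant, so a pipeline runs only the tests a change needs. The scan names and file mappings are placeholders, not any specific product’s CLI.

```python
import subprocess
import sys

# Hypothetical mapping from file types to the scans they should trigger;
# the scan names are placeholders for whatever tools your pipeline uses.
SCAN_RULES = {
    (".py", ".js", ".java"): ["sast-scan"],                          # static analysis on source changes
    ("requirements.txt", "package.json", "pom.xml"): ["sca-scan"],   # dependency analysis on manifest changes
    (".tf", ".yaml", ".yml"): ["iac-scan"],                          # infrastructure-as-code checks
}

def changed_files(base_ref: str = "origin/main") -> list[str]:
    """List files changed relative to the base branch."""
    out = subprocess.run(["git", "diff", "--name-only", base_ref],
                         capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if line]

def scans_to_run(files: list[str]) -> set[str]:
    """Trigger only the scans relevant to what actually changed."""
    needed = set()
    for path in files:
        for patterns, scans in SCAN_RULES.items():
            if path.endswith(patterns):
                needed.update(scans)
    return needed

if __name__ == "__main__":
    scans = scans_to_run(changed_files())
    print("scans to run:", ", ".join(sorted(scans)) or "none")
    sys.exit(0)
```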
Cultivating security-capable developers should be a persistent background activity throughout all of this. Developers must be able to cross-check the output of AI code generators and fix issues detected in later stages. Additionally, train your AI models to recognize and avoid security risks before they produce them by incorporating security best practices and rules into the training data and algorithms used by your AI.
Snippet scanning can quickly identify and address potential software license conflicts before they propagate across projects. Implement mechanisms to automatically detect small excerpts of code (snippets) copied from licensed open source components.
Integrate snippet scanning tools into dependency management systems, or initiate their analysis with every code commit to keep pace with faster, AI-enabled pipelines. This will help you maintain an up-to-date and accurate inventory of third-party components.
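A lightweight way to wire this in is a commit-time check. The sketch below assumes a hypothetical snippet-scan command that emits JSON findings; substitute the CLI of whatever SCA tool you actually use, and adjust the license denylist to your organization’s policy.

```python
import json
import subprocess
import sys

# Hypothetical snippet-scanner CLI; replace with your SCA tool's real command.
SNIPPET_SCANNER = ["snippet-scan", "--output-json"]

# Licenses this sketch assumes the organization cannot accept in proprietary code.
DENYLIST = {"GPL-3.0-only", "AGPL-3.0-only"}

def staged_source_files() -> list[str]:
    """List source files staged for the current commit."""
    out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines()
            if f.endswith((".py", ".js", ".c", ".java"))]

def main() -> int:
    files = staged_source_files()
    if not files:
        return 0
    result = subprocess.run(SNIPPET_SCANNER + files, capture_output=True, text=True)
    findings = json.loads(result.stdout or "[]")
    conflicts = [f for f in findings if f.get("license") in DENYLIST]
    for f in conflicts:
        print(f"{f['file']}: snippet matches {f['component']} ({f['license']})")
    return 1 if conflicts else 0  # nonzero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```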
Instead of a big-bang approach in which AI is rolled out everywhere all at once, go with a step-by-step process that gradually integrates AI into your workflows. This allows you to manage risks, optimize resources, and ensure that your AI solutions are effectively aligned with your business needs.
Gradually expand your use of AI tools based on the successes and lessons learned from initial teams. This phased approach allows for controlled scaling and continuous improvement, and it gives you the opportunity to implement security controls to mitigate risk. These controls should include the practices described above: automated security scanning, snippet analysis, and developer training.
Black Duck has a proud history of helping organizations all over the world secure their software. We led the way in the use of open source code, helping developers incorporate it safely, securely, and legally in their own projects. Now, we’re defining the next frontier of application security, one shaped by AI-enabled development pipelines and expanding regulations.