To do a job well, you need the right tools. But it’s just as important—perhaps even more so—to use those tools correctly. A hammer will only make things worse on a construction project if you try to use it as a screwdriver or a drill.
The same is true in software development. The intricacies of coding and the fact that it’s done by humans mean that throughout the software development life cycle (SDLC) there will be bugs and other defects that create security vulnerabilities hackers can exploit.
Addressing those vulnerabilities effectively is called defect management.
There are multiple software security testing tools on the market. Among them: static analysis (SAST), which tests code at rest; dynamic analysis (DAST), which tests code while running; and interactive analysis (IAST), which tests code while interacting with external input. Software composition analysis (SCA) helps find and fix defects or licensing conflicts in open source or third-party software. Pen testing and red teaming at the end of the SDLC can find bugs that may have been missed earlier in the cycle.
But for those tools to be effective in a DevOps world, where there is enormous pressure on developers to produce quickly, they have to be configured properly. If they aren’t, they can flag every defect without regard for its significance and easily overwhelm developers. Any developer who is constantly bombarded with notifications from a security analysis tool will start to ignore them. It becomes white noise. And the inevitable result will be the opposite of the intent—less-secure code.
That’s even more likely when tools are automated. Automation is good in that it’s much faster than a manual process, but it’s important to limit notifications to security vulnerabilities considered critical or high-risk.
That’s one of the prime messages Meera Rao delivers to clients when talking about defect management.
Rao, senior director for product management (DevOps solutions) at Black Duck, said that if a static analysis tool such as Coverity® isn’t “finely configured,” it will push far too many defects, including low-risk ones, into a defect-tracking tool like Jira.
“Do I want all of the thousands of issues that Coverity found to go into defect tracking?” she asked. “No, because unless it is configured otherwise, it finds them all—critical, high, medium, low, informational—and I don’t want them all to be flooded into Jira.”
So when she talks to security teams at client organizations, she tells them their first priority should be to decide what security vulnerabilities are critical to the application being developed.
“If it is externally facing, like a banking application that is going to be available throughout the world, then I’m most nervous about cross-site scripting (XSS) and SQL injection,” she said. “I don’t care about empty catch blocks or other less important issues because then I would be flooding my defect tracking.”
“When I configure a tool such as static analysis, I want to narrow it down to the vulnerabilities that my organization and my application care most about. The tool might have found thousands of other issues, but I don’t care.”
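As a rough sketch of what that narrowing can look like in practice, the script below assumes the scanner can export its findings as JSON with severity and category fields, and it files Jira tickets only for high-risk findings in the categories the team has decided to track. The report schema, file name, Jira URL, credentials, and project key are all placeholders, not any particular tool’s format.

```python
import json
import requests

JIRA_URL = "https://jira.example.com/rest/api/2/issue"  # hypothetical Jira instance
JIRA_AUTH = ("svc-appsec", "api-token")                 # placeholder credentials

# Only the findings this (hypothetical) externally facing app cares most about.
TRACKED_CATEGORIES = {"cross_site_scripting", "sql_injection"}
TRACKED_SEVERITIES = {"critical", "high"}


def load_findings(report_path):
    """Load a JSON export of scan results; the schema is an assumed example."""
    with open(report_path) as fh:
        return json.load(fh)["issues"]


def should_track(finding):
    """Keep a finding only if it is high risk and in a tracked category."""
    return (finding.get("severity", "").lower() in TRACKED_SEVERITIES
            and finding.get("category", "").lower() in TRACKED_CATEGORIES)


def file_ticket(finding):
    """Create one Jira ticket for a single tracked finding."""
    payload = {
        "fields": {
            "project": {"key": "APPSEC"},               # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['severity'].upper()}] {finding['category']} in {finding['file']}",
            "description": finding.get("description", ""),
        }
    }
    requests.post(JIRA_URL, json=payload, auth=JIRA_AUTH, timeout=30).raise_for_status()


if __name__ == "__main__":
    for finding in load_findings("scan-results.json"):  # assumed export file name
        if should_track(finding):   # medium, low, and informational never reach Jira
            file_ticket(finding)
```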
Rao said it’s also important for security and development teams to realize that the security vulnerabilities most important to them will likely be different from a general top 10 list like the one created by the Open Web Application Security Project (OWASP).
“The OWASP Top 10 is unbelievably good, but those might not be the top 10 for your organization,” she said. “So you have to make sure you have the metrics to look at what are the top 10 or top 5 security vulnerabilities that matter the most to you. And just for those five, make sure that every time you run the tool, whether it is static, dynamic, or interactive analysis, you create defect tickets for those, and then see that it is a closed loop.”
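One way to read that “closed loop” is that a ticket should close only when a later scan confirms the finding is gone. A minimal sketch of that check, assuming each finding can be reduced to a stable fingerprint (the fingerprints and ticket keys below are invented):

```python
def reconcile(open_tickets, latest_findings):
    """
    Closed-loop check: a ticket is eligible to close only when its finding
    no longer shows up in the most recent scan.

    open_tickets:     dict mapping ticket key -> finding fingerprint
    latest_findings:  set of fingerprints reported by the latest scan
    """
    still_open, can_close = {}, []
    for ticket_key, fingerprint in open_tickets.items():
        if fingerprint in latest_findings:
            still_open[ticket_key] = fingerprint   # defect not actually fixed yet
        else:
            can_close.append(ticket_key)           # fix verified; close the loop
    return still_open, can_close


# Invented example: one fix is verified, one defect is still outstanding.
tickets = {"APPSEC-101": "xss:checkout.js:42", "APPSEC-102": "sqli:orders.py:88"}
latest = {"sqli:orders.py:88"}
print(reconcile(tickets, latest))
# ({'APPSEC-102': 'sqli:orders.py:88'}, ['APPSEC-101'])
```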
“The main goal is not to push in as many rules as possible within the tool,” Rao said. “Not all of you are writing web applications. Some of you might be writing a microservice. Some might be writing middleware that has nothing to do with XSS or SQL injection because it doesn’t have a database.”
“The key is to make sure that you customize the tool—whether it is SAST, DAST, IAST, or SCA—to the application, the language, the technology, or to the framework you are using, and then once you do that, you will have a narrow set of results. And then you can even fine-tune that as well,” she said.
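What that customization might look like, sketched as a per-application scan profile (the application names and checker categories are made up for illustration, not tied to any specific tool):

```python
# Hypothetical per-application scan profiles: each entry narrows the rules to
# what that application's language, technology, and framework actually face.
SCAN_PROFILES = {
    "public-banking-web": {
        "language": "java",
        "checkers": ["cross_site_scripting", "sql_injection", "xml_external_entity"],
    },
    "inventory-microservice": {
        "language": "go",
        "checkers": ["command_injection", "insecure_deserialization"],
    },
    "message-middleware": {
        # No database and no browser-facing output, so no SQLi or XSS rules here.
        "language": "java",
        "checkers": ["improper_certificate_validation", "weak_cryptography"],
    },
}


def checkers_for(app_name):
    """Return the narrowed rule set for one application, or an empty list."""
    return SCAN_PROFILES.get(app_name, {}).get("checkers", [])


print(checkers_for("message-middleware"))
```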
Fine-tuning the configuration also means a vast reduction in one of the chief irritations for developers: false positives. “Are there chances that there might be some false positives in all of this workflow?” Rao asked. “Yes. Tools are tools and there will be false positives.”
“But what I ask organizations is, what is the rate of false positives when you finely configure the tool and the rules, and narrow it down to the ones that you truly care about? I have seen maybe 2% to 3% false positives at that point. That’s acceptable.”
That percentage can be cut even further over time, she said, because if developers notify the security team about a false positive, “they mark it as such and then it’s gone forever, because the tool will remember all of those.”
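Commercial tools generally keep that triage state internally; the sketch below just illustrates the idea with a shared suppression file keyed by finding fingerprint (the file name and finding fields are assumptions):

```python
import json
from pathlib import Path

SUPPRESSIONS = Path("false-positives.json")   # assumed location of the shared triage file


def load_suppressions():
    """Fingerprints of findings the team has already triaged as false positives."""
    if SUPPRESSIONS.exists():
        return set(json.loads(SUPPRESSIONS.read_text()))
    return set()


def mark_false_positive(fingerprint):
    """Record a false positive once so future scans never resurface it."""
    known = load_suppressions()
    known.add(fingerprint)
    SUPPRESSIONS.write_text(json.dumps(sorted(known), indent=2))


def filter_findings(findings):
    """Drop anything already marked as a false positive before filing tickets."""
    known = load_suppressions()
    return [f for f in findings if f["fingerprint"] not in known]
```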
The second important element of using security analysis tools and defect tracking correctly is to make sure that when they flag something critical, it gets fixed.
“All organizations should have some kind of risk management,” Rao said. That’s needed to create a protocol for how critical security vulnerabilities are handled.
She said one way to do it is to give developers a deadline—one or two weeks—to fix a critical defect. If a query to the defect management tool shows that it hasn’t been fixed by the deadline, “then pause the pipeline. Immediately notify the development team, saying you cannot go to production.”
Or alternatively, “someone needs to sign off—take the ownership,” she said. “Say ‘I know there is a critical vulnerability but I have other controls in place and I need to push this to production.’ The defect management tool helps you control that.”
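A pipeline gate along those lines might look like the sketch below, which queries Jira for open critical security defects older than the agreed two-week deadline and exits non-zero so the CI stage fails. The Jira URL, credentials, project key, and deadline are placeholders; a risk-acceptance sign-off like the one Rao describes could be modeled as an approval label or field that the same query excludes.

```python
import sys
import requests

JIRA_SEARCH = "https://jira.example.com/rest/api/2/search"  # hypothetical Jira instance
JIRA_AUTH = ("svc-appsec", "api-token")                     # placeholder credentials

# Open critical security defects that have blown past the two-week deadline.
JQL = 'project = APPSEC AND priority = Critical AND statusCategory != Done AND created <= -14d'


def overdue_critical_count():
    resp = requests.get(JIRA_SEARCH, params={"jql": JQL, "maxResults": 0},
                        auth=JIRA_AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["total"]          # maxResults=0 still returns the total count


if __name__ == "__main__":
    overdue = overdue_critical_count()
    if overdue:
        print(f"{overdue} critical security defect(s) past deadline; pausing the pipeline.")
        sys.exit(1)                      # non-zero exit fails the CI stage
    print("No overdue critical security defects; pipeline may proceed.")
```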
Defect tracking, she said, can also help improve the quality and security of the code being written by the development team.
“Over months or even weeks, you will be able to see the ROI of what happened with this workflow,” she said. It will keep a log of who on the development team is making the most mistakes, how quickly defects are being fixed, and who fixed them.
“The tool you use has all those metrics,” she said, “so you can see trends. Is the number of vulnerabilities going up? Do my developers need more training? Do I need to help them with instructor-led training, e-learning, or defensive programming? What are some of the vulnerabilities that they are creating over and over again? You get all these insights when you have a very tightly controlled defect management workflow.”
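The shape of those metrics can be as simple as the sketch below, which assumes each defect ticket records a creation date, a vulnerability category, and an author (the field names and sample data are invented for illustration):

```python
from collections import Counter, defaultdict


def vulnerability_trends(tickets):
    """
    Summarize defect-tracking data: findings per category per month, and the
    categories each developer keeps reintroducing (a training signal).

    tickets: iterable of dicts with 'created' (YYYY-MM-DD), 'category', 'author'.
    """
    per_month = defaultdict(Counter)
    per_author = defaultdict(Counter)
    for t in tickets:
        month = t["created"][:7]                 # e.g. "2024-03"
        per_month[month][t["category"]] += 1
        per_author[t["author"]][t["category"]] += 1
    return per_month, per_author


# Tiny invented sample: XSS keeps coming back from the same developer.
sample = [
    {"created": "2024-02-11", "category": "xss", "author": "dev-a"},
    {"created": "2024-03-05", "category": "xss", "author": "dev-a"},
    {"created": "2024-03-19", "category": "sqli", "author": "dev-b"},
]
months, authors = vulnerability_trends(sample)
print(dict(months["2024-03"]))   # {'xss': 1, 'sqli': 1}
print(dict(authors["dev-a"]))    # {'xss': 2} -> a candidate for targeted training
```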
That kind of tight workflow, Rao added, can be much more effective than a PDF or spreadsheet that nobody looks at. “Having this tight loop where they create the ticket and you’re able to run the specific tool to identify whether they really fixed the vulnerability or not, that’s where you get a lot of benefits.”
The bottom line is to help organizations understand that security defects are just as important as quality assurance (QA) defects. Often, she said, “when teams find QA defects, they immediately create a ticket in Jira, but when it comes to security, they are more likely to say that maybe it’s a false positive.”
But if organizations customize and configure their security tools, they won’t have to sacrifice speed for security.