There is a certain amount of wisdom in the old cliché “If it ain’t broke, don’t fix it.” That’s how it became a cliché.
And it probably still applies to things like your old lawn mower. But it is dangerously outdated “wisdom” when it comes to the digital world. Thousands of systems and apps that appear to be functioning just fine could be harboring dangerous vulnerabilities, some discovered years ago. Many organizations aren’t aware of these legacy vulnerabilities. Others have forgotten about or simply ignored them. But attackers haven’t.
Any “legacy code” that hasn’t been reviewed recently, no matter how well it appears to work, might contain vulnerabilities.
In short, you might be releasing software with legacy vulnerabilities that were set aside to deal with later and have since been forgotten, or that you never knew existed in the first place.
But previously unknown vulnerabilities come to light constantly. Why else would “Patch Tuesday” be a thing? Indeed, on the most recent Patch Tuesday, Microsoft released updates to fix at least 115 security defects, 26 of which carried its most severe “critical” rating.
And if organizations that build or assemble software don’t have their own version of Patch Tuesday, their systems and apps will increasingly be littered with known vulnerabilities that are easy for hackers to exploit.
That means they could face problems ranging from irritating to catastrophic. As is well documented, hackers go for the easy targets.
The notorious WannaCry ransomware, which crippled a number of companies in mid-2017, was enabled in large part by the victims’ use of legacy systems.
As Infosecurity magazine put it at the time, “The culprit is called the ‘EternalBlue’ exploit and it’s a tool that takes advantage of previously unknown vulnerabilities in certain older versions of Microsoft Windows operating systems, such as Windows XP.”
You might think that something as devastating as WannaCry would prompt not only consternation but action. But no. A survey conducted two years ago by the RSA Conference found that only 47% of companies patched known vulnerabilities right away—a hacker’s dream. The reasons? Respondents to the survey said they didn’t have time or money, or they didn’t have people with the expertise to do it.
The irony, of course, is that if one or more of those vulnerabilities gets exploited, they will have even less time and money.
IBM’s 2019 Cost of a Data Breach Report found the average cost worldwide was $3.92 million, but more than double that in the U.S., at $8.19 million.
Beyond that, legacy code could add even more financial risks for businesses operating under strict compliance requirements. Standards such as HIPAA (Health Insurance Portability and Accountability Act), PCI DSS (Payment Card Industry Data Security Standard), and SOX (Sarbanes-Oxley Act) require that technology security be kept current.
Not only does legacy technology make audits difficult and costly, but a breach will likely lead to fees and fines.
Bottom line: Simple math tells you that it will be cheaper in the long run to fix what’s broke. Yes, it may be a headache to address legacy vulnerabilities, but failure to do so can create problems far worse than a headache.
And the way to address them sounds straightforward: find them and fix them.
In practice, it's a bit more nuanced than that. Good risk management means finding and fixing the worst first: the most critical bugs and other defects.
Nor is it easy. Dealing with legacy vulnerabilities is labor intensive. But the advice from most experts is fairly uniform. The steps recommended by Black Duck for securing open source software components could apply in general to legacy code.
Start by taking inventory; you can't fix, or even assess, what you don't know you have. To be worthwhile, your software inventory has to be comprehensive. It should include all software in the operating system, hardware, applications, and containers. Since most modern applications contain open source software components, a software composition analysis (SCA) tool is the best way to find those components.
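A commercial SCA tool goes much deeper than any homegrown script, finding transitive dependencies, vendored code, snippets, and binaries. But as a rough sketch of what "inventory the components you declare" means, here is a minimal Python example that walks a source tree and records the dependencies listed in two common manifest formats. The manifest file names and the JSON output are illustrative assumptions, not a substitute for a real SCA scan.

```python
import json
from pathlib import Path

def collect_declared_dependencies(root: str) -> dict:
    """Naive inventory: gather dependencies declared in common manifests.

    A real SCA tool also finds transitive dependencies, vendored code,
    and binaries; this sketch only reads two well-known manifest formats.
    """
    inventory = {}

    # Python projects: requirements.txt files
    for req in Path(root).rglob("requirements.txt"):
        for line in req.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                inventory.setdefault(str(req), []).append(line)

    # JavaScript projects: package.json files
    for pkg in Path(root).rglob("package.json"):
        data = json.loads(pkg.read_text())
        deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        inventory.setdefault(str(pkg), []).extend(
            f"{name}@{version}" for name, version in deps.items()
        )

    return inventory

if __name__ == "__main__":
    print(json.dumps(collect_declared_dependencies("."), indent=2))
```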
After you know what you have, you need to fix what’s broken. That means finding vulnerabilities. Most SCA tools will alert you if your code contains components with known vulnerabilities, such as those listed in the National Vulnerability Database (NVD). The entries in the NVD are fed by the CVE (Common Vulnerabilities and Exposures) database but contain additional data.
As the NVD website puts it, “This data enables automation of vulnerability management, security measurement, and compliance. The NVD includes databases of security checklist references, security-related software flaws, misconfigurations, product names, and impact metrics.”
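Most teams will consume this data through an SCA tool, but the NVD also exposes a public REST API that can be queried directly. As a hedged sketch of such a lookup in Python, the example below uses the keyword-search parameter of what is, at the time of writing, the v2.0 CVE endpoint; the URL, parameters, and response fields should be checked against the current NVD API documentation, and keyword matching is much noisier than matching on CPE identifiers.

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def lookup_cves(component: str, max_results: int = 20) -> list[dict]:
    """Query the NVD for CVEs whose descriptions mention a component name.

    Keyword search is imprecise; production tooling matches on CPE names
    or uses a curated feed. Unauthenticated requests are rate limited.
    """
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": component, "resultsPerPage": max_results},
        timeout=30,
    )
    resp.raise_for_status()

    findings = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
        cvss = metrics[0]["cvssData"] if metrics else {}
        findings.append({
            "id": cve.get("id"),
            "score": cvss.get("baseScore"),
            "severity": cvss.get("baseSeverity"),
        })
    return findings

if __name__ == "__main__":
    for finding in lookup_cves("openssl 1.0.2"):
        print(finding)
```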
Some SCA vendors provide their own vulnerability information feeds as well, enhanced with more information than the NVD provides, such as remediation details.
The severity ratings in the NVD and other vulnerability lists carry an implicit message: set priorities. Don't waste time, or add to your risk, by simply working through an endless list of vulnerabilities from top to bottom. Fix the worst ones first.
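As a trivial illustration of "worst first," the sketch below takes a hypothetical list of findings (component, CVE ID, CVSS v3 base score) and orders the backlog so critical and high-severity issues come up first. Real prioritization would also weigh exploitability, exposure, and business impact.

```python
# Hypothetical finding records for illustration: (component, CVE ID, CVSS v3 base score).
findings = [
    ("log-parser", "CVE-2021-0001", 5.3),
    ("web-frontend", "CVE-2020-0002", 9.8),
    ("report-generator", "CVE-2019-0003", 7.5),
]

def severity(score: float) -> str:
    """Bucket a CVSS v3 base score into the standard severity bands."""
    if score >= 9.0:
        return "CRITICAL"
    if score >= 7.0:
        return "HIGH"
    if score >= 4.0:
        return "MEDIUM"
    return "LOW"

# Worst first: sort by score, descending, then work down the list.
for component, cve_id, score in sorted(findings, key=lambda f: f[2], reverse=True):
    print(f"{severity(score):8} {score:>4}  {cve_id}  {component}")
```

Severity scores aren't the whole story, though, and neither is the instinct to rip out every old component.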
Indeed, Tim Mackey, senior principal consultant at the Black Duck CyRC (Cybersecurity Research Center), notes that a complicated fix of one obsolete component could create risks that are worse.
“While fixing legacy sounds good on paper, in reality it can cause more harm than good,” he said. “For example, updating from an obsolete library might require rewriting a bunch of stuff ‘just because.’ The rewrite then could introduce issues that are more severe than just accepting there’s a sleeping dog in the code.”
Once you get up to date, stay up to date. Don’t let your assets slide back into obsolescence. That means setting up policies to keep your inventory current, to track updates and patches, and to install them as soon as feasible.
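What that looks like in practice varies by ecosystem, but the common pattern is a scheduled or CI job that flags drift before it piles up. As one illustrative example for a Python codebase, the sketch below fails a build whenever pip reports outdated packages; other stacks would lean on their own package managers or on an SCA tool's policy engine.

```python
import json
import subprocess
import sys

def outdated_packages() -> list[dict]:
    """Ask pip which installed packages have newer releases available."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    stale = outdated_packages()
    for pkg in stale:
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
    # Fail the pipeline if anything has drifted, so an update gets scheduled.
    sys.exit(1 if stale else 0)
```

Failing the build is deliberate: it turns "we should update sometime" into a task someone has to own.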
Yes, you will spend time and money doing it. But you will save time and money in the long run. Creating and following a plan to deal with legacy vulnerabilities is an investment, and daily headlines should make it obvious that it’s worth it.