May 2014
Beyond Heartbleed: 5 Basic Rules To Reduce Risk
12 05, 14 Filed in: Blog Posts | Bylines
As published in Wall Street and Technology:
Heartbleed made headlines like no other vulnerability in recent memory. This was partly due to the slick name, logo, and web site that explained the vulnerability (a rarity in a field where most bug reports are dry technical dispatches with names like “CVE-2014-0160”) but also due to the pervasiveness of the affected OpenSSL software, its role as a keystone in the Internet’s security architecture, and the potential consequences of a successful exploitation.
When we talk about “the next Heartbleed” we have to consider that new vulnerabilities are discovered every day and that many of them are just as widespread as Heartbleed. To put some bounds on this, let’s define a “supercritical” vulnerability that would have a similar impact to Heartbleed as one that meets all of the following four criteria (all of which Heartbleed does meet):
- Affects software that is in widespread use as an Internet-facing service that commonly handles sensitive data
- Is present in version(s) of that software representing a sizable percentage of the deployed base
- Can be exploited to reveal sensitive information (directly or indirectly) without logging in with a valid username and password
- Its method of exploitation is widely known
For those who speak CVSS, this would roughly translate to AV:N/Au:N/C:C/E:F/RC:C/TD:H/CR:H.
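For readers who don’t speak CVSS, here is a minimal sketch (in Python, mapping only the metrics quoted above) that decodes that vector into plain English; a full CVSS v2 scorer would also need the Access Complexity and Integrity/Availability metrics, which are omitted here:

```python
# Decode the partial CVSS v2 vector quoted above into plain English.
# Only the metrics used in this article are mapped.
CVSS2_METRICS = {
    "AV:N": "Access Vector: Network (remotely exploitable)",
    "Au:N": "Authentication: none required",
    "C:C":  "Confidentiality Impact: complete",
    "E:F":  "Exploitability: functional exploit exists",
    "RC:C": "Report Confidence: confirmed",
    "TD:H": "Target Distribution: high (large installed base)",
    "CR:H": "Confidentiality Requirement: high",
}

def describe(vector):
    for metric in vector.split("/"):
        print(metric, "->", CVSS2_METRICS.get(metric, "unknown metric"))

describe("AV:N/Au:N/C:C/E:F/RC:C/TD:H/CR:H")
```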
The justification for this definition is simple: software that is Internet-facing can be exploited at any time from anywhere, and if the exploited software doesn’t contain any sensitive data then it is unlikely that anyone would care. For example, databases may contain sensitive data but should normally be firewalled off from the outside world, requiring an attacker to compromise another internal system before exploiting the database, while vulnerabilities in desktop software require convincing a user to download and open a file. Both attacks can be done, but they require more work than simply scanning the Internet for vulnerable services.
Note that the definition does not include any reference to whether or not a patch has been released, as this is implicitly covered by the second criterion: it doesn’t matter that a vulnerability is fixed in version “1.0.1g” of a piece of software if 90% of the installed base is still running the vulnerable “1.0.1f” version. Sadly, we still see vulnerabilities being exploited that are many years old, and even after all the press Heartbleed received there are still many tens of thousands of affected servers out on the Internet. The inverse can also work to our benefit, when a vulnerability is present only in newer versions of software and a sizable installed base still runs older, non-vulnerable versions (as we saw with Heartbleed and the old but still widely deployed 0.9.8 and 1.0.0 branches of OpenSSL). This isn’t much of a factor, though, as more vulnerabilities are typically fixed in newer versions than would be avoided by running older ones.
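As a small illustration of the installed-base problem, here is a sketch (a quick spot check, not a real vulnerability scanner) that reports which OpenSSL version your Python interpreter is linked against and flags the Heartbleed-affected 1.0.1 through 1.0.1f range:

```python
import ssl

# Report the OpenSSL version this Python interpreter is linked against.
# Heartbleed affects 1.0.1 through 1.0.1f; 1.0.1g fixed it, and the
# older 0.9.8 and 1.0.0 branches never contained the bug.
version = ssl.OPENSSL_VERSION  # e.g. "OpenSSL 1.0.1f 6 Jan 2014"
print(version)

# Simple string match against the affected releases; a real scanner
# would test the heartbeat behavior itself rather than trust a banner.
affected = ["1.0.1 "] + ["1.0.1" + letter for letter in "abcdef"]
if any(tag in version for tag in affected):
    print("WARNING: linked OpenSSL is in the Heartbleed-affected range")
```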
Back to the topic at hand: using this definition narrows things down very quickly, simply because only two types of services can reasonably meet the first criterion: web servers and email. Over the past decade, many services that would previously have existed as client/server applications with their own protocols have migrated to web-based applications, and over the past few years these web-based applications have begun a steady migration to a handful of cloud-based service providers, often running homogeneous environments. We are putting more and more eggs in fewer and fewer baskets, which increases the potential for damage if one of those baskets is found to be vulnerable. The only widespread service to buck this web consolidation trend, at least in some part, and remain a standalone service is corporate email, which still relies on the SMTP protocol for transport. Personal email accounts are in double jeopardy: their front ends have very much moved to web-based services like Gmail, Yahoo Mail, and Hotmail, but these services still rely on the same basic underlying SMTP protocol for mail transport as corporate email systems, making them vulnerable to attack from either angle.
To get one thing out of the way: the potential for this sort of supercritical vulnerability is not limited to open source software; it could pop up in commercial software just as easily. Following the Heartbleed vulnerability there was a fair amount of press about how small the OpenSSL team actually is, despite how critical the software is to Internet security. I would venture a guess that the team responsible for SChannel (Microsoft’s SSL/TLS implementation, analogous to OpenSSL) doesn’t look that different from OpenSSL’s team (one full-time person with a few part-timers to help out as needed). This sort of underlying infrastructure code tends to get written and then put on the proverbial shelf until an issue is discovered or a new function is required. Most companies would rather pay their programmers to build the new features in their flagship products that will attract more customers than to review old code for potential issues. There is a long and well-documented track record of commercial software vendors ignoring early vulnerability reports with a response of “that’s just theoretical,” only to be subjected to a zero-day exploit later on.
This means our candidates for the next Heartbleed would be among the following common software packages:
- Email software (Sendmail, Postfix, and Exchange)
- Web server software (Apache and IIS)
- The encryption packages that support both of them (OpenSSL and SChannel)
- The TCP/IP stacks of the operating systems they usually run on (Linux, FreeBSD, and Windows)
- The server-side languages and other plugins that are frequently used within web servers (PHP, Java, Perl, Python, Ruby, .Net)
So, as to what such a vulnerability can do: it depends on where in the “stack” of software (from the server on down to the operating system) the vulnerability falls. If a vulnerability falls anywhere in a web server’s software stack, we can assume that the sensitive data in the web application and its backend database can be compromised. From authentication credentials on down to credit card numbers, the possibilities are really only limited by what types of sensitive data are handled by a particular web application.
Anything that compromises email is particularly nasty, as email represents the hub of our digital lives: besides all of the sensitive communications traversing a corporate email server that would be disclosed (take a look at the results of the HBGary Federal breach for an example of that), we also have to consider that nearly every third-party service we access uses email as part of its password-reset functionality. Essentially, if an attacker can read your email, he can take over nearly every other account you control in very short order.
It’s also worth pointing out that many vulnerabilities fall into the category known as “arbitrary code execution.” This is a fancy way of saying that the attacker can run whatever software he wants on the target system, which is actually worse than a vulnerability like Heartbleed that only allows the attacker to grab data from a vulnerable system. The software an attacker would usually run in this situation is a type of malware called a “rootkit” that opens up a backdoor, allowing access later on even if the original vulnerability is closed off. From there the possibilities are endless (keyloggers, eavesdropping on network communications, siphoning data off from applications, launching attacks on other systems within the network, etc.).
Pretty much the worst thing imaginable (much worse than Heartbleed) would be an arbitrary code execution vulnerability in the TCP/IP stack (the chunk of the operating system software that controls all network communications) of a widely used version (or versions) of Windows, Linux, or FreeBSD. This would let an attacker take complete control of any affected system that can be reached from the Internet, likely regardless of what services it is or is not running.
Defending against these sorts of vulnerabilities is a daunting task for the IT administrator, who is responsible for securing systems running software he has no control over while keeping those services available even in the face of potential zero-day vulnerabilities. There are some basic rules that can drastically reduce the risk:
Less software is better: Unfortunately, most default software installs come with every possible package and option installed and enabled. Every piece of software contains vulnerabilities (some just haven’t been found yet), and every piece of software and functionality that gets installed and enabled exposes more of these potential vulnerabilities, even security software like OpenSSL, antivirus, and firewalls. Any unnecessary package that is disabled, removed, or never installed in the first place decreases the chances that a particular system is affected by the next big vulnerability.
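Making “less software” actionable starts with knowing what is actually listening. Below is a minimal sketch (the port list is illustrative, not exhaustive) that inventories common TCP services on a host so each one can be justified or removed:

```python
import socket

# Inventory common TCP services on a host: anything that answers here
# is attack surface that should be justified, firewalled, or removed.
COMMON_PORTS = [21, 22, 23, 25, 53, 80, 110, 143, 443, 993, 995,
                3306, 5432, 8080]

def open_ports(host):
    found = []
    for port in COMMON_PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
            found.append(port)
        s.close()
    return found

print(open_ports("127.0.0.1"))
```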
Reduce privileges: Administrators have a bad habit of running software under a privileged account such as root, a local administrator, or a domain administrator. Similarly, software developers and database administrators have a bad habit of using accounts with full read/write access to entire databases (or, in some cases, entire database servers containing multiple unrelated databases) for interaction between a web-based frontend and a database backend. These practices are convenient because they eliminate the need to think about user access controls, but the consequence is that an attacker gains these same unlimited privileges if he manages to exploit a vulnerability. The better security practice is to create dedicated accounts for each service and application with the bare minimum access required for the software to function. This may not stop an attacker, but it will limit the amount of immediate damage he can cause and provide the opportunity to stop him.
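One concrete form of this is the classic Unix privilege-drop pattern: a service that must start as root (for example, to bind port 80) switches to a dedicated low-privilege account before touching any untrusted input. A minimal sketch, where “appsvc” is a hypothetical per-application service account:

```python
import os
import pwd

# Start privileged (e.g. to bind a low port), then permanently drop to a
# dedicated service account before handling untrusted input.
# "appsvc" is a hypothetical per-application account.
def drop_privileges(username="appsvc"):
    if os.getuid() != 0:
        return  # already unprivileged; nothing to drop
    user = pwd.getpwnam(username)
    os.setgroups([])        # shed supplementary groups first
    os.setgid(user.pw_gid)  # group must change before user, or setgid fails
    os.setuid(user.pw_uid)  # the point of no return
    os.umask(0o077)         # new files readable only by this account

drop_privileges()
```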
Patch: For those services that do have to stay enabled, make sure to stay on top of patches. Oftentimes vulnerabilities are responsibly disclosed only to software vendors, and patches are quietly released before the details of the vulnerability are publicly acknowledged. Administrators who proactively apply security patches will have already protected themselves, while those who don’t will be in a race with attackers to apply patches before exploits are developed and used.
Firewall: The definition of a “supercritical” vulnerability above includes “Internet-facing” for a reason: it is much easier to find a vulnerable system when it can be found by scanning across the open Internet. Services intended for internal use should be restricted to access from internal systems only, and for those internal services that require access from outside, consider a VPN in order to authenticate users before allowing direct access to the system. Firewalls are not an excuse to avoid patching, however, as many attacks are launched across internal networks from systems already compromised by malware.
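The restriction itself belongs in the network firewall, but an application can fail safe if that rule is ever lost. A minimal sketch of such an application-layer backstop, using Python’s standard ipaddress module to reject peers outside the private (RFC 1918) and loopback ranges:

```python
import ipaddress

# Application-layer backstop for a firewall rule: refuse any peer whose
# address is not private (RFC 1918) or loopback. The authoritative
# control should still be the network firewall itself.
def is_internal(peer_ip):
    addr = ipaddress.ip_address(peer_ip)
    return addr.is_private or addr.is_loopback

assert is_internal("10.1.2.3")    # RFC 1918 internal address
assert not is_internal("8.8.8.8") # well-known public address
```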
Defense in Depth: There will always be some services that must stay enabled and exposed to the Internet, and they may be affected by a zero-day vulnerability for which no patch is available yet. We must then rely on our ability to detect and stop an attack before it succeeds. Hopefully the combination of techniques described above (especially reduced privileges) will slow an attacker down enough that he becomes easier to detect. We can also increase the chances of detection by looking for the signs of an attack (such as with antivirus, intrusion detection/prevention systems, file integrity monitoring, behavior monitoring, and log correlation) while making it harder for him to exfiltrate any sensitive data (firewalling outbound connections, data loss prevention technology, proxying). We must also always have a real live human ready to quickly respond to any alerts and intervene to stop an active attack; without that, all of this technology is useless and it’s just a matter of time before an attacker finds a way around it.
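As a final, deliberately tiny example of the log-correlation idea: count failed SSH logins per source address from syslog-style auth log lines and flag anything past a threshold. The log path, pattern, and threshold below are assumptions to adapt to your environment, and the alert still needs that live human on the other end:

```python
import re
from collections import Counter

# Minimal log correlation: count failed SSH logins per source IP from
# syslog-style auth log lines and flag repeat offenders. The pattern,
# path, and threshold below are illustrative assumptions.
FAILED = re.compile(r"Failed password for .* from (\d{1,3}(?:\.\d{1,3}){3})")
THRESHOLD = 10

def suspicious_sources(log_lines):
    counts = Counter()
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= THRESHOLD]

with open("/var/log/auth.log") as log:
    for ip in suspicious_sources(log):
        print("ALERT:", ip, "has repeated failed logins; investigate now")
```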