Zero Days

Adobe Critical Flash Player Update Repairs Flaw Used In Targeted Attacks

Quoted in CRN on a new zero-day vulnerability in Internet Explorer:

Read More...

Time to stop using IE

The IE vulnerability that has been released (CVE-2014-1776) follows a fairly typical pattern we have seen before. Internet Explorer and Flash have a long track record of nasty vulnerabilities (along with Java and Adobe Reader). These vulnerabilities are useful for attackers, who can set up web sites to exploit them and then direct victims to those sites via phishing emails, manipulated search engine results, purchased ads, or compromised legitimate popular web sites (so-called “drive-by download attacks”). Attacks of this kind have been reported to be exploiting this vulnerability in the wild. Internet Explorer versions 6 through 11 are affected. Microsoft has issued an advisory, available at https://technet.microsoft.com/library/security/2963983, with a number of workarounds that can be put into place while a patch is developed.

This vulnerability also factors into the recent news that Windows XP is no longer supported by Microsoft: this is the first major vulnerability disclosed for Windows XP since it went out of support earlier this month and, according to early reports, a patch will not be released for that platform. This means the risk posed by any remaining Windows XP systems has just moved from theoretical to actual. Organizations should be moving off of the XP platform as soon as possible and taking extraordinary steps to protect any remaining XP systems in the interim.

Relying on basic vulnerability scans to detect this sort of vulnerability can lead to a false sense of security if the results come back clean. Most vulnerability scans are conducted from the perspective of an attacker coming in across the network and focus on making inbound connections to network services in order to identify vulnerabilities. Scans of this type will generally not detect client-side vulnerabilities like this one, since client-side vulnerabilities are exploited over outbound connections. Most scanning tools can, however, be configured to log in to target systems with a valid username and password and analyze the installed software versions; this type of credentialed scan should be effective in identifying this and other client-side vulnerabilities. Organizations that do not typically conduct this type of scan may be shocked at how many client-side vulnerabilities they actually have the first time they run one.
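To make the distinction concrete, here is a minimal sketch of the sort of check a credentialed scan performs once it has logged in, assuming a Windows target and Python's standard winreg module. The "fixed-in" baseline table is hypothetical and stands in for the maintained vulnerability feed a real scanner would use:

```python
# Minimal sketch of a credentialed client-side check: enumerate installed
# software from the Windows registry uninstall keys and flag anything below
# a known-fixed baseline. The MIN_SAFE table is a hypothetical stand-in for
# a real vulnerability feed.
import winreg

MIN_SAFE = {
    "Adobe Flash Player": (13, 0, 0, 206),   # hypothetical fixed-in versions
    "Java": (7, 0, 550),
}

def version_tuple(s):
    return tuple(int(p) for p in s.split(".") if p.isdigit())

def installed_software():
    path = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):
            try:
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as sub:
                    name, _ = winreg.QueryValueEx(sub, "DisplayName")
                    ver, _ = winreg.QueryValueEx(sub, "DisplayVersion")
                    yield name, ver
            except OSError:
                continue  # entry has no DisplayName/DisplayVersion values

for name, ver in installed_software():
    for product, minimum in MIN_SAFE.items():
        if product in name and version_tuple(ver) < minimum:
            print(f"VULNERABLE: {name} {ver} "
                  f"(fixed in {'.'.join(map(str, minimum))})")
```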

The broader issue here is that any installed software may include vulnerabilities that increase the "attack surface" an attacker has to work with. A core security concept is that unnecessary software should be removed or disabled whenever possible to reduce that attack surface. Unfortunately (for security, at least) most software vendors and IT organizations choose ease-of-use over security, and default installations tend to include many potentially unnecessary features and plugins, including Flash, whether or not they are actually needed for business purposes. As system and network administrators have gotten better at disabling or firewalling unnecessary server software, attackers have shifted to attacking client software in order to gain a foothold inside a target network. Flash, along with Java, Adobe's Reader software, and Internet Explorer itself, is among the most common client-side targets, likely due to both ubiquity and complexity (more complexity usually means more vulnerabilities).

Preventing this and future drive-by attacks will require IT to rethink how they deploy software. Rather than installing everything by default "in case someone needs it," IT should be creating workstations and servers with as little software as possible and then deciding what to add based on the use-case for each system. For example, if a workstation’s only business purpose is to enter credit card numbers into a processor’s web site and that web site does not require Flash, then there is no reason to install Flash and add more potential vulnerabilities to the workstation. Most businesses will find that vulnerable plugins like Flash and Java are needed for business purposes by only a very small subset of their users. Of course, many users are likely using these plugins for non-business purposes, like watching YouTube videos during downtime, and the organization will have to weigh the tradeoff between security and users’ desire to use their workstations just like they would use their home computers.
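One way to operationalize "deciding what to add based on the use-case" is a per-role software allowlist that gets audited regularly. Below is a minimal sketch assuming a Debian-style system (where dpkg-query lists installed packages); the role names and package sets are hypothetical placeholders for an organization's own build standards:

```python
# Minimal sketch of a per-role software allowlist audit on a Debian-style
# host. ROLE_ALLOWLIST is hypothetical; a real deployment would pull it
# from the organization's build standards.
import subprocess

ROLE_ALLOWLIST = {
    "payment-entry": {"firefox-esr", "openssh-client"},            # no Flash, no Java
    "developer": {"firefox-esr", "openssh-client", "openjdk-7-jre"},
}

def installed_packages():
    out = subprocess.check_output(
        ["dpkg-query", "-W", "-f", "${Package}\n"], text=True)
    return set(out.split())

def audit(role):
    extra = installed_packages() - ROLE_ALLOWLIST[role]
    for pkg in sorted(extra):
        print(f"not in {role} baseline: {pkg}")

audit("payment-entry")
```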

Apple in particular is already taking action along these lines: after years of shipping with Java enabled by default, Apple released a patch for Mac OS X that disabled Java due to a rash of zero-day vulnerabilities; users who actually need Java are given instructions on how to re-enable it when they reach a web site that requires it. Apple also added a feature to Safari that allows Flash and other plugins to be allowed or disallowed on a site-by-site basis. This feature in particular would provide the sort of granular control an IT organization needs to effectively manage client-side plugins like Flash: allow them for sites with a legitimate business need and disallow them everywhere else. The web also seems to be moving to HTML5, an open standard capable of replacing most of Flash’s functionality, and there is some hope that this transition will lead to fewer vulnerabilities than we’ve seen from Adobe’s proprietary software in the past.

Ultimately the choice is between continuing to scramble with tactical fixes like workarounds and patches whenever these zero-day vulnerabilities come out, and making strategic decisions about how systems are deployed in order to reduce the overall risk to the organization.

Microsoft issues workaround for Internet Explorer bug

Quoted in USA Today on Microsoft’s workaround for the zero-day vulnerability in Internet Explorer:

Read More...

Beyond Heartbleed: 5 Basic Rules To Reduce Risk

As published in Wall Street and Technology:

Heartbleed made headlines like no other vulnerability in recent memory. This was partly due to the slick name, logo, and web site that explained the vulnerability (a rarity in a field where most bug reports are dry technical dispatches with names like “CVE-2014-0160”) but also due to the pervasiveness of the affected OpenSSL software, its role as a keystone in the Internet’s security architecture, and the potential consequences of a successful exploitation.

When we talk about “the next Heartbleed” we have to consider that new vulnerabilities are discovered every day and that many of them are just as widespread as Heartbleed. To put some bounds on this, let’s define a “supercritical” vulnerability with Heartbleed-like impact as one that meets all four of the following criteria (all of which Heartbleed itself meets):
  • Affects software that is in widespread use as an Internet-facing service that commonly handles sensitive data
  • Is present in version(s) of that software representing a sizable percentage of the deployed base
  • Can be exploited to reveal sensitive information (directly or indirectly) without logging in with a valid username and password
  • Its method of exploitation is widely known

For those who speak CVSS this would roughly translate to AV:N/Au:N/C:C/E:F/RC:C/TD:H/CR:H
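For everyone else, here is a small Python sketch that expands that vector into plain English (it maps only the metrics used above; CVSS defines many more):

```python
# Expand the CVSS v2 vector above into readable form. Only the metrics
# that appear in this particular vector are mapped.
VECTOR = "AV:N/Au:N/C:C/E:F/RC:C/TD:H/CR:H"

METRICS = {
    "AV": ("Access Vector", {"N": "Network"}),
    "Au": ("Authentication", {"N": "None required"}),
    "C":  ("Confidentiality Impact", {"C": "Complete"}),
    "E":  ("Exploitability", {"F": "Functional exploit exists"}),
    "RC": ("Report Confidence", {"C": "Confirmed"}),
    "TD": ("Target Distribution", {"H": "High"}),
    "CR": ("Confidentiality Requirement", {"H": "High"}),
}

for part in VECTOR.split("/"):
    metric, value = part.split(":")
    name, values = METRICS[metric]
    print(f"{name}: {values[value]}")
```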

The justification for this definition is simple: software that is Internet-facing can be exploited at any time from anywhere, and if the exploited software doesn’t contain any sensitive data then it is unlikely that anyone would care. For example, databases may contain sensitive data but should normally be firewalled off from the outside world, requiring an attacker to compromise another internal system before exploiting the database, while vulnerabilities in desktop software require convincing a user to download and open a file; both can be done, but they require more work than just scanning the Internet for vulnerable services.

Note that the definition does not include any reference to whether or not a patch has been released, as this is implicitly covered by the second point: it doesn’t matter that a vulnerability is fixed in version “1.0.1g” of a piece of software if 90% of the installed base is still running the vulnerable “1.0.1f” version. Sadly, we still see vulnerabilities being exploited that are many years old, and even after all the press that Heartbleed got there are still many tens of thousands of affected servers out there on the Internet. The inverse can also work to our benefit when a vulnerability is only present in newer versions of software and a sizable installed base is still running older, non-vulnerable versions (as we saw with Heartbleed and the old but still widely deployed 0.9.8 and 1.0.0 branches of OpenSSL). This isn’t much of a factor, though, as more vulnerabilities are typically fixed in newer versions than would be avoided by sticking with older ones.
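That version logic is mechanical enough to sketch in a few lines of Python. The vulnerable range below (1.0.1 through 1.0.1f, fixed in 1.0.1g) matches the public Heartbleed advisories; note that a real check should account for distribution-backported fixes rather than trusting the version string alone:

```python
# Minimal sketch of the Heartbleed version logic: only the 1.0.1 branch
# before 1.0.1g is affected; the 0.9.8 and 1.0.0 branches never contained
# the bug (this sketch ignores the affected 1.0.2 betas).
import re

def heartbleed_vulnerable(version: str) -> bool:
    m = re.match(r"(\d+)\.(\d+)\.(\d+)([a-z]?)", version)
    if not m:
        raise ValueError(f"unrecognized version: {version}")
    branch = tuple(int(x) for x in m.groups()[:3])
    letter = m.group(4)
    if branch != (1, 0, 1):
        return False           # other branches unaffected
    return letter < "g"        # 1.0.1 .. 1.0.1f vulnerable, 1.0.1g fixed

for v in ["0.9.8y", "1.0.0l", "1.0.1f", "1.0.1g"]:
    print(v, heartbleed_vulnerable(v))
```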

Back to the topic at hand: using this definition narrows things down very quickly, simply because there are only two types of services that can reasonably meet the first criterion: web servers and email. Over the past decade many of the services that would previously have existed as client/server applications with their own protocols have migrated to web-based applications, and over the past few years these web-based applications have begun a steady migration to a handful of cloud-based service providers, often running homogeneous environments. We are putting more and more eggs in fewer and fewer baskets, which increases the potential for damage if one of those baskets is found to be vulnerable. The only widespread service to buck this consolidation trend, at least in part, and remain a standalone service is corporate email, which still relies on the SMTP protocol for transport. Personal email accounts are in double jeopardy: the front-ends have very much moved to web-based services like Gmail, YahooMail, and Hotmail, but these services still rely on the same underlying SMTP protocol for mail transport as corporate email systems, making them vulnerable to attack from either angle.

To get one thing out of the way, the potential for this sort of supercritical vulnerability is not limited to open source software; it could pop up in commercial software just as easily. Following the Heartbleed vulnerability there was a bit of press about how small the OpenSSL team actually is despite how critical the software is to Internet security. I would venture a guess that the team responsible for SChannel (Microsoft’s SSL/TLS implementation, analogous to OpenSSL) doesn’t look that different from OpenSSL’s team: one full-time person with a few part-timers to help out as needed. This sort of underlying infrastructure code tends to get written and then put on the proverbial shelf until an issue is discovered or a new function is required. Most companies would rather pay their programmers to build the new features in their flagship products that will attract more customers than to review old code for potential issues. And there is a long and well-documented track record of commercial software vendors ignoring early vulnerability reports with a response of “that’s just theoretical,” only to be subjected to a zero-day exploit later on.

This means our candidates for the next Heartbleed would be among the following common software packages:
  • Email software (Sendmail, Postfix, and Exchange)
  • Web server software (Apache and IIS)
  • The encryption packages that support both of them (OpenSSL and SChannel)
  • The TCP/IP stacks of the operating systems they usually run on (Linux, FreeBSD, and Windows)
  • The server-side languages and other plugins that are frequently used within web servers (PHP, Java, Perl, Python, Ruby, .NET)

So, as to what such a vulnerability can do: it depends on where in the “stack” of software (from the server application on down to the operating system) the vulnerability falls. If it falls anywhere in a web server’s stack we can assume that the sensitive data in the web application and its backend database can be compromised. From authentication credentials on down to credit card numbers, the possibilities are really only limited by what types of sensitive data are handled by a particular web application.

Anything that compromises email is particularly nasty, as email represents the hub of our digital lives: besides all of the sensitive communications traversing a corporate email server that would be disclosed (take a look at the results of the HBGary Federal breach for an example), we also have to consider that nearly every third-party service we access uses email as part of its password reset functionality. Essentially, if an attacker can read your email he can take over nearly every other account you control in very short order.

It’s also worth pointing out that many vulnerabilities fall into the category known as “arbitrary code execution”. This is a fancy way of saying that the attacker can run whatever software he wants on the target system, and it is actually worse than a vulnerability like Heartbleed that only allows the attacker to grab data from a vulnerable system. The software an attacker would usually run in this situation is a type of malware called a “rootkit” that opens up a backdoor, allowing for access later on even if the original vulnerability is closed off. From there the possibilities are endless (keyloggers, eavesdropping on network communications, siphoning data off from applications, launching attacks on other systems within the network, etc.).

Pretty much the worst thing imaginable (much worse than Heartbleed) would be an arbitrary code execution vulnerability in the TCP/IP stack (the chunk of the operating system software that controls all network communications) of a widely used version (or versions) of Windows, Linux, or FreeBSD. This would let an attacker take complete control of any affected system that can be reached from the Internet, likely regardless of what services it is or is not running.

Defending against these sorts of vulnerabilities is a daunting task for the IT administrator who is responsible for securing systems running software he has no control over, while keeping those services available even in the face of potential zero-day vulnerabilities. There are, however, some basic rules that can drastically reduce the risk:

Less software is better: Unfortunately, most default software installs come with every possible package and option installed and enabled. Every piece of software contains vulnerabilities (some just haven’t been found yet), and every piece of software and functionality that gets installed and enabled exposes more of these potential vulnerabilities (this applies even to security software like OpenSSL, antivirus, and firewalls). Any unnecessary package that is disabled, removed, or never installed in the first place decreases the chances that a particular system is affected by the next big vulnerability.
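As a sketch of what auditing that looks like in practice, assuming a systemd-based Linux host (the approved baseline here is hypothetical):

```python
# Minimal attack-surface audit sketch for a systemd-based host: list
# enabled services and flag anything outside an approved baseline.
# The APPROVED set is a hypothetical placeholder.
import subprocess

APPROVED = {"sshd.service", "postfix.service", "rsyslog.service"}

out = subprocess.check_output(
    ["systemctl", "list-unit-files", "--type=service",
     "--state=enabled", "--no-legend"], text=True)

for line in out.splitlines():
    if not line.strip():
        continue
    unit = line.split()[0]
    if unit not in APPROVED:
        print(f"review/disable: {unit}")
```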

Reduce privileges:
Administrators have a bad habit of running software under a privileged account such as root, a local administrator, or a domain administrator. Similarly, software developers and database administrators have a bad habit of using accounts with full read/write access to entire databases (or, in some cases, entire database servers containing multiple unrelated databases) for interaction between a web-based front-end and a database backend. These practices are convenient because they eliminate the need to think about user access controls, but the consequence is that an attacker gains those same unlimited privileges if he manages to exploit a vulnerability. The better practice is to create dedicated accounts for each service and application with the bare minimum access required for the software to function. It may not stop an attacker, but it will limit the amount of immediate damage he can cause and provide an opportunity to stop him.
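As an illustration of what "bare minimum access" means for a web application's database account, here is a minimal sketch assuming a PostgreSQL backend and the psycopg2 driver; the role, tables, and connection details are hypothetical:

```python
# Minimal least-privilege sketch, assuming PostgreSQL and psycopg2.
# The role, tables, and DSN are hypothetical; the point is that the web
# application's account gets only the specific rights it needs rather
# than ownership of the whole database.
import psycopg2

conn = psycopg2.connect("dbname=shop user=dba")   # privileged setup session
with conn, conn.cursor() as cur:
    cur.execute("CREATE ROLE webapp LOGIN PASSWORD %s", ("change-me",))
    # The front-end reads the catalog and writes orders -- nothing else.
    cur.execute("GRANT SELECT ON products TO webapp")
    cur.execute("GRANT SELECT, INSERT ON orders TO webapp")
    # No DROP, no DELETE, no access to other tables or databases.
conn.close()
```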

Patch:
For those services that do have to stay enabled, make sure to stay on top of patches. Oftentimes vulnerabilities are responsibly disclosed only to software vendors, and patches are quietly released before the details of the vulnerability are publicly acknowledged. Administrators who proactively apply security patches will have already protected themselves, while those who didn’t will be in a race with attackers to apply patches before exploits are developed and used.
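Even a trivial script can tell an administrator whether a host is falling behind. A minimal sketch assuming a Debian-style host, using apt-get's simulation mode to report pending upgrades without installing anything:

```python
# Minimal patch-status sketch for a Debian-style host: simulate an
# upgrade run ("-s" installs nothing) and list the pending packages.
import subprocess

out = subprocess.check_output(["apt-get", "-s", "upgrade"], text=True)
pending = [l.split()[1] for l in out.splitlines() if l.startswith("Inst ")]

print(f"{len(pending)} packages pending upgrade")
for pkg in pending:
    print(" ", pkg)
```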

Firewall:
The definition of a “supercritical” vulnerability above includes “Internet-facing” for a reason: it is much easier to find a vulnerable system when it can be found by scanning across the open Internet. Services intended for internal use should be restricted to access from internal systems only, and for those internal services that require access from outside, consider a VPN in order to authenticate users before allowing direct access to the system. Firewalls are not an excuse to avoid patching, however, as many attacks are launched across internal networks from systems already compromised by malware.
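To appreciate how little effort that discovery takes, here is a minimal sketch of the kind of scan an attacker (or a defender auditing their own exposure) runs against a public address; the target below is a placeholder from the TEST-NET range:

```python
# Minimal exposure-check sketch: attempt TCP connections to a handful of
# common service ports. Point it at your own public address (the TARGET
# below is a documentation-only placeholder) to see what the open
# Internet sees.
import socket

TARGET = "192.0.2.10"          # placeholder (TEST-NET) address
COMMON_PORTS = {22: "ssh", 25: "smtp", 80: "http", 443: "https",
                1433: "mssql", 3306: "mysql", 3389: "rdp"}

for port, name in COMMON_PORTS.items():
    try:
        with socket.create_connection((TARGET, port), timeout=2):
            print(f"EXPOSED: {name} ({port})")
    except OSError:
        pass  # closed or filtered
```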

Defense in Depth:
There will always be some services that must be enabled and kept exposed to the Internet, and they may be affected by a zero-day vulnerability for which patches are not yet available. We must then rely on our ability to detect and stop an attack before it succeeds. Hopefully the combination of techniques described above (especially reduced privileges) will slow an attacker down enough that he becomes easier to detect. We can also increase the chances of detection by looking for the signs of an attack (with antivirus, intrusion detection/prevention systems, file integrity monitoring, behavior monitoring, and log correlation) while making it harder for him to exfiltrate any sensitive data (firewalling outbound connections, data loss prevention technology, proxying). And we must always have a real live human ready to quickly respond to alerts and intervene to stop an active attack; without that, all of this technology is useless and it’s just a matter of time before an attacker finds a way around it.
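As a taste of what even simple log correlation can catch, here is a minimal sketch that counts failed SSH logins per source address, assuming a stock OpenSSH syslog format at the usual Debian/Ubuntu location; real deployments would use an IDS or SIEM rather than a script like this:

```python
# Minimal log-correlation sketch: count failed SSH logins per source IP
# and alert past a threshold. The log path and message pattern assume a
# stock OpenSSH/syslog setup on a Debian-style host.
import re
from collections import Counter

PATTERN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10

failures = Counter()
with open("/var/log/auth.log") as log:
    for line in log:
        m = PATTERN.search(line)
        if m:
            failures[m.group(1)] += 1

for ip, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip}")
```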

New OpenSSL breach is no Heartbleed, but needs to be taken seriously

Quoted in ZDNet on the implications of new SSL vulnerabilities:

Read More...

Black Hat 2014 spotlights mobile device management, modem threats

Quoted in Tech Page One on vulnerabilities in Mobile Device Management products:

Read More...

MDM is Terrible: When Security Solutions Hurt Security

While the headline is a little too sensationalist for my liking, quoted in PC Magazine SecurityWatch on vulnerabilities in Mobile Device Management products:

Read More...