March 2014

A long goodbye for XP

Windows XP will no longer be supported as of April 8th. This should not come as a surprise to anyone, as Microsoft’s support lifecycle has been public knowledge for years, but what may come as a surprise is how many organizations are likely to be affected: Recent reports indicate that over 30% of Windows installations are still running XP. With such a high percentage it is almost assured that any given organization has an XP installation somewhere on its network, likely on long-forgotten servers or workstations at rarely upgraded remote sites. Even if servers and workstations have been expunged, XP may still be lurking in one final holdout: embedded systems. These systems are almost appliance-like in nature (you just plug them in and they work), but behind the scenes they are still computers and require some sort of operating system, often Windows XP. As one example, over 90% of ATMs run XP; other embedded systems running XP could include digital surveillance video recording systems, electronic door lock access control systems, graphic displays (like the departure screens in airports), digital telephone exchanges, etc.

The risks of an unsupported operating system should be obvious: Microsoft will no longer be providing patches for Windows XP, so any security vulnerabilities discovered in the future will remain permanently unfixed. With such a large number of XP systems still in use, attackers will almost certainly be looking for new vulnerabilities in XP and adjusting their exploit kits to take advantage of them, knowing that the exploits will work indefinitely. Even if the remaining XP machines on a network do not provide critical functionality, they may still serve as a gateway into the network for an attacker: most network administrators focus their security resources at the perimeter and have very little protection or detection capability internally. Attackers have been taking advantage of this for years by compromising workstations (often through malware distributed via phishing emails) and using them to target other, more sensitive systems on the same network. Leaving unsupported XP installations in place, whether on servers, workstations, or embedded systems, will provide just such a stepping stone for an attacker to penetrate a network and steal sensitive data.

In addition to the risk concerns there are compliance concerns as well: Any unsupported operating system detected during an ASV scan results in an automatic failure. Because PCI defines the compliance scope as the systems that directly handle payment card data plus other connected systems (due to the risk of stepping-stone attacks described above), an unsupported XP machine that has nothing to do with card processing could cause this failure merely because it is on the same network.

A common refrain amongst organizations that run older software is that they do not upgrade either because they are concerned about the stability of the system or because of the cost of the upgrade. While these are valid concerns, they should be weighed against the potential stability impact of an attacker compromising the system with malware in order to use it as a platform to warehouse stolen data, send spam, launch DDoS attacks, and mount further attacks within the network, as well as against the cost of cleaning up after such a breach. The likelihood of such a compromise will increase by the day as vulnerabilities are identified and disseminated, and it is unlikely that any objective risk assessment would conclude that keeping the unsupported operating system in place is the safest and least costly course of action.

NTT Com Security can help our clients identify XP machines on their networks through scanning: When provided with access credentials, our tools can connect to systems on the network and accurately identify the operating system. Fingerprinting techniques can help to identify systems that can’t be logged into (such as Unix systems with unique passwords) and flag potentially unsupported installations for follow-up investigation. Additionally, NTT Com Security can help design security controls to protect existing XP systems while replacements are designed, procured, and tested.
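As a hedged illustration of the fingerprinting approach, the sketch below assumes the widely used `nmap` scanner with its `-O` OS-detection option; the parsing logic and function names are hypothetical examples, not NTT Com Security's actual tooling.

```python
import re
import subprocess

def flag_xp_hosts(nmap_output: str) -> list:
    """Parse `nmap -O` text output and return hosts whose OS
    fingerprint mentions Windows XP, using the standard
    'Nmap scan report for <host>' / 'OS details: <os>' layout."""
    hosts = []
    current = None
    for line in nmap_output.splitlines():
        m = re.match(r"Nmap scan report for (\S+)", line)
        if m:
            current = m.group(1)
        elif "OS details:" in line and "Windows XP" in line and current:
            hosts.append(current)
    return hosts

def scan_subnet(subnet: str) -> list:
    # Requires nmap installed and sufficient privileges for -O.
    out = subprocess.run(["nmap", "-O", subnet],
                         capture_output=True, text=True).stdout
    return flag_xp_hosts(out)
```

Unfingerprintable or ambiguous hosts would still need the manual follow-up investigation described above; OS detection is a heuristic, not a guarantee.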

The end of XP support will likely affect every one of our clients if it hasn’t already. Let’s see what we can do to help smooth the transition and make sure there are no surprises left behind.

Hard target

Businessweek is reporting that Target spent $1.6 million to install FireEye (a next-generation network monitoring solution); that an operations center in Bangalore monitored the FireEye deployment; that the solution alerted on the malware penetrating Target's network; and that the operations center treated the alert as a false positive and ignored it. Also revealed in the article is that Target's CEO said they were certified PCI compliant in September of 2013 (I'm assuming he means that this was when they completed their last Report on Compliance). For the icing on the cake, Businessweek made this their cover story with a huge “Easy Target” headline (complete with a cute animated online version), which demonstrates the potential PR fallout from a breach like this.
The article is here.

Compliance, monitoring, and response
For quite a while now I’ve been beating the drum on the message that you can't rely on protection mechanisms alone (firewalls, patching, etc.) to secure a network and the data within it; given enough time a motivated attacker will find a way in. You have to be able to detect the intruder and respond to him in order to limit the damage he can cause. This is why banks have cameras, alarms, guards, and a hotline to the police despite also having a vault to keep valuables in. I've raised this point in the context of the Target breach before as well: we already knew that the breach was based on malware that had been modified to evade antivirus detection, which illustrates the need for monitoring and response capability rather than reliance on antivirus alone. Reports indicated that Target first found out about the breach when they were informed of it by Federal authorities, likely because the stolen cards had already turned up on underground markets and had been traced back to Target via Federal or bank fraud analysis units. This indicates that Target's detection and response capabilities were not effective, but that was not surprising: 69% of breaches are first detected by an external party according to the Verizon 2013 Data Breach Investigations Report. Now the FireEye revelation, indicating that Target had all the right pieces in place to detect and respond to this breach, changes the nature of the conversation a bit.

Based on what we now know about the FireEye deployment it appears that Target was in fact trying to do all the right things: they became PCI compliant, they had robust monitoring infrastructure (FireEye) in place as required by PCI-DSS, and they had actual human beings reviewing the alerts generated by those monitoring systems, also as required by PCI-DSS. Regardless of how effective the offshore operations center was (which I'm sure will become a topic of much speculation), these 3 points alone demonstrate more security effort than is apparent at most companies that handle credit cards. We are doing assessment work for major companies that haven't even attempted to become PCI compliant yet (some in the retail sector); most of these companies (compliant or not) have not invested in monitoring infrastructure any more advanced than IDS/IPS and basic system log collection, and manually reviewing those logs is usually an overlooked side-job assigned to an overworked IT guy.

So here is where I disagree with Businessweek's characterization of "Easy Target" (although I'll admit it does make a great headline): In light of this revelation I would say that Target is likely one of the harder targets. Despite the enormous impact of this breach it is still only a single breach and should be viewed in light of Target's overall security efforts. I would be very interested to see numbers around how many attacks Target successfully stopped with their monitoring capabilities before this attack slipped through. This breach did still happen, though, and companies will want to know why and what they can do to protect themselves; based on what we know now I would say that Target made 2 errors, both relatively minor when compared to how atrocious security is in most organizations. Both errors have to do with how monitoring is conducted; specifically, which behaviors generate alerts and how false positives are handled.

False positives
Any security monitoring system, whether it is a network intrusion detection system, a motion sensor in a bank, or a metal detector at an airport, can be tuned to be more or less sensitive, and a FireEye deployment is no different. The tuning capability exists because there is unfortunately no such thing as a security sensor that only alerts on threats without ever generating false positive results: a metal detector that alerted on any metal at all would alarm every time a person with metal fillings in their teeth or metal rivets in their jeans walked through, a motion sensor that alerted on any motion at all would alarm every time a spider crawled across the floor, and a network monitoring system that alerted on any activity would inundate its operators with alerts on normal traffic. Tuning the system to be less sensitive in order to eliminate false positives is not as simple as it may seem: if a metal detector is tuned to detect only a lump of metal the size of a gun, it will fail to alarm when a group of people each carries through a single component of a gun for reassembly on the other side. In order for security technology to be effective it must be tuned to be sensitive enough to detect most conceivable threats, and an allowance must be made for humans to thoroughly investigate the potential false positives that will inevitably occur as a result.
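The trade-off can be reduced to a toy model: a detector that alarms whenever a measurement crosses a threshold. The numbers below are purely illustrative.

```python
def alarms(measurements, threshold):
    """Return the measurements (e.g. grams of metal carried through
    a detector) that would trigger an alarm at a given threshold."""
    return [m for m in measurements if m >= threshold]

# Fillings, jean rivets, a single gun component, larger metal objects.
items = [5, 12, 40, 300, 650]

# A sensitive threshold catches the 40 g gun component, but also
# alarms on harmless items; a permissive threshold stays quiet but
# lets the component walk through. Humans must absorb the difference.
sensitive = alarms(items, 30)
permissive = alarms(items, 500)
```

There is no threshold that catches every threat while ignoring every harmless item, which is why investigation capacity has to be budgeted alongside the sensor itself.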

Published information on Target's response indicated that the FireEye solution labelled the ignored threat as "malware.binary", a generic name for a piece of code that is suspected to be malicious even though it does not match any of the patterns for more widespread malware that has been analyzed and given a name. So far this indicates that Target had likely tuned their monitoring solution well enough, as it did detect the actual threat and generated an alert based on it (a system tuned to be too permissive wouldn't have generated an alert at all). Where Target's system failed was in the human response to that alert: It is likely that Target's monitoring center received many of these generic alerts on a regular basis, most of them either false positives or simple attacks that were automatically blocked by other security mechanisms; after too many of these false positive generic alerts the humans responsible for responding to them will learn to ignore them. This is like asking each person who sets off the metal detector if they have metal fillings and sending them on their way without further inspection if they respond in the affirmative; it wouldn't be a surprise at all if something slipped through at that point. The only way to make effective use of the security solution is to actually investigate each alert and resolve the cause; this is time consuming and expensive, but not nearly so much as a breach. It appears that this is the key piece of Target's process that failed.

Behavior monitoring
The second error is something I am inferring from what was not mentioned: specifically, any alerts based on activities on the network. Malware is a "known bad", a chunk of code that is suspected to be malicious because it exhibits certain characteristics. The same could be said for most alerts generated by intrusion detection and prevention systems: they are based on network traffic exhibiting known suspicious characteristics, such as a chunk of traffic that would exploit a known vulnerability in an email server or a computer that quickly tries to connect to each of the protected systems in turn. Attempting to monitor a network by only watching for "known bad" traffic is akin to setting up a firewall to allow all network traffic except that which is explicitly denied (a practice that was mostly abandoned many years ago). The standard for configuring firewalls today is to deny all traffic by default and to only allow specific "known good" services to pass when they are explicitly defined, and this is the model we should look to for effective network monitoring as well: Define "known good" traffic and alert when anything out-of-the-ordinary happens on the network.
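A rough sketch of that default-deny monitoring model follows; the host roles, ports, and allowlist are hypothetical examples, not any vendor's actual rule format.

```python
# Default-deny monitoring sketch: any flow not explicitly defined as
# known-good generates an alert, mirroring modern firewall policy.
KNOWN_GOOD = {
    ("pos-terminal", "payment-gateway", 443),  # POS -> card processor
    ("workstation", "mail-server", 25),        # desktop -> mail relay
}

def check_flow(src_role: str, dst_role: str, port: int) -> str:
    """Return 'allow' for known-good traffic and 'alert' for
    everything else, rather than alerting only on known-bad traffic."""
    if (src_role, dst_role, port) in KNOWN_GOOD:
        return "allow"
    return "alert"
```

The hard part in practice is building and maintaining the allowlist, which is why this approach demands more up-front effort than signature-based alerting.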

The actual method used to penetrate and infect the network aside, reports indicate that credit card data was sent from Target's point-of-sale terminals to another compromised server on Target's network, where it was then encrypted and sent out for the attackers to retrieve over the Internet. This represents the exfiltration of a huge amount of data and, had Target been looking for anything other than "known bad" traffic, would have provided 2 opportunities for detection: the point-of-sale terminals suddenly started interacting with a system on Target's own internal network that they did not normally interact with, and then that system suddenly started sending large amounts of encrypted traffic to a system on the Internet that it had never communicated with before. Neither of these communication vectors would have been flagged as "known good" and therefore both should have triggered alerts for investigation. Unfortunately almost no one monitors networks in this way, and Target can't really be faulted for not being on the bleeding edge of security best practices.
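A minimal sketch of what such behavioral detection could look like, assuming a learned baseline of which hosts normally talk to each other; the class name, thresholds, and host names are illustrative inventions, not a description of any actual product.

```python
from collections import defaultdict

class FlowBaseline:
    """Toy behavioral monitor: learn which peers each host normally
    talks to, then flag communication with never-before-seen peers
    and unusually large transfers (the two anomalies in the Target
    exfiltration scenario). Thresholds are illustrative only."""

    def __init__(self, volume_threshold_bytes: int = 100_000_000):
        self.peers = defaultdict(set)  # host -> set of known peers
        self.volume_threshold = volume_threshold_bytes

    def learn(self, src: str, dst: str):
        """Record a flow observed during the baselining period."""
        self.peers[src].add(dst)

    def observe(self, src: str, dst: str, nbytes: int) -> list:
        """Return alerts for a newly observed flow."""
        alerts = []
        if dst not in self.peers[src]:
            alerts.append(f"new peer: {src} -> {dst}")
        if nbytes > self.volume_threshold:
            alerts.append(f"large transfer: {src} -> {dst} ({nbytes} bytes)")
        self.peers[src].add(dst)
        return alerts
```

In this model, a POS terminal suddenly talking to an internal staging server, and that server suddenly pushing gigabytes to an unknown Internet host, would each raise an alert even though neither matched any "known bad" signature.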

Law enforcement
There is a third failing that is worth mentioning here, one that is not at all under Target's control but that nevertheless contributed directly to this breach and many others around the world: the inability or unwillingness of law enforcement to stop criminals who operate primarily online. In the physical world we are used to the concept that when a bank gets robbed the police will respond, investigate, and at least attempt to identify and arrest the offender but in the online world this simply isn't happening all that often.

There are various published reports identifying the individuals behind the malware used at Target and the website used to sell the stolen credit card numbers. These identifications weren't the results of Secret Service investigations or NSA metadata collection programs; rather, the individuals were identified, fairly easily, by private researchers piecing together information from social media sites and underground forums. Unsurprisingly to anyone in the security industry, the implicated individuals are all young, from Eastern Europe, and have been engaged in these activities for many years. The economic realities in much of Eastern Europe are such that there aren't many legitimate career opportunities for bright young computer enthusiasts. Given the sad state of information security in the rest of the world and the potential income, it isn't surprising that many of these kids, who under different circumstances might have been the brains behind a multi-million dollar Silicon Valley startup, are turning to crime against corporations on the other side of the planet. With the recent events unfolding in Ukraine perhaps there is a glimmer of hope that these economic conditions will start changing in the near future.

One would assume, if these are just broke kids with a knack for computers who are so sloppy about protecting their identities that someone with computer know-how (and some knowledge of the Russian language) can figure out who they are, that law enforcement must already be heading for their door, but things are not so simple: a significant fraction of online crime crosses borders, and while large breaches like Target's attract law enforcement attention, a small business owner would be hard-pressed to get any meaningful law enforcement response to a breach regardless of the consequences for his business. Local law enforcement agencies usually don't have the resources to conduct investigations across state lines, never mind national borders. In the post-9/11 world Federal law enforcement priorities are often focused elsewhere, often in the name of "national security"; the agencies that have historically focused on information security seem to be more concerned with threats posed by other governments than with criminal enterprises, and the FBI is now positioning itself as a counterterrorism and foreign intelligence agency. The political realities in Eastern Europe are also such that the cooperation between Western law enforcement agencies and their local counterparts that would be necessary to bring offenders to justice is difficult or non-existent; the recent events unfolding in Crimea indicate that any change in this status quo is unlikely. For the foreseeable future the attackers will be mostly left to their own devices, honing their skills across hundreds or thousands of attacks until they have the capability to penetrate even the most well defended network.

Where do we go from here?
Technology alone can't solve all our problems. Hopefully most of us know that already, but there were quite a few vendors at the RSA conference this year proclaiming that their technology would have prevented the Target breach or, even more ludicrously, claiming that it would have prevented the Snowden breach at NSA. If technology could in fact solve all of our woes then, in light of Target's $1.6 million investment in FireEye's solution, any organization that hasn't spent that enormous amount on security technology should be very worried. This also demonstrates once again that compliance alone is not security either: we don't know who Target's PCI assessor was or whether they took the compliance mandate seriously (versus taking the checkbox approach), but from what I've read so far it is entirely possible for this breach to have occurred in the manner that it did even if Target was serious about compliance. We need to treat compliance as a minimum standard, a baseline upon which we build security appropriate to our own threat environment. And finally, it is becoming increasingly obvious that the next step in the cat-and-mouse game of security is to increase real-time monitoring and response capabilities to make more effective use of the technology we have already deployed, and to make sure that the people tasked with that response have the time and resources to conduct proper investigations (no more pretending that the overworked IT guy will have time to do it).

DDoS Attacks Get Trickier, Traditional Defenses Fall Short

Quoted in Wall Street and Tech on evolving Distributed Denial of Service attacks:

Read More...

The changing face of malware

Stories are circulating about a “remote access trojan” for Android that made its way into the Google Play store. This malware is making headlines due to its ability to activate cameras and microphones to spy on victims but what is also interesting is that the malware comes from a malware construction kit known as Dendroid.

The existence of the Dendroid toolkit isn’t surprising. As mobile platforms are increasingly used to handle sensitive data, both personal and business, the criminal elements that profit from the information captured by malware will shift more attention to these platforms in order to expand their illicit businesses. The pattern used by Dendroid is a familiar one: virus construction kits have existed for years, allowing attackers to quickly and easily combine various vulnerability exploits with malicious payloads in a customized package.

The malware generated by Dendroid managed to evade Google’s detection mechanisms and has since been picked up by antivirus signatures but this is only the first step in what will be a cat-and-mouse game. As we’ve seen with traditional malware, the authors will now begin modifying their software to evade the latest antivirus signatures, always trying to stay one step ahead of the vendors. The Target breach is a high profile example of this modus operandi: the malware used on Target’s point-of-sale systems was believed to have been purchased on an underground market where it had been available for months, if not years, and was then modified to evade antivirus detection before being deployed.

Evading Antivirus
Modifying malware to evade antivirus solutions is made simple by the very methods that antivirus software uses to detect malicious code: most antivirus solutions are signature based. When a new virus sample is found “in the wild” the antivirus vendors will look for unique patterns in the files or in the behaviors of the offending code and build a signature based on these patterns. These signatures are then added to a database that is distributed to the antivirus installations deployed around the world. The antivirus software simply looks for the signature patterns contained in its database and then alerts on or quarantines any suspect files.
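In essence, signature matching boils down to searching data for known-bad byte patterns. The following is a deliberately naive sketch: the signature names and patterns are made up for illustration, and real engines use far richer signature formats (wildcards, heuristics, behavioral emulation).

```python
# Hypothetical signature database mapping a detection name to a
# distinctive byte pattern extracted from an analyzed sample.
SIGNATURES = {
    "Example.Trojan.A": b"\xde\xad\xbe\xef",
    "Example.Worm.B": b"EVIL_PAYLOAD",
}

def scan_bytes(data: bytes) -> list:
    """Return the names of any known-bad patterns found in the data,
    the core operation of a signature-based scanner."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in data]
```

Note that changing even a single byte of the matched pattern in a new build of the malware defeats the lookup entirely, which is exactly the evasion technique described next.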

This approach may be effective at preventing a common virus from spreading widely across the Internet when virus samples can be identified and signatures generated, but it quickly becomes ineffective in the face of custom-assembled malware; a malware author can simply review those same antivirus databases to determine how to avoid tripping any signatures when he develops a piece of malicious code, and test the code against live antivirus installations to be sure. If the resulting malware is deployed against only a few selected targets there will be no publicly circulating samples from which antivirus vendors can build signatures, and the malicious code will likely remain undetected until a breach is well underway unless the target has other behavior detection and response capabilities deployed besides antivirus.

There is an inherent limit to how quickly signatures can be developed: antivirus vendors must first find a sample of the malware, examine it for patterns, and then carefully test the resulting signatures in order to avoid false-positive results once they are deployed. If a signature is not specific enough it can cause the antivirus solution to alert on legitimate software that just happens to match the signature patterns. Despite the efforts of the antivirus vendors this does happen occasionally, and it can have catastrophic consequences when critical files end up automatically quarantined and systems crash. As a point of reference, it took about 2 weeks after the Target breach was announced before signatures that would detect the malware used in their environment began to be released.

Malware authors are also becoming more creative in deploying their malware, utilizing “dropper” code that causes the initial infection, installs a separate backdoor, and then deletes itself in order to avoid leaving any files behind to be sampled by antivirus vendors. This makes the development of antivirus signatures that can stop the initial infection more difficult, even for more widely spread viruses, as there are no samples. Antivirus vendors also try to analyze virus files in an isolated “sandbox” environment so that they don’t unintentionally infect their own systems. Malware authors can design their code so that it can attempt to detect these sandbox environments and alter its behavior to prevent effective analysis.

Evading Google
The mobile app marketplace has gravitated toward a model with centralized app stores that is very different from the distributed model common to the personal computer world. The malware generated by Dendroid is a member of a specific sub-class of viruses uniquely suited to distribution in the app store model, called “trojans” after the infamous trojan horse: the malicious code is attached to another, often legitimate, program that potential victims would be likely to voluntarily download and run. The idea of getting a malicious app into a sanctioned app store is a very enticing prospect for a malware author as it will be almost guaranteed to have more exposure than software on a standalone website. Furthermore, users trust software on these sanctioned app stores and are unlikely to even consider that there may be embedded malicious code.

Google (and Apple) recognize the trust users place in their marketplaces and attempt to prevent malware from ending up in the stores. This is typically done by running an app that has been submitted to the store in another “sandbox” environment, basically a virtual machine that simulates an actual mobile device. The behavior of the software within the sandbox is monitored by QA staff to determine if it performs any suspicious actions when run. This sandbox environment is very similar to the way antivirus vendors monitor the behavior of malware samples in order to build signatures, and a technique that virus authors have long used to prevent analysis of their code has been adapted by Dendroid to sneak its malware past Google’s checks: detect whether the software is being executed on an actual device or in a sandboxed virtual environment, and if it is in a virtual machine, suppress any malicious behavior to avoid detection.
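Conceptually, the evasion works like the hedged sketch below: probe for markers of an analysis environment and behave benignly if any are found. The markers here are illustrative; Dendroid's actual checks reportedly inspect Android emulator build properties and are written in Java, not Python.

```python
def looks_like_sandbox(environment: dict) -> bool:
    """Return True if the environment shows markers of an emulator
    or analysis sandbox. The marker lists are illustrative only."""
    emulator_markers = {
        "device_model": {"sdk", "emulator", "generic"},
        "phone_number": {"15555215554"},  # a default emulator number
    }
    for key, bad_values in emulator_markers.items():
        if environment.get(key, "").lower() in bad_values:
            return True
    return False

def run(environment: dict) -> str:
    # Analysis environments only ever observe benign behavior,
    # so the vetting process sees nothing suspicious to flag.
    if looks_like_sandbox(environment):
        return "benign behavior only"
    return "malicious payload would run here"
```

The defensive countermeasure is the mirror image: make analysis sandboxes indistinguishable from real devices, another round of the same cat-and-mouse game.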

What about Apple
A recent study indicated that 97% of mobile malware is on the Android platform, which is a remarkable result considering the widespread popularity of Android’s main rival, Apple’s iOS. One would expect malware authors to target any popular platform rather than focusing on one and ignoring a huge share of the market. A likely reason for this phenomenon is the very openness of the Android platform that causes many of its users to choose it over Apple’s competing products with their “walled garden” approach.

Apple’s mobile platforms are very restrictive when compared to the freewheeling personal computer world and the Android platform, where any software can be run and nearly every aspect of the platform can be modified at will. With a few setting changes Android users can access alternative app stores (often riddled with malware), allow apps to interact with each other, and access or change the operating system software at a very low level. Apple, on the other hand, restricts their users to official apps from the official app store only, prevents apps from communicating with each other except in very specifically defined ways, shields the underlying operating system from the user, and works very hard to prevent jailbreaks that would allow users to bypass these restrictions. The restrictions placed on iOS users and apps make it much more difficult for users to perform actions that would result in the successful installation of malware and limit the damage that malware could cause to the system if installed.

Besides the apparent security benefits of Apple’s restrictive walled garden, Apple’s vigorous attempts to prevent jailbreaks are also likely a contributing factor in the platform’s resistance to malware. Jailbreak software that allows users to bypass Apple’s restrictions in iOS functions by exploiting bugs in the operating system in order to gain low level access and open a backdoor that offers the user a greater degree of control over their device than Apple intended. This is the exact same technique that malware uses to gain access to a system and open a backdoor for attackers to take control. Apple is engaged in its own cat-and-mouse game with the authors of jailbreak software, quickly patching the bugs that allow jailbreaks to function and thereby also closing many of the holes that malware would be able to use in order to break out of an app and compromise the underlying system.

Playing Defense
Mobile malware is here to stay and companies must consider what they can do to protect themselves from it. Unlike desktop PCs, mobile devices are often used outside the company’s security perimeter where they can be exposed to any number of threats.

Antivirus should be considered a first-line defense against common viruses that have begun to spread widely but it is ineffective at stopping most targeted attacks or viruses in the early stages of their spread. It should not be considered a standalone substitute for other security mechanisms.

Similarly, the mobile platform vendors’ methods of controlling access to their app marketplaces also have their limits. Much like the game between malware authors and antivirus vendors, there will be constant attempts to evade whatever controls are put in place to keep malware out of app stores, and Google’s or Apple’s approvals cannot be completely relied upon. In spite of this it would appear that, faced with iOS as a much tougher target for malware than Android, attackers have been focusing their efforts on Google’s platform. Apple’s iOS is still a very enticing target and malware will certainly be released for the platform but, for now at least, it would appear that the security risk on iOS is much lower than on Android. This of course assumes that users are not jailbreaking their devices and bypassing all of Apple’s controls that make the platform a more difficult target.

Ultimately the solution is a combination of techniques based on the risk mobile devices pose to an organization: Companies must think very carefully about the risks of allowing sensitive information on a privately owned device where little control can be exercised over the other software installed on the device, or conversely about what software they allow to be installed on company-owned devices. In most cases old technologies like antivirus should be combined with newer technologies like Mobile Device Management to provide defense-in-depth while increasing monitoring, alerting, and response capabilities so that potential breaches can be detected and stopped before they get out of hand.

Navy network hack has valuable lessons for companies

Quoted in CSO on the lessons to be learned from the Navy breach

Read More...

12 Ways to Disaster-Proof Your Critical Business Data

Quoted in CIO on disaster readiness

Read More...

Digi-Ransoms: Meetup.com Latest in Long History of Cyber Hostages

Quoted in NBC News on DDoS blackmail

Read More...