Detection and Response
Hard target
18 03, 14 Filed in: Blog Posts
Businessweek is reporting that Target spent $1.6 million to install FireEye (a next-generation network monitoring solution), that an operations center in Bangalore was monitoring the FireEye solution, that FireEye alerted on the malware penetrating Target's network, and that the operations center treated the alert as a false positive and ignored it. The article also reveals that Target's CEO said they were certified PCI compliant in September of 2013 (I'm assuming he means that this was when they completed their last Report on Compliance). For the icing on the cake, Businessweek made this their cover story with a huge “Easy Target” headline (complete with a cute animated online version), which demonstrates the potential PR fallout from a breach like this.
The article is here.
Compliance, monitoring, and response
For quite a while now I’ve been beating the drum on the message that you can't rely on protection mechanisms alone (firewalls, patching, etc.) to secure a network and the data within it; given enough time a motivated attacker will find a way in. You have to be able to detect the intruder and respond to him in order to limit the damage he can cause. This is why banks have cameras, alarms, guards, and a hotline to the police despite also having a vault to keep valuables in. I've raised this point in the context of the Target breach before as well: we already knew that the breach was based on malware that had been modified to evade antivirus detection, which illustrates the need for monitoring and response capability rather than relying on antivirus alone. Reports indicated that Target first found out about the breach when they were informed of it by Federal authorities, likely because the stolen cards had already turned up on underground markets and had been traced back to Target via Federal or bank fraud analysis units. This indicates that Target's detection and response capabilities were not effective, but it was not surprising: 69% of breaches are first detected by an external party according to the Verizon 2013 Data Breach Investigations Report. Now the FireEye revelation, indicating that Target had all the right pieces in place to detect and respond to this breach, changes the nature of the conversation a bit.
Based on what we now know about the FireEye deployment it appears that Target was in fact trying to do all the right things: they became PCI compliant, they had robust monitoring infrastructure (FireEye) in place as required by PCI-DSS, and they had actual human beings reviewing the alerts generated by those monitoring systems, also as required by PCI-DSS. Regardless of how effective the offshore operations center was (which I'm sure will become a topic of much speculation), these 3 points alone demonstrate more security effort than is apparent at most companies that handle credit cards. We are doing assessment work for major companies that haven't even attempted to become PCI compliant yet (some in the retail sector); most of these companies (compliant or not) have not invested in monitoring infrastructure any more advanced than IDS/IPS and basic system log collection, and manually reviewing those logs is often an overlooked side-job assigned to an overworked IT guy.
So here is where I disagree with Businessweek's characterization of "Easy Target" (although I'll admit it does make a great headline): In light of this revelation I would say that Target is likely one of the harder targets. Despite the enormous impact of this breach it is still only a single breach and should be viewed in light of Target's overall security efforts. I would be very interested to see numbers around how many attacks Target successfully stopped with their monitoring capabilities before this attack slipped through. This breach did still happen though and companies will want to know why and what they can do to protect themselves; based on what we know now I would say that Target made 2 errors, both relatively minor when compared to how atrocious security is in most organizations. The 2 errors both have to do with how monitoring is conducted; specifically what behaviors generate alerts and how false positives are handled.
False positives
Any security monitoring system, whether it is a network intrusion detection system, a motion sensor in a bank, or a metal detector at an airport, can be tuned to be more or less sensitive, and a FireEye deployment is no different. The tuning capability exists because there is unfortunately no such thing as a security sensor that only alerts on threats without ever generating false positive results: a metal detector that alerted on any metal at all would alarm every time a person with metal fillings in their teeth or metal rivets in their jeans walked through, a motion sensor that alerted on any motion at all would alarm every time a spider crawled across the floor, and a network monitoring system that alerted on any activity would inundate its operators with alerts on normal activity. Tuning the system to be less sensitive in order to eliminate false positives is not as simple as it may seem: if a metal detector is tuned only to detect a lump of metal the size of a gun it will fail to alarm when a group of people each carries through a single component of a gun for reassembly on the other side. In order for security technology to be effective it must be tuned to be sensitive enough that it will detect most of the conceivable threats, and an allowance must be made for humans to thoroughly investigate the potential false positives that will inevitably occur as a result.
Published information on Target's response indicated that the FireEye solution labelled the ignored threat as "malware.binary", a generic name for a piece of code that is suspected to be malicious even though it does not match any of the patterns for more widely spread malware that has been analyzed and given a name. So far this indicates that Target had likely tuned their monitoring solution well enough, as it did detect the actual threat and generated an alert on it (a system tuned to be too permissive wouldn't have generated an alert at all). Where Target's system failed was the human response to that alert: it is likely that Target's monitoring center received many of these generic alerts on a regular basis, most of them either false positives or simple attacks that were automatically blocked by other security mechanisms, and after too many false positive generic alerts the humans responsible for responding to them learn to ignore them. This is like asking each person who sets off the metal detector if they have metal fillings and sending them on their way without further inspection if they respond in the affirmative; it wouldn't be a surprise at all if something slipped through at that point. The only way to make effective use of the security solution is to actually investigate each alert and resolve the cause; this is time consuming and expensive, but not nearly so much as a breach. It appears that this is the key piece of Target's process that failed.
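To make this concrete, here is a minimal triage sketch (with hypothetical alert fields and rules, not a description of FireEye's or Target's actual tooling) in which generic detections like "malware.binary" are never auto-closed and always produce an investigation:

```python
# Minimal triage sketch: generic "unknown malware" detections are never
# auto-closed; they always require a human investigation.
# Alert fields and names are hypothetical, for illustration only.

GENERIC_DETECTIONS = {"malware.binary"}  # unnamed/uncategorized malware

def triage(alert):
    """Return a disposition for an alert: 'investigate' or 'auto_close'."""
    name = alert.get("detection", "")
    blocked = alert.get("auto_blocked", False)

    # A generic detection means the sensor saw something it could not match
    # to known malware -- exactly the case a human must look at.
    if name in GENERIC_DETECTIONS:
        return "investigate"

    # Known, named malware that another control already blocked can be
    # closed automatically, but only with the block confirmed.
    if blocked:
        return "auto_close"

    return "investigate"

if __name__ == "__main__":
    print(triage({"detection": "malware.binary", "auto_blocked": False}))  # investigate
    print(triage({"detection": "zeus.variant", "auto_blocked": True}))     # auto_close
```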
Behavior monitoring
The second error is something I am inferring from what was not mentioned: specifically, any alerts based on activities on the network. Malware is a "known bad", a chunk of code that is flagged as suspicious because it exhibits certain characteristics. The same could be said for most alerts generated by intrusion detection and prevention systems: they are based on network traffic exhibiting known suspicious characteristics, such as a chunk of network traffic that would exploit a known vulnerability in an email server or a computer that quickly tries to connect to each of the protected systems in turn. Attempting to monitor a network by only watching for "known bad" traffic is akin to setting up a firewall to allow all network traffic except that which is explicitly denied (a practice that was mostly abandoned many years ago). The standard for configuring firewalls today is to deny all traffic by default and to only allow specific "known good" services to pass through when they are explicitly defined, and this is the method we must look to for effective network monitoring as well: define "known good" traffic and alert when anything else out-of-the-ordinary happens on the network.
The actual method used to penetrate and infect the network aside, reports indicate that credit card data was sent from Target's point-of-sale terminals to another compromised server on Target's network, where it was then encrypted and sent out for the attackers to retrieve over the Internet. This represents the exfiltration of a huge amount of data and, had Target been looking for anything other than "known bad" traffic, it provided 2 opportunities for detection: point-of-sale terminals suddenly started interacting with a system on Target's own internal network that they did not normally interact with, and that system then suddenly started sending large amounts of encrypted traffic to a system on the Internet that it had never communicated with before. Neither of these communication vectors would have been flagged as "known good" and therefore both should have triggered alerts for investigation. Unfortunately almost no one monitors networks in this way, and Target can't really be faulted for not being on the bleeding edge of security best practices.
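As a rough sketch of what "known good" monitoring could look like, the example below alerts on any network flow that is not in an explicitly defined baseline; the host names, flow format, and thresholds are all illustrative assumptions rather than a description of any particular product.

```python
# Sketch of "known good" flow monitoring: anything not explicitly listed
# in the baseline generates an alert for investigation. Host names, the
# flow format, and the baseline itself are illustrative assumptions.

# Baseline of expected (source, destination, port) tuples.
KNOWN_GOOD = {
    ("pos-terminal", "payment-gateway", 443),
    ("pos-terminal", "inventory-server", 8443),
    ("workstation", "proxy", 3128),
}

def check_flow(src, dst, port, byte_count):
    alerts = []
    if (src, dst, port) not in KNOWN_GOOD:
        alerts.append("unexpected flow: %s -> %s:%d" % (src, dst, port))
    # Large transfers deserve attention even on expected paths.
    if byte_count > 100 * 1024 * 1024:  # arbitrary 100 MB threshold
        alerts.append("large transfer (%d bytes): %s -> %s:%d" % (byte_count, src, dst, port))
    return alerts

if __name__ == "__main__":
    # A POS terminal talking to an internal staging server it never talks to,
    # and that server pushing a large encrypted upload to the Internet, would
    # both be flagged here.
    print(check_flow("pos-terminal", "internal-staging", 445, 5000000))
    print(check_flow("internal-staging", "203.0.113.50", 443, 900000000))
```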
Law enforcement
There is a third failing that is worth mentioning here, one that is not at all under Target's control but that nevertheless contributed directly to this breach and many others around the world: the inability or unwillingness of law enforcement to stop criminals who operate primarily online. In the physical world we are used to the concept that when a bank gets robbed the police will respond, investigate, and at least attempt to identify and arrest the offender but in the online world this simply isn't happening all that often.
There are various published reports identifying the individuals behind the malware used at Target and the website used to sell the stolen credit card numbers. These reports weren't the results of Secret Service investigations or NSA metadata collection programs; rather, the individuals were identified, fairly easily, by private individuals piecing together information from social media sites and underground forums. Unsurprisingly to anyone in the security industry, the implicated individuals are all young, from Eastern Europe, and have been engaged in these activities for many years. The economic realities in much of Eastern Europe are such that there aren't many legitimate career opportunities for bright young computer enthusiasts. Given the sad state of information security in the rest of the world and the potential income, it isn't surprising that many of these kids, who under different circumstances may have been the brains behind a multi-million dollar Silicon Valley startup, are turning to crime against corporations on the other side of the planet. With the recent events unfolding in Ukraine perhaps there is a glimmer of hope that these economic conditions will start changing in the near future.
One would assume, if these are just broke kids with a knack for computers who are so sloppy about protecting their identities that someone with computer know-how (and some knowledge of the Russian language) can figure out who they are, that law enforcement must already be heading for their door, but things are not so simple: a significant fraction of online crime crosses borders, and while large breaches like Target's attract law enforcement attention, a small business owner would be hard-pressed to get any meaningful law enforcement response to a breach regardless of the consequences for his business. Local law enforcement agencies usually don't have the resources to conduct investigations across state lines, never mind national borders. In the post-9/11 world Federal law enforcement priorities are often focused elsewhere, often in the name of "national security"; the agencies that have historically focused on information security seem to be more concerned with threats posed by other governments than with criminal enterprises, and the FBI is now spinning itself as a counterterrorism and foreign intelligence agency. The political realities in Eastern Europe are also such that the cooperation between Western law enforcement agencies and their local counterparts that would be necessary to bring offenders to justice is difficult or non-existent, and the recent events unfolding in Crimea indicate that any change in this status quo is unlikely. For the foreseeable future the attackers will be mostly left to their own devices, honing their skills across hundreds or thousands of attacks until they have the capability to penetrate even the most well defended network.
Where do we go from here?
Technology alone can't solve all our problems. Hopefully most of us know that already, but there were quite a few vendors at the RSA conference this year proclaiming that their technology would have prevented the Target breach or, even more ludicrously, claiming that it would have prevented the Snowden breach at NSA. If technology could in fact solve all of our woes then, in light of Target's $1.6 million investment in FireEye's solution, any organization that hasn't spent that enormous amount on security technology should be very worried. This also demonstrates once again that compliance alone is not security: we don't know who Target's PCI assessor was or whether they took the compliance mandate seriously (versus taking the checkbox approach), but from what I've read so far it is entirely possible for this breach to have occurred in the manner that it did even if Target was serious about compliance. We need to treat compliance as a minimum standard, a guideline upon which we should build security appropriate to our own threat environment. And finally, it is becoming increasingly obvious that the next step in the cat-and-mouse game of security is to improve real-time monitoring and response capabilities, to make more effective use of the technology that we have already deployed, and to make sure that the people tasked with that response have the time and resources to conduct proper investigations (no more pretending that the overworked IT guy will have time to do it).
On detection and response
18 02, 14 Filed in: Blog Posts
Organizations need to move beyond merely trying to keep attackers out and start building the capability to quickly detect and respond to intrusions, while designing compartmentalized networks that slow attackers once they have breached the perimeter and buy more time to detect and respond to the attack. According to the Verizon Data Breach Investigations Report, 69% of breaches were spotted by an external party, which shows us that security staff are often asleep at the wheel.
Effective detection and response capability can be difficult and expensive; it is not as simple as deploying a piece of technology that will sound the alarm when a breach happens. Intrusion prevention systems, web application firewalls, security information and event monitors, file integrity monitoring software, and other technological detection mechanisms require extensive tuning when they are deployed and ongoing tuning to adjust to changing conditions on the network. Without tuning, these systems will generate mountains of false-positive alerts, essentially “crying wolf” so frequently that legitimate attack alerts will be lost in the noise and ignored as well.
While some technologies, such as intrusion prevention systems and web application firewalls, can automatically stop basic attacks when they are well tuned and properly configured, a sophisticated attacker will eventually find a way around them, and we must have real live humans paying attention to the network in order to stop attacks. Many sophisticated attackers are located overseas and are likely not keeping standard office hours, so this monitoring and response capability must operate 24x7 in order to be effective. Staffing a 24x7 monitoring capability can be difficult and cost-prohibitive for all but the largest of organizations, and this is an area where many companies may benefit from outsourcing the monitoring and initial response roles to a managed services provider.
Most typical networks have security resources concentrated at the perimeter with very little to protect systems inside the network from each other. This puts the attacker who successfully breaches the perimeter in a position where he can “pivot” on the compromised system and use it to attack other, potentially more sensitive, systems on the network without much interference. Unfortunately any host can provide the gateway for an attacker to breach the perimeter whether it is a poorly written web application that allows commands to be run on the underlying server or a user who falls for a phishing email and downloads malware onto their workstation.
Protecting, and building the capability to monitor, an entire network with all of its possible attack points can be cost-prohibitive regardless of the size of the organization. This can be mitigated by compartmentalizing the network into separate segments, for example building a dedicated section of the network for systems that handle credit card data and protecting it from the rest of the internal network with firewalls, intrusion prevention systems, and other security measures, just as it would be protected from the Internet. This would impede an attacker who managed to compromise a less sensitive, less protected system elsewhere on the network by forcing him to go through the internal security perimeter, hopefully attracting attention from the security team as a result. An advantage of this approach, beyond slowing attackers down so that they are more likely to be detected, is that it allows organizations to concentrate their limited security resources on the network segments that contain critically sensitive data rather than expending resources unnecessarily on systems that would not directly impact sensitive data.
Although all of the details haven’t been released yet, these lessons can be applied to the Target breach based on what we do know and suspect of the techniques used there. The attackers are believed to have gained entry into Target’s network by using the login credentials of an HVAC company that provides services to Target in order to access a web page (suspected to be an invoicing system). Although we don’t know how well segmented Target’s network is, a segmented network where critical systems, like point-of-sale terminals, are isolated from other unrelated systems would make it much more difficult for the attacker to move into the point-of-sale systems undetected. The attackers are also believed to have conducted a test run of their malware by installing it on a few point-of-sale terminals before deploying it on a wider scale. The attack seems to have run for a few weeks before it was detected, demonstrating that Target likely did not have the monitoring and response capability necessary to detect that the POS systems had been compromised (such as with file integrity monitoring) or to detect the stolen card data being exfiltrated from the network (such as with data loss prevention technology). It is believed that the breach was detected through fraud analysis on the stolen cards or undercover purchases of stolen cards rather than by direct detection on the network, further illustrating this point.
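For illustration, file integrity monitoring at its core is just hashing watched files and comparing against a trusted baseline; the sketch below shows the idea with example paths, while real FIM products add tamper-resistant baselines, scheduling, and alert routing.

```python
# Bare-bones file integrity monitoring sketch: hash watched files and
# compare against a previously recorded baseline. Paths are examples only;
# production FIM also protects the baseline itself from tampering.
import hashlib
import json
import os

WATCHED = ["/opt/pos/bin/pos.exe", "/opt/pos/config/settings.ini"]  # example paths
BASELINE_FILE = "baseline.json"

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_baseline():
    baseline = {p: sha256(p) for p in WATCHED if os.path.exists(p)}
    with open(BASELINE_FILE, "w") as f:
        json.dump(baseline, f, indent=2)

def check_integrity():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        if not os.path.exists(path):
            print("ALERT: watched file missing: " + path)
        elif sha256(path) != expected:
            print("ALERT: file changed: " + path)

if __name__ == "__main__":
    if not os.path.exists(BASELINE_FILE):
        record_baseline()   # first run: record the trusted state
    else:
        check_integrity()   # later runs: alert on any deviation
```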
Time to modernize thinking, technology in fighting malware
09 05, 14 Filed in: Press Quotes
Beyond Heartbleed: 5 Basic Rules To Reduce Risk
12 05, 14 Filed in: Blog Posts | Bylines
As published in Wall Street and Technology:
Heartbleed made headlines like no other vulnerability in recent memory. This was partly due to the slick name, logo, and web site that explained the vulnerability (a rarity in a field where most bug reports are dry technical dispatches with names like “CVE-2014-0160”) but also due to the pervasiveness of the affected OpenSSL software, its role as a keystone in the Internet’s security architecture, and the potential consequences of a successful exploitation.
When we talk about “the next Heartbleed” we have to consider that new vulnerabilities are discovered every day and that many of them are just as widespread as Heartbleed. To put some bounds on this, let's define a “supercritical” vulnerability that would have a similar impact to Heartbleed as one that meets all of the following 4 criteria (all of which Heartbleed does meet):
- Affects software that is in widespread use as an Internet-facing service that commonly handles sensitive data
- Is present in version(s) of that software representing a sizable percentage of the deployed base
- Can be exploited to reveal sensitive information (directly or indirectly) without logging in with a valid username and password
- Its method of exploitation is widely known
For those who speak CVSS this would roughly translate to AV:N/Au:N/C:C/E:F/RC:C/TD:H/CR:H
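To make the definition actionable, the four criteria above can be encoded as a simple filter over a vulnerability record; the sketch below is a rough illustration with hypothetical field names and an assumed cutoff for "sizable percentage", not an official scoring scheme.

```python
# Sketch: encode the four "supercritical" criteria above as a filter over
# vulnerability records. The record fields are hypothetical; real data would
# come from a vulnerability feed plus your own asset inventory.

def is_supercritical(vuln):
    return (
        vuln.get("internet_facing_service", False)        # widespread, Internet-facing, handles sensitive data
        and vuln.get("deployed_base_share", 0.0) >= 0.25   # sizable share of installs affected (assumed cutoff)
        and vuln.get("pre_auth_data_exposure", False)      # exploitable without valid credentials
        and vuln.get("exploit_widely_known", False)        # method of exploitation is public
    )

if __name__ == "__main__":
    heartbleed = {
        "internet_facing_service": True,
        "deployed_base_share": 0.6,   # rough figure, for illustration only
        "pre_auth_data_exposure": True,
        "exploit_widely_known": True,
    }
    print(is_supercritical(heartbleed))  # True
```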
The justification for this definition is simple: Software that is Internet-facing can be exploited at any time from anywhere, and if the exploited software doesn’t contain any sensitive data then it is unlikely that anyone would care. For example databases may contain sensitive data but should normally be firewalled off from the outside world, requiring an attacker to compromise another internal system before exploiting a database, while vulnerabilities in desktop software require convincing a user to download and open a file; both can be done but they require more work than just scanning the Internet for vulnerable services.
Note that the definition does not include any reference to whether or not a patch has been released, as this is implicitly covered by the 2nd point: it doesn’t matter that a vulnerability is fixed in version “1.0.1g” of a piece of software if 90% of the installed base is still running the vulnerable “1.0.1f” version. Sadly we still see vulnerabilities being exploited that are many years old, and even after all the press that Heartbleed got there are still many tens of thousands of affected servers out there on the Internet. The inverse of this can also work out to our benefit when a vulnerability is only present in newer versions of software but there is a sizable installed base still running older non-vulnerable versions (as we saw with Heartbleed and the old and still widely deployed 0.9.8 and 1.0.0 branches of OpenSSL); this isn’t much of a factor, though, as more vulnerabilities are typically fixed in newer versions than would be avoided by sticking with older versions.
Back to the topic at hand, using this definition narrows things down very quickly simply because there are only 2 types of services that can reasonably meet the first criterion: web servers and email. Over the past decade many of the services that would previously have existed as client/server applications with their own protocols have been migrated to web-based applications, and over the past few years these web-based applications have begun a steady migration to a handful of cloud-based service providers, often running homogeneous environments. We are increasingly putting more eggs in fewer and fewer baskets, which increases the potential for damage if one of those baskets is found to be vulnerable. The only widespread service to buck this web consolidation trend, at least in part, and remain a standalone service is corporate email, which still relies on the SMTP protocol for transport. Personal email accounts are in double jeopardy: the front-ends have very much moved to web-based services like Gmail, YahooMail, and Hotmail, but these services still rely on the same basic underlying SMTP protocol for mail transport as corporate email systems, making them vulnerable to attack from either angle.
To get one thing out of the way, the potential for this sort of supercritical vulnerability is not limited to open source software; it could pop up in commercial software just as easily. Following the Heartbleed vulnerability there was a bit of press about how small the OpenSSL team actually is despite how critical the software is to Internet security. I would venture a guess that the team responsible for SChannel (Microsoft’s SSL/TLS implementation, analogous to OpenSSL) doesn’t look that different from OpenSSL’s team (one full-time person with a few other part-timers to help out as needed). This sort of underlying infrastructure code tends to get written and then put on the proverbial shelf until an issue is discovered or a new function is required. Most companies would rather pay their programmers to build the new features in their flagship products that will attract more customers than to review old code for potential issues. There is a long and well documented track record of commercial software vendors ignoring early vulnerability reports with a response of “that’s just theoretical” only to be subjected to a zero-day exploit later on.
This means our candidates for the next Heartbleed would be among the following common software packages:
- Email software (Sendmail, Postfix, and Exchange)
- Web server software (Apache and IIS)
- The encryption packages that support both of them (OpenSSL and SChannel)
- The TCP/IP stacks of the operating systems they usually run on (Linux, FreeBSD, and Windows)
- The server-side languages and other plugins that are frequently used within web servers (PHP, Java, PERL, Python, Ruby, .Net)
So, as to what such a vulnerability can do: it depends on where in the “stack” of software (from the server application on down to the operating system) the vulnerability falls. If a vulnerability falls anywhere in a web server’s software stack we can assume that the sensitive data in the web application and its backend database can be compromised. From authentication credentials on down to credit card numbers, the possibilities are really only limited by what types of sensitive data are handled by a particular web application.
Anything that compromises email is particularly nasty as email represents the hub of our digital lives: besides all of the sensitive communications traversing a corporate email server that would be disclosed (take a look at the results of the HBGary Federal breach for an example of that) we also have to consider that nearly every 3rd-party service we access utilizes email as part of the password reset functionality. Essentially if an attacker can read your email he can take over nearly every other account you control in very short order.
It’s also worth pointing out that many vulnerabilities fall into the category known as “arbitrary code execution”. This is a fancy way of saying that the attacker can run whatever software he wants on the target system, and it is actually worse than a vulnerability like Heartbleed that only allows the attacker to grab data from a vulnerable system. The software an attacker would usually run in this situation is a type of malware called a “rootkit” that opens up a backdoor, allowing for access later on even if the original vulnerability is closed off. From there the possibilities are endless (keyloggers, eavesdropping on network communications, siphoning data off from applications, launching attacks on other systems within the network, etc.).
Pretty much the worst thing imaginable (much worse than Heartbleed) would be an arbitrary code execution vulnerability in the TCP/IP stack (the chunk of the operating system software that controls all network communications) of a widely used version (or versions) of Windows, Linux, or FreeBSD. This would let an attacker take complete control of any affected system that can be reached from the Internet, likely regardless of what services it is or is not running.
Defending against these sorts of vulnerabilities is a daunting task for the IT administrator who is responsible for securing systems running software that he has no control of while keeping those services available even in the face of potential zero-day vulnerabilities. There are some basic rules that can be used to drastically reduce the risk:
Less software is better: Unfortunately most default software installs come with every possible package and option installed and enabled. Every piece of software contains vulnerabilities, some just haven’t been found yet, and every piece of software and functionality that gets installed and enabled exposes more of these potential vulnerabilities (even security software like OpenSSL, antivirus, and firewalls). Any unnecessary packages that are disabled, removed, or not installed in the first place will decrease the chances that a particular system is affected by the next big vulnerability.
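One way to put this rule into practice is to regularly enumerate what is actually listening for connections and question every entry; a minimal sketch follows, assuming a Linux host with the iproute2 `ss` utility installed.

```python
# Sketch: list listening TCP sockets so unnecessary services can be spotted
# and removed. Assumes a Linux host with the iproute2 "ss" utility installed;
# run with enough privileges to see owning process names.
import subprocess

def listening_services():
    # -t TCP, -l listening sockets only, -n numeric addresses, -p owning process
    out = subprocess.run(["ss", "-tlnp"], capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    print("Review each listener below; disable or remove anything not required:")
    print(listening_services())
```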
Reduce privileges: Administrators have a bad habit of running software under a privileged account such as root, a local administrator, or a domain administrator. Similarly, software developers and database administrators have a bad habit of using accounts with full read/write access to entire databases (or in some cases entire database servers containing multiple unrelated databases) for interaction between a web-based frontend and a database backend. These practices are convenient as they eliminate the need to consider user access controls, but the consequence is that an attacker will gain these same unlimited privileges if he manages to exploit a vulnerability. The better security practice is to create dedicated accounts for each service and application that have the bare minimum access required for the software to function. It may not stop an attacker, but it will limit the amount of immediate damage he can cause and provide the opportunity to stop him.
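As one concrete example of reducing privileges, a network service that must start as root (for example, to bind a low port) can drop to an unprivileged account before handling any untrusted input; the sketch below shows the pattern on a Unix-like system, with the "nobody" account and the port number as assumptions.

```python
# Sketch: bind a privileged port as root, then drop to an unprivileged user
# before handling any untrusted input. Unix-only; the "nobody" account and
# port number are assumptions for illustration.
import os
import pwd
import socket

def drop_privileges(username="nobody"):
    if os.getuid() != 0:
        return  # already unprivileged
    entry = pwd.getpwnam(username)
    os.setgroups([])          # clear supplementary groups
    os.setgid(entry.pw_gid)   # switch group first, while we still can
    os.setuid(entry.pw_uid)   # irreversibly give up root

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 443))  # privileged port, needs root to bind
    sock.listen(5)
    drop_privileges()
    print("listening as uid", os.getuid())
```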
Patch: For those services that do have to stay enabled, make sure to stay on top of patches. Oftentimes vulnerabilities are responsibly disclosed only to software vendors and patches are quietly released before the details of the vulnerability are publicly acknowledged. Administrators who proactively apply security patches will have already protected themselves, while those who don't will be in a race with attackers to apply patches before exploits are developed and used.
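A small amount of automation helps here; for example, the sketch below (assuming the `openssl` command-line binary is on the PATH) flags an install that is still in the Heartbleed-affected 1.0.1 through 1.0.1f range. A real patching program would of course track every installed package, not a single one.

```python
# Sketch: flag an OpenSSL install still in the Heartbleed-affected range
# (1.0.1 through 1.0.1f; fixed in 1.0.1g). Assumes the "openssl" binary is
# on the PATH; a real patch program would track many packages, not one.
import re
import subprocess

VULNERABLE = re.compile(r"OpenSSL 1\.0\.1[a-f]?\b")  # 1.0.1 with no letter, or letters a-f

def openssl_version():
    out = subprocess.run(["openssl", "version"], capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    version = openssl_version()
    if VULNERABLE.search(version):
        print("PATCH NEEDED: %s is in the Heartbleed-affected range" % version)
    else:
        print("OK (for this one issue): %s" % version)
```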
Firewall: The definition of a “supercritical” vulnerability above includes “Internet-facing” for a reason: it is much easier to find a vulnerable system when it can be found by scanning across the open Internet. Services intended for internal use should be restricted to access from internal systems only, and for those internal services that require access from outside, consider a VPN in order to authenticate users before allowing direct access to the system. Firewalls are not an excuse to avoid patching, however, as many attacks are launched across internal networks from systems already compromised by malware.
Defense in Depth: There will always be some services that must be enabled and kept exposed to the Internet and that may be affected by a zero-day vulnerability for which patches are not yet available. We must then rely on our ability to detect and stop an attack before it succeeds. Hopefully the combination of techniques described above (especially reduced privileges) will slow an attacker down enough that he becomes easier to detect. We can also increase the chances of detection by looking for the signs of an attack (such as with antivirus, intrusion detection/prevention systems, file integrity monitoring, behavior monitoring, and log correlation) while making it harder for him to exfiltrate any sensitive data (firewalling outbound connections, data loss prevention technology, proxying). We must also always have a real live human ready to quickly respond to any alerts and intervene to stop an active attack; without that, all of this technology is useless and it’s just a matter of time before an attacker finds a way around it.
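As a tiny illustration of the kind of log correlation mentioned above, the sketch below (using an invented in-memory event format and made-up host names) escalates an IDS alert when the same host then connects to an unfamiliar destination within a short window; real SIEM rules are far richer, but the principle is the same.

```python
# Sketch: simple correlation between IDS alerts and later outbound connections
# from the same host. Event format, time window, and host names are invented
# for illustration; a SIEM would do this over real log feeds.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)
KNOWN_DESTINATIONS = {"update-server.example.com", "proxy.internal"}

ids_alerts = [
    {"host": "web-01", "time": datetime(2014, 5, 12, 2, 15), "signature": "suspicious upload"},
]
outbound = [
    {"host": "web-01", "time": datetime(2014, 5, 12, 2, 32), "dest": "203.0.113.77"},
    {"host": "web-02", "time": datetime(2014, 5, 12, 3, 5), "dest": "proxy.internal"},
]

for alert in ids_alerts:
    for conn in outbound:
        same_host = conn["host"] == alert["host"]
        soon_after = alert["time"] <= conn["time"] <= alert["time"] + WINDOW
        unfamiliar = conn["dest"] not in KNOWN_DESTINATIONS
        if same_host and soon_after and unfamiliar:
            print("ESCALATE: %s alerted (%s) and then connected to unfamiliar destination %s"
                  % (alert["host"], alert["signature"], conn["dest"]))
```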
True Detectives: VARs On The Case As The Need For Incident Response Strategies Gets More Evident Every Day
07 07, 14 Filed in: Press Quotes
What Experts Say is the Single Largest Security Threat to Your Company’s Reputation
08 08, 14 Filed in: Press Quotes
Quoted in Online Reputation Management on the impact of breaches on a company’s reputation.