Blog Posts

How vulnerable is our critical national infrastructure?

As originally published on Help Net Security:

Considered the backbone of the nation’s economy, security, and health, critical infrastructure provides the power, water, transportation, and communications systems that connect us to our friends, our families, and our communities.

Utility, oil and gas, manufacturing and alternative energy organizations are fending off cyber attacks on a daily basis. From activist groups to state-sponsored hackers, our nation’s critical infrastructure is targeted regularly in attempts to disrupt services and cause havoc.

Information technology has evolved significantly in just the past decade, yet most critical infrastructure technology is based on embedded hardware and proprietary protocols that predate the Internet. Years ago, systems were largely isolated, with operations managers onsite rather than connecting in from remote offices or while on the road - there was no need to connect to a corporate network or the Internet, and the security models of many of these systems reflect those simpler times.

In an attempt to streamline business, improve communication in the supply chain, and stay current with technology trends such as Big Data and the Internet of Things, these organizations have been connecting their critical control systems to open and often public networks.

Unfortunately, these networks may not be as secure as believed, exposing companies to an abundance of cyber attacks and vulnerabilities. The once obscure proprietary protocols used by industrial control systems have been dissected and analyzed, with the results spread across the Internet for any interested party to peruse. Researchers (both those looking to help make the Internet more secure and those looking to defeat its security) are actively hunting for vulnerabilities, and dedicated search engines like Shodan allow potential attackers to quickly find systems that are vulnerable to their latest exploit (though Google often works in a pinch as well). Despite the well-publicized attacks of recent years (and the ones never made public), security still isn’t treated as a priority by many of the organizations that form our critical infrastructure.
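
To appreciate how low the bar is for finding targets, consider the sketch below. It uses the official shodan Python library; the API key is a placeholder, and the query assumes port 502, the standard Modbus/TCP port used by many PLCs and other industrial control devices.

    # A minimal sketch of how easily exposed control systems can be found.
    # Requires the "shodan" library (pip install shodan) and an API key;
    # the key below is a placeholder, not a real credential.
    import shodan

    SHODAN_API_KEY = "YOUR_API_KEY"  # placeholder
    api = shodan.Shodan(SHODAN_API_KEY)

    # Port 502 is the standard Modbus/TCP port; a one-line search is all
    # it takes to enumerate Internet-facing candidates.
    results = api.search("port:502")
    print("Internet-facing Modbus devices:", results["total"])
    for match in results["matches"][:10]:
        print(match["ip_str"], "-", match.get("org", "unknown org"))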

Cybercrime is forcing companies of all sizes in all sectors to take notice; the threat of a cyber attack has serious repercussions that reach far beyond the companies’ business to the individuals who rely on the services of these organizations for their day-to-day needs. A pair of research papers by Trend Micro show how common attacks on critical infrastructure systems have become, who is behind them, and the types of damage these attackers are willing to cause, even with no apparent motive.

In the extreme case, Stuxnet and its descendants have shown us the damage a motivated state attacker can cause. Thirty years ago, physical threats were the biggest concern for critical infrastructure; today, a cyber attack that isn’t easily attributable to a specific actor poses the greatest threat. Keeping critical infrastructure functioning reliably is essential.

How can critical infrastructure organizations manage to stay up to date with technology while protecting their company from a security breach?

Cyber security standards and guidelines already exist, and in many cases have been in place for years, yet reported attacks continue to grow and many could have been avoided. Even with growing global awareness of the threat of cyber attacks against critical infrastructure systems, guidelines and frameworks are exactly that - guidelines and suggestions to follow, rather than legal requirements to comply with. In many cases these guidelines provide only a bare minimum, failing to address the additional risks posed by a specific organization’s architectural design choices.

It remains the industry’s responsibility to continuously monitor and control its own systems and IT environments. Additionally, because critical infrastructure systems have become so connected to the broader corporate network, all employees, not just IT employees, need to be educated and trained to do everything possible to reduce the risk of a cyber attack.

Sitting tight and hoping for the best is not an option. The risk of a cyber attack isn’t going away, and critical systems are not becoming less vulnerable to attack. To control the risk, an organization must understand its current risk exposure across all areas of the business and focus on the critical areas.

To mitigate a security breach, reputation damage, and financial loss, a detailed incident response plan is essential. Timely execution of that plan is imperative after a breach, but having a skilled in-house security expert on call 24x7 may not be an option for many companies, as a growing global skills shortage in this industry will likely take years to improve. Many organizations outsource these critical functions, ensuring that their systems are monitored around the clock with security experts on hand to provide crucial support when needed.

It’s clear that critical infrastructures are under scrutiny from both attackers and defenders. Organizations need to understand their cyber security efforts and where improvements can be made, allowing them to identify and fix the weaknesses in their infrastructure. The industry needs to take control of the issue and find ways to reduce the growing number of threats by building systems that bake in security as part of the design, thereby reducing the number of exploitable vulnerabilities. Until that day arrives, organizations need to remain attentive to protect their assets.

In order to reduce high-risk situations, these ten steps will help improve security controls:

1. Understand your risk - conduct an annual risk assessment exercise with an expert who has performed similar technical risk assessments, in order to identify the risks that baseline security and compliance standards don’t cover and to determine what level of security is appropriate for a particular system.

2. Secure configuration - keep software and hardware up to date; persistence always pays off. Work with suppliers to ensure proprietary systems are maintained, and build an asset register with a focus on end-of-life/unsupported systems that will require extra protection.

3. Aim for real-time detection - continuously monitor all log data generated by the organization’s IT systems to spot activity that strays from “normal,” and be prepared to respond immediately to any perceived issue. This will likely include a combination of IPS, DLP, FIM, and SIEM solutions working together to provide deep visibility (a minimal sketch of the underlying baselining idea follows this list).

4. Educate and train your employees - make sure they really understand your policies, procedures, and incident response processes. Make it a priority to teach everyone at least the basics.

5. Check passwords on connected devices - make sure the devices aren’t using weak passwords that are easily hacked. Default passwords for even obscure products are well known and documented on the Internet, and attackers will try these first. Commonly used or otherwise simple passwords aren’t much better (a sketch of a simple default-credential audit also follows this list).

6. Incident response - establish, produce and routinely test incident management plans to ensure that there is an effective response to maintain business continuity in the face of a breach.

7. Secure network - manage the external and internal network perimeters to filter out unauthorized access. It is key to understand what is on your network and what protocols traverse it. This can’t be accomplished if critical systems share a “flat” network with other unrelated systems with unrestricted internal access. When feasible, completely disconnect critical networks from the Internet and other networks to eliminate the possibility of remote attacks.

8. Malware protection - establish anti-malware defenses and continuously scan for malware. While this won’t stop every attack and shouldn’t be relied on, it can provide an early warning of a sloppy attacker.

9. Test security - regular penetration tests should be conducted in order to identify weaknesses and test the effectiveness of other security controls. These should go beyond basic vulnerability scans and include hands-on attempts to exploit vulnerabilities, conducted by testers who are familiar with the techniques necessary to attack industrial control systems.

10. Pay attention to new threats - new vulnerabilities arise regularly, whether a simple exploit discovered in a particular product or an entirely new way of manipulating a common protocol that affects a wide range of products dating back years (as we have seen with a number of SSL vulnerabilities lately). All of the policies, procedures, risk assessments, and security controls should be continually updated to address these threats as they are discovered, rather than waiting until they are exploited, when it is often too late.
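
To make steps 3 and 5 more concrete, here are two minimal sketches in Python. The first is a toy version of the baselining idea behind real-time detection: flag any log source whose activity strays far from its historical norm. Real deployments do this at scale with SIEM tooling; the host names and counts here are invented for illustration.

    # A toy model of baselining "normal" log volume per source and
    # flagging deviations - not a substitute for SIEM tooling.
    from statistics import mean, stdev

    def find_anomalies(history, current, threshold=3.0):
        """Flag sources whose current event count strays far from baseline."""
        anomalies = []
        for source, counts in history.items():
            if len(counts) < 2:
                continue  # not enough history to establish a baseline
            mu, sigma = mean(counts), stdev(counts)
            observed = current.get(source, 0)
            if sigma > 0 and (observed - mu) / sigma > threshold:
                anomalies.append((source, observed, mu))
        return anomalies

    # Hypothetical hourly event counts per log source:
    history = {"fw01": [120, 130, 125, 118], "plc-gw": [4, 5, 3, 6]}
    current = {"fw01": 127, "plc-gw": 250}  # plc-gw is suddenly very chatty
    for source, seen, baseline in find_anomalies(history, current):
        print(f"ALERT: {source} logged {seen} events (baseline ~{baseline:.0f})")

The second sketch illustrates the password check from step 5: audit a device inventory against well-known default credentials. Both lists are illustrative; published default-password lists for real products are far longer.

    # Audit a device inventory against well-known default credentials.
    KNOWN_DEFAULTS = {
        ("admin", "admin"), ("admin", "password"), ("root", "root"),
        ("admin", "1234"), ("user", "user"),
    }

    devices = [  # hypothetical inventory: (hostname, username, password)
        ("hmi-panel-1", "admin", "admin"),
        ("rtu-substation-7", "operator", "c0rrect-h0rse-battery"),
    ]

    for host, user, password in devices:
        if (user, password) in KNOWN_DEFAULTS:
            print(f"WARNING: {host} still uses a default credential pair")
        elif len(password) < 12:
            print(f"NOTICE: {host} has a short password; consider a longer one")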

As critical infrastructure companies become more connected to the Internet, they come under increasing scrutiny from cyber attackers. It is vital for organizations to recognize where they stand in their cyber security efforts and to pinpoint the weaknesses in their infrastructure. It is extremely important for companies to be prepared for cyber threats and attacks, and to be aware of the repercussions, not only for themselves but also for those who rely on them on a daily basis.

Hacking the Hackers: The Legal Risks of Taking Matters Into Private Hands

Following up on an incident that Microsoft caused a while back while trying to shut down some malware, Becca Lipman at Wall Street & Technology has posted a great article on the difficult issues faced by financial institutions trying to protect themselves from relentless hackers hiding in countries where they are beyond the reach of justice.

Chip & Pain, EMV Will Not Solve Payment Card Fraud

As published in Wall Street and Technology:

Switching to EMV cards will lower retail fraud, but it's not enough. Here's the good, the bad, and the ugly.

Home Depot, much like Target before it, has responded to its breach with a press release indicating that it will be rolling out Chip and PIN technology. While this is a positive step, it is also a bit of a red herring: Chip and PIN technology alone would have done little to nothing to prevent these breaches.

Chip and PIN is one piece of a larger standard called EMV. This standard defines how chip cards interoperate with point-of-sale terminals and ATMs. It includes the Chip and PIN functionality that we hear so much about as well as Chip and Sign functionality that seems more likely to get rolled out in the US. EMV is not without its flaws. 

It's all about the money
The card brands are pushing for EMV to be in place by October 2015 with gas pumps and ATMs allowed an extension until October 2017. The mechanism by which this is being accomplished is a liability shift.
In the US today the bank or card brand is typically responsible for most fraud losses. When the deadlines pass, the acquirers will transfer liability for fraud losses down to whoever isn’t using EMV technology. For example, if fraud is committed with an EMV card at a merchant that only supports stripe cards, then the merchant will be liable.

The good
The advantage of an EMV card is that the chip is much harder to clone than a magnetic stripe.

The magnetic stripes are like miniature tape cassettes that can easily be overwritten with stolen data while chips are more like miniature computers that cryptographically validate themselves. The chips are not supposed to give up the secret keys that would be necessary in order to create a clone.
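
The principle can be illustrated with a simplified challenge-response sketch in Python. Real EMV cards use their own cryptogram formats and key hierarchies; the HMAC-SHA256 here is purely illustrative of why a fresh challenge defeats a static copy of the card data.

    # A simplified illustration of why a chip is harder to clone than a
    # stripe: the card proves it holds a secret key via challenge-response
    # without ever revealing the key. Real EMV cryptograms differ in detail.
    import hmac, hashlib, os

    class ChipCard:
        def __init__(self, issuer_key):
            self._key = issuer_key  # never leaves the "card"

        def respond(self, challenge):
            return hmac.new(self._key, challenge, hashlib.sha256).digest()

    # The issuer provisions the card and keeps a copy of the key.
    issuer_key = os.urandom(32)
    card = ChipCard(issuer_key)

    # The terminal sends a fresh random challenge; a replayed old
    # response will not match.
    challenge = os.urandom(16)
    response = card.respond(challenge)
    expected = hmac.new(issuer_key, challenge, hashlib.sha256).digest()
    print("Card authentic:", hmac.compare_digest(response, expected))

    # A magstripe clone only carries static data - it cannot answer a
    # fresh challenge because it never had the key.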

Chip and PIN cards also make it more difficult to steal and use a physical card. The thief would need to know the PIN to use the stolen card.

The bad
So far banks in the US are rolling out Chip and Sign cards due to fears about consumer acceptance of PINs. With Chip and Sign it remains possible for a thief to steal a physical card and make a purchase at any store by drawing a squiggle on a piece of paper.

There are deeper problems with the transition, though. Not every merchant or bank will support EMV right away, so both EMV cards and terminals will continue to support magnetic stripes. Stripe data stolen from a non-EMV merchant can still be used for fraud, and unless terminals enforce the use of cards in EMV mode, the door remains open to stolen card data being used in magnetic stripe mode regardless of its source.

The ugly
The chip helps verify that the card is legitimate but most EMV terminals read the unencrypted card details off of the chip in nearly the same way that a magnetic stripe terminal reads them now. A compromised point-of-sale terminal could still skim off card details that could be used for fraud elsewhere.

Security researchers have also identified a few different techniques for capturing PINs and an attack that allows an incorrect PIN to be used successfully. EMV terminals are also not immune from people tampering with the terminals themselves, including in the supply chain, and this has already resulted in some real-world breaches.

E-commerce still relies on punching a card number into a website. EMV offers no protection here: card data could be stolen from compromised e-commerce servers, and stolen card data could be used to make online purchases.

What, if not EMV?
EMV does lower retail fraud where it is used today, because it’s easier to steal cards and commit fraud in another geography where EMV is not in use. As other sources of card data dry up, we can expect that the flaws in EMV we already know about will be exploited more widely and that new exploits will be found. Before too long we will end up right back where we are today.

The real solution to the retail breaches we’ve been seeing is encryption. By the time the card data gets to the point-of-sale terminal it’s too late. Encryption should happen as close to the card as possible - in the terminal hardware, as the card is read. In this model the only realistic attack a merchant would have to be concerned with is tampering with the terminal hardware itself.

PCI has published the Point-to-Point Encryption (P2PE) standard to standardize this approach, but most merchants are focusing on the migration to EMV instead. I’m afraid that soon after the shift to EMV is complete we will find ourselves making another forced migration to P2PE. Either that, or consumers and merchants will begin their own migration to alternative payment technologies.

Driving Information Security, From Silicon Valley to Detroit

As published in Wall Street and Technology:

For better or worse, computer software vendors are practically devoid of any liability for vulnerabilities in the software they sell (although there is certainly a heated discussion on this topic). As far as vendors are concerned, software is “licensed” rather than sold, and users who accept those licenses are agreeing to waive certain rights, including the right to collect damages resulting from failures in the software.

To pull one particular example from the license for Microsoft SQL Server Enterprise 2012, a widely used piece of database software that underpins a significant number of enterprise applications, each handling millions of dollars’ worth of transactions:

YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO THE AMOUNT YOU PAID FOR THE SOFTWARE... YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES. 

When a flaw is discovered, including security flaws that are actively being exploited to breach systems, a vendor will typically issue a patch (sometimes many months later, and hopefully without causing more problems than it fixes), and that is the end of the issue: no lawsuits, no refunds, and no damages.

This liability-free model used by software vendors stands in stark contrast to almost any other product that is bought and sold. Product liability laws hold manufacturers and sellers responsible for design or manufacturing defects in their products. Rather than releasing a fix and calling it a day, these companies will find themselves on the hook financially for the consequences of their failures.

Software infiltrates everything

Government overseers like the Consumer Product Safety Commission, the National Highway Traffic Safety Administration, and the Food and Drug Administration track complaints and have the ability to force recalls or issue fines. For a recent example of these consequences we can look to General Motors’ ignition recall troubles, which have so far resulted in $2.5 billion worth of recalls, fines, and compensation funds.

Most consumer products also don’t receive the frequent software updates that we are used to applying to our computers; whatever software version comes in a consumer product tends to stay in it for life. In the automotive world this has already led to some comically outdated in-dash navigation, information, and entertainment systems (especially when compared to today's rapidly evolving smartphones and tablets) but will also likely lead to some horribly vulnerable unpatched software.

These two worlds, both operating under very different rules, are colliding. Cutting-edge computers and software are increasingly finding their way into the types of products we buy every day, and nowhere is this more apparent than in the automotive world. The days of carbureted vehicles that could be tuned with a timing light and a screwdriver ended in the 1990s, replaced with fuel injection and electronic ignition systems that are controlled by computers actively adjusting engine parameters as we drive, based on the readings from a network of sensors scattered throughout the vehicle. These networks have grown to include more than just the engine sensors.

In-car networking standards, such as the CAN bus standard, enable a wide array of devices within a vehicle to communicate with each other, allowing huge wiring harnesses containing hundreds of bundled wires, fuses, and switches to be replaced with commands and updates traveling over a single wire. On modern cars the brakes may not be controlled by a hydraulic line connected to the brake pedal; the throttle may not be controlled by a cable connected to the gas pedal; and the steering may not be controlled by a shaft connected to the steering wheel. Instead, the brake pedal, gas pedal, and steering wheel could all just be electronic sensors that send computerized commands over the CAN bus network to electric motors elsewhere in the vehicle that carry out those commands. Toyota’s electronic throttle control system has already made some headlines as part of a series of unintended acceleration lawsuits that resulted in 16 deaths, 243 injuries, a motorist released from jail, and a $1.2 billion fine.
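
To give a sense of how simple this traffic is, here is a hedged sketch using the python-can library over Linux SocketCAN. The arbitration ID and payload are invented; real IDs are manufacturer-specific and largely undocumented. The key point is what the code does not contain: any authentication of who is sending.

    # A sketch of CAN bus messaging using the python-can library over
    # Linux SocketCAN. The ID and payload below are made up for
    # illustration; real meanings vary by manufacturer.
    import can

    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    # Every node on the bus sees every frame; classic CAN has no
    # built-in sender authentication.
    msg = can.Message(arbitration_id=0x123,        # hypothetical device ID
                      data=[0x01, 0xFF, 0x00, 0x00],
                      is_extended_id=False)
    bus.send(msg)

    # Listening is just as easy - any connected controller can sniff.
    frame = bus.recv(timeout=1.0)
    if frame is not None:
        print(f"ID 0x{frame.arbitration_id:X}: {frame.data.hex()}")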

This issue goes much deeper than the types of software mistakes that can cause a car to malfunction on its own. As we’ve seen with much of the software connected to the Internet, including some other systems that can have real-world (and sometimes very messy) consequences, it is the malicious hackers that can cause the most problems. Security researchers have already been looking into these sorts of possibilities and have separately demonstrated the ability to gain access to in-car networks from a remote location and affect a vehicle’s braking, steering, and acceleration (among other things) once they gain access to the in-car network.

Other attacks like location tracking and eavesdropping on a vehicle’s occupants via hands-free communication microphones are also possible, but they pale in comparison to the potentially fatal consequences of interference with the vehicle controls. Presentations at the annual Black Hat and DEF CON security conferences this month have also covered topics related to automotive network and computer security, while a group in China is offering a prize of $10,000 to anyone who can gain remote access to a Tesla’s on-board operating system.

Although some of the media reports on this topic are being dismissed within the information security community as “stunt hacking” (sensationalist stories based on hacks conducted in unrealistic conditions) and manufacturers are quick to state that their car systems have safety checks built in, it is clear that the building blocks for a real-world attack are being assembled. The firmware manipulation techniques demonstrated at DEF CON earlier this month could be used to override or eliminate the safety checks built in by the manufacturers, and it is only a matter of time before the techniques that are being used to remotely access cars are combined with the techniques to manipulate the controls.

Many ways to attack

For an attacker, getting access to a car’s network is not as hard as it may initially seem. The most obvious attack point is the On-Board Diagnostics connector, usually located in a discreet spot under a vehicle’s steering wheel, where a small and cheap microcontroller could be connected. More interesting attacks could be launched via malware contained on CDs, DVDs, or USB devices loaded into the vehicle’s infotainment system. Moving into the wireless realm, many cars come equipped with Bluetooth or WiFi connectivity for smartphones and other devices within the vehicle.

All of these attack vectors would require the attacker to be in or near the target vehicle, but services like GM’s OnStar, BMW’s Assist, and others utilize mobile cellular connections to connect vehicles to the outside world. New smartphone apps that allow vehicle owners to interface with their cars remotely can open up these interfaces essentially to anyone on the Internet. It’s not too far-fetched to imagine that a few years from now bored Chinese hackers could spend their downtime crashing cars instead of trying to cause trouble at water treatment plants.

Motor vehicles have been built with mechanical and hydraulic linkages for over a century, and the basic safety principles for those types of systems are well understood. Designing reliable software for complex vehicles is a fairly new discipline that is only understood by a few companies (and even they make mistakes). Malfunctions or outside interference with operating vehicles can easily have fatal consequences, and the increasing use of networked control systems connected to the outside world increases the likelihood of accidental or malicious incidents.

The developers of the electronic systems in our vehicles would do well to heed the saying “with great power comes great responsibility.” As we’ve seen with both Toyota and GM’s recent troubles, safety issues can bring heavy financial consequences for manufacturers. Congress is starting to pay attention to the issue of car hacking as well, and it will likely only take one high-profile incident to provoke regulatory action.

Tesla Motors has already shaken up the industry by bringing its Silicon Valley approach to the automobile business, and continues with this approach by actively soliciting information from the public on security vulnerabilities in its vehicles and publicly posting a “Hall of Fame” for security researchers who have assisted them. Perhaps this is part of the future: manufacturers working more closely with their customers to find and address issues.

As Google experiments with some of the first realistic self-driving cars, it isn’t too far-fetched to imagine them following the same path as Tesla when it comes to working with security researchers, especially in light of Google’s existing bug bounty programs. In any case, one habit of Silicon Valley that we can be almost assured won’t carry over to the automotive world is the practice of disclaiming liability for damages from the improper operation of software; the Toyota case has shown us that those days are already over. Who knows? Before long, it may be Silicon Valley looking to Detroit for advice on how to handle product liability concerns.

As a footnote, many of the issues raised here are applicable to other industries outside the automotive sector as well (software vulnerabilities in medical devices and industrial control systems have been getting quite a bit of attention as of late). But it’s hard to imagine any other industry that is as integral to the national (and global) economy, whose products are used more frequently by such a large proportion of the population, and the correct operation of which carries life-and-death consequences.

Phishing Policies

Just got this in my spam box - quite possibly my favorite phishing email of all time. It looks like a pretty good knock-off of an E-ZPass email considering that it was all done without images (although they could have done a little better on the text). The best part is the link to the “Phishing Policy” at the bottom of the email. I don’t dare click on it to see where the link actually goes, but it’s nice to know the bad guys have a sense of humor too.

EZPhishing

Vigilante Justice on the Digital Frontier

As published in Wall Street and Technology:

This is a story about Microsoft and a company called Vitalwerks, but first let’s go through a fictional scenario.

Let’s say you own a number of office buildings. Unbeknownst to you, some of your tenants are engaged in criminal activity. In particular, a crime ring operating out of some of these offices steals cars and uses them to rob banks. One day, you start getting angry calls from your tenants (the ones involved in legitimate businesses), because they are all locked out of their offices. You come to discover that General Motors, upset that its products are being stolen and used in bank robberies, has managed to identify the crime ring. However, rather than contacting you (the landlord) so that you can evict the offenders, or involving law enforcement to apprehend the criminals, the company spent months applying for a court order allowing it to seize the crime ring’s offices on its own.

Unfortunately for you and your legitimate tenants, instead of locking down the individual offices used by the criminals, General Motors seized and locked down your entire office buildings.

This scenario seems absurd on so many levels. Why allow the criminals to operate with impunity for months instead of taking immediate action? Why not contact the landlord or law enforcement for help, instead of resorting to a secret seizure order? Why seize entire buildings, rather than the individual offices used by the suspects? Why is a third party like General Motors even involved to this degree? How could a court ever agree that any of this was a good idea and issue an order allowing it? Despite the court order, the whole thing reeks of vigilante justice.

As absurd as this all seems, it actually happened on June 30, only it was all online. The criminals were distributing malware. The landlord was a hosting company called Vitalwerks. The targets of the seizure were Vitalwerks' Internet domain names, and the company doing the seizing was Microsoft.

Vitalwerks’ domains were handed over to Microsoft as a result of a court order. This transfer is done by the domain registrars who actually control the Internet’s domain name resolution infrastructure, and it does not require any notification to, or action on the part of, the target. In theory, Microsoft’s goal was to use its control of the domains to “sinkhole” the subdomains used by the malware (redirecting them to a system that doesn’t distribute malware). However, because of what Microsoft is calling a small technical error, it actually interrupted service for millions of Vitalwerks’ legitimate customers. It took days before service was completely restored.
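
For the curious, checking where a suspect name currently points is straightforward. Here is a minimal sketch using the dnspython library; the domain is a placeholder, not one of the names involved in this case.

    # Sinkholing works by repointing a domain's DNS records at a server
    # the defender controls; resolving the name shows where it goes now.
    # Uses the dnspython library; the domain below is a placeholder.
    import dns.resolver

    def resolve_a_records(name):
        try:
            answer = dns.resolver.resolve(name, "A")
            return [rr.address for rr in answer]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    for host in ["malware-subdomain.example.net"]:  # placeholder name
        addresses = resolve_a_records(host)
        print(host, "->", addresses or "does not resolve")
    # If the addresses belong to a known sinkhole operator rather than
    # the original host, the domain has been seized or redirected.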

The seizure does seem to have affected criminal operations. Kaspersky reports that 25% of the APT groups it was tracking have been affected. This raises the question of whether the end justifies the means. In this case, the means was a tricky technical maneuver that went awry and affected millions of hosts for days in an industry where providers strive to have as many nines in their uptime as possible.

This isn't the only instance of this phenomenon, either. The tactic of hijacking domains to interrupt malware traffic has been used for a few years and is quickly becoming a favorite for Microsoft's Digital Crimes Unit. Of course, given some of the tactics used by law enforcement agencies (such as taking hundreds of unrelated servers from co-location facilities in raids), the seizure of a few domains might actually be the lesser of two evils.

Unlike some of the "bulletproof hosting" providers operating out of Eastern Europe, where a forced takeover may be the only way to block malicious traffic, Vitalwerks is based in the US, where the law doesn't look too kindly on organizations that intentionally harbor hackers. In this case, Vitalwerks says it was unaware of the malware that was utilizing its service, and that it would have immediately blocked the offending accounts if it had known about them. The company says it has actually worked with Microsoft to block malicious accounts in the past, so it isn't sure why anyone would go through the time and effort to get a court order (allowing the malware to operate the whole time) when it could have acted immediately.

On the other side of the argument, the type of hosting service provided by Vitalwerks is easily abused (though these services do have legitimate purposes). Microsoft's Digital Crimes Unit contends that Vitalwerks was not doing enough on its own to prevent abuse.

It seems that we are dealing with the age-old consequences of frontier justice moved from the Wild West to the digital realm. Private organizations are taking law enforcement into their own hands, because the government hasn't been able to keep up. Innocent bystanders are being hurt in the process. Companies that rely on their Internet presence to do business may want to be careful about the providers they choose. They risk getting caught in the crossfire if criminals happen to be in the vicinity.

Bank Fraud: It’s Not Personal, Just Business

As published in Wall Street and Technology:

Less publicized (but nonetheless costly) incidents of fraud, questions of liability, and mixed success in court complicate the allocation of security resources.

High-profile breaches of consumer data have been in the news lately, with Neiman Marcus, Michael's, and Target each losing hundreds of thousands to millions of payment card details. As of last week it looks as if we will be able to add P.F. Chang’s to that list as well.

Much of the media coverage of these events has revolved around the impact on consumers and what consumers should do to protect themselves, but the reality of these breaches is that the consumers are the least likely to be affected: Federal law limits liability for fraudulent credit or debit card purchases to $50 in most cases (with the condition that the loss or theft of the card is reported promptly in the case of debit cards). The real impact of these breaches has been on the companies that have been compromised. Target reported $61 million in total breach expenses during the quarter of the breach, and this number is sure to grow as time goes on.

There is another type of financial fraud that is hitting companies as well: wire transfer fraud. This type of fraud costs approximately $1 billion per year but generally doesn’t get the media coverage we have seen with recent personal information breaches, perhaps because it doesn’t involve millions of individuals’ payment card numbers or because breach notifications usually aren’t required if a consumer’s personal information isn’t lost.

The ploy is fairly simple: an attacker gains access to a commercial bank account, wires as much money as possible to another bank account, and withdraws the stolen money before the unauthorized transfer is noticed. Often the recipient bank accounts and withdrawals are handled by unwitting “mules” who answer the “Work From Home!” ads that seem to be plastered all over the Internet and on telephone poles across the country. The mules believe they are working for a legitimate company handling office finances when in reality they are withdrawing the stolen money and forwarding it to the overseas (usually somewhere in Eastern Europe) masterminds behind the scheme.

Unlike personal consumer bank accounts, which fall under FDIC regulations and have the same federal liability limits as debit cards ($50 if the bank is notified within 2 days and $500 if the bank is notified within 60 days), commercial bank accounts carry essentially unlimited liability. An entire account can be cleaned out in a matter of hours. In 2009 Experi-Metal Inc., a Michigan-based company, had $5.2 million wired out of its account at Comerica in a single day. The bank was able to recover most of the money because the transactions had been detected by fraud-alerting algorithms, but Experi-Metal was still left short by $561,000.

Experi-Metal’s story is fairly typical; most victims are left with losses in excess of $100,000. This seems like a pittance compared to the Target losses, but it could be a devastating blow for a small or midsized business with a much smaller revenue stream than the $21.5 billion Target reported during the same quarter as the recent breach. These attacks are happening regularly, and they aren’t just targeting businesses: public schools, libraries, universities, and non-profits have all been victimized in this manner.

Most banks accept no liability for the missing money, because the breaches are occurring on the customers’ computer systems, not the bank’s. These attacks can range from a simple phishing email purporting to be from the bank that tricks an unwitting user into directly revealing his or her banking passwords, to complex botnets made up of malware-infected computers around the world waiting to capture these credentials.

Law enforcement does try to break up these fraud networks when they can, but it can take years. With many of the perpetrators targeting US businesses but operating out of foreign countries, it can be difficult for US law enforcement to find the masterminds behind the operation and get the quick cooperation they would need to effect any meaningful arrests. Businesses certainly shouldn’t hold out any hope that these modern-day bank robbers will be caught and their money returned.

Some businesses have tried to fight back against the banks in court with mixed success. Patco Construction Co. of Maine lost $588,000 in 2009 and, after repeatedly losing in lower courts, was able to win a judgment from the 1st Circuit Court of Appeals in July 2012 forcing the bank to cover its losses. On the other hand, Choice Escrow and Land Title LLC of Missouri also lost $440,000 in 2009, and on June 11, 2014, the 8th Circuit Court of Appeals ruled that not only was the bank not responsible for the losses, but that the bank could pursue Choice Escrow to pay its legal defense costs. Given the potential losses from a breach and the expensive, uncertain, and lengthy nature of attempting to recover funds from a bank, it is clear that businesses need to focus on protecting themselves from fraudulent transfers.

Malware and botnets are an enormous threat on the Internet today, and many of them are designed to steal financial details in order to facilitate wire transfer fraud. The ZeuS botnet alone (the same piece of malware that caused the Patco breach described above) is estimated to have stolen $70 million over its lifetime. NTT Com Security’s Global Threat Intelligence Report shows that botnets were responsible for the largest proportion of attacks happening on the Internet in 2013 with 34% of the total. Disturbingly, the same report also shows that 54% to 71% of malware is not detected by antivirus software, which highlights an underlying security issue: Installing antivirus and tossing a firewall on the network is not enough to prevent these types of attacks.

Real network security requires building the capability to monitor a network and respond to attacks. We saw this with the Target breach where, despite spending $1.6 million on FireEye network monitoring software, Target managed to ignore the alerts it generated based on the malware attacking its network. We saw this again with the Neiman Marcus breach, where 60,000 alerts were ignored over a three-and-a-half-month period. If large companies with multimillion-dollar security budgets can’t protect themselves from malware, then the prospects would seem exceedingly bleak for the small and midsized companies that are being victimized by wire transfer fraud.

In spite of all this, there are low-cost and remarkably simple steps that can significantly reduce the chances of a malware attack compromising a bank account. It can be as simple as isolating the computers used to access bank accounts. Most malware attacks rely on the fact that a single workstation is often used for multiple purposes: browsing the web opens the workstation to drive-by download attacks; reading email opens it to malware contained within attachments; and file sharing (whether via a USB memory stick, a corporate shared network drive, or a peer-to-peer network) opens it to direct cross-contamination from other infected systems.

On the other hand, if a few designated workstations, and these workstations alone, are used solely for the purpose of processing bank transfers to the exclusion of web browsing, email, and all of the other activities that could bring malware onto the system, then the risks of infection would be drastically reduced -- even more so if these workstations could be firewalled off from the rest of the network or given their own dedicated Internet connections. The cost of a cable modem and a small firewall would almost certainly be a tiny fraction of the potential cost of a single fraudulent transfer.

Phishing attacks serve to illustrate this point further: There is no technical solution that can effectively stop a user who has been duped from sending out passwords; we must instead rely on training and awareness to make sure that individuals who hold the digital keys to a company’s bank accounts are aware of the threats they are facing and how they operate. If more people have the passwords to initiate bank transfers, then there are more people who could potentially leak that information. Keeping the key holders to a minimum allows companies to focus their training and awareness efforts on those few key individuals who matter.

We must also not forget the banks themselves. Many offer enhanced security measures for wire transfers that businesses just aren’t using. In the case of Choice Escrow, mentioned above, the bank offered a system where two passwords would be required, one to approve a wire transfer and another to release the transfer. In this case Choice Escrow chose not to use those dual controls. We have no way to know if using dual controls would have made a difference in the breach or the court case, but it is certainly telling that an easy-to-use security feature was not being employed. There are likely many companies that are not leveraging all the security tools the banks are providing for them, simply for the sake of convenience.
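
The dual-control concept is simple enough to sketch in a few lines of Python. This is an illustration of the principle only, not any bank’s actual interface.

    # Dual control sketched in code: a transfer must be approved and
    # released by two *different* people before it executes.
    class WireTransfer:
        def __init__(self, amount, destination):
            self.amount = amount
            self.destination = destination
            self.approved_by = None
            self.released = False

        def approve(self, user):
            self.approved_by = user

        def release(self, user):
            if self.approved_by is None:
                raise PermissionError("transfer has not been approved")
            if user == self.approved_by:
                raise PermissionError("approver cannot also release")
            self.released = True

    transfer = WireTransfer(250_000.00, "account-at-other-bank")
    transfer.approve("alice")
    transfer.release("bob")      # OK: a second, distinct credential
    print("Released:", transfer.released)
    # transfer.release("alice") would raise - one compromised credential
    # is no longer enough to move money.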

The ultimate solution may go beyond technology as well. The ability of hackers to launch fraudulent wire transfers seems to fly under the radar of most businesses, as does the banks’ refusal to accept liability. At least one bank, JPMorgan Chase & Co, does offer insurance on commercial accounts. Perhaps as more businesses become aware of the underlying risks in commercial bank accounts they will move to banks that offer more robust protections and instigate a change in the banking industry. Or perhaps we are just waiting for our “Target” moment, when a major publicly traded corporation finds tens of millions of dollars missing from its bank account and makes the front-page news.

PCI-DSS 3.0 Helps Merchants Defend Against Emerging Threats

Protecting sensitive personal data continues to be a priority for merchants and businesses that operate in the payment card industry. With the release of PCI-DSS 3.0, many organizations that are already PCI compliant or are working toward becoming PCI compliant are wondering what these changes will mean for their organization.

Let’s take a look at what has changed and the impact this will have on how organizations approach PCI compliance.

Merchants and businesses should find that PCI-DSS 3.0 is easier and more intuitive to work with than earlier versions. The main impact of the changes includes:
  • New requirements for periodic inspection of PIN Entry Devices (PEDs) will have a major impact on retail merchants but will limit the likelihood and impact of skimming and Chip-and-PIN compromises.
  • Greater clarity for organizations and any service provider partners on their respective responsibilities to avoid compliance gaps between them.
  • While recognizing the importance of network segmentation for scope reduction, there are now clearer requirements for tests to ensure the effectiveness of any segmentation controls.

What new requirements are included in PCI-DSS 3.0?
With version 3.0, the PCI Security Standards Council enhanced or clarified existing PCI-DSS requirements. However, a number of new compliance requirements were also added, including:

General: A new PCI-DSS ROC Reporting Template must be used as the template for creating the Report on Compliance.

General: More details have been added to the testing procedures to clarify the level of validation expected for each requirement. This reduces uncertainty over what is required to confirm compliance with a requirement and makes determining compliance much more straightforward and consistent.

Req. 5.1.2: An organization will need to be aware of evolving malware threats to its systems and act if malware does become a significant threat, rather than relying on the previous assumption that malware protection is only required on Windows systems.

Req. 8.2.3: This change gives greater flexibility to meet the requirement by allowing a control whose security is equivalent to a password of at least 7 characters composed of numeric and alphabetic characters. Guidance recommends password entropy as a means of measuring this.
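
As an aside, the entropy yardstick is easy to make concrete. The sketch below uses the standard length-times-log2-of-charset approximation; the passphrase comparison is an illustrative assumption, not PCI guidance.

    # Entropy as a yardstick for "security equivalent to a 7-character
    # alphanumeric password": roughly length * log2(character set size).
    # Simplified model - it assumes randomly chosen characters.
    import math

    def entropy_bits(length, charset_size):
        return length * math.log2(charset_size)

    # 7 characters drawn from a-z, A-Z, 0-9 (62 symbols):
    baseline = entropy_bits(7, 62)
    print(f"PCI baseline: ~{baseline:.1f} bits")        # ~41.7 bits

    # An alternative control (say, a 6-word passphrase drawn from a
    # 2048-word list) can be measured against the same yardstick:
    passphrase = entropy_bits(6, 2048)
    print(f"6-word passphrase: ~{passphrase:.1f} bits")  # 66 bits - stronger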

Req. 8.6: Where other authentication mechanisms are used (for example, physical or logical security tokens, smart cards, certificates, etc.) these must be linked to an individual account and ensure only the intended user can gain access.

Req. 9.3: Control physical access to sensitive areas for onsite personnel, including a process to authorize access, and revoke access immediately upon termination.

Req. 9.9: Brick-and-mortar retailers will need to catalogue POS terminals and regularly check them to detect any theft or tampering (e.g., skimming). At the European PCI Community Meeting, it was clarified that this applies only to the card interaction points (swipe or dip, etc.).

Req. 11.5.1: New requirement to confirm that alerts from the change detection mechanism are investigated. This update makes the requirement to investigate alerts more explicit.

Req. 12.8.2: Many organizations will have contracts in place which pre-date their PCI-DSS compliance efforts, but which did place a requirement on the Service Provider to maintain the security of CHD either explicitly or implicitly. These agreements must now explicitly address compliance with PCI-DSS requirements and so may require amendments to existing contractual agreements.

Req. 12.9: This is the mirror of the changes to Requirement 12.8.2 – the Service Provider has a matching requirement to confirm that it will maintain applicable PCI-DSS requirements, mirroring the client’s obligation to obtain that commitment from its providers.

What is the timing for these changes?
PCI-DSS 3.0 went into effect Jan. 1, 2014, but businesses are given a year to implement the updated standard. This means that during 2014 merchants and service providers can choose whether to validate compliance under version 2.0 or 3.0 of PCI-DSS, although they may not mix requirements from 2.0 and 3.0 together in a single assessment. Any validation conducted in 2015 must be conducted under version 3.0. Service providers also have until July 1, 2015 to meet specific requirements.

Want more information?
Watch my walkthrough of these changes in a comprehensive webinar: The Changing PCI Landscape: What does it mean for your organization? Additionally, download the white paper “PCI v3.0 Impact Analysis” for specific rule changes.

Beyond Heartbleed: 5 Basic Rules To Reduce Risk

As published in Wall Street and Technology:

Heartbleed made headlines like no other vulnerability in recent memory. This was partly due to the slick name, logo, and web site that explained the vulnerability (a rarity in a field where most bug reports are dry technical dispatches with names like “CVE-2014-0160”) but also due to the pervasiveness of the affected OpenSSL software, its role as a keystone in the Internet’s security architecture, and the potential consequences of a successful exploitation.

When we talk about “the next Heartbleed” we have to consider that new vulnerabilities are discovered every day and that many of them are just as widespread as Heartbleed was. To put some bounds on this, let’s define a “supercritical” vulnerability - one with an impact similar to Heartbleed’s - as one that meets all four of the following criteria (all of which Heartbleed does meet):
  • Affects software that is in widespread use as an Internet-facing service that commonly handles sensitive data
  • Is present in version(s) of that software representing a sizable percentage of the deployed base
  • Can be exploited to reveal sensitive information (directly or indirectly) without logging in with a valid username and password
  • Its method of exploitation is widely known

For those who speak CVSS, this would roughly translate to AV:N/Au:N/C:C/E:F/RC:C/TD:H/CR:H

The justification for this definition is simple: software that is Internet-facing can be exploited at any time from anywhere, and if the exploited software doesn’t contain any sensitive data then it is unlikely that anyone would care. For example, databases may contain sensitive data but should normally be firewalled off from the outside world, requiring an attacker to compromise another internal system before exploiting the database, while vulnerabilities in desktop software require convincing a user to download and open a file; both can be done, but they require more work than just scanning the Internet for vulnerable services.

Note that the definition does not include any reference to whether or not a patch has been released, as this is implicitly covered by the second point: it doesn’t matter that a vulnerability is fixed in version “1.0.1g” of a piece of software if 90% of the installed base is still running the vulnerable “1.0.1f” version. Sadly, we still see vulnerabilities being exploited that are many years old, and even after all the press that Heartbleed got there are still many tens of thousands of affected servers out there on the Internet. The inverse can also work to our benefit, when a vulnerability is only present in newer versions of software but a sizable installed base is still running older, non-vulnerable versions (as we saw with Heartbleed and the old but still widely deployed 0.9.8 and 1.0.0 branches of OpenSSL). This isn’t much of a factor, though, as more vulnerabilities are typically fixed in newer versions than would be avoided by using older versions of software.

Back to the topic at hand: using this definition narrows things down very quickly, simply because there are only two types of services that can reasonably meet the first criterion: web servers and email. Over the past decade many of the services that would previously have existed as client/server applications with their own protocols have been migrated to web-based applications, and over the past few years these web-based applications have begun a steady migration to a handful of cloud-based service providers, often running homogeneous environments. We are increasingly putting more eggs in fewer and fewer baskets, which increases the potential for damage if one of those baskets is found to be vulnerable. The only widespread service to buck this web consolidation trend, at least in part, and remain a standalone service is corporate email, which still relies on the SMTP protocol for transport. Personal email accounts are in double jeopardy: the front ends have very much moved to web-based services like Gmail, YahooMail, and Hotmail, but these services still rely on the same basic underlying SMTP protocol for mail transport as corporate email systems, making them vulnerable to attack from either angle.

To get one thing out of the way: the potential for this sort of supercritical vulnerability is not limited to open source software; it could pop up in commercial software just as easily. Following the Heartbleed vulnerability there was a bit of press about how small the OpenSSL team actually is despite how critical the software is to Internet security. I would venture a guess that the team responsible for SChannel (Microsoft’s SSL/TLS implementation, analogous to OpenSSL) doesn’t look that different from OpenSSL’s team (one full-time person with a few part-timers to help out as needed). This sort of underlying infrastructure code tends to get written and then put on the proverbial shelf until an issue is discovered or a new function is required. Most companies would rather pay their programmers to build the new features in their flagship products that will attract more customers than to review old code for potential issues. There is a long and well-documented track record of commercial software vendors ignoring early vulnerability reports with a response of “that’s just theoretical,” only to be subjected to a zero-day exploit later on.

This means our candidates for the next Heartbleed would be among the following common software packages:
  • Email software (Sendmail, Postfix, and Exchange)
  • Web server software (Apache and IIS)
  • The encryption packages that support both of them (OpenSSL and SChannel)
  • The TCP/IP stacks of the operating systems they usually run on (Linux, FreeBSD, and Windows)
  • The server-side languages and other plugins that are frequently used within web servers (PHP, Java, Perl, Python, Ruby, .Net)

So, as to what such a vulnerability could do: it depends on where in the “stack” of software (from the server application down to the operating system) the vulnerability falls. If a vulnerability falls anywhere in a web server’s software stack, we can assume that the sensitive data in the web application and its backend database can be compromised. From authentication credentials on down to credit card numbers, the possibilities are really only limited by the types of sensitive data handled by a particular web application.

Anything that compromises email is particularly nasty as email represents the hub of our digital lives: besides all of the sensitive communications traversing a corporate email server that would be disclosed (take a look at the results of the HBGary Federal breach for an example of that) we also have to consider that nearly every 3rd-party service we access utilizes email as part of the password reset functionality. Essentially if an attacker can read your email he can take over nearly every other account you control in very short order.

It’s also worth pointing out that many vulnerabilities fall into the category known as “arbitrary code execution.” This is a fancy way of saying that the attacker can run whatever software he wants on the target system, and it is actually worse than a vulnerability like Heartbleed that only allows the attacker to grab data from a vulnerable system. The software an attacker would usually run in this situation is a type of malware called a “rootkit” that opens up a backdoor, allowing for access later on even if the original vulnerability is closed off. From there the possibilities are endless (keyloggers, eavesdropping on network communications, siphoning data off from applications, launching attacks on other systems within the network, etc.).

Pretty much the worst thing imaginable (much worse than Heartbleed) would be an arbitrary code execution vulnerability in the TCP/IP stack (the chunk of the operating system software that controls all network communications) of a widely used version (or versions) of Windows, Linux, or FreeBSD. This would let an attacker take complete control of any affected system that can be reached from the Internet, likely regardless of what services it is or is not running.

Defending against these sorts of vulnerabilities is a daunting task for the IT administrator who is responsible for securing systems running software that he has no control of while keeping those services available even in the face of potential zero-day vulnerabilities. There are some basic rules that can be used to drastically reduce the risk:

Less software is better: Unfortunately most default software installs come with every possible package and option installed and enabled. Every piece of software contains vulnerabilities - some just haven’t been found yet - and every piece of software and functionality that gets installed and enabled exposes more of these potential vulnerabilities (even security software like OpenSSL, antivirus, and firewalls). Any unnecessary packages that are disabled, removed, or never installed in the first place decrease the chances that a particular system is affected by the next big vulnerability.
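
A quick way to start such an audit is simply to enumerate what is listening on a host. Here is a sketch using the psutil library (it may need elevated privileges to resolve process names); anything in the output without a business justification is removable attack surface.

    # Enumerate listening sockets and their owning processes with psutil;
    # each entry is attack surface to justify or eliminate.
    import psutil

    seen = set()
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_LISTEN:
            continue
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "?"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            name = "?"
        entry = (conn.laddr.port, name)
        if entry not in seen:
            seen.add(entry)
            print(f"port {conn.laddr.port:5d}  {name}")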

Reduce privileges: Administrators have a bad habit of running software under a privileged account such as root, a local administrator, or a domain administrator. Similarly, software developers and database administrators have a bad habit of using accounts with full read/write access to entire databases (or in some cases entire database servers containing multiple unrelated databases) for interaction between a web-based front end and a database backend. These practices are convenient because they eliminate the need to consider user access controls, but the consequence is that an attacker will gain these same unlimited privileges if he manages to exploit a vulnerability. The better security practice is to create dedicated accounts for each service and application, with the bare minimum access required for the software to function. This may not stop an attacker, but it will limit the amount of immediate damage he can cause and provide the opportunity to stop him.
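
As one concrete illustration of the pattern, here is the classic privilege drop for a Unix network service in Python. The "svc-web" account name is a placeholder for a dedicated low-privilege service account created in advance.

    # The classic privilege-drop pattern: do the one task that needs root
    # (binding a low port), then permanently switch to an unprivileged
    # account before touching any untrusted input. Must be started as
    # root; "svc-web" is a placeholder service account.
    import os, pwd, socket

    def drop_privileges(username):
        entry = pwd.getpwnam(username)
        os.setgroups([])         # shed supplementary groups while still root
        os.setgid(entry.pw_gid)  # group first, then user
        os.setuid(entry.pw_uid)  # point of no return

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 80))   # requires root
    sock.listen(5)

    drop_privileges("svc-web")   # an exploit now wins this account, not root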

Patch: For those services that do have to stay enabled, make sure to stay on top of patches. Oftentimes vulnerabilities are responsibly disclosed only to software vendors, and patches are quietly released before the details of the vulnerability are publicly acknowledged. Administrators who proactively apply security patches will have already protected themselves, while those who didn’t will be in a race with attackers to apply patches before exploits are developed and used.

Firewall: The definition of a “supercritical” vulnerability above includes “Internet-facing” for a reason: it is much easier to find a vulnerable system when it can be found by scanning across the open Internet. Services intended for internal use should be restricted to access from internal systems only, and for those internal services that require access from outside, consider a VPN in order to authenticate users before allowing direct access to the system. Firewalls are not an excuse to avoid patching, however, as many attacks are launched across internal networks from systems already compromised by malware.

Defense in Depth: There will always be some services that must be enabled and kept exposed to the Internet, and they may be affected by a zero-day vulnerability for which patches are not yet available. We must then rely on our ability to detect and stop an attack before it succeeds. Hopefully the combination of techniques described above (especially reduced privileges) will slow an attacker down enough that he becomes easier to detect. We can also increase the chances of detection by looking for the signs of an attack (such as with antivirus, intrusion detection/prevention systems, file integrity monitoring, behavior monitoring, and log correlation) while making it harder for the attacker to exfiltrate any sensitive data (firewalling outbound connections, data loss prevention technology, proxying). We must also always have a real live human ready to quickly respond to any alerts and intervene to stop an active attack; without that, all of this technology is useless and it’s just a matter of time before an attacker finds a way around it.

Time to stop using IE

The IE vulnerability that has just been disclosed (CVE-2014-1776) follows a fairly typical pattern we have seen before. Internet Explorer and Flash have a long track record of nasty vulnerabilities (along with Java and Adobe Reader). These vulnerabilities are useful for attackers who can set up web sites to exploit the vulnerability and then direct victims to those web sites via phishing emails, manipulated search engine results, purchased ads, or compromised legitimate popular web sites (so-called “drive-by download attacks”). Attacks of this type have been reported to be exploiting this vulnerability in the wild. Internet Explorer versions 6 through 11 are affected. Microsoft has issued an advisory with a number of workarounds that can be put into place while a patch is developed; it can be found here: https://technet.microsoft.com/library/security/2963983

This vulnerability also factors into the recent news that Windows XP is no longer supported by Microsoft: this represents the first major vulnerability disclosed for Windows XP since it went out of support earlier this month and, according to early reports, a patch will not be released for that platform. This means that the risk posed by any remaining Windows XP systems has just moved from theoretical to actual. Organizations should move off of the XP platform as soon as possible and take extraordinary steps to protect any remaining XP systems in the interim.

Relying on basic vulnerability scans to detect this sort of vulnerability can lead to a false sense of security if the results come back clean: most vulnerability scans are conducted from the perspective of an attacker coming in across a network and focus on making inbound connections to network services in order to identify vulnerabilities. In most cases these scans will not detect client-side vulnerabilities like this one, which are exploited via outbound connections. Most scanning tools can, however, be configured to connect to target systems with a valid username and password in order to analyze the installed software versions, and this type of authenticated scan should be effective in identifying this and other client-side vulnerabilities. Organizations that do not typically conduct this type of scan may be shocked at how many client-side vulnerabilities they actually have the first time they run it.
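
For a sense of what an authenticated scan looks at once it is logged in, here is a minimal sketch (Windows-only, standard library) that enumerates installed software and versions from the registry, the same data a credentialed scanner would compare against known-vulnerable versions:

```python
import winreg  # Windows standard library module

UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
    subkey_count = winreg.QueryInfoKey(root)[0]
    for i in range(subkey_count):
        with winreg.OpenKey(root, winreg.EnumKey(root, i)) as entry:
            try:
                name, _ = winreg.QueryValueEx(entry, "DisplayName")
                version, _ = winreg.QueryValueEx(entry, "DisplayVersion")
                print(f"{name} {version}")
            except FileNotFoundError:
                continue  # entry without name/version values
```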

The broader issue here is that any installed software may include vulnerabilities that increase the “attack surface” an attacker has to work with. A core security concept is that any unnecessary software should be removed or disabled whenever possible to reduce the attack surface. Unfortunately (for security at least) most software vendors and IT organizations choose ease-of-use over security, with default installations that include many potentially unnecessary features and plugins, including Flash, whether or not they are actually needed for business purposes. As system and network administrators have gotten better at disabling or firewalling unnecessary server software, attackers have shifted to attacking client software in order to gain a foothold inside a target network. Flash, along with Java, Adobe’s Reader software, and Internet Explorer itself, are the most common client-side targets, likely due to both their ubiquity and their complexity (more complexity usually means more vulnerabilities).

Preventing this and future drive-by attacks will require IT to rethink how they deploy software. Rather than installing everything by default “in case someone needs it”, IT should be creating workstations and servers with as little software as possible and then deciding what software to add based on the use-case for each system. For example, if a workstation’s only business purpose is to enter credit card numbers into a processor’s web site and that web site does not require Flash, then there is no reason to install Flash and add more potential vulnerabilities to the workstation. Most businesses will find that vulnerable plugins like Flash and Java are only needed for business purposes by a very small subset of their users. Of course many users are likely using these plugins for non-business purposes, like watching YouTube videos during downtime, and the organization will have to weigh the tradeoff between security and the users’ desire to use their workstation just like they would use their home computer.

Apple in particular is already taking action along these lines: after years of having Java enabled by default, Apple released a patch for Mac OS X that disabled Java due to a rash of zero-day vulnerabilities; users who actually need Java are provided with instructions on how to re-enable it when they reach a web site that requires it. Apple also added a feature to Safari that allows Flash and other plugins to be allowed or disallowed on a site-by-site basis. This feature in particular would provide the sort of granular control an IT organization needs in order to effectively manage client-side plugins like Flash: allow them for sites with a legitimate business need and disallow them everywhere else. The web does seem to be moving to HTML5, an open standard with the capability to replace most of Flash’s functionality. There is some hope that this transition will lead to fewer vulnerabilities than we’ve seen from Adobe’s proprietary software in the past.

Ultimately the choice is between continuing to scramble with tactical fixes like workarounds and patches whenever these zero-day vulnerabilities come out, or making strategic decisions about how systems are deployed to reduce the overall risk to the organization.

Bleeding Heart Vulnerabilities

A very nasty vulnerability has been discovered in the OpenSSL encryption software that powers the SSL/TLS* encryption behind many web sites, email servers, VPNs, and other solutions that require the confidentiality or integrity of information. OpenSSL is very widely used (coming standard with most Linux distributions and open source web servers like Apache) and most organizations will likely have vulnerable systems. This should be taken very seriously and remediated immediately; it has already been said that this vulnerability is worse than having no encryption at all.

*SSL and TLS are essentially the same thing: the encryption protocol used to be called “SSL” but was renamed to “TLS” years ago (what became TLS version 1.0 would essentially have been the next version of SSL had it not been renamed). Although most implementations now primarily use the newer TLS versions of the protocol, people still commonly refer to it as SSL, so I use “SSL/TLS” throughout this text to avoid confusion. Also note that OpenSSL is just one implementation of the open SSL/TLS protocol; there are other implementations that do not contain this vulnerability. To be clear: this is a bug in certain versions of the widely used OpenSSL software that implements the SSL/TLS encryption protocol, not a problem with the SSL/TLS protocol itself.

What it is
The gist of this vulnerability is that back in 2011 a bug slipped into the OpenSSL software that allows any attacker who can connect to a service protected by SSL/TLS encryption to take a snapshot of a small 64 kilobyte chunk of the target server’s memory. Such a small amount of memory may not seem like a big deal, but there is nothing preventing an attacker from making repeated requests in order to reconstruct larger swaths of memory. The risk is exacerbated by the fact that OpenSSL, by its very nature as an encryption product, is often used to protect sensitive services, almost guaranteeing that an attack on an SSL/TLS service will turn up something of use to an attacker. This could include usernames, passwords, session IDs, credit card numbers, or the encryption and decryption keys that protect the communication channel itself. Anyone who can connect to a server running a vulnerable version of OpenSSL can exploit this vulnerability, whether they are logged into the protected service or not.
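
For readers who want to see the shape of the bug, here is a deliberately simplified analogue in Python; the real flaw is in OpenSSL’s C heartbeat handler, which trusted the attacker-supplied length field instead of the actual payload size:

```python
# Simplified analogue of the Heartbleed bug -- NOT the actual OpenSSL
# code. The server is asked to echo back a payload and trusts the
# length claimed by the client rather than the payload it received.

SERVER_MEMORY = bytearray(b"ping!...SECRET_KEY=hunter2;session=ab12cd;...")

def heartbeat_buggy(payload: bytes, claimed_length: int) -> bytes:
    start = SERVER_MEMORY.find(payload)
    # BUG: no check that claimed_length == len(payload)
    return bytes(SERVER_MEMORY[start:start + claimed_length])

def heartbeat_fixed(payload: bytes, claimed_length: int) -> bytes:
    if claimed_length != len(payload):
        raise ValueError("length mismatch; silently drop the request")
    return payload

# The attacker sends a 5-byte payload but claims it is 40 bytes long,
# and receives adjacent "memory" contents in the reply:
print(heartbeat_buggy(b"ping!", 40))
```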

The vulnerability and the method of exploiting it are now well known. Attackers may already be using this technique to capture information from vulnerable servers, and the attack does not leave any evidence in logs, so there is no way to know whether a particular server has been attacked. We must assume that any system found to have a vulnerable version of OpenSSL may have had data compromised and act accordingly. Because SSL/TLS connections are encrypted, it would be very difficult to detect attacks using an intrusion detection system, and this should not be seen as a reliable way of mitigating the threat.

How to fix it
The first step an organization should take to mitigate this threat is to immediately patch any vulnerable systems. Anything running OpenSSL versions 1.0.1 through 1.0.1f should be considered vulnerable and be patched to the latest version that includes a fix, currently 1.0.1g. Older versions of OpenSSL in the 1.0.0 and 0.9.8 branches are not vulnerable to this particular issue, although they may have other vulnerabilities of their own. Keep in mind that OpenSSL is used for more than just web servers: SSL/TLS encrypted email services, SSL/TLS VPN solutions, and just about anything else that uses an encrypted communication channel could be based on OpenSSL. Embedded devices and “appliance” systems are often overlooked when it comes to patching and should be considered potentially vulnerable.
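
One quick (if partial) check: Python’s standard library reports the OpenSSL build it is linked against, which can be compared against the affected range. Other software on the host may be linked against a different copy of OpenSSL, so treat this as a single data point in a wider audit:

```python
import re
import ssl

version = ssl.OPENSSL_VERSION  # e.g. "OpenSSL 1.0.1e 11 Feb 2013"

# Flag 1.0.1 through 1.0.1f; 1.0.1g and the 1.0.0/0.9.8 branches
# are not affected by this particular bug.
if re.search(r"OpenSSL 1\.0\.1([a-f]?)\b", version):
    print(f"VULNERABLE to Heartbleed: {version} -- upgrade to 1.0.1g+")
else:
    print(f"Not in the affected range: {version}")
```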

Unfortunately patching alone is not enough to fully remediate this issue: an attacker can use the vulnerability to extract a server’s SSL/TLS secret keys, and what seems to be overlooked in most reports about this flaw is that an attacker who obtained those keys before the service was patched can potentially continue to use them to decrypt data long after the patch has been applied. The same is true for login credentials or other sensitive data (social security numbers, credit card numbers, etc.) that an attacker gathered either directly from memory via the vulnerability or by decrypting traffic with stolen keys later on. As a result, the complete guidance is to patch OpenSSL and then immediately generate new encryption keys, revoke the old keys, and force users to change their potentially compromised passwords. Steps to address other potentially compromised data, such as credit card numbers, will have to be decided on a case-by-case basis depending on how likely it is that the data could have been affected.

What should I do?
The risk to an Internet user is that their information (access credentials, credit card numbers, etc.) might be captured by a malicious individual using the method described above. There isn’t much anyone can do to protect themselves if a service they use, such as an SSL/TLS encrypted web site or email account, is vulnerable, beyond simply not using that service until the vulnerability is patched by the provider. Even determining whether a service provider is vulnerable can be difficult: a tool does exist to check services for the vulnerability, but running it against someone else’s service could attract unwanted legal attention (there are unfortunately cases where individuals have ended up in prison for independently investigating vulnerabilities in web sites and other services). The possibility that a service’s encryption keys might have been stolen while it was vulnerable, as described above, also presents a risk to individual users even after the provider has patched the service: stolen keys would allow an attacker to decrypt traffic, a particular concern for users of public WiFi services where eavesdropping on others’ traffic is simple.

Perhaps the easiest way to check whether a site has mitigated the vulnerability (and done it properly by generating new keys) is to check the certificate presented by the service. If the service provider was known to be vulnerable and the issue date of the certificate is prior to the release of the fix for this vulnerability, then the keys likely have not been changed. On the other hand, if the certificate was issued shortly after the fix was released, it would indicate that the provider has taken steps to remediate the issue.
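
A minimal sketch of that certificate check (the target host is illustrative, and this assumes the certificate validates against the system trust store):

```python
import socket
import ssl
import time

# OpenSSL 1.0.1g, the fix, was released on April 7, 2014.
HEARTBLEED_FIX = time.mktime((2014, 4, 7, 0, 0, 0, 0, 0, 0))

def cert_issued_after_fix(host: str, port: int = 443) -> bool:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed certificate fields
    issued = ssl.cert_time_to_seconds(cert["notBefore"])
    return issued > HEARTBLEED_FIX

# A False result for a site known to have been vulnerable suggests
# the keys have not been regenerated since the fix.
print(cert_issued_after_fix("example.com"))
```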

A long goodbye for XP

Windows XP will no longer be supported as of April 8th. This should not come as a surprise to anyone, as Microsoft’s lifecycle has been known for years, but what may come as a surprise is how many organizations are likely to be affected: recent reports indicate that over 30% of Windows installations are still running XP. With such a high percentage it is almost assured that any given organization has an XP installation somewhere on its network, likely on long-forgotten servers or workstations at rarely upgraded remote sites. Even if servers and workstations have been expunged, XP may still be lurking in one final holdout: embedded systems. These systems are almost appliance-like in nature, you just plug them in and they work, but behind the scenes they are still computers and require an operating system, often Windows XP. As one example, over 90% of ATMs run XP; other embedded systems running XP could include digital surveillance video recording systems, electronic door lock access control systems, graphic displays (like the departure screens in airports), digital telephone exchanges, etc.

The risks of an unsupported operating system should be obvious: Microsoft will no longer be providing patches for Windows XP, so any security vulnerabilities discovered in the future will remain permanently unfixable. With such a large number of XP systems still in use, attackers will almost certainly be looking for new vulnerabilities in XP and adjusting their exploit kits to take advantage of them, knowing that the exploits will work indefinitely. Even if the remaining XP machines on a network do not provide critical functionality, they may still serve as a gateway into the network for an attacker: most network administrators focus their security resources at the perimeter and have very little protection or detection capability internally. Attackers have been taking advantage of this for years by compromising workstations (often through malware distributed via phishing emails) and using them to target other, more sensitive systems on the same network. Leaving unsupported XP installations in place, whether on servers, workstations, or embedded systems, will provide just such a stepping stone for an attacker to penetrate a network and steal sensitive data.

In addition to the risk concerns there are compliance concerns as well: any unsupported operating system detected during an ASV (PCI Approved Scanning Vendor) scan results in an automatic failure. Because PCI defines the compliance scope as the systems that directly handle payment card data plus other connected systems (due to the risk of stepping-stone attacks described above), an unsupported XP machine that has nothing to do with card processing could cause this failure merely because it is on the same network.

A common refrain amongst organizations that run older software is that they do not upgrade because they are concerned about either the stability of the system or the cost of the upgrade. While these are valid concerns, they should be weighed against the potential stability impact of an attacker compromising the system with malware in order to use it as a platform to warehouse stolen data, send spam, launch DDoS attacks, and stage further attacks within the network, as well as the cost of cleaning up after such a breach. The likelihood of such a compromise will increase by the day as vulnerabilities are identified and disseminated, and it is unlikely that any objective risk assessment would conclude that keeping the unsupported operating system in place is the safest and least costly course of action.

NTT Com Security can help our clients identify XP machines on their network through scanning: When provided with access credentials our tools can connect to systems on the network and accurately identify the operating system. Fingerprinting techniques can help to identify systems that can’t be logged into (such as Unix systems with unique passwords) and flag potential unsupported installations for follow-up investigation. Additionally NTT Com Security can help design security controls to help protect existing XP systems while replacements are designed, procured, and tested.

The end of XP support will likely affect every one of our clients if it hasn’t already. Let’s see what we can do to help smooth the transition and make sure there are no surprises left behind.

Hard target

Businessweek is reporting that Target spent $1.6 million to install FireEye (a next-generation network monitoring solution), that they had an operations center in Bangalore monitoring the FireEye solution, that the FireEye solution alerted on the malware penetrating Target’s network, and that the operations center treated the alert as a false positive and ignored it. Also revealed in the article is that Target’s CEO said they were certified PCI compliant in September of 2013 (I’m assuming he means that this was when they completed their last Report on Compliance). For the icing on the cake, Businessweek made this their cover story with a huge “Easy Target” headline (complete with a cute animated online version), which demonstrates the potential PR fallout from a breach like this. The article is here.

Compliance, monitoring, and response
For quite a while now I’ve been beating the drum on the message that you can’t rely on protection mechanisms alone (firewalls, patching, etc.) to secure a network and the data within it; given enough time, a motivated attacker will find a way in. You have to be able to detect the intruder and respond to him in order to limit the damage he can cause. This is why banks have cameras, alarms, guards, and a hotline to the police despite also having a vault to keep valuables in. I’ve raised this point in the context of the Target breach before as well: we already knew that the breach was based on malware that had been modified to evade antivirus detection, which illustrates the need for monitoring and response capability rather than relying on antivirus alone. Reports indicated that Target first found out about the breach when informed of it by Federal authorities, likely because the stolen cards had already turned up on underground markets and had been traced back to Target via Federal or bank fraud analysis units. This indicates that Target’s detection and response capabilities were not effective, but that was not surprising: 69% of breaches are first detected by an external party, according to the Verizon 2013 Data Breach Investigations Report. Now the FireEye revelation, indicating that Target had all the right pieces in place to detect and respond to this breach, changes the nature of the conversation a bit.

Based on what we now know about the FireEye deployment, it appears that Target was in fact trying to do all the right things: they became PCI compliant, they had robust monitoring infrastructure (FireEye) in place as required by PCI-DSS, and they had actual human beings reviewing the alerts generated by those monitoring systems, also as required by PCI-DSS. Regardless of how effective the offshore operations center was (which I’m sure will become a topic of much speculation), these three points alone demonstrate more security effort than is apparent at most companies that handle credit cards. We are doing assessment work for major companies that haven’t even attempted to become PCI compliant yet (some in the retail sector); most of these companies (compliant or not) have not invested in monitoring infrastructure any more advanced than IDS/IPS and basic system log collection, and manually reviewing those logs is usually an overlooked side-job assigned to an overworked IT guy.

So here is where I disagree with Businessweek’s characterization of “Easy Target” (although I’ll admit it does make a great headline): in light of this revelation I would say that Target is likely one of the harder targets. Despite the enormous impact of this breach, it is still only a single breach and should be viewed in light of Target’s overall security efforts. I would be very interested to see numbers on how many attacks Target successfully stopped with their monitoring capabilities before this one slipped through. This breach did still happen, though, and companies will want to know why and what they can do to protect themselves. Based on what we know now, I would say that Target made two errors, both relatively minor when compared to how atrocious security is in most organizations. Both errors have to do with how monitoring is conducted; specifically, what behaviors generate alerts and how false positives are handled.

False positives
Any security monitoring system, whether it is a network intrusion detection system, a motion sensor in a bank, or a metal detector at an airport, can be tuned to be more or less sensitive, and a FireEye deployment is no different. The tuning capability exists because there is unfortunately no such thing as a security sensor that only alerts on threats without ever generating false positives: a metal detector that alerted on any metal at all would alarm every time a person with metal fillings in their teeth or metal rivets in their jeans walked through, a motion sensor that alerted on any motion at all would alarm every time a spider crawled across the floor, and a network monitoring system that alerted on any activity would inundate its operators with alerts on normal traffic. Tuning the system to be less sensitive in order to eliminate false positives is not as simple as it may seem: if a metal detector is tuned to detect only a lump of metal the size of a gun, it will fail to alarm when a group of people each carries through a single component of a gun for reassembly on the other side. For security technology to be effective it must be tuned to be sensitive enough to detect most conceivable threats, and an allowance must be made for humans to thoroughly investigate the false positives that will inevitably occur as a result.

Published information on Target’s response indicates that the FireEye solution labelled the ignored threat as “malware.binary”, a generic name for a piece of code that is suspected to be malicious even though it does not match any of the patterns for more widespread malware that has been analyzed and given a name. So far this indicates that Target had likely tuned their monitoring solution well enough, as it did detect the actual threat and generated an alert (a system tuned to be too permissive wouldn’t have generated an alert at all). Where Target’s system failed is the human response to that alert: it is likely that Target’s monitoring center received many of these generic alerts on a regular basis, most of them either false positives or simple attacks that were automatically blocked by other security mechanisms; after too many false positives, the humans responsible for responding to generic alerts will learn to ignore them. This is like asking each person who sets off the metal detector whether they have metal fillings and sending them on their way without further inspection if they answer in the affirmative; it wouldn’t be a surprise at all if something slipped through at that point. The only way to make effective use of a security solution is to actually investigate each alert and resolve its cause; this is time consuming and expensive, but not nearly so much as a breach. It appears that this is the key piece of Target’s process that failed.

Behavior monitoring
The second error is something I am inferring from what was not mentioned: specifically, any alerts based on activities on the network. Malware is a “known bad”: a chunk of code that is flagged because it exhibits certain suspicious characteristics. The same could be said for most alerts generated by intrusion detection and prevention systems: they are based on network traffic exhibiting known suspicious characteristics, such as a chunk of traffic that would exploit a known vulnerability in an email server, or a computer that quickly tries to connect to each of the protected systems in turn. Attempting to monitor a network by only watching for “known bad” traffic is akin to setting up a firewall to allow all network traffic except that which is explicitly denied (a practice that was mostly abandoned many years ago). The standard for configuring firewalls today is to deny all traffic by default and only allow specific “known good” services that are explicitly defined, and this is the method we must look to for effective network monitoring as well: define “known good” traffic and alert when anything out-of-the-ordinary happens on the network.

The actual method used to penetrate and infect the network aside, reports indicate that credit card data was sent from Target’s point-of-sale terminals to another compromised server on Target’s network, where it was then encrypted and sent out for the attackers to retrieve over the Internet. This represents the exfiltration of a huge amount of data and, had Target been looking for anything other than “known bad” traffic, it would have provided two opportunities for detection: point-of-sale terminals suddenly started interacting with a system on Target’s own internal network that they did not normally interact with, and then that system suddenly started sending large amounts of encrypted traffic to a system on the Internet that it had never communicated with before. Neither of these communication vectors would have been flagged as “known good” and therefore both should have triggered alerts for investigation. Unfortunately almost no-one monitors networks in this way, and Target can’t really be faulted for not being on the bleeding edge of security best practices.
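
A minimal sketch of what such “known good” monitoring might look like (the baseline entries and flow records are invented for illustration; a real deployment would derive them from NetFlow or firewall logs):

```python
# Flows observed during a clean baselining period, as
# (source host, destination host, destination port) tuples.
BASELINE = {
    ("pos-0142", "payment-gateway", 443),
    ("pos-0142", "inventory-db", 5432),
}

def check_flow(src: str, dst: str, dport: int) -> None:
    # Alert on anything that is not explicitly known good.
    if (src, dst, dport) not in BASELINE:
        print(f"ALERT: unexpected flow {src} -> {dst}:{dport}")

check_flow("pos-0142", "payment-gateway", 443)  # known good: silent
check_flow("pos-0142", "staging-srv", 445)      # would have caught it
```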

Law enforcement
There is a third failing that is worth mentioning here, one that is not at all under Target’s control but that nevertheless contributed directly to this breach and many others around the world: the inability or unwillingness of law enforcement to stop criminals who operate primarily online. In the physical world we are used to the concept that when a bank gets robbed the police will respond, investigate, and at least attempt to identify and arrest the offender, but in the online world this simply isn’t happening all that often.

There are various published reports identifying the individuals behind the malware used at Target and the website used to sell the stolen credit card numbers. These reports weren’t the results of Secret Service investigations or NSA metadata collection programs; rather, the individuals were identified, fairly easily, by private individuals piecing together information from social media sites and underground forums. Unsurprisingly to anyone in the security industry, the implicated individuals are all young, from Eastern Europe, and have been engaged in these activities for many years. The economic realities in much of Eastern Europe are such that there aren’t many legitimate career opportunities for bright young computer enthusiasts. Given the sad state of information security in the rest of the world and the potential income, it isn’t surprising that many of these kids, who under different circumstances might have been the brains behind a multi-million dollar Silicon Valley startup, are turning to crime against corporations on the other side of the planet. With the recent events unfolding in Ukraine, perhaps there is a glimmer of hope that these economic conditions will start changing in the near future.

One would assume, if these are just broke kids with a knack for computers who are so sloppy about protecting their identities that someone with computer know-how (and some knowledge of the Russian language) can figure out who they are, that law enforcement must already be heading for their door, but things are not so simple: a significant fraction of online crime crosses borders, and while large breaches like Target attract law enforcement attention, a small business owner would be hard-pressed to get any meaningful law enforcement response to a breach regardless of the consequences for his business. Local law enforcement agencies usually don’t have the resources to conduct investigations across state lines, never mind national borders. In the post-9/11 world, Federal law enforcement priorities are often focused elsewhere, often in the name of “national security”; the agencies that have historically focused on information security seem to be more concerned with threats posed by other governments than with criminal enterprises, and the FBI now presents itself as a counterterrorism and foreign intelligence agency. The political realities in Eastern Europe are also such that the cooperation between Western law enforcement agencies and their local counterparts that would be necessary to bring offenders to justice is difficult or non-existent, and the recent events unfolding in Crimea indicate that any change in this status quo is unlikely. For the foreseeable future, the attackers will be mostly left to their own devices, honing their skills across hundreds or thousands of attacks until they have the capability to penetrate even the most well-defended network.

Where do we go from here?
Technology alone can’t solve all our problems. Hopefully most of us know that already, but there were quite a few vendors at the RSA conference this year proclaiming that their technology would have prevented the Target breach or, even more ludicrously, claiming that it would have prevented the Snowden breach at NSA. If technology could in fact solve all of our woes then, in light of Target’s $1.6 million investment in FireEye’s solution, any organization that hasn’t spent that enormous amount on security technology should be very worried. This also demonstrates once again that compliance alone is not security either: we don’t know who Target’s PCI assessor was or whether they took the compliance mandate seriously (versus taking the checkbox approach), but from what I’ve read so far it is entirely possible for this breach to have occurred in the manner that it did even if Target was serious about compliance. We need to treat compliance as a minimum standard, a baseline upon which we build security appropriate to our own threat environment. And finally, it is becoming increasingly obvious that the next step in the cat-and-mouse game of security is to increase real-time monitoring and response capabilities, to make more effective use of the technology we have already deployed, and to make sure that the people tasked with that response have the time and resources to conduct proper investigations (no more pretending that the overworked IT guy will have time to do it).

The changing face of malware

Stories are circulating about a “remote access trojan” for Android that made its way into the Google Play store. This malware is making headlines due to its ability to activate cameras and microphones to spy on victims but what is also interesting is that the malware comes from a malware construction kit known as Dendroid.

The existence of the Dendroid toolkit isn’t surprising. As mobile platforms are increasingly used to handle sensitive data, both personal and business, the criminal elements that profit from the information captured by malware will shift more attention to these platforms in order to expand their illicit businesses. The pattern used by Dendroid is a familiar one: virus construction kits have existed for years, allowing attackers to quickly and easily combine various vulnerability exploits with malicious payloads in a customized package.

The malware generated by Dendroid managed to evade Google’s detection mechanisms and has since been picked up by antivirus signatures, but this is only the first step in what will be a cat-and-mouse game. As we’ve seen with traditional malware, the authors will now begin modifying their software to evade the latest antivirus signatures, always trying to stay one step ahead of the vendors. The Target breach is a high-profile example of this modus operandi: the malware used on Target’s point-of-sale systems is believed to have been purchased on an underground market, where it had been available for months if not years, and then modified to evade antivirus detection before being deployed.

Evading Antivirus
Modifying malware to evade antivirus solutions is made simple by the very methods that antivirus software uses to detect malicious code: most antivirus solutions are signature-based. When a new virus sample is found “in the wild”, the antivirus vendors look for unique patterns in the files or in the behaviors of the offending code and build a signature based on these patterns. These signatures are then added to a database that is distributed to the antivirus installations deployed around the world. The antivirus software simply looks for the signature patterns contained in its database and then alerts on or quarantines any suspect files.

This approach may be effective at preventing a common virus from spreading widely across the Internet once samples can be identified and signatures generated, but it quickly becomes ineffective in the face of custom-assembled malware: a malware author can simply review the same antivirus databases to determine how to avoid tripping any signatures when he develops a piece of malicious code, and can test the code against live antivirus installations to be sure. If the resulting malware is only deployed against a few selected targets, there will be no publicly circulating samples for antivirus vendors to build signatures from, and the malicious code will likely remain undetected until a breach is well underway, unless the target has other behavior detection and response capabilities deployed besides antivirus.
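
A toy example makes the evasion problem concrete (the signature bytes and malware name here are invented; real signatures are more sophisticated, but the principle is the same):

```python
# A byte-pattern "signature database" and a naive scanner.
SIGNATURES = {
    "EvilRAT.A": b"\xde\xad\xbe\xef|connect-c2",
}

def scan(sample):
    for name, pattern in SIGNATURES.items():
        if pattern in sample:
            return name
    return None

original = b"header...\xde\xad\xbe\xef|connect-c2...payload"
# The author tweaks one byte (the code decodes itself at runtime,
# so behavior is unchanged) and the pattern no longer matches.
repacked = original.replace(b"\xde\xad", b"\xde\xac")

print(scan(original))  # EvilRAT.A
print(scan(repacked))  # None -- signature evaded
```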

There is an inherent limit to how quickly signatures can be developed: antivirus vendors must first find a sample of the malware, examine it for patterns, and then carefully test the resulting signatures in order to avoid false-positive results once it is deployed. If a signature is not specific enough it can cause the antivirus solution to alert on legitimate software that just happens to match the signature patterns. Despite the efforts of the antivirus vendors this does happen occasionally and can have catastrophic consequences when critical files end up automatically quarantined and systems crash. As a point of reference, it took about 2 weeks after the Target breach was announced before signatures that would detect the malware used in their environment began to be released.

Malware authors are also becoming more creative in deploying their malware, utilizing “dropper” code that causes the initial infection, installs a separate backdoor, and then deletes itself in order to avoid leaving any files behind to be sampled by antivirus vendors. This makes the development of antivirus signatures that can stop the initial infection more difficult, even for more widely spread viruses, as there are no samples. Antivirus vendors also try to analyze virus files in an isolated “sandbox” environment so that they don’t unintentionally infect their own systems. Malware authors can design their code so that it can attempt to detect these sandbox environments and alter its behavior to prevent effective analysis.

Evading Google
The mobile app marketplace has gravitated toward a model with centralized app stores, very different from the distributed model common to the personal computer world. The malware generated by Dendroid is a member of a specific sub-class of viruses, called “trojans” after the infamous trojan horse, that is uniquely suited to distribution in the app store model: the malicious code is attached to another, often legitimate, program that potential victims would be likely to voluntarily download and run. The idea of getting a malicious app into a sanctioned app store is a very enticing prospect for a malware author, as it is almost guaranteed to have more exposure than software on a standalone website. Furthermore, users trust software in these sanctioned app stores and are unlikely to even consider that it may contain embedded malicious code.

Google (and Apple) recognize the trust users place in their marketplaces and attempt to prevent malware from ending up in the stores. This is typically done by running a submitted app in another “sandbox” environment, basically a virtual machine that simulates an actual mobile device. The behavior of the software within the sandbox is monitored by QA staff to determine whether it performs any suspicious actions when run. This sandbox environment is very similar to the way antivirus vendors monitor the behavior of malware samples in order to build signatures, and a technique that virus authors have long used to prevent analysis of their code has been adapted by Dendroid to sneak its malware past Google’s checks: detect whether the software is being executed on an actual device or in a sandboxed virtual environment, and if it is in a virtual machine, suppress any malicious behavior to avoid detection.

What about Apple
A recent study indicated that 97% of mobile malware is on the Android platform, which is an incredible result considering the widespread popularity of Android’s main rival, Apple’s iOS. One would expect malware authors to target any popular platform rather than focusing on one and ignoring the other half of the market. A likely reason for this phenomenon is the very openness of the Android platform that causes many of its users to choose it over Apple’s competing products with their “walled garden” approach.

Apple’s mobile platforms are very restrictive when compared to the freewheeling personal computer world and the Android platform, where any software can be run and nearly every aspect of the platform can be modified at will. With a few setting changes, Android users can access alternative app stores (often riddled with malware), allow apps to interact with each other, and access or change the operating system software at a very low level. Apple, on the other hand, restricts its users to official apps from the official app store only, prevents apps from communicating with each other except in very specifically defined ways, shields the underlying operating system from the user, and works very hard to prevent jailbreaks that would allow users to bypass these restrictions. The restrictions placed on iOS users and apps make it much more difficult for users to perform actions that would result in the successful installation of malware, and limit the damage that malware could cause to the system if installed.

Besides the apparent security benefits of Apple’s restrictive walled garden, Apple’s vigorous attempts to prevent jailbreaks are also likely a contributing factor in the platform’s resistance to malware. Jailbreak software that allows users to bypass Apple’s restrictions in iOS functions by exploiting bugs in the operating system in order to gain low-level access and open a backdoor that offers the user a greater degree of control over their device than Apple intended. This is the exact same technique that malware uses to gain access to a system and open a backdoor for attackers to take control. Apple is engaged in its own cat-and-mouse game with the authors of jailbreak software, quickly patching the bugs that allow jailbreaks to function and thereby also closing many of the holes that malware would be able to use to break out of an app and compromise the underlying system.

Playing Defense
Mobile malware is here to stay and companies must consider what they can do to protect themselves from it. Unlike desktop PCs, mobile devices are often used outside the company’s security perimeter where they can be exposed to any number of threats.

Antivirus should be considered a first-line defense against common viruses that have begun to spread widely, but it is ineffective at stopping most targeted attacks or viruses in the early stages of their spread. It should not be considered a standalone substitute for other security mechanisms.

Similarly, the mobile platform vendors’ methods of controlling access to their app marketplaces also have their limits. Much like the game between malware authors and antivirus vendors, there will be constant attempts to evade whatever controls are put in place to keep malware out of app stores, and Google’s or Apple’s approvals cannot be completely relied upon. Still, faced with iOS as a much tougher target for malware than Android, attackers have been focusing their efforts on Google’s platform. Apple’s iOS remains a very enticing target and malware will certainly be released for the platform but, for now at least, it would appear that the security risk on iOS is much lower than on Android. This of course assumes that users are not jailbreaking their devices and bypassing all of Apple’s controls that make the platform a more difficult target.

Ultimately the solution is a combination of techniques based on the risk mobile devices pose to an organization: Companies must think very carefully about the risks of allowing sensitive information on a privately owned device where little control can be exercised over the other software installed on the device, or conversely about what software they allow to be installed on company-owned devices. In most cases old technologies like antivirus should be combined with newer technologies like Mobile Device Management to provide defense-in-depth while increasing monitoring, alerting, and response capabilities so that potential breaches can be detected and stopped before they get out of hand.

On detection and response

Organizations need to move beyond merely trying to keep attackers out and start building the capability to quickly detect and respond to intrusions, while designing compartmentalized networks that slow attackers once they have breached the perimeter, buying more time to detect and respond to the attack. According to the Verizon Data Breach Investigations Report, 69% of breaches were spotted by an external party, which shows us that security staff are often asleep at the wheel.

Effective detection and response capability can be difficult and expensive to build; it is not as simple as deploying a piece of technology that will sound the alarm when a breach happens. Intrusion prevention systems, web application firewalls, security information and event monitoring systems, file integrity monitoring software, and other technological detection mechanisms require extensive tuning when they are deployed, and continue to require ongoing tuning to adjust to changing conditions on the network. Without tuning, these systems will generate mountains of false-positive alerts, essentially “crying wolf” so frequently that legitimate attack alerts will be lost in the noise and ignored as well.

While some technologies, such as intrusion prevention systems and web application firewalls, have the ability to automatically stop basic attacks when well tuned and properly configured, a sophisticated attacker will eventually find a way around them, and we must have real live humans paying attention to the network in order to stop attacks. Many sophisticated attackers are located overseas and are not keeping standard office hours, so this monitoring and response capability must operate 24x7 to be effective. Staffing for 24x7 monitoring can be difficult and cost-prohibitive for all but the largest organizations, and this is an area where many companies may benefit from outsourcing the monitoring and initial response roles to a managed services provider.

Most typical networks have security resources concentrated at the perimeter, with very little to protect systems inside the network from each other. This puts an attacker who successfully breaches the perimeter in a position to “pivot” on the compromised system and use it to attack other, potentially more sensitive, systems on the network without much interference. Unfortunately any host can provide the gateway for an attacker to breach the perimeter, whether it is a poorly written web application that allows commands to be run on the underlying server or a user who falls for a phishing email and downloads malware onto their workstation.

Protecting, and building the capability to monitor, an entire network with all of its possible attack points can be cost-prohibitive regardless of the size of the organization. This can be mitigated by compartmentalizing the network into separate segments: for example, building a dedicated section of the network for systems that handle credit card data and protecting it from the rest of the internal network with firewalls, intrusion prevention systems, and other security measures, just as it would be protected from the Internet. This would impede an attacker who managed to compromise another, less sensitive and less protected, system on the network by forcing him to go through the internal security perimeter, hopefully attracting attention from the security team as a result. An advantage of this approach, beyond slowing attackers down so that they are more likely to be detected, is that it allows organizations to concentrate their limited security resources on the network segments that contain critically sensitive data rather than expending resources unnecessarily on systems that would not directly impact sensitive data.

Although all of the details haven’t been released yet, these lessons can be applied to the Target breach based on what we know and suspect of the techniques used there. The attackers are believed to have gained entry into Target’s network by using the login credentials of an HVAC company that provides services to Target in order to access a web page (suspected to be an invoicing system). Although we don’t know how well segmented Target’s network is, a segmented network where critical systems like point-of-sale terminals are isolated from other unrelated systems would have made it much more difficult for the attackers to move into the point-of-sale systems undetected. The attackers are also believed to have conducted a test run of their malware by installing it on a few point-of-sale terminals before deploying it on a wider scale. The attack seems to have run for a few weeks before it was detected, demonstrating that Target likely did not have the monitoring and response capability necessary to detect that the POS systems had been compromised (such as with file integrity monitoring) or to detect the stolen card data being exfiltrated from the network (such as with data loss prevention technology). It is believed that the breach was detected through fraud analysis on the stolen cards or undercover purchases of stolen cards rather than by direct detection on the network, further illustrating this point.
