Vulnerabilities

Bleeding Heart Vulnerabilities

A very nasty vulnerability has been discovered in the OpenSSL encryption software that powers the TLS/SSL* encryption behind many web sites, email servers, VPNs, and other solutions that require the confidentiality or integrity of information. OpenSSL is very widely used (coming standard with most Linux distributions and open source web servers like Apache) and most organizations will likely have vulnerable systems. This should be taken very seriously and remediated immediately; it has already been said that this vulnerability is worse than not having encryption at all.

*SSL and TLS are essentially the same thing: the encryption protocol was originally called “SSL” and was later renamed to “TLS” (what became TLS version 1.0 would essentially have been SSL version 3.1 had it not been renamed). Although most implementations now primarily use the newer TLS versions of the protocol, people still commonly refer to it as SSL, so I use “SSL/TLS” throughout this text to avoid confusion. Also note that OpenSSL is just one implementation of the open SSL/TLS protocol; there are other implementations of SSL/TLS that do not contain this vulnerability. To be clear: this is a bug in certain versions of the widely used OpenSSL software that implements the SSL/TLS encryption protocol, not a problem with the SSL/TLS protocol itself.

What it is
The gist of this vulnerability is that back in 2011 a bug slipped into the OpenSSL software that allows any attacker who can connect to a service protected by SSL/TLS encryption to take a snapshot of a small 64 kilobyte chunk of the target server’s memory. Such a small amount of memory may not seem like a big deal, but there is nothing preventing an attacker from making repeated requests, each returning a different chunk of memory, in order to reconstruct larger swaths of it. The risk is exacerbated by the fact that OpenSSL, by its very nature as an encryption product, is often used to protect sensitive services, almost guaranteeing that an attack on an SSL/TLS service will turn up something of use to an attacker. This could include usernames, passwords, session IDs, credit card numbers, or the encryption and decryption keys that protect the communication channel itself. Anyone who can connect to a server running a vulnerable version of OpenSSL can exploit this vulnerability, whether they are logged into the protected service or not.
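
To make the mechanics concrete, here is a deliberately simplified sketch (in Python) of the kind of missing bounds check behind the bug. This is not OpenSSL’s actual code and the “server memory” contents are invented, but it shows how trusting a client-supplied length field turns a simple echo into a memory leak:

    # Deliberately simplified illustration of a Heartbleed-style over-read.
    # This is NOT OpenSSL's code; the "memory" contents are made up.
    SERVER_MEMORY = bytearray(b"HEARTBEAT-BUFFER" + b"user=alice&password=hunter2" * 4)

    def handle_heartbeat(payload: bytes, claimed_length: int) -> bytes:
        buffer = bytearray(SERVER_MEMORY)
        buffer[:len(payload)] = payload  # copy the client's payload into memory
        # BUG: echo back 'claimed_length' bytes without checking it against
        # len(payload), leaking whatever happens to sit next to the payload.
        return bytes(buffer[:claimed_length])

    def handle_heartbeat_fixed(payload: bytes, claimed_length: int) -> bytes:
        if claimed_length > len(payload):  # the fix: reject inconsistent lengths
            raise ValueError("heartbeat length exceeds actual payload")
        return payload[:claimed_length]

    leaked = handle_heartbeat(b"ping", claimed_length=120)
    print(leaked)  # includes data the client never sent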

The vulnerability and the method of exploiting it are now well known. Attackers may already be using this technique to capture information from vulnerable servers, and the attack does not leave any evidence in logs, so there is no way to know whether a particular server has or has not been attacked. We must assume that any system found to have a vulnerable version of OpenSSL may have had data compromised and act accordingly. Because SSL/TLS connections are encrypted, it would be very difficult to detect attacks using an Intrusion Detection System, and this should not be seen as a reliable way of mitigating the threat.

How to fix it
The first step that an organization should take to mitigate this threat is to immediately patch any vulnerable systems. Anything running OpenSSL version 1.0.1 through 1.0.1f should be considered vulnerable and be patched to the latest version that includes a fix, currently 1.0.1g. Older versions of OpenSSL in the 1.0.0 branch and 0.9.8 branch are not vulnerable to this particular issue although they are older branches and may have other vulnerabilities of their own. It should be kept in mind that OpenSSL is used for more than just web servers: SSL/TLS encrypted email services, SSL/TLS VPN solutions, and just about anything else that uses an encrypted communication channel could be based on OpenSSL. Embedded devices and “Appliance” systems are areas that are often overlooked when it comes to patching and should be considered as potentially vulnerable.
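
As a starting point for an inventory, a minimal sketch like the following can flag OpenSSL version strings in the affected range; the host list here is a hypothetical placeholder, and real data would come from package managers, configuration management, or an authenticated scan:

    import re

    def is_heartbleed_vulnerable(version: str) -> bool:
        # Affected: 1.0.1 through 1.0.1f; fixed in 1.0.1g; 0.9.8 and 1.0.0 unaffected.
        match = re.fullmatch(r"1\.0\.1([a-z]?)", version.strip())
        if not match:
            return False
        letter = match.group(1)
        return letter == "" or letter <= "f"

    inventory = {  # hypothetical example data
        "mail.example.com": "1.0.1e",
        "vpn.example.com": "0.9.8y",
        "www.example.com": "1.0.1g",
    }

    for host, version in inventory.items():
        status = "VULNERABLE" if is_heartbleed_vulnerable(version) else "ok"
        print(f"{host:20} OpenSSL {version:8} {status}")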

Unfortunately patching alone is not enough to fully remediate this issue. An attacker can use the vulnerability to steal the server’s SSL/TLS secret keys, and what seems to be overlooked in most reports about this flaw is that an attacker who obtains those keys before the service is patched can potentially continue to use them to decrypt traffic long after the patch has been applied. The same is true for login credentials or other sensitive data (social security numbers, credit card numbers, etc.) that an attacker gathers either directly from memory via the vulnerability or later on by decrypting traffic with stolen keys. As a result, the complete guidance should be to patch OpenSSL and then immediately generate new encryption keys, revoke the old keys, and force users to change their potentially compromised passwords. Steps to address other potentially compromised data such as credit card numbers would have to be decided on a case-by-case basis depending on how likely it is that the data could have been affected.
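
The key-regeneration step might look something like the following sketch, which uses the third-party Python cryptography package to create a brand-new private key and certificate signing request (the file names and common name are placeholders); revoking the old certificate and getting the new request signed still has to happen with the certificate authority:

    # Sketch of the "generate new keys" step, using the third-party
    # 'cryptography' package (pip install cryptography). Do NOT reuse the
    # potentially compromised key material.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
        .sign(key, hashes.SHA256())
    )

    with open("new_server.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("new_server.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))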

What should I do?
The risk to an Internet user is that their information (access credentials, credit card numbers, etc.) might be captured by a malicious individual using the method described above. There isn’t much anyone can do to protect themselves if a service they use, such as an SSL/TLS encrypted web site or email account, is vulnerable, beyond simply not using that service until the vulnerability is patched by the service provider. Even determining whether or not a service provider is vulnerable can be difficult: a tool does exist to check services for the vulnerability, but running it against a service could attract unwanted legal attention (there are unfortunately cases where individuals have ended up in prison for independently investigating vulnerabilities in web sites and other services). The possibility that the service’s encryption keys were stolen while it was vulnerable, as described above, also presents a risk to individual users even after the provider has patched the service: stolen keys would allow an attacker to decrypt traffic, a particular concern for users of public WiFi networks where eavesdropping on others’ traffic is simple. Perhaps the easiest way to check whether a site has taken steps to mitigate the vulnerability (and done it properly by generating new keys) is to check the certificate presented by the service. If the service provider was known to be vulnerable and the issue date of the certificate is prior to the release of the fix for this vulnerability, then the keys likely have not been changed. On the other hand, if the certificate was issued shortly after the fix was released, it would indicate that the provider has taken steps to remediate the issue.
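
For the curious, checking a certificate’s issue date can be done with nothing but the Python standard library; a minimal sketch follows (the hostname is a placeholder, and April 7, 2014, the date the fixed OpenSSL 1.0.1g was released, is used as the cutoff):

    # Sketch: fetch a site's certificate and compare its issue date against the
    # date the Heartbleed fix (OpenSSL 1.0.1g) was released. Standard library only.
    import socket
    import ssl

    HEARTBLEED_FIX = ssl.cert_time_to_seconds("Apr 7 00:00:00 2014 GMT")

    def cert_issued_after_fix(hostname: str, port: int = 443) -> bool:
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()  # validated certificate as a dict
        issued = ssl.cert_time_to_seconds(cert["notBefore"])
        return issued >= HEARTBLEED_FIX

    print(cert_issued_after_fix("www.example.com"))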

Adobe Critical Flash Player Update Repairs Flaw Used In Targeted Attacks

Quoted in CRN on a new zero-day vulnerability in Internet Explorer:

Read More...

Time to stop using IE

The IE vulnerability that has been released (CVE-2014-1776) follows a fairly typical pattern we have seen before. Internet Explorer and Flash have a long track record of nasty vulnerabilities (along with Java and Adobe Reader). These vulnerabilities are useful for attackers, who can set up web sites to exploit the vulnerability and then direct victims to those web sites via phishing emails, manipulating search engines, buying ads, or compromising legitimate popular web sites (so-called “drive-by download attacks”). Attacks of this type have been reported to be exploiting this vulnerability in the wild. Internet Explorer versions 6 through 11 are affected. Microsoft has issued an advisory, with a number of workarounds that can be put into place while a patch is developed, which can be found here: https://technet.microsoft.com/library/security/2963983

This vulnerability also factors into the recent news that Windows XP is no longer supported by Microsoft: it is the first major vulnerability affecting Windows XP since the platform went out of support earlier this month and, according to early reports, a patch will not be released for it. This means that the risk posed by any remaining Windows XP systems has just moved from theoretical to actual. Organizations should be moving off of the XP platform as soon as possible and taking extraordinary steps to protect any remaining XP systems in the interim.

Relying on basic vulnerability scans to detect this sort of vulnerability can lead to a false sense of security if the results come back clean: most vulnerability scans are conducted from the perspective of an attacker coming in across a network and focus on making inbound connections to network services in order to identify vulnerabilities. In most cases these scans will not detect client-side vulnerabilities like this one, since client-side vulnerabilities are exploited over outbound connections. Most scanning tools can, however, be configured to connect to target systems with a valid username and password in order to analyze the installed software versions, and this type of credentialed scan should be effective in identifying this and other client-side vulnerabilities. Organizations that do not typically conduct this type of scan may be shocked at how many client-side vulnerabilities they actually have the first time they run it.

The broader issue here is that any installed software may include vulnerabilities that increase the "attack surface" an attacker has to work with. A core security concept is that any unnecessary software should be removed or disabled whenever possible to reduce the attack surface. Unfortunately (for security at least) most software vendors and IT organizations choose ease-of-use over security and ship default installations that include many potentially unnecessary enabled features and plugins, including Flash, whether or not they are actually needed for business purposes. As system and network administrators have gotten better at disabling or firewalling unnecessary server software, attackers have shifted to attacking client software in order to gain a foothold inside a target network. Flash, along with Java, Adobe’s Reader software, and Internet Explorer itself, are the most common client-side targets, likely due to both their ubiquity and complexity (more complexity usually means more vulnerabilities).

Preventing this and future drive-by attacks will require IT to rethink how they deploy software. Rather than installing everything by default "in case someone needs it," IT should be creating workstations and servers with as little software as possible and then deciding what software to add based on the use-case for each system. For example, if a workstation’s only business purpose is to enter credit card numbers into a processor’s web site, and that web site does not require Flash, then there is no reason to install Flash and add more potential vulnerabilities to the workstation. Most businesses will find that vulnerable plugins like Flash and Java are only needed for business purposes by a very small subset of their users. Of course many users are likely using these plugins for non-business purposes, like watching YouTube videos during downtime, and the organization will have to weigh the tradeoff of security versus users’ desire to use their workstation just like they would use their home computer.

Apple in particular is already taking action along these lines: after years of having Java enabled by default, Apple released a patch for Mac OS X that disabled Java due to a rash of zero-day vulnerabilities; users who actually need Java are provided with instructions on how to re-enable it when they reach a web site that requires it. Apple also added a feature to Safari that allows Flash and other plugins to be allowed or disallowed on a site-by-site basis. This feature in particular would provide the sort of granular control an IT organization would need in order to effectively manage client-side plugins like Flash: allow them for sites with a legitimate business need and disallow them everywhere else. The web does seem to be making a move to HTML version 5, an open standard that has the capability to replace most of Flash’s functionality. There is some hope that this transition will lead to fewer vulnerabilities than we’ve seen from Adobe’s proprietary software in the past.

Ultimately the choice is between continuing to scramble with tactical fixes like workarounds and patches whenever these zero-day vulnerabilities come out, or making strategic decisions about how systems are deployed to reduce the overall risk to the organization.

Microsoft issues workaround for Internet Explorer bug

Quoted in USA Today on Microsoft’s workaround for the zero-day vulnerability in Internet Explorer:

Read More...

Beyond Heartbleed: 5 Basic Rules To Reduce Risk

As published in Wall Street and Technology:

Heartbleed made headlines like no other vulnerability in recent memory. This was partly due to the slick name, logo, and web site that explained the vulnerability (a rarity in a field where most bug reports are dry technical dispatches with names like “CVE-2014-0160”) but also due to the pervasiveness of the affected OpenSSL software, its role as a keystone in the Internet’s security architecture, and the potential consequences of a successful exploitation.

When we talk about “the next Heartbleed” we have to consider that new vulnerabilities are discovered every day and that many of them are just as widespread as Heartbleed. To put some bounds on this, let’s define a “supercritical” vulnerability with an impact similar to Heartbleed’s as one that meets all four of the following criteria (all of which Heartbleed does meet):
  • Affects software that is in widespread use as an Internet-facing service that commonly handles sensitive data
  • Is present in version(s) of that software representing a sizable percentage of the deployed base
  • Can be exploited to reveal sensitive information (directly or indirectly) without logging in with a valid username and password
  • Its method of exploitation is widely known

For those who speak CVSS this would roughly translate to AV:N/Au:N/C:C/E:F/RC:C/TD:H/CR:H
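
Expressed as code rather than as a CVSS vector, the same four criteria might look like the following sketch; the record fields and the 50% threshold for "sizable" are assumptions for illustration, not part of any particular vulnerability feed:

    from dataclasses import dataclass

    @dataclass
    class Vuln:
        internet_facing_service: bool   # widespread Internet-facing service handling sensitive data
        deployed_base_share: float      # fraction of the installed base running affected versions
        pre_auth_info_disclosure: bool  # reveals sensitive info without a valid username/password
        exploit_public: bool            # method of exploitation widely known

    def is_supercritical(v: Vuln) -> bool:
        return (v.internet_facing_service
                and v.deployed_base_share >= 0.5  # "sizable percentage" -- threshold is arbitrary
                and v.pre_auth_info_disclosure
                and v.exploit_public)

    print(is_supercritical(Vuln(True, 0.6, True, True)))  # illustrative values only -> True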

The justification for this definition is simple: Software that is Internet-facing can be exploited at any time from anywhere, and if the exploited software doesn’t contain any sensitive data then it is unlikely that anyone would care. For example databases may contain sensitive data but should normally be firewalled off from the outside world, requiring an attacker to compromise another internal system before exploiting a database, while vulnerabilities in desktop software require convincing a user to download and open a file; both can be done but they require more work than just scanning the Internet for vulnerable services.

Note that the definition does not include any reference to whether or not a patch has been released, as this is implicitly covered by the second point: it doesn’t matter that a vulnerability is fixed in version “1.0.1g” of a piece of software if 90% of the installed base is still running the vulnerable “1.0.1f” version. Sadly we still see vulnerabilities being exploited that are many years old, and even after all the press that Heartbleed got there are still many tens of thousands of affected servers out there on the Internet. The inverse of this can also work out to our benefit when a vulnerability is only present in newer versions of software but there is a sizable installed base still running older non-vulnerable versions (as we saw with Heartbleed and the old but still widely deployed 0.9.8 and 1.0.0 branches of OpenSSL). This isn’t much of a factor though, as more vulnerabilities are typically fixed in newer versions than would be avoided by sticking with older versions of software.

Back to the topic at hand: using this definition narrows things down very quickly, simply because there are only two types of services that can reasonably meet the first criterion: web servers and email. Over the past decade many of the services that would previously have existed as client/server applications with their own protocols have been migrated to web-based applications, and over the past few years these web-based applications have begun a steady migration to a handful of cloud-based service providers, often running homogeneous environments. We are increasingly putting more eggs into fewer and fewer baskets, which increases the potential for damage if one of those baskets is found to be vulnerable. The only widespread service to buck this web consolidation trend, at least in some part, and remain a standalone service is corporate email, which still relies on the SMTP protocol for transport. Personal email accounts are in double jeopardy: the front ends have very much moved to web-based services like Gmail, YahooMail, and Hotmail, but these services still rely on the same basic underlying SMTP protocol for mail transport as corporate email systems, making them vulnerable to attack from either angle.

To get one thing out of the way, the potential for this sort of supercritical vulnerability is not limited to open source software; it could pop up in commercial software just as easily. Following the Heartbleed vulnerability there was a bit of press about how small the OpenSSL team actually is despite how critical the software is to Internet security. I would venture a guess that the team responsible for SChannel (Microsoft’s SSL/TLS implementation, analogous to OpenSSL) doesn’t look that different from OpenSSL’s team (one full-time person with a few other part-timers to help out as needed). This sort of underlying infrastructure code tends to get written and then put on the proverbial shelf until an issue is discovered or a new function is required. Most companies would rather pay their programmers to build the new features in their flagship products that will attract more customers than to review old code for potential issues. There is a long and well documented track record of commercial software vendors ignoring early vulnerability reports with a response of “that’s just theoretical,” only to be subjected to a zero-day exploit later on.

This means our candidates for the next Heartbleed would be among the following common software packages:
  • Email software (Sendmail, Postfix, and Exchange)
  • Web server software (Apache and IIS)
  • The encryption packages that support both of them (OpenSSL and SChannel)
  • The TCP/IP stacks of the operating systems they usually run on (Linux, FreeBSD, and Windows)
  • The server-side languages and other plugins that are frequently used within web servers (PHP, Java, Perl, Python, Ruby, .Net)

So, as to what such a vulnerability can do: it depends on where in the software “stack” (from the server application on down to the operating system) the vulnerability falls. If a vulnerability falls anywhere in a web server’s software stack we can assume that the sensitive data in the web application and its backend database can be compromised. From authentication credentials on down to credit card numbers, the possibilities are really only limited by what types of sensitive data a particular web application handles.

Anything that compromises email is particularly nasty, as email represents the hub of our digital lives: besides all of the sensitive communications traversing a corporate email server that would be disclosed (take a look at the results of the HBGary Federal breach for an example of that), we also have to consider that nearly every third-party service we access uses email as part of its password reset functionality. Essentially, if an attacker can read your email he can take over nearly every other account you control in very short order.

It’s also worth pointing out that many vulnerabilities fall into the category known as “arbitrary code execution.” This is a fancy way of saying that the attacker can run whatever software he wants on the target system, which is actually worse than a vulnerability like Heartbleed that only allows the attacker to grab data from a vulnerable system. The software an attacker would usually run in this situation is a type of malware called a “rootkit” that opens up a backdoor, allowing for access later on even if the original vulnerability is closed off. From there the possibilities are endless (keyloggers, eavesdropping on network communications, siphoning data off from applications, launching attacks on other systems within the network, etc.).

Pretty much the worst thing imaginable (much worse than Heartbleed) would be an arbitrary code execution vulnerability in the TCP/IP stack (the chunk of the operating system software that controls all network communications) of a widely used version (or versions) of Windows, Linux, or FreeBSD. This would let an attacker take complete control of any affected system that can be reached from the Internet, likely regardless of what services it is or is not running.

Defending against these sorts of vulnerabilities is a daunting task for the IT administrator who is responsible for securing systems running software he has no control over while keeping those services available even in the face of potential zero-day vulnerabilities. There are some basic rules that can be used to drastically reduce the risk:

Less software is better: Unfortunately most default software installs come with every possible package and option installed and enabled. Every piece of software contains vulnerabilities; some just haven’t been found yet. Every piece of software and functionality that gets installed and enabled exposes more of these potential vulnerabilities (even security software like OpenSSL, antivirus, and firewalls). Any unnecessary package that is disabled, removed, or not installed in the first place decreases the chances that a particular system is affected by the next big vulnerability.

Reduce privileges:
Administrators have a bad habit of running software as a privileged account such as root, a local administrator, or a domain administrator. Similarly, software developers and database administrators have a bad habit of using accounts with full read/write access to entire databases (or in some cases entire database servers containing multiple unrelated databases) for interaction between a web-based front end and a database backend. These practices are convenient because they eliminate the need to consider user access controls, but the consequence is that an attacker will gain these same unlimited privileges if he manages to exploit a vulnerability. The better security practice is to create dedicated accounts for each service and application that have the bare minimum access required for the software to function. It may not stop an attacker, but it will limit the amount of immediate damage he can cause and provide the opportunity to stop him.
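
On the operating-system side, this advice can be as simple as the following sketch of a Unix daemon that starts as root (to bind a privileged port) and then permanently drops to a dedicated low-privilege account before doing any real work; the "webapp" account name is a hypothetical example:

    # Sketch: permanently drop from root to a dedicated service account so a
    # later exploit only inherits that account's limited privileges.
    # The "webapp" user is a hypothetical example; create it without a shell.
    import os
    import pwd

    def drop_privileges(username: str = "webapp") -> None:
        if os.getuid() != 0:
            return  # already running unprivileged; nothing to drop
        pw = pwd.getpwnam(username)
        os.setgroups([])        # shed root's supplementary groups first
        os.setgid(pw.pw_gid)    # group must be changed before the user id
        os.setuid(pw.pw_uid)    # point of no return
        os.umask(0o077)

    # ... bind the privileged port while still root, then:
    drop_privileges()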

Patch:
For those services that do have to stay enabled, make sure to stay on top of patches. Oftentimes vulnerabilities are responsibly disclosed only to software vendors and patches are quietly released before the details of the vulnerability are publicly acknowledged. Administrators who proactively apply security patches will have already protected themselves, while those who don’t will be in a race with attackers to apply patches before exploits are developed and used.

Firewall:
The definition of a “supercritical” vulnerability above includes “Internet-facing” for a reason: it is much easier for an attacker to find a vulnerable system when it can be reached by scanning across the open Internet. Services intended for internal use should be restricted to access from internal systems only, and for those internal services that require access from outside, consider a VPN in order to authenticate users before allowing direct access to the system. Firewalls are not an excuse to avoid patching, however, as many attacks are launched across internal networks from systems already compromised by malware.

Defense in Depth:
There will always be some services that must be enabled and kept exposed to the Internet, and they may be affected by a zero-day vulnerability for which patches are not yet available. We must then rely on our ability to detect and stop an attack before it succeeds. Hopefully the combination of techniques described above (especially reduced privileges) will slow an attacker down enough that he becomes easier to detect. We can also increase the chances of detection by looking for the signs of an attack (such as with antivirus, intrusion detection/prevention systems, file integrity monitoring, behavior monitoring, and log correlation) while making it harder for him to exfiltrate any sensitive data (firewalling outbound connections, data loss prevention technology, proxying). We must also always have a real live human ready to quickly respond to any alerts and intervene to stop an active attack; without that, all of this technology is useless and it’s just a matter of time before an attacker finds a way around it.

Microsoft Fixes 24 Browser Flaws, Adobe Repairs Flash Player Bug

Quoted in CRN on the severity of Flash and Java vulnerabilities:

Read More...

New OpenSSL breach is no Heartbleed, but needs to be taken seriously

Quoted in ZDNet on the implications of new SSL vulnerabilities:

Read More...

10 ways to strengthen web application security

Quoted in mrc's Cup of Joe Blog on ways to prevent web application vulnerabilities:

Read More...

Black Hat 2014 spotlights mobile device management, modem threats

Quoted in Tech Page One on vulnerabilities in Mobile Device Management products:

Read More...

MDM is Terrible: When Security Solutions Hurt Security

While the headline is a little too sensationalist for my liking, quoted in PC Magazine SecurityWatch on vulnerabilities in Mobile Device Management products:

Read More...

Driving Information Security, From Silicon Valley to Detroit

As published in Wall Street and Technology:

For better or worse, computer software vendors are practically devoid of any liability for vulnerabilities in the software they sell (although there is certainly a heated discussion on this topic). As far as vendors are concerned, software is “licensed” rather than sold, and users who accept those licenses are agreeing to waive certain rights, including the right to collect damages resulting from failures in the software.

To pull one particular example from the license for Microsoft SQL Server Enterprise 2012, a widely used piece of database software that underpins a significant number of enterprise applications, each handling millions of dollars’ worth of transactions:

YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO THE AMOUNT YOU PAID FOR THE SOFTWARE... YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES. 


When a flaw is discovered, including security flaws that are actively being exploited to breach systems, a vendor will typically issue a patch (sometimes many months later and, hopefully, without causing more problems than it fixes), and that is the end of the issue: no lawsuits, no refunds, and no damages.

This liability-free model used by software vendors stands in stark contrast to almost any other product that is bought and sold. Product liability laws hold manufacturers and sellers responsible for design or manufacturing defects in their products. Rather than releasing a fix and calling it a day, these companies will find themselves on the hook financially for the consequences of their failures.

Software infiltrates everything

Government oversight organizations like the Consumer Product Safety Commission, the National Highway Traffic Safety Administration, and the Food and Drug Administration track complaints and have the ability to force recalls or issue fines. For a recent example of these consequences we can look to General Motors’ ignition recall troubles that have so far resulted in $2.5 billion worth of recalls, fines, and compensation funds.

Most consumer products also don’t receive the frequent software updates that we are used to applying to our computers; whatever software version comes in a consumer product tends to stay in it for life. In the automotive world this has already led to some comically outdated in-dash navigation, information, and entertainment systems (especially when compared to today's rapidly evolving smartphones and tablets) but will also likely lead to some horribly vulnerable unpatched software.

These two worlds, both operating under very different rules, are colliding. Cutting-edge computers and software are increasingly finding their way into the types of products we buy every day, and nowhere is this more apparent than in the automotive world. The days of carbureted vehicles that could be tuned with a timing light and a screwdriver ended in the 1990s, replaced with fuel injection and electronic ignition systems that are controlled by computers actively adjusting engine parameters as we drive, based on the readings from a network of sensors scattered throughout the vehicle. These networks have grown to include more than just the engine sensors.

In-car networking standards, such as the CAN bus standard, enable a wide array of devices within a vehicle to communicate with each other, allowing huge wiring harnesses containing hundreds of bundled wires, fuses, and switches to be replaced with commands and updates traveling over a single wire. On modern cars the brakes may not be controlled by a hydraulic line connected to the brake pedal; the throttle may not be controlled by a cable connected to the gas pedal; and the steering may not be controlled by a shaft connected to the steering wheel. Instead, the brake pedal, gas pedal, and steering wheel could all just be electronic sensors that send computerized commands over the CAN bus network to electric motors elsewhere in the vehicle that carry out those commands. Toyota’s electronic throttle control system has already made some headlines as part of a series of unintended acceleration lawsuits that resulted in 16 deaths, 243 injuries, a motorist released from jail, and a $1.2 billion fine.
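
To give a sense of how simple these "commands over a single wire" are, here is a minimal sketch using the third-party python-can package against a Linux virtual CAN interface (vcan0); the arbitration ID and payload bytes are invented for illustration and do not correspond to any real vehicle's messages:

    # Minimal sketch using the third-party 'python-can' package against a Linux
    # virtual CAN interface (create one with: ip link add dev vcan0 type vcan
    # and ip link set up vcan0). The ID and data bytes below are invented.
    import can

    bus = can.interface.Bus(channel="vcan0", bustype="socketcan", receive_own_messages=True)

    # A CAN frame is tiny: an identifier plus up to 8 data bytes. Every node on
    # the bus sees every frame, and the protocol has no built-in sender authentication.
    frame = can.Message(arbitration_id=0x123, data=[0x01, 0x20, 0x00], is_extended_id=False)
    bus.send(frame)

    print(bus.recv(timeout=1.0))  # any node on the bus can listen just as easily
    bus.shutdown()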

This issue goes much deeper than the types of software mistakes that can cause a car to malfunction on its own. As we’ve seen with much of the software connected to the Internet, including some other systems that can have real-world (and sometimes very messy) consequences, it is the malicious hackers that can cause the most problems. Security researchers have already been looking into these sorts of possibilities and have separately demonstrated the ability to gain access to in-car networks from a remote location and to affect a vehicle’s braking, steering, and acceleration (among other things) once on the in-car network.

Other attacks like location tracking and eavesdropping on a vehicle’s occupants via hands-free communication microphones are also possible, but they pale in comparison to the potentially fatal consequences of interference with the vehicle controls. Presentations at the annual Black Hat and DEF CON security conferences this month have also covered topics related to automotive network and computer security, while a group in China is offering a prize of $10,000 to anyone who can gain remote access to a Tesla’s on-board operating system.

Although some of the media reports on this topic are being dismissed within the information security community as “stunt hacking” (sensationalist stories based on hacks conducted in unrealistic conditions), and manufacturers are quick to state that their car systems have safety checks built in, it is clear that the building blocks for a real-world attack are being assembled. The firmware manipulation techniques demonstrated at DEF CON earlier this month could be used to override or eliminate the safety checks built in by the manufacturers, and it is only a matter of time before the techniques that are being used to remotely access cars are combined with the techniques to manipulate the controls.

Many ways to attack

For an attacker, getting access to a car’s network is not as hard as it may initially seem. The most obvious attack point would be the On-Board Diagnostics connector that is usually located in a discreet spot under a vehicle’s steering wheel, where a small and cheap microcontroller could be connected. More interesting attacks could be launched via malware contained on CDs, DVDs, or USB devices loaded into the vehicle’s infotainment system. Moving into the wireless realm, many cars come equipped with Bluetooth or WiFi connectivity for smartphones and other devices within the vehicle.

All of these attack vectors would require the attacker to be in or near the target vehicle, but services like GM’s OnStar, BMW’s Assist, and others utilize mobile cellular connections to connect vehicles to the outside world. New smartphone apps that allow vehicle owners to interface with their cars remotely can open up these interfaces essentially to anyone on the Internet. It’s not too far-fetched to imagine that a few years from now bored Chinese hackers could spend their downtime crashing cars instead of trying to cause trouble at water treatment plants.

Motor vehicles have been built with mechanical and hydraulic linkages for over a century, and the basic safety principles for those types of systems are well understood. Designing reliable software for complex vehicles is a fairly new discipline that is only understood by a few companies (and even they make mistakes). Malfunctions or outside interference with operating vehicles can easily have fatal consequences, and the increasing use of networked control systems connected to the outside world increases the likelihood of accidental or malicious incidents.

The developers of the electronic systems in our vehicles would do well to heed the saying “with great power comes great responsibility.” As we’ve seen with both Toyota and GM’s recent troubles, safety issues can bring heavy financial consequences for manufacturers. Congress is starting to pay attention to the issue of car hacking as well, and it will likely only take one high-profile incident to provoke regulatory action.

Tesla Motors has already shaken up the industry by bringing its Silicon Valley approach to the automobile business, and it continues with this approach by actively soliciting information from the public on security vulnerabilities in its vehicles and publicly posting a “Hall of Fame” for security researchers who have assisted them. Perhaps this is part of the future: manufacturers working more closely with their customers to find and address issues.

As Google experiments with some of the first realistic self-driving cars, it isn’t too far-fetched to imagine them following the same path as Tesla when it comes to working with security researchers, especially in light of Google’s existing bug bounty programs. In any case, one habit of Silicon Valley that we can be almost assured won’t carry over to the automotive world is the practice of disclaiming liability for damages from the improper operation of software; the Toyota case has shown us that those days are already over. Who knows? Before long, it may be Silicon Valley looking to Detroit for advice on how to handle product liability concerns.

As a footnote, many of the issues raised here are applicable to industries outside the automotive sector as well (software vulnerabilities in medical devices and industrial control systems have been getting quite a bit of attention as of late). But it’s hard to imagine any other industry that is as integral to the national (and global) economy, whose products are used as frequently by such a large proportion of the population, and whose correct operation carries such life-and-death consequences.

Chip & Pain, EMV Will Not Solve Payment Card Fraud

As published in Wall Street and Technology:

Switching to EMV cards will lower retail fraud, but it's not enough. Here's the good, the bad, and the ugly.

Home Depot, much like Target before it, has responded to its breach with a press release indicating that it will be rolling out Chip and PIN technology. While this is a positive step, it is also a bit of a red herring: Chip and PIN technology alone would have done little to nothing to prevent these breaches.

Chip and PIN is one piece of a larger standard called EMV. This standard defines how chip cards interoperate with point-of-sale terminals and ATMs. It includes the Chip and PIN functionality that we hear so much about as well as Chip and Sign functionality that seems more likely to get rolled out in the US. EMV is not without its flaws. 

It's all about the money
The card brands are pushing for EMV to be in place by October 2015, with gas pumps and ATMs allowed an extension until October 2017. The mechanism by which this is being accomplished is a liability shift. In the US today the bank or card brand is typically responsible for most fraud losses. When the deadlines pass, the acquirers will transfer liability for fraud losses down to whoever isn't using EMV technology. For example, if fraud is committed with an EMV card at a merchant that only supports stripe cards, then the merchant will be liable.

The good
The advantage of an EMV card is that the chip is much harder to clone than a magnetic stripe.

Magnetic stripes are like miniature tape cassettes that can easily be overwritten with stolen data, while chips are more like miniature computers that cryptographically validate themselves. The chips are not supposed to give up the secret keys that would be necessary in order to create a clone.
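
A rough sketch of the idea (this is conceptual challenge-response, not the actual EMV protocol or its key management) shows why recorded chip traffic is far less useful to a thief than a copied stripe:

    # Conceptual challenge-response sketch -- not the actual EMV protocol.
    # The chip proves it holds a secret key without ever revealing the key, so
    # recording the conversation does not let a thief answer the next challenge.
    import hashlib
    import hmac
    import os

    CARD_SECRET = os.urandom(16)  # provisioned into the chip by the issuer

    def chip_respond(challenge: bytes) -> bytes:
        # Runs inside the chip; only the MAC leaves the card, never the key.
        return hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()

    def issuer_verify(challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = os.urandom(8)  # fresh for every transaction
    print(issuer_verify(challenge, chip_respond(challenge)))      # True: genuine card
    print(issuer_verify(os.urandom(8), chip_respond(challenge)))  # False: replayed answer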

Chip and PIN cards also make it more difficult to steal and use a physical card. The thief would need to know the PIN to use the stolen card.

The bad
So far banks in the US are rolling out Chip and Sign cards due to fears about consumer acceptance of PINs. With Chip and Sign it remains possible for a thief to steal a physical card and make a purchase at any store by drawing a squiggle on a piece of paper.

There are deeper problems with the transition, though. Not every merchant or bank will support EMV right away, so both EMV cards and terminals will continue to support magnetic stripes. Stripe data stolen from a non-EMV merchant can still be used for fraud, and unless terminals enforce the use of cards in EMV mode, this opens the door to stolen card data being used in magnetic stripe mode regardless of its source.

The ugly
The chip helps verify that the card is legitimate but most EMV terminals read the unencrypted card details off of the chip in nearly the same way that a magnetic stripe terminal reads them now. A compromised point-of-sale terminal could still skim off card details that could be used for fraud elsewhere.

Security researchers have also identified a few different techniques for capturing PINs and an attack that allows an incorrect PIN to be used successfully. EMV terminals are also not immune from people tampering with the terminals themselves, including in the supply chain, and this has already resulted in some real-world breaches.

E-commerce still relies on punching a card number into a website. EMV offers no protection here: card data can be stolen from compromised e-commerce servers, and stolen card data can be used to make online purchases.

What, if not EMV?
EMV does lower retail fraud where it is used today, largely because it is easier to steal cards and commit fraud in another geography where EMV is not in use. As other sources of card data dry up we can expect that the flaws in EMV we already know about will be exploited more widely and that new exploits will be found. Before too long we will end up right back where we are today.

The real solution to the retail breaches we've been seeing is encryption. By the time the card data gets to the point-of-sale system it's too late; encryption should happen as close to the card as possible, meaning in the terminal hardware as the card is read. In this model the only realistic attack a merchant would have to be concerned with is tampering with the terminal hardware itself.
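
As a rough sketch of what "encrypt at the moment of read" means (this illustrates the general point-to-point idea, not the PCI P2PE specification, and it uses the third-party Python cryptography package with a standard test card number):

    # Conceptual sketch of point-to-point encryption, not the PCI P2PE spec.
    # The reader encrypts the card number with the processor's public key, so
    # the merchant's POS software and network only ever see ciphertext.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

    # In reality the public key is injected into tamper-resistant reader hardware
    # and the private key never leaves the processor's HSM.
    processor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    reader_public_key = processor_key.public_key()

    def reader_swipe(pan: str) -> bytes:
        """Runs in the reader: encrypt the card number at the moment it is read."""
        return reader_public_key.encrypt(pan.encode(), OAEP)

    ciphertext = reader_swipe("4111111111111111")  # standard test card number
    # The POS terminal, store network, and merchant servers handle only ciphertext.
    assert processor_key.decrypt(ciphertext, OAEP) == b"4111111111111111"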

PCI has published the Point-to-Point Encryption (P2PE) standard to standardize this approach, but most merchants are focusing on the migration to EMV instead. I'm afraid that soon after the shift to EMV is complete we will find ourselves making another forced migration to P2PE. Either that, or consumers and merchants will begin their own migration to alternative payment technologies.