
BYOD: Bring Your Old Dilemmas

Yay for lists! Here’s a list of four security issues around BYOD besides malware that you should worry about. Let me summarize:

1. Lost and Stolen Phones

2. Insecure Communications

3. Leaving the Walled Garden (uh, this is malware)

4. Vulnerable Development Frameworks

Ignoring for the moment the fact that issue number 3 (jailbroken phones using alternative app stores are more likely to get malware) is really about malware, I think this list is a useful reminder that the most interesting risks are not usually the most important. I like interesting problems too, but data loss from theft and, well, loss is far more common than malicious electronic compromise. Impact and cost might be a different discussion, though.

It’s also worth noting that the remainder of the issues aren’t about the phone at all. Insecure communications and vulnerable development frameworks are issues for all information security disciplines. So the lesson here, perhaps, is that BYOD brings along all the same issues you already have, except in greater quantity and on endpoints you don’t control.

Could PRISM Improve Enterprise Security Response?

While we’re all up in arms about the unwarranted data collection that the NSA has been performing, and the potential issues around privacy and legality of the PRISM program, one intrepid reporter stopped to ask how much this is costing US taxpayers. “The program was expected to cost $278 million in the current fiscal year, down nearly one-third from its peak of $394 million in 2011.”

It turns out that the US Federal Government is required to provide appropriate compensation for complying with legal orders for data. That makes perfect sense, if you think about it. It costs money, either in the form of equipment or people, to collect the data requested. It’s not a revenue generating event or tied to revenue, so it’s pure cost to the company; they should get some compensation. There’s some disagreement about this, but I think the logic is sound.

The interesting part to me is what hasn’t been reported. I’m interested in how this system affected the organizations involved. After all, it’s another kind of compliance. Information Security teams have learned how to wield the compliance stick in order to get things done, usually by applying compliance budget in ways that are useful beyond a specific regulation or policy. While the amount of money spent on complying with data collection orders is interesting, we should ask what other capabilities these compliance activities have enabled.

For example, if Verizon or AT&T successfully argued that they needed more expansive full packet capture capabilities across their infrastructure to comply with government surveillance requests, like those associated with PRISM, or to do so more cost effectively, they very well may have simultaneously put themselves in a better position to conduct accurate forensics on malicious attacks. Is there more to this? I don’t know, but if I were an industry journalist, I might just ask a few people.

The Cloud is Local

What an Astonished German Regulator Might Look Like.

Facebook has a cloud problem. Or maybe the cloud has a Facebook problem. The issue is that the ubiquity of a cloud-based service conflicts with the locality of law. This picture is of the Hamburg Commissioner for Data Protection and Freedom of Information, Johannes Caspar, and he’s astonished about the most recent changes in the Facebook data protection policy: “It is astonishing to find the facial recognition again in the new proposed privacy policy that Facebook published yesterday. We therefore have directly tried to contact officials from Facebook to find out if there is really a change in their data protection policy or if it is just a mistake of translation.”

I might say it’s an error in translation regardless of whether it’s legal, cultural or simply linguistic. Still, I think I’d go with a possible regression as an explanation instead. Maybe they just cut the new policy from a branch of code that didn’t have the previous changes merged.

Causes aside, for large entities jumping into the cloud with the hopes of streamlining service delivery, the costs of local legal knowledge and expertise remain. Even if it’s cloud-based, you’re still delivering a product in another country and the laws that apply may be different, especially when it comes to privacy and security. If you want to get a sense for how these laws vary throughout the world, check out this compendium [pdf]. That gives you an indication of the variation in law, but not in interpretation or enforcement.

Still, these considerations aren’t new. If you’re delivering a service or product around the world, this kind of legal compliance is the norm. The important part is to recognize that it applies to cloud-based services as well.


The Malware Problem


I like the term ‘malware.’ If you step out of the marketing for a minute, it’s a very simple, clear term to describe software that does something bad in your execution environment. A virus is a kind of malware, and so are rootkits, malicious shell code, and just about anything else you don’t like the result of.

In reading Wendy Nather’s discussion of “The Malware Detection Dilemma” I’m fascinated by this sentiment: “People are giving up on prevention.” 

The principle here, if you accept that sentiment as true, is that a strategy of rapid and effective detection is better than one of prevention. I find myself asking whether that’s true in any other discipline.

There’s a classic analogy (for me anyway) between information security and fire fighting. It’s most often used to explain why prevention is a better investment than detection, though almost always a secondary one. You’ll never prevent every fire, so you have to detect and respond, but once you have hit the maximum cost/value intersection on that activity, you then invest in prevention. Why? I mean, you could put an active fire response unit in every building or on every floor of every building. We could all carry fire extinguishers with us at all times. We don’t do that, though, because it’s not worth it. Let’s put that another way: the value delivered by those additional measures does not equal or outweigh their added cost. Instead, we invest in prevention because that investment, when you already have detection in place, is worth it.
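
To make that cost/value intersection concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is invented for illustration; the only point is that a control is worth buying when the loss it avoids outweighs what it costs, and that piling on more of the same control eventually stops clearing that bar.

```python
# Toy illustration of the cost/value threshold argument above.
# All figures are hypothetical; none come from real data.

def net_value(annual_loss, reduction, cost):
    """Expected loss avoided per year, minus what the control costs."""
    return annual_loss * reduction - cost

BASELINE_ANNUAL_LOSS = 1_000_000  # hypothetical expected yearly fire/breach loss

# (control, fraction of expected loss avoided, annual cost) -- all made up
controls = [
    ("smoke detectors + response",  0.40,  50_000),   # baseline detection
    ("extinguisher in every room",  0.45, 500_000),   # more detection/response: tiny gain, big cost
    ("fire-resistant construction", 0.70, 300_000),   # prevention
]

for name, reduction, cost in controls:
    value = net_value(BASELINE_ANNUAL_LOSS, reduction, cost)
    print(f"{name:30s} net value: {value:>12,.0f}")
```

Run it and the middle option comes out underwater while the prevention line item does not, which is the whole argument in miniature.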

Shift back to information security now. Does the analogy hold? Sure, I think it does. What doesn’t hold, however, is the assumption that we’re in the same place with costs and outcomes. With fire fighting, the fire doesn’t change behavior so much. With information security, we are dealing with rapidly changing technology on both sides. Perhaps we did hit a threshold with detection a few years back, and we rightly shifted to prevention as the preferred investment, but then the conditions changed in both platform and malware, such that the detection cost/value threshold shifted. Now we’re back to detection being a required investment.

While Nather looks at what’s after ‘advanced’ for anti-malware, she’s only looking at one side of the coin (or perhaps it’s a multi-sided die). We also have to consider what’s next for prevention, for platform, and for attack technology. All three are shifting and while we invest in detection, it may very well be that the next shift is in a next generation platform or prevention technology.

Information Security Logos

[Image gallery: a tweet from kwestin and seven information security logo images]

The Interconnected Web and Shifting Target Surfaces


Take a look at what’s in your browser right now. I’ll go ahead and assume you’ve got multiple tabs open. They each display a different site, which is probably pulling in code and content from at least 3 or 4 distinct sources, maybe more, not to mention the 3rd party libraries and tools that are incorporated. Right now you are connected to and interacting with dozens of logical surfaces. For example, if I hit ‘view source’ on my logged-in Facebook page, then simply search the page for ‘.com,’ I get:

mathtag.com
washingtonpost.com
mediamath.com
chase.com
sprint.com
turn.com
marriott.com
and more.
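
If you want to repeat that ‘view source’ exercise without eyeballing the page, a rough sketch like the following works. The file name is hypothetical (save your own page source first), and the regex is deliberately crude; it is meant to surface third-party hosts, not to be a robust HTML parser.

```python
import re
from collections import Counter

# Hypothetical file: the saved HTML source of whatever page you are inspecting.
with open("page_source.html", encoding="utf-8") as f:
    html = f.read()

# Grab anything that looks like a hostname ending in .com (crude on purpose).
hosts = re.findall(r"[a-z0-9.-]+\.com", html, flags=re.IGNORECASE)

# Collapse subdomains to the last two labels and count occurrences.
domains = Counter(".".join(h.lower().split(".")[-2:]) for h in hosts)

for domain, count in domains.most_common(15):
    print(f"{domain:30s} {count}")
```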

Keep that in mind for a minute.

Recently, a backdoor was discovered and removed in a downloadable ‘platform’ for serving ads called OpenX. Oh, wait, it may have been in the flowplayer video player that was incorporated into the OpenX software. At this point, the compromised code is at least 4 levels removed from the actual user. I connect to a website (level 1), which serves me ads (level 2), which use the OpenX software (level 3), which incorporates a 3rd party video player (level 4).

Another example is the use of legitimate cloud services for nefarious purposes. There are not only examples of cloud services simply being used as one might a physical server, but also more novel examples of attackers using Dropbox and WordPress to host and deliver malware.

This activity represents a shift in the target surface for the enterprise. Scratch that; shift is the wrong word. It’s really more of an expansion. You don’t get to stop worrying about the previously valid attack vectors. You just have to worry about new ones too. As we combine ubiquity of connectivity with the multi-layered model of delivering content above, and the ability of users to bring their own devices into the corporate network, it becomes a significant challenge for any organization to actually measure and monitor the surface on which their data is exposed to attack.

There’s no pithy conclusion at the end of this post. This is just a real, looming problem that I haven’t seen anyone really attempt to solve yet. Of course, solving problems that are merely looming is rarely profitable.

Is PRISM Ultimately Good for Privacy?

It seems like common sense to think of privacy and transparency as opposing forces. One seeks to expose, while the other seeks to hide. The reality, however, is a little more complex.

There are two revelations in the history of cryptography that shed light on the value of transparency to privacy.

Public-key cryptography is the real-world realization that a system designed to ensure privacy that is based wholly on a shared secret key will ultimately fail for very practical reasons. Exchanging a key while also keeping it secret is a hard problem. Ultimately, a secret is something only one person knows, and so the most effective cryptographic methods rely on the ability to exchange a functioning key publicly. In this case, privacy relies on transparency to function.
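
A toy Diffie-Hellman exchange makes the point concrete: everything the two parties send one another travels in the clear, yet they still arrive at a shared secret. The parameters below are far too small and simplistic for real use; they are chosen only to keep the sketch readable, and real systems should use vetted libraries and parameters.

```python
import secrets

# Public parameters, known to everyone (toy-sized, for illustration only).
p = 2**127 - 1  # a Mersenne prime
g = 3

a = secrets.randbelow(p - 2) + 2   # Alice's private value, never transmitted
b = secrets.randbelow(p - 2) + 2   # Bob's private value, never transmitted

A = pow(g, a, p)                   # Alice's public value, sent in the clear
B = pow(g, b, p)                   # Bob's public value, sent in the clear

# Each side combines the other's public value with its own secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

assert shared_alice == shared_bob  # same key, derived entirely from a public exchange
```

The privacy of the result rests on the secrecy of the two private values, not on hiding the protocol or the values exchanged, which is exactly the sense in which privacy relies on transparency.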

Open-source cryptography is the other example where disclosure ensures privacy most effectively. The idea that an openly available method of encrypting data actually produces a more secure result may seem obvious to those in information security now, but it actually flies in the face of what many would consider common sense.

What does this have to do with PRISM? There has now been a series of disclosures, accusations, and consequences stemming from Edward Snowden’s actions. We’ve started down a rabbit hole, but we’re not at the bottom yet. As we learn more and more about the program and the data collected, we move further toward an open model. This is important, and difficult to articulate in a crowded room full of heated conversation. Let me start with two assumptions about government data collection:

  • The government must be able to collect private data in order to ensure national security.
  • The individual whose data is collected cannot know that this has happened until after a conclusion has been reached.

These two assumptions are used to drive programs like PRISM through FISA and other means. Proponents of secrecy use these assumptions to arrive at the conclusion that the entirety of these operations must be kept secret. This is the argument for closed source government data collection. It’s wrong, and it ultimately fails gloriously with Snowden. Add some new assumptions:

  • The government must be able to collect private data in order to ensure national security.
  • The individual whose data is collected cannot know that this has happened until after a conclusion has been reached.
  • The government may only collect data that is specifically relevant to an investigation (insert constitutional law here).
  • The method and system for requesting, approving and collecting this data must be publicly disclosed.

People will argue about constitutional law and searches. It’s a good argument to have because defining what a ‘search’ is and when it’s relevant is really important, but it’s the fourth assumption that’s really sticky. It’s only when the entire apparatus of data collection is available to *anyone* that the first two assumptions can be held true for any length of time. The ‘Snowden Effect’ is unavoidable in any system that relies on the secrecy of its methods. It’s the transparency that drives a better system, and ultimately more effective data collection and more confidence about privacy.

Is PRISM ultimately good for privacy? That question should be “Is the inevitable disclosure of secret data collection methods ultimately good for the transparency of government operations?” I think the answer is yes. There are many who would argue that the mechanism and methods must be kept secret. So what are those arguments?

Time Frames and Risk Perception

I found myself reading the results of a survey today that had questions about risk perception, or more specifically, about how likely you perceive the realization of a particular threat to be in a particular time frame. The question made me wonder how much the specified time frame affects your perception of the risk. Take the following questions as examples:

  • How likely are you to be hit by a car?
  • How likely are you to be hit by a car in the next 10 years?
  • How likely are you to be hit by a car in the next month?
  • How likely are you to be hit by a car today?

With the diminishing time frame, the perception of probability that a threat will be realized decreases. I feel like I’m much less likely to be hit by a car today than in the next 10 years. The same principle can be applied to information security risk. I feel like it’s much less likely I’ll be compromised today than sometime over the next 12 months.
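
Here is a quick sketch of that intuition, assuming a fixed and independent daily probability (the rate below is invented). The cumulative probability over n days is 1 - (1 - p)^n, so the longer window really does carry more risk; under this simple model the curve never quite levels out, it just creeps toward certainty.

```python
# Cumulative probability of at least one compromise over a window of n days,
# assuming an invented, independent daily probability.
p_daily = 0.0001  # hypothetical chance of compromise on any given day

for label, days in [("today", 1), ("next month", 30),
                    ("next year", 365), ("next 10 years", 3650)]:
    cumulative = 1 - (1 - p_daily) ** days
    print(f"{label:15s} {cumulative:7.2%}")
```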

Is that probability actually less? I suspect the answer is yes. At what point does the curve level out? I suspect that’s a much, much harder question to answer as it requires that you actually test the probability, and given the consistent failure of organizations to detect breaches, the results are unlikely to be reliable.

A Collection of Headlines for Alexander’s Black Hat Talk


I thought the variety of headlines streaming through my news feeds was kind of interesting as a collection. It would be interesting to rate each as positive/negative towards the program and map them to news source, and maybe the number of days after the event that each was published.

NSA director addresses Black Hat, says there have been “zero abuses” of data

At Black Hat, U.S. general offers a modest glimpse into NSA protocols

NSA Director Defends Surveillance To Unsympathetic Black Hat Crowd

NSA Chief To Hackers: Analysts Don’t Abuse Their Power

NSA chief: Snooping is crucial to fighting terrorism

NSA director heckled at Black Hat cybersecurity conference

NSA Chief Keith Alexander Speaks About PRISM at Black Hat

NSA director addresses Black Hat, says there have been “zero abuses” of data

NSA Director Defends Surveillance Programs

NSA Chief Keith Alexander Speaks About PRISM at Black Hat

Black Hat: NSA Chief Alexander Talks About PRISM

Buffeted by New Disclosures, NSA Chief Defends Surveillance Programs at Black Hat

Black Hat: NSA boss Keith Alexander claims PRISM only gathers terrorist data

 

The Blurry Line of Marketing Funded Research

Microsoft’s Security Engineering Center recently published a document called Software Vulnerability Exploit Trends. In reading it, I was confronted with a familiar feeling, a mix of interest and frustration that I’ll just call frustinterest. I was totally frustinterested in this document. It had charts like this one.

[Chart from the report: CVEs classified as remote code execution, by year]

I really want to love this chart. It’s a temptress of possible conclusions. When you combine it with the chart showing exploits before and after availability of a patch, it’s really interesting. But there are questions about the underlying data. The paper says, of the chart above, “The following figure represents the number of common vulnerabilities and exposures (CVEs) that were classified as RCE [remote code execution] CVEs over the last seven years.” Ah, solid; now I have a clear understanding of what data is included in the chart, except I’m left wondering how the RCE classification was determined. Let me cut to the conclusion. This document does a poor job of explaining its data sources. It only covers those conditions included in a Microsoft Security Bulletin and categorized by Microsoft as remote code execution. Does that make the results invalid? No. Is it a very different report with that information in hand than without? Yes. If this is research, then I shouldn’t have to dig for that. The ‘Data Sources’ appendix should explain it very, very clearly. This is the data sources appendix:

[Screenshot: the report’s ‘Data Sources’ appendix]

I’m not entirely sure why this was even included. It provides almost zero information about what data was actually included. It doesn’t even answer the very relevant question of whether this document covers only Microsoft vulnerabilities. After a significant amount of time poking around for answers, I’m forced to conclude that this is a marketing piece masquerading as research. I normally wouldn’t spend the effort to write a whole blog post about that, but it brings up a tricky and interesting dilemma. How much stock should one put in vendor-funded research?

Contrast the Standard

Consider a publication that’s the polar opposite: the Verizon Data Breach Investigations Report. There’s little question that the DBIR is a valuable research document that helps drive behavior in the InfoSec community. At a minimum, it drives real conversation and debate. Looking at the DBIR, there are a few differences that jump out immediately. It has a methodology section up front, and it starts with this sentence: “Based on feedback, one of the things readers value most about this report is the level of rigor and integrity employed when collecting, analyzing, and presenting data.” It references a standard for collecting data (VERIS). It lists its sources by name, and they’re not limited to the vendor (Verizon). It has contact information for feedback that’s readily available. All these characteristics give you the (accurate) impression that this document is the result of research, not marketing.

The Dilemma

So what, you say? Well, the issue arises when the line isn’t clear. We all suffer from confirmation bias, so when we see white papers, briefs, or surveys that support our conclusions, we tend to accept them. On the other hand, there isn’t an overabundance of vendor-neutral information security research; we shouldn’t ignore good data from available sources. The risk, however, is that we draw conclusions that are simply incorrect, or correct but narrowly applicable, and that we change our behavior based on those conclusions.

Lessons for the Vendors

If you’re a vendor, you can directly affect this situation in either a positive or negative way. The first lesson is to be clear about your objectives. Before producing and publishing a white paper, survey, or research document, decide why you’re doing it. It sounds dead simple, but I know that these things get spun up and spit out of organizations without the objectives being clear to all involved, and with the distinction between marketing and research unclear at best. If you want research, fund research and not marketing; then market the research. If you want to promote a product or service, then start with that objective and don’t lose track of it. The result will be that the final product, research or collateral, will be better targeted and more successful at accomplishing the objective. That elusive thing you’re after, the thing that happens with good research like the DBIR, happens because it’s good research.

Lessons for the Readers

Be skeptical, but practical. Learn to tell the difference between a sales pitch disguised as data and real research. That doesn’t mean you have to dismiss the data-driven sales pitch. It’s a tool; if you use it well, you can build something. Skepticism is all about asking questions, so make the effort to write them down while you read the document. And please, don’t use the conclusions from a blog post summarizing the document without reading the source material yourself. That source material should answer the questions you had from the blog post or article. Honing this skill, sorting the informational wheat from the promotional chaff, will give you a far greater ability to distinguish good data that just happens to be promoted by a vendor from data generated by a vendor in support of a position.