
The Blurry Line of Marketing-Funded Research

Microsoft’s Security Engineering Center recently published a document called Software Vulnerability Exploit Trends. In reading it, I was confronted with a familiar feeling, a mix of interest and frustration that I’ll just call frustinterest. I was totally frustinterested in this document. It had charts like this one.

[Figure: CVE exploit trends chart from the report]

I really want to love this chart. It’s a temptress of possible conclusions. When you combine it with the chart showing exploits before and after the availability of a patch, it’s really interesting. But there are questions about the underlying data. The paper says, of the chart above, “The following figure represents the number of common vulnerabilities and exposures (CVEs) that were classified as RCE [remote code execution] CVEs over the last seven years.” Ah, solid; now I have a clear understanding of what data is included in the chart, except I’m left wondering how the RCE classification was determined.

Let me cut to the conclusion: this document does a poor job of explaining its data sources. It covers only those vulnerabilities included in a Microsoft Security Bulletin and categorized by Microsoft as remote code execution. Does that make the results invalid? No. Is it a very different report with that information in hand than without? Yes. If this is research, then I shouldn’t have to dig for that; the ‘Data Sources’ appendix should explain it very, very clearly. This is the data sources appendix:

[Screenshot: the report’s ‘Data Sources’ appendix]


I’m not entirely sure why this was even included. It provides almost zero information about what data was actually used. It doesn’t even answer the very relevant question of whether the document includes only Microsoft vulnerabilities. After considerable time poking around for answers, I’m forced to conclude that this is a marketing piece masquerading as research. I normally wouldn’t spend the effort to write a whole blog post about that, but it brings up a tricky and interesting dilemma: how much stock should one put in vendor-funded research?

Contrast the Standard

Consider a publication that’s the polar opposite: the Verizon Data Breach Investigations Report. There’s little question that the DBIR is a valuable research document that helps drive behavior in the InfoSec community. At a minimum, it drives real conversation and debate. Looking at the DBIR, a few differences jump out immediately. It has a methodology section up front, and it starts with this sentence: “Based on feedback, one of the things readers value most about this report is the level of rigor and integrity employed when collecting, analyzing, and presenting data.” It references standards for collecting data (VERIS). It lists its sources by name, and they’re not limited to the vendor (Verizon). It has readily available contact information for feedback. All these characteristics give you the (accurate) impression that this document is the result of research, not marketing.

The Dilemma

So what, you say? Well, the issue arises when the line isn’t clear. We all suffer from confirmation bias, so when we see white papers, briefs, or surveys that support our conclusions, we tend to accept them. On the other hand, there isn’t an overabundance of vendor-neutral information security research, and we shouldn’t ignore good data from the sources we have. The risk is that we draw conclusions that are simply incorrect, or correct but only narrowly applicable, and then change our behavior based on them.

Lessons for the Vendors

If you’re a vendor, you can directly affect this situation, for better or worse. The first lesson is to be clear about your objectives. Before producing and publishing a white paper, survey, or research document, decide why you’re doing it. It sounds dead simple, but I know that these things get spun up and spit out of organizations without the objectives being clear to everyone involved, and with the distinction between marketing and research unclear at best. If you want research, fund research, not marketing; then market the research. If you want to promote a product or service, then start with that objective and don’t lose track of it. Either way, the final product, whether research or collateral, will be better targeted and more successful at accomplishing the objective. That elusive thing you’re after, the thing that happens with good research like the DBIR, happens because it’s good research.

Lessons for the Readers

Be skeptical, but practical. Learn to tell the difference between a sales pitch disguised as data and real research. That doesn’t mean you have to dismiss the data-driven sales pitch; it’s a tool, and if you use it well, you can build something. Skepticism is all about asking questions, so make the effort to write them down while you read the document. And please, don’t repeat the conclusions of a blog post summarizing the document without reading the source material yourself; the source material should answer the questions the blog post or article raised. Honing this skill of sorting the informational wheat from the promotional chaff will make you far better at distinguishing good data that happens to be promoted by a vendor from data generated by a vendor in support of a position.
