In a summary report by a researcher from GFI Software, a security products company, we learned yesterday that the count of vulnerabilities discovered in 2014 was up over the previous year.
We got a lot of graphs.
Who wants pie?
We got tables, too.
OS X and Linux make disturbingly large ripples in the pool, for once.
But all this rather misses the point.
The counts of the vulnerabilities researchers have discovered in your software are only one factor in your overall security picture, and, I would argue, a relatively minor one. Most attacks succeed because of misconfigurations and human factors: malicious insiders and social engineering.
The vast majority of technologically vulnerable software is on machines that should not be accessible from the Internet, and perhaps not even from the majority of the company’s intranet. And yet audit after audit will find default-allow access rules, especially on internal firewalls. These, plus lousy defaults for on-the-box controls, create many times more opportunities for attackers than should exist.
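The difference between default-allow and default-deny is easy to see in a toy model. This is a conceptual sketch, not real firewall configuration; the rules, ports, and function names are all illustrative:

```python
# Toy model of firewall policy evaluation, illustrating why default-allow
# rules multiply an attacker's opportunities. All rules are illustrative.

def evaluate(rules, packet, default):
    """Return the action of the first rule matching the packet's port,
    or the default action when no rule matches."""
    for rule in rules:
        if rule["port"] == packet["port"]:
            return rule["action"]
    return default  # what happens when nothing matches is the whole game

# The admin only thought about the web server:
rules = [{"port": 443, "action": "allow"}]

# A forgotten service listening on port 3389 (RDP) is reachable under
# default-allow, but blocked under default-deny:
rdp = {"port": 3389}
print(evaluate(rules, rdp, default="allow"))  # exposed
print(evaluate(rules, rdp, default="deny"))   # safe by default
```

Under default-deny, every reachable service had to be allowed on purpose; under default-allow, every exposed service is one nobody thought about.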
And for the most part, human failures are really design failures. IT architects design systems with an unspoken and largely unexamined assumption that the operators of those systems will do things correctly. This assumption is one that we security practitioners must challenge at every turn. Two things that security uber-consultant Bruce Schneier has said stick with me. The first is that good security people are people who break stuff, by breaking the assumptions under which it was designed. For example, here he wrote about
a hilarious product called SmartWater, which is water with microscopic particles in it that provide a unique coding, to mark property as yours. Schneier said, “The idea is for me to paint this stuff on my valuables as proof of ownership. I think a better idea would be for me to paint it on your
valuables, and then call the police.” This should have given the architects of the whole SmartWater idea what we like to call an “Oh, $#!+” moment.
And the second one might be his most-quoted one-liner: “If you think technology can solve your security problems, then you don’t understand the problems and you don’t understand the technology.”
Ultimately, vulnerability counts are about nitpicking the technology. Good technology is important, and we should be pushing the manufacturers to make it better for security all the time. But getting the numbers on all those charts and graphs to zero won’t be the final answer.
PC manufacturers have been installing crapware on their machines for years, perhaps decades. I bought a Packard-Bell computer in 1996 that needed to have quite a few “sponsored utilities” cleaned off to make it usable. This week, Lenovo got caught red-handed installing actual malware: the Superfish utility added a bogus certificate to the Windows root certificate store, enabling it to intercept and examine all HTTPS traffic via a man-in-the-middle attack that is simple to implement and invisible to the ordinary user. Superfish created a deliberate data tap in all your encrypted traffic.
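Why one bogus root certificate is so devastating comes down to how browser trust works: a certificate is accepted if it chains to *any* root in the store. Here is a deliberately simplified sketch of that logic; it is not a real TLS implementation, and the CA names are made up:

```python
# Conceptual sketch (NOT a real TLS implementation) of why injecting one
# rogue root certificate into the trust store defeats HTTPS everywhere.
# All CA names are illustrative.

TRUSTED_ROOTS = {"DigiCert", "GlobalSign"}  # grossly simplified root store

def browser_trusts(cert_issuer, roots):
    """A browser accepts any certificate that chains to a trusted root."""
    return cert_issuer in roots

# Normally, a proxy's self-signed cert for yourbank.com is rejected:
print(browser_trusts("Superfish Proxy CA", TRUSTED_ROOTS))  # rejected

# The Superfish installer adds its own root to the store...
TRUSTED_ROOTS.add("Superfish Proxy CA")

# ...so the proxy can now mint a certificate for ANY site on the fly,
# and the browser shows the padlock with no warning at all:
print(browser_trusts("Superfish Proxy CA", TRUSTED_ROOTS))  # accepted
```

One added line in the store, and every padlock on the machine becomes meaningless.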
So yesterday Lenovo issued this press release, as companies do in this situation. For the most part it was pretty standard eyes-glazing-over corporate doubletalk, most of which translates as “oh, s*, we got caught, how shall we walk it back?”
Still, a couple of key points stood out for me.
- We thought the product would enhance the shopping experience, as intended by Superfish.
That’s what Superfish intended, is it? Enhancing my shopping experience? Well, I’ll tell you what would enhance my shopping experience: someone who follows me around and carries all the bags. This is not really accomplished with fake root certificates stealthed into my Windows certificate store. Also, notice how the “intent” is now ascribed to Superfish, not Lenovo. A kettle of lawyers is circling….
- It did not meet our expectations or those of our customers.
Oh those pesky customers. Always expecting not to have their banking credentials stolen.
I was in a meeting and someone from another company (but with a good reason to want to know) asked me, “Does your organization respect the need for security, or do they view the requirements you bring to them as an annoyance and a burden?” In other words, he asked if we have a good security culture.
I told him that I am indeed fortunate that when I add security requirements to a project, or alert admins to a newly-uncovered flaw that makes their systems less secure, it is always a welcome addition. I know there are plenty of organizations where this is not true: where “Security” shows up and eyes begin to roll even before s/he speaks. So I know I am lucky this way.
But he went on to ask, “How can you show me that?” And that stopped me cold. I realized that, even though I am in a good security culture, I don’t really have artifacts to demonstrate that fact. I can show that awareness training takes place… but not that people are happier for having been trained. I can show that risk mitigation is done (and on time!)… but not that anyone welcomed the tasks or was glad to do them.
We security practitioners always talk about wanting to have this kind of security culture in our organizations. How do we know when we get it? It’s like Justice Stewart’s famous non-definition of obscenity: “I know it when I see it.” But if it has business value — and I believe we’d insist to our last breath that it does — then it should be measurable. So how is it measured?
I don’t think a survey can truly measure something like this. I am fairly sure that responses to surveys of employees are skewed in the direction of “good news.” Employees know what answers their employer wants, and protestations of the survey manager that all responses are confidential and anonymous might be a tad more credible if the survey link didn’t arrive in the company email inbox sporting a 56-character random-looking string after the ‘?’ in the URL.
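That “random-looking string” is all it takes to tie an “anonymous” response back to a person. A small sketch of the mechanism, with hypothetical names and addresses; the token format is my assumption, chosen to match the 56-character string described above:

```python
# Illustration of how a per-recipient token in a survey URL can map a
# response straight back to an employee. Names/URLs are hypothetical.
import secrets

employees = ["alice@example.com", "bob@example.com"]
token_to_employee = {}

for email in employees:
    token = secrets.token_hex(28)  # 56 hex chars, like the URL suffix
    token_to_employee[token] = email
    # Each person gets their own personalized "anonymous" link:
    print(f"https://survey.example.com/q?{token}  -> mailed to {email}")

# When a response arrives carrying its token, anonymity evaporates:
first_token = next(iter(token_to_employee))
print(token_to_employee[first_token])  # the "anonymous" respondent
```

Nothing here requires malice on the survey manager’s part; the same token that deduplicates responses and sends reminders is also a name tag.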
In any case, now I am on the hook to produce artifacts of the good security culture in which I work, and I am not sure what those might look like.
Have you ever been asked for such things? Or perhaps you know of a way to measure “security culture?”