Because I'm all about the "good enough."

Saturday, February 11, 2012

In 50 gigabytes, turn left: data-driven security.

I love Scott Crawford's research into data-driven security.  I agree with him that IT operations and development can both benefit from the right security data -- where "right" means at the appropriate level and relevant to what they're doing.  It also has to be in the right mode:  an alert should be a conclusion drawn from the analysis of data (20 failed logins per second = someone is using automation to try to break in), triggered by an event or a confluence of events.  Once someone in IT needs to perform an investigation, the need changes to looking at more atomic data (exactly which logins are being targeted, whether they're active or disabled, etc.).  In other words, the details need to be available on demand, but they shouldn't be shoved at the IT staff in lieu of useful alerts.
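The alert-versus-detail split can be sketched in a few lines.  This is a toy illustration (all names and thresholds are invented, not any vendor's implementation): raise one alert when the failed-login rate crosses a threshold, but keep the atomic events around so an investigator can pull them on demand.

```python
# Hypothetical sketch: one alert per conclusion, raw events kept for drill-down.
import time
from collections import deque

THRESHOLD = 20        # failed logins per window that suggests automation
WINDOW_SECONDS = 1.0  # sliding window size

failed_logins = deque()   # atomic events: (timestamp, username, source_ip)

def record_failure(username, source_ip, now=None):
    """Store the raw event; return an alert dict if the rate crosses the threshold."""
    now = now if now is not None else time.time()
    failed_logins.append((now, username, source_ip))
    # Drop events that have aged out of the sliding window.
    while failed_logins and failed_logins[0][0] < now - WINDOW_SECONDS:
        failed_logins.popleft()
    if len(failed_logins) >= THRESHOLD:
        # The alert is the conclusion; the details stay available on demand.
        return {"alert": "possible automated login attack",
                "rate": len(failed_logins),
                "details": list(failed_logins)}
    return None

# Usage: simulate a burst of 25 failures in the same instant.
alert = None
for i in range(25):
    alert = record_failure(f"user{i}", "203.0.113.9", now=1000.0) or alert
```

The point of the `details` field is exactly the "available on demand" property: the operator sees one conclusion, and only the investigator pages through the raw logins.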

Another kind of data that is useful is situational data:  how things are configured and what is happening during "normal" operation.  Viewing all the responses from a database is too much to ask of a developer -- but the developer would benefit a lot by knowing that some queries are taking 25 minutes to return (do you suppose that would have some effect on application performance?).  This is the sort of data that is incredibly useful, but setting up every possible abnormal situation to trigger an alert is way beyond the scope of an overworked operations team.  Sometimes you just have to sit down every so often and do some exploring to find these sorts of operational problems.  Packet captures can teach you things you can't learn any other way -- if you have the time and skills to read them.
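That kind of exploration is often just a quick pass over a log.  A rough sketch (the log format here is invented for illustration): instead of pre-building an alert for every abnormality, scan a query log for statements whose duration is far beyond normal.

```python
# Hypothetical exploration script: find the 25-minute queries in a query log.
SLOW_THRESHOLD_SECONDS = 60.0

def find_slow_queries(log_lines):
    """Each line is assumed to look like '<duration_seconds> <sql...>'."""
    slow = []
    for line in log_lines:
        duration_str, _, sql = line.partition(" ")
        try:
            duration = float(duration_str)
        except ValueError:
            continue  # skip malformed lines
        if duration >= SLOW_THRESHOLD_SECONDS:
            slow.append((duration, sql))
    return sorted(slow, reverse=True)  # worst offenders first

sample_log = [
    "0.02 SELECT name FROM users WHERE id = 7",
    "1500.0 SELECT * FROM orders o JOIN items i ON o.id = i.order_id",  # ~25 min
    "0.15 UPDATE sessions SET last_seen = NOW()",
]
offenders = find_slow_queries(sample_log)
```

Ten lines of throwaway script, run every so often, can surface the problems nobody had time to write an alert for.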

Because detection is expensive.  It requires the luxury of having staff both knowledgeable in the technology and in the context of those particular systems, and having them devote a lot of their time just to sitting and looking at things, sorting out what's normal from what's not.  Those are the kind of costly eyeballs that have been transferred so frequently to managed security service providers.  It's the kind of thing you pay consultants to do, because if your staff weren't completely occupied with keeping the infrastructure running, you wouldn't be allowed to keep them.  Data analysis today is expensive, and it's a one-off deal unless you can find economies of scale somewhere.

Yes, automation is getting better, but it's not there yet.  There are still too many alerts taking up too much time to sort through (particularly in the tuning phase).  IT staff get hundreds of emails a day; they can't handle more than two or three alerts that require real investigation.  (By the way, this is why operations often can't respond to something until it's down -- it's the most severe and least frequent kind of alert that they receive all day, and they don't have time to chase down anything lower-level, like a warning message that hasn't resulted in badness yet.)

If you break security events down, you're generally looking for two kinds of things:  normal activities that are being done by the wrong people (as in, from a Tor exit node through your administration console), or abnormal activities that are being done by the "right" people (internal fraud, or someone has taken over an authorized account).  And by "people," of course, I also mean "systems," but at first glance it's sometimes hard to tell the difference. 
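Those two checks can be expressed as a toy classifier.  Everything here is made up for illustration (the exit-node list, the user profiles, the event shape); real detection would draw on live Tor exit lists and baselined behavior, but the logic is the same two questions.

```python
# Toy sketch of the two detections: wrong people doing normal things,
# and "right" people doing abnormal things.
TOR_EXIT_NODES = {"198.51.100.23", "198.51.100.77"}          # hypothetical list
NORMAL_ACTIONS = {"alice": {"read_report", "update_record"}}  # baselined behavior

def classify(event):
    """event: dict with 'user', 'source_ip', 'action'.  Returns a finding or None."""
    if event["source_ip"] in TOR_EXIT_NODES:
        return "normal activity, wrong origin"
    allowed = NORMAL_ACTIONS.get(event["user"], set())
    if event["action"] not in allowed:
        return "right account, abnormal activity"
    return None

findings = [classify(e) for e in [
    {"user": "alice", "source_ip": "198.51.100.23", "action": "update_record"},
    {"user": "alice", "source_ip": "10.0.0.5", "action": "export_all_records"},
    {"user": "alice", "source_ip": "10.0.0.5", "action": "read_report"},
]]
```

The hard part, of course, isn't the ten lines of logic; it's deciding what belongs in those two sets, which is exactly the "wrong"/"right" determination discussed below.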

This determination of "wrong" and "right" is a security activity, and for the reasons I listed above, operations people may not care that much until it makes something happen that they have to fix.  If someone wipes a database, they'll care a whole lot, but if there's some unusual encrypted traffic leaving the enterprise on port 80, not so much.  A fully leveraged (i.e. overworked) ops team doesn't have time to analyze alerts at that level.

"Wrong" and "right" to the business is on a completely different stratum, and it's one that's hard for automation to reach today.  Executives care when it gets to the level where they have to do something about it, like fire someone for looking at patient data, or talk to the press about a breach.  They care when an event starts to present the risk of legal liability or increased cost.  But you can't bring them alerts like that until you have digested everything at a lower level and put together enough evidence to reveal a business issue.

And finally, historical data can be extremely useful in determining what works in security and operations and what doesn't.  But that kind of data has to be analyzed in a different way from real-time operational data or situational data.  It requires a different model that caters to the requirements of risk analysis -- and that, too, is expensive, even assuming you know how to do it today.  (Hi, Chris.)

My point here is to say that data-driven security is where we need to go, absolutely.  But there is no single path to take with the data we have; there are a number of divergent paths that are all needed in the enterprise.  We also need to be able to drive the data in the right delivery directions -- which means that we need a really good data navigation system.

Thursday, February 9, 2012

Security: ur doin it rong.

As I mentioned before, a lot of security work consists of telling people they're doing something wrong.  There are all the "thou shalt nots" in security policies, there's the "scanning and scolding" of vulnerability assessment, and there's the "Ha! Got you!" inherent in penetration testing and exploit development.

In other words, it takes a lot of moxie (pun intended) to stand up to a security professional.

Rob Lewis, aka @Infosec_Tourist, made the comment yesterday:
You're right. Nobody says "we're screwed!" with as sincere and calm a demeanour as @451wendy.
Which I appreciate, but it's been bothering me lately that that's almost always how we discuss security.

In his preso at Security B-Sides London last year, David Rook (aka @securityninja) made a great point about application security:  if we taught driving the same way we teach secure development, we'd make a whole big list of different ways you could crash the car, but never actually tell the student how to drive safely.

A good number of talks at security conferences focus on what we (or other people) are Doing Wrong.  Very, very few are about how to do something right.  Part of the reason for this, of course, is that practitioners are afraid to stand up in front of an audience and talk about how they're defending themselves, for fear that someone in the audience will take it as a challenge and de-cyber-pants them before they've even gotten to the Q&A session.  (I've heard tell of presenters' laptops being hijacked in the middle of a presentation.)  I know a lot of practitioners are doing very cool things that their management would never let them say publicly.

But when we focus too much on what people are doing wrong, there's a danger of our talks turning into pompous lectures.  "We need to do something different from what we're doing today."  Okay, but what, exactly?*  This is why I admire those who are proposing alternative solutions, such as Moxie Marlinspike's Convergence.  These folks might be right, or they might be wrong, but at least they're trying to make things better.

So, lest this turn too Gödel, Escher, Bach on us, I'll stop lecturing too, and talk about what I plan to do about it.  I'm going to do more talks about what I think works in security.  I've done a few before on topics such as how to bootstrap an infosec program, what multi-contextual identity and access management looks like, and how to dicker on the contract with third-party providers.  I won't aspire to #sexydefense; I'll leave that to the ones who show up all the time on the Top Ten Infosecsiest lists.  But I'll encourage people to turn that frown upside down, and try not to bring up a problem without also proposing a solution.


Maybe this way, we can get invited to a few more non-security parties instead of having to throw them all ourselves.


*No, the answer is NOT "use our product."  Thanks for playing, though.

Wednesday, February 8, 2012

Insecure at any speed.

With the release of breach data reports, such as the one from Trustwave SpiderLabs that came out recently and the highly anticipated one from Verizon Business, inevitably comes a wave of data dissection and then disbelief.  Security pundits moan at the statistics, such as the one this year that 78% of organizations that Trustwave investigated had no firewalls at all.  The report itself takes an incredulous tone as it describes the pervasive use of unencrypted legacy protocols (one highlighted case study described a breach involving an X.25 network), insecure utilities such as telnet and rsh, and more.

Security pros who specialize in this sort of thing may be surprised at how big the problem is, particularly among smaller enterprises, but anyone who has actually tried to implement security in these organizations isn't surprised at all.  You can tell by the faces in the audience when one of these talks goes on:  it's the difference between "ZOMG!" and "Yup, *sigh*."

It's not that these organizations don't care about security.  You'd have to know about security first in order to care about it.  The next time you go to a sandwich shop or a gas station, ask the manager about the security in the POS system they're using.  It should be an interesting, but very brief, exchange.

Should everyone be able to manage their own security?  It's very much out of reach for those below the security poverty line; when you think about it, the level of security management needed for technology today reaches the equivalent of having to rebuild and restock grocery shelves on a weekly basis, or requiring an accountant to know construction, electricity and plumbing for the office.  Just reading through the Trustwave report, and all the myriad ways that systems are breached, I can't help but imagine the look on a manager's face if I made it into a checklist and handed it out.  Who outside of the clannish IT industry knows how to spell ftp, much less knows that it's insecure?  Who would know the better options and be able to implement them? Who has the time to examine and reconfigure computers on a regular basis?

What this indicates to me is that our IT infrastructure -- from the networks to mobile -- is inherently, badly insecure.  And we're so far down the road in its widespread implementation that it will be decades before the problem is substantially fixed, even assuming every software developer and manufacturer started fixing it today.  Nobody is going to pay to replace what's running just fine today -- until someone loses a figurative eye.

As technology advances, organizations have to deal with an ever-widening range of technology that they have to try to secure.  Yes, there are still X.25, COBOL, VMS, DOS, NT, SunOS, Sybase, and token ring out there. At the same time, iOS and Android are coming into play, along with "the cloud" and Hadoop and NoSQL and everything else that's new.  A CIO needs to know about all these; a CISO has to know how to secure them all -- especially when older systems aren't compatible with updated software.  The complexity grows year by year, and the inertia of the legacy environment weighs more heavily on it.

And make no mistake: security is disruptive.  It's enormously disruptive.  Getting the network architected correctly, every version of software patched and every configuration right, especially after the system has been in use for a while, is as disruptive to the business as migrating to a completely new system or platform.  Ask anyone who has tried to manage a security initiative in an enterprise.  Even assuming the enterprise wants to do it, it's a major undertaking.  All this shows how badly security is designed today; you shouldn't have to keep reconfiguring your systems on a weekly or monthly basis in-flight just to keep the security entropy at bay.

It's an intractable problem, and frankly, it's one that the enterprise shouldn't have to solve.  People are trying to work with the equivalent of a pencil, and it's not their fault that their pencils are fragile, complicated, and prone to exploding at inopportune moments.  They shouldn't have to know or care why the pencil isn't working; they want a new one without any delay, and without hearing long stories about how the graphite in this type of pencil isn't backwards-compatible with all the erasers in the firm.

So when we read about how bad security is getting, we shouldn't be pointing the finger at the compromised enterprises.  We should be pointing it at their IT providers, who really ought to know better; but more fundamentally, we should be pointing it at ourselves.  We should stop demanding that the user be responsible for security; those of us who are building this stuff to begin with should fix it ourselves, and build it in to all future technology.  Today security is an afterthought, and a bad one at that.  As long as it remains separate from the systems it's supposed to protect, instead of being simply an attribute, and as long as it requires users to maintain an abnormal height of awareness as they go about their daily jobs, security is going to continue to be as bad as it is today.

Tuesday, February 7, 2012

Analyst geometries.

Quadrants and cycles and waves, oh my! 

We're all familiar with the best-known graphics, in which there are #WINNING parts of the page and #LUSING parts.  In fact, I like anything that lays out concepts and relationships so that I can pick them up at a glance, like this lovely "subway map" from The Real Group.  I've argued that my employer needs a "magic dartboard" so that we could write reports like this:
"Vendor X is in its third year right next to the bullseye.  On the other hand, Vendor Y took a wrong turn recently and is now firmly wedged in the fake wooden paneling on the wall."
I myself have presented a Punnett Square of Doom before; we have Christofer Hoff's Hamster Sine Wave of Pain; and we have the one that started it all, Andrew Jaquith's Hamster Wheel of Pain.  Someone even proposed a magic quadrant for analysts, with one axis being "ego" and the other being "clue." (I'm not drawing that one up; someone else will have to do that.)

However, the issue in drawing something out, especially as a chart or graph, is that people want to see numbers (mostly so they can argue with them: "We should be at least 3.5 to the right!").  And where there are numbers, there is a danger of misleading math holding it all together: quantitative depictions of what are really qualitative properties.  I don't think anyone means "20/300" when describing a company's vision.*  There's also a tendency by decision-makers to turn the positioning into a binary sort of proposition: "Upper right or not?  Okay, I'll sign the purchase order."  I've never had a discussion in which I successfully argued for one vendor over another based on one being eighteen pixels down but twenty degrees north-northwest of the equator.

So what kinds of graphics are useful without turning the exercise into a rating system?  I started a mind map of vendors in one particular sector, in which I simply tried to categorize them by offerings, show who was reselling whom, and who was partnering with whom.  It turned into a confusing mass of spaghetti faster than you could say "al dente."  It certainly wouldn't help anyone who was trying to evaluate products.

The problem is, sectors within security are blurring and merging, companies are building out portfolios, and everyone's adding discrete functionality from different categories.  Static and dynamic security analysis, for example, aren't separate revenue streams for some vendors who do both, and it'll just get more muddled when you add "glass box" or "hybrid" testing to the mix.  To make matters worse, some vendors invent a new sector for themselves: "We're not Category X!  We're next-generation big data hybrid security snorkeling!"  There just aren't enough drinks at RSA to make up for that kind of headache.

So any kind of graphic that I can come up with to depict market placement is going to look more like Jackson Pollock than a fixed geometry, maybe with contrails behind some of the vendors going in different directions from their current paintdrop.  Especially with the startups, the best I could do would be to create a magic pinball machine.  I'll mull it over some more and let you know what I come up with for the next report.



*Although it would be really fun to get into business astigmatism or technology presbyopia.  Hey!  Magic Spectacles!

Monday, February 6, 2012

Of Egyptian rivers &c.

Just for fun, I've compiled some of the top security excuses I've heard in my career.

  1. It's okay, it's behind the firewall.
  2. Won't antivirus catch that?
  3. No, we don't have confidential data on our system, just these Social Security numbers of our employees.
  4. But nobody would do that [exploit of a vulnerability].
  5. I can't remember all these passwords.
  6. My application won't work with a firewall in the way.
  7. They won't be able to see that; it's hidden.
  8. It's safe because you have to log in first.
  9. No, we don't have credit cards on our system, just on this one PC here.
  10. We didn't HAVE any security issues until YOU came to work here.*

*True story.