NIST Cybersecurity Framework Needs More Focus on Collaboration and Finding Anomalies

Jason Fredrickson

A few days ago, I was delighted to see the National Institute of Standards and Technology (NIST) release its Preliminary Cybersecurity Framework for reducing cyber risks to critical infrastructure. And my first read-through was pretty positive: they cover a lot of material, and I think it will help organizations understand the full picture of security readiness. Their tiered approach, for instance, is sound, and I’ve seen it work successfully in other industries: e-discovery has the EDRM Maturity Model, and software development has the CMMI. And I’m very pleased to see such attention paid to PII and privacy.

That said, however, I saw a few structural problems on my second review. The Framework has a lot of noise about security policies and procedures and not as much of a call-to-action on collaboration and threat intelligence-sharing as I would like. It lacks any mention of proactive forensics or proactive investigation. It contains a wealth of detail on rules and process for ensuring information security, but very little in the way of the means of, or requirements for, organizations to work together to fight the good fight. And it has a major hole in its attempt to categorize threat detection and response.

Detection is Where the Rubber Meets the Road


The framework names five functions for implementation: Identify, Protect, Detect, Respond, and Recover. And of these five, “Detect” is what we should all care about.

  • Identify--knowing what needs to be protected--is a worthwhile dream, but IS teams have been attempting this, unsuccessfully, for decades--it's nothing new. Putting it into this framework isn't going to change anything.

  • Protect--encompassing policies, procedures, and access controls--helps, but the bulk of the threats we deal with today are engineered to bypass these controls.

  • Respond--containment, eradication, and notification of stakeholders (law enforcement, etc.)--is important, but what we're seeing is that the response step usually occurs far too late to have any real impact. By the time these steps are taken, the damage is done.

  • Recover--managing public relations and reputation damage--is perhaps the least important part of the entire process.

All this focus on security protocols, permissions, and reputation is worthwhile, but at the end of the day, detection is where the rubber meets the road. Despite our ever-increasing proficiency with access control and security policies, we continue to see more intrusions. The black hats will find ways past our protocols, procedures, and access controls. And if you can’t detect the attack, you can’t respond or recover.

What about Deviation from the Baseline?

Detection is where we should be making bold strides, and it represents our biggest opportunity for collaboration. And I am thrilled to see that the very first category in the Detect function is “Anomalies and Events.” We’ve been talking about “events” and “event management” for years, but we all know that the scariest words in the English language are: “Huh. That’s funny.”

Something strange or weird--in short, an anomaly--is usually the first clue of a potential security incident. And most of the time… we all miss it.

We need to focus on detecting the anomaly and seeing the behavior that’s different for what it is--which brings me to the biggest hole in the entire Preliminary Framework. The first subcategory (DE.AE-1) in Detect calls for the creation of a baseline of normal behavior, but there is no explicit mention of detecting deviations from that baseline. These deviations are the first indicators of potential malfeasance, and they need to be treated with the same gravity as any other security event. Deviations from normal behavior are often the only indication of an Advanced Persistent Threat--an attack that will almost certainly use customized malware built to evade every signature-based detection system.
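
To make that concrete, here’s a minimal sketch of what “detecting deviations from the baseline” might look like in practice. The metric (daily outbound bytes from a host), the sample numbers, and the three-sigma threshold are my own illustrative choices, not anything the Framework prescribes:

```python
# Minimal sketch of baseline-deviation detection (illustrative only).
# The metric, sample data, and threshold are hypothetical choices.
import statistics

def build_baseline(observations):
    """Summarize 'normal' behavior as a mean and standard deviation."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a new observation that deviates from the baseline by more
    than `threshold` standard deviations (a simple z-score test)."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Example: daily outbound megabytes from one host over the past two weeks.
history = [120, 135, 110, 128, 140, 119, 131, 125, 138, 122, 129, 133, 117, 126]
baseline = build_baseline(history)

today = 410  # a sudden spike in outbound traffic
if is_anomalous(today, baseline):
    print("Deviation from baseline detected -- raise an event for investigation")
```

In a real environment the baseline would be far richer than a single mean and standard deviation, but the principle--compare new behavior against a recorded norm and raise an event on deviation--is exactly what DE.AE-1 should be paired with.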

I would also like to see explicit mention of something I’ll call proactive forensics--a practice I see security groups across the nation doing, but one that attracts little discussion. Well-informed security groups will often audit various network components, even without cause, looking to detect incidents before they can grow. I don’t see any mention of that here.
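
To illustrate, a proactive audit can be as simple as periodically re-hashing critical files and comparing them against a known-good manifest--no alert required to trigger it. This is a sketch under my own assumptions; the directory, manifest path, and schedule below are hypothetical:

```python
# Minimal sketch of a "proactive forensics" audit (illustrative only).
# The directory, manifest path, and schedule are hypothetical choices.
import hashlib
import json
import os

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit(root, manifest_path):
    """Compare files under `root` against a known-good hash manifest and
    report anything modified or unexpected -- even without a reported incident."""
    with open(manifest_path) as handle:
        known_good = json.load(handle)  # {relative_path: sha256}
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            current = hash_file(path)
            if rel not in known_good:
                findings.append(("unexpected file", rel))
            elif known_good[rel] != current:
                findings.append(("modified file", rel))
    return findings

# Run this on a schedule (e.g., nightly), not just after an alert.
for issue, path in audit("/opt/webapp", "/var/audits/webapp_manifest.json"):
    print(issue, path)
```

The specific check matters less than the habit: the audit runs on a schedule, before anyone has reported that anything is wrong.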

Paranoia vs. Collaboration

I have one more problem with the Framework as a whole: the lack of collaborative language. ID.RA-2 discusses receiving information about threats from information sharing resources, but not contributing any. PR.AT-3 states that third parties need to understand their responsibilities, but does not include them in any conversation of how those responsibilities are assigned. DE.CM-6 talks about monitoring external service providers. RS.CO-5 uses the term “voluntary” to describe coordination with external stakeholders in the event of an incident. And RC.CO-1 and RC.CO-2 discuss reputation repair and public relations, not clear disclosure of the threat or how to prevent other organizations from suffering a similar attack.

This is not collaborative--this is paranoid. I understand that we are all participating in a world-size defense in depth and need to be implementing our own checks, but there needs to be more here about collaborating with our peers in the security industry. I understand the need to protect an organization’s reputation, and I understand the cutthroat competitive environment so deeply embroidered into our daily business life. But the good guys have to work together, or the threat landscape will only become more daunting and the bad guys more successful.

Final Thoughts

So am I happy?

In a word--yes. I think that the Preliminary Framework is a good--even great--start. Its authors clearly recognize that it needs to be a living and growing standard, offering areas for potential improvement. But it’s just a start.

We need to be having more conversations about collaborating and detecting threats before they can wreak havoc, and less about repairing reputation after the damage is done. That’s the path to the greater good, and the path I hope to see in this Framework in the future.

Jason Fredrickson is the Senior Director, Enterprise Application Development at Guidance Software. 
