Article by Dr. Martin Stemplinger, Senior Security Consultant (Germany). Dr. Stemplinger has almost 20 years of experience in IT Security. He worked as a security architect and security manager at a German financial institution responsible for security governance, network & system security and identity & access management. Currently he works as a senior security consultant at a large European telco provider.
It seems that hardly a day goes by without news coverage of a high-profile data breach or successful attack. All of these organisations employ security teams that do their best to secure their IT environments, although some may lack the necessary focus on security or simply don't get sufficient budget. So why do all these data breaches still occur? One reason may be that too many security teams concentrate on protective measures alone.
Historically, security was chiefly concerned with access control (remember the Bell-LaPadula model?). The underlying idea was that if we restrict each user's access to exactly what he or she strictly needs, our crown jewels will be sufficiently protected.
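To make the access-control idea concrete, the two core Bell-LaPadula rules can be sketched in a few lines of Python. This is an illustrative toy, not a real reference monitor; the level names and their ordering are my own assumptions for the example.

```python
# Toy classification lattice (level names are invented for illustration).
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple-security property ("no read up"): a subject may only read
    # objects at or below its own clearance level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-property ("no write down"): a subject may only write to objects
    # at or above its own level, so classified data cannot leak downward.
    return LEVELS[subject_level] <= LEVELS[object_level]
```

So a "secret"-cleared user may read a "public" document but may not write to it, which is exactly the confidentiality guarantee the model is built around.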
After the invention of the Internet, perimeter defences became a must. Everybody implemented firewalls at the perimeter, sometimes very sophisticated multi-level architectures that were thought of as the equivalent of a high wall that nobody would be able to penetrate. Some years ago we learned that the concept of a hard shell with a soft inside is dead. So what did we do? We segregated our networks and hardened the hosts within. In a way, we just shrank the perimeter into many smaller perimeters.
Nowadays antivirus software is present almost everywhere; it is supposed to notice any malware and block it, thus protecting us. Later, people added network intrusion detection and prevention systems to the mix, again with the idea of protecting against intrusions from the perimeter or from other networks. After the first big data breaches, people started to add sandboxing devices to their networks as yet another attempt to block malicious attackers.
Because of this we as an industry are used to thinking in terms of protection and this is also the simplest metaphor to explain to management. The fundamental problem with this is that it skips over something: attackers are really clever and will find a way to get what they want whatever the protection. In a way cyber threats are like bacteria in the human body: they are constantly in-and-around us. If we accept that fact, then the question changes from “what do we need to do to protect against an attack?” to “what do we need to do to survive the inevitable successful attack with as little damage as possible?”
If we start to think in this way, the obvious first question (and unfortunately a hard one!) becomes: “how would we notice an attack, and how long would it take us?” As a hint: according to several studies, the mean time to compromise a network is measured in days, whereas the mean time to notice the compromise is measured in months. Furthermore, most companies only notice a breach because outside parties tell them. And the closely related second question is: “if we notice the attack, do we know what to do to contain it and limit its consequences?”
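One way to put a number on the "how long would it take us?" question is to track mean time to detect across past incidents. The sketch below shows the arithmetic; the incident dates are invented for illustration (the studies cited above only give the days-versus-months contrast, not these figures).

```python
from datetime import datetime

# Hypothetical incident records: (time of compromise, time of detection).
# These dates are made up purely to demonstrate the calculation.
incidents = [
    (datetime(2015, 1, 1), datetime(2015, 4, 15)),
    (datetime(2015, 2, 10), datetime(2015, 7, 1)),
    (datetime(2015, 3, 5), datetime(2015, 5, 20)),
]

def mean_time_to_detect_days(records):
    # Average the gap between compromise and detection over all incidents.
    gaps = [(detected - compromised).days for compromised, detected in records]
    return sum(gaps) / len(gaps)
```

For the sample data this works out to roughly a hundred days, i.e. detection lagging months behind compromise, which is the gap the article is warning about. A team that tracks this metric over time can at least see whether its detection capability is improving.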
The trouble with these kinds of questions is that you can’t buy a shiny box to solve the issue: it involves people and processes, which are harder to get right than adding yet another tool. Don’t get me wrong: I’m far from declaring protection dead, but I strongly believe that it needs to be accompanied by solid incident detection and response capabilities.