Friday, September 19, 2014

Expect Control Failures - Create Fail-Safe Security Models



The "Heartbleed" vulnerability, a flaw in OpenSSL's encryption library that permitted an adversary to read server memory containing user authentication information (user IDs and associated passwords), was used as an excuse for data breaches by several firms. After all, once these credentials are known for a powerful user like a network engineer or an administrator, your systems are wide open to the attacker.

Nevertheless, the real design flaw in the above scenario is that these firms assumed their single-factor "something you know" authentication credentials (user ID and password) would always remain secret and known only to the authorized user. Their designs did not expect this authentication control to fail, so no additional layers were in place either to prevent a failure of this control or to detect one if it happened.


Speaking directly to the choice of single-factor authentication: with no compensating controls, this security model is appropriate only for private networks with little or no Internet access. A two-factor authentication design, particularly an Out-of-Band (OOB) solution that uses a cellular phone as the "something you have," would have prevented the breaches associated with Heartbleed, because the adversary would not have had access to the cellular phone to complete authentication.
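The two-factor idea can be sketched in a few lines. This is a minimal illustration, not a production design: the function names and the six-digit code format are my assumptions, and a real OOB system would deliver the code over a separate channel (SMS, voice call) and enforce expiry and retry limits.

```python
import secrets

def issue_otp() -> str:
    """Generate a 6-digit one-time code to be delivered out of band,
    e.g. via SMS to the user's registered phone (hypothetical flow)."""
    return f"{secrets.randbelow(10**6):06d}"

def verify(password_ok: bool, otp_sent: str, otp_entered: str) -> bool:
    """Both factors must succeed: the password (something you know)
    and the OOB code (something you have). compare_digest avoids
    timing side channels when comparing the codes."""
    return password_ok and secrets.compare_digest(otp_sent, otp_entered)

print(verify(True, "123456", "123456"))  # True: both factors correct
print(verify(True, "123456", "000000"))  # False: stolen password alone fails
```

The second call is the Heartbleed scenario: the adversary has the password but not the phone, so authentication still fails.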

However, what if the two-factor authentication fails? Can we create a "fail-safe" security design that protects our data and information when this happens? Yes! A useful second layer is the deployment of Intrusion Detection and Intrusion Prevention System (IDS/IPS) utilities. If an adversary does gain authentication credentials and access to our network, the Intrusion Detection layer would notice that the session activity of our "Administrator" is against policy, and that this ID is attempting to copy and export sensitive information; it could then notify the Security Officer and perhaps terminate the session.


Finally, a third layer is Data Leak Prevention (DLP), which examines outbound data flows, both their size and content, and inspects their destination before permitting the data to leave our network.


The firms that experienced a Heartbleed-related breach deployed none of these layers. From where I stand, it was not so much the particulars of the Heartbleed bug that caused the breaches; the lack of a fail-safe security design was the root cause.

Friday, January 10, 2014

Annual Vulnerability Testing - Are you really managing your Internet exposures?


To achieve compliance, GLBA requires a financial institution to have an independent test, once a year, of its firewall's ability to protect the internal network. This is "managing their Internet exposure." An annual test is performed, the regulator is shown the invoice and a report stating that the firewall rules are protecting the firm, and compliance is achieved. Then a breach occurs and Management is "shocked that there is gambling in this cafe!"

Let's look at some facts. In my humble experience, an Internet-facing firewall is "pinged" about 11 times every second, 24 hours a day, 7 days a week, by robots looking to identify openings and place a marker for later exploitation. OK, the math: 60 seconds in a minute, 60 minutes in an hour, and 24 hours in a day gives us 11 per second × 60 × 60 × 24 = 950,400 knocks on the firewall door every 24 hours.
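The arithmetic checks out, as a two-line calculation confirms (the 11-per-second rate is the observed figure from my own experience above):

```python
PROBES_PER_SECOND = 11                           # observed scanning rate
probes_per_day = PROBES_PER_SECOND * 60 * 60 * 24  # seconds in a day
print(probes_per_day)  # 950400
```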

So when Jim, our network engineer par excellence, makes an adjustment to our firewall rules at 10 AM Wednesday morning to accommodate new business, and accidentally creates an interaction with existing rules that opens a vulnerability to the Internet, the question becomes: how long before we find this opening, and do we find it before the hackers do?

The answer is that under the annual vulnerability testing scenario, it could be up to a year before this vulnerability is discovered. If that is unacceptable, we can strengthen our control over the firewall and contract with a third party like Verizon to scan it every 24 hours and email us a status report. Under this control model, Jim makes the rule change at 10 AM, Verizon scans our firewall at 02:30 and sends Jim an email notifying him of the vulnerability, which he opens when he checks his email the next morning. He acts on the email and corrects the firewall rules by 10 AM Thursday morning. So our network was still exposed to almost a million "knocks at the door," even under our improved risk management approach. Are we actually managing our telecommunications perimeter risk effectively? I think not!
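The exposure window in that timeline can be computed directly. The specific dates below are placeholders chosen only to make Wednesday-to-Thursday concrete:

```python
from datetime import datetime

change = datetime(2014, 1, 8, 10, 0)  # Wednesday 10:00: rule change opens the hole
fixed = datetime(2014, 1, 9, 10, 0)   # Thursday 10:00: Jim corrects the rules
exposure_seconds = int((fixed - change).total_seconds())
probes = exposure_seconds * 11        # 11 probes per second during the window
print(exposure_seconds // 3600, "hours of exposure,", probes, "probes")
```

Even with daily scanning, the hole is open for a full 24 hours: the same 950,400 knocks at the door.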

Penny Wise?

It turns out that there are real-time software products that evaluate proposed changes to your telecommunications perimeter / firewall rules as you are making the change, and let you know if there is a potential vulnerability due to your actions. AlgoSec and ManageEngine are providers of these tools, though there are others. For a small price, such a tool can be installed, and it will alert Jim at 10:01 AM, just before his rule change reaches production. Jim, so advised, can correct the rule before it is ever pushed to production, actually managing the risks of Internet perimeter security effectively!
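The core idea of such a tool, checking a proposed rule against the existing rule base before it reaches production, can be sketched crudely. The rule format and rule set here are entirely hypothetical, and real products like those named above analyze full rule semantics (zones, NAT, object groups) rather than this toy tuple form:

```python
# Hypothetical rule: (action, source, destination_port); "any" is a wildcard.
EXISTING = [("deny", "any", 3389)]  # e.g. RDP blocked from the Internet

def covers(a: tuple, b: tuple) -> bool:
    """Does rule a's scope include rule b's scope?"""
    return (a[1] == "any" or a[1] == b[1]) and (a[2] == "any" or a[2] == b[2])

def check_proposed(rule: tuple) -> str:
    """Warn before pushing an allow rule that would override an existing deny."""
    for r in EXISTING:
        if r[0] == "deny" and rule[0] == "allow" and covers(rule, r):
            return f"WARNING: proposed rule would override the deny on port {r[2]}"
    return "ok"

print(check_proposed(("allow", "any", "any")))       # flags the dangerous change
print(check_proposed(("allow", "10.0.0.5", 443)))    # ok: narrowly scoped
```

This is the 10:01 AM moment: Jim sees the warning before the change goes live, not the morning after.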

With this approach, combined with a 24-hour or annual vulnerability test program and an Intrusion Detection System, you can actually stand before the Board and report that a breach due to a telecommunications perimeter security failure is acceptably unlikely. You are managing this risk.