Friday, September 19, 2014

Expect Control Failures - Create Fail-Safe Security Models



The "Heartbleed" virus where OpenSSL's encryption ability was compromised, permitting the adversary to see user authentication information (User ID and associated password) was used as an excuse for data breaches by several firms. After all, once this information is known for a powerful user like a network engineer or an administrator, then your systems are wide open to the attacker.

Nevertheless, the real design flaw in the above scenario is that these firms assumed the single-factor "something you know" credentials (user ID and password) would always remain secret and known only to the authorized user. The design did not expect this authentication control to fail, so no additional layers were in place either to prevent a failure of this control or to detect one if it happened.


Speaking directly to the choice of single-factor authentication: with no compensating controls, this security model is appropriate only for private networks with little or no Internet access. A two-factor authentication design, particularly an Out-of-Band (OOB) solution that uses a cellular phone as the "something you have," would have prevented the breaches associated with Heartbleed, because the adversary would not have had access to the cellular phone to complete the authentication.
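To make the OOB idea concrete, here is a minimal sketch in Python of the second-factor step. The send_sms gateway function and the phone number are hypothetical placeholders of my own; this is an illustration of the concept, not any vendor's product.

```python
import secrets
import time

def send_sms(phone_number: str, message: str) -> None:
    """Placeholder for an SMS gateway call (assumed, not a real API)."""
    print(f"[SMS to {phone_number}] {message}")

def start_oob_challenge(phone_number: str, ttl_seconds: int = 120) -> dict:
    """Generate a one-time code and deliver it out of band to the registered phone."""
    code = f"{secrets.randbelow(1_000_000):06d}"   # 6-digit random code
    send_sms(phone_number, f"Your login code is {code}")
    return {"code": code, "expires": time.time() + ttl_seconds}

def verify_oob_challenge(challenge: dict, submitted_code: str) -> bool:
    """The second factor passes only if the code matches and has not expired."""
    return (time.time() < challenge["expires"]
            and secrets.compare_digest(challenge["code"], submitted_code))

challenge = start_oob_challenge("+1-555-0100")
# The legitimate user reads the code from their phone and submits it:
print(verify_oob_challenge(challenge, challenge["code"]))   # True
# An attacker holding only the stolen password cannot supply the code:
print(verify_oob_challenge(challenge, "000000"))            # False (with overwhelming probability)
```

The point of the sketch is simply that the stolen "something you know" is useless without the "something you have."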

However, what if the two-factor authentication fails? Can we create a "Fail-Safe" security design that protects our data and information when this happens? Yes! A useful second layer is the deployment of Intrusion Prevention System (IPS) and Intrusion Detection System (IDS) utilities. If an adversary does gain authentication credentials and access to our network, the intrusion detection layer would notice that the session activity of our "Administrator" is against policy and that this ID is attempting to copy and export sensitive information; it would then notify the Security Officer and perhaps terminate the session.
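What might that policy check look like? A minimal sketch follows; the sensitive file locations and export threshold are assumed values of my own, not the configuration of any particular IDS product.

```python
from dataclasses import dataclass

@dataclass
class SessionEvent:
    user: str
    action: str        # e.g. "read", "copy", "export"
    target: str        # file or share touched
    bytes_moved: int

# Illustrative policy: administrators may manage systems, but bulk movement
# of sensitive stores is against policy and should raise an alert.
SENSITIVE_PREFIXES = ("/data/pii/", "/data/cardholder/")
EXPORT_THRESHOLD_BYTES = 50 * 1024 * 1024   # assumed 50 MB threshold

def evaluate_event(event: SessionEvent) -> str:
    """Return the detection verdict for one session event."""
    touches_sensitive = event.target.startswith(SENSITIVE_PREFIXES)
    if event.action == "export" and touches_sensitive:
        return "TERMINATE_SESSION_AND_NOTIFY_SECURITY_OFFICER"
    if event.action == "copy" and touches_sensitive and event.bytes_moved > EXPORT_THRESHOLD_BYTES:
        return "NOTIFY_SECURITY_OFFICER"
    return "ALLOW"

print(evaluate_event(SessionEvent("Administrator", "export", "/data/pii/customers.csv", 200_000_000)))
```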


Finally, a third layer is Data Leak Prevention (DLP): examine outbound data flows, both their size and their content, and inspect their destination, before permitting the data to leave our network.
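A rough sketch of that DLP check, assuming an illustrative destination allow-list, size ceiling, and content pattern of my own choosing rather than any product's defaults:

```python
import re

APPROVED_DESTINATIONS = {"partnerbank.example.com", "payroll.example.com"}  # assumed allow-list
MAX_OUTBOUND_BYTES = 10 * 1024 * 1024                                       # assumed 10 MB ceiling
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")                          # one example of "content"

def dlp_allows(destination: str, payload: bytes) -> bool:
    """Inspect size, destination, and content before data may leave the network."""
    if destination not in APPROVED_DESTINATIONS:
        return False
    if len(payload) > MAX_OUTBOUND_BYTES:
        return False
    if SSN_PATTERN.search(payload.decode("utf-8", errors="ignore")):
        return False
    return True

print(dlp_allows("unknown-host.example.net", b"Employee SSN 123-45-6789"))  # False: blocked
print(dlp_allows("payroll.example.com", b"Holiday schedule attached"))      # True: ordinary traffic passes
```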


The firms that experienced a Heartbleed-related breach deployed none of these layers. From where I stand, it was not so much the particulars of the Heartbleed vulnerability that caused the breaches; the lack of a Fail-Safe security design was the root cause.

Friday, January 10, 2014

Annual Vulnerability Testing - Are you really managing your Internet exposures?


To achieve compliance, GLBA requires a financial institution to have an independent test of its firewall's ability to protect the internal network once a year. This, supposedly, is managing its Internet exposure. An annual test is performed, the regulator is shown the invoice and a report that the firewall rules are protecting the firm, and compliance is achieved. Then a breach occurs and Management is "shocked that there is gambling in this cafe!"

Let's look at some facts. In my humble experience, an Internet-facing firewall is "pinged" about 11 times every second, 24 hours a day, 7 days a week, by robots looking to identify an opening and place a marker for later exploitation. OK, the math: 60 seconds in a minute, 60 minutes in an hour, and 24 hours in a day gives us 950,400 "pings" [11 per second * 60 * 60 * 24 = 950,400] knocks on the firewall door every 24 hours.
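Spelled out as a trivial calculation, using the 11-probes-per-second figure from my own experience:

```python
probes_per_second = 11            # observed average against an Internet-facing firewall
seconds_per_day = 60 * 60 * 24    # 86,400

probes_per_day = probes_per_second * seconds_per_day
print(probes_per_day)             # 950,400 "knocks on the door" every 24 hours
```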

So when Jim, our network engineer par excellence, adjusts our firewall rules to accommodate new business at 10 AM Wednesday morning and accidentally creates an interaction with existing rules that opens a vulnerability to the Internet, the question becomes: how long before we find this opening, and do we find it before the hackers do?

The answer is that under the annual vulnerability testing scenario it could be up to a year before this vulnerability is discovered. If that is unacceptable, we can strengthen our control over the firewall and contract with a third party like Verizon to scan it every 24 hours and send us an email reporting the status. Under this control model, Jim makes the rule change at 10 AM, Verizon scans our firewall at 02:30 and sends Jim an email notifying him of the vulnerability, which he opens when he checks his email the next morning. He acts on the email and corrects the firewall rules by 10 AM Thursday morning. So our network was still exposed to almost a million "knocks at the door" despite our improved risk management approach. Are we actually managing our telecommunications perimeter risk effectively? I think not!

Penny Wise?

It turns out there are real-time software products that evaluate proposed changes to your telecommunications perimeter / firewall rules as you are making the change, and let you know if your actions create a potential vulnerability. AlgoSec and ManageEngine are providers of these tools, though there are others. For a small price, such a tool can be installed, and it will alert Jim at 10:01 AM, before his rule change reaches production. Jim, so advised, can correct the rule before it is ever pushed to production, actually managing the risks of Internet perimeter security effectively!
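I am not reproducing any vendor's engine here, but the core idea can be sketched in a few lines: before a proposed rule is pushed, simulate the rule base with the change applied and test it against the flows the policy says must stay blocked. The forbidden flow, the rule fields, and Jim's example change below are assumptions for illustration only.

```python
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass
class Rule:
    action: str      # "allow" or "deny"
    source: str      # CIDR notation
    dest_port: int

# Assumed policy: nothing on the Internet may reach the database port.
FORBIDDEN = [("0.0.0.0/0", 3306)]

def first_match(rules, source, port):
    """Firewalls are typically first-match: return the first rule covering the flow."""
    for rule in rules:
        if ip_network(source).subnet_of(ip_network(rule.source)) and rule.dest_port == port:
            return rule
    return None

def check_proposed_change(existing, proposed):
    """Simulate the rule base with the proposed rule appended and flag policy violations."""
    candidate = existing + [proposed]
    problems = []
    for src, port in FORBIDDEN:
        hit = first_match(candidate, src, port)
        if hit and hit.action == "allow":
            problems.append(f"allows {src} to port {port} via {hit}")
    return problems

existing_rules = [Rule("deny", "0.0.0.0/0", 23)]
proposed_rule = Rule("allow", "0.0.0.0/0", 3306)             # Jim's 10 AM change
print(check_proposed_change(existing_rules, proposed_rule))  # warn before it hits production
```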

With this approach, COMBINED with a 24-hour or annual vulnerability test program and an Intrusion Detection System, you can actually stand before the Board and report that a breach due to a telecommunications perimeter security failure is acceptably unlikely. You are managing this risk.

Friday, March 15, 2013

The Last IT Risk Assessment was conducted by Coopers & Lybrand in 1980



Why are there still so many data breaches?

asks a board member of his CIO.

The question is of course a poor one.

It is born of a lack of clarity in the identification and understanding of technology risks. But let's play along. When does a data breach occur? Is it when -
  • a former employee's remote access is left on? No, but we audit for this, says the "Risk-Based Audit Program" Partner.
  • a file is not encrypted when it is stored? No, but we audit for this.
  • a firewall fails to stop malicious software from landing on the corporate network? No, but we audit for this.
  • intrusion detection software fails to detect an intrusion? No, but we audit for this.
  • the malicious software, having found sensitive information, sends that information to its author? Yes, but we don't audit for this.

How can this be?

It is a rather simple human story. In the late '70s, corporations were using mainframe computers on what we would now call a private-network basis. It occurred to management that there were profitability risks if these machines stopped working, and the Disaster Recovery Plan movement started. Then a clever programmer took the round-off error from the accounting program and credited that amount to a personal account; whether this actually happened or not, the story made the point and management got budgets for computer auditing.

But what to test? Where were the risks? The industry looked for answers to Coopers & Lybrand and the Handbook of EDP Auditing by Stanley D. Halper, published by Warren, Gorham & Lamont, Inc. Their risk assessment led to the concept of IT General and Application Controls. Extreme focus was put on ensuring that only employees had access to the computers and reports. There was no way to save data "off line" at that time: no thumb drives, no email, no instant messaging. In short, the only data egress would be an employee physically stealing a printed report or a data tape, and since access to these items was controlled, this exposure was managed.

The AICPA embraced the approach as gospel, the concepts were replicated in audit programs and regulatory standards, and they became the unquestioned common-sense approach for anyone in the know. To this day, GLBA, PCI, and HIPAA models are based on the access control models of C&L's 1980 assessment of risks! But the risks have changed significantly with the advent of the Internet and easy connectivity between computers!

The Solution - True Risk Identification and Mitigation

IT risk assessments need to ask: can we identify sensitive data that someone, or a software program, is attempting to send off our computers - yes or no? Without a well-designed Data Leak Prevention approach, the answer is no, we can't. The approach starts with identifying where sensitive data is kept, then "tagging" that data whenever it is copied or saved to a different location. The firewall, augmented with an outbound traffic sentinel called a data egress filter, looks for these tags and, when necessary, blocks the outbound transmissions. It sounds simple, and in fact it is, but one must be strong to fight the tide of conventional wisdom that has been accepted as truth since 1980!
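A minimal sketch of the tag-and-filter idea, with an assumed tag format of my own invention rather than any particular DLP product's marker:

```python
SENSITIVE_TAG = b"X-DATA-CLASS: CONFIDENTIAL"   # assumed marker stamped on copies of sensitive data

def tag_copy(original: bytes) -> bytes:
    """When sensitive data is copied or saved elsewhere, stamp the copy with the tag."""
    return SENSITIVE_TAG + b"\n" + original

def egress_filter_allows(outbound_payload: bytes) -> bool:
    """Gateway-side check: block any outbound transmission that carries the tag."""
    return SENSITIVE_TAG not in outbound_payload

customer_file = tag_copy(b"name,card_number\nJane Doe,4111111111111111")
print(egress_filter_allows(customer_file))                   # False: the egress filter blocks it
print(egress_filter_allows(b"quarterly newsletter draft"))   # True: ordinary traffic passes
```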

Friday, February 22, 2013

It's only a breach when data leaves our control - Stupid!



It's only a breach when information leaves our control. Typically this means the network where the information is at rest, but it can also mean a mistake has been made and an Excel spreadsheet with thousands of instances of PII (Personally Identifiable Information) has been sent to someone outside the Company.

How curious that the data access control models for information security, rooted in pre-Internet mainframe exposures, continue to drive the shape of our control models even though they do not actually manage the risk at hand - egress! Regulators and CPA firms alike toe the line for the access control model, while the actual exposure - data leaving our network without our consent - typically goes unchecked. No wonder that with all the certifications, all the software, all the auditors, and all the regulations, data breaches continue: we are all looking in the wrong direction!

Let's look a little at this crazy, insulting assertion.

One of the first questions I ask a victim of a breach is "What software did you have monitoring data egress on ports 80 (HTTP) and 443 (HTTPS)?" The answer: "None - but we check active users against active employees monthly, have an annual Internet vulnerability test performed, keep our anti-virus definitions up to date and pushed each night to all workstations, have our Intrusion Prevention and Detection Systems tuned up and watching, and have independent application-level security that requires a log-on to get to the data."

OK, but what happens when a zero-day botnet lands on your network (by definition, a zero day is not yet detected by the anti-virus tools) and the botnet finds and starts to transmit credit card numbers, associated names and birth dates, not to mention addresses and cellular phone numbers? What finds the outbound transmission at the gateways?

Their response is either silence (crickets) or our old enemy - rationalization!
 
The error in data flow is represented in the "onion" layered security model below.


[Figure: Access Control Model in use as of 2013 - based on circa 1980 exposures]

 
What do we do? We start by recognizing that we must clearly communicate that the actual risk to our business lies not in limiting access to data (while this will work if everything works perfectly) but in preventing unauthorized data egress.