als, bls, cissp

Those of you who have the misfortune to know me personally know that information security is but one piece of the pie that is Al Berg.  (mmmm…. pie…)  On Friday nights, I swap my desk for an ambulance of the Weehawken Volunteer First Aid Squad where I am an Emergency Medical Technician.  Most of the time, these two parts of my life don’t really intersect, but this week, I saw something that seems to bridge the gap.

So, there are two different kinds of ambulances here in the US.  BLS (Basic Life Support) rigs are staffed by EMTs who are trained in basic life support techniques focused on airway, breathing and circulation.  EMTs do not administer drugs – we cannot even give you a Tylenol for pain.  If you are unfortunate enough to be meeting us on a day when you are having a cardiac arrest, we will do CPR, give you oxygen and maybe zap you with an automated defibrillator.  We’ll also call for our ALS (Advanced Life Support) colleagues – the paramedics – to respond and give you the advanced monitoring and interventions (EKG, intubation, intravenous drugs, and the like) that we can’t.

As an EMT, I am always happy to have paramedics on any call, especially a cardiac arrest, so I was really surprised to read an article this week describing a study published in the Journal of the American Medical Association, which found:

90 days after hospitalization, patients treated in BLS ambulances were 50 percent more likely to survive than their counterparts treated with ALS. The basic version was also “associated with better neurological functioning among hospitalized patients, with fewer incidents of coma, vegetative state or brain trauma.”

Now, to be clear, your chances of surviving an out of hospital cardiac arrest are pretty lousy… 9 out of 10 patients who ‘code’ in the field will not survive to hospital discharge.  CPR works way better on TV than it does in real life.

Anyway, while I am a bit skeptical of this study’s results, it does seem to me that there is a bit of an information security aspect to this.  Time and again we hear of companies who have spent big on flashy technology still getting owned by hackers.  For example, Target had purchased advanced anti-malware defenses from FireEye as well as outsourced monitoring for those defenses.  According to reports, the people and the tech detected the bad guys, but Target’s failure to do the “information security BLS” of examining the systems that were showing signs of trouble sealed its place on the front page.

There are a lot of “information security BLS” measures we can take to protect our systems that don’t require flashy technology or wheelbarrows of money:

  • Documented policies and procedures
  • Least privilege for user accounts
  • Segmentation of internal networks
  • Applying security patches and updates in a timely fashion (see the sketch after this list)
  • Security awareness training
  • Sharing information with other organizations
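
(A quick aside for the hands-on crowd: the patching item above is the easiest one to let slide, and also the easiest to keep honest with a little automation. Below is a minimal sketch in Python, using a completely made-up inventory, of the kind of check I have in mind: compare what is deployed against the minimum patched versions you expect and complain about anything that lags.)

```python
# Minimal patch-currency check. The inventory and policy below are
# hypothetical; in real life they would come from a CMDB, a package
# manager, or a vulnerability scanner export.

def version_tuple(version: str) -> tuple:
    """Turn '2.4.41' into (2, 4, 41) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

# Hypothetical inventory: host -> {package: deployed version}
deployed = {
    "web01": {"httpd": "2.4.41", "openssl": "1.1.1"},
    "db01":  {"postgres": "9.6.2", "openssl": "1.1.1"},
}

# Hypothetical policy: minimum patched version we expect to see
minimum_patched = {
    "httpd": "2.4.46",
    "openssl": "1.1.1",
    "postgres": "9.6.20",
}

def find_laggards(deployed, minimum_patched):
    """Yield (host, package, have, want) for anything behind policy."""
    for host, packages in deployed.items():
        for package, have in packages.items():
            want = minimum_patched.get(package)
            if want and version_tuple(have) < version_tuple(want):
                yield host, package, have, want

for host, package, have, want in find_laggards(deployed, minimum_patched):
    print(f"{host}: {package} is at {have}, policy wants at least {want}")
```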

These (and many other) “information security BLS” interventions go a long way towards keeping hackers away from corporate data.  They aren’t complicated, and you don’t need to buy all sorts of blinkie light boxes to implement them.  Yet, time and again, companies fail to pay enough attention to them.  Part of the problem is that infosec professionals want to get hands-on with the latest technology, and another is that some of these low-tech interventions require serious time and planning to avoid negative impacts on the business.

So, my resolution for 2015 is to take another look at the Council on CyberSecurity’s Critical Security Controls list and make sure my organization is doing everything we can to implement them.   As an industry we need to make sure we are doing the BLS interventions right and apply the ALS level security-fu when it is needed.


insecure systems? no insurance for you!

It seems that car thieves have been targeting the keyless entry systems of high-end vehicles, taking advantage of security weaknesses in their on-board computers.  Beyond the stolen cars themselves, this has led some insurers in the UK to refuse coverage for certain models of Range Rovers in London unless their owners take additional security measures.  This is an interesting development – if your potential customers can’t get insurance coverage because your car’s (or other device’s) computer-enabled systems aren’t secure, then you have a real incentive to fix the problem.  Now, how do we apply this to other types of systems?
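
(For the curious: I don’t know exactly which weakness these particular thieves are exploiting, but a classic failure mode for keyless systems in general is an unlock exchange that can be captured and replayed. A more robust design challenges the fob with a fresh random value on every attempt. Here is a minimal, purely illustrative Python sketch of that idea; the key and function names are mine, not any carmaker’s, and note that it addresses replay attacks, not the relay attacks where a fob’s signal is simply amplified from inside the owner’s house.)

```python
# Purely illustrative challenge-response handshake for a keyless entry
# system, so that a recorded radio exchange cannot simply be replayed.
# Real automotive systems differ; the shared key here is a placeholder.

import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # provisioned into both the car and the fob

def car_issue_challenge() -> bytes:
    """The car sends a fresh random nonce with every unlock attempt."""
    return os.urandom(16)

def fob_respond(challenge: bytes) -> bytes:
    """The fob proves it holds the key without ever transmitting it."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = car_issue_challenge()
print(car_verify(challenge, fob_respond(challenge)))              # True
print(car_verify(car_issue_challenge(), fob_respond(challenge)))  # False: replayed response fails
```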


OAuth/OpenID flaw – ok, now what?

It seems like the latest big security story is a newly discovered flaw in the OAuth and OpenID protocols, which allow users to authenticate to third-party web sites using their account on another web site like Google, LinkedIn or Facebook.  Apparently, it is relatively easy for attackers to mount an attack via a phishing email with a link to a site which then asks the user to authenticate (to the fake site) using their Google account (or an account with any other identity provider which supports OAuth and OpenID).  The authentication pop-up will look legitimate – it will actually seem to point to the identity provider’s web site, but it will, in fact, deliver the unsuspecting user’s credentials to the attacker.
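
(For the implementers in the audience: there is no patch end users can apply here, but one piece of basic hygiene on the identity provider and client side is strict, exact-match validation of the OAuth redirect_uri against a pre-registered list, so that tokens and authorization codes cannot be bounced to a site the attacker controls. The sketch below is general OAuth hardening rather than a fix for this specific flaw, and the client IDs and URLs in it are made up.)

```python
# Sketch of strict redirect_uri checking on an OAuth authorization server:
# only exact, pre-registered redirect URIs are accepted. Client IDs and
# URIs below are hypothetical.

from urllib.parse import urlsplit

# Hypothetical client registry: client_id -> exact registered redirect URIs
REGISTERED_REDIRECTS = {
    "client-123": {"https://app.example.com/oauth/callback"},
}

def redirect_uri_allowed(client_id: str, redirect_uri: str) -> bool:
    """Accept only an exact match against the client's registered URIs."""
    allowed = REGISTERED_REDIRECTS.get(client_id, set())
    if redirect_uri not in allowed:
        return False
    # Belt and suspenders: never redirect over plain HTTP.
    return urlsplit(redirect_uri).scheme == "https"

print(redirect_uri_allowed("client-123", "https://evil.example.net/steal"))           # False
print(redirect_uri_allowed("client-123", "https://app.example.com/oauth/callback"))   # True
```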

So what do we, as security professionals, do with this information?  Given the “behind the scenes” nature of the issue, and the fact that there is no cue to the user that a particular site is trying to use the flaw to gather credentials, we are stuck with telling our users to “be more careful” about using their Google/Facebook/LinkedIn etc. credentials to log in to sites.  Well, that’s pretty darn vague.  I guess the best advice to give people would be not to set up any new site credentials using OAuth/OpenID  until the problem has been fixed.

This is a classic example of the tradeoffs we make between security and convenience.  While logging in to multiple sites using credentials from a “trusted” provider makes life easier for the web user, he or she also risks having the security of every account linked to that ID compromised when that one provider suffers a security breach or there is a problem with the underlying technology.  This is one of the many reasons we need to move away from password authentication and come up with easy-to-use two-factor login methods that reduce the risk associated with weak or stolen passwords.
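
(For what it’s worth, probably the most familiar flavor of second factor today is the time-based one-time password, or TOTP, standardized in RFC 6238: the “enter the 6-digit code from your phone” dance. Here is a minimal standard-library Python sketch of how those codes are generated; the shared secret is a throwaway example value, not something you would use in production.)

```python
# Minimal time-based one-time password (TOTP, RFC 6238) sketch using only
# the standard library. The base32 secret below is a throwaway example.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints the current 6-digit code
```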


how not to do a risk assessment

So, the risk management mavens for the City of Portland, Oregon have provided us all with an object lesson in how not to make risk-based decisions.  It seems that one of the local young rowdies had the audacity to urinate into one of the reservoirs supplying the city with drinking water.  This particular reservoir contains 38 million gallons of water.  Horrified at this sullying of the public water supply, the city fathers made the obvious decision – empty and refill the reservoir.  I mean, it had pee in it!  Never mind that the uncovered reservoir contains all sorts of other contaminants (animal urine and feces, dead birds, pollutants carried by rain, etc.) as a matter of course.  Never mind that the concentration of urea caused by the wayward urinator would be around 3 parts per BILLION – the EPA allows up to 10 parts per billion of arsenic in tap water, people.  No, because this particular infinitesimal contamination made the news, 38 million gallons of water is going to be dumped.  As someone who has witnessed small children lugging jerry cans of water to their homes located miles away from the communal tap in Rwanda, this makes perfect sense to me.
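
(If you want to check my math: assume a generous half liter of contribution from our wayward urinator and divide by 38 million gallons. The sketch below does the arithmetic and comes out to a few parts per billion by volume, urea and all.)

```python
# Back-of-the-envelope dilution check for the reservoir incident.
# The reservoir volume is from the post; the half liter of urine is a
# deliberately generous assumption.

GALLON_IN_LITERS = 3.785
reservoir_liters = 38_000_000 * GALLON_IN_LITERS   # ~1.4e8 liters
urine_liters = 0.5                                  # assumed

parts_per_billion = urine_liters / reservoir_liters * 1e9
print(f"~{parts_per_billion:.1f} parts per billion by volume")  # ~3.5 ppb
```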

It is this kind of ridiculous approach to risk management that ensures that society will spend billions of dollars protecting itself from the wrong risks and leave us vulnerable to the ones that really threaten us.

We need to get better at this, folks – science knows that people are bad at judging risk.  That’s why we need to train professionals in all fields to use evidence-based methods and processes which compensate for our built-in handicap in this area.  The basis for good risk analysis is to train kids in critical thinking skills early and often throughout their education.  Maybe they’ll be better at this stuff than we are.

 


remember bird flu?

We're coming for you, humans....

A couple of years back, before the H1N1 swine flu was all the rage, all of us disaster-obsessed types were focused on H5N1 bird flu, which, in addition to being 4Hs worse than swine flu, had a human death rate of 60%.  Then swine flu came along (underwhelming us as far as global pandemics are concerned) and we all went back to worrying about people with explosives in their underwear.  Well, it seems that the birds and pigs have been plotting behind our backs, coming up with a new hybrid bird-pig flu, which, in one case described in New Scientist magazine, developed a mutation that gives it the ability to bind to receptors found in the noses of pigs… and humans (cue ominous music, please).  Just a reminder that viruses (like the rest of nature, as far as I can tell) are out to destroy the human race.  And that we need to keep an eye on the flu.


testing, 1, 2, 3, oopsie!

Last week, an experiment conducted by Duke University and the European RIPE Network Coordination Centre (RIPE NCC) got a little bit out of hand, interrupting Internet traffic in 60 countries worldwide.  In all, about one percent of Internet traffic was affected by the test gone awry.  One percent of Internet traffic does not sound like a lot – most of that traffic was probably illegal file sharing, lolcats and porn, but what if your Internet-based business was affected?  My employer (who shall remain nameless and whose opinions this post does not reflect) is an Internet-based business in which the value of each (time-sensitive) transaction is probably thousands of times the average for the rest of the net.  We were not affected by the testers’ little oopsie, but had we been, the potential losses would have been significant.  I am sure my company is not the only one in such a situation.

Yes, Cisco did fix the bug which caused this particular outage, but I think that this incident points out some questions that really need to be answered:

Should researchers be conducting experiments on the Internet with potential for widespread negative impact on a shared business resource? If someone ran this type of potentially disruptive testing on my company’s network during business hours, I’d be looking for them to be fired, sued, arrested and forced to listen to this album for the rest of their lives.  Researchers need to realize that the Internet is the planet’s “production network” with no “maintenance window” and that the same best practices we follow in the enterprise (separate test environment, for example) need to be followed when tinkering with its innards.

Had someone experienced significant financial losses due to this experiment, what would their recourse be? No one expects the Internet to be free of glitches and outages, but in this case, a conscious decision was made to do something which could reasonably be expected to cause problems.  Could there be lawsuits here?  Are the researchers exposing their organizations to potentially ginormous liability?  If the damaged party was in, say, Asia, who would have jurisdiction over the case and where would it be tried?

In an era where cyberspace is increasingly recognized as a “battlespace,” could an experiment such as this (on a larger scale) be mistaken for a cyber attack and possibly lead to real world hostilities?

Researchers and governments should take this opportunity to stop and think about the “rules of the road” for the global Internet.  Long ago, we all recognized that the oceans are a common resource and that we need a Law of the Sea to allow us to agree on what is and is not acceptable on the bounding main.  It seems to me that the Internet is the sea of the 21st century and needs a similar set of supranational rules to ensure that it is accessible to all.  Are you listening, UN?


the great helium shortage of 2035?

It turns out that helium is important for more than party balloons and making our voices high and squeaky… and that we may run out of the stuff in spite of the fact that it is the second most abundant element in the universe (after hydrogen).  Amongst atomic element number 2’s many uses are cryogenics (required for MRI scans) and the manufacture of semiconductors, optical fiber and liquid crystal displays.  Here on Earth, there is a finite supply of helium, half of which sits in the US Government’s Federal Helium Program stockpiles.  In 1996, the US Congress decided to mandate that the entire stockpile be sold off by 2015.  The result?  Bargain-basement helium prices that encourage waste.  Many of the applications for helium can be designed to recapture and reuse the gas, but since the stuff is so cheap, there is no incentive for users to manage the supplies in a sane manner.  As a result, we could run out of the gas within 25 years.

Currently, there is no commercially viable way to make more helium – our supplies here on Earth are the result of radioactive decay, and extracting helium from the air would result in prices many thousands of times higher than today (think $100 for a single party balloon).  And I shudder to think how much a big-screen TV would cost in a helium-poor world (now we are talking an emergency the public can understand).

Seems to me that Congress screwed up here, and we still have time to fix the problem – simply raise the price of helium to a point where it makes sense to conserve the stuff.  The need for helium is only going to grow over the coming years, and we are setting ourselves up for a totally avoidable problem – time to write the congress-creatures…
