A security vulnerability in the way online storage provider Dropbox (and possibly rival Box) handles links to shared files made some documents (which were supposed to be viewable only by people designated by the file owner) accessible to web site owners using Google's visitor analytics and advertising tools. The rival online storage firm that found the issue claims to have reported the problem (which exposed sensitive files such as mortgage documents and tax returns) to Dropbox last November. Dropbox fixed the issue, which it insists is a feature rather than a security flaw, this past Monday.
This issue highlights the need to make encryption of files and data stored with cloud service providers, using keys kept on the user's local system, simple enough for non-technical folks. The solution also needs to support securely sharing encrypted files with a third party or with other cloud services you authorize. If cloud providers can get this right (no small feat), living your life in the cloud will truly be ready for prime time.
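The principle is simple even if the packaging isn't: encrypt locally, upload only ciphertext, and never let the key leave your machine. Here is a toy sketch of that flow using only the Python standard library (a SHA-256 counter-mode keystream stands in for a real cipher) — this is an illustration, not production crypto; real tools use vetted AEAD ciphers like AES-GCM from an audited library.

```python
# Toy client-side ("zero-knowledge") encryption sketch: the file is encrypted
# locally and only ciphertext ever reaches the cloud provider. NOT real
# crypto -- a vetted AEAD cipher (e.g. AES-GCM) belongs here in practice.
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key+nonce using SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # fresh nonce per file so keystreams never repeat
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct       # this blob is what gets uploaded

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)  # stays on the user's machine, never uploaded
blob = encrypt(key, b"tax_return_2013.pdf contents")
assert decrypt(key, blob) == b"tax_return_2013.pdf contents"
```

Note that sharing is the genuinely hard part the vendors have to solve: the recipient needs the file key without the provider ever seeing it, which is why products in this space wrap per-file keys with each recipient's public key.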
Some solutions which currently exist:
- Boxcryptor is a software solution which sits on top of Dropbox and other storage providers and automagically encrypts files as they are sent to and received from the cloud. They provide secure sharing as well as mobile apps for the major platforms. Of course, since Boxcryptor is an overlay to services like Dropbox, using this product would break the integration between Dropbox and other cloud apps.
- There is at least one consumer-usable provider (SpiderOak) which currently claims to offer this type of Zero Knowledge Encryption.
The real answer to the issue of cloud encryption lies in having the encryption built into the platforms in a standard and interoperable way. C'mon cloud vendors, you can do it!
One of the nice things about Apple’s iOS platform is the “hardware level encryption” that protects “all of the information on the device.” At least, that used to be the case.
Starting in iOS 7, email attachments stored on iPhones, iPads, and iPod Touches (remember those?) are not stored in encrypted form. A security researcher recently announced that he was able to retrieve plaintext attachments from encrypted iPhones using standard forensic tools. Apple never corrected its previous statements indicating that all data in iOS was “protected by hardware encryption,” so millions of personal and business users have been working under a false assumption of security for a couple of months now.
When the researcher reported the issue to Apple, he was told that they were aware of it but had no date for a fix.
This is why I continue to recommend that corporate users stick with containerized solutions for their iOS and Android mobile users. Consumer-level mobile devices are not designed with the level of security appropriate for business (especially in highly regulated industries like Finance and Health Care). Yes, it would be nice to use the native apps on personal devices to deliver corporate data from an ease-of-use point of view, but if your users are carrying around sensitive information in their email attachments, you have to consider the risk of an adversary extracting that information from the device relatively easily.
Apple really dropped the ball on this one. They were not up front with their users regarding the loss of a key security feature and didn't give them the chance to make an informed decision based on that information. Not cool. This incident underlines Apple's lack of commitment to and understanding of the corporate market. If they want to be a corporate player, they need to step up and accept the responsibilities that the role entails – otherwise, stop trying to do things halfway, guys.
It seems like the latest big security story is a newly discovered flaw involving the OAuth and OpenID protocols, which allow users to authenticate to third-party web sites using their account on another web site like Google, LinkedIn, or Facebook. Apparently, it is relatively easy for attackers to mount an attack via a phishing email containing a link to a site which then asks the user to authenticate (to the fake site) using their Google account (or any other identity provider which supports OAuth and OpenID). The authentication pop-up will look legitimate – it will actually seem to point to the identity provider's web site, but it will, in fact, deliver the unsuspecting user's credentials to the attacker.
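Part of what makes this class of attack possible is lax validation of where an identity provider is willing to send the user (and any tokens) after login. One provider-side mitigation is strict, exact-match validation of the OAuth redirect_uri against a pre-registered list. A minimal sketch of that check, with hypothetical client IDs and URLs:

```python
# Sketch of strict redirect_uri validation on the identity-provider side:
# only exact, pre-registered HTTPS URIs are accepted, so a code or token is
# never delivered to an attacker-supplied address. Client IDs and URIs here
# are made up for illustration.
from urllib.parse import urlparse

REGISTERED_REDIRECTS = {
    "client-1234": {"https://app.example.com/oauth/callback"},
}

def redirect_allowed(client_id: str, redirect_uri: str) -> bool:
    if urlparse(redirect_uri).scheme != "https":  # never deliver tokens over plain HTTP
        return False
    # Exact string match -- no prefix matching, no open-redirect endpoints.
    return redirect_uri in REGISTERED_REDIRECTS.get(client_id, set())

assert redirect_allowed("client-1234", "https://app.example.com/oauth/callback")
assert not redirect_allowed("client-1234", "https://app.example.com/jump?to=evil.example")
assert not redirect_allowed("client-1234", "http://app.example.com/oauth/callback")
```

Of course, none of this helps when the user is looking at a convincing fake login pop-up in the first place, which is why the advice below is aimed at users rather than providers.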
So what do we, as security professionals, do with this information? Given the “behind the scenes” nature of the issue, and the fact that there is no cue to the user that a particular site is trying to use the flaw to gather credentials, we are stuck with telling our users to “be more careful” about using their Google/Facebook/LinkedIn etc. credentials to log in to sites. Well, that’s pretty darn vague. I guess the best advice to give people would be not to set up any new site credentials using OAuth/OpenID until the problem has been fixed.
This is a classic example of the tradeoffs we make between security and convenience. While logging in to multiple sites using credentials from a "trusted" provider makes life easier for the web user, he or she also risks having the security of all of the accounts linked to that ID compromised when that one provider suffers a security breach or there is a problem with the underlying technology. This is one of the many reasons we need to move away from password-only authentication and come up with easy-to-use two-factor login methods to reduce the risk associated with weak or stolen passwords.
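The good news is that the most common second factor – the six-digit code from an authenticator app – is an open standard (TOTP, RFC 6238) that fits in a few lines of standard-library Python. A minimal sketch, verified against the test vector in the RFC's appendix:

```python
# Minimal RFC 6238 TOTP (time-based one-time password) using only the Python
# standard library -- the kind of second factor that takes the sting out of
# a stolen password.
import hashlib
import hmac
import struct
import time
from typing import Optional

def totp(secret: bytes, for_time: Optional[int] = None,
         step: int = 30, digits: int = 6) -> str:
    counter = int(for_time if for_time is not None else time.time()) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, time 59s, 8 digits -> 94287082
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

Because the code is derived from a shared secret plus the current 30-second window, a phished password alone is useless without the user's device – which is exactly the property we want when one identity provider unlocks so many accounts.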