It was all over the web on June 18: Code Spaces went off the air, as we discussed during the Virtualization Security Podcast on 6/19. The reasons are fairly normal in the world of IT and the cloud: they were hacked. Not by subverting the Amazon cloud, but in ways considered more traditional, even mundane. An account password was discovered, either by exploiting one of the seven SSL attacks known today or by guessing it with the help of inside knowledge gained through social engineering. However the account was compromised, the damage was total. We may all ask why Code Spaces was attacked, and we may never know the answer; in general, though, such attacks are all about the Benjamins. What lessons can we learn from this attack? How can we improve our use of clouds to protect our own data, systems, and more from similar attacks?
So, what lessons can we take from what happened to Code Spaces?
Use good SSL hygiene: Two new attacks against SSL have recently appeared, joining five older ones. These attacks allow attackers to hijack sessions or act as a Man-in-the-Middle (MiTM). In either case, attackers can gain access to existing credentials or create new credentials for themselves. SSL is not secure unless the user does extra work, such as presharing certificates or inspecting every certificate to ensure the server is who it claims to be. This extra work is often overlooked. The client (often the user plus any automation tool) must verify that the server is the proper server and not an MiTM, ensure that the tools in use have not been compromised by known bugs (such as the latest two SSL attacks), and maintain excellent situational awareness of where things are accessed from. Could this have been the attack vector? I am unsure, but we can surmise that a credential was stolen somehow, and this happens to be one of the easiest ways to do so.
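One piece of that extra work is certificate pinning: refusing a session unless the server's certificate matches a fingerprint recorded out of band. Below is a minimal sketch in Python, assuming a hypothetical management endpoint and a previously recorded SHA-256 fingerprint; it is illustrative, not a complete TLS client.
```python
# Minimal sketch: verify a server's certificate chain and host name, then
# compare the certificate against a pinned fingerprint before trusting the
# session. The host name and pinned fingerprint are hypothetical placeholders.
import hashlib
import socket
import ssl

API_HOST = "api.example.com"       # hypothetical management endpoint
PINNED_SHA256 = "d4e5..."          # fingerprint recorded out of band

context = ssl.create_default_context()   # verifies chain and host name
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

with socket.create_connection((API_HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=API_HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        if fingerprint != PINNED_SHA256:
            raise ssl.SSLError("certificate fingerprint mismatch: possible MiTM")
        # Proceed with the session only after both checks pass.
```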
Use good monitoring: It is crucial to monitor your cloud management layer for new users being created or existing users being granted more privileges. Such events should raise very large red flags and be verified immediately. If this alarm goes off and the action was not authorized, it may be time to disable the management portal until the issue is resolved. Such monitoring can be enhanced by tools such as Adallom, Skyfence, Elastica, and the like, as I have discussed before. These tools can also disable functionality when a breach is detected; they do not need to run in monitoring-only mode. If these tools did detect a problem, my next call would be to Amazon to disable the cloud control panel completely for my account.
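As a concrete illustration, here is a minimal sketch, assuming AWS CloudTrail and the boto3 SDK, that scans the last hour of management events for user creation and privilege-grant calls; the alerting hook is just a print statement and would be wired into your monitoring tool.
```python
# Minimal sketch: look for recent IAM events that create users or grant
# privileges. Event names are standard CloudTrail/IAM event names.
from datetime import datetime, timedelta, timezone
import boto3

SUSPICIOUS_EVENTS = ["CreateUser", "CreateAccessKey",
                     "AttachUserPolicy", "PutUserPolicy", "AddUserToGroup"]

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(hours=1)

for event_name in SUSPICIOUS_EVENTS:
    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName",
                           "AttributeValue": event_name}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            # Anything here should raise a very large red flag and be
            # verified immediately; wire this into your alerting tool.
            print(f"ALERT: {event_name} by {event.get('Username')} "
                  f"at {event['EventTime']}")
```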
Use good credentials: Make sure your credentials are long passphrases full of unusual words and characters. In effect, if your mother can figure it out, it is a horrible passphrase. This is where identity management platforms can help, by generating passwords or phrases that meet specific complexity requirements. If set up properly, you could also use these tools to create one-time passwords for super-admin accounts, those with the ability to delete anything and everything. Finally, your API access should use accounts different from your users’, perhaps even one account per API-enabled program.
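For illustration, here is a minimal sketch of generating long, high-entropy passphrases with Python's secrets module; the word list is a tiny placeholder, and in practice you would draw from a large dictionary or let an identity management platform do this for you.
```python
# Minimal sketch: build a long passphrase from random words, separators,
# and digits. The WORDS list is a small illustrative placeholder.
import secrets
import string

WORDS = ["granite", "orbit", "velvet", "cinder", "mosaic", "harbor",
         "quartz", "bramble", "lantern", "tundra"]  # placeholder list

def make_passphrase(num_words: int = 6) -> str:
    """Join randomly chosen words with random punctuation and digits."""
    parts = []
    for _ in range(num_words):
        word = secrets.choice(WORDS)
        sep = secrets.choice(string.punctuation)
        parts.append(word + sep + str(secrets.randbelow(100)))
    return "".join(parts)

print(make_passphrase())  # e.g. 'mosaic$7lantern^42...'
```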
Use multifactor authentication: Multifactor authentication (MFA) would also have been a way to prevent such an attack, as long as the API was not the attack vector. A human can use MFA, but an API generally cannot. MFA would have been one way to prevent a human attack by requiring not just what you know (a password) but also what you have or what you are. However, even if MFA was in use and the attacker was using a hijack approach, the attacker could easily have disabled the MFA requirement on any newly created accounts. It all depends on the sophistication of the attacker and how the initial attack was carried out. Still, MFA has its place, and it will improve overall security. There are MFA-style approaches for APIs, but they still rely on only one factor: what you have.
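A quick way to find gaps is to audit which console users have no MFA device enrolled. Below is a minimal sketch assuming the boto3 SDK; list_users and list_mfa_devices are standard IAM calls.
```python
# Minimal sketch: flag IAM users with no MFA device enrolled.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        devices = iam.list_mfa_devices(UserName=name)["MFADevices"]
        if not devices:
            print(f"WARNING: {name} has no MFA device enrolled")
```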
Use role-based access controls: No single user should have the ability to delete all aspects of an environment, nor should a single user hold super-admin privileges. Gaining super-admin privileges should be a break-glass scenario. For example, a user who can delete AMIs should not be able to create them, or even to access any backup sources. The same is true for API access. If the management interface in use does not have adequately fine-grained access controls, use a third-party tool such as Adallom, Skyfence, Elastica, or the like to add those fine-grained controls. Perhaps you have a single API account that allows you to pull the plug, kicking all users out and locking them out of the control panel. This would be the equivalent of a big red button in a data center. That particular API account would have the ability to perform only those actions and no others, as those are generally reserved for super-admin users.
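To make the big red button concrete, here is a minimal sketch of a narrowly scoped IAM policy attached to a hypothetical break-glass user; it can revoke console passwords and deactivate access keys, and nothing else. The user and policy names are placeholders, and the boto3 SDK is assumed.
```python
# Minimal sketch: a "big red button" identity whose policy allows it only to
# lock other users out, not to create, modify, or delete anything else.
import json
import boto3

BIG_RED_BUTTON_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:DeleteLoginProfile",   # revoke console passwords
            "iam:UpdateAccessKey",      # deactivate API access keys
            "iam:ListUsers",
            "iam:ListAccessKeys",
        ],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="big-red-button",          # hypothetical break-glass user
    PolicyName="lockout-only",
    PolicyDocument=json.dumps(BIG_RED_BUTTON_POLICY),
)
```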
Use a there and back again data protection scheme: Ensure your data protection is pull-style or at least does not allow easy deletion from within the cloud. This may require a dead-letter-box approach: you put data in the dead letter box, and another tool picks it up to signal that the backup is ready to pull down. The backup tool would then not be able to delete anything from within Amazon. There is a real need to keep your data out of the same cloud in which you are running and to segregate delete capability within your backup store. I have written about the need for there and back again data protection in the past, but this incident brings that need further to light. We need to look at a cloud as a volatile resource that could go away in an instant, whether from power outages, fibre cuts, natural hazards, or attacks. That means you should store your data off the cloud. Perhaps this will spur sales of cloud-based backup tools, onsite archival tools, or something entirely new. But the message is clear: back up your data in such a way that access to the cloud assets will not allow the destruction of an unacceptable amount of data. How much is unacceptable differs per organization.
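As one illustration of a pull-style scheme, here is a minimal sketch that pulls objects from an S3 bucket down to off-cloud storage; it assumes the puller runs outside the cloud with read-only credentials (s3:GetObject and s3:ListBucket only), so a compromised cloud account cannot reach or delete the copies. Bucket and destination names are hypothetical.
```python
# Minimal sketch: pull-style backup from S3 to local (off-cloud) storage
# using read-only credentials.
import os
import boto3

BUCKET = "example-app-data"        # hypothetical bucket
DEST = "/backups/example-app"      # off-cloud destination

s3 = boto3.client("s3")            # credentials must be read-only

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):      # skip "folder" placeholder keys
            continue
        target = os.path.join(DEST, key)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        s3.download_file(BUCKET, key, target)
        # The puller never calls delete_object, and its credentials could
        # not do so anyway; a compromised cloud account cannot reach here.
```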
During the podcast, we went over some steps you can take today:

  • Inspect your current control panel system users. Ensure that they are still with the company and that there are no shadow accounts anywhere (see the audit sketch after this list).
  • Inspect your current RBAC to ensure least privilege for all users; for example, an account that can delete resources should not also be able to add users.
  • Ensure no one logs into the control panel using a super-admin account.
  • Employ MFA as necessary and possible.
  • Inspect your data protection. Does it follow the there and back again philosophy? Is the backup within the same cloud? If so, can any one user delete everything from the backup? Perhaps ensure that backups are pull-style, not push-style.
  • Reevaluate your usage of the cloud to ensure that there is no single point of failure and that multiple controls are in use.
  • Get help from known security professionals as necessary.
  • Review the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) and determine where you are lax.
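
For the first item above, a minimal sketch of such a user audit, assuming boto3 and AWS's IAM credential report, is shown below; it lists every account with its creation date, last console sign-in, and MFA status so that departed employees and shadow accounts stand out.
```python
# Minimal sketch: dump the IAM credential report to spot stale or shadow
# accounts. Report generation is asynchronous, so poll until it is ready.
import csv
import io
import time
import boto3

iam = boto3.client("iam")

while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()["Content"].decode("utf-8")
for row in csv.DictReader(io.StringIO(report)):
    print(f"{row['user']:<30} created={row['user_creation_time']} "
          f"last_password_use={row['password_last_used']} "
          f"mfa={row['mfa_active']}")
```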

This is a wake-up call to cloud tenants. Attacks happen, and hackers are still an issue, even in clouds. Clouds do not magically protect you from hackers. In effect, tenants are responsible for their own data security and protection. Take advantage of all the security features afforded you by the cloud service provider, add in your own monitoring and security tools, and ensure your data follows the there and back again data protection philosophy. Heed the lessons of those who failed.
