How DevOps can mitigate the risk of ransomware and cyberattacks
By Bob Aiello
Reports of a global cyberattack, said to have impacted more than 200,000 users in over seventy countries, have certainly garnered much attention lately. You might think the global IT “sky” is falling and there is no possible way to protect yourself. That just isn’t true. The first thing you need to understand is that any computer system can be hacked, including yours. In fact, you should assume that your systems will be hacked and plan for how you will respond. Experts are certainly telling us to refrain from clicking on suspicious attachments and to keep our Windows patches up to date. None of that advice is wrong, but it fails to address the real problem. To properly avoid such devastating problems, you need to understand the root cause.
There is certainly plenty of blame to go around, starting with whoever made the malware tools used in the attack. There is widespread speculation that the tool used in the attack was stolen from the National Security Agency (NSA), which leads one to question whether the agencies in the business of securing our national infrastructure are really up to the job. This global cyberattack was felt by thousands of people around the world.
Hospitals across the UK were impacted, which, in turn, affected medical care, even delaying surgical procedures. Other organizations hit included FedEx in the United States, the Spanish telecom company Telefónica, the French automaker Renault, and Deutsche Bahn, Germany’s federal railway system.
I have supported many large organizations relying upon Windows servers, often running complex critical systems. Building and upgrading Windows software can be very complex, and that is the key consideration here. It is often not trivial to rebuild a Windows machine and get to a place where your software is fully functioning as required. I have seen teams tinker with their Windows build machines until they reached a point where they simply could not build another Windows machine with the same configuration. Part of the problem is that very few technology professionals truly understand how Windows actually works under the hood. In DevOps, we need to be able to provision a server and get to a known state without resorting to heroic efforts.
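The “known state” goal can be sketched in a few lines of code. The sketch below is purely illustrative, not any particular configuration-management tool: it models a machine’s configuration as a set of installed packages, declares a desired baseline, and computes the actions needed to converge on it. The package names and the `converge` function are hypothetical.

```python
# Minimal sketch of desired-state convergence: compare a machine's
# actual configuration against a declared baseline and compute the
# actions needed to reach the known state. All names are illustrative.

def converge(desired, actual):
    """Return the (install, remove) actions needed to reach the desired state."""
    to_install = sorted(set(desired) - set(actual))
    to_remove = sorted(set(actual) - set(desired))
    return to_install, to_remove

# A declared baseline for a build server, and what is actually present.
desired_packages = {"dotnet-sdk", "git", "msbuild"}
actual_packages = {"git", "legacy-toolchain"}

install, remove = converge(desired_packages, actual_packages)
# install -> ['dotnet-sdk', 'msbuild'], remove -> ['legacy-toolchain']
```

The point of the sketch is idempotency: running `converge` against a machine already in the desired state yields no actions, so you can apply it repeatedly and always land on the same known configuration, rather than tinkering a build machine into an unreproducible state.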
With infrastructure as code, we build and provision machines from scratch using a programmatic interface. Many organizations use cloud-based resources, which can be provisioned in minutes and taken down just as easily when no longer needed. Container-based deployments are taking this capability to a new level, making infrastructure as code routine. From there, we can establish a deployment pipeline within a reasonable period of time, which enables us to deploy known baselines of code very quickly. Backing up our data is also essential, and in practice you may still lose some transactions. If you are supporting a global trading system, then obviously there must be strategies to create transaction logs that can “replay” your buys and sells, restoring you to a known state.
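The transaction-log replay idea can be sketched simply. This is a toy model, not a real trading system: the record fields (`side`, `symbol`, `qty`) and the `replay` function are hypothetical, but they show how a last-known-good backup plus a log of subsequent transactions restores you to a known state.

```python
# Sketch of restoring state by replaying a transaction log over the
# last known-good baseline. Record fields are hypothetical.

def replay(baseline, log):
    """Apply logged buys and sells to a baseline of positions."""
    positions = dict(baseline)  # never mutate the backup itself
    for entry in log:
        qty = entry["qty"] if entry["side"] == "buy" else -entry["qty"]
        positions[entry["symbol"]] = positions.get(entry["symbol"], 0) + qty
    return positions

baseline = {"ACME": 100}          # positions from the last good backup
log = [                           # transactions recorded since that backup
    {"side": "buy", "symbol": "ACME", "qty": 50},
    {"side": "sell", "symbol": "ACME", "qty": 30},
    {"side": "buy", "symbol": "XYZ", "qty": 10},
]

restored = replay(baseline, log)  # {'ACME': 120, 'XYZ': 10}
```

The design point is that the backup and the log together are the system of record: after a ransomware incident you rebuild the machines from code, restore the baseline, and replay the log, losing at most the transactions that were never logged.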
What we need now is for corporate executives to understand the importance of having competent IT executives implement best practices capable of addressing these inevitable risks. We know how to do this. In the coming months, a group of dedicated IT professionals will be completing the first draft of an industry standard for DevOps, designed to cover many of these considerations. Let’s hope the folks running institutions from hospitals to retail chains take notice and actually commit to adopting DevOps best practices.