How to Fix Change Control – Understanding DevOps’ Secret Weapon

by Bob Aiello with Dovid Aiello

In many organizations, change control is badly broken. Too often it focuses only on the calendar and never realizes its true potential: ensuring that changes can be delivered as frequently as necessary in a secure and reliable way. In our consulting practice, we hear many complaints about boring two-hour meetings where nothing seems to actually get done. Change control is often perceived as little more than a rubber stamp, and one esteemed colleague famously claimed in public that its purpose is to “prevent change”. We disagree, and believe that change control can be a valuable function that helps identify and mitigate technical risk. That said, very few organizations have effective change control practices. Here’s how to fix change control in your company and realize the benefits of DevOps’ secret weapon.

We have previously written about the dysfunction that often resides in the operations organization, and change control is frequently at the center of it. A poorly managed change control function wastes time, but that is only the tip of the “dysfunctional” iceberg. Far more serious is the missed opportunity to identify and mitigate serious technical risks, a failure that often leads to incidents and problems that cost the company both profit and reputation. Bigger still is the missed opportunity to roll out changes faster, which is what actually enables secure and reliable systems, not to mention the timely delivery of business functionality.

When change control fails, most senior managers declare that they are going to stop allowing changes, which is just about the worst decision you can make. Slowing down changes almost always means bunching many changes together and allowing change windows less frequently, such as on a bimonthly basis. When we help teams fix their change control, the first thing we push for is making more frequent changes while keeping each change very small. The cadence that works best is most often a move from bimonthly change windows to twice weekly, ideally during the week. There is something almost magical about moving from bimonthly to twice a week that eliminates much of the noise and frustration.

One important approach is to identify which changes are routine and low-risk, categorizing them as “pre-approved” or standard changes. Changing a toner cartridge on a printer is a good example, as it is a task that has been done many times before. Communicating that the printer will be down for this activity is important, but it does not require a discussion during the change control meeting. Standard changes should, ideally, be fully automated and, if you are using the ITIL framework, listed in your service catalogue [1]. Getting all of the easy changes pre-approved means that your change control meeting can focus on the changes which actually require some analysis of technical risk.
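
As a minimal sketch of that routing (the file names and category conventions below are assumptions for illustration, not part of any particular ITIL tool), a pre-approved change can go straight to its automation while everything else lands on the CCB agenda for a proper risk review:

#!/usr/bin/env bash
# route_change.sh - sketch: route a change request by category.
# standard_changes.txt is an assumed file listing pre-approved change types,
# one per line (e.g. "replace-toner", "rotate-logs").
set -euo pipefail

CHANGE_TYPE="${1:?usage: $0 <change-type>}"

if grep -qx "$CHANGE_TYPE" standard_changes.txt; then
    echo "Standard (pre-approved) change: running automation for $CHANGE_TYPE"
    ./automation/"$CHANGE_TYPE".sh          # assumed naming convention for the jobs
else
    echo "Normal change: adding $CHANGE_TYPE to the CCB agenda for risk review"
    echo "$(date -Iseconds) $CHANGE_TYPE" >> ccb_agenda.txt
fi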

Normal changes follow your normal change control process. Emergency changes should only be used when there is a true emergency and not just someone trying to bypass the change control process. Occasionally, someone may miss the normal change control deadline and then you may need an “out-of-cycle” change that would have been a normal change had the person made the deadline. One effective way to ensure that folks are not just using emergency changes to bypass the change control process is to require that all emergency changes be reviewed by your highest-ranking IT manager – such as the CTO.

Another effective approach is to distinguish between the change control board (CCB) and the change advisory board (CAB). Frankly, this has been an area of confusion for many organizations. The CCB is responsible for the change control process itself. The change advisory board should consist of sharp subject matter experts who can tell you the potential impact of making (or not making) a particular change; make sure that you ask them who else might be impacted and should be brought into the discussion. We have seen many organizations, unfortunately, rename their CCB to CAB (admittedly a cooler name) and, in doing so, lose the input from the change advisory folks. Keep your CCB separate from your CAB: the CCB handles the process, while the CAB advises on the technical risk of making (or perhaps not making) a particular change.

In reviewing each change, make sure that the change is described clearly and in sufficient detail to understand each step. We see many change requests that are just high-level descriptions, open to interpretation by the person making the change, and those ambiguities lead to the human errors behind many incidents and problems.

Testing, as well as verification and validation (V&V), criteria should always be specified. By testing, we mean continuous testing, beginning with unit testing and extending into other forms of testing, including regression, functional/nonfunctional, and integration testing. (We are huge fans of API and service virtualization testing, but that is the subject of another article.) Verification usually refers to whether or not the change meets the requirements, while validation ensures that the system does what it needs to do; some folks refer to fitness for warranty and fitness for use. If you want effective DevOps, you must have robust continuous testing, and the change control process is the right tollgate to ensure that testing has been implemented and, wherever possible, automated. We would be remiss if we did not mention the importance of asking about backout plans. In short, always have a plan B.
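
To make the tollgate concrete, here is a hedged sketch of a pre-change gate; the script and file names are assumptions, and your own gate would call whatever test automation you actually have in place:

#!/usr/bin/env bash
# change_gate.sh - sketch of a change-control tollgate (names are assumed).
set -euo pipefail

CHANGE_ID="${1:?usage: $0 <change-id>}"

# 1. Automated tests must pass; run_tests.sh is an assumed wrapper around
#    your unit, regression and integration suites.
if ! ./run_tests.sh; then
    echo "GATE FAILED: automated tests did not pass for change $CHANGE_ID" >&2
    exit 1
fi

# 2. Always have a plan B: require a documented backout plan.
if [ ! -s "changes/$CHANGE_ID/backout_plan.md" ]; then
    echo "GATE FAILED: no backout plan recorded for change $CHANGE_ID" >&2
    exit 1
fi

echo "GATE PASSED: change $CHANGE_ID may proceed to the change window"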

Change control done well is indeed DevOps’ secret weapon. Making changes more often should be your goal, and keeping those changes as tiny and isolated as possible will help reduce the risk of each change. We like to have everyone share a screen and have the cross-functional DevOps team confirm that every change is executed successfully. Every change should be automated; if that is not possible, then make sure that you have a four-eyes policy where one person makes the change and another person observes and verifies that the manual step has been completed successfully. Always record the session, allow others to see what you are doing, and then review the recordings to identify areas where you can improve your deployment processes.

The best organizations have processes which are transparent and allow others to learn and help continuously improve the deployment process. Change control can help you get to a place where you can safely make changes as often as you need to, helping to deliver secure and reliable systems.

What change control practices do you believe are most effective? Drop us a line and share your best practices!

 

 

[1] The service catalogue is an automated set of jobs that perform routine, low-risk tasks such as taking backups and changing toner cartridges. Because the request is in the service catalogue, the change can be designated as “standard” (pre-approved), and there is no need to perform a risk assessment in change control.

DevOps for Hadoop

By Bob Aiello

Apache Hadoop is a framework that enables the analysis of large datasets (or what some folks are calling “Big Data”), using clusters of computers or the cloud-based “elastic” resources offered by most cloud providers. Hadoop is designed to scale up from a single server to thousands of machines, allowing you to start with a simple installation and grow to a huge implementation capable of processing petabytes of structured (or even unstructured) complex data. Hadoop has many moving parts and is both elegant in its simplicity and challenging in its potential complexity. This article explores some of the essential concerns that the DevOps practitioner needs to consider in automating the build, package and deployment of systems and application components in the Hadoop framework.

For this article, I read several books [1][2] and implemented a Hadoop sandbox using both Hortonworks and Cloudera. In some ways, I found the technology very familiar, as I have supported very large Java-based trading systems deployed in web containers from Tomcat to WebSphere. At the same time, I was humbled by the sheer number of components and dependencies that must be understood by the Hadoop administrator as well as the DevOps practitioner. In short, I am a kid in a candy store with Hadoop; come join me as we sample everything from the lollipops to the chocolates 🙂

To begin learning Hadoop, you can set up a single-node cluster. This configuration is adequate for simple queries and more than sufficient for learning about Hadoop before you scale out to a clustered setup when you are ready to implement your production environment. I found the Hortonworks sandbox particularly easy to implement using VirtualBox (although almost everything that I do these days runs in Docker containers). Both Cloudera Manager and Apache Ambari are administration consoles that help to deploy and manage the Hadoop framework.

I used Ambari, which has features that help provision, manage, and monitor Hadoop clusters, including support for the Hadoop Distributed File System (HDFS), Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig and Sqoop. The Ambari dashboard helps you view cluster health (including heatmaps) and also lets you examine MapReduce, Pig and Hive applications, including their performance and resource usage. HDFS can also be backed by Amazon Simple Storage Service (S3) buckets, Azure blobs and OpenStack Object Storage (Swift).

One of the most interesting things about Hadoop is that you can use “commodity hardware”. That does not necessarily mean cheap; it means you use the servers that you are able to obtain and support, and they do not all have to be exactly the same type of machine. Obviously, the DevOps practitioner will need to pay attention to the requirements for provisioning, and especially supporting, the many required configuration files. This is where configuration tools including Chef, Puppet, CFEngine, Bcfg2 and SaltStack can be especially helpful, although I find myself migrating towards Ansible for configuration management as well as both infrastructure and application deployments.
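
As a small, hedged example of what that looks like with Ansible (the inventory file, the datanodes host group and the file paths are assumptions), configuration changes become repeatable commands rather than hand edits:

# Dry-run a playbook to see what would change before touching the cluster.
ansible-playbook -i inventory.ini hadoop-config.yml --check --diff

# Ad-hoc push of a single configuration file to every data node.
ansible datanodes -i inventory.ini -m copy \
  -a "src=conf/hdfs-site.xml dest=/etc/hadoop/conf/hdfs-site.xml owner=hdfs mode=0644"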

Logfiles, as well as the many environment settings, need to be monitored. The administration console provides a wide array of alerts which can identify potential resource and operational issues before there is end-user impact. The Hadoop framework runs a number of daemon processes, each of which requires one or more specific open ports for communication.
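
A simple health check along those lines might look like the following sketch; the daemon names are standard, but the port list is an assumption that varies by distribution and version, so adjust it to your installation:

#!/usr/bin/env bash
# hadoop_health.sh - sketch: verify expected Hadoop daemons and listening ports.
set -u

for daemon in NameNode DataNode ResourceManager NodeManager; do
    if jps | grep -qw "$daemon"; then
        echo "OK:      $daemon is running"
    else
        echo "MISSING: $daemon is not running" >&2
    fi
done

for port in 8020 8088 9870; do    # assumed: NameNode RPC, YARN UI, NameNode web UI
    if ss -ltn | grep -q ":$port "; then
        echo "OK:      something is listening on port $port"
    else
        echo "WARNING: nothing is listening on port $port" >&2
    fi
done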

Anyone who reads my books and articles knows that I emphasize processes and procedures which ensure that you can prove that you have the right code in production, detect unauthorized changes and have the code “self-heal” by returning to a known baseline (obviously while adhering to change control and other regulatory requirements). Monitoring baselines in a complex Java environment that uses web containers such as Tomcat, JBoss and WebSphere can be very difficult, especially because so many files change dynamically and therefore should not be monitored. Identifying the files which should be monitored (using cryptographic hashes such as SHA-1 and MD5) can take some work and should be put in place from the very beginning of the development lifecycle, in all environments from development test to production.
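
As a hedged sketch of that baseline discipline (the install path, the manifest location and the choice to hash only jar files are assumptions), you record the hashes once and then verify them on a schedule:

#!/usr/bin/env bash
# baseline.sh - sketch: record and verify a cryptographic baseline of files
# that should not change between authorized releases.
set -euo pipefail

APP_DIR="/opt/hadoop/current"              # assumed install location
BASELINE="/var/lib/baselines/hadoop.sha1"  # assumed manifest location

case "${1:-verify}" in
  record)
      find "$APP_DIR" -type f -name '*.jar' -print0 | xargs -0 sha1sum > "$BASELINE"
      echo "Baseline recorded: $(wc -l < "$BASELINE") files"
      ;;
  verify)
      # Non-zero exit and a report if any baselined file has been altered.
      sha1sum --check --quiet "$BASELINE" \
        && echo "No unauthorized changes detected" \
        || echo "Baseline mismatch: investigate before self-healing to the known baseline" >&2
      ;;
esac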

In fact, getting Hadoop to work in development and QA testing environments does take some effort, giving you an opportunity to start working on your production deployment and monitoring procedures. The lessons learned while setting up the lower environments (e.g. development test) can help you begin building the automation to support your production environments. I had a little trouble getting the Cloudera framework to run successfully in Docker containers, but I am confident that I will get this to work in the coming days. Ultimately, I would like to use Docker containers to support Hadoop, most likely with Kubernetes for orchestration. You can also run Hadoop on hosted services such as AWS EMR or use AWS EC2 instances (which could get pretty expensive). Ultimately, you want to run a lean environment that can scale to meet peak usage needs, but can also scale down when all of those expensive resources are not needed.

Hadoop is a pretty complex system with many components, and as demanding as the DevOps requirements are, I cannot imagine trying to manage a production Hadoop environment without DevOps principles and practices. I am interested in hearing your views on best practices for supporting Hadoop in production as well as other complex systems. Drop me a line to share your best practices!

Bob Aiello (bob.aiello@ieee.org)
http://www.linkedin.com/in/BobAiello

[1] White, Tom. Hadoop: The Definitive Guide, 4th ed. O’Reilly Media, 2015.
[2] Venner, Jason, et al. Pro Apache Hadoop, 2nd ed. Apress, 2014.
[3] Aiello, Robert, and Leslie Sachs. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley, 2010.
[4] Aiello, Robert, and Leslie Sachs. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley, 2016.

The Magic of DevOps

By Bob Aiello with Dovid Aiello

Recently, a well-respected colleague of mine reacted to an article that I had written regarding the Equifax data breach and suggested that I had made it sound as if DevOps could “magically” solve problems. I was stunned at first when I saw his comments, because everything that I have written has always gone into specific detail on how to implement DevOps and CM best practices, including the core functions of source code management, build engineering, environment management, change control, and release and deployment engineering. My initial response was to suggest that he get a copy of my book to see the detail in which I prescribe these principles and practices. My colleague reminded me that he not only had a copy of my CM best practices book, but had reviewed it as well, and as I recall it was a pretty positive review. So how then could he possibly believe that I viewed DevOps as magically solving anything? The more I pondered this incident, the more I realized that DevOps does indeed have some magic, and the effective DevOps practitioner actually does have some tricks up his sleeve. So, unlike most magicians, I am fine with sharing some of my magic, and I hope that you will write back and share your best practices as well.

DevOps is a set of principles and practices intended to help improve communication and collaboration between teams, including development and operations, but equally important are other groups including quality assurance, testing, information security and of course the business user and the groups who help us by representing their interests. DevOps is all about sharing often conflicting ideas and the synergy we enjoy from this collaboration. At the core of DevOps systems and application delivery is a set of practices based upon configuration management, including source code management, build engineering, environment management, change control, and release and deployment engineering.

Source code management is fundamental; without the tools and processes you could easily lose your source code, not to mention have a very difficult time tracking changes to code baselines. With robust version control systems and effective processes, you enjoy the traceability to know who made changes to the code, and to back them out if necessary. When you have code in version control you can scan that code using a variety of tools and identify open source (and commercial) code components which may have one or more vulnerabilities as identified in CVEs and the VulnDB database. I have written automation to traverse version control repositories and scan for licensing, security and operational risks, actually identifying specific code bases which had zero-day vulnerabilities much like the one which impacted Equifax recently. Build engineering is a fundamental part of this effort, as the build process itself may bring in dependencies which may also have vulnerabilities. Scanning source code is a good start, but you get a much more accurate picture when you scan components which have been compiled and linked (depending upon the technology). Taking a strong DevOps approach means that your security professionals can work directly with your developers to identify security vulnerabilities and choose the best course of action to update the code. With the right release and deployment automation you can ensure that your changes are delivered to production as quickly as necessary.
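
To make that concrete, here is a rough sketch in the same spirit (it is not the actual automation described above); repos.txt is an assumed list of clone URLs, and grepping build files for one component is a deliberately crude stand-in for a real scanner that matches resolved dependencies against CVE and VulnDB feeds:

#!/usr/bin/env bash
# scan_repos.sh - sketch: flag repositories that declare a component of interest.
set -euo pipefail

mkdir -p scans && cd scans

while read -r url; do
    name=$(basename "$url" .git)
    git clone --depth 1 --quiet "$url" "$name"
    if grep -Rq --include=pom.xml "struts2-core" "$name"; then
        echo "FLAG: $name declares struts2-core; review its version against known CVEs"
    fi
done < ../repos.txt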

Environment management is the function which is most often forgotten, yet understanding your environment dependencies is an absolute must-have. Sadly, we often forget to capture this information during the development process, and discovering it after the product has been deployed can be a slow and painful process. Similarly, change control should be your first line of defense for identifying and mitigating technical risk, but most companies view it as little more than a “rubber stamp” that focuses mostly on avoiding calendar collisions. Change control done well can help you identify and mitigate technical risk, deliver changes more quickly and avoid the costly mistakes that so often cause major outages.

As news reports emerge claiming that Equifax actually knew that it had code containing the Struts vulnerability, the focus should be on why the code was not updated. Sadly, many companies do not have sufficient automation and processes in place to safely update their systems without introducing significant technical risk. I have known companies that could not respond effectively to a disaster because their deployment procedures were not fully automated, and failing over to a DR site resulted in a week-long series of outages. Companies may “test” their DR procedures, but that does not guarantee that those procedures can actually be used effectively in a real disaster. You need to be able to build your infrastructure (e.g. infrastructure as code) and deploy your applications without any chance of a missed step that could result in a systems outage.

DevOps and CM best practices actually give you the specific capabilities required to identify security vulnerabilities and update your code as often as needed. The first step is to assess your current practices and identify specific steps to improve your processes and procedures. I would like to say that there really is no magic, just good process engineering, picking the right tools and, of course, rolling up your sleeves and automating each and every step. But maybe the truth is that there is some magic here. Taking a DevOps approach, sharing the different views between development, operations, security and other key stakeholders, can make this all possible. Please drop me a line and share your challenges and best practices too. Between us, we can see magic happen as we implement DevOps and CM best practices!

Bob Aiello (bob.aiello@ieee.org)

 

How to Maintain 143 Million Customers

by Phil Galardi

So what recently happened to 143 Million Americans anyway? Well, you probably heard that it was a cyber security incident related to an open source software component called Apache Struts. What exactly is Apache Struts? Why was it so easily hacked? Could it be prevented using some common best practices? And, what can you do to protect your organization now and in the future?

Apache Struts is a common framework for developing Java web applications. It is one of the most commonly used open source components, with plenty of community support: 634 commits in the last 12 months as of this writing, meaning that folks from all over the world are actively participating in efforts to fix bugs, add features and functions, and remediate vulnerabilities.

According to Lgtm, the folks who discovered the vulnerability that impacted nearly half of all Americans, more than 65% of Fortune 100 companies use Struts, meaning much of the Fortune 100 could be exposed to remote attacks (similar to Equifax) if the issue is not fixed.

Initially, the suspect vulnerability was a zero-day (CVE-2017-9805) that had been present in the Struts framework since 2008. However, recent speculation points to a more likely culprit (CVE-2017-5638), which was reported in March 2017. If the latter is the case, Equifax and any other organization properly managing open source components would have had visibility into this issue and could have remediated it before the attack occurred. At this time, Equifax has not issued a public statement pinpointing the exploit.

The Apache Struts Project Management Committee offers five pieces of advice to anyone using Struts, or indeed any open source library. To paraphrase, these are:

  1. Know what is in your code by having a component bill of materials for each of your product versions (see the sketch after this list).
  2. Keep open source components up to date and have a process in place to quickly roll out security fixes.
  3. Close the gap: your open source components are going to accumulate security vulnerabilities if left unchecked.
  4. Establish security layers in your applications, such that a public-facing layer (such as Struts) never allows direct access to back-end data.
  5. Monitor your code for zero-day vulnerability alerts. Again, back to #1: if you know what is in your code, you can monitor it, reduce incident response time, and notify your customers quickly (or catch the problem before it is too late).
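
As a minimal sketch of step 1 for a Maven-based Java application (the output file name, and the idea of storing the listing with each release, are illustrative assumptions rather than a mandated approach), you can capture a bill of materials on every build:

# Record the resolved dependencies of the exact version being built.
mvn dependency:list -DoutputFile="bom-$(git describe --tags --always).txt"

Commercial tools such as Black Duck go much further, but even a plain dependency listing stored alongside each release gives you something to search when the next advisory lands.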

Certainly, you can prevent Apache Struts vulnerabilities from ever making their way into your web applications by not using the component at all. However, based on metrics from Black Duck Software for Struts, it would take an estimated 102 years of effort to build equivalent functionality on your own. You probably would not need every line of that code, but even so, there are huge advantages to using open source software in your applications.

Best practices dictate identifying the open source components in your applications at build time and integrating that identification into your CI tools when possible. This provides you with an inventory, or bill of materials, of all the open source components your developers are using. You can further drive automation by monitoring those applications’ bills of materials and creating policies around what actually gets built. For example, you could warn developers that a particular component (say, OpenSSL 1.0.1 through 1.0.1f) is not acceptable to use, and ultimately fail any build containing critical vulnerabilities.
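
A hedged sketch of such a policy gate in a CI job might look like this; bom.txt and banned_components.txt are assumed file names, with the banned list holding entries such as openssl:1.0.1:

#!/usr/bin/env bash
# bom_policy_gate.sh - sketch: fail the build if the bill of materials
# contains a banned component or version.
set -euo pipefail

violations=$(grep -F -f banned_components.txt bom.txt || true)

if [ -n "$violations" ]; then
    echo "BUILD FAILED: banned components found:" >&2
    echo "$violations" >&2
    exit 1
fi
echo "Policy check passed: no banned components in this build"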

What can you do now about this latest vulnerability? According to Mike Pittenger, VP of Security Strategy at Black Duck Software, if you don’t need the REST plug-in for Apache Struts, you can remove it. Otherwise, users are advised to update to version 2.3.34 or 2.5.13 as soon as possible.

So, back to keeping your customers happy: protect their data, maintain the security of your applications, and don’t forget to apply these best practices to the open source components you depend on.

About the author:

Phil Galardi has over 15 years of experience in technology and engineering: 8 years as an application developer, 3 years in application lifecycle management, and he currently helps organizations improve, manage, and secure their SDLC. With experience spanning multiple vertical markets, Phil understands what is required to build secure software from each aspect of people, process, and technology. While he loves coffee, he doesn’t get the same feelings of joy from completing expense reports.

 

Personality Matters – Development in the Cloud

By Leslie Sachs

Developing software in the cloud involves working with people who are likely in a different location and employed by an entirely different company. These folks may have very different priorities than you do – and getting what you need may be quite a challenge at times. Development in the Cloud likely involves working in an environment where you do not always have full control of the resources you need. You may feel that you are the customer and deserve priority service. But the reality is that interfacing with all of the stakeholders in a cloud-based development environment presents unique challenges. Your ‘people skills’ may very well determine whether or not you get what you need when you need it. Read on if you would like to be more effective when developing in the Cloud.

Control of Resources

Development in the Cloud has a number of challenges, but none more apparent than the obvious loss of control over essential resources. Development in the cloud involves relying upon another entity and the services that they provide. This may turn out great for you, or it may be the worst decision of your career. One issue to address early on is how you feel about having to rely upon others for essential resources and its implicit loss of control. This situation may result in some stress and, at times, considerable anxiety, for technology managers who are responsible for the reliability of their companies’ systems.

Anxiety in the Cloud

Seasoned IT professionals know all too well that bad things happen. Systems can crash or have other serious outages that can threaten your profitability. When you have control over your resources, you usually have a stronger sense of security. With the loss of control, you may experience anxiety. As a manager, you need to assess both your, and upper management’s, tolerance for risk. Risk is not inherently bad. But risk needs to be identified and then mitigated as best as is practical. One way to do that is to establish a Service-level Agreement (SLA).

Setting the SLA

The prudent manager doing development in the cloud will closely examine the Service-level Agreements that govern the terms of the Cloud-based resources upon which the team depends. One may have to choose, however, between working with a large established service provider and a smaller company that is willing to work harder for your business. This is where you need to be a savvy consumer and a technology guru, too. If you’re thinking that ironing out all of these terms is going to be easy, then think again. The one thing that you can be certain about, though, is that communication is key.

Communication as Key

Make sure that you establish an effective communications plan to support your Cloud development effort, including announcing outages and service interruptions. [1] You should consider the established communications practices of your service provider within the context of the culture of your organization. Alignment of communication styles is essential here. Plan to not only receive communications, but to process, filter and then distribute essential information to all of your stakeholders. Remember, also, that even weekend outages may impact the productivity of your developers. The worst part is that you may not have a specific dedicated resource at the service provider with whom to partner.

Faceless and Nameless Partners

Many large Cloud-based providers have well-established service organizations, but you as a manager need to consider how you feel about working with partners who you do not know and may never actually meet. The faceless and nameless support person may be just fine for some people especially if they do a great job. But you need to consider how you will feel if you cannot reach a specific person in charge when there is a problem impacting your system. This may seem like a non-issue if you are the customer. Or is it?

Customer Focus

If you are paying a service provider then you will most likely be expecting to be treated as a customer. Some Internet Service Providers (ISPs) may have excellent service while others may act like they are a utility with an absolute monopoly. At CM Best Practices Consulting, we’ve had some experiences with ISPs who provided horrible service resulting in an unreliable platform supporting the website for our book on Configuration Management Best Practices. Poor service aside, there are certainly advantages to considering cloud services as a utility.

Cloud as a Utility

When you need more electricity, you simply assume that the electric company will provide as much as you need, so the cloud as a utility certainly has some advantages. If you need to scale up and add another hundred developers, giving each one a virtual image on a server farm can be as easy as providing your credit card number. However, knowing that additional resources are there for the asking carries its own special risk: failing to plan for the resources that you need. You still need to plan strategically.

Planning and Cost

Unplanned cloud costs can be as dangerous as running up bills on your credit card. In fact, they may actually be on your credit card. From a personality perspective, you should consider whether or not using Cloud-based services is just a convenient excuse to avoid having to plan out the amount of resources you really need. This approach can get expensive and ultimately impact the success of your project. Development in the cloud does not mean that you have to stay in the cloud. In fact, sometimes cloud-based development is just a short-term solution to respond to either a seasonal need or a temporary shortage. You should always consider whether or not it is time to get out of the clouds.

Bringing it Back In-house

Many Cloud-based development efforts are extremely successful. Others are not. Ultimately, smart technology professionals always consider their Plan-B. If you find that you are awake at night thinking about all of the time lost due to challenges in the Cloud, then you may want to consider bringing the development back in from the Cloud. Just keep in mind that every approach has its risks and you probably cannot implement a couple hundred development, test or production servers overnight, either. Many managers actually use a hybrid approach of having some servers in-house, supplemented by a virtual farm via a cloud-based service provider. Having your core servers in-house may be your long term goal anyway. Smart managers consider what works best from each of these perspectives.

Conclusion
Being pragmatic in the Cloud means that you engage in any technology effort while keeping both eyes open to the risks and potential advantages of each approach. Cloud-based development has some unique challenges and may not be the right choice for everyone. You need to consider how these issues fit with you and your organization when making the choice to develop in the cloud.

References
[1] Aiello, Robert and Leslie Sachs. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley, 2010, p. 155.

How DevOps Could Have Saved Equifax

by Bob Aiello

Equifax is the latest large firm to make unwanted headlines due to exposure of clients’ personal data; a reported 143 million people may have had their Social Security numbers, birth dates, credit card numbers and other personal information stolen. According to published accounts, the breach occurred through a vulnerability in the Apache Struts web framework which is used by many organizations. The incident was an embarrassment to a company whose entire business revolves around providing a clear, and presumably confidential, financial profile of consumers that lenders and other businesses use to make credit decisions.

Large organizations often have hundreds of major systems using thousands of commercial and open source components, each of which could potentially have a security vulnerability. The Apache organization issued a statement about the most recent incident, and many alerts were issued about the potential risks in the Apache Struts framework. But large organizations which receive alerts via Common Vulnerabilities and Exposures (CVEs) and VulnDB may find it very difficult to identify exactly which of their software components are vulnerable to attack, and may be unable to quickly fix the problem and deploy the updated code that prevents hackers from exploiting known vulnerabilities.

So how best to handle these scenarios in large organizations?

The first step is to get all of your code stored and baselined in a secure version control system (VCS). Then you need to be able to scan the code using any of the products on the market which can identify vulnerabilities as reported in CVEs and the VulnDB database. There are costs involved with implementing an automated solution, but the cost of not doing so could be far greater.

One approach could be to clone each and every repo in your version control system (e.g., Bitbucket) and then programmatically scan the baselined source code, identifying the projects which contain these vulnerabilities. You get better results if you scan code that has been compiled, as the build process may pull in additional components. But even just scanning source code will help you get the conversation started among your security experts, operations engineers and the developers who wrote the code. Suddenly, you can find that needle lost in your haystack pretty quickly and begin taking steps to update the software. Obviously, another key ingredient is having the capability to immediately roll out that fix through a fully-automated application build, package and deployment process, what many folks are referring to as continuous delivery.
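
A hedged sketch of that approach might look like the following; repo_list.txt is an assumed export of clone URLs from your VCS, and resolving dependencies through the build tool (Maven here) catches components that a raw source scan would miss:

#!/usr/bin/env bash
# find_struts.sh - sketch: list baselined projects that pull in Struts.
set -euo pipefail

while read -r url; do
    proj=$(basename "$url" .git)
    git clone --depth 1 --quiet "$url" "$proj"
    if ( cd "$proj" && mvn dependency:list 2>/dev/null | grep -i "struts2-core" >/dev/null ); then
        echo "NEEDLE FOUND: $proj depends on struts2-core; check the version against the CVE"
    fi
done < repo_list.txt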

Implementing these tools and processes does take some time and effort, but, as the Equifax data breach has painfully demonstrated, effective DevOps is clearly worth it.

What is your strategy for identifying security issues buried deep in a few hundred thousand lines of code? It is actually not that hard to fix this issue as long as you can work across development, operations and other stakeholders to implement effective CM best practices including:

  • Source Code Management
  • Build Engineering
  • Environment Management
  • Lean and Effective Change Control
  • Release and Deployment Engineering

Yeah – I am saying that you need DevOps today!

Bob Aiello
bob.aiello@ieee.org

 

How DevOps can eliminate the risk of ransomware and cyber attacks

By Bob Aiello

Reports of global cyberattacks, said to have impacted more than 200,000 users in over seventy countries, have certainly garnered much attention lately. You might think that the global IT “sky” was falling and there is no possible way to protect yourself. That just isn’t true. The first thing that you need to understand is that any computer system can be hacked, including yours. In fact, you should assume that your systems will be hacked and you need to plan for how you will respond. Experts are certainly telling us all to refrain from clicking on suspicious attachments and to keep our Windows patches up-to-date. None of that advice is necessarily wrong, but it fails to address the real problem here. In order to properly avoid such devastating problems, you really need to understand the root cause. There certainly is plenty of blame to go around, starting with who made the malware tools used in the attack. There is widespread speculation that the tool used in the attack was stolen from the National Security Agency (NSA), which leads one to question whether those agencies in the business of securing our national infrastructure are really up to the job. This global cyberattack was felt by thousands of people around the world.

Hospitals across the UK were impacted, which, in turn, affected medical care, even delaying surgical procedures. Other organizations hit were FedEx in the United States, the Spanish telecom company Telefónica, the French automaker Renault, and Deutsche Bahn, Germany’s federal railway system. I have supported many large organizations relying upon Windows servers, often running complex critical systems. Building and upgrading Windows software can be very complex, and that is the key consideration here. It is often not trivial to rebuild a Windows machine and get to a place where the software is fully functioning as required. I have seen teams tinker with their Windows build machines and actually get to a place where they simply could not build another Windows machine with the same configuration. Part of the problem is that very few technology professionals really understand how Microsoft technology actually works, and that really is the problem here. In DevOps, we need to be able to provision a server and get to a known state without having to resort to heroic efforts.

With infrastructure as code, we build and provision machines from scratch using a programmatic interface. Many teams use cloud-based resources which can be provisioned in minutes and taken down just as easily when no longer needed, and container-based deployments are taking this capability to the next level, making infrastructure as code routine. From there, we can establish the deployment pipeline within a reasonable period of time, which enables us to deploy known baselines of code very quickly. Backing up our data is also essential, although in practice you may lose some transactions. If you are supporting a global trading system, then obviously there must be strategies to create transaction logs which can “replay” your buys and sells, restoring you to a known state.
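
As a small illustration of that idea (the image name, tag and port are assumptions), a compromised or drifted server is not nursed back to health; it is replaced from a versioned, known-good baseline:

# Build an immutable image from the Dockerfile kept under version control.
docker build -t myapp:2017.05.1 .

# Stand the service back up from that known baseline in seconds.
docker run -d --name myapp --restart unless-stopped -p 8080:8080 myapp:2017.05.1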

What we need now is for corporate executives to understand the importance of having competent IT executives implement best practices capable of addressing these inevitable risks. We know how to do this. In the coming months, a group of dedicated IT professionals will be completing the first draft of an industry standard for DevOps, designed to cover many of these considerations. Let’s hope the folks running institutions from hospitals to retail chains take notice and actually commit to adopting DevOps best practices.

Parasoft – API Testing and Service Virtualization at Microsoft Build

Parasoft showcases new release of API Testing and Service Virtualization at Microsoft Build

Monrovia, CA & Seattle, WA – May 10, 2017 – Parasoft, the leader in software testing solutions, today announced the latest enhancements to its API testing and service virtualization solutions for the Microsoft environment at Microsoft Build 2017, taking place May 10-12 in Seattle. Parasoft will be featuring its new functionality at booth #209. To get started and learn more, visit: http://software.parasoft.com/virtualize/microsoft/

Parasoft SOAtest and Virtualize are widely recognized as industry standard tools for enabling teams to quickly solve today’s most challenging issues, including security, performance, and test environment obstacles. In a continued effort to improve functionality and ease-of-use for customers, Parasoft has introduced new functionality and streamlined workflows to address everyday challenges that software developers and testers face.

Parasoft has focused on three key areas with this new release:

  • Broadening access to testing through the thin client interface: Greater access enables teams to quickly initiate testing projects, facilitate correlation and collaboration, and seamlessly tie test scenarios to environments.
  • Solving data challenges through enhanced workflows: Providing quick and simple access to test data helps test designers create more efficient and effective tests.
  • Shift-left performance testing: Early-stage performance testing is available by reusing existing test artifacts in performance tests and reviewing results in the Web-enabled dashboard.

To learn more about Parasoft’s offering, please visit:

About Parasoft

Parasoft provides innovative tools that automate time-consuming testing tasks and provide management with intelligent analytics necessary to focus on what matters. Parasoft’s technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software, by integrating static and runtime analysis; unit, functional, and API testing; and service virtualization. Parasoft supports software organizations as they develop and deploy applications in the embedded, enterprise, and IoT markets. With developer testing tools, manager reporting/analytics, and executive dashboarding, Parasoft enables organizations to succeed in today’s most strategic development initiatives — agile, continuous testing, DevOps, and security.

Behaviorally Speaking: Creating the Deployment Pipeline

By Bob Aiello

Continuous Delivery (CD) depends upon the successful implementation of a fully automated deployment pipeline. As a practitioner, I have heard many folks refer to CD in overly simplistic terms, creating a naive image of deployments being fully automated as a simple push button. The truth is that continuous delivery is anything but simple and it takes some effort to create a fully automated deployment pipeline. Continuous Delivery touches every aspect of the application lifecycle and is best created in an iterative manner. This article will help you get started with creating your fully automated continuous delivery pipeline.

The first thing that you need to understand about Continuous Delivery is that it is – well – continuous. This means that you need to consider every step of the process used to create your application build, package and deployment. Some folks refer to continuous deployment and continuous delivery in the same sentence; in fact, I have seen DevOps thought leaders use these terms interchangeably. Continuous deployment refers to being able to push changes out as often as necessary. It can be disruptive, in that users are often not ready for a change; you should use continuous deployment when you must push out an urgent security patch – with or without the user’s consent. Continuous delivery is more subtle: changes are technically pushed to the target environment, but are hidden through a technique known as a feature toggle, which keeps them dark until they are ready to be exposed to end users. Continuous delivery helps reduce risk because changes are actually deployed and can be tested to some degree, and it is less disruptive than continuous deployment, which pushes changes whether or not the user is ready to accept them.

The deployment pipeline is the automated procedure that implements continuous delivery and continuous deployment. Most often, deployment procedures consist of shell scripts, which ensure that each step of the build, package and deployment is successfully completed without any errors. If you were to look at my scripts, you would see that more than half the code is testing each step and then logging the results. Mistakes happen and it is very common for one step in the deployment automation to not complete as expected. The deployment pipeline completely eliminates manual steps, but recognizes the reality that sometimes problems occur. If a step does not complete as expected, you want the deployment to fail immediately and notify the operator as to what has occurred. One challenge that often arises is how to handle reviewing and approving proposed changes. Agile and lean change control is an absolute must-have.
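
As an illustration of that “test every step and log it” pattern, here is a hedged sketch of a step wrapper; the log path, service names and notification address are assumptions, and the mail command stands in for whatever alerting you actually use:

#!/usr/bin/env bash
# deploy_step.sh - sketch: run each deployment step, verify it, log it,
# and fail the whole deployment immediately if any step goes wrong.
set -euo pipefail

LOG="/var/log/deploy/$(date +%Y%m%d-%H%M%S).log"
mkdir -p "$(dirname "$LOG")"

run_step() {
    local description="$1"; shift
    echo "START: $description" | tee -a "$LOG"
    if "$@" >> "$LOG" 2>&1; then
        echo "OK:    $description" | tee -a "$LOG"
    else
        echo "FAIL:  $description - aborting deployment" | tee -a "$LOG" >&2
        mail -s "Deployment failed: $description" oncall@example.com < "$LOG" || true
        exit 1
    fi
}

run_step "stop application service"  systemctl stop myapp
run_step "deploy packaged release"   tar -xzf release.tar.gz -C /opt/myapp
run_step "start application service" systemctl start myapp
run_step "smoke test the endpoint"   curl -fsS http://localhost:8080/health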

The secret to effective change control is to identify which changes are low-risk and can be categorized as pre-approved. The remaining changes, by definition, are a little more complicated and should be reviewed and assessed for technical risk. The DevOps approach is to ensure that all of the subject matter experts (SMEs) are involved in the technical review process. It has been my experience that too often change control resembles a game of phone tag, where change managers “represent” changes which they barely understand, instead of ensuring that the SMEs are fully engaged in the technical review. The deployment pipeline relies upon effective change control which employs agile and lean principles.

The deployment pipeline should pull source code from the version control system (VCS) based upon labeled (or tagged) baselines and build it via a fully automated procedure – most often using Ant, Maven, Make or another build scripting language. Continuous integration servers such as Jenkins initiate the build, often triggered by the code being committed into the version control system. I try to script every single step using Ruby, Python or the available command line tools such as bash or, on Windows, PowerShell. Remember that we build once and then deploy the same built components to each environment using the same automated procedures. The build process should also automatically generate the SHA1 or MD5 hashes which can be used to verify that each configuration item was successfully deployed to its target location, and later to identify any unauthorized changes.
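
A hedged sketch of the build side might look like this; the repository URL, tag convention and Ant target are assumptions, and the manifest of hashes travels with the package so that every environment can verify the same bits:

#!/usr/bin/env bash
# build_once.sh - sketch: build from a tagged baseline and record hashes so
# the identical package can be verified in every environment.
set -euo pipefail

TAG="${1:?usage: $0 <release-tag>}"

git clone --branch "$TAG" --depth 1 https://example.com/scm/myapp.git build
cd build
ant clean package                  # targets depend on your build.xml (or use mvn/make)

# The manifest ships inside the package; each environment reruns sha1sum -c against it.
find dist -type f -print0 | xargs -0 sha1sum > dist/MANIFEST.sha1
tar -czf "../myapp-$TAG.tar.gz" dist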

Infrastructure should always be built using automated procedures, an approach known as infrastructure as code. Managing configurations and environment dependencies should also be handled via automated procedures. It is important that deployment procedures be reliable and verifiable, logging each step as it completes. The procedures should yield the same results no matter how many times they are run, a property known as idempotence.
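
As a tiny illustration of idempotence (the user and directory names are assumptions), each step below either checks first or uses a form that is safe to repeat, so running the script twice produces the same result as running it once:

# Idempotent provisioning steps: safe to run any number of times.
id -u appuser >/dev/null 2>&1 || useradd --system appuser
mkdir -p /opt/myapp/releases
chown -R appuser: /opt/myapp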

Creating the deployment pipeline is not trivial and you may need to complete this effort in steps. I always start with attended automation: I do my best to automate each step, then require that an operator watch the screen, verify that each step has completed successfully, and press Enter to continue. Over time, I may learn enough to fully automate the build, package and deployment as that magical “push button” that we all strive for, but in the meantime the steps I have outlined will significantly improve the reliability of your deployment process – eliminating many potential sources of error.
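
Here is a hedged sketch of that attended pattern; the step scripts are placeholders for whatever your deployment actually does:

#!/usr/bin/env bash
# attended_deploy.sh - sketch: each automated step runs, then the operator
# verifies the result on screen before the script continues.
set -euo pipefail

confirm() {
    read -r -p "$1 - verify on screen, then press Enter to continue (Ctrl-C aborts) "
}

./stop_app.sh        && confirm "Application stopped"
./deploy_package.sh  && confirm "Package deployed"
./start_app.sh       && confirm "Application started and the health check is green"
echo "Attended deployment complete"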

If you want to succeed in creating the deployment pipeline, then start with small steps. Eliminate any possible sources of error; manual steps will result in human error. Taking an agile, iterative approach to creating the fully automated pipeline will help you deliver changes as often as necessary while ensuring security and reliability. Make sure that you drop me a line and share your best practices for creating the deployment pipeline!

Imitation is Limitation – Why Your Agile and DevOps Transformations are Failing


By Nicole Bryan

If you’re a business or IT leader trying to compete in a digital world, you need to leverage your software delivery capabilities to the max to stay competitive. If you don’t, your organization is ripe for digital disruption from younger, more digital-centric companies.

To address this threat, you’ve probably adopted widely publicized Agile and DevOps practices to enhance your software delivery. Perhaps you or one of your team were inspired by a dazzling presentation at a tech conference that implied, “If you copy this model, you can enjoy the same success!”

But tread softly. There is no ‘one-size-fits-all’ approach to Agile and DevOps, and imitating the successful transformations of Facebook, Netflix, Airbnb, et al. will not necessarily improve your capability to quickly deliver quality software. In fact, it may be detrimental, leading to more bottlenecks and waste that could cost your business millions in lost productivity.

Why is there no silver bullet or magical blueprint? Largely it’s because you’re not the same as those digital disruptors. In fact, you’re distinctly different businesses, each with its own unique software ecosystems. These nuances must be understood if you’re to have any success with your digital transformation.

To understand why, it’s important to remind ourselves of the core values of Agile and DevOps. Both are about delivering value to the customer and for the business, and regard operational waste as the scourge of delivering consistent end-to-end value. With this in mind, let’s look at how waste is created, and value lost, when a large organization tries to scale its transformations.

  • Application-critical

Digital-native companies can adopt a ‘trial and error’ approach to building new software innovations, relaxed in the knowledge that it’s not the end of the world if there’s a bug in the system (as it can be easily rectified). Larger organizations, such as banks or healthcare providers, require a more diligent approach. They will be using heavyweight tools to ensure rigorous scanning of requirements, builds and tests to limit the risk of software downtime, so that customers can always access bank accounts and clinicians can always find medical records. Being tied to such influential legacy tools within the workflow can slow down the speed at which teams can operate – unless these tools are working in harmony with all the other tools in the lifecycle, which they’re not naturally designed to do.

  • Audit

Back in the day, companies such as Facebook and LinkedIn were private and didn’t need to worry about regulatory or corporate policies. Now that they are public, they must adhere to strict rules and regulations, as do most established organizations. Audits are a huge part of industries such as financial and healthcare, and the software delivery environment must document every activity within the software lifecycle. More teams and tools mean more elements that need to be consistently recorded across all systems and databases so there’s ‘one source of truth.’ Without an automated flow of information into a centralized point, this process is a time-consuming manual entry task for big enterprises, generating huge amounts of waste on non-value work. Smaller private companies don’t have such pressing audit concerns, if any at all, meaning more time spent on value-added activities.

  • Developer pool

Large global organizations tend to have thousands more developers than their younger competitors. All of these developers have their preferred tools and Agile methodologies, which creates much conflict and discord. Meanwhile, digital-native companies tend to work from the same place – the same tool, same methodology and a clear, shared goal. A connected software value stream can help organizations create unity and increase understanding in every aspect of the software development and delivery lifecycle, removing waste and keeping the focus on value-added work.

  • Partner concerns

To operate at a high level across the board, parts of the business are outsourced to third parties to develop components or even whole non-trivial applications. Unfortunately, these third parties are unlikely to share the same toolset as their client, creating a discontinuity of information that causes unnecessary friction. Again, digital-centric companies are unlikely to have such concerns, as they won’t be delegating or outsourcing on such a large scale. With a system in place to connect an organization’s toolchain with its partners, the flow of information can be instant and controlled.

These are just a few of the key encumbrances that larger organizations must contend with, and that heavily influence any Agile and DevOps transformation. By this point, you may be thinking “Well, what’s the point? The game’s over. Throw in the towel!” But don’t give up, because there is a way forward.

Many transformations fail because the flow of project-critical information between key stakeholders is too slow, damaged or AWOL. This means much waste, a plethora of doomed projects and a lot of unhappy, disengaged employees.

This information must flow across tools, teams, disciplines, organizations and partners, just as the data’s creation intended. To do this, you need to integrate your software lifecycle, which resolves the conflict and friction caused by scaling tool deployments and projects.

Not only will this remove the barriers between tools and teams, helping them to work together and enhancing their individual value, but it will enable the enterprise to easily expand and manage its tool landscape. And it will result in a software ecosystem that is tailored to an individual organization’s needs and that supports customer requirements.

By creating an integrated software value stream, you create a robust backbone with which to scale Agile and DevOps transformations, enabling your organization to compete (and innovate) in a digital world.

Nicole Bryan is Vice President, Product Management at Tasktop Technologies. Nicole has extensive experience in software and product development, focused primarily on bringing data visualization and human considerations to the forefront of Application Lifecycle Management. Most recently, she served as director of product management at Borland Software/Micro Focus, where she was responsible for creating a new Agile development management tool. Prior to Borland, she was a director at the New York Stock Exchange (NYSE) Regulatory Division, where she managed some of the first Agile project teams at the NYSE, and VP of engineering at OneHarbor (purchased by National City Investments). Nicole holds a Master of Science in Computer Science from DePaul University. She is passionate about improving how software is created and delivered – making the experience enjoyable, fun and yes, even delightful.