Parasoft – API Testing and Service Virtualization at Microsoft Build


Parasoft showcases new release of API Testing and Service Virtualization at Microsoft Build

Monrovia, CA & Seattle, WA – May 10, 2017 – Parasoft, the leader in software testing solutions, today announced the latest enhancements to its API testing and service virtualization solutions for the Microsoft environment at Microsoft Build 2017, taking place May 10-12 in Seattle. Parasoft will be featuring the new functionality at booth #209. To get started and learn more, visit: http://software.parasoft.com/virtualize/microsoft/

Parasoft SOAtest and Virtualize are widely recognized as industry standard tools for enabling teams to quickly solve today’s most challenging issues, including security, performance, and test environment obstacles. In a continued effort to improve functionality and ease-of-use for customers, Parasoft has introduced new functionality and streamlined workflows to address everyday challenges that software developers and testers face.

Parasoft has focused on three key areas with this new release:

  • Broadening access to testing through the thin client interface: Greater access enables teams to quickly initiate testing projects, facilitate correlation and collaboration, and seamlessly tie test scenarios to environments.
  • Solving data challenges through enhanced workflows: Providing quick and simple access to test data helps test designers create more efficient and effective tests.
  • Shift-left performance testing: Early-stage performance testing is available by reusing existing test artifacts in performance tests and reviewing results in the Web-enabled dashboard.

To learn more about Parasoft’s offering, please visit:

About Parasoft

Parasoft provides innovative tools that automate time-consuming testing tasks and provide management with intelligent analytics necessary to focus on what matters. Parasoft’s technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software, by integrating static and runtime analysis; unit, functional, and API testing; and service virtualization. Parasoft supports software organizations as they develop and deploy applications in the embedded, enterprise, and IoT markets. With developer testing tools, manager reporting/analytics, and executive dashboarding, Parasoft enables organizations to succeed in today’s most strategic development initiatives — agile, continuous testing, DevOps, and security.

Behaviorally Speaking: Creating the Deployment Pipeline

By Bob Aiello

Continuous Delivery (CD) depends upon the successful implementation of a fully automated deployment pipeline. As a practitioner, I have heard many folks refer to CD in overly simplistic terms, creating a naive image of deployments being fully automated as a simple push button. The truth is that continuous delivery is anything but simple and it takes some effort to create a fully automated deployment pipeline. Continuous Delivery touches every aspect of the application lifecycle and is best created in an iterative manner. This article will help you get started with creating your fully automated continuous delivery pipeline.

The first thing that you need to understand about Continuous Delivery is that it is – well – continuous. This means that you need to consider every step of the process used to build, package and deploy your application. Some folks refer to continuous deployment and continuous delivery in the same sentence; in fact, I have seen DevOps thought leaders use these terms interchangeably. Continuous Deployment refers to being able to push changes as often as necessary. Continuous Deployment can be disruptive in that users are often not ready for a change; you would use it when you must push out an urgent security patch – with or without the user's consent. Continuous Delivery is more subtle: changes are technically pushed to the target environment, but are kept hidden through a technique known as a feature toggle until they are ready to be exposed to end users. Continuous delivery helps reduce risk because changes are technically deployed and can be tested to some degree, and it is less disruptive than continuous deployment, which pushes changes whether or not the user is ready to accept them.
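To make the feature toggle idea concrete, here is a minimal sketch in Python. The toggle file name, the feature name and the checkout functions are hypothetical placeholders; real projects typically use a feature-flag service or configuration store, but the principle is the same: the new code is deployed alongside the old and stays dark until the toggle flips.

    import json
    import pathlib

    # Hypothetical toggle file maintained by the release team.
    TOGGLE_FILE = pathlib.Path("feature_toggles.json")

    def is_enabled(feature: str) -> bool:
        """Return True if the named feature toggle is switched on."""
        toggles = json.loads(TOGGLE_FILE.read_text()) if TOGGLE_FILE.exists() else {}
        return bool(toggles.get(feature, False))

    def legacy_checkout(cart):
        return f"legacy checkout for {len(cart)} items"

    def new_checkout(cart):
        return f"new checkout for {len(cart)} items"

    def checkout(cart):
        # The new flow is already deployed, but end users only see it
        # once the "new_checkout_flow" toggle is enabled.
        if is_enabled("new_checkout_flow"):
            return new_checkout(cart)
        return legacy_checkout(cart)

    if __name__ == "__main__":
        print(checkout(["book", "pen"]))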

The deployment pipeline is the automated procedure that implements continuous delivery and continuous deployment. Most often, deployment procedures consist of shell scripts, which ensure that each step of the build, package and deployment is successfully completed without any errors. If you were to look at my scripts, you would see that more than half the code is testing each step and then logging the results. Mistakes happen and it is very common for one step in the deployment automation to not complete as expected. The deployment pipeline completely eliminates manual steps, but recognizes the reality that sometimes problems occur. If a step does not complete as expected, you want the deployment to fail immediately and notify the operator as to what has occurred. One challenge that often arises is how to handle reviewing and approving proposed changes. Agile and lean change control is an absolute must-have.
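As a rough illustration of that "test every step and log it" pattern, here is a minimal Python sketch of a fail-fast step runner. The step names, commands and log file are hypothetical; the point is simply that each step is checked, logged, and the deployment halts immediately on the first failure.

    import logging
    import subprocess
    import sys

    logging.basicConfig(filename="deploy.log", level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def run_step(name, cmd):
        """Run one pipeline step, log the outcome, and fail fast on any error."""
        logging.info("starting step: %s", name)
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            logging.error("step %s failed: %s", name, result.stderr.strip())
            sys.exit(f"Deployment halted at step '{name}' - see deploy.log")
        logging.info("step %s completed", name)

    # Hypothetical steps; the real commands depend on your build and deploy tooling.
    run_step("package", ["tar", "czf", "release.tar.gz", "build/"])
    run_step("verify package", ["tar", "tzf", "release.tar.gz"])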

The secret to effective change control is to identify which changes are low risk and can be categorized as being preapproved. The remaining changes, by definition are a little more complicated, and should be reviewed and assessed for technical risk. The DevOps approach is to ensure that all of the subject matter experts (SMEs) are involved with the technical review process. It has been my experience that too often change control resembles a phone-tag game where change managers “represent” changes which they barely understand instead of ensuring that the SMEs are fully engaged with the technical review process. The deployment pipeline relies upon effective change control which employs agile and lean principles.

The deployment pipeline should pull source code from the version control system (VCS) based upon labeled (or tagged) baselines and build it via a fully automated procedure – most often using Ant, Maven, Make or another build scripting language. Continuous integration servers such as Jenkins initiate the build, often triggered by the code being committed into the version control system. I try to script every single step using Ruby, Python or the available command line tools such as bash or, on Windows, PowerShell. Remember that we build once and then deploy the same built components to each environment using the same automated procedures. The build process should also automatically generate the SHA1 or MD5 hashes which can be used to verify that each configuration item was successfully deployed to its target location, and later to identify any unauthorized changes.
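As one way to picture that hashing step, the following Python sketch walks a build directory and records a SHA1 hash for every file in a simple JSON manifest. The directory and manifest names are hypothetical; the same idea could be implemented with sha1sum in a shell script or as a task inside the build tool itself.

    import hashlib
    import json
    import pathlib

    def sha1_of(path: pathlib.Path) -> str:
        """Compute the SHA1 hash of a file, reading it in chunks."""
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_manifest(build_dir="build", manifest_file="manifest.json"):
        """Record a hash for every configuration item produced by the build."""
        root = pathlib.Path(build_dir)
        manifest = {str(p.relative_to(root)): sha1_of(p)
                    for p in sorted(root.rglob("*")) if p.is_file()}
        pathlib.Path(manifest_file).write_text(json.dumps(manifest, indent=2))
        return manifest

    if __name__ == "__main__":
        build_manifest()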

Infrastructure should always be built using automated procedures, a practice known as infrastructure as code. Managing configurations and environment dependencies should also be performed via automated procedures. It is important that deployment procedures be reliable and verifiable while also logging each step as it completes. The procedures should yield the same results no matter how many times they are run, a property known as idempotence.
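Here is a minimal sketch of what an idempotent configuration step can look like, assuming a hypothetical config file and setting. Running it once applies the change; running it again reports that nothing needed to change. In practice this work is usually delegated to tools such as Puppet, Chef or Ansible, which apply the same desired-state principle.

    import pathlib

    CONFIG = pathlib.Path("app.conf")          # hypothetical configuration file
    DESIRED_LINE = "max_connections=100"       # hypothetical desired setting

    def ensure_setting() -> str:
        """Make sure the desired setting is present; safe to run any number of times."""
        lines = CONFIG.read_text().splitlines() if CONFIG.exists() else []
        if DESIRED_LINE in lines:
            return "unchanged"                 # already in the desired state
        lines = [line for line in lines if not line.startswith("max_connections=")]
        lines.append(DESIRED_LINE)
        CONFIG.write_text("\n".join(lines) + "\n")
        return "updated"

    if __name__ == "__main__":
        print(ensure_setting())   # "updated" the first time, "unchanged" afterwards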

Creating the deployment pipeline is not trivial and you may need to complete this effort in steps. I always start with attended automation, where I do my best to automate each step but require that an operator look at the screen, verify that each step has completed successfully and then press Enter. Over time, I may learn enough to fully automate the build, package and deployment as that magical "push button" that we all strive for, but in the meantime the steps I have outlined will significantly improve the reliability of your deployment process – eliminating many potential sources of error.

If you want to succeed in creating the deployment pipeline, then start with small steps. Eliminate any possible sources of error; manual steps will inevitably result in human error. Taking an agile, iterative approach to creating the fully automated pipeline will help you deliver changes as often as necessary while ensuring security and reliability. Make sure that you drop me a line and share your best practices for creating the deployment pipeline!

Imitation is Limitation – Why Your Agile and DevOps Transformations are Failing


By Nicole Bryan

If you’re a business or IT leader trying to compete in a digital world, you need to leverage your software delivery capabilities to the max to stay competitive. If you don’t, your organization is ripe for digital disruption from younger, more digital-centric companies.

To address this threat, you’ve probably adopted widely publicized Agile and DevOps practices to enhance your software delivery. Perhaps you or one of your team were inspired by a dazzling presentation at a tech conference that implied, “If you copy this model, you can enjoy the same success!”

But tread softly. There is no ‘one-size-fits-all’ approach to Agile and DevOps, and imitating the successful transformations of Facebook, Netflix, Airbnb, et al. will not necessarily improve your capability to quickly deliver quality software. In fact, it may be detrimental, leading to more bottlenecks and waste that could cost your business millions in lost productivity.

Why is there no silver bullet or magical blueprint? Largely it's because you're not the same as those digital disruptors. In fact, you're distinctly different businesses, each with its own unique software ecosystem. These nuances must be understood if you're to have any success with your digital transformation.

To understand why, it’s important to remind ourselves of the core values of Agile and DevOps. Both are about delivering value to the customer and for the business, and regard operational waste as the scourge of delivering consistent end-to-end value. With this in mind, let’s look at how waste is created, and value lost, when a large organization tries to scale its transformations.

  • Application-critical

Digital-native companies can adopt a ‘trial and error’ approach to building new software innovations, relaxed in the knowledge that it’s not the end of the world if there’s a bug in the system (as it can be easily rectified). Larger organizations, such as banks or healthcare providers, require a more diligent approach. They will be using heavyweight tools to ensure rigorous scanning of requirements, builds and tests to limit the risk of software downtime, so that customers can always access their bank accounts and medical records. Being tied to such influential legacy tools within the workflow can slow down the speed at which teams can operate – unless these tools are working in harmony with all the other tools in the lifecycle, which they’re not naturally designed to do.

  • Audit

Back in the day, companies such as Facebook and LinkedIn were private and didn’t need to worry about regulatory or corporate policies. Now that they are public, they must adhere to strict rules and regulations, as do most established organizations. Audits are a huge part of industries such as financial and healthcare, and the software delivery environment must document every activity within the software lifecycle. More teams and tools mean more elements that need to be consistently recorded across all systems and databases so there’s ‘one source of truth.’ Without an automated flow of information into a centralized point, this process is a time-consuming manual entry task for big enterprises, generating huge amounts of waste on non-value work. Smaller private companies don’t have such pressing audit concerns, if any at all, meaning more time spent on value-added activities.

  • Developer pool

Large global organizations tend to have thousands more developers than their younger competitors. These developers all have their own preferred tools and Agile methodologies, which creates conflict and discord. Meanwhile, digital-native companies tend to work from the same place – the same tool, the same methodology and a clear, shared goal. A connected software value stream can help organizations create unity and increase understanding in every aspect of the software development and delivery lifecycle, removing waste and keeping the focus on value-added work.

  • Partner concerns

To operate at a high level across the board, parts of the business are outsourced to third parties to develop components or even whole non-trivial applications. Unfortunately, these third parties are unlikely to share the same toolset as their client, creating a discontinuity of information that causes unnecessary friction. Again, digital-centric companies are unlikely to have such concerns, as they won’t be delegating or outsourcing on such a large scale. With a system in place to connect an organization’s toolchain with its partners, the flow of information can be instant and controlled.

These are just a few of the key encumbrances that larger organizations must contend with, and that heavily influence any Agile and DevOps transformation. By this point, you may be thinking “Well, what’s the point? The game’s over. Throw in the towel!” But don’t give up, because there is a way forward.

Many transformations fail because the flow of project-critical information between key stakeholders is too slow, damaged or AWOL. This means much waste, a plethora of doomed projects and a lot of unhappy, disengaged employees.

This information must flow across tools, teams, disciplines, organizations and partners, as it was always intended to. To achieve this, you need to integrate your software lifecycle, which resolves the conflict and friction caused by scaling tool deployments and projects.

Not only will this remove the barriers between tools and teams, helping them to work together and enhancing their individual value, but it will enable the enterprise to easily expand and manage its tool landscape. And it will result in a software ecosystem that is tailored to an individual organization’s needs and that supports customer requirements.

By creating an integrated software value stream, you create a robust backbone with which to scale Agile and DevOps transformations, enabling your organization to compete (and innovate) in a digital world.

Nicole Bryan is Vice President, Product Management at Tasktop Technologies. Nicole has extensive experience in software and product development, focused primarily on bringing data visualization and human considerations to the forefront of Application Lifecycle Management. Most recently, she served as director of product management at Borland Software/Micro Focus, where she was responsible for creating a new Agile development management tool. Prior to Borland, she was a director at the New York Stock Exchange (NYSE) Regulatory Division, where she managed some of the first Agile project teams at the NYSE, and VP of engineering at OneHarbor (purchased by National City Investments). Nicole holds a Master of Science in Computer Science from DePaul University. She is passionate about improving how software is created and delivered – making the experience enjoyable, fun and yes, even delightful.


Test Environment Management: The Secret Ingredient for DevOps



Nikhil Kaul, Senior Product Marketing Manager at SmartBear Software

The rapid adoption of DevOps necessitates continuous delivery and a shift toward higher levels of test automation. As a result, QA teams often invest a lot of time and effort in ensuring their tests are automated as a part of the continuous testing cycle. The right test automation tools allow teams to increase test speed and coverage. In fact, a common framework agile teams use while navigating this process is the test automation pyramid – an approach that focuses on automating tests at three levels: unit, service and user interface (UI).

At the base of this pyramid is unit testing. By running more unit tests at a lower level, teams can get feedback faster and develop a solid testing base to build upon. The intermediate layer consists of API tests, which work under the UI of an application. Finally, the top of the pyramid represents end-to-end UI tests.

To implement a test pyramid approach, teams spend a lot of time and energy hunting for the right test automation tools. For unit-level testing, for example, teams use JUnit or NUnit. Similarly, for API-level testing, teams love SoapUI, while for UI testing, TestComplete or open source tools like Selenium can be used.
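As a rough illustration of the lower two layers of the pyramid, here is a minimal Python sketch using pytest-style test functions and the requests library. The discount function and the REST endpoint are hypothetical placeholders; the point is that unit tests exercise logic directly with no I/O, while API tests exercise the running application just below the UI.

    import requests

    def apply_discount(price, percent):
        """Hypothetical unit under test."""
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount():
        # Unit layer: fast, isolated, no network or UI involved.
        assert apply_discount(100.0, 15) == 85.0

    def test_get_order_status():
        # Service/API layer: exercises the application beneath the UI.
        # The endpoint below is a hypothetical example.
        response = requests.get("https://api.example.test/orders/42", timeout=10)
        assert response.status_code == 200
        assert response.json()["status"] == "shipped"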

A DevOps chain can only be as strong as its weakest link. And often one area that is neglected during such a test pyramid approach is the way teams go about managing their test environments.

Test automation efforts often falter as the efficiency gained by investing in test automation is lost when teams still maintain the test infrastructure manually. In the case of UI testing, teams can spend hours provisioning, maintaining, monitoring and tearing down the underlying test environment infrastructure.

As an example, without a proper strategy, teams spend a significant portion of their testing cycles ensuring the right testing environments are available. And often, even when these environments are available, there are large discrepancies between test environments and production environments.

Traditional Ways of Managing Test Environments


There are three common tactics teams use to manage test environments. One of the easiest ways is to use local machines (i.e., browsers hosted locally). The second way is to use virtual machines, such as those on Amazon EC2. Finally, the last way teams go about managing test environments is by leveraging device labs, which primarily consist of rented or purchased mobile devices.

Regardless of which of these three routes a team takes, there is manual work required to maintain and upgrade environments when new browsers, operating systems, devices or resolutions are introduced.

In addition to the manual efforts required to maintain these UI test environments, there are other challenges such as reduced speed to delivery, redundancy and discrepancies.

Speed and Agility Concerns


Setting up environments on local machines as a part of your testing cycle automatically slows down the entire process. QA teams frequently find that running UI tests on virtual machines can help avoid up-front hardware costs, but there is still a manual component involved in ensuring the right configurations are available when tests are kicked off. Provisioning clean test environments often means spinning up new VMs with new configurations, which can be time-consuming and adds to the labor cost.

Parallel Execution Needs a Lot of Work


Having the ability to run tests in parallel across various browser and OS combinations can be useful for speeding up the testing process and increasing coverage. However, setting up parallel execution capabilities for UI tests with traditional test environment management approaches like local machines, VMs and device labs can be a painstaking process.

To run UI tests on different devices, teams need to set up a hub with multiple node devices. Registering a node with the hub involves specifying arguments like browser type, version, hostname, port and so on. If these arguments change or get updated, QA might have to go back to the command line or configuration file to update them. Additionally, adding an iPad or an Android device as a node requires even more steps.
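For context, the sketch below shows the client side of that setup in Python: a test pointing at a hypothetical Selenium Grid hub URL and requesting a particular browser. The hub and its nodes still have to be started and registered separately with the selenium-server command, which is exactly the manual configuration work described above.

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    # Hypothetical hub address; the hub and node processes must already be running.
    HUB_URL = "http://grid.internal.example:4444/wd/hub"

    options = Options()                      # requests a Chrome node on the grid
    driver = webdriver.Remote(command_executor=HUB_URL, options=options)
    try:
        driver.get("https://www.example.com")
        assert "Example" in driver.title     # trivial smoke check
    finally:
        driver.quit()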

Redundant Activities


Setting up environments locally results in duplicated effort, as each individual has to perform the exact same activity. If teams are maintaining an in-house device lab, the redundant effort associated with upgrading various browser and OS combinations can grow exponentially. Keeping systems and devices up to date becomes more challenging as teams grow and new environments (web browsers, operating system versions and resolutions) are added to the mix to improve coverage.

Increased Cost


Setting up local environments, using VMs or device labs can often result in cost overruns. In fact, costs encountered while using these options can be primarily broken down into three different buckets: device costs, labor costs and licensing costs.

Device cost represents the cost associated with having devices with the right configurations of operating systems, resolutions and browser versions.

Labor cost consists of the time and effort your operations, test or DBA team spends on maintaining, upgrading or even tearing down test environments, database servers and labs, among others.

Licensing costs cover the additional software needed to ensure environments are working as expected, both with device labs and VMs. For example, you'll likely want to know if and when your test environments go down. Setting up this type of notification and monitoring system for your environments requires additional spend on a third-party vendor.

UI Test Environment Management: What to Look for?


While looking for the best way to optimize and manage your test environments, there are four key aspects to take into consideration.

  1. Test Environments Should be Easy to Install

Easy installation helps ensure that application development and delivery (AD&D) teams don't spend a considerable amount of time on setup and configuration.

  2. Test Environments Should be Easy to Enhance

If a new browser version is released, the needs of your end users will change accordingly – forcing you to modify your test environments. Test environments that are easy to enhance are therefore very helpful, especially in a DevOps environment.

  3. Test Environments Should be Easy to Share

The easier it is to share test environments among team members, the more scalable the process becomes. Running UI tests in parallel can drastically reduce the time it takes for your test suite to complete a run, and to do that, environments must be easy to share among team members.

  4. Test Environments Should be Easy to Tear Down

As an end user yourself, you should be able to spin down, reconfigure or reinstall your environments as needed.

The Right Way to Set Up UI Test Environments


Having cloud-based test environments available on demand can help overcome scalability, cost and maintenance challenges posed by use of local machines, virtual machines or an in-house testing lab.

Cloud-based environments, like TestComplete Environment Manager, give on-demand access to over 1,500 combinations of browser versions, operating systems and resolution settings. Application development and delivery teams can easily and quickly execute and report on automated UI tests across test environments without setup or configuration.

Other benefits of using cloud-based test environments include:

Easy to Install, Enhance, Share and Teardown

Cloud-based test environments are incredibly easy to manage and are accessible anytime, eliminating the need for an installation process altogether. You simply need to link those environments to your tests. Cloud-based environments are also simple to enhance: when new browser versions are released, a good cloud-based environment solution will update them for you, eliminating the need to spend time on upgrades.

As you’re accessing a cloud-based environment through a URL, it is easy to share with anyone, enabling you to scale as needed. Tearing down environments in this case is as simple as deciding to not use the environments.

Parallel Test Execution Without Setup

The challenges associated with setting up environments for parallel execution can be easily overcome with cloud-based environments as they are built with frameworks to handle concurrency.
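To show what kicking off tests in parallel can look like from the client side, here is a small Python sketch that fans a test function out across several browser configurations with a thread pool. The configurations and the run_smoke_test body are hypothetical; with a cloud-based grid the concurrency on the environment side is handled for you, so the client only needs to dispatch the runs.

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical browser/OS combinations provided by a cloud-based grid.
    CONFIGS = [
        {"browser": "chrome", "os": "Windows 10"},
        {"browser": "firefox", "os": "Windows 10"},
        {"browser": "safari", "os": "macOS"},
    ]

    def run_smoke_test(config):
        """Placeholder for a real UI test run against the given environment."""
        # A real implementation would create a remote browser session here.
        return f"{config['browser']} on {config['os']}: passed"

    with ThreadPoolExecutor(max_workers=len(CONFIGS)) as pool:
        for outcome in pool.map(run_smoke_test, CONFIGS):
            print(outcome)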

Report Across Environments with Zero Configurations

This is a gold mine. The reporting is built-in, so QA managers can pull metrics and see how tests are trending across operating systems, browsers or resolution settings – all in one place.

Combine Manual and Visual Tests with Automated Tests

Cloud-based test environments offer complementary benefits to automated testing which you won't get through virtual machines, on-premises systems or local devices.

For example, you will often realize that manual or visual testing is needed to uncover issues that are hard to find through automated test scripts. These include:

  1. Verifying that aspects of the UI are the right color, shape or size
  2. Uncovering overlapping elements
  3. Ensuring the appearance and usability of a website are as expected

The right cloud-based test environment management tool will have out-of-the-box capabilities to tie these into your test automation framework.


Nikhil Kaul is the Senior Product Marketing Manager of Testing Products at SmartBear Software. Prior to SmartBear, Nikhil served Digité, a leading provider of Application Lifecycle Management solutions, in various roles including Senior Software Executive, where he gained insights into the software development and testing market. Nikhil received a master's degree in business administration from Georgetown University. He is very involved in discussions taking place in the development and testing communities. Follow him on Twitter @kaulnikhil or read his blog at: http://blog.smartbear.com/author/nikhil-kaul


Demystifying DevOps – Which DevOps do you mean?

by Uday Kumar

DevOps has been the absolute buzzword of the IT world over the last couple of years and yet its precise meaning still remains unclear. Along with Agile, this word represents a promising new wave within software development. Currently, a team of industry experts is collaborating to develop a universal standard for this ambiguous and confusing term. Until this IEEE working group presents their final document, though, those in the trenches will continue to interpret the concept with flexibility, based on a balance of experience and convenience.

Personally, viewing DevOps primarily as a new organizational culture is definitely not my cup of tea. I see greater value in the DevOps focus on maximizing team members’ creative utilization of process and tools.

I have spent a lot of time and researched many sources to determine exactly what DevOps means. Basically, there are four categories that are broadly associated with this term. In this blog, we will cover each of these categories.

Apart from DevOps, there are several other related buzzwords, like ChatOps, CloudOps and SecOps, that are now trending, based on the increasingly popular idea that software development is, by necessity, an integrative process. These related terms are out of scope for this blog.

Category 1: DevOps – Software Developers (Dev) and IT Operations (Ops) – CI & CD

DevOps is primarily defined as Collaboration, Communication and Integration between Software Developers and IT Operations, groups whose fundamental interest areas are usually different and contrasting.

Software Developers (Dev) want to focus on creating new code and applications while IT Operations (Ops) wants to focus on sustainability or quality. When the application is not working as expected, Dev often thinks “it is not my code, it’s your machine”, whereas Ops thinks “it’s not my machine, it must be your code”. DevOps is a term coined to reflect the bridge which must be built between these two teams so that new code gets deployed on the production systems smoothly without blaming one another.

In order to achieve this seamless continuity, we need to have the essential process and platform in place; two essential ingredients are CI (continuous integration) and CD (continuous deployment). Tools plus automation are key. Companies will often implement Agile principles to achieve the highest degree of success with CI and CD. IBM Bluemix and CloudBees are SaaS-based products with ready-made CI and CD platforms for many different applications (especially web and mobile). Take a look at the Sonatype SlideShare presentation on architecture types to get a good overview of this topic. It is a very good reference for a deep dive into this helpful technology.

Although the majority of the people with whom I have interacted adopt this view of DevOps, many companies have started realizing that CI and CD implementation is only able to provide a partial fix to a complex problem.

Categories 2 and 3: DevOps – Infrastructure (Ops) as Code (Dev)

The IT Operations team is generally responsible for managing the infrastructure and updating the software (OS/application), traditionally by hand. Updating this configuration as part of change management is one essential step of the ITIL process. It is crucial that any and all changes be documented, as well as auditable.

Virtualization and Cloud technologies have enabled team members to write the code necessary to create and manage the infrastructure, as well as to control changes by updating that code. Docker, Puppet and Chef are taking this to the next level. Effectively, all of the operations that are performed by the IT team are now getting automated. Code abstracts away the complexity of managing the infrastructure, so developers don't have to spell out the required infrastructure as a separate guideline. The marketing materials of Puppet and Chef reflect the awareness that effective automation of Configuration Management currently falls under the DevOps umbrella. Clearly, the DevOps term provides an apt metaphor for the way to achieve the most stable environments.

Category 2: Internal infrastructure management for conducting different levels of testing, either on internal servers or in the cloud (external).

This setup is quite complex, especially for product companies (not SaaS), as they need to support various product versions and variants. Along with virtualization and cloud, technologies like Docker, Puppet and/or Chef fit well here. Nowadays, many job profiles in this area carry the DevOps label.

Category 3: Managing the production servers, especially for SaaS products (applications) like Amazon, Google Apps and Salesforce.

DevOps demands continuous automation, which means that scripts, instead of people, initiate automated jobs, including continuously deploying software updates. Teams also use automation to check the health of the system and its environment by monitoring applications, securing them, load balancing, dynamically provisioning new servers as needed, and even ensuring automatic recovery should a problem arise. Implementation of ITIL-based solutions is also considered to fall within DevOps.
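A minimal sketch of that kind of self-healing automation, in Python, might look like the following. The health endpoint and the restart command are hypothetical placeholders; production setups would typically rely on the platform's own monitoring and orchestration services rather than a hand-rolled loop, but the principle is the same.

    import subprocess
    import time
    import urllib.request

    HEALTH_URL = "http://app.internal.example:8080/health"   # hypothetical endpoint
    CHECK_INTERVAL_SECONDS = 30

    def healthy() -> bool:
        """Return True if the application answers its health check."""
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
                return response.status == 200
        except OSError:
            return False

    while True:
        if not healthy():
            # Hypothetical recovery action: log and restart the service.
            print("health check failed - attempting restart")
            subprocess.run(["systemctl", "restart", "app.service"], check=False)
        time.sleep(CHECK_INTERVAL_SECONDS)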

Personally, I don't like to label automation of IT operations as being “DevOps”. The first reason is that the automation of IT operations was around long before anyone started using the term DevOps. Addteq has been automating processes since we first opened our company over 11 years ago, long before the DevOps term caught on. Secondly, DevOps is really about improving communication, collaboration and integration between groups, including business end users, development and operations, among other key stakeholders. IT automation is essential, but DevOps has an even broader focus.

Category 4: BizDevOps – Product Management (Biz), Engineering (Dev) and IT Operations (Ops)

This new term, though not yet a very popular buzzword compared to the others discussed, may just be my favorite. Probably only heard over the past two-plus years, and quickly picking up speed, this three-pronged word covers the entire value stream, starting from the initial customer request straight through to delivery. Unified cross-functional teams work together (collaboration, communication and integration) in order to generate value for the customer, while simultaneously and judiciously balancing the available resources. According to some, this can be considered Agile (Scrum, Kanban or enterprise Agile frameworks) combined with the CI and CD mentioned above. IBM, CA, and XebiaLabs continue to promote this categorization under the DevOps label.


I use the analogy of physical manufacturing to understand the software lifecycle: the development process mirrors the steps implemented in a manufacturing factory, while the cross-functional value chain (from Product Management to IT Operations) may be thought of as a manufacturing assembly line. To achieve enterprise agility, the assembly line must be continuous. This assembly line is the Continuous Delivery (development + integration + deployment + testing + service) platform, which is critical to enable continuity. Integrated Release Management and Application Lifecycle Management (ALM), along with automation, are really other terms that refer to the continuous delivery platform.

At Addteq, we provide services for all four of the above categories. Plus, we are always adding and updating our offerings based on the latest industry developments. For more details, you can refer to our extensive slide deck.

We invite customer and community input regarding this timely debate: which category do you regard as “the real DevOps”? Is your understanding of DevOps included in our category list?

As there is so much confusion around this word, the next time you say DevOps to someone, it might be helpful to specify to which aspect you are referring. And if someone is discussing DevOps, then don’t hesitate to request clarification, especially during consulting and recruitment. DevOps can’t work unless everyone is on the same page.

We encourage you to add your comments or perspectives.

About the author:

Uday Kumar is Product Manager and DevOps ALM Management Consultant for Addteq, an Atlassian Platinum Solution Partner providing business solutions to enterprise clients. Uday is an intrapreneur, driving innovation within his organization, including its software development centers, as he works towards a vision he refers to as the “Scientific Management of the Software Industry”. Uday focuses on reducing all types of operational waste while achieving operational excellence.

Predictions for the Coming Year

by Bob Aiello

This is an exciting time to be part of the technology industry. The demand for complex systems is surpassed only by user expectations that new features, as well as bugfixes, can be delivered rapidly – even during normal business hours. The internet of things (IoT) has expanded the definition of connectivity to everything from your car to your washing machine. Mobile devices can help monitor your smart house and optimize your electricity and heating bills while ensuring mobile connectivity throughout your international travel. Cloud-based resources deliver remarkable scalability at low cost, enabling the analysis of big data, which continuously astounds with new applications of its remarkable business intelligence capabilities. Savvy architects ensure not only an API-first, but often an API-only, architecture. While there are limitless possibilities, there are also limitless risks.

Cybersecurity has emerged as a core capability to address the challenges of cybercrime (and, for state entities, the related challenges of cyberwarfare and even cyberterrorism). The remarkable number of high-profile security incidents has damaged the reputations of many firms and led many consumers to view cyber-responsibility as a must-have. In the coming year, consumers will expect companies to up their game with regard to cyber capabilities that ensure systems security and reliability. Technology professionals will need to embrace a comprehensive approach to continuous security that reaches from designing systems to be secure, through secure code analysis, to thorough penetration testing in order to confirm that applications have not just been designed to be secure, but also built, packaged and deployed securely.

DevOps is growing beyond deployment engineering to encompass a broader set of competencies which enable the rapid delivery of business capabilities. DevOps itself will mature from the domain of internet startups to proficiencies which can align with the demands of even the most highly regulated financial services firms. DevOps is no longer just about development and operations; it enables better communication and collaboration between organizational units throughout the enterprise. This journey to maturity must include an industry standard to guide the use of DevOps in firms which must adhere to audit and regulatory requirements, and seasoned IT professionals are working on that effort right now. Feel free to contact me for more information on the IEEE P2675 DevOps Working Group.

Agile development has established itself as the preferred approach to software engineering. But many organizations struggle to achieve agility due to inevitable constraints – from idiosyncratic corporate culture to the complex demands of technology development efforts. As the old adage goes, “nine women cannot make a baby in one month,” and the realities of systems constraints often make iterative development challenging, if not nearly impossible. In 2017, firms will likely need to focus more on software process improvement than on trying to meet the demands inherent in agile development. That's right; I am saying to focus more on process improvement and less on proving that you are agile! Even projects which must use a waterfall methodology can benefit greatly from the wisdom of the Agile Manifesto and agile principles. Our new book on Agile ALM & DevOps discusses this approach in detail. As it always has, process maturity will need to include traceability, while still achieving fully agile application lifecycle management (ALM). Organizations in highly regulated industries, including financial services, will need to implement agile ALM while still maintaining fully traceable processes.

DevOps and Agile ALM will accelerate your process and developer velocity; however, your performance will be significantly impacted unless you have implemented continuous testing, including API testing and the use of service virtualization. Unit and functional testing are essential, but not sufficient, to keep up with the demands of Agile and DevOps. Agile depends upon DevOps, and DevOps relies upon continuous testing and continuous security.

Tools are essential for successful software development and solid tools integration is also crucial. Vendors will continue to evolve in terms of building tools that can be integrated across the ecosystem. Vendors themselves will continue to play a key leadership role in DevOps innovation with every vendor stretching to ensure that their solutions help enable DevOps.

Bob Aiello (bob.aiello@ieee.org)


Enterprise DevOps and Microservices in 2017

By Anders Wallgren, CTO, Electric Cloud

As a new year begins, we often take some time to reflect on the previous year and look ahead at what is to come. Suffice to say, 2016 was a momentous year for software delivery in the enterprise. DevOps adoptions matured greatly and, at conferences worldwide, industry leaders were eager to share their experience and expertise.

While DevOps was taking over, two trends focusing on ‘small’ things were making big waves: containers and microservices as a means for organizations to scale their application development and releases. While both technologies have matured considerably in the last year, they are still challenging – particularly for enterprises that need to incorporate these new trends alongside monolithic applications, traditional releases, VMs, etc. While neither is a silver bullet suitable for every use case, we're seeing more and more enterprises exploring Docker and microservices for their needs, and I'm sure these trends will continue to drive positive advancements in modern application delivery for the coming year (and beyond).

Continue reading for my insights on some of the opportunities and challenges surrounding these two major trends, as well as a few predictions for what’s to come in 2017.

2016 Trends – Looking Back

DevOps has matured:

DevOps experienced considerable maturation and advancement in 2016. From its grassroots beginnings with small teams, mainly on green-field applications or at startup companies, DevOps has matured to the point where complex enterprises – not just the unicorns – are now often at the forefront of DevOps innovation. DevOps practices have expanded beyond just web apps and are now used for database deployments, embedded devices, and even mainframes.

Microservices and Containers go hand in hand:

Microservices are an attractive DevOps pattern because they enable speed to market. With each microservice being developed, deployed and run independently (often using different languages, technology stacks, and tools), microservices allow organizations to “divide and conquer” and scale teams and applications more efficiently. When the pipeline is not locked into a monolithic configuration – of toolset, component dependencies, release processes or infrastructure – there is a unique ability to better scale development and operations. It also helps organizations easily determine which services don't need scaling, in order to optimize resource utilization.

However, we saw in 2016 that adopting containers and microservices can be challenging. What's starting to happen, and what some people are starting to realize the hard way, is that you're going to have a really hard time doing containers and microservices well with bad architecture. Whether it is the architecture of your application, services, infrastructure or delivery pipeline – architecture matters greatly in your ability to do microservices well and take advantage of the benefits they, and containers, can offer. Microservices are not for everyone, and if you're looking to microservices as a way to “uncomplicate” your life, you certainly won't find that if you haven't first resolved CI, automated testing, monitoring, high availability or other major prerequisites. If you're struggling in one of these areas, it is highly recommended that you address those issues first; then you can decide if microservices will be an asset to your organization.

Legacy applications are another challenging factor when we look at the surge of demand for microservices over the past year. Do you decompose your application, or parts of it? Do you build an API around it? Do you completely rewrite it? We've learned the value of a phased approach when re-architecting a legacy application or dealing with an organization that has established processes and requirements for ensuring security and regulatory compliance. In these instances, teams should consider starting with a monolithic application where practical and then gradually pull functions into separate services.

Microservices and containers will continue to be major players in 2017, and gradually some of the kinks and challenges associated with them will be ironed out as the community as a whole becomes more proficient with them, and tools and patterns are introduced to accelerate the adoption and large-scale operations of these technologies.

What’s Ahead in 2017  

DevOps failures will come to the forefront

While the hype around embarking on a DevOps transformation is credible, and the ROI and transformational benefits have been established, there has been a silent backlash that will make itself known in 2017. This will be the year we start to hear about some of the failures – which is good, because we learn from them. There will be quite a few people who say, “Yeah, we tried DevOps. It doesn't work. It's all hype. It only works for the unicorns. It only works for new software. It only, it only, it only…” This type of doubt is bound to happen when any new process or framework is introduced into legacy environments and cultures, and 2017 is the time for the doubt around DevOps to rise. Every DevOps adoption story is unique, and every journey is one of trial and error and the road towards continuous improvement. We know the road is challenging, frustrating at times, and certainly not all roses. But the rewards are immense. As the community shares patterns and learnings about what works, we should also be more forthcoming about sharing our failures, setbacks and wrong turns, so we can all learn and become better at realizing software. Together.

Importance of Microservices Adoption

2017 looks to be a year where microservices and containers will continue to rise and get pushed even further into the limelight as companies continue to invest in software-driven innovation and technology. It will be critically important for companies to have a solid architecture in place and understand how to approach and scale these technologies effectively.

DevOps’ Impact on Financial Services

Furthermore, we will start to see some interesting disruption in FinServ. Interestingly, financial institutions have often seen technology as their own personal differentiator. However, more often than not, the problem these organizations have is supporting the right culture to enable a successful DevOps implementation. As veteran companies work on instilling the right mindset to enable faster releases in such a heavily regulated industry, we could also see disruption in the space, where new financial technology, new online banks, new companies, etc. take up more and more market share.

Secure DevOps

With the proliferation of IoT, our always-connected world, and growing cybersecurity threats, we are going to see more security verification and more built-in compliance validation checks happening earlier in the lifecycle, fully integrated with the development process.

Final Thoughts

All in all, what all teams and organizations need to do is keep their eyes on the ball. We practice DevOps to bring more value to our customers and employees, delight our users, and make things better, safer and faster. Keep that in mind – because if what you’re doing on any given day isn’t moving that ball forward, what was the point? Focus on that for the new year!

Behaviorally Speaking: DevOps Development

by Bob Aiello

DevOps puts the focus on developing the same build, package and release procedures to support deployment throughout the entire application lifecycle. This is a big change for many companies, where developers traditionally did their own deployments and operations joined the party late in the process, usually when the application was being prepared for promotion to production. In fact, I have seen organizations that put a tremendous amount of work into their production deployments, but failed miserably simply because they started too late in the development lifecycle. DevOps puts the focus on creating automated application lifecycle management (ALM) to support development, test, integration, QA, UAT and production. But how exactly do you develop DevOps itself, and how do you know when you have achieved success?

DevOps is not new; like many other Agile and Lean practices, DevOps makes use of principles that have been around for a long time. But, also like Agile and Lean, DevOps clarifies and highlights industry best practices in a way that is particularly compelling. Traditionally, developers have demanded free rein to build, package and deploy their own work. As a group, developers, by and large, are known to be smart and hardworking folks. Unfortunately, seasoned professionals also know that many IT problems and systems outages are caused by a smart person accidentally missing an essential step. Creating repeatable processes and ensuring uninterrupted services is primarily about creating procedures that are simple, clear and reliable. DevOps teaches us that we need to begin this journey early in the process. Instead of just automating the build and deploy to QA, UAT and Production, we shift our focus upstream and begin automating the processes for development, too.

Continuous integration has been made popular by industry expert Martin Fowler, who strongly advocated deploying to a test environment even for a development build. Creating seamless and reliable, fully automated deployment procedures is not an easy task. There are many dependencies that are often difficult to understand – much less automate. Developers spend their time learning new technologies and building their technical knowledge, skills and abilities. DevOps encourages improved communication between Development, QA and Operations. The most important part of this effort is to transfer knowledge earlier in the process – reducing risk by creating a learning organization. DevOps helps to improve both QA's and Ops' focus by enabling each of them to get involved early in the development cycle and by creating automated procedures to reliably build, package and deploy the application. If you want to be successful, then you should also strive to be agile.

DevOps was made popular by a number of agile technology experts, including those involved with what has become known as agile systems administration and agile operations. It is essential to remember that the journey to DevOps must also follow agile principles. This means that creating your automated build, package and deployment procedures should be handled in an agile, iterative way. This is exactly how I have always handled this effort. Usually, I get called in to deal with a failed application build and release process. I often have to start by performing many tasks manually. Over time, I am able to automate each of the tasks, but this is an iterative effort with many decisions made at the last responsible moment. Source code management is also an essential starting point.

Developers need to be trained to successfully use version control tools to secure the source code and reliably create milestones – including version labels or tags. Just as DevOps starts at the beginning of the lifecycle, DevOps also needs to focus on the seminal competencies of excellent source code management. Automated application build comes next, with each configuration item embedding a unique and immutable version ID. Release packages are created with embedded manifests containing a complete list of every included configuration item, whether it be a derived binary, a text-based configuration file or a Word document containing essential release notes. Good release management means that you can identify all of the code that is about to be deployed and also provide a procedure to verify that the correct code was in fact deployed (known as a physical configuration audit). It is equally essential to provide a mechanism to ensure that a release package has not been modified by an unauthorized individual.
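To make the physical configuration audit idea concrete, here is a minimal Python sketch that checks each deployed file against the hashes recorded in the release manifest. The manifest format and paths are hypothetical; the same check also flags unauthorized modifications, since any tampered file will no longer match its recorded hash.

    import hashlib
    import json
    import pathlib

    def sha1_of(path: pathlib.Path) -> str:
        """Compute the SHA1 hash of a file, reading it in chunks."""
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def physical_configuration_audit(manifest_file, deploy_root):
        """Return the list of configuration items that are missing or altered."""
        manifest = json.loads(pathlib.Path(manifest_file).read_text())
        discrepancies = []
        for relative_path, expected_hash in manifest.items():
            target = pathlib.Path(deploy_root) / relative_path
            if not target.is_file() or sha1_of(target) != expected_hash:
                discrepancies.append(relative_path)
        return discrepancies

    if __name__ == "__main__":
        # Hypothetical locations for the manifest and the deployed release.
        problems = physical_configuration_audit("manifest.json", "/opt/releases/current")
        print("audit passed" if not problems else f"audit failed for: {problems}")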

What makes these practices “DevOps” is the focus on developing these procedures from the beginning of the lifecycle and taking an agile iterative approach as they are developed. There is also an intentional effort to share knowledge equally between development, QA and operations. Development often knows the technology best, but operations understands the real world challenges that will result in developers being roused from their slumber in the middle of the night. QA contributes by developing the procedures to ensure that bugs are never found a second time in any release.

DevOps is all about the synergy of productivity and quality, with a real-world focus on sharing knowledge and building competencies!

Personality Matters – Personality of Tools

By Leslie Sachs

Do tools have personality? Writers and inventors have long suggested that machines will eventually develop to a point where they can think, learn, experience emotions, and display traits commonly associated with having a personality. Meanwhile, computer scientists have been studying thinking processes and learning machines [1]. Many people certainly believe that some tools have apparent personality flaws, including acting stubborn, unpredictable, and, at times, irrational. Science fiction aside, tools often do, in fact, display characteristics that are commonly associated with human personality, and understanding this phenomenon can help when it comes to evaluating, selecting, and implementing tools to support your software development process. This article will help you handle the people side of tools selection and adoption.

Remember Eliza?

MIT Professor Joseph Weizenbaum shocked many people with his groundbreaking work to develop Eliza, a natural language program that mimicked the non-directive probing commonly associated with Rogerian psychology [2]. Dr. Weizenbaum asked people to converse with Eliza as a way of improving the natural language capabilities of the program. Soon it became apparent that some people had trouble remembering that Eliza was only a computer program. Eliza became real to many people, and there were reports of participants refusing to show the transcripts of their conversations with Eliza to the researchers because the information revealed was too personal!

Being a People Person

I am not by nature a computer person; I prefer to relate to other human beings, along with all of their complex feelings and emotions. But I must admit that there are times when my laptop does seem remarkably like a stubborn teenager going through a tough adolescence. So what does all this blurring of human-machine qualities mean in terms of selecting, implementing, and supporting tools?

Tools Have Personalities, Too

You need only have a short conversation with a true open source enthusiast to realize that many tools have personalities that have been impacted by their process of creation and development. Open source tools usually have traits that relate to the community where they were developed – or perhaps nurtured is a better word. Those who support open source often gladly accept limitations or even bugs in their tools for the sake of maintaining the transparent and communal nature of tools written and supported by the open source community. It's not only open source tools, though, that have their own personalities; commercial products have them, too, along with their own cultural norms, especially in terms of expectations for support and maintenance.

Commercial Products

Similarly, commercial products frequently have personalities that mirror the companies that developed them, although possibly once or twice removed due to corporate acquisitions. Many companies are truly dedicated to their products and customer satisfaction. Others seem to be far less committed, although these may still have their own competencies.

Organizations Have Personalities, Too

Some organizations have almost a philosophical orientation in favor of one tool approach or another. This phenomenon is most obvious in companies that insist on only selecting from a wide array of open source tools. Other firms may require the features or perhaps the security of a commercial tools vendor. Cognitive complexity also factors into alignment of tools and their personalities by providing just enough features to get the job done effectively. Some organizations just really like to keep things simple, while others may want to push the envelope with advanced procedures and development methodology. Interestingly enough, some commercial tools vendors are learning to be a little more communal by providing a lightweight open source version of their tools.

Hybrid Approaches

Many commercial tools vendors, including IBM, are learning to structure themselves much more like their open source counterparts, with full transparency into the defects that are being fixed and the tasks being completed. Today, you can see plans for features, defects, and resources all exposed, even as these vendors work to compete with other commercial tools vendors in the same space.

Conclusion
Understanding the personality of tools and their vendors, whether they be commercial or open source, is a matter of understanding your organization’s behavior. Corporations have personalities, too. As always in psychology, you need to be self-aware and realize your own needs and capabilities. You will succeed if you can align your organization’s personality with the tools and processes that you want to implement. Feel free to drop me a line and let me know about your own challenges with understanding the personality of your tools!

References
[1] McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. W.H. Freeman & Co., 1979.
[2] Rogers, Carl. Client-Centered Therapy: Its Current Practice, Implications, and Theory. Houghton Mifflin, 1951.
[3] Aiello, Robert and Leslie Sachs. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley, 2010, p. 155.
[4] Aiello, Robert and Leslie Sachs. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley, 2016.

Parasoft Features New Continuous Testing Release at Gartner Summit


December 06, 2016 as reported on the Parasoft website

Parasoft announced an update to its Continuous Testing solution—integrating Service Virtualization, API Testing, and Test Data Management with Test Environment Management to enable early, rapid, and rigorous testing of highly connected applications. The solution is currently being featured at Parasoft’s booth #309 at Gartner Application Strategies & Solutions Summit in Las Vegas.

The latest Continuous Testing solution (Continuous Testing Platform 3.0, Virtualize 9.10, and SOAtest 9.10) provides the industry’s most powerful and comprehensive thin client interface for service virtualization and API testing. From any browser, a broad range of team members can rapidly create, leverage, and share API testing and service virtualization assets to accelerate testing and support advanced automation for Continuous Integration and Delivery scenarios.

“In this release, we have focused on broadening service virtualization access and packaging, addressing both creation and deployment,” stated Mark Lambert, VP of Products for Parasoft. “For creation, we have enabled advanced workflows of the Parasoft Virtualize Desktop, providing users the ability to create sophisticated assets directly from their browser. On the deployment side, Docker scripts, available on the Parasoft Marketplace, and prebuilt images, hosted within the Microsoft Azure Marketplace for both on-demand and BYOL licensing, provide a complete set of deployment options for scalable service virtualization.”

The new release also features:

  • Burp Suite Integration for API and Web Security Penetration Testing: Integration with Burp Suite, the application security testing tool recognized as the industry standard, brings a new level of API and web security penetration testing to the Parasoft solution.
  • HTTP/2 Support for Testing and Service Virtualization: Extending its highly trusted message/protocol support, Parasoft’s solution now supports testing and simulation of HTTP/2.
  • HTTP Archive (HAR) Support for Creating Tests and Virtual Assets from Fiddler Traffic Files: Converts HTTP Archive (HAR) files (e.g. from Fiddler) into traffic files that can be used for creating Parasoft virtual assets or test scenarios.
  • Jenkins Plugin for API Test Execution: Teams using Jenkins to execute Parasoft SOAtest tests during Continuous Integration can view SOAtest results within Jenkins and use the results of these tests to control Jenkins workflows.
  • Microsoft Azure and Visual Studio Team Services support: available from the Microsoft Marketplace

About Parasoft

Parasoft researches and develops software solutions that help organizations deliver defect-free software efficiently. By integrating development testing, API testing, and service virtualization, we reduce the time, effort, and cost of delivering secure, reliable, and compliant software. Parasoft’s enterprise and embedded development solutions are the industry’s most comprehensive—including static analysis, unit testing, requirements traceability, coverage analysis, functional and load testing, dev/test environment management, and more. The majority of Fortune 500 companies rely on Parasoft in order to produce top-quality software consistently and efficiently as they pursue agile, lean, DevOps, compliance, and safety-critical development initiatives.

Training – Configuration Management, DevOps and Agile ALM

We provide courses on DevOps, Configuration Management, and Agile Application Lifecycle Management.
Configuration Management: Robust Practices for Fast Delivery
(See outline below)
Configuration management (CM) best practices are essential for creating automated application build, package and deployment to support agile (and non-agile) application integration and testing demands, including rapidly packaging, releasing, and deploying applications into production. Classic CM—identifying system components, controlling changes, reporting the system’s configuration, and auditing—won’t do the trick anymore. Bob Aiello presents an in-depth tour of a more robust and powerful approach to CM consisting of six key functions: source code management, build engineering, environment management, change management and control, release management, and deployment, which are the prerequisites for continuous delivery and DevOps. Bob describes current and emerging CM trends—support for agile development, container-based deployments including Docker, cloud computing, and mobile apps development—and reviews the industry standards and frameworks available in practice today. Take back an integrated approach to establish proper IT governance and compliance using the latest CM practices while offering development teams the most effective CM practices available today.
Continuous Delivery: Rapid and Reliable Releases with DevOps
DevOps is an emerging set of principles, methods, and practices that enables the rapid deployment of software systems. DevOps focuses on lowering barriers between development, testing, security, and operations in support of rapid iterative development and deployment. Many organizations struggle when implementing DevOps because of its inherent technical, process, and cultural challenges. Bob Aiello shares DevOps best practices, starting with its role early in the application lifecycle and bridging the gap with testing, security, and operations. Bob explains how to implement DevOps using industry standards and frameworks such as ITIL v3 (IT Service Management) in both agile and non-agile environments, focusing on automated deployment frameworks that quickly deliver value to the business. DevOps includes server provisioning essential for cloud computing in what is becoming known as Infrastructure as Code. Bob equips you with practical and effective DevOps practices—automated application build, packaging, and deployment—essential for meeting today’s business and technology demands.
Agile ALM: Using DevOps to Drive Process Improvement
Many organisations struggle to improve their existing IT processes to drive their software and systems development work. This leaves technology managers and teams to use whatever worked for them on the last project, often resulting in a lack of integration and poor communication and collaboration across the organisation. Agile application lifecycle management (ALM) is a comprehensive approach to defining development and operations processes that are aligned with agile methodology. Bob Aiello explores how to use DevOps principles and practices to drive the entire process improvement effort—establishing agile practices such as continuous integration and delivery that integrates with the IT operations controls. Learn how to use DevOps approaches to iteratively define a pragmatic and real-world ALM framework that will evolve, scale, and help your organisation achieve its software and business goals. Take back a template to define and automate your application lifecycle, accounting for all stakeholders and integrating their processes into a comprehensive agile ALM framework.
Outline of CM Best Practices Class:
Configuration Management Best Practices Training Program
Introduction:
Based upon industry standards and frameworks, this course introduces the technology professional to the principles, concepts, and hands-on best practices necessary to establish comprehensive configuration and release management functions. Discussing CM as it is practiced both in product companies and in IT organizations, Bob Aiello and Leslie Sachs provide expert guidance on establishing configuration management best practices.

  1. Configuration Management Concepts
    * Configuration Identification
    * Status Accounting
    * Change Control
    * Configuration Auditing (physical and functional)
  2. Source Code Management
    * Baselines
    * Sandboxes and Workspaces
    * Variant Management
    * Handling bugfixes
    * Streams
    * Merging
    * Changesets
  3. Build Engineering
    * Version IDs and branding executables
    * Automating the build
    * Build tools to choose from, including Ant, Maven, Make, and MSBuild
    * Role of the Build Engineer
    * Continuous Integration
  4. Environment Configuration
    * Supporting Code Promotion
    * Implementing Tokens
    * Practical use of CMDBs
    * Managing Environments
  5. Change Control
    * Types of Change Control
    * Rightsizing the Change Control Process
    * The 29-minute Change Control Meeting
    * Driving the Process Through Change Control
  6. Release Management
    * Packaging the release
    * Ergonomics of Release Management
    * RM as coordination
    * Driving the RM Process
  7. Deployment
    * Staging the release
    * Deployment frameworks
    * Traceability
  8. Architecting Your Application for CM
    * CM Driven Development
    * How CM Facilitates Good Development
    * Build Engineering as a Service
  9. Hardware CM
    * Managing Hardware Configuration Items
    * Hybrid hardware/software CM
  10. Process Engineering (Rightsizing)
    * Agile/Waterfall
    * Hybrid Approaches
    * Agile Process Maturity
  11. Establishing IT Governance
    * Establishing IT Governance
    * Transparency
    * Improving the Process
  12. What you need to know about Compliance
    * Common Compliance Requirements
    * Establishing IT Controls
  13. Standards and Frameworks
    * What you need to know about standards and frameworks
  14. CM Assessments
    * Evaluating the Existing CM Practices
    * Documenting “As-Is” and “To-Be”
    * Writing your CM Best Practices Implementation Plan

About the Instructor:
Bob Aiello, CM Best Practices Consulting
Bob Aiello is a consultant and author with more than thirty years of experience as a technical manager at leading financial services firms, with company-wide responsibility for CM, ALM, and DevOps. He often provides hands-on technical support for enterprise source code management tools, SOX/COBIT compliance, build engineering, continuous integration, and automated application deployment. He serves on the IEEE Software and Systems Engineering Standards Committee (S2ESC) management board, is the current chair of the IEEE P2675 DevOps Working Group, and served as the technical editor for CM Crossroads for more than 15 years. Bob is also editor of the Agile ALM DevOps journal, coauthor of Configuration Management Best Practices (Addison-Wesley, 2010), and author of Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement (http://agilealmdevops.com).

Behaviorally Speaking: Dysfunctional Ops


Behaviorally Speaking – Dysfunctional Ops
By Bob Aiello

IT Operations plays an important role in any organization by ensuring that systems are always functioning as they should and that all services meet the needs of customers and other end users. Some developers get frustrated when trying to work with their operations colleagues and may even try to bypass IT Ops whenever possible. My experience has been that developers may lose faith in the competency of the operations team, leading to doubts about its abilities. Are developers just a bunch of undisciplined wild cowboys, or are there good reasons why many developers feel that they have to avoid working with Ops whenever possible? The truth is that operations, in many large organizations, is incapable of handling its role, and developers are well-justified in trying to bypass Ops. In this article, we are going to have a tough conversation about the reality that many operations organizations lack qualified personnel, are more focused on blocking progress than on delivering value to customers, and exhibit many other dysfunctional behaviors. How did we get to this point where operations is often so dysfunctional?

Computer operators were once solely focused on handling decks of punch cards, mounting data tapes, and managing very large print jobs. The job focused on operating the equipment and dealing with environmental issues such as keeping the computer room nice and cold for the sake of the machines themselves. Back then, developers were much more skilled than computer operators and required many more years of training in technical areas, from machine code programming to intensive work in languages including Fortran, COBOL, and PL/1. By contrast, many of today’s operations engineers are just as technical and skilled as any developer, often focusing on provisioning complex cloud-based runtime systems and environment monitoring, as well as designing and implementing fully automated application deployment pipelines. Unfortunately, some operations engineers are not up to the demands of the job, and developers may feel justified, and even motivated, to bypass operations whenever possible. In my own work, I have come across operations engineers who lacked even the most basic Unix command line skills, including use of the vi editor. This might seem trivial, but these same engineers had the root (e.g. superuser) passwords, which means that they had the potential to make serious mistakes that could take down large-scale production systems. Why would IT operations management hire people who lacked the basic skills necessary to successfully administer systems? Some organizations allocate management compensation based upon the number of people reporting into a function. When this happens, managers are incentivized to build large teams with lots of head count, without the more important focus on whether these engineers have the right level of knowledge, skills, and abilities.

It has been my experience that when operations engineers lack sufficient technical skills, developers lose faith and resort to working around the operations group instead of partnering with them to get the work done successfully. Operations engineers themselves may exhibit counterproductive coping behaviors to deal with their limited expertise. Some teams focus narrowly on one specific skill which they understand and can execute successfully. They may end up adopting a siloed mentality in which they maintain a very narrow focus, do not share their knowledge and procedures, and consequently lack the broader systems knowledge required for success. I have seen many large operations organizations that consisted of small, specialized silos which could not effectively meet their objectives even within the operations organization itself.

The best operations groups consist of highly skilled engineers with strong cross-functional capabilities. Having experience with development, particularly scripting and automation, is an absolute must-have. When these colleagues work on deployment automation, we often call these technology professionals DevOps engineers. Going forward, operations must employ engineers with sufficient knowledge, skills and abilities to get the job done. Do you agree or disagree? Feel free to drop me a line to share your opinion!

Personality Matters—CM Excellence


Personality Matters—CM Excellence
By Leslie Sachs

Excellence in Configuration Management conjures up images of efficient processes with fully automated procedures to build, package and deploy applications resulting in happy users enjoying new features on a regular basis. The fact is that continuous integration servers and great tools alone do not make CM excellence. The most important resources are the technology professionals on our team and the leader who helps guide the team towards excellence. That doesn’t mean that everyone is (or can be) a top performer, even if you are blessed with working on a high performance cross-functional team. What it does mean is that understanding the factors that lead to excellence will enable you to assess and improve your own performance. A good place to start is to consider the traits commonly associated with effective leadership and successful results. This article will help you understand what psychologists have learned regarding some of the essential qualities found among top leaders and others who consistently achieve excellence!

Software Process Improvement (SPI) is all about identifying potential areas of improvement. Achieving excellence depends upon your ability to identify and improve upon your own behavior and effectiveness. It is well-known that we are each born with specific personality traits and innate dispositional tendencies. However, it is an equally well-established fact that we can significantly modify this endowment if we understand our own natural tendencies and then develop complementary behaviors that will help us achieve success!

Let’s start by considering some of the personality traits that help predict effective leadership. One of the first studies on effective leadership was conducted by psychologist Ralph M. Stogdill [1]. This early study identified a group of traits including intelligence, alertness, insight, responsibility, initiative, persistence, self-confidence, and sociability. It is certainly not surprising that these specific traits are valuable for successful leaders and achieving excellence. Being intelligent speaks for itself, as does being alert (to new opportunities) and having insight into the deeper meaning of each situation and opportunity. Although general intelligence was for a long time considered static, more recent research suggests that it is possible to bolster one’s genetic inheritance. Certainly, one can consciously strive to develop the behavioral patterns, such as attentiveness to detail and novelty and thoughtful analysis of options, closely associated with intelligence.

You might want to ask yourself whether or not you are responsible, take initiative and show persistence when faced with difficult challenges. Displaying self-confidence and operating amiably within a social structure is essential as well. Do you appreciate why these characteristics are essential for leadership and achieving success and look for opportunities to incorporate these qualities into your personality? To improve your leadership profile, you must also actively demonstrate that you know how to apply these valuable traits to solve real workplace dilemmas. Upon reflection, you can certainly see why CM excellence would come from CM professionals who are intelligent, alert and insightful. Being responsible, showing initiative and being persistent along with having self-confidence and being a social being are all clearly desirable personality traits which lead to behaviors that result in CM excellence.

Stogdill conducted a second survey in which he identified ten traits which are essential for effective leadership. This expanded cluster includes drive for responsibility, persistence in pursuit of goals, risk-taking and problem-solving capabilities, self-confidence, and a drive for taking initiative. In addition to these, Stogdill also discovered that people’s ability to manage stress (such as frustration and delay) as well as their accountability for the consequences of their actions are both integral to leadership success. After all, intelligence, insight, and sharp analytic skills are not very useful if a manager is too stressed out to prioritize efficiently or authorize appropriate team members to implement essential programs. Finally, you need to be able to influence other people’s behavior and to handle social interactions. [2] Other noted psychologists have also studied leadership traits and many have identified similar traits.

Jurgen Appelo lists 50 team virtues [3] which, not surprisingly, also correspond to many of the traits identified in Stogdill’s studies, and I discussed many of these same traits in the book that I coauthored with Bob Aiello [4]. You need to consider each of these traits and understand why they are essential to leadership and achieving success. Keep in mind that while people are born with natural tendencies, each of us is capable of stretching beyond them if we understand our own personality and identify which behaviors are most likely to lead to the changes we desire. So if you want to achieve greater success, consider reflecting upon your own behaviors and comparing your style with those traits that have been associated repeatedly with good leaders and CM excellence. For example, being proactive in solving problems and having the self-confidence to take appropriate risks can help you achieve success. Remember also that being social means that you involve and interact with your entire team – full team participation maximizes the power of each member’s strengths while minimizing the impact of individual weaknesses.

Configuration Management excellence depends upon the skilled resources who handle the complex IT demands on a day-to-day basis. The most successful professionals are able to take stock of their personality and consider the traits that experts regard as essential for effective leadership. If you can develop this self-awareness, you can achieve success by developing the behaviors that result in strong leadership and excellence in all you do!

References:
[1] Yukl, Gary. Leadership in Organizations. Prentice Hall, 1981, p. 237.
[2] Northouse, Peter G. Introduction to Leadership: Concepts and Practice, Second Edition. SAGE Publications, Inc., 2012, p. 17.
[3] Appelo, Jurgen. Management 3.0: Leading Agile Developers, Developing Agile Leaders. Addison-Wesley, 2011, p. 93.
[4] Aiello, Robert and Leslie Sachs. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley, 2010.

DevOps and Segregation of Duties

DevOps and Segregation of Duties
By Bob Aiello, updated Thursday, November 10, 2016

Editor’s note – this article was originally written in response to a July 31, 2016 InfoQ article, DevOps Survival in the Highly Regulated Financial Industry, written by my esteemed colleague, Manuel Pais. Many people read the original article and came to the wrong conclusion that there is no legal requirement for a segregation of duties. I wrote my response to this article and contacted Mr. Manuel Pais, asking him to comment. Please do not interpret my words as criticizing Manuel or his writing in any way. Manuel Pais is a skilled technology professional and a talented technology writer. That said, I feel compelled to set the record straight on the issue of whether or not a segregation of duties is an actual requirement. Specific references to the requirement for segregation of duties are included at the end of this article.

Here are my original comments and Mr. Pais’ response.

Bob Aiello’s original comments:
In a July 31, 2016 InfoQ article, DevOps Survival in the Highly Regulated Financial Industry, my esteemed colleague Manuel Pais quoted a presentation by Robert Scherrer, head of application engineering at SIX, which questioned the need for some regulatory requirements such as a segregation of duties.

Pais quotes Scherrer as saying, “Particularly noteworthy was an internal rule that developers could not access production systems. Turned out there was no external regulation with such requirement, instead this was a literal interpretation of the segregation of duties requirement.”

According to the article, Scherrer reportedly suggested that “blindly following internal directives is a costly mistake that hinders the benefits of DevOps.”

Response from Manuel Pais…
“As with many, if not most, ideas in the tech world, context is key. The story was not trying to claim there are no segregation of duties requirements at all. Instead it tried to summarize a 20m talk about an organization that was able to move to a better place in terms of deployability and recoverability with advanced mechanisms for controlling access to production environments. A mechanism that complies with regulations while allowing more flexible ways of working and organizing teams and work. It is hard to summarize such a dense talk about a complex system in a highly regulated environment. It should be clear that the need for adequate controls should never be taken lightly. Instead each organization must fully understand what are the regulations applicable in their context and geography, while at the same time (hopefully) trying to reduce waste and (often self-imposed) constraints that have become obsolete due to evolutions in technology, people and understanding of those very regulations. I urge readers to watch the recording of the presentation that the story tried to summarize (with inherent shortcomings) in order to better understand its intricacies.”
Hope that helps.
Manuel Pais | @manupaisable   |   InfoQ   |   LinkedIn   |   SlideShare

Bob Aiello’s analysis…

The premise that we need to discuss is that some folks believe that there is no explicit requirement for a segregation of duties…

I disagree with this premise completely. Effective IT controls help those responsible develop secure and reliable code much faster while also minimizing costly mistakes. As discussed in my latest book on Agile Application Lifecycle Management, DevOps requires right-sized IT controls. In my opinion, the above-mentioned InfoQ article could lead some of our colleagues to believe that there is no explicit requirement for a segregation of duties (actually, I know for a fact that it did, which is why I wrote this article).

In fact, there are many specific references to a segregation of duties.

For example, the Office of the Comptroller of the Currency (OCC) Comptroller’s Handbook states (page 7), “Segregation of duties to reduce a person’s opportunity to commit and conceal fraud or errors. For example, assets should not be in the custody of the person who authorizes or records transactions.” And on page 23 it states that auditors are trained to ask, “Is there segregation or rotation of duties to ensure that the same employee does not originate a transaction, process it, and reconcile it to the general ledger account?”

FINRA has similar requirements for technology management: “We have observed deficiencies in firms’ risk management practices in these areas, for example through a lack of written procedures and evidence of supervision, insufficient segregation of duties for personnel involved in the development and deployment of technology changes, as well as insufficient user acceptance testing and quality assurance.”

ISACA COBIT (used for compliance with section 404 of the Sarbanes-Oxley Act of 2002) explicitly lists the requirement for a segregation of duties. For example, Control Objective PO4.11 – Segregation of Duties is contained within PO4, Define the IT Processes, Organisation and Relationships.

Another COBIT 4.1 control objective, AI2 – Acquire and Maintain Application Software, states that:

Control over the IT Process of Acquire and Maintain Application Software that satisfies the business requirement for IT of aligning available applications with business requirements, and doing so in a timely manner and at a reasonable cost by focusing on ensuring that there is a timely and cost-effective development process is achieved by

  • Translating business requirements into design specifications
  • Adhering to development standards for all modifications
  • Separating development, testing and operational activities ***

and is measured by

  • Number of production problems per application causing visible downtime
  • Percent of users satisfied with the functionality delivered

In practice, regulatory compliance always comes down to what (compliance) lawyers will defend, but the assertion that the requirements are not specified and clear is just not true. Both internal and external auditors will write up audit violations when there is a lack of segregation of duties – and rightfully so. If you are working in a bank or other financial services firm, you may find yourself receiving a letter from regulatory authorities telling you that your organisation must correct deficiencies or cease and desist from a particular area of business. It is also unlikely that you will pass your ISO 9000 review without effective IT controls in place.

It is time for some common sense guidance on how to implement DevOps and still comply with regulatory and audit requirements. I have good news for you. Many of us are involved with efforts to create industry standards for DevOps.  Our goal is to focus on establishing the right balance of IT controls that help to avoid human error while focusing on building secure and reliable systems. Let me know if you would like to participate in these efforts!

Bob Aiello
bob.aiello@ieee.org

DevOps Principles and Practices


DevOps Principles and Practices
By Bob Aiello

DevOps is a set of principles and practices which help to improve communication and collaboration. DevOps is not just between development and operations, but in fact can be practiced between any two organizational structures which need to improve how they interact with one another. My new book on Agile ALM and DevOps explains how to use DevOps principles and practices to improve communication and collaboration between each of the groups interacting to deliver software and systems to end users. In practice, DevOps principles and practices help to operationalize the DevOps approach by explaining how to improve the existing application lifecycle management (ALM) practices, including application build, package and deployment procedures, change management and much more. The journey to DevOps should begin with effective communication and collaboration.

Effective communication and collaboration

Effective communication requires that the involved parties be included in discussions that impact how DevOps practices will be implemented. Too often, key stakeholders are not included in meetings which impact them in significant ways. Effective communication might include working sessions to review working documents or to develop procedures related to application build, package and deployment. I am usually sharing my screen during these sessions and asking my colleagues to partner with me to fully automate all of the procedures required for reliable and secure application and systems deployment.

Systems thinking

Systems thinking refers to considering the entire software and systems lifecycle from beginning to end. Very often, solutions only address a limited scope of the system. Many technology professionals are specialists and only have a narrow understanding of how the rest of the system works. For example, developers may come up with optimal methods for deploying to development machines without considering that it is in the best interests of the organization to have a consistent process across the entire software and systems lifecycle. In the same way, the interests of each of the stakeholders, including development, operations, QA, testing and security, all need to be considered. Systems thinking can and should eliminate the narrow perspective-taking often favored by organizational silos. You do not have to change your organizational structure in order to eliminate the negative impact of silos.

Ops using the same practices as development – “left-shift”

Deployment procedures should be consistent across all environments from the initial development test environments through to UAT and Production. The practice of “left-shift” involves getting operations involved in the very beginning of the software and systems lifecycle so that operations professionals can understand the required procedures and help create automated procedures essential for a repeatable process.
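To make this concrete, here is a minimal sketch, in Python, of a single deployment routine shared by every environment; the host names, application directory, and install script are hypothetical, and the point is simply that dev, QA, UAT, and production all run the same code path and differ only in configuration data.

```python
#!/usr/bin/env python3
"""Illustrative only: one deployment routine used for every environment.

The environment names, hosts, paths, and install script are assumptions;
what matters is that the same steps run everywhere, with the differences
kept in configuration data rather than in separate scripts.
"""
import argparse
import subprocess

# Environment-specific values live in data, not in per-environment scripts.
ENVIRONMENTS = {
    "dev":  {"host": "dev-app01.example.com",  "app_dir": "/opt/myapp"},
    "qa":   {"host": "qa-app01.example.com",   "app_dir": "/opt/myapp"},
    "uat":  {"host": "uat-app01.example.com",  "app_dir": "/opt/myapp"},
    "prod": {"host": "prod-app01.example.com", "app_dir": "/opt/myapp"},
}

def deploy(package: str, env: str) -> None:
    target = ENVIRONMENTS[env]
    # Same steps, in the same order, for every environment.
    subprocess.run(["scp", package, f"{target['host']}:{target['app_dir']}/"], check=True)
    subprocess.run(["ssh", target["host"], f"cd {target['app_dir']} && ./install.sh"], check=True)
    print(f"Deployed {package} to {env}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Deploy the same package to any environment")
    parser.add_argument("package")
    parser.add_argument("env", choices=sorted(ENVIRONMENTS))
    args = parser.parse_args()
    deploy(args.package, args.env)
```

Because the production path is exercised from the first development deployment onward, operations can review and improve the procedure long before release day, which is the essence of the left-shift described above.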

Feedback and experimentation

Feedback loops are a fundamental approach in DevOps that facilitate experimentation and continuous process improvement. Agile-oriented retrospectives and collaborative communication between stakeholders help to communicate what works well and address areas that require improvement. Teams should be able to experiment and try different approaches to identify and implement effective best practices.

Sustainable pace – effective culture

Many teams take a defeatist attitude and assume that deployments will always be weekend-consuming activities that require everyone to work very long hours. DevOps transforms the deployment process into a routine, well-regarded event that is executed with minimized risk and potential impact to the firm. DevOps invests a lot more time upfront automating the entire application build, package, and deployment process. Initially, it may even seem like the deployment is taking longer than expected because automated scripts are being written to create the automated deployment pipeline. However, automating from the beginning has proven repeatedly to be a very wise investment.

Automation and tools

Robust automation is a key feature of any DevOps implementation, and selecting the right tools is an essential first step. While process is more important than tools, in DevOps, tools are definitely first-class citizens. Robust toolsets often require support as well as skilled engineers who know how to use them effectively. The right organizational structure can help to make tools more effective, especially in terms of providing the right support structure to manage their usage on a day-to-day basis.

Organizational structure where collaboration is compelling

DevOps is not specifically a group, although most organizations find that it is helpful to establish a dedicated DevOps team to implement and support DevOps principles and practices. For organizations that utilize the ITIL v3 framework, the Release and Deployment Management (RDM) team is often the right group to take responsibility for establishing DevOps best practices throughout the organization. 

Agile and DevOps

Many DevOps practices existed way before agile became popular. But agile dramatically demonstrates the value of DevOps by requiring support for a frequent release cadence. Agile principles also are inherent in DevOps principles. Equally important is understanding the perspectives of each of the stakeholders especially when their approaches and goals are not completely in alignment.

Ops and Dev and the Clash of Perspectives

Development is focused on creating and implementing new features as often as possible. Their world is often characterized as the “Wild, Wild West” because almost anything goes and developers are frequently not as disciplined as they need to be. The operations team has a very different perspective and is generally focused on ensuring that releases are both reliable and repeatable. The push to deliver changes quickly versus the desire to maintain uninterrupted services often creates a clash of priorities between development and operations. DevOps aims to address both goals and ensure that changes can be delivered as often as desired with completely reliable systems and services.

DevOps and the ALM

DevOps should be understood within the broad scope of Application Lifecycle Management (ALM). Just as DevOps engages in broad systems thinking, DevOps also includes all stakeholders within the software and systems lifecycle.

Getting Started with DevOps

DevOps should always begin by assessing the existing practices to build, package and deploy the application. Manual procedures should be documented in a checklist and then the manual steps should be automated – usually using scripts (e.g. Ruby, Shell, Python etc.). Getting operations involved from the beginning of the deployment lifecycle across all environments (development, test through production, inclusive) is essential. The same procedures should be used across all environments, with any necessary discrepancies documented, reviewed and understood by all participants. Release coordination and change control are essential functions that help to ensure that all tasks are understood and tracked as needed. Technical risk should also be reviewed and discussed, with plans created to mitigate risks that cannot be avoided. Getting started with DevOps requires an iterative approach that is a learning experience for all participants. Goals should be established early so that everyone understands exactly what the team is working towards achieving. DevOps requires complete certainty that the correct code has been baselined in the version control repository, built correctly and packaged with identifiers to ensure that the configuration item is fully traceable and identifiable. Cryptographic hashes should be created to ensure that we can definitively and uniquely identify each CI, making it child’s play to detect whether the wrong configuration items have been selected.
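As a hedged illustration of that last point, the following Python sketch walks a staged release directory, records a SHA-256 digest for every configuration item, and writes the result out as a simple manifest or bill of materials; the directory and file names are assumptions, not part of any particular toolchain.

```python
#!/usr/bin/env python3
"""Illustrative sketch: record a cryptographic hash for every configuration item.

Assumes a packaged release has been staged under RELEASE_DIR; the paths are
hypothetical. The resulting manifest acts as a simple bill of materials (BOM).
"""
import hashlib
import json
from pathlib import Path

RELEASE_DIR = Path("build/release")      # staged release artifacts (assumed location)
MANIFEST = Path("build/manifest.json")   # where the BOM is written (assumed location)

def sha256(path: Path) -> str:
    # Stream the file so large binaries do not have to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest() -> dict:
    # One entry per configuration item: relative path -> SHA-256 digest.
    return {
        str(p.relative_to(RELEASE_DIR)): sha256(p)
        for p in sorted(RELEASE_DIR.rglob("*"))
        if p.is_file()
    }

if __name__ == "__main__":
    manifest = build_manifest()
    MANIFEST.write_text(json.dumps(manifest, indent=2))
    print(f"Recorded {len(manifest)} configuration items in {MANIFEST}")
```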

Securing the Trusted Application Base

Ultimately, we want to create a secure trusted application base. The secure trusted application base enables us to programmatically verify that the correct code has been deployed and also to proactively identify any unauthorized changes. This approach will facilitate an accurate and programmatically-updated configuration management database (CMDB) by ensuring that code can be easily discovered through automated procedures. This is the only reasonable way to guarantee an updated and accurate CMDB.
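Continuing the hash-manifest sketch above, and with the same caveat that the paths are hypothetical, a verification step along these lines can compare what is actually deployed against the trusted baseline and flag missing, modified, or unauthorized files automatically.

```python
#!/usr/bin/env python3
"""Illustrative verification of a deployed application against its manifest.

DEPLOY_DIR and MANIFEST are assumed locations; the manifest format matches the
hash-manifest sketch shown earlier (relative path -> SHA-256 digest).
"""
import hashlib
import json
from pathlib import Path

DEPLOY_DIR = Path("/opt/myapp")              # deployed application root (assumed)
MANIFEST = Path("/opt/myapp/manifest.json")  # trusted baseline shipped with the package (assumed)

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify() -> bool:
    expected = json.loads(MANIFEST.read_text())
    deployed = {
        str(p.relative_to(DEPLOY_DIR)): sha256(p)
        for p in DEPLOY_DIR.rglob("*")
        if p.is_file() and p != MANIFEST
    }
    missing = expected.keys() - deployed.keys()        # in the baseline, not on disk
    unauthorized = deployed.keys() - expected.keys()   # on disk, never baselined
    modified = {f for f in expected.keys() & deployed.keys() if expected[f] != deployed[f]}
    for label, items in (("MISSING", missing), ("UNAUTHORIZED", unauthorized), ("MODIFIED", modified)):
        for item in sorted(items):
            print(f"{label}: {item}")
    return not (missing or unauthorized or modified)

if __name__ == "__main__":
    raise SystemExit(0 if verify() else 1)
```

Run on a schedule or from a discovery job, the same check can feed its results into the CMDB so that the recorded configuration stays aligned with what is actually running.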

DevOps Process Improvement

Successfully implementing DevOps requires an iterative approach. Frequent retrospectives (some people call them post-mortems) should be held to discuss what went well and what can be improved. The DevOps transformation itself is an agile iterative development effort that needs to involve all of the stakeholders responsible for the successful application build, package, and deployment.

Make sure that you drop me a line and tell me about your DevOps best practices!

Bob

How to implement a comprehensive API-first framework that meets the needs of agile development and DevOps


How to implement a comprehensive API-first framework that meets the needs of agile development and DevOps
By Steve Davis, CTO of Four51

An API-first framework, or product, allows customers and developers to innovate rapidly. The API product must be capable of reacting to bug fixes and feature requests quickly. That is the expectation of the developer, who has most likely been involved in internal-only software development environments. Your API is now a part of their overall development plan and you cannot be a bottleneck to their progress. Traditionally, even within SaaS software platforms, release cycle times were measured in months, not days. With the proliferation of open APIs and microservice architecture, your API must be capable of meeting the expected release cadence or it will be rejected. The right way to enable this is through quality DevOps.

One of the pillars of a continuous integration strategy is to always be deployable. Simply put, this means you should always have a repository of code that can be deployed to your production environment on a moment’s notice. Enabling this while also allowing for multi-faceted feature development can be managed in various ways. The two major distributed version control systems (DVCS), Git and Mercurial, support the requirements for continuous integration.

One of the more popular and effective workflows is the “Git-flow” method, along with a variation promoted by GitHub’s engineering team called “GitHub flow.” If you are inclined to use Mercurial, there is a similar method named “Hg-flow” that was inspired by “Git-flow.” The concepts are simple and they work for small and large teams.

With GitHub flow, the only required rule is that the master branch should always be deployable to production. Any new features should be branched and named descriptively. Once that branch is ready for deployment, you can create a pull request and merge it into master. Merging code can present conflicts at times, though, so choosing a good merge conflict tool is important. You should also strive to release the master branch as quickly as possible once the merge has occurred. The benefits are clear: you are pushing new functionality rapidly.
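A small sketch of that workflow is shown below as a Python wrapper around git; the branch naming convention, the origin remote, and the pytest command are assumptions, and the pull request and merge themselves would still happen in your hosting platform rather than locally.

```python
#!/usr/bin/env python3
"""Illustrative helper for a GitHub-flow style workflow.

Branch naming, the 'origin' remote, and the test command are assumptions;
the pull request and merge are created in the hosting platform as usual.
"""
import subprocess
import sys

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def start_feature(name: str) -> None:
    # Branch from an up-to-date master with a descriptive name.
    run("git", "checkout", "master")
    run("git", "pull", "origin", "master")
    run("git", "checkout", "-b", f"feature/{name}")

def check_before_pull_request() -> None:
    # Master must stay deployable: bring the branch up to date with master
    # and run the tests before asking anyone to merge.
    run("git", "fetch", "origin")
    run("git", "merge", "origin/master")
    run("python", "-m", "pytest")   # assumed test command

if __name__ == "__main__":
    if len(sys.argv) == 3 and sys.argv[1] == "start":
        start_feature(sys.argv[2])
    elif len(sys.argv) == 2 and sys.argv[1] == "check":
        check_before_pull_request()
    else:
        print("usage: flow.py start <feature-name> | flow.py check")
```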

Believable progress is very difficult to demonstrate with an API-first platform. Frequent production releases provide this insight. Your marketing team has something to promote and your executive team sees tangible progress.

As with any new process or methodology, there are legitimate concerns that will be brought forward. One of the primary considerations is stability of the deployment and quality assurance. The continuous integration strategy addresses these two explicitly.

First, the methodology itself offers greater stability by design. Because the feature branches deployed should be isolated and consist of fewer code changes, the impact of the deployment is minimized. Monitoring the activity immediately after deployment will shed light on any unintended side effects and, if a problem is discovered, your team can address the problem, create a fix and deploy rapidly. Because the release doesn’t contain a vast amount of changes, any disruption is generally minimal and the recognition of the right fix is usually obvious.

Secondly, quality must be covered by a comprehensive unit test routine. There are a number of good unit test frameworks, but the key for your team is to implement rigid controls around how they are managed. Unit tests should all pass before any pull requests are created. Your build system should also run all unit tests during its automated build, stop the build process on any failure, and notify all developers of such issues. If there are failures, the entire team should focus on creating a solution to bring the build back to a stable status. Integration testing is also extremely important. It is different from unit testing in that it’s generally more focused on testing the system as a whole rather than testing individual methods in isolation. Your team should strive to perform these tests in a clone of your production environment. You’re hitting a home run if your integration test routine is capable of creating and tearing down environments during the build process.
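One hedged way to express the “stop the build and tell everyone” rule is a small gate script like the one below; the pytest invocation and the notification webhook are assumptions, so substitute whatever test runner and notification channel your team actually uses.

```python
#!/usr/bin/env python3
"""Illustrative build gate: run all unit tests and stop the build on any failure.

The pytest invocation and the webhook URL are assumptions standing in for a
real test runner and a real team notification channel.
"""
import json
import subprocess
import sys
import urllib.request

NOTIFY_URL = "https://chat.example.com/hooks/build-alerts"  # hypothetical webhook

def notify(message: str) -> None:
    payload = json.dumps({"text": message}).encode("utf-8")
    request = urllib.request.Request(
        NOTIFY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        urllib.request.urlopen(request, timeout=10)
    except OSError:
        print(f"(notification failed) {message}")

def main() -> int:
    # Run the entire unit test suite as part of the automated build.
    result = subprocess.run(["python", "-m", "pytest", "-q"])
    if result.returncode != 0:
        notify("Unit tests failed; the build has been stopped until the suite is stable again.")
        return result.returncode
    print("All unit tests passed; continuing the build.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```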

An API-first framework presents challenges that are not present in application development. Applications are used by humans, and humans are generally good at adapting to small changes without much fuss. APIs are used by code, which is generally not as good at this. When an API changes, all code that uses it must also be changed, tested, and re-deployed. Applications control the actions of users, through a UI, in ways that shape the intended usage. APIs give developers opportunities to stretch the intended usage and potentially go far outside the bounds of intention. Unit tests are crucial to ensuring the integrity of the API base so developers can feel confident that major changes are less likely to disrupt users. Having quality unit and integration tests is essential to ensuring your development process is stable, reliable and capable of supporting rapid development and deployment.

Today, the expectation of rapid software delivery is the norm. A sound DevOps strategy is critical to support an Agile development workflow. Getting your operations right allows your development team to focus on delivering, which benefits your customers and your organization.

About the author:
Steve Davis is CTO of B2B eCommerce solutions provider Four51, and has held the role for nearly a decade. Steve has been working in technology for about 20 years, starting in his Navy days, which helped him develop a strong but motivating leadership style. Visit www.four51.com for more information.

So How Are Industry Standards Created?


So How Are Industry Standards Created?
By Bob Aiello

Industry standards and frameworks provide the structure and guidance to help ensure that your processes and procedures meet the requirements for audit and regulatory compliance. For US-based firms, this may involve passing your SOX audit (for compliance with section 404 of the Sarbanes-Oxley Act of 2002) or acquiring the highly respected ISO 9000 Quality Management System certification expected by many customers throughout the world.

Industry standards are not perfect, and some of the specific reasons why they may fall short of expectations can be traced back to how they were initially created. The process of creating an industry standard is actually quite deliberate and time-consuming. There are some excellent resources from the IEEE and other standards bodies which describe the process to draft and implement standards. But I would like to describe some of my own personal experiences participating in the collaboration and teamwork of creating an industry standard. Working closely with other colleagues who are dedicated to excellence has been far and away the most exciting professional experience that I have been fortunate to have.

Please note that this is not an official IEEE article, but rather Bob’s recounting of a personal experience being involved with creating industry standards.

The first step is always to decide on the initial scope and focus of the standard. We then review any existing resources available – including related industry standards and frameworks, or simply documents which can help educate the members of the team involved with this effort. The standards working group is a high-performance, self-managing, self-educating, cross-functional team with subject matter experts from a variety of disciplines and perspectives. We do not always agree with each other and, in fact, the discussions can be quite confrontational at times – although always professional and collaborative. These disagreements are a natural expression of the group’s striving to come up with the best approach to advocate in the text of the standard.

We create an initial outline and list of topics to consider and then address the task of creating a working draft. The focus is on “shall” statements which are mandatory (for compliance with the standard) and “should” statements which are recommended. We hold numerous sessions to collaboratively create the initial draft. It is common to assign specific sections to individual members (or subgroups) who then go off and independently create the initial draft wording.

Once the draft is written, it is sent to a few SMEs outside of the working group for their reaction and comment. Once this feedback is evaluated and incorporated, the draft is sent out to a wider group for review and comment and, once again, feedback is incorporated. The objective is to have validation that it is a solid document before it is put out for a vote.

Above all else, the standards creation process is collaborative and transparent. Typically, contributors’ comments are recorded and the reasons for their acceptance or rejection documented. We have a strong desire to ensure that the draft standard is aligned with other industry standards and frameworks and do our utmost to harmonize with the current guidance provided by other sources. Final decisions are made and sometimes folks are not happy, but they know that their views are always heard and, most often, recorded for traceability. It is customary for a standard to require a significant percentage of voter approvals for passage and acceptance by the standards body. On occasion, controversial paragraphs have to be dropped in order to obtain the required votes for approval, similar to the negotiations, aka “horse-trading”, for which politics is known. Although such modifications felt to me personally like we were watering down the standard just to gain the required consensus, the team’s focus and mission is to produce a clear document that will be both respected and adopted.

Over the years, I have written extensively on how to comply with configuration management related standards, including the highly popular IEEE 828 (which I had the privilege to participate in updating). Lots of folks like to criticize standards, but often they are criticizing a document that they have never actually spent the time to read – let alone understand or see implemented effectively.

It has been my personal experience that implementation of a standard requires two key skills. The first is harmonizing the guidance by understanding similar industry standards and frameworks. The second is tailoring, in which we provide a rationale for why specific guidance cannot be followed, if this is in fact necessary.

Here’s your opportunity! We are starting up an effort to create a working group to write an industry standard for DevOps. Please consider getting involved now to help shape the guidance that we provide. Rest assured that I will continue writing about this exciting project in the coming months and your voice is important to us!

Bob Aiello
bob.aiello@ieee.org
http://www.linkedin.com/in/BobAiello


Call for participation – P2675 DevOps Standard for Building Reliable and Secure Systems Including Application Build, Package and Deployment


P2675 – DevOps – Standard for Building Reliable and Secure Systems Including Application Build, Package and Deployment
Contact Bob Aiello to join the IEEE working group.

Scope: This standard will specify practices for groups including development, operations and other key stakeholders to collaborate and communicate effectively to build, package and deploy software and systems in a secure and reliable way. All of these activities and functions shall be integrated within the complete lifecycle.

Purpose: The purpose of this standard is to specify required practices for operations, development and other key stakeholders to collaborate and communicate to deploy systems and applications in a secure and reliable way.

Who Should Participate: DevOps, by its very nature, impacts every member of the team, so participation is encouraged from technology professionals including developers, operations engineers, QA and testing professionals, project managers, and other members of the team responsible for ensuring that applications meet their business objectives.

The DevOps standard must address and incorporate existing IEEE and ISO standards related to Systems and Software engineering, Configuration Management, Verification & Validation, Information Security, Measures of Reliability and Software Testing among others. DevOps does not contradict existing industry standards, but applies new principles and practices to integrate existing processes.

Related standards include:

  • ISO/IEC/IEEE 12207, Systems and software engineering–Software engineering processes
  • IEEE 828, IEEE Standard for Software Configuration Management
  • IEEE 1012 IEEE Standard for System and Software Verification and Validation
  • ISO/IEC 27001 Information technology — Security techniques — Information security management systems — Requirements
  • IEEE 982.1 IEEE Standard Dictionary of Measures to Produce Reliable Software
  • ISO/IEC/IEEE 29119, Software and Systems Engineering – Software Testing

This standard will describe standard DevOps practices and principles in general terms as “practices” and then include specific references for detailed tools/practices in an Annex.

Contact Bob Aiello to join the IEEE working group.

Behaviorally Speaking – CM Excellence


Behaviorally Speaking – CM Excellence
by Bob Aiello with Dovid Aiello

Configuration management (CM) is a comprehensive and awesome discipline with practices that can help improve quality and productivity throughout the entire software and systems delivery lifecycle. Configuration management is the basis of DevOps and without excellent CM you are not going to be successful implementing continuous practices such as integration and delivery.

The lack of CM Best Practices or poorly-implemented CM can be disastrous for an organization. Although, interestingly enough, disasters themselves can provide a great opportunity to improve your configuration management because they highlight the importance of getting the process right. As Winston Churchill once said, “never let a good crisis go to waste”. For example, software installation and upgrades can result in major outages that are so disruptive that they threaten the very existence of the corporation. Conversely, excellence in configuration management can be just as beneficial. Read on if you would like to truly achieve CM excellence.

It is always important to start by considering what excellence will look like. For me, CM Best Practices means that we always know exactly what version of the code (actually configuration items, or CIs) is running in production (or QA) at any point in time. We also want to be able to retrieve the exact version of the code used to build and deploy the release to production (or QA), and we need to be able to make a small change to the code without any chance of the code regressing due to the wrong version of a header file or some other dependency. These three capabilities are the starting point for CM excellence. Unfortunately, many companies do not have the necessary procedures in place to guarantee these capabilities. Here’s how to implement them.

It is always essential to start with version control, including baselining your code using a tag or label. Baselining refers to identifying all of the configuration items (CIs) for a specific milestone. You also want to be able to link changesets to work items so that you have traceability as to exactly why a change occurred and who gave the authorization. When a bugfix is necessary, it is non-negotiable to have reliable variant management (using branches or streams).
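As a minimal sketch of what baselining with traceability can look like (assuming git, an annotated-tag naming convention, and a hypothetical work item identifier), the step below labels a release milestone and records the authorizing work item in the tag message.

```python
#!/usr/bin/env python3
"""Illustrative baselining step: label a milestone and tie it to a work item.

The tag naming convention, remote name, and work item identifier are
assumptions; the point is that the baseline is reproducible and traceable
to an authorized change.
"""
import subprocess

def baseline(release: str, work_item: str) -> None:
    tag = f"REL_{release}"
    # An annotated tag records who created the baseline and why.
    subprocess.run(
        ["git", "tag", "-a", tag,
         "-m", f"Baseline for release {release}; authorized by work item {work_item}"],
        check=True,
    )
    subprocess.run(["git", "push", "origin", tag], check=True)

if __name__ == "__main__":
    baseline(release="2.3.1", work_item="WI-1234")   # hypothetical identifiers
```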

Automated application builds with immutable version IDs embedded in each configuration item (CI) are also essential, as is packaging the release with a manifest specifying the contents (often called a bill of materials or BOM). The package also needs to contain a procedure that can verify both the contents of the package and the application that has been deployed. A physical configuration audit verifies that you have the correct configuration items (CIs), and a functional configuration audit verifies that the CIs are performing as they were intended.
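Here is one hedged sketch of the version-stamping piece (the output path, version scheme, and use of git metadata are all assumptions): a build step writes an immutable identifier into the package so that any deployed copy can be identified later, which is also what a physical configuration audit would read back.

```python
#!/usr/bin/env python3
"""Illustrative build step: stamp an immutable version ID into the release package.

The output path, version scheme, and reliance on git metadata are assumptions;
the idea is simply that every packaged release carries an identifier that can
be read back from whatever environment it lands in.
"""
import datetime
import json
import subprocess
from pathlib import Path

BUILD_INFO = Path("build/release/build_info.json")  # travels inside the package (assumed)

def git_commit() -> str:
    # Ties the package back to the exact baseline it was built from.
    return subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()

def stamp(build_number: str) -> None:
    BUILD_INFO.parent.mkdir(parents=True, exist_ok=True)
    BUILD_INFO.write_text(json.dumps({
        "version_id": f"1.4.{build_number}",   # immutable, never reused (hypothetical scheme)
        "source_commit": git_commit(),
        "built_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }, indent=2))

if __name__ == "__main__":
    stamp(build_number="0")
    print(f"Wrote build metadata to {BUILD_INFO}")
```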

Application deployment must also be fully automated and should enable you to deploy applications frequently and painlessly. It is ideal to have a complete deployment framework to help ensure a consistent and reliable methodology for automated application deployment.

There are many challenges that can impact the successful implementation of configuration management. The most challenging that I have found is a lack of communication between developers and the operations folks who are responsible for ensuring reliable and continuous service. One aspect of this challenge involves the sharing of technical knowledge. Developers get to learn new technologies and spend countless (truly countless) hours working with complex technologies. Operations folks get the runtime code when it is about to go live and often have only a short period of time to get up to speed. I have experienced this exact challenge as a build engineer. It is particularly difficult when the technology changes and there is less than adequate communication and training to help operations get up to speed.

DevOps has emerged as an industry best practice that focuses on improving communication between development and operations. I often explain that in CM we mix the ingredients and in DevOps, we make the pie. Applications should be built with the tools necessary to maintain them and, equally importantly, ensure continuous and excellent service. Moving automated build, package and deployment upstream means that automated procedures required for deployment to production can also be used for deployment to development test regions, QA, and UAT. This approach allows us to achieve excellence by having our automation developed and tested early in the process.

Excellence in CM requires that you implement best practices that meet the needs of your team. Implemented well, automated build, package and deployment can help streamline the entire software and system development process, improving quality and productivity for your entire team.


Micro Focus Announces Intent to Merge with Hewlett Packard Enterprise’s Software Business Segment


Micro Focus Announces Intent to Merge with Hewlett Packard Enterprise’s Software Business Segment

Strategic Combination Furthers Company’s Long-term Strategy to Maximize Mature Technology Investments, Accelerate Innovation Opportunities for Customers
________________________________________
ROCKVILLE, Md., Sept. 7, 2016 /PRNewswire/ — Micro Focus (LSE: MCRO.L) announced today its intent to merge with HPE’s Software Business Segment in a transaction valued at approximately $8.8 billion. The merger is subject to customary closing conditions, including anti-trust clearances and shareholder approval and is expected to close in Q3 2017. Micro Focus and HPE also announced as part of the transaction the intent to enter into a commercial partnership naming SUSE as HPE’s preferred Linux partner.

The proposed merger brings together two well established enterprise software vendors with highly complementary portfolios. With revenues of approximately $4.5 billion, it creates one of the world’s largest pure-play infrastructure software companies with a truly global footprint, agility and financial strength to drive software innovation across both traditional and emerging IT market segments.

“Today’s announcement marks yet another significant milestone for Micro Focus and is wholly consistent with the long-term business strategy we have been pursuing to be the most disciplined global provider of infrastructure software. The proposed merger with HPE Software is consistent with our recent acquisitions of Serena Software and the Attachmate Group,” said Kevin Loosemore, Executive Chairman, Micro Focus. “The combination of Micro Focus with HPE Software will give customers more choice as they seek to maximize the value of existing IT assets, leveraging their business logic and data along with next-generation technologies to innovate in new ways with the lowest possible risk.”

Organizations continue to seek technology solutions that improve their time to market, create new avenues to engender customer loyalty, improve employee productivity, and more. HPE Software brings additional breadth and depth across IT Operations Management, Software Delivery & Test, Enterprise Security, Information Management & Governance and Big Data Analytics, giving Micro Focus an additional advantage in delivering richer solutions that effectively bridge existing IT infrastructure with emerging technologies to meet those business demands.

“We believe that the software assets that will be a part of this combination will bring better value to both our customers and shareholders as part of a more focused software company committed to growing these businesses on a stand-alone basis,” said Meg Whitman, President and Chief Executive Officer, HPE.

At the same time, Micro Focus and HPE have announced their intent to enter into a commercial partnership naming SUSE as HPE’s preferred Linux partner as well as exploring additional collaboration leveraging SUSE’s OpenStack expertise for joint innovation around HPE’s Helion OpenStack and Stackato Platform-as-a-Service solutions. SUSE and HPE are working together to define the specifics of the commercial partnership.

“SUSE and HPE have a long history of successful strategic partnership,” said Nils Brauckmann, CEO, SUSE. “We are excited now to explore new ways of expanding upon that with a commercial partnership focused on areas such as cloud computing, software-defined networking and application platforms. The combination of SUSE’s open source expertise and OpenStack capabilities with HPE’s Helion and Stackato offerings can create best-in-class enterprise solutions for our mutual customers.”

About HPE
HPE is an industry-leading technology company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world make IT more efficient, more productive and more secure.

About Micro Focus
Micro Focus (LSE: MCRO.L) is a global enterprise software company helping customers innovate faster with lower risk. Our software helps customers build, operate and secure IT systems that bring together existing business logic and applications with emerging technologies to meet increasingly complex business demands. For more information, visit: www.microfocus.com.