Test Environment Management: The Secret Ingredient for DevOps

Nikhil Kaul, Senior Product Marketing Manager at SmartBear Software

The rapid adoption of DevOps necessitates continuous delivery and a shift toward higher levels of test automation. As a result, QA teams often invest a lot of time and effort in ensuring their tests are automated as a part of the continuous testing cycle. The right test automation tools allow teams to increase test speed and coverage. In fact, a common framework agile teams use while navigating this process is the test automation pyramid – an approach that focuses on automating tests at three levels: unit, service and user interface (UI).

At the base of this pyramid is unit testing. By running more unit tests at a lower level, teams can get feedback faster and develop a solid testing base to build upon. The intermediate layer consists of API tests, which work under the UI of an application. Finally, the top of the pyramid represents end-to-end UI tests.

To implement a test pyramid approach, teams spend a lot of time and energy hunting for the right test automation tools. For unit-level testing, teams use tools such as JUnit or NUnit. For API-level testing, SoapUI is a popular choice, while for UI testing, TestComplete or open source tools like Selenium can be used.
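
To illustrate the base of the pyramid, here is a minimal sketch of a unit test, shown with Python's built-in unittest framework in place of JUnit or NUnit; the function under test is hypothetical:

```python
import unittest

# Hypothetical application function under test
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Unit tests: the fast, plentiful base of the test pyramid."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Because tests like these exercise a single function in isolation, hundreds of them can run in seconds, which is exactly why the pyramid puts them at its base.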

A DevOps chain can only be as strong as its weakest link. And often one area that is neglected during such a test pyramid approach is the way teams go about managing their test environments.

Test automation efforts often falter as the efficiency gained by investing in test automation is lost when teams still maintain the test infrastructure manually. In the case of UI testing, teams can spend hours provisioning, maintaining, monitoring and tearing down the underlying test environment infrastructure.

As an example, without a proper strategy, teams spend a significant portion of their testing cycles ensuring the right testing environments are available. And often, even when these environments are available, there are large discrepancies between test environments and production environments.

Traditional Ways of Managing Test Environments

There are three common tactics teams use to manage test environments. One of the easiest is to use local machines (i.e., browsers hosted locally). The second is to use virtual machines, such as Amazon EC2 instances. Finally, teams can leverage device labs, which primarily consist of rented or purchased mobile devices.

Regardless of which of these three routes a team takes, there is manual work required to maintain and upgrade environments when new browsers, operating systems, devices or resolutions are introduced.

In addition to the manual efforts required to maintain these UI test environments, there are other challenges such as reduced speed to delivery, redundancy and discrepancies.

Speed and Agility Concerns

Setting up environments on local machines as a part of your testing cycle automatically slows down the entire process. QA teams frequently find that running UI tests on virtual machines can help avoid up-front hardware costs, but there is still a manual component involved in ensuring the right configurations are available when tests are kicked off. Provisioning clean test environments often means spinning up new VMs with new configurations, which can be time-consuming and adds to the labor cost.

Parallel Execution Needs a Lot of Work

Having the ability to run tests in parallel across various browser and OS combinations can speed up the testing process and increase coverage. However, setting up parallel execution capabilities for UI tests with traditional test environment management approaches like local machines, VMs and device labs can be a painstaking process.

To run UI tests on different devices, teams need to set up a hub with multiple node devices. Registering a node with the hub involves specifying arguments like browser type, version, hostname and port. If these arguments change or get updated, QA may have to go back to the command line or configuration file to update them. Adding an iPad or an Android device as a node requires even more steps.
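
As a sketch of what parallel execution across a browser/OS matrix involves, the snippet below expands browser and platform lists into one capability set per combination and runs a suite against each concurrently. The hub URL, application URL and capability names are illustrative assumptions, and the driver factory is injected so a real grid client (such as Selenium's Remote WebDriver) could be swapped in:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical hub URL; in a Selenium-style grid, the matrix below maps
# to the arguments (browser type, version, platform) registered per node.
HUB_URL = "http://grid-hub.example.com:4444/wd/hub"

BROWSERS = ["chrome", "firefox"]
PLATFORMS = ["WINDOWS", "LINUX"]

def build_matrix(browsers, platforms):
    """Expand browser/OS lists into one capabilities dict per combination."""
    return [
        {"browserName": b, "platformName": p}
        for b in browsers
        for p in platforms
    ]

def run_suite(capabilities, make_driver):
    """Run the UI suite against one node. make_driver is injected so the
    grid connection stays swappable (and mockable for local runs)."""
    driver = make_driver(HUB_URL, capabilities)
    try:
        driver.get("https://app.example.com/login")  # illustrative app URL
        return (capabilities["browserName"], capabilities["platformName"], "passed")
    finally:
        driver.quit()

def run_in_parallel(make_driver):
    """Fan the whole matrix out across worker threads."""
    matrix = build_matrix(BROWSERS, PLATFORMS)
    with ThreadPoolExecutor(max_workers=len(matrix)) as pool:
        return list(pool.map(lambda caps: run_suite(caps, make_driver), matrix))
```

Every time a browser version or platform changes, this matrix (or its grid-side equivalent) has to be updated by hand, which is the maintenance burden the article describes.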

Redundant Activities

Setting up environments locally results in duplicated effort, as each individual has to perform the exact same activity. If teams are maintaining an in-house device lab, the redundant effort associated with upgrading various browser and OS combinations can grow exponentially. Keeping systems and devices up to date becomes more challenging as teams grow and new environments (web browsers, operating system versions and resolutions) are added to the mix to improve coverage.

Increased Cost

Setting up local environments, using VMs or device labs can often result in cost overruns. In fact, costs encountered while using these options can be primarily broken down into three different buckets: device costs, labor costs and licensing costs.

Device cost represents the cost associated with having devices with the right configurations of operating systems, resolutions and browser versions.

Labor cost consists of the time and effort your operations, test or DBA teams spend maintaining, upgrading and tearing down test environments, database servers and labs.

Licensing costs cover the additional software needed to ensure environments are working as expected, both with device labs and VMs. For example, you’ll likely want to know if and when your test environments go down; setting up that kind of notification and monitoring system requires additional spend on a third-party vendor.

UI Test Environment Management: What to Look for?

While looking for the best way to optimize and manage your test environments, there are four key aspects to take into consideration.

  1. Test Environments Should be Easy to Install

Easy installation helps ensure that application development and delivery (AD&D) teams don’t spend a considerable amount of time on setup and configuration.

  2. Test Environments Should be Easy to Enhance

If a new browser version is released, the needs of your end users will change accordingly, forcing you to modify your test environments. Test environments that are easy to enhance are therefore especially valuable in a DevOps environment.

  3. Test Environments Should be Easy to Share

The easier it is to share test environments among team members, the more scalable the process becomes. Running UI tests in parallel can drastically reduce the time it takes for a test run to complete, and that parallelism depends on environments being easy to share.

  4. Test Environments Should be Easy to Tear Down

As an end user yourself, you should be able to spin down, reconfigure or reinstall your environments as needed.

The Right Way to Set Up UI Test Environments

Having cloud-based test environments available on demand can help overcome scalability, cost and maintenance challenges posed by use of local machines, virtual machines or an in-house testing lab.

Cloud-based environments, like TestComplete Environment Manager, give on-demand access to over 1,500 combinations of browser versions, operating systems and resolution settings. Application development and delivery teams can easily and quickly execute and report on automated UI tests across test environments without setup or configuration.

Other benefits of using cloud-based test environments include:

Easy to Install, Enhance, Share and Teardown

Cloud-based test environments are easy to manage and accessible anytime, eliminating the need for an installation process altogether; you simply link those environments to your tests. They are also simple to enhance: when new browser versions are released, a good cloud-based solution will add them automatically, eliminating your need to spend time on upgrades.

Because you access a cloud-based environment through a URL, it is easy to share with anyone, enabling you to scale as needed. Tearing down environments in this case is as simple as deciding not to use them.

Parallel Test Execution Without Setup

The challenges associated with setting up environments for parallel execution can be easily overcome with cloud-based environments as they are built with frameworks to handle concurrency.

Report Across Environments with Zero Configurations

This is a gold mine. The reporting is built-in, so QA managers can pull metrics and see how tests are trending across operating systems, browsers or resolution settings – all in one place.

Combine Manual and Visual Tests with Automated Tests

Cloud-based test environments offer complementary benefits to automated testing that you won’t get through virtual machines, on-premises systems or local devices.

For example, you will often realize that manual or visual testing is needed to uncover issues that are hard to find through automated test scripts. These include:

  1. Verifying that UI elements are the right color, shape or size
  2. Uncovering overlapping elements
  3. Ensuring the overall appearance and usability of a website are as expected

The right cloud-based test environment management tool will have out-of-the-box capabilities to tie these into your test automation framework.

Nikhil Kaul is the Senior Product Marketing Manager of Testing Products at SmartBear Software. Prior to SmartBear, Nikhil served Digité, a leading provider of Application Lifecycle Management solutions, in various roles, including Senior Software Executive. There, he gained insights into the software development and testing market. Nikhil received a master’s degree in business administration from Georgetown University. He is actively involved in discussions taking place in the development and testing communities. Follow him on Twitter @kaulnikhil or read his blog at: http://blog.smartbear.com/author/nikhil-kaul

Demystifying DevOps – Which DevOps do you mean?

by Uday Kumar

DevOps has been the absolute buzzword of the IT world over the last couple of years and yet its precise meaning still remains unclear. Along with Agile, this word represents a promising new wave within software development. Currently, a team of industry experts is collaborating to develop a universal standard for this ambiguous and confusing term. Until this IEEE working group presents their final document, though, those in the trenches will continue to interpret the concept with flexibility, based on a balance of experience and convenience.

Personally, viewing DevOps primarily as a new organizational culture is definitely not my cup of tea. I see greater value in the DevOps focus on maximizing team members’ creative utilization of process and tools.

I have spent a lot of time researching many sources to determine exactly what DevOps means. Broadly, there are four categories associated with this term. In this blog, we will cover each of them.

Apart from DevOps, several related buzzwords like ChatOps, CloudOps and SecOps are now trending, based on the increasingly popular idea that software development is, by necessity, an integrative process. These related terms are out of scope for this blog.

Category 1: DevOps – Software Developers (Dev) and IT Operations (Ops) – CI & CD

DevOps is primarily defined as collaboration, communication and integration between Software Developers and IT Operations, groups whose fundamental interests are usually different and often conflicting.

Software Developers (Dev) want to focus on creating new code and applications while IT Operations (Ops) wants to focus on sustainability or quality. When the application is not working as expected, Dev often thinks “it is not my code, it’s your machine”, whereas Ops thinks “it’s not my machine, it must be your code”. DevOps is a term coined to reflect the bridge which must be built between these two teams so that new code gets deployed on the production systems smoothly without blaming one another.

In order to achieve this seamless continuity, we need to have the essential process and platform in place; two essential ingredients are CI (continuous integration) and CD (continuous deployment). Tools + automation is key. Companies often implement Agile principles to achieve the highest degree of success with CI and CD. IBM Bluemix and CloudBees are SaaS-based products with ready-made CI and CD platforms for many different applications (especially web and mobile). Take a look at the Sonatype SlideShare presentation on architecture types for a good overview of this topic; it is a very good reference for a deep dive into this helpful technology.
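
The fail-fast stage sequencing at the heart of a CI/CD pipeline can be sketched in a few lines. This is a toy illustration only: real pipelines delegate each stage to the CI server and tools named above, and the stage names here are made up:

```python
# Toy sketch of fail-fast CI/CD stage sequencing: run stages in order and
# stop at the first failure, so later stages never run on a broken build.

def run_pipeline(stages):
    """stages is a list of (name, action) pairs; action() returns True/False."""
    results = []
    for name, action in stages:
        ok = action()
        results.append((name, "passed" if ok else "failed"))
        if not ok:
            break  # fail fast, exactly as CI servers do
    return results

if __name__ == "__main__":
    # Illustrative stages; each lambda stands in for a real build/test step.
    pipeline = [
        ("checkout", lambda: True),
        ("build", lambda: True),
        ("unit-tests", lambda: True),
        ("deploy-to-staging", lambda: True),
    ]
    for name, status in run_pipeline(pipeline):
        print(f"{name}: {status}")
```

The point of the sketch is the ordering guarantee: a failed build never reaches the deploy stage, which is the basic contract CI and CD tooling enforces.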

Although the majority of the people with whom I have interacted adopt this view of DevOps, many companies have started realizing that CI and CD implementation is only able to provide a partial fix to a complex problem.

Categories 2 and 3: DevOps – Infrastructure (Ops) as Code (Dev)

The IT Operations team is generally responsible for managing the infrastructure and updating software (OS/applications) manually. Updating this configuration as part of change management is one essential step of the ITIL process. It is crucial that any and all changes be documented, as well as auditable.

Virtualization and Cloud technologies have enabled team members to write the code necessary to create and manage the infrastructure, as well as to control changes by updating that code. Docker, Puppet and Chef are taking this to the next level. Effectively, all operations that were performed by the IT team are now being automated. Code abstracts away the complexity of managing the infrastructure, so developers don’t have to specify the required infrastructure in a separate guideline. The marketing materials of Puppet and Chef reflect the awareness that effective automation of Configuration Management currently falls under the DevOps umbrella. Clearly, the DevOps term provides an apt metaphor for the way to achieve the most stable environments.
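
The declarative, idempotent model behind tools like Puppet and Chef can be reduced to a sketch: describe the desired state, diff it against the current state and apply only the difference. This is a conceptual illustration, not any tool's actual DSL, and the package names are invented:

```python
# Minimal sketch of desired-state configuration management: compare
# desired state to current state and apply only the difference, so
# running the same code twice produces no further changes (idempotence).

def plan_changes(desired, current):
    """Return the package->version changes needed to reach desired state."""
    changes = {}
    for pkg, version in desired.items():
        if current.get(pkg) != version:
            changes[pkg] = version
    return changes

def converge(desired, current):
    """Apply the plan; a second run with the same inputs does nothing."""
    changes = plan_changes(desired, current)
    current.update(changes)  # stand-in for actual install/upgrade commands
    return changes

if __name__ == "__main__":
    desired = {"nginx": "1.10", "openssl": "1.0.2"}  # illustrative packages
    current = {"nginx": "1.8"}
    print(converge(desired, current))  # first run applies the difference
    print(converge(desired, current))  # second run: nothing left to change
```

Because the desired state lives in code, every infrastructure change is a code change, which is exactly what makes it documentable and auditable as the ITIL process requires.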

Category 2: Internal Infrastructure Management for conducting different levels of testing either on Internal servers or in the Cloud (external).

This setup is quite complex, especially for product companies (not SaaS), as they need to support various product versions/variants. Along with virtualization/cloud, technologies like Docker, Puppet and/or Chef fit well here. Many job profiles in this area now carry the DevOps label.

Category 3: Managing the production servers, especially for SaaS products (applications) like Amazon, Google Apps and Salesforce.

DevOps demands continuous automation, which means that scripts, instead of people, initiate automated jobs, including continuously deploying software updates. Teams also use automation to check the health of the system and its environment: monitoring applications, securing them, load balancing, dynamically provisioning new servers as needed and even ensuring automatic recovery should a problem arise. Implementation of ITIL-based solutions is also considered to fall within DevOps.
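
The health-check-and-recover loop described above can be sketched as follows. This is a conceptual illustration; a real setup would probe HTTP endpoints and call a cloud provider's API to replace servers, so the check and restart actions are injected as stand-ins:

```python
# Toy sketch of automated health checking with automatic recovery:
# probe each server, restart unhealthy ones up to a retry budget, and
# report what happened so a human only gets paged for the hard cases.

def monitor(servers, check, restart, max_restarts=3):
    """check(server) -> bool; restart(server) attempts recovery.
    Returns (server, restart_attempts, final_status) for each server."""
    actions = []
    for server in servers:
        attempts = 0
        while not check(server) and attempts < max_restarts:
            restart(server)
            attempts += 1
        status = "healthy" if check(server) else "unrecoverable"
        actions.append((server, attempts, status))
    return actions
```

Run on a schedule, a loop like this replaces the manual "is production still up?" check, which is the kind of IT operation the paragraph above says is now getting automated.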

Personally, I don’t like to label automation of IT operations as “DevOps”. The first reason is that the automation of IT operations has been around long before anyone started using the term. Addteq has been automating processes since we first opened our company over 11 years ago, long before the DevOps term caught on. Secondly, DevOps is really about improving communication, collaboration and integration between groups, including business end users, development and operations, among other key stakeholders. IT automation is essential, but DevOps has an even broader focus.

Category 4: BizDevOps – Product Management (Biz), Engineering (Dev) and IT Operations (Ops)

This new term, though not yet as popular a buzzword as the others discussed, may just be my favorite. Heard only in the past two or so years, and quickly picking up speed, this three-pronged word covers the entire value stream, from initial customer request straight through to delivery. Unified cross-functional teams work together (collaboration, communication and integration) to generate value for the customer, while simultaneously and judiciously balancing the available resources. According to some, this can be considered Agile (Scrum/Kanban/enterprise Agile frameworks) along with the CI and CD mentioned above. IBM, CA and XebiaLabs continue to promote this categorization under the DevOps label.

I use the analogy of physical manufacturing to understand the software lifecycle: the development process mirrors the steps implemented in a manufacturing factory, while the cross-functional value chain (from Product Management to IT Operations) may be thought of as a manufacturing assembly line. To achieve enterprise agility, the assembly line must be continuous. This assembly line is the Continuous Delivery (Development + Integration + Deployment + Testing + Service) platform, which is critical to enabling continuity. Integrated Release Management and Application Lifecycle Management (ALM), along with automation, are really other terms for the continuous delivery platform.

At Addteq, we provide services for all four of the above categories. Plus, we are always adding and updating based on the latest industry developments. For more details, you can refer to our extensive slide deck.

We invite customer and community input on this timely debate: which category do you regard as “the real DevOps”? Is your understanding of DevOps included in our category list?

As there is so much confusion around this word, the next time you say DevOps to someone, it might be helpful to specify to which aspect you are referring. And if someone is discussing DevOps, then don’t hesitate to request clarification, especially during consulting and recruitment. DevOps can’t work unless everyone is on the same page.

We encourage you to add your comments or perspectives.

About the author:

Uday Kumar is a Product Manager and DevOps/ALM Management Consultant for Addteq, an Atlassian Platinum Solution Partner providing business solutions to enterprise clients. Uday is an intrapreneur, driving innovation within his organization, including its software development centers, toward a vision he refers to as the “Scientific Management of the Software Industry”. Uday focuses on reducing all types of operational waste while achieving operational excellence.

Predictions for the Coming Year

by Bob Aiello

This is an exciting time to be part of the technology industry. The demand for complex systems is surpassed only by user expectations that new features, as well as bugfixes, can be delivered rapidly – even during normal business hours. The internet of things (IoT) has expanded the definition of connectivity to everything from your car to your washing machine. Mobile devices can help monitor your smart house and optimize your electricity and heating bills while ensuring mobile connectivity throughout your international travel. Cloud-based resources deliver remarkable scalability at low cost, enabling the analysis of big data, which continuously astounds with new applications of its remarkable business intelligence capabilities. Savvy architects ensure not only an API-first, but often an API-only architecture. While there are limitless possibilities, there are also limitless risks.

Cybersecurity has emerged as a core capability to address the challenges of cybercrime (and, for state entities, the related challenges of cyberwarfare and even cyberterrorism). The remarkable number of high-profile security incidents has damaged the reputations of many firms and led many consumers to view cyber-responsibility as a must-have. In the coming year, consumers will expect companies to up their game with regard to cyber capabilities ensuring systems security and reliability. Technology professionals will need to embrace a comprehensive approach to continuous security that reaches from designing systems to be secure, through secure code analysis, to thorough penetration testing, in order to confirm that applications have been not just designed, but also built, packaged and deployed to be secure.

DevOps is growing beyond deployment engineering to encompass a broader set of competencies that enable rapid delivery of business capabilities. DevOps itself will mature from the domain of internet startups to proficiencies that can align with the demands of even the most highly regulated financial services firms. DevOps is no longer just about development and operations; it enables better communication and collaboration between organizational units throughout the enterprise. This journey to maturity must include an industry standard to guide the use of DevOps in firms that must adhere to audit and regulatory requirements, and seasoned IT professionals are working on that effort right now. Feel free to contact me for more information on the IEEE P2675 DevOps Working Group.

Agile development has established itself as the preferred approach to software engineering. But many organizations struggle to achieve agility due to inevitable constraints – from idiosyncratic corporate culture to the complex demands of technology development efforts. As the old adage goes, “nine women cannot make a baby in one month”, and the realities of systems constraints often make iterative development challenging, if not nearly impossible. In 2017, firms will likely need to focus more on software process improvement than on proving they meet the demands inherent in agile development. That’s right; I am saying to focus more on process improvement and less on proving that you are agile! Even projects which must use waterfall methodology can benefit greatly from the wisdom of the Agile Manifesto and agile principles. Our new book on Agile ALM & DevOps discusses this approach in detail. As it always has, process maturity will need to include traceability, while still achieving fully agile application lifecycle management (ALM). Organizations in highly regulated industries, including financial services, will need to implement agile ALM while still maintaining fully traceable processes.

DevOps and Agile ALM will accelerate your developer velocity; however, your performance will suffer significantly unless you have implemented continuous testing, including API testing and the use of service virtualization. Unit and functional testing are essential, but not sufficient, to keep up with the demands of Agile and DevOps. Agile depends upon DevOps, and DevOps relies upon continuous testing and continuous security.

Tools are essential for successful software development and solid tools integration is also crucial. Vendors will continue to evolve in terms of building tools that can be integrated across the ecosystem. Vendors themselves will continue to play a key leadership role in DevOps innovation with every vendor stretching to ensure that their solutions help enable DevOps.

Bob Aiello (bob.aiello@ieee.org)

 

Enterprise DevOps and Microservices in 2017

By Anders Wallgren, CTO, Electric Cloud

As a new year begins, we often take some time to reflect on the previous year and look ahead at what is to come. Suffice to say, 2016 was a momentous year for software delivery in the enterprise. DevOps adoptions matured greatly and, at conferences worldwide, industry leaders were eager to share their experience and expertise.

While DevOps was taking over, two trends focusing on ‘small’ things were making big waves: containers and microservices as a means for organizations to scale their application development and releases. While both technologies have matured considerably in the last year, they are still challenging – particularly for enterprises that need to incorporate these new trends with monolithic applications, traditional releases, VMs, etc. While it’s not a silver bullet that’s suitable for every use case, we’re increasingly seeing more enterprises exploring Docker and microservices for their needs, and I’m sure these trends will continue to lead positive advancements in modern application delivery for the coming year (and beyond).

Continue reading for my insights on some of the opportunities and challenges surrounding these two major trends, as well as a few predictions for what’s to come in 2017.

2016 Trends – Looking Back

DevOps has matured:

DevOps experienced considerable maturation and advancement in 2016. From its grassroots beginnings with small teams, mainly for green-field applications or startup companies, DevOps has matured to the point where complex enterprises – not just the unicorns – are now often at the forefront of DevOps innovation. DevOps practices have expanded beyond just web apps to database deployments, embedded devices and even mainframes.

Microservices and Containers go hand in hand:

Microservices are an attractive DevOps pattern because they enable speed to market. With each microservice being developed, deployed and run independently (often using different languages, technology stacks and tools), microservices allow organizations to “divide and conquer” and scale teams and applications more efficiently. When the pipeline is not locked into a monolithic configuration – of toolset, component dependencies, release processes or infrastructure – teams gain a unique ability to better scale development and operations. It also helps organizations easily determine which services don’t need scaling, to optimize resource utilization.

However, we saw in 2016 that adopting containers and microservices can be challenging. What’s starting to happen, and what some people are realizing the hard way, is that you’re going to have a really hard time doing containers and microservices well with bad architecture. Whether it is the architecture of your application, services, infrastructure or delivery pipeline, architecture matters greatly in your ability to do microservices well and take advantage of the benefits they, and containers, can offer. Microservices are not for everyone, and if you’re looking to microservices as a way to “uncomplicate” your life, you certainly won’t find that if you haven’t first resolved CI, automated testing, monitoring, high availability and other major prerequisites. If you’re struggling in one of these areas, it is highly recommended that you address those issues first; then you can decide whether microservices will be an asset to your organization.

Legacy applications are another challenging factor in the surge of demand for microservices over the past year. Do you decompose your application, or parts of it? Do you build APIs around it? Do you completely rewrite it? We’ve learned the value of a phased approach when re-architecting a legacy application or dealing with an organization that has established processes and requirements for ensuring security and regulatory compliance. In these instances, teams should consider starting with a monolithic application where practical and then gradually pulling functions into separate services.

Microservices and containers will continue to be major players in 2017, and gradually some of the kinks and challenges associated with them will be ironed out as the community as a whole becomes more proficient with them, and tools and patterns are introduced to accelerate the adoption and large-scale operations of these technologies.

What’s Ahead in 2017  

DevOps failures will come to the forefront

While the hype around embarking on a DevOps transformation is credible, and the ROI and transformational benefits of DevOps have been established, there has been a silent backlash that will make itself known in 2017. This will be the year we start to hear about some of the failures – which is good, because we learn from them. There will be quite a few people who say, “Yeah, we tried DevOps. It doesn’t work. It’s all hype. It only works for the unicorns. It only works for new software. It only, it only, it only…” This type of doubt is bound to happen when any new process or framework is introduced into legacy environments and cultures, and 2017 is the time for the doubt around DevOps to rise. Every DevOps adoption story is unique, and every journey is one of trial and error on the road toward continuous improvement. We know the road is challenging, frustrating at times, and certainly not all roses. But the rewards are immense. As the community shares patterns and learnings about what works, we should also be more forthcoming about sharing our failures, setbacks and wrong turns, so we can all learn and become better at realizing software. Together.

Importance of Microservices Adoption

2017 looks to be a year where microservices and containers will continue to rise, and get pushed even further into the limelight as companies continue to invest in software-driven innovation and technology. It will be critically important for companies to have a solid architecture in place and understand how to approach and scale them effectively.

DevOps’ Impact on Financial Services

Furthermore, we will start to see some interesting disruption in FinServ. Interestingly, financial institutions have often seen technology as their key differentiator. More often than not, however, the problem these organizations have is with supporting the right culture to enable a successful DevOps implementation. As veteran companies work on instilling the right mindset to enable faster releases in such a heavily regulated industry, we could also see disruption in the space, with new financial technology, new online banks and new companies taking up more and more market share.

Secure DevOps

With the proliferation of IoT in our always-connected world, and with growing cybersecurity threats, we are going to see more security verification and more built-in compliance validation checks happening earlier in the lifecycle, fully integrated with the development process.

Final Thoughts

All in all, what all teams and organizations need to do is keep their eyes on the ball. We practice DevOps to bring more value to our customers and employees, delight our users, and make things better, safer and faster. Keep that in mind – because if what you’re doing on any given day isn’t moving that ball forward, what was the point? Focus on that for the new year!

Behaviorally Speaking: DevOps Development

by Bob Aiello

DevOps puts the focus on developing the same build, package and release procedures to support deployment throughout the entire application lifecycle. This is a big change for many companies, where developers traditionally did their own deployments and operations joined the party late in the process, usually when the application was being prepared for promotion to production. In fact, I have seen organizations that put a tremendous amount of work into their production deployments but failed miserably simply because they started too late in the development lifecycle. DevOps puts the focus on creating automated application lifecycle management (ALM) to support development, test, integration, QA, UAT and production. But how exactly do you develop DevOps itself, and how do you know when you have achieved success?

DevOps is not new; like many other Agile and Lean practices, DevOps makes use of principles that have been around for a long time. But, also like Agile and Lean, DevOps clarifies and highlights industry best practices in a way that is particularly compelling. Traditionally, developers have demanded free rein to build, package and deploy their own work. As a group, developers, by and large, are known to be smart and hardworking folks. Unfortunately, seasoned professionals also know that many IT problems and systems outages are caused by a smart person accidentally missing an essential step. Creating repeatable processes and ensuring uninterrupted services is primarily about creating procedures that are simple, clear and reliable. DevOps teaches us that we need to begin this journey early in the process. Instead of just automating the build and deploy to QA, UAT and production, we shift our focus upstream and begin automating the processes for development, too.

Continuous integration was made popular by industry expert Martin Fowler, who strongly advocated deploying to a test environment even for a development build. Creating seamless and reliable fully automated deployment procedures is not an easy task. There are many dependencies that are often difficult to understand – much less automate. Developers spend their time learning new technologies and building their technical knowledge, skills and abilities. DevOps encourages improved communication between Development, QA and Operations. The most important part of this effort is to transfer knowledge earlier in the process – reducing risk by creating a learning organization. DevOps helps to sharpen the focus of both QA and Ops by enabling each to get involved early in the development cycle and by creating automated procedures to reliably build, package and deploy the application. If you want to be successful, then you should also strive to be agile.

DevOps was made popular by a number of agile technology experts, including those involved with what has become known as agile systems administration and agile operations. It is essential to remember that the journey to DevOps must itself follow agile principles. This means that creating your automated build, package and deployment procedures should be handled in an agile, iterative way. This is exactly how I have always handled this effort. Usually, I get called in to deal with a failed application build and release process. I often have to start by performing many tasks manually. Over time, I am able to automate each of the tasks, but this is an iterative effort with many decisions made at the last responsible moment. Source code management is also an essential starting point.

Developers need to be trained to successfully use version control tools to secure the source code and reliably create milestones, including version labels or tags. Just as DevOps starts at the beginning of the lifecycle, DevOps also needs to focus on the seminal competencies of excellent source code management. Automated application build is next, with each configuration item embedding a unique and immutable version ID. Release packages are created with embedded manifests containing a complete list of every included configuration item, whether it is a derived binary, a text-based configuration file or a Word document containing essential release notes. Good release management means that you can identify all of the code that is about to be deployed and also provide a procedure to verify that the correct code was in fact deployed (known as a physical configuration audit). It is equally essential to provide a mechanism to ensure that a release package has not been modified by an unauthorized individual.
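
As a small illustration of the manifest and physical configuration audit described above, the sketch below records a SHA-256 digest for every configuration item in a release directory, then re-hashes the deployed tree and reports anything missing, unexpected or modified. The function names and layout are my own invention for the example, not a reference to any specific tool.

```python
import hashlib
import os

def build_manifest(release_dir):
    """Record a SHA-256 digest for every configuration item in the release."""
    manifest = {}
    for root, _dirs, files in os.walk(release_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            rel = os.path.relpath(path, release_dir)
            with open(path, "rb") as f:
                manifest[rel] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def physical_configuration_audit(deployed_dir, manifest):
    """Re-hash the deployed tree and report any drift from the manifest."""
    current = build_manifest(deployed_dir)
    return {
        "missing": sorted(set(manifest) - set(current)),
        "unexpected": sorted(set(current) - set(manifest)),
        "modified": sorted(p for p in manifest
                           if p in current and current[p] != manifest[p]),
    }
```

The same digests also provide the tamper check: if a file is changed by an unauthorized individual after the package is cut, the audit reports it as modified.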

What makes these practices “DevOps” is the focus on developing these procedures from the beginning of the lifecycle and taking an agile iterative approach as they are developed. There is also an intentional effort to share knowledge equally between development, QA and operations. Development often knows the technology best, but operations understands the real world challenges that will result in developers being roused from their slumber in the middle of the night. QA contributes by developing the procedures to ensure that bugs are never found a second time in any release.

DevOps is all about the synergy of productivity and quality, with a real-world focus on sharing knowledge and building competencies!

Personality Matters – Personality of Tools

By Leslie Sachs

Do tools have personality? Writers and inventors have long suggested that machines will eventually develop to a point where they can think, learn, experience emotions, and display traits commonly associated with having a personality. Meanwhile, computer scientists have been studying thinking processes and learning machines [1]. Many people certainly believe that some tools have apparent personality flaws, including acting stubborn, unpredictable, and, at times, irrational. Science fiction aside, tools often do, in fact, display characteristics that are commonly associated with human personality, and understanding this phenomenon can help when it comes to evaluating, selecting, and implementing tools to support your software development process. This article will help you handle the people side of tools selection and adoption.

Remember Eliza?

MIT Professor Joseph Weizenbaum shocked many people with his groundbreaking work to develop Eliza, a natural language program that mimicked the non-directive probing commonly associated with Rogerian psychology [2]. Dr. Weizenbaum asked people to converse with Eliza as a way of improving the natural language capabilities of the program. Soon it became apparent that some people had trouble remembering that Eliza was only a computer program. Eliza became real to many of them, and there were even reports of participants refusing to show the scripts of their conversations with Eliza to the researchers because the information revealed was too personal!

Being a People Person

I am not by nature a computer person; I prefer to relate to other human beings, along with all of their complex feelings and emotions. But I must admit that there are times when my laptop does seem remarkably like a stubborn teenager going through a tough adolescence. So what does all this blurring of human-machine qualities mean in terms of selecting, implementing, and supporting tools?

Tools Have Personalities, Too

You need only have a short conversation with a true open source enthusiast to realize that many tools have personalities that have been shaped by their process of creation and development. Open source tools usually have traits that relate to the community where they were developed – or perhaps nurtured is a better word. Those who support open source often gladly accept limitations or even bugs in their tools for the sake of maintaining the transparent and communal nature of tools written and supported by the open source community. It is not only open source tools, though, that have their own personalities; commercial products have them, too, along with their own cultural norms, especially in terms of expectations for support and maintenance.

Commercial Products

Similarly, commercial products frequently have personalities that mirror the companies that developed them, although possibly once or twice removed due to corporate acquisitions. Many companies are truly dedicated to their products and customer satisfaction. Others seem to be far less committed, although these may still have their own competencies.

Organizations Have Personalities, Too

Some organizations have almost a philosophical orientation in favor of one tool approach or another. This phenomenon is most obvious in companies that insist on selecting only from the wide array of open source tools. Other firms may require the features or perhaps the security of a commercial tools vendor. Cognitive complexity also factors into the alignment of tools and organizations: the best fit is a tool that provides just enough features to get the job done effectively. Some organizations just really like to keep things simple, while others may want to push the envelope with advanced procedures and development methodology. Interestingly enough, some commercial tools vendors are learning to be a little more communal by providing a lightweight open source version of their tools.

Hybrid Approaches

Many commercial tools vendors, including IBM, are learning to structure themselves much more like their open source counterparts, with full transparency into the defects being fixed and the tasks being completed. Today, you can see plans for features, defects, and resources all exposed, even as these vendors compete with other commercial tools vendors in the same space.

Conclusion
Understanding the personality of tools and their vendors, whether they be commercial or open source, is a matter of understanding your organization’s behavior. Corporations have personalities, too. As always in psychology, you need to be self-aware and realize your own needs and capabilities. You will succeed if you can align your organization’s personality with the tools and processes that you want to implement. Feel free to drop me a line and let me know about your own challenges with understanding the personality of your tools!

References
[1] McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. W. H. Freeman, 1979
[2] Rogers, Carl. Client-Centered Therapy: Its Current Practice, Implications, and Theory. Houghton Mifflin, 1951
[3] Aiello, Robert and Leslie Sachs. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley, 2010, p. 155.
[4] Aiello, Robert and Leslie Sachs. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement, Addison-Wesley, 2016

Parasoft Features New Continuous Testing Release at Gartner Summit

December 06, 2016 as reported on the Parasoft website

Parasoft announced an update to its Continuous Testing solution—integrating Service Virtualization, API Testing, and Test Data Management with Test Environment Management to enable early, rapid, and rigorous testing of highly connected applications. The solution is currently being featured at Parasoft’s booth #309 at the Gartner Application Strategies & Solutions Summit in Las Vegas.

The latest Continuous Testing solution (Continuous Testing Platform 3.0, Virtualize 9.10, and SOAtest 9.10) provides the industry’s most powerful and comprehensive thin client interface for service virtualization and API testing. From any browser, a broad range of team members can rapidly create, leverage, and share API testing and service virtualization assets to accelerate testing and support advanced automation for Continuous Integration and Delivery scenarios.

“In this release, we have focused on broadening service virtualization access and packaging, addressing both creation and deployment,” stated Mark Lambert, VP of Products for Parasoft. “For creation, we have enabled advanced workflows of the Parasoft Virtualize Desktop, providing users the ability to create sophisticated assets directly from their browser. On the deployment side, Docker scripts, available on the Parasoft Marketplace, and prebuilt images, hosted within the Microsoft Azure Marketplace for both on-demand and BYOL licensing, provide a complete set of deployment options for scalable service virtualization.”

The new release also features:

  • Burp Suite Integration for API and Web Security Penetration Testing: Integration with Burp Suite, the application security testing tool recognized as the industry standard, brings a new level of API and web security penetration testing to the Parasoft solution.
  • HTTP/2 Support for Testing and Service Virtualization: Extending its highly trusted message/protocol support, Parasoft’s solution now supports testing and simulation of HTTP/2.
  • HTTP Archive (HAR) Support for Creating Tests and Virtual Assets from Fiddler Traffic Files: Converts HTTP Archive (HAR) files (e.g. from Fiddler) into traffic files that can be used for creating Parasoft virtual assets or test scenarios.
  • Jenkins Plugin for API Test Execution: Teams using Jenkins to execute Parasoft SOAtest tests during Continuous Integration can view SOAtest results within Jenkins and use the results of these tests to control Jenkins workflows.
  • Microsoft Azure and Visual Studio Team Services support: available from the Microsoft Marketplace

About Parasoft

Parasoft researches and develops software solutions that help organizations deliver defect-free software efficiently. By integrating development testing, API testing, and service virtualization, we reduce the time, effort, and cost of delivering secure, reliable, and compliant software. Parasoft’s enterprise and embedded development solutions are the industry’s most comprehensive—including static analysis, unit testing, requirements traceability, coverage analysis, functional and load testing, dev/test environment management, and more. The majority of Fortune 500 companies rely on Parasoft in order to produce top-quality software consistently and efficiently as they pursue agile, lean, DevOps, compliance, and safety-critical development initiatives.

Training – Configuration Management, DevOps and Agile ALM

 
We provide courses on DevOps, Configuration Management, and Agile Application Lifecycle Management.
Configuration Management: Robust Practices for Fast Delivery
(See outline below)
Configuration management (CM) best practices are essential for creating automated application build, package and deployment to support agile (and non-agile) application integration and testing demands, including rapidly packaging, releasing, and deploying applications into production. Classic CM—identifying system components, controlling changes, reporting the system’s configuration, and auditing—won’t do the trick anymore. Bob Aiello presents an in-depth tour of a more robust and powerful approach to CM consisting of six key functions: source code management, build engineering, environment management, change management and control, release management, and deployment, which are the prerequisites for continuous delivery and DevOps. Bob describes current and emerging CM trends—support for agile development, container-based deployments including Docker, cloud computing, and mobile apps development—and reviews the industry standards and frameworks available in practice today. Take back an integrated approach to establish proper IT governance and compliance using the latest CM practices while offering development teams the most effective CM practices available today.
Continuous Delivery: Rapid and Reliable Releases with DevOps
DevOps is an emerging set of principles, methods, and practices that enables the rapid deployment of software systems. DevOps focuses on lowering barriers between development, testing, security, and operations in support of rapid iterative development and deployment. Many organizations struggle when implementing DevOps because of its inherent technical, process, and cultural challenges. Bob Aiello shares DevOps best practices, starting with its role early in the application lifecycle and bridging the gap with testing, security, and operations. Bob explains how to implement DevOps using industry standards and frameworks such as ITIL v3 (IT Service Management) in both agile and non-agile environments, focusing on automated deployment frameworks that quickly deliver value to the business. DevOps includes server provisioning essential for cloud computing in what is becoming known as Infrastructure as Code. Bob equips you with practical and effective DevOps practices—automated application build, packaging, and deployment—essential for meeting today’s business and technology demands.
Agile ALM: Using DevOps to Drive Process Improvement
Many organisations struggle to improve their existing IT processes to drive their software and systems development work. This leaves technology managers and teams to use whatever worked for them on the last project, often resulting in a lack of integration and poor communication and collaboration across the organisation. Agile application lifecycle management (ALM) is a comprehensive approach to defining development and operations processes that are aligned with agile methodology. Bob Aiello explores how to use DevOps principles and practices to drive the entire process improvement effort—establishing agile practices such as continuous integration and delivery that integrates with the IT operations controls. Learn how to use DevOps approaches to iteratively define a pragmatic and real-world ALM framework that will evolve, scale, and help your organisation achieve its software and business goals. Take back a template to define and automate your application lifecycle, accounting for all stakeholders and integrating their processes into a comprehensive agile ALM framework.
Outline of CM Best Practices Class:
Configuration Management Best Practices Training Program
Introduction:
Based upon industry standards and frameworks, this course introduces the technology professional to the principles, concepts and hands-on best practices necessary to establish comprehensive configuration and release management functions. Discussing CM as it is practiced in both product companies and IT organizations, Bob Aiello and Leslie Sachs provide expert guidance on establishing configuration management best practices.

  1. Configuration Management Concepts
    * Configuration Identification
    * Status Accounting
    * Change Control
    * Configuration Auditing (physical and functional)
  2. Source Code Management
    * Baselines
    * Sandboxes and Workspaces
    * Variant Management
    * Handling bugfixes
    * Streams
    * Merging
    * Changesets
  3. Build Engineering
    * Version IDs and branding executables
    * Automating the build
    * Build tools to choose from, including Ant, Maven, Make and MSBuild
    * Role of the Build Engineer
    * Continuous Integration
  4. Environment Configuration
    * Supporting Code Promotion
    * Implementing Tokens
    * Practical use of CMDBs
    * Managing Environments
  5. Change Control
    * Types of Change Control
    * Rightsizing the Change Control Process
    * The 29-minute Change Control Meeting
    * Driving the Process Through Change Control
  6. Release Management
    * Packaging the release
    * Ergonomics of Release Management
    * RM as coordination
    * Driving the RM Process
  7. Deployment
    * Staging the release
    * Deployment frameworks
    * Traceability
  8. Architecting Your Application for CM
    * CM Driven Development
    * How CM Facilitates Good Development
    * Build Engineering as a Service
  9. Hardware CM
    * Managing Hardware Configuration Items
    * Hybrid hardware/software CM
  10. Process Engineering (Rightsizing)
    * Agile/Waterfall
    * Hybrid Approaches
    * Agile Process Maturity
  11. Establishing IT Governance
    * Establishing IT Governance
    * Transparency
    * Improving the Process
  12. What you need to know about Compliance
    * Common Compliance Requirements
    * Establishing IT Controls
  13. Standards and Frameworks
    * What you need to know about standards and frameworks
  14. CM Assessments
    * Evaluating the Existing CM Practices
    * Documenting “As-Is” and “To-Be”
    * Writing your CM Best Practices Implementation Plan
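
Two of the build and environment topics in the outline, embedding an immutable version ID in each built artifact and substituting tokens to support code promotion, can be sketched in a few lines. The token format (@VERSION@, @DB_HOST@) and function names below are invented for illustration, not part of the course materials.

```python
def make_version_id(major, minor, build_number, commit_hash):
    """Compose the immutable version ID branded into each built artifact."""
    return f"{major}.{minor}.{build_number}+{commit_hash}"

def substitute_tokens(template_text, values):
    """Replace @TOKEN@ markers so one template serves every environment."""
    result = template_text
    for token, value in values.items():
        result = result.replace(f"@{token}@", value)
    return result
```

At promotion time, the same template is stamped with the environment-specific values, so the artifact itself never changes as it moves from QA to UAT to production.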

About the Instructor:
Bob Aiello, CM Best Practices Consulting
Bob Aiello is a consultant and author with more than thirty years of experience as a technical manager at leading financial services firms, with company-wide responsibility for CM, ALM and DevOps. He often provides hands-on technical support for enterprise source code management tools, SOX/Cobit compliance, build engineering, continuous integration, and automated application deployment. He serves on the IEEE Software and Systems Engineering Standards Committee (S2ESC) management board, is the current chair of the IEEE P2675 DevOps Working Group and served as the technical editor for CM Crossroads for more than 15 years. Bob is also editor of the Agile ALM DevOps journal, coauthor of Configuration Management Best Practices (Addison-Wesley, 2010) and of his latest book, Agile Application Lifecycle Management – Using DevOps to Drive Process Improvement (http://agilealmdevops.com).

Behaviorally Speaking: Dysfunctional Ops

By Bob Aiello

IT Operations plays an important role in any organization by ensuring that systems are always functioning as they should and that all services meet the needs of customers and other end users. Some developers get frustrated when trying to work with their operations colleagues and may even try to bypass IT Ops whenever possible. My experience has been that developers may lose faith in the competency of the operations team. Are developers just a bunch of undisciplined, wild cowboys, or are there good reasons why many developers feel that they have to avoid working with Ops whenever possible? The truth is that operations, in many large organizations, is incapable of handling its role, and developers are well justified in trying to bypass Ops. In this article, we are going to have a tough conversation about the reality that many operations organizations lack qualified personnel, are more focused on blocking progress than delivering value to customers, and exhibit many other dysfunctional behaviors. How did we get to this point, where operations is often so dysfunctional?

Computer operators were once solely focused on handling decks of punch cards and data tapes and running very large print jobs. The job focused on operating the equipment and dealing with environment issues such as keeping the computer room nice and cold for the sake of the machines themselves. Back then, developers were much more skilled than computer operators and required many more years of training in technical areas, from machine code programming to intensive work in languages including Fortran, COBOL and PL/1. By contrast, many of today’s operations engineers are just as technical and skilled as any developer, often focusing on provisioning complex cloud-based runtime systems and environment monitoring while designing and implementing fully automated application deployment pipelines. Unfortunately, some operations engineers are not up to the demands of the job, and developers may feel justified, and even motivated, to bypass operations whenever possible. In my own work, I have come across operations engineers who lacked even the most basic Unix command line skills, including use of the vi editor. This might seem trivial, but these same engineers had the root (i.e., superuser) passwords, which means that they had the potential to make serious mistakes that could take down large-scale production systems. Why would IT operations management hire people who lacked the basic skills necessary to successfully administer systems? Some organizations allocate management compensation based upon the number of people reporting into a function. When this happens, managers are incentivized to build large teams with lots of head count, without the more important focus on whether these engineers have the right level of knowledge, skills and abilities.

It has been my experience that when operations engineers lack sufficient technical skills, developers lose faith and resort to working around the operations group instead of partnering with them to get the work done successfully. Operations engineers themselves may exhibit counterproductive coping behaviors to deal with their limited expertise. Some teams focus narrowly on one specific skill which they understand and can execute successfully. They may end up adopting a siloed mentality in which they maintain a very narrow focus, do not share their knowledge and procedures, and consequently lack the broader systems knowledge required for success. I have seen many large operations organizations that consisted of small, specialized silos which could not effectively meet their objectives even within the operations organization itself.

The best operations groups consist of highly skilled engineers with strong cross-functional capabilities. Having experience with development, particularly scripting and automation, is an absolute must-have. When these colleagues work on deployment automation, we often call these technology professionals DevOps engineers. Going forward, operations must employ engineers with sufficient knowledge, skills and abilities to get the job done. Do you agree or disagree? Feel free to drop me a line to share your opinion!

Personality Matters—CM Excellence

By Leslie Sachs

Excellence in Configuration Management conjures up images of efficient processes with fully automated procedures to build, package and deploy applications resulting in happy users enjoying new features on a regular basis. The fact is that continuous integration servers and great tools alone do not make CM excellence. The most important resources are the technology professionals on our team and the leader who helps guide the team towards excellence. That doesn’t mean that everyone is (or can be) a top performer, even if you are blessed with working on a high performance cross-functional team. What it does mean is that understanding the factors that lead to excellence will enable you to assess and improve your own performance. A good place to start is to consider the traits commonly associated with effective leadership and successful results. This article will help you understand what psychologists have learned regarding some of the essential qualities found among top leaders and others who consistently achieve excellence!

Software Process Improvement (SPI) is all about identifying potential areas of improvement. Achieving excellence depends upon your ability to identify and improve upon your own behavior and effectiveness. It is well-known that we are each born with specific personality traits and innate dispositional tendencies. However, it is an equally well-established fact that we can significantly modify this endowment if we understand our own natural tendencies and then develop complementary behaviors that will help us achieve success!

Let’s start by considering some of the personality traits that help predict effective leadership. One of the first studies on effective leadership was conducted by psychologist Ralph M. Stogdill [1]. This early study identified a group of traits including intelligence, alertness, insight, responsibility, initiative, persistence, self-confidence, and sociability. It is certainly not surprising that these specific traits are valuable for successful leadership and achieving excellence. Being intelligent speaks for itself, as does being alert to new opportunities and having insight into the deeper meaning of each situation. Although general intelligence was long considered static, more recent research suggests that it is possible to bolster one’s genetic inheritance. Certainly, one can consciously strive to develop the behavioral patterns closely associated with intelligence, such as attentiveness to detail and novelty and thoughtful analysis of options.

You might want to ask yourself whether you are responsible, take initiative and show persistence when faced with difficult challenges. Displaying self-confidence and operating amiably within a social structure are essential as well. Do you appreciate why these characteristics are essential for leadership and success, and do you look for opportunities to incorporate these qualities into your own personality? To improve your leadership profile, you must also actively demonstrate that you know how to apply these valuable traits to solve real workplace dilemmas. Upon reflection, you can certainly see why CM excellence would come from CM professionals who are intelligent, alert and insightful. Being responsible, showing initiative and being persistent, along with having self-confidence and being social, are all clearly desirable personality traits that lead to behaviors resulting in CM excellence.

Stogdill conducted a second survey in which he identified ten traits essential for effective leadership. This expanded cluster includes drive for responsibility, persistence in pursuit of goals, risk-taking and problem-solving capabilities, self-confidence, and a drive for taking initiative. In addition to these, Stogdill also discovered that people’s ability to manage stress (such as frustration and delay), as well as their accountability for the consequences of their actions, are both integral to leadership success. After all, intelligence, insight, and sharp analytic skills are not very useful if a manager is too stressed out to prioritize efficiently or to authorize appropriate team members to implement essential programs. Finally, you need to be able to influence other people’s behavior and to handle social interactions [2]. Other noted psychologists have also studied leadership traits, and many have identified similar ones.

Jurgen Appelo lists 50 team virtues [3] which, not surprisingly, also correspond to many of the traits identified in Stogdill’s studies, and I discussed many of these same traits in the book that I coauthored with Bob Aiello [4]. You need to consider each of these traits and understand why they are essential to leadership and achieving success. Keep in mind that while people are born with natural tendencies, each of us is capable of stretching beyond them if we understand our own personality and identify which behaviors are most likely to lead to the changes we desire. So if you want to achieve greater success, consider reflecting upon your own behaviors and comparing your style with those traits that have been repeatedly associated with good leaders and CM excellence. For example, being proactive in solving problems and having the self-confidence to take appropriate risks can help you achieve success. Remember also that being social means involving and interacting with your entire team: full team participation maximizes the power of each member’s strengths while minimizing the impact of individual weaknesses.

Configuration Management excellence depends upon the skilled resources who handle the complex IT demands on a day-to-day basis. The most successful professionals are able to take stock of their personality and consider the traits that experts regard as essential for effective leadership. If you can develop this self-awareness, you can achieve success by developing the behaviors that result in strong leadership and excellence in all you do!

References:
[1] Yukl, Gary, Leadership in Organizations, Prentice Hall, 1981, p. 237
[2] Northouse, Peter G., Introduction to Leadership Concepts and Practice Second Edition, SAGE Publications, Inc 2012, p. 17
[3] Appelo, Jurgen, Management 3.0 – Leading Agile Developers, Developing Agile Leaders. Addison-Wesley, 2011, p. 93
[4] Aiello, Robert and Leslie Sachs. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley, 2010