Survey – Current State of Data Quality


Message from Scott Ambler –

For my friends in the IT world:

As many of you know, for years now I have been running surveys to explore what is actually happening in the IT industry. I always share the results of the survey, including the questions as asked, the answers as provided, and my analysis online once the survey is over.

Recently I updated our State of Data Quality survey (it has been a few years since we last ran it), which explores whether organizations are adopting concrete approaches to finding and fixing potential quality problems in their databases. If you currently work at an organization that has databases running in production, I would appreciate it if you would take a few minutes to fill out this survey. The survey is a maximum of 14 questions. Click here to take the 2016 Data Quality Survey.

Digital Currency Firm Victim of Cybercrime!



As reported in the Wall Street Journal, digital currency firm DAO was the victim of a hack in which 3.6 million Ethereum coins, valued at $55 million, were removed from the DAO account. According to the article, the “attacker appeared to have exploited a loophole that essentially allowed a DAO stakeholder to create an identical fund and move money into it”. There are now published reports that the digital currency firm will cease to exist, which brings to mind the 2012 incident in which Knight Capital lost $440 million due to a failed system upgrade. Financial services firms need to consider the cyber security and reliability of their systems or risk the consequences of catastrophic loss.



Agile ALM Webinar – Friday, June 24, 2016 12:00 PM ET


Agile ALM: Author Bob Aiello – Lunch and Learn Webinar – Friday, June 24, 2016 12:00 PM ET

Agile ALM offers “just enough process” to get the job done efficiently and utilizes the DevOps focus on communication and collaboration to enhance interactions among all participants.

Leading expert Bob Aiello will show how to fully leverage Agile benefits without sacrificing structure, traceability, or repeatability.

Please join us for a webinar with Bob, who recently wrote the book on Agile ALM!

To register, click here

Agile ALM DevOps Book


Welcome! We are building this website to encourage an honest and open discussion on how to implement a comprehensive Agile ALM using DevOps best practices. Some of the things that we wrote about may well create a little controversy, as we did not simply go along with the usual narrative on agile development. Instead, we covered some tough topics related to how you use agile principles and practices in organizations whose strong cultures run counter to agile adoption. We would like to ensure that we stimulate discussion on all of the topics in our new book on Agile Application Lifecycle Management. Below is an outline of the topics covered in Agile Application Lifecycle Management – Using DevOps to Drive Process Improvement (Addison-Wesley 2016). You are welcome to ask us any questions or start a dialogue on a relevant topic. Priority will be given to folks who suggest topics that are specific to a section in the book. Simply send me an email and we will create a discussion thread on the topic of your interest. We will also add you to our mailing list and let you know as soon as the site is fully functional and registration for special features is available.

Bob Aiello
@bob.aiello, @agilealmdevops, @cmbestpractices


Chapter 1 Introducing Application Lifecycle Management Methodology
Chapter 2 Defining the Software Development Process
Chapter 3 Agile Application Lifecycle Management
Chapter 4 Agile Process Maturity
Chapter 5 Rapid Iterative Development


Chapter 6 Build Engineering in the ALM
Chapter 7 Automating the Agile ALM
Chapter 8 Continuous Integration
Chapter 9 Continuous Delivery and Deployment


Chapter 10 Change Management
Chapter 11 IT Operations
Chapter 12 DevOps
Chapter 13 Retrospectives in the ALM


Chapter 14 Agile in a Non-Agile World
Chapter 15 IT Governance
Chapter 16 Audit and Regulatory Compliance
Chapter 17 Agile ALM in the Cloud
Chapter 18 Agile ALM on the Mainframe
Chapter 19 Integration across the Enterprise
Chapter 20 QA and Testing in the ALM
Chapter 21 Personality and Agile ALM
Chapter 22 The Future of ALM

To order your copy visit Inform IT and use the discount code of AGILE35!

Understanding Change Management


By Bob Aiello

Change management helps the organization plan, review, and communicate the many different system modifications that are needed to ensure that services are continuously available. Changes may be bugfixes or new features, and can range from a trivial configuration modification to a huge infrastructure migration. Change requests (CRs) are typically entered into a tool or system that helps automate the flow of work. CRs are reviewed as part of what is usually called the change control function. The goal of change control is to manage all changes to the production (and usually user acceptance and other “controlled” QA) environments. Part of this effort is focused on coordination, and in many organizations it is handled by the Project Management Office (PMO). But part of this effort is also managing changes to the environment that could potentially affect all of the essential systems. It is also important to control which releases are promoted to quality assurance (QA), user acceptance, and then, of course, production. Change control can act as the stimulus for all other configuration management related functions as well. Throughout this chapter we will discuss how to apply change management in application lifecycle management (ALM).
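A change request moving through the change control function is essentially a record walking through a small state machine. Here is a minimal sketch in Ruby; the field names and workflow states are illustrative assumptions, not the schema of any particular change management tool.

```ruby
# A minimal change request (CR) record with a simple linear workflow.
# States and fields are illustrative assumptions, not a specific tool's schema.
class ChangeRequest
  STATES = %w[submitted reviewed approved scheduled deployed closed].freeze

  attr_reader :id, :summary, :environment, :state

  def initialize(id:, summary:, environment:)
    @id = id
    @summary = summary
    @environment = environment # e.g. "QA", "UAT", "Production"
    @state = "submitted"
  end

  # Advance the CR one step; in practice each transition is gated
  # by a change control review.
  def advance!
    idx = STATES.index(@state)
    raise "CR #{@id} is already closed" if idx == STATES.length - 1
    @state = STATES[idx + 1]
  end
end

cr = ChangeRequest.new(id: 101, summary: "Rotate TLS certificates",
                       environment: "Production")
cr.advance! # submitted -> reviewed
cr.advance! # reviewed  -> approved
puts "CR #{cr.id} is now #{cr.state}"
```

A real workflow tool adds approvals, assignees, and audit history to each transition, but the core traceability benefit comes from exactly this kind of explicit, inspectable state.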

The goal of change management is to identify and manage risk, and this effort can help drive the entire build, package, and deployment process by identifying the potential downstream impact of making (or not making) a particular change. There are seven different types of change management, which we describe in more detail in our CM Best Practices book, including managing changes to the software development process itself. Above all else, change management has the essential goal of facilitating effective communication and traceability. Done well, change management can be an essential part of the software and systems delivery process, accelerating the rate of successful upgrades and bugfixes while avoiding costly mistakes. Conversely, the lack of change management can lead to serious outages, reduce productivity, and endanger the success and profitability of the company.

There is a dark side to managing change, which often manifests itself in the form of long meetings where changes are reviewed by the group one at a time. In future articles we will consider best practices for streamlining your change control approach.

Change management is important because it helps drive the entire configuration management (CM) process while identifying the technical risk inherent in making (or not making) a change. Risk is not always bad if it is identified and fully understood. An effective change management process helps identify, communicate, and then mitigate technical risks. Communication and traceability are key aspects of an effective change management process. In fact, change management can drive the entire ALM effort. When process changes occur, change management helps ensure that all stakeholders are aware of their new roles, responsibilities, and tasks. Organizations that lack effective change management are at increased risk for costly systems outages.

In some organizations, change management simply focuses on identifying potential scheduling conflicts that can adversely impact the application build, package, and deployment process. Managing the calendar is certainly important, but that is only the beginning. It is common for a particular deployment or configuration change to depend upon the same resource as another change. The effective change management process also identifies the affected assets and the experts who can best understand and communicate the potential downstream impact of a change. With complex technologies, many IT professionals are specialists in a particular area, resulting in organizational silos that may not fully understand how the changes they make could potentially impact other systems. The change management process should identify all stakeholders who should be involved with evaluating a proposed change and then communicate the potential risk of making (or not making) a particular change. If you get this right, your team will be able to accelerate the rate of change while also reducing the incidence of errors and other issues that can result in serious systems outages. Change management is all about driving changes through the process while maintaining quality and agility.
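Identifying affected assets and the stakeholders to notify can start as simply as a lookup table mapping each configuration item to its dependents and owners. The sketch below shows the idea in Ruby; every system and team name in it is a hypothetical example.

```ruby
# A minimal downstream-impact lookup: map each configuration item (CI)
# to the systems that depend on it and the stakeholders to notify.
# All names here are illustrative assumptions.
IMPACT_MAP = {
  "payments-db"  => { depends: ["billing-api", "reporting-etl"],
                      owners:  ["dba-team", "billing-dev"] },
  "auth-service" => { depends: ["web-portal", "mobile-gateway"],
                      owners:  ["security", "platform-dev"] }
}.freeze

def impact_of(ci)
  entry = IMPACT_MAP.fetch(ci) do
    # Unknown CIs are themselves a risk signal.
    return "No recorded dependencies for #{ci}; review manually."
  end
  "Changing #{ci} affects #{entry[:depends].join(', ')}; " \
    "notify #{entry[:owners].join(', ')}."
end

puts impact_of("payments-db")
```

In practice this data usually lives in a CMDB or service catalog rather than in code, but the change control review is answering exactly this question for each proposed change.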

So what is Status Accounting?


Configuration management experts have long defined CM in terms of four key functions: configuration identification, status accounting, change control, and configuration audit. If you have been reading my books or my Behaviorally Speaking column, then most likely you are very familiar with most (or all) of these practices. Configuration identification focuses on naming conventions and ensuring that you can select the correct configuration items (CIs). Change control is most commonly associated with ensuring that changes to production are reviewed and occur within the change window. The one technical term that always baffled me was status accounting, which is defined as “following evolution of a configuration item” through its lifecycle [1]. In practice, this is done using a robust software development lifecycle (SDLC), which most of us today call application lifecycle management (ALM) to indicate a more comprehensive approach than the SDLCs of long ago. I have always believed that this terminology came about in the days when we could track Cobol modules and copybooks on the back of a napkin as we had lunch with the customer who was asking for a new feature or report. Times have changed, but status accounting is nonetheless very important, and for many it is a regulatory (or contractual) requirement. The good news is that you can implement effective status accounting using the agile ALM!


The key to an effective software development methodology is to have just enough process so that you do not make a mistake, with the absolute minimum amount of red tape, or as the agile folks like to say, “ceremony”. But how much process is enough, and are there times when you should scale up or scale down? The agile ALM should align with the Agile Manifesto and agile principles, ensuring that your team can achieve an acceptable velocity without being unduly burdened with too much process [2]. Creating user stories, test plans, and scripts are all must-have practices that require flexible tools. Your agile ALM should have an effective change control process along with completely automated application build, package, and deployment. Most teams these days are thriving on continuous integration servers and working to implement effective continuous delivery practices.

When you define your processes, make sure that you identify the tasks that really need to be tracked. Providing transparency and traceability helps everyone on the team understand what needs to be completed on a daily basis, and especially helps them understand how their work affects others. It also plays a key role in communicating when deliverables will be met and in alerting senior management to any potential risks that need to be addressed. It is always best to automate your ALM using a workflow automation tool. Folks will likely bypass processes if there is no traceability or reporting on whether each step was completed. But be careful here, because many companies have such time-consuming and onerous processes that everyone works hard to find excuses to bypass the established IT controls, and soon these steps are rendered meaningless.

If you are in a bank or other financial services firm, then you are likely familiar with adhering to IT controls that are established to comply with federal regulatory requirements as well as commonly accepted IT audit practices. The ITIL v3 framework, ISACA COBIT, and industry standards from the IEEE and ISO are commonly used to determine exactly what controls are required for a particular industry. The only reasonable way to make this all happen is with an effective workflow automation tool that communicates next steps for each stakeholder while providing transparency into which tasks have been completed and which steps are required next. Software development is a complicated endeavor for most organizations, but the reward is highly effective systems that provide robust features and can scale to meet peak usage.


Successful organizations establish their processes initially and continuously evolve them as needed. It is common for processes to be very light, with few IT controls, in the beginning, and then become much more structured as the project advances and delivery dates approach. The agile ALM should provide guidance for all of the activities required for the success of the project, without being overly burdensome in terms of rigid rules and ticketing systems. Workflow automation is essential, but the selection of easy-to-use and flexible tools is a must.


The integrated service desk should streamline internal requests and also provide a front line to handle customer calls and requests. When your help desk runs well, problems are typically solved quickly, and customers may even agree to purchase additional products and services while they are on the line. Actual problems and incidents should be handled by your incident response team, which should provide a feedback loop to your QA and testing function.


Continuous testing also requires effective processes and the right tools to enable robust automated testing, including the use of virtualized test assets. Release and deployment automation are also key ingredients and, implemented correctly, improve reliability and security. Status accounting may be a rather dry and innocuous term, but the agile ALM itself is a very exciting capability that can empower your team and help ensure business success!



[1] Aiello, Bob and Leslie Sachs. April 2016 (In Press). Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional.

[2] Aiello, Bob and Leslie Sachs. 2010. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.

Behaviorally Speaking – Deciphering DevOps

Many organizations struggle with understanding and implementing DevOps. The first question that most managers ask is, “What is DevOps and how will it help me?” I have seen organizations refer to DevOps in several very different ways, and therein lies the confusion. DevOps is actually a set of loosely defined principles and practices that help development, QA, and operations organizations communicate and collaborate more effectively. Some organizations have developed a view that DevOps is focused on highly skilled developers who have access to build, package, and deploy applications directly to controlled environments, including QA, UAT, and production. Too often this approach does not include the separation of controls that is required in many industries for regulatory and audit purposes. I often refer to this as dysfunctional DevOps, where often well-meaning technology resources try to bypass IT controls put in place to prevent mistakes, in the name of getting things done faster. The truth is that giving developers root access does not generally scale well beyond three guys in a dorm room creating the next internet startup. Once they achieve their first success and hire another seven engineers, they often start stepping on each other. IT controls don’t slow you down; they help you avoid costly mistakes such as the many recent outages at trading firms and large banks.

The second view of DevOps is automating the provisioning of servers and infrastructure, which is obviously a valuable, indeed essential, endeavor. I have been scripting the provisioning of servers and their architecture for many years, long before it was popular to call this approach DevOps. My own focus on DevOps has largely been to help development and operations groups operate more effectively. DevOps is really about getting teams with very different perspectives to work together more effectively. Here’s how to approach this effort.

I generally start by conducting an assessment of the existing practices as compared to industry standards and frameworks. Generally, I ask each participant to tell me what works well and what might be improved. The view that I get from the development community is usually quite different from what I hear from the operations team. When I speak with the QA and testing resources, I get yet another completely different perspective. There are actually many different roles within the software and systems lifecycle, and DevOps is really all about getting your subject matter experts to share information more effectively. For DevOps, this usually comes together in creating the deployment pipeline.

Operations is focused on ensuring that applications are available 24x7 without the risk of a service interruption. The developers are charged with writing code to implement new and exciting features. Blending these two viewpoints is a matter of ensuring that we have the agility to deliver new features early and often without the risk of downtime due to a systems glitch, especially a mistake during an application deployment or systems upgrade. The key to successfully implementing better deployments is automation, using the same procedures throughout the entire software and systems lifecycle.

Automating the application deployment requires that even the most routine tasks be scripted and also that the code verifies itself. To do this successfully, you need to start doing the right things from the very beginning of the lifecycle. I have seen many organizations handle deployments in a very loose and unstructured way for development and integration testing. These same teams then try to impose formal procedures for the deployment to production (and sometimes UAT). Getting deployments right is hard work, and the only way that you will be successful is if you start doing deployments to development test environments using the same procedures that you intend to use when deploying to production. This approach allows the deployment team more time to understand the technical details and to automate the entire application deployment process. I always tell people that getting the operations folks involved early makes perfect sense, because somebody has to do the deployments to the development test, integration, and QA environments, and you get much more efficiency when those are the same folks who will be responsible for deploying to production. Whoever does the deployment to a particular environment, the main point is to automate each step and make sure that your code also verifies itself. My own code always contains just as many lines to test and verify that each step completed successfully as the actual steps of the deployment itself. Your goal here is to detect any problems as soon as possible and enable your deployment pipeline to fail as fast as possible when there is a problem. I usually write these scripts in Ruby, as most deployment frameworks allow you to plug in your Ruby scripts seamlessly.
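The self-verifying, fail-fast style described above can be sketched in a few lines of Ruby. This is a minimal illustration, not the author's actual scripts; the file names are placeholders standing in for a real release artifact.

```ruby
# A minimal self-verifying deployment step: every action is paired with a
# check, and the script fails fast (raises) on any problem.
# File names are illustrative placeholders, not a real framework's layout.
require "fileutils"
require "digest"

def deploy_artifact(source, target)
  FileUtils.cp(source, target)

  # Verify the step: the deployed file must exist and match the source checksum.
  raise "Deployment failed: #{target} missing" unless File.exist?(target)
  unless Digest::SHA256.file(source) == Digest::SHA256.file(target)
    raise "Deployment failed: checksum mismatch for #{target}"
  end

  puts "Deployed and verified #{target}"
end

# Example usage with a scratch file standing in for a real release artifact.
File.write("app.war", "release-1.4.2 contents")
deploy_artifact("app.war", "app-deployed.war")
```

Note the ratio: the verification lines roughly match the action lines, which is exactly the point. Running the same script against development test, QA, and production environments is what gives you confidence that the production deployment will behave as rehearsed.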

Effective DevOps requires configuration management best practices, including version control and automated application build, package, and deployment. It also requires the effective collaboration of all of your stakeholders, including development, operations, QA, testing, and data security. DevOps is really all about facilitating highly effective cross-functional teams. Obviously, enabling your team members to collaborate and communicate effectively will help your organization achieve success. Make sure that you drop me a line and tell me what you believe DevOps is all about and how you are going about implementing DevOps principles and practices!

What is DevOps?


DevOps consists of principles and practices that help improve communication and collaboration between teams that all too often have very different goals and objectives. While the principles are consistent across all projects, the practices may indeed vary from one situation to another. But what are those practices and how do we implement them? In this article we will introduce you to some of the key aspects of DevOps. One of the most effective DevOps practices has become known as left-shift, whereby we involve Ops early in the process, ideally having them get involved with the application build, package, and deployment from the very beginning of the application lifecycle. When I am serving in the role of the build engineer, I always ask to be given the job of deploying from the development test environments all the way through to production. By getting involved early, we share the journey and gain the knowledge that we need in order to be more effective. Many practitioners also embrace left-shift with regard to quality assurance and testing.

DevOps needs to embrace a continuous testing approach that begins with unit testing and evolves to help ensure the quality of the entire system. Systems thinking in DevOps involves taking a broad view of the entire application, from the operating system through every component of the application. Once again, this is an example of left-shift, where we get QA and testing involved early in the lifecycle, ideally working directly with the developers to create robust automated test tools. The information security team has very similar needs in order to protect systems from unauthorized access. Successful DevOps usually results in deployment processes that are so quick that they can be run every day, several times a day, and even on a continuous basis. Without continuous testing and left-shift, the team is unable to keep up with the rapid rate of change inherent in DevOps and continuous delivery. In my experience, left-shift requires that I convince the developers to include their colleagues from operations, quality assurance, and testing earlier in the process. But this isn’t the only situation that requires sharing information and feedback loops.
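The unit-testing end of continuous testing can be as lightweight as Ruby's bundled Minitest framework. The sketch below uses a hypothetical PriceCalculator class purely for illustration; in a continuous integration pipeline, tests like these run automatically on every commit.

```ruby
# A minimal unit test using Minitest, which ships with Ruby.
# PriceCalculator is a hypothetical example class, not from any real system.
require "minitest/autorun"

class PriceCalculator
  # Total a list of prices and apply a tax rate, rounded to cents.
  def total(prices, tax_rate)
    subtotal = prices.sum
    (subtotal * (1 + tax_rate)).round(2)
  end
end

class PriceCalculatorTest < Minitest::Test
  def test_total_applies_tax
    assert_equal 11.0, PriceCalculator.new.total([4.0, 6.0], 0.10)
  end

  def test_empty_cart_is_zero
    assert_equal 0.0, PriceCalculator.new.total([], 0.10)
  end
end
```

Unit tests like these are the innermost feedback loop; left-shift means the same discipline extends outward to integration, system, and security testing long before a release candidate reaches production.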

The technology professionals responsible for infrastructure and operations also need to remember to involve their colleagues from development, and I have seen this overlooked. The operations team often knows more about how the application behaves in the real world than the developers who wrote the code. Issues related to troubleshooting, scalability, and even disaster recovery are often understood best by the folks manning the console twenty-four hours a day, seven days a week. Behaviorally, this all comes down to sharing knowledge and expertise. Unfortunately, dysfunctional behavior can sometimes be driven by fear, especially the fear of losing credibility in front of our colleagues.

Many times, technology professionals do not want to involve their counterparts from other teams while they are trying to understand new technologies, because they do not want to be embarrassed if it turns out that they are mistaken. This means that sometimes we work alone, learning new technologies and holding off on sharing until we really understand all aspects of how the technology works. In my opinion, it is essential for everyone on the team to be willing to share the journey of learning new technologies in order for everyone involved to gain the knowledge and expertise essential for getting the job done.


Building scalable infrastructure involves a great deal of expertise, and, in my view, operations needs to share the journey with their development counterparts. Even backup and recovery procedures can involve a great deal of complexity. This includes the need for feedback loops to share information and to work through issues and goals together. Silos pose serious threats to the organization and need to be broken down in order to achieve productivity and quality.

Just to be completely clear: organizational structures are often necessary. If you are working in a large financial firm, such as a big bank or a trading firm, structures and separation of controls are often mandated by federal law. But having an org chart does not mean that teams cannot communicate effectively.

If you create an environment where everyone feels safe to share their knowledge and expertise, then you will have a much better chance of success. More importantly, if your team feels safe sharing their challenges and asking for help, then you are on your way to creating a high-performance team. DevOps has extremely effective principles and practices that help improve communication and collaboration. Make sure that you embrace left-shift by involving operations, quality assurance, testing, and information security early in the process. Make sure that other members of the team also shift their knowledge in the other direction, to help the developers understand how the systems behave in real-world scenarios, including scalability, infrastructure architecture, and disaster recovery.

Make sure that you drop me a line and let me know how your teams share knowledge and expertise, and how your DevOps transformation is going!



Understanding CM in an Agile World – Unit 1

Part 1 of 4 – Bob Aiello on Agile Configuration Management – The First Seven Things

Complete series:
Unit 1
Unit 2
Unit 3
Unit 4

Understanding CM in an Agile World – Unit 4

Part 4 of 4 – Bob Aiello on Agile Configuration Management – The First Seven Things

Complete series:
Unit 1
Unit 2
Unit 3
Unit 4