Survey – Current State of Data Quality


Message from Scott Ambler –

For my friends in the IT world:

As many of you know, for years now I have been running surveys to explore what is actually happening in the IT industry. I always share the results of the survey, including the questions as asked, the answers as provided, and my analysis online once the survey is over.

Recently I updated our State of Data Quality survey (it has been a few years since we last ran it), which explores whether organizations are adopting concrete approaches to find and fix potential quality problems in their databases. If you currently work at an organization that has databases running in production, I would appreciate it if you would take a few minutes to fill out the survey, which is a maximum of 14 questions. Click here to take the 2016 Data Quality Survey.

Digital Currency Firm Victim of Cybercrime!


As reported in the Wall Street Journal, digital currency firm DAO was the victim of a hack in which 3.6 million Ethereum coins, valued at $55 million, were removed from the DAO account. According to the article, the “attacker appeared to have exploited a loophole that essentially allowed a DAO stakeholder to create an identical fund and move money into it”. There are now published reports that the digital currency firm will cease to exist, which brings to mind the 2012 incident in which Knight Capital lost $440 million due to a failed system upgrade. Financial services firms must consider the cybersecurity and reliability of their systems or risk the consequences of catastrophic loss.


Agile ALM Webinar – Friday, June 24, 2016 12:00 PM ET


Agile ALM: Author Bob Aiello – Lunch and Learn Webinar – Friday, June 24, 2016 12:00 PM ET

Agile ALM offers “just enough process” to get the job done efficiently and utilizes the DevOps focus on communication and collaboration to enhance interactions among all participants.

Leading expert Bob Aiello will show how to fully leverage Agile benefits without sacrificing structure, traceability, or repeatability.

Please join us for a webinar with Bob, who recently wrote the book on Agile ALM!

To register, click here

Agile ALM DevOps Book


Welcome! We are building this website to encourage an honest and open discussion on how to implement a comprehensive agile ALM using DevOps best practices. Some of the things that we wrote about may very well create a little controversy, as we did not just go along with the usual narrative on agile development. Instead, we covered some tough topics related to how you use agile principles and practices in organizations whose strong cultures run counter to agile adoption. We would like to stimulate discussions on all of the topics in our new book on Agile Application Lifecycle Management. Below is an outline of the topics covered in Agile Application Lifecycle Management – Using DevOps to Drive Process Improvement (Addison-Wesley, 2016). You are welcome to ask us any questions or start a dialogue on a relevant topic; priority will be given to folks who suggest topics specific to a section in the book. Simply send me an email (bob.aiello@ieee.org) and we will create a discussion thread on the topic of your interest. We will also add you to our mailing list and let you know as soon as the site is fully functional and registration for special features is available.

Bob Aiello
http://www.linkedin.com/in/BobAiello
@bob.aiello, @agilealmdevops, @cmbestpractices
bob.aiello@ieee.org

PART I DEFINING THE PROCESS

Chapter 1 Introducing Application Lifecycle Management Methodology
Chapter 2 Defining the Software Development Process
Chapter 3 Agile Application Lifecycle Management
Chapter 4 Agile Process Maturity
Chapter 5 Rapid Iterative Development

PART II AUTOMATING THE PROCESS

Chapter 6 Build Engineering in the ALM
Chapter 7 Automating the Agile ALM
Chapter 8 Continuous Integration
Chapter 9 Continuous Delivery and Deployment

PART III ESTABLISHING CONTROLS

Chapter 10 Change Management
Chapter 11 IT Operations
Chapter 12 DevOps
Chapter 13 Retrospectives in the ALM

PART IV SCALING THE PROCESS

Chapter 14 Agile in a Non-Agile World
Chapter 15 IT Governance
Chapter 16 Audit and Regulatory Compliance
Chapter 17 Agile ALM in the Cloud
Chapter 18 Agile ALM on the Mainframe
Chapter 19 Integration across the Enterprise
Chapter 20 QA and Testing in the ALM
Chapter 21 Personality and Agile ALM
Chapter 22 The Future of ALM

To order your copy visit InformIT and use the discount code AGILE35!

Understanding Change Management


Understanding Change Management
By Bob Aiello

Change management helps the organization plan, review, and communicate the many different system modifications that are needed to ensure that services are continuously available. Changes may be bugfixes or new features and can range from a trivial configuration modification to a huge infrastructure migration. Change requests (CRs) are typically entered into a tool or system that helps to automate the flow of work. CRs are reviewed as part of what is usually called the change control function. The goal of change control is to manage all changes to the production (and usually user acceptance and other “controlled” QA) environments. Part of this effort is focused on coordination, which in many organizations is handled by the Project Management Office (PMO). Another part is managing changes to the environment that could potentially affect all of the essential systems. It is also important to control which releases are promoted to quality assurance (QA), user acceptance, and, of course, production. Change control can act as the stimulus to all other configuration management related functions as well. In this article we will discuss how to apply change management in application lifecycle management (ALM).

The goal of change management is to identify and manage risk, and this effort can help drive the entire build, package, and deployment process by identifying the potential downstream impact of making (or not making) a particular change. There are seven different types of change management, which we describe in more detail in our CM Best Practices book, including managing changes to the software development process itself. Above all else, change management has the essential goal of facilitating effective communication and traceability. Done well, change management can be an essential part of the software and systems delivery process, accelerating the rate of successful upgrades and bugfixes while avoiding costly mistakes. Conversely, the lack of change management can lead to serious outages, reduced productivity, and risks to the success and profitability of the company.

There is a dark side to managing change, which often manifests itself in the form of long meetings where changes are reviewed by the group in a sequential fashion. In future articles we will consider best practices for streamlining your change control approach.

Change management is important because it helps drive the entire configuration management (CM) process while identifying the technical risk inherent in making (or not making) a change. Risk is not always bad if it is identified and fully understood. An effective change management process helps identify, communicate, and then mitigate technical risks. Communication and traceability are key aspects of the effective change management process; in fact, change management can drive the entire ALM effort. When process changes occur, change management helps ensure that all stakeholders are aware of their new roles, responsibilities, and tasks. Organizations that lack effective change management are at increased risk for costly systems outages.

In some organizations, change management simply focuses on identifying potential scheduling conflicts that can adversely impact the application build, package, and deployment process. Managing the calendar is certainly important, but that is only the beginning. It is common for a particular deployment or configuration change to depend upon the same resource as another change. The effective change management process also identifies the affected assets and the experts who can best understand and communicate the potential downstream impact of a change. With complex technologies, many IT professionals are specialists in a particular area, resulting in organizational silos that may not fully understand how the changes they make could potentially impact other systems. The change management process should identify all stakeholders who should be involved with evaluating a proposed change and then communicate the potential risk of making (or not making) a particular change. If you get this right, your team will be able to accelerate the rate of change while also reducing the incidence of errors and other issues which can result in serious systems outages. Change management is all about driving changes through the process while maintaining quality and agility.

So what is Status Accounting?


Configuration management experts have long defined CM in terms of four key functions: configuration identification, status accounting, change control, and configuration audit. If you have been reading my books or my Behaviorally Speaking column, then most likely you are very familiar with most (or all) of these practices. Configuration identification focuses on naming conventions and ensuring that you can select the correct configuration items (CIs). Change control is most commonly associated with ensuring that changes to production are reviewed and occur within the change window. The one technical term that always baffled me was status accounting, which is defined as “following the evolution of a configuration item” through its lifecycle [1]. In practice, this is done using a robust software development lifecycle (SDLC), which most of us today call application lifecycle management (ALM) to indicate a more comprehensive approach than the SDLCs of long ago. I have always believed that this terminology came about in the days when we could track COBOL modules and copybooks on the back of a napkin as we had lunch with the customer who was asking for a new feature or report. Times have changed, but status accounting is nonetheless very important, and for many organizations a regulatory (or contractual) requirement. The good news is that you can implement effective status accounting using the agile ALM!


The key to an effective software development methodology is to have just enough process so that you do not make a mistake, with the absolute minimum amount of red tape or, as the agile folks like to say, “ceremony”. But how much process is enough, and are there times when you should scale up or scale down? The agile ALM should align with the Agile Manifesto and agile principles, ensuring that your team can achieve an acceptable velocity without being unduly burdened with too much process [2]. Creating user stories, test plans, and test scripts are all must-have practices that require flexible tools. Your agile ALM should have an effective change control process along with completely automated application build, package, and deployment. Most teams these days are thriving on continuous integration servers and working to implement effective continuous delivery practices.

When you define your processes, make sure that you identify the tasks that really need to be tracked. Providing transparency and traceability helps everyone on the team understand what needs to be completed on a daily basis and especially helps them to understand how their work affects others. It also plays a key role in communicating when deliverables will be met and in raising to senior management any potential risks that need to be addressed. It is always best to automate your ALM using a workflow automation tool; folks will likely bypass processes if there is no traceability or reporting on whether each step was completed. But be careful here, because many companies have such time-consuming and onerous processes that everyone works hard to find excuses to bypass the established IT controls, and soon these steps are rendered meaningless.

If you are in a bank or other financial services firm, then you are likely familiar with adhering to IT controls that are established to comply with federal regulatory requirements as well as commonly accepted IT audit practices. The ITIL v3 framework, ISACA COBIT, and industry standards from the IEEE and ISO are commonly used to determine exactly what controls are required for a particular industry. The only reasonable way to make this all happen is with an effective workflow automation tool that communicates next steps for each stakeholder while providing transparency into which tasks have been completed and which steps are required next. Software development is a complicated endeavor for most organizations, but the reward is highly effective systems which provide robust features and can scale to meet peak usage.


Successful organizations establish their processes initially and continuously evolve them as needed. It is common for processes to be very light, with few IT controls, in the beginning, but then become much more structured as the project advances and delivery dates approach. The agile ALM should provide guidance for all of the activities required for the success of the project, without being overly burdensome in terms of rigid rules and ticketing systems. Workflow automation is essential, but the selection of easy-to-use and flexible tools is a must.


The integrated service desk should streamline internal requests and also provide a front line to handle customer calls and requests. When your help desk runs well, problems are typically solved quickly, and satisfied customers may even agree to purchase additional products and services while they are on the line. Actual problems and incidents should be handled by your incident response team, which should provide a feedback loop to your QA and testing function.


Continuous testing also requires effective processes and the right tools to enable robust automated testing, including the use of virtualized test assets. Release and deployment automation are also key ingredients and, implemented correctly, improve reliability and security. Status accounting may be a rather dry and innocuous term, but the agile ALM itself is a very exciting capability that can empower your team and ensure business success!


References

[1] Aiello, Bob, and Leslie Sachs. 2016. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional.

[2] Aiello, Bob, and Leslie Sachs. 2010. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.

Behaviorally Speaking – Deciphering DevOps

Many organizations struggle with understanding and implementing DevOps. The first question that most managers ask is, “what is DevOps and how will it help me?” I have seen organizations refer to DevOps in several very different ways, and therein lies the confusion. DevOps is actually a set of loosely defined principles and practices that help development, QA, and operations organizations communicate and collaborate more effectively. Some organizations have developed a view that DevOps is focused on highly skilled developers who have access to build, package, and deploy applications directly to controlled environments including QA, UAT, and production. Too often this approach does not include the separation of controls that is required in many industries for regulatory and audit purposes. I often refer to this as dysfunctional DevOps, where often well-meaning technology resources try to bypass the IT controls put in place to prevent mistakes, in the name of getting things done faster. The truth is that giving developers root access does not generally scale well beyond three people in a dorm room creating the next internet startup. Once they achieve their first success and hire another seven engineers, they often start stepping on each other. IT controls don’t slow you down; they help you avoid costly mistakes such as the many recent outages at trading firms and large banks.

The second view of DevOps is automating the provisioning of servers and infrastructure, which is obviously a valuable, even essential, endeavor. I have been scripting the provisioning of servers and their architecture for many years, long before it was popular to call this approach DevOps. My own focus on DevOps has largely been to help development and operations groups operate more effectively. DevOps is really about getting teams with very different perspectives to work more effectively together. Here’s how to approach this effort.

I generally start by conducting an assessment of the existing practices as compared to industry standards and frameworks. Generally, I ask each participant to tell me what works well and what might be improved. The view that I get from the development community is usually quite different from what I hear from the operations team, and when I speak with the QA and testing resources, I get yet another perspective. There are many different resources within the software and systems lifecycle, and DevOps is really all about getting your subject matter experts to share information more effectively. For DevOps, this usually comes together in creating the deployment pipeline.

Operations is focused on ensuring that applications are available 24×7 without the risk of a service interruption. The developers are charged with writing code to implement new and exciting features. Blending these two viewpoints is a matter of ensuring that we have the agility to deliver new features early and often without the risk of downtime due to a systems glitch, especially a mistake during application deployment or a systems upgrade. The key to successfully implementing better deployments is automation, using the same procedures throughout the entire software and systems lifecycle.

Automating the application deployment requires that even the smallest tasks be scripted and that the code verifies itself. To do this successfully, you need to start doing the right things from the very beginning of the lifecycle. I have seen many organizations handle deployments in a very loose and unstructured way for development and integration testing, and then try to impose formal procedures for the deployment to production (and sometimes UAT). Getting deployments right is hard work, and the only way that you will be successful is if you start doing deployments to development test environments using the same procedures that you intend to use when deploying to production. This approach gives the deployment team more time to understand the technical details and to automate the entire application deployment process. I always tell people that getting the operations folks involved early makes perfect sense, because somebody has to do the deployments to the development test, integration, and QA environments, and you get much more efficiency when these are the same folks who will be responsible for deploying to production. Whoever does the deployment to a particular environment, the main point is to automate each step and make sure that your code also verifies itself. My own code always contains just as many lines to test and verify that each step completed successfully as the actual steps of the deployment itself. Your goal here is to detect any problems as soon as possible and enable your deployment pipeline to fail as fast as possible when there is a problem. I usually write these scripts in Ruby, as most deployment frameworks allow you to plug in your Ruby scripts seamlessly.
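As a concrete illustration of this self-verifying style, here is a minimal sketch of one deployment step: each action is paired with a check, and the script fails as fast as possible when a check does not pass. The artifact name and directories are hypothetical stand-ins, not from any real system.

```ruby
#!/usr/bin/env ruby
# Sketch of a self-verifying deployment step: every action is followed
# by a verification, and any failure aborts the run immediately.
require 'fileutils'
require 'digest'

RELEASE = 'myapp-1.4.2.tar.gz'   # hypothetical artifact name
STAGING = '/tmp/deploy_staging'  # hypothetical staging area
TARGET  = '/tmp/deploy_target'   # hypothetical target directory

def step(description)
  puts "STEP: #{description}"
  yield
rescue => e
  abort "FAILED: #{description} (#{e.message})"  # fail as fast as possible
end

step 'create target directory' do
  FileUtils.mkdir_p(TARGET)
  raise 'target missing' unless Dir.exist?(TARGET)
end

step 'copy release artifact' do
  FileUtils.mkdir_p(STAGING)
  src = File.join(STAGING, RELEASE)
  File.write(src, 'release-bytes') unless File.exist?(src)  # stand-in artifact
  FileUtils.cp(src, TARGET)
  # Verify the copy: the deployed bytes must match the staged bytes.
  staged   = Digest::SHA256.file(src)
  deployed = Digest::SHA256.file(File.join(TARGET, RELEASE))
  raise 'checksum mismatch' unless staged == deployed
end

puts 'deployment verified'
```

Note that roughly half the lines are verification rather than deployment, which is exactly the ratio described above.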

Effective DevOps requires that you have configuration management best practices in place, including version control and automated application build, package, and deployment. It also requires that you ensure the effective collaboration of all of your stakeholders, including development, operations, QA, testing, and data security. DevOps is really all about facilitating highly effective cross-functional teams. Obviously, enabling your team members to collaborate and communicate effectively will help your organization achieve success. Make sure that you drop me a line and tell me what you believe DevOps is all about and how you are going about implementing DevOps principles and practices!

What is DevOps?


DevOps consists of principles and practices that help improve communication and collaboration between teams that all too often have very different goals and objectives. While the principles are consistent across all projects, the practices may indeed vary from one situation to another. But what are those practices and how do we implement them? In this article we will introduce you to some of the key aspects of DevOps. One of the most effective DevOps practices has become known as left-shift, whereby we involve Ops early in the process, ideally having them get involved with the application build, package, and deployment from the very beginning of the application lifecycle. When I am serving in the role of the build engineer, I always ask to be given the job of deploying from the development test environments all the way through to production. By getting involved early, we share the journey and gain the knowledge that we need in order to be more effective. Many practitioners also embrace left-shift with regard to quality assurance and testing.

DevOps needs to embrace a continuous testing approach that begins with unit testing and evolves to help ensure the quality of the entire system. Systems thinking in DevOps involves taking a broad view of the entire application, from the operating system through every component of the application. Once again, this is an example of left-shift, where we get QA and testing involved early in the lifecycle, ideally working directly with the developers to create robust automated test tools. The information security team has very similar needs in order to protect systems from unauthorized access. Successful DevOps usually results in deployment processes that are so quick that they can be run every day, several times a day, and even on a continuous basis. Without continuous testing and left-shift, the team is unable to keep up with the rapid rate of change inherent in DevOps and continuous delivery. In my experience, left-shift requires that I convince the developers to include their colleagues from operations, quality assurance, and testing earlier in the process. But this isn’t the only situation which requires sharing information and feedback loops.
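The unit-testing starting point of that continuous testing ladder can be as small as the sketch below, written with Minitest, which ships with Ruby. The Cart class is a hypothetical example, not taken from any system discussed here; the point is simply that tests this small are fast enough to run on every commit.

```ruby
# A minimal unit test: the first and cheapest rung of continuous testing.
require 'minitest/autorun'

# Hypothetical class under test.
class Cart
  def initialize
    @items = []
  end

  def add(price)
    @items << price
  end

  def total
    @items.sum
  end
end

# Fast, isolated tests like this run on every commit via the CI server,
# long before slower integration and system-level tests are exercised.
class CartTest < Minitest::Test
  def test_total_sums_item_prices
    cart = Cart.new
    cart.add(5)
    cart.add(7)
    assert_equal 12, cart.total
  end
end
```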

The technology professionals responsible for infrastructure and operations also need to remember to involve their colleagues from development, and I have seen this overlooked. The operations team often knows more about how the application behaves in the real world than the developers who wrote the code. Issues related to troubleshooting, scalability, and even disaster recovery are often better understood by the folks manning the console twenty-four hours a day, seven days a week, than by the developers themselves. Behaviorally, this all comes down to sharing knowledge and expertise. Unfortunately, dysfunctional behavior can sometimes be driven by fear, especially the fear of losing credibility in front of our colleagues.

Many times, technology professionals do not want to involve their counterparts from other teams while they are trying to understand new technologies because they do not want to be embarrassed if it turns out that they are mistaken. This means that sometimes we work alone, learning new technologies and holding off on sharing until we really understand all aspects of how the technology works. In my opinion, it is essential for everyone on the team to be willing to share the journey of learning new technologies in order for everyone involved to gain the knowledge and expertise essential for getting the job done.


Building scalable infrastructure involves a great deal of expertise and, in my view, operations needs to share the journey with their development counterparts. Even backup and recovery procedures can involve a great deal of complexity. This requirement includes the need for feedback loops to share information and the journey to solve issues and achieve goals. Silos pose serious threats to the organization and need to be driven out in order to achieve productivity and quality.

Just to be completely clear: organizational structures are often necessary. If you are working in a large financial firm such as a big bank or trading firm, structures and separation of controls are often mandated by federal law. But having an org chart does not mean that teams cannot communicate effectively.

If you create an environment where everyone feels safe to share their knowledge and expertise, then you will have a much better chance of success. More importantly, if your team feels safe sharing their challenges and asking for help, then you are on your way to creating a high-performance team. DevOps has extremely effective principles and practices that help improve communication and collaboration. Make sure that you embrace left-shift by involving operations, quality assurance, testing, and information security early in the process. Also ensure that other members of the team share their knowledge in the other direction, to help the developers understand how the systems behave in real-world scenarios including scalability, infrastructure architecture, and disaster recovery.

Make sure that you drop me a line and let me know how your teams share knowledge and expertise in their DevOps transformation!


Understanding CM in an Agile World – Unit 1

Part 1 of 4 – Bob Aiello on Agile Configuration Management – The First Seven Things

Complete series:
Unit 1
Unit 2
Unit 3
Unit 4

Understanding CM in an Agile World – Unit 4

Part 4 of 4 – Bob Aiello on Agile Configuration Management – The First Seven Things

Complete series:
Unit 1
Unit 2
Unit 3
Unit 4


Understanding CM in an Agile World – Unit 3

Part 3 of 4 – Bob Aiello on Agile Configuration Management – The First Seven Things

Complete series:
Unit 1
Unit 2
Unit 3
Unit 4


Understanding CM in an Agile World – Unit 2

Part 2 of 4 – Bob Aiello on Agile Configuration Management – The First Seven Things

Complete series:
Unit 1
Unit 2
Unit 3
Unit 4


What Happened at Knight?


Wall Street got a dramatic reminder of the value of strong configuration management (CM) stewardship on August 1, 2012, when Knight Capital Group experienced an incident which resulted in the erroneous purchase of stocks worth over $7 billion. Knight had little choice but to sell as many of the stocks as possible, resulting in a $440 million loss, which I reported on in StickyMinds. Bloomberg published an article on August 14th claiming that the software glitch was due to “software that was inadvertently reactivated when a new program was installed, according to two people briefed on the matter.” The Bloomberg article went on to say, “Once triggered on Aug. 1, the dormant system started multiplying stock trades by one thousand, according to the sources, who requested anonymity because the firm hasn’t commented publicly on what caused the error. Knight’s staff looked through eight sets of software before determining what happened, the people said.” This incident highlights the importance of configuration management best practices. This article will describe some of the essential IT controls that could have potentially prevented this mistake from occurring (and I personally guarantee that they would cost a lot less than $440 million to implement). First, here’s a quick description of the regulatory environment within which Knight Capital and other financial services firms must operate on a daily basis.

Background:

Federal regulatory agencies, including the Federal Financial Institutions Examination Council (FFIEC) and the Office of the Comptroller of the Currency (OCC), monitor the IT controls established by banks and other financial institutions that are required to comply with section 404 of the Sarbanes-Oxley Act of 2002. The ISACA COBIT framework is the de facto standard which describes the essential controls that need to be established, including both change and configuration management.

Taking Control:

In a regulatory environment, all changes need to be controlled. This is usually accomplished by having the proposed changes reviewed by a Change Control Board (CCB). Each request for change (RFC) must be reviewed with an assessment of the potential downstream impact (e.g. risks) of the change. Once approved, releases can be deployed and then verified. In CM terminology, there is a requirement for a physical and functional configuration audit, which means that the deployed binaries (called configuration items) must be verified to be the correct version and also that they are functioning as desired. Obviously, software must be thoroughly tested before it is approved for promotion to production.
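The review-approve-deploy-verify flow just described can be modeled as a simple state machine. The sketch below is a hypothetical illustration, assuming just four states; real change-management tools track far more, including risk assessments, approvers, and change windows.

```ruby
# Minimal sketch of an RFC lifecycle as a state machine: a change can only
# be deployed after approval, and only verified after deployment.
class RequestForChange
  TRANSITIONS = {
    submitted: [:approved, :rejected],
    approved:  [:deployed],
    deployed:  [:verified]
  }.freeze

  attr_reader :state

  def initialize(summary)
    @summary = summary
    @state   = :submitted
  end

  # Move to the next state only if the transition table allows it;
  # an out-of-order request (e.g. deploying before approval) raises.
  def advance!(next_state)
    allowed = TRANSITIONS.fetch(@state, [])
    raise "cannot go from #{@state} to #{next_state}" unless allowed.include?(next_state)
    @state = next_state
  end
end

rfc = RequestForChange.new('Upgrade trading gateway')  # hypothetical change
rfc.advance!(:approved)
rfc.advance!(:deployed)
rfc.advance!(:verified)
puts rfc.state  # prints "verified"
```

The transition table is the whole point: it makes the CCB's rule ("no deployment without approval, no closure without verification") executable rather than merely documented.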

Automating the application build, package, and deployment is essential for success, and this is precisely what DevOps is all about. In classic CM terminology, status accounting is the function that tracks a configuration item (CI) throughout its lifecycle, and this would absolutely include retiring (uninstalling) any assets determined to be no longer needed.
Apparently, Knight Capital lacked the necessary procedures to accurately track changes and deploy their code. According to the Bloomberg report, Knight did not know exactly what code had been deployed to their production servers and, most importantly, how to retire assets that were no longer being utilized.
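A physical configuration audit of the kind that might have caught dormant code like Knight's can be sketched as a checksum comparison against an approved manifest. Everything below (directory, file names, contents) is a hypothetical illustration, assuming the release process publishes SHA-256 checksums for the approved configuration items.

```ruby
# Sketch of a physical configuration audit: compare what is actually
# deployed against the approved manifest of configuration items (CIs).
require 'digest'
require 'fileutils'

DEPLOY_DIR = '/tmp/prod_release'  # hypothetical production directory

# Stand-in deployed files for the sketch.
FileUtils.mkdir_p(DEPLOY_DIR)
File.write(File.join(DEPLOY_DIR, 'app.bin'), 'approved build')
File.write(File.join(DEPLOY_DIR, 'legacy.bin'), 'forgotten code')  # should have been retired

# Approved manifest: the only CIs that belong in production, with checksums.
manifest = { 'app.bin' => Digest::SHA256.hexdigest('approved build') }

deployed = Dir.children(DEPLOY_DIR)

unapproved = deployed - manifest.keys   # assets that should have been retired
missing    = manifest.keys - deployed   # approved CIs not found on the server
tampered   = (deployed & manifest.keys).reject do |f|
  Digest::SHA256.file(File.join(DEPLOY_DIR, f)).hexdigest == manifest[f]
end

puts "unapproved: #{unapproved}"  # flags the dormant legacy.bin
puts "missing:    #{missing}"
puts "tampered:   #{tampered}"
```

An audit like this, run after every deployment, answers exactly the question Knight's staff spent hours on: what code is actually running on this server, and should it be?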

Now before anyone starts to feel too smug, let’s consider the fact that most of the financial services firms on Wall Street lack the basic configuration management procedures to ensure that this same problem cannot occur on their servers. Financial services firms are not the only companies lacking CM best practices. I have also seen medical firms (including those responsible for surgical operating room equipment), government agencies, and many other companies with mission-critical systems that lack these basic competencies.

It’s time for IT controls to be implemented on all computer systems that matter. This just makes sense and obviously improves productivity and quality. Not to mention the savings benefit – proper CM controls can prevent many types of errors which, though easily overlooked, can cause millions of dollars of losses in just minutes!

Latest Technology News


We will be covering the top technology news stories including DevOps-related stories about failed deployments as well as new and exciting technologies to help ensure that systems are secure and reliable.

Please submit your technology news announcements to agilealmdevops@gmail.com

Personality Matters- Using Positive Psychology In DevOps


DevOps focuses on improving communication and collaboration between the software developers and the operations professionals who help to maintain reliable and dependable systems. In our consulting practice, we often assess and evaluate existing practices and then make recommendations for improvement. Our focus is often on configuration and release management and, lately, today’s popular new star, DevOps best practices, as well. Bringing different technology groups together can result in some interesting challenges. We often feel like we are doing group therapy for a very dysfunctional family, and many of the challenges encountered highlight the biases that people often bring into the workplace. This article will describe how to identify these behavioral issues and utilize positive psychology to help develop high-performance teams.

We all come to work with the sum of our past experiences and personal views, which means we are predisposed to specific viewpoints and perhaps more than a few biases. Many professionals come into meetings with their own agenda based upon their experiences, most business-related, some not. When conducting an assessment, we typically ask participants to explain what they believe works well in their organization and what could be improved. In practice, getting people comfortable yields better and more useful information. When we bring developers into a room to talk about their experiences, we get a very different view than when we speak with their counterparts in operations or other departments, including QA and testing. The stories we hear initially may sound like a bad marriage that cannot be saved. Fortunately, our experience is that there is also a great deal of synergy in bringing different viewpoints together. The key is to get the main issues on the table and facilitate open, effective communication.

Developers are often pressured to rapidly create new and exciting product features, using technology that is itself changing at a breathtaking rate. The QA and testing group is charged with ensuring that applications are defect-free, and its members often work under considerable pressure, including ever-shrinking timelines. The operations group must ensure that systems are consistently reliable and available. Each of these stakeholders has a very different set of goals and objectives. Developers want to roll out changes constantly, delivering new and exciting features, while operations and QA may find themselves struggling to keep up with the demand for new releases. The frustration we hear reflects the somewhat self-focused perceptions on each side of the table, as their differing perspectives create an impasse.

Developers are highly skilled and often far more technically knowledgeable than their counterparts in QA and operations, which makes for some challenging dynamics in terms of mutual respect and collaboration. Operations and QA professionals often view developers as immature children who lack discipline and constantly try to bypass established and necessary IT controls. This clash of views and values is a frequent source of conflict within the organization, with decisions made on positional power by senior executives who may not be fully aware of the details of each dispute. The fact is that this conflict can be very constructive and can lead to high performance if it is managed effectively.

Psychologists Martin Seligman and Mihaly Csikszentmihalyi developed an approach, known as positive psychology, that focuses on encouraging the positive, effective behaviors that bring out the best in each stakeholder during these challenging situations [1]. By focusing on developing desirable behaviors, positive psychology moves beyond merely identifying behavioral dysfunction to promoting effective, high-performance behaviors. The first area to focus on is honest and open communication. Seligman uses the term bravery to describe speaking up or taking the initiative: a person’s capacity to exhibit valor and courage. Integrity and honesty, along with perseverance and diligence, are also desirable traits that need to be modeled and encouraged in positive organizations. Successful organizations value these characteristics and encourage their active expression. Positive organizations encourage their employees to take initiative and ensure that employees feel safe, even when reporting a potential problem or issue. Dysfunctional organizations punish the whistleblower, while effective organizations recognize the importance of being able to evaluate the risks or problems brought to their attention and actively solicit such self-monitoring efforts.

We typically meet with each stakeholder group separately and document its views, including frustrations and challenges. We then put together a report that synthesizes all of our findings, including existing challenges and suggestions for improvement. Dysfunctional behavior must be identified and understood, but the next step is to bring all stakeholders to the table to look for solutions and suggest positive ideas for making improvements. Sometimes this feels a little like horse trading: one group may be convinced that only open source tools are appropriate for use, while another team is drawn to the features and support that come with commercial products. We often facilitate the evaluation and selection of the right tools and processes with appropriate transparency, collaboration, and communication.

Positive psychology focuses on promoting the kinds of behaviors you need for a high-performance team. Obviously, this has to start with understanding existing views and experiences. Bringing stakeholders to the table, and getting their management to support, reward, and model collaborative behavior, is the key to any high-performance team and successful organization!


References

[1] Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55, 5–14.

[2] Seligman, M. E. P. (2002). Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment. New York: Free Press.

[3] Abramson, L. Y., Seligman, M. E. P., & Teasdale, J. D. (1978). Learned helplessness in humans: Critique and reformulation. Journal of Abnormal Psychology, 87.

[4] Deming, W. E. (1986). Out of the Crisis. MIT Press.

[5] Aiello, B., & Sachs, L. (2010). Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.


Behaviorally Speaking – Managing Component Dependencies

0

Computer systems today have reached a truly amazing level of complexity. We expect websites and packaged software to offer an incredible number of features, and we expect systems to practically anticipate our every need and response. Creating feature-rich systems is not an easy job, and neither is writing the deployment infrastructure that empowers the organization to deliver new features continuously while maintaining a high level of reliability and quality. Software engineers and architects do an amazing job designing a system’s architecture to fully represent all of the parts created during the development lifecycle. One of the biggest challenges is fully understanding how each part of the system depends upon the others.

Software today is often designed and implemented as components that fit together and run seamlessly as a complete system. One of our biggest challenges is successfully updating one or more components without any risk of a downstream impact on the other parts of the system, and there is considerable complexity involved in creating a structure that makes this possible. How do we go about understanding and managing this complexity?

The first step is to create a logical model of the system to help all stakeholders understand how the different parts are assembled and work together. In my work, I often find many specialists but very few people who understand the entire system end-to-end. In deployment engineering, I often meet developers who balk at deploying the entire release just to fix a specific bug. I understand their concern, but truthfully, managing patches to a release can actually be a lot more complicated than deploying a full release. The next objection I often hear is that deploying a full release requires retesting the entire system.

The truth is that you have to retest the entire system even if you deploy just a patch, unless you fully understand how that patch impacts the other components of the system. The point here is that managing component dependencies is essential, and it is not a trivial task. I recommend that organizations make their software discoverable by embedding immutable version IDs and by adopting a formal way to represent component dependencies, such as descriptive XML files shipped with the code, that help explain how each part of the system depends upon the others. The only time you will be able to understand and document these dependencies is during the software and system development lifecycle.
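
As a sketch, such a shipped manifest might look like the following. The component names, attributes, and element names here are purely illustrative, not any standard schema; your team would define its own conventions:

```xml
<!-- components.xml: shipped alongside the release baseline -->
<system name="trading-platform" version="4.2.1">
  <!-- each component carries the same immutable version ID embedded in its binary -->
  <component name="pricing-engine" version="2.0.3">
    <dependsOn>market-data</dependsOn>
  </component>
  <component name="market-data" version="1.7.0"/>
  <component name="order-router" version="3.1.4">
    <dependsOn>pricing-engine</dependsOn>
    <dependsOn>market-data</dependsOn>
  </component>
</system>
```

With a file like this in the baseline, both people and tooling can answer the question "if I patch this component, what else is affected?" without relying on tribal knowledge.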

Many systems are developed by teams of highly qualified consultants working under extreme pressure to deliver feature-rich software in a very short time. Once these technology experts are done, they move on to the next project, and you may find that no one from the original development team remains who really understands all of the internal component dependencies. While the software is being written, you have a unique opportunity to document dependencies and design a strategy for managing patches or full baselined releases in an automated way. This is exactly the challenge that quality engineers face when they develop robust automated tests, including service virtualization, which is becoming a popular practice within continuous testing. Systems have to be designed so that we can manage their complexity.


Complexity is not bad, but we need strategies to understand and manage the complexity inherent in writing large software systems. The first step is to design systems to be fully verifiable using automated test harnesses. We can use this same approach to understand and document component dependencies, and then develop strategies for reliably updating software in patches (or verifiable full baselined releases) while ensuring that we fully understand component dependencies. Designing a logical model is an important part of this effort, but having a concrete mechanism such as a descriptive XML file is a must-have for documenting and managing component dependencies. There is no magic here, and very few technologies let you reverse engineer component dependencies; you need to design your systems to have these capabilities.
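
To make this concrete, here is a minimal Python sketch of the kind of tooling such a manifest enables. The component names and XML schema are hypothetical, matching no particular product; the point is that once dependencies are machine-readable, computing the retest scope for a patch is straightforward:

```python
import xml.etree.ElementTree as ET

# Illustrative manifest; component names and schema are hypothetical.
MANIFEST = """
<system name="trading-platform" version="4.2.1">
  <component name="pricing-engine" version="2.0.3">
    <dependsOn>market-data</dependsOn>
  </component>
  <component name="market-data" version="1.7.0"/>
  <component name="order-router" version="3.1.4">
    <dependsOn>pricing-engine</dependsOn>
    <dependsOn>market-data</dependsOn>
  </component>
</system>
"""

def impacted_by(patched, manifest_xml):
    """Return every component that directly or transitively depends
    on the patched component, i.e. the retest scope for a patch."""
    root = ET.fromstring(manifest_xml)
    # Build a reverse map: component -> set of components that depend on it.
    dependents = {}
    for comp in root.findall("component"):
        for dep in comp.findall("dependsOn"):
            dependents.setdefault(dep.text, set()).add(comp.get("name"))
    # Walk the reverse edges transitively from the patched component.
    impacted, stack = set(), [patched]
    while stack:
        for d in dependents.get(stack.pop(), ()):
            if d not in impacted:
                impacted.add(d)
                stack.append(d)
    return impacted

print(sorted(impacted_by("market-data", MANIFEST)))
# ['order-router', 'pricing-engine']
```

Patching market-data flags both pricing-engine and order-router for retest, while patching order-router flags nothing else; that is exactly the dependency knowledge that evaporates when the original development team moves on.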


The good news is that if you do this right, you will find your systems easier to test and update. More importantly, you will be able to continuously deliver new features to your customers while maintaining a high level of system reliability. How do you manage component dependencies? Drop me a line and share your best practices!

Subscribe

0

Subscribe to receive the Agile ALM DevOps Journal!

[email-subscribers namefield="YES" desc="" group="Public"]


Contact me directly if you have any questions,  Bob Aiello


Contact Us

0

Please join us on social media

Contact me by email at bob.aiello@ieee.org

Bob Aiello
http://www.linkedin.com/in/BobAiello
https://www.facebook.com/BobAiello.CMBestPractices (Warning – I express my opinions freely on Facebook 🙂

@bobaiello, @agilealmdevops, @cmbestpractices
bob.aiello@ieee.org
 

Privacy

0

We do not share your information with other organizations. We ask for your name and email address so that we can mail you our newsletters and journals. We use other information that you provide to us to ensure that the articles we write are relevant and useful to our readers.

Please contact me personally with any concerns or requests.

Bob Aiello
bob.aiello@ieee.org
http://www.linkedin.com/in/BobAiello

Disclaimer

0

We write about industry best practices based upon our experience and the reported experience of our colleagues. Your results may be different. Obviously, we cannot guarantee that best practices will yield the same results in your environment and we cannot take responsibility for how you implement these practices in your organization. That said, contact us and we will do what we can to help!

Bob Aiello
bob.aiello@ieee.org
http://www.linkedin.com/in/BobAiello