Personality Matters- The Danger of Agreeable People

By Leslie Sachs

Technology professionals often need to get along with some interesting personalities. While sales and management professionals pride themselves on their people skills, some technology geeks find it challenging to deal with others, especially those with noticeable quirks. I work closely with a number of interesting personalities in our consulting practice, including our lead principal consultant, a well-known expert in configuration management. Truth be told, stubborn opinions can be tough to accept even when the person is an industry expert. Sometimes I wish that the person I am dealing with could be just a little bit more amicable. However, it is equally true that too much of a good thing comes with its own set of challenges. In fact, dealing with overly agreeable people can be fraught with obstacles, although quite different from those usually associated with the stereotypical stubborn geek who seems unable to bend or compromise. This article will help you understand and deal with the unexpectedly challenging aspects that you may experience when interacting with some agreeable people.

Psychologists have spent decades researching personality [1] and have formed many different models to explain the behaviors that we observe in the people we interact with on a daily basis. One of the most well-respected models, known by the acronym OCEAN, is the five-factor model, which is grounded in many empirical studies [2]. The five factors (openness, conscientiousness, extraversion, agreeableness, and neuroticism) describe essential dimensions of personality. We all have some degree of each of these core qualities in our own personality, although the specific balance of each factor can vary considerably from one individual to another. Successful sales professionals are typically high in extraversion, while police and law enforcement personnel are often strong in conscientiousness. Folks whose personalities are more geared towards being agreeable typically exhibit highly social behaviors and enjoy the company of others, often demonstrating more kindness, empathy and affection than the average person. Sometimes these people come across as “pack animals” who seem to be most comfortable operating within the context of a group. By definition, folks who are agreeable prefer to get along, and their tendency to reflexively concur with others is exactly why problems may arise.

The dysfunctional side of agreeableness manifests itself in extreme conflict avoidance. These are the people who just cannot take a stand and always seem to forgo their own will for the sake of “getting along” with others. In our book on configuration management best practices, we discussed the middle child who is often the peacemaker in the family [3]. This behavior is often beneficial and can come in handy in helping others understand one another’s differing views. Unfortunately, though, agreeableness becomes dysfunctional when the person simply cannot take a stand or will not engage in uncomfortable conversations, including what most of us would regard as necessary negotiation. The overly agreeable person will often avoid letting you know where they really stand on the issues and will typically do almost anything to avoid conflict. Agreeableness is pleasant in appropriate amounts, but someone who habitually goes along with the crowd may reach a breaking point where they simply cannot take the disparity between what they say and what they feel; at such stressful times, their response may be extreme because the tension has been building for so long.

Technology professionals often need to analyze tough problems and collaborate to reach consensus on how best to address and resolve them. It’s no surprise that smart people don’t always agree on how to fix complex technology problems. When dealing with a systems outage, the situation may be very tense, and effective technology professionals must be able to express their views and listen to the opinions of others, too. In a crisis, tact and diplomacy may be in short supply, and people with thin skin may find it hard to cope with the stress of feeling that they are under attack. Some people back off and actually become passive-aggressive, allowing those with either positional power or perhaps just a loud voice to make the decisions – which may or may not be the optimal choice. Effective leaders create environments where everyone can feel safe expressing their professional views and experience-based opinions.

Dealing with a smart analytical person who tries to steer clear of conflict may require some very strong people skills and this is exactly where you can emerge as a leader within your group. Creating an environment where everyone’s opinion is expressed and the team collaborates to reach consensus is by far the best problem-solving strategy. The most effective teams frequently consist of diverse personality types and have a common value of respect and consideration for each person. Although most people enjoy the validation of hearing opinions in alignment with their own views, confident members of successful technology teams work together to encourage the selection of correct decisions based upon facts, whether they come from the most popular member of the team or the quiet non-confrontational guy in the corner who just happens to really know how to configure the software to get your system up and running!

References:
[1] Byrne, Donn. 1974. An Introduction to Personality: Research, Theory, and Applications. Prentice-Hall.
[2] Wiggins, Jerry S., ed. 1996. The Five-Factor Model of Personality: Theoretical Perspectives. The Guilford Press.
[3] Aiello, Bob and Leslie Sachs. 2010. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.
[4] Robbins, Harvey and Michael Finley. 1995. Why Teams Don’t Work: What Went Wrong and How to Make It Right. Peterson’s Pacesetter Books.
[5] Friedman, Meyer. 1996. Type A Behavior: Its Diagnosis and Treatment. Springer.

Southwest Glitch Strands Thousands


On Wednesday, July 20th, Southwest Airlines suffered a massive systems outage which resulted in the airline being essentially grounded, with over a thousand flights cancelled [1]. The incident forced the airline to temporarily resort to manual procedures, which could not handle the normal volume, causing widespread delays and cancellations. The airline had previously suffered a significant outage in 2015, when over 500 flights were cancelled due to another systems glitch. Southwest CEO Gary Kelly was quoted as saying that the cause of the latest outage was a backup system that failed to take over when a router failed. Ironically, this more recent incident occurred shortly after the airline announced record profits, raising the question of whether enough is being done to upgrade the airline’s critical support systems. Kelly was further quoted as saying, “Southwest has an aging technology infrastructure”, but “the airline has been making significant investments to upgrade it”. Southwest is reported to claim that “it expects to replace the longstanding reservations system next year and replace other key systems over the next three to five years”. Let’s hope that the airline will embrace industry best practices, including DevOps, for software and systems upgrades. Otherwise, future reports will likely describe outages caused by the attempted improvements themselves.

Editor footnote added July 26th, 2016
[1] According to an article published July 25th, 2,300 flights were cancelled and thousands more delayed. The cost of the outage may be as high as $10 million.

Behaviorally Speaking – Using ALM to Drive DevOps

by Bob Aiello

DevOps focuses on helping the development and operations teams work together more effectively by enabling better communication and collaboration. One way this is accomplished is by moving the application build, package and deployment functions upstream, allowing operations to participate in automating the entire deployment pipeline earlier in the lifecycle and to gain the requisite knowledge to support the application. DevOps clearly works well, and both development and operations benefit from DevOps best practices. But there are other essential stakeholders who can also benefit from improved communication and collaboration, including information security, QA and testing. The fact is that every stakeholder, from the business analyst to the DBA, needs to understand their particular role and assigned tasks as well as the work being accomplished by other members of the team. Application Lifecycle Management (ALM) provides the structure and framework that ensures that every member of the team knows what they need to accomplish and how to communicate effectively with the other members of the team. This article explains how the ALM can drive DevOps!

DevOps is a set of principles and practices that focus on helping development and operations communicate and collaborate more effectively. Developers are typically focused on creating software with new features and responding to customer requests as quickly as possible. Operations has a very different focus and is primarily concerned with maintaining a reliable service environment. Much has been written about the natural conflict that exists between development and operations. But the truth is that many other groups also have a narrow focus that fails to consider the requirements of the other stakeholders. Organizational silos can have a detrimental impact upon any development effort, and DevOps provides the best practices necessary to foster effective communication and collaboration. The ideal scenario is to develop teams that are cross-functional and self-organizing. This approach provides a lesson as to exactly why DevOps is effective.

High performance teams are typically self-organizing and often cross-functional [1]. While having a team of technology experts who can perform any role is ideal, it is often not very practical. You can still realize the benefits of DevOps by embracing application lifecycle management (ALM) which helps by defining the workflow for the entire software and systems lifecycle. ALM has its roots in the software development lifecycle (SDLC) which typically defined stages including requirements, design, development, and testing. The ALM actually takes a much wider view.

The ALM defines the tasks, roles and responsibilities that are required to support the entire software and systems lifecycle. This includes requirements definition, design, development and testing, but also related functions such as ongoing systems support, including the service desk. Application Lifecycle Management takes an integrated and comprehensive approach and relies upon a robust workflow automation tool for its organization and success. The ALM provides transparency, traceability and, most importantly, knowledge management and communication, which is the key lesson that we learn from effective DevOps. While cross-functional teams are often ideal and very effective, sometimes organizations must maintain a separation of controls.

In highly regulated industries such as banking and finance, federal regulatory requirements mandate a separation of roles. For example, the technology professionals who write the code are not permitted to be the same resources who build, package and deploy the code. These restrictions, known as IT controls, do not prevent organizations from embracing DevOps principles and practices. Whether you are agile or have reason to use a waterfall methodology, the core lesson from DevOps is really about communication and collaboration. Developers have specialized knowledge that must be shared with the operations team in order to successfully maintain and support the runtime systems. The fact is that each team has specialized knowledge that can benefit each of the other stakeholders.
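Separation-of-roles controls like these can be enforced directly in an ALM workflow tool. As a minimal sketch (the stage names and role names below are illustrative assumptions, not any specific tool's API), a workflow can encode which role is allowed to perform each transition:

```python
# A simplified ALM workflow: each transition is owned by exactly one
# role, so the developer who wrote the code cannot also deploy it.
TRANSITIONS = {
    ("requirements", "design"): "business_analyst",
    ("design", "development"): "developer",
    ("development", "testing"): "developer",
    ("testing", "approved"): "qa",
    ("approved", "deployed"): "release_engineer",  # IT control: not a developer
}

def advance(state: str, target: str, role: str) -> str:
    """Advance a work item only if the acting role owns the transition."""
    owner = TRANSITIONS.get((state, target))
    if owner is None:
        raise ValueError(f"no transition from {state} to {target}")
    if role != owner:
        raise PermissionError(f"{role} may not perform {state} -> {target}")
    return target
```

With a table like this, the deployment step is blocked for developers while every stakeholder can still see the full workflow, which is exactly the transparency the ALM is meant to provide.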

DevOps includes information security (InfoSec), QA and testing as key stakeholders. InfoSec can learn a great deal from developers, and both development and information security can learn from the QA and testing professionals. The ALM helps to define what each stakeholder brings to the table and facilitates the sharing of knowledge and information, which is essential for success. These practices improve both productivity and quality. In my own experience, the most effective way to embrace DevOps with the wider ALM perspective is in small steps. Continual service improvement (CSI) is one effective methodology that can help you start at the beginning and work your way through each of the roles in the ALM. DevOps focuses on effective communication, collaboration and knowledge sharing. Using the ALM, you can understand all of the different roles and tasks required in the software and systems lifecycle. Using the ALM to drive your DevOps efforts can help your team embrace DevOps principles and practices for success!

[1] Mankin, Don, Susan G. Cohen and Tora K. Bikson. 1996. Teams and Technology: Fulfilling the Promise of the New Organization. Harvard Business School Press.
[2] Robbins, Harvey and Michael Finley. 1995. Why Teams Don’t Work: What Went Wrong and How to Make It Right. Peterson’s Pacesetter Books.
[3] Aiello, Bob and Leslie Sachs. 2011. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.
[4] Deming, W. Edwards. 1986. Out of the Crisis. MIT Press.
[5] Aiello, Bob and Leslie Sachs. 2016. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional.

Subscribe to the Agile ALM DevOps Journal!!


Hi,

I am excited to invite you to subscribe to our new online publication, which provides guidance on Configuration Management Best Practices, Agile Application Lifecycle Management (including videos) and, of course, DevOps. Our publication explains the hands-on best practices required to implement just enough process to ensure that you can build software and systems that are reliable and secure. Based upon our new book, Agile Application Lifecycle Management – Using DevOps to Drive Process Improvement, the Agile ALM DevOps Journal seeks to promote a dialogue that is really needed in the industry today. We will discuss practical approaches to implementing the Agile ALM using DevOps best practices, including continuous integration, delivery and deployment. We will also discuss process improvement strategies that work in large organizations that must comply with federal regulatory guidelines, along with smaller teams that may very well grow as they become successful. We are taking this journey together, and our goal is to ensure that you have a voice and can share your experiences while learning from your colleagues too.

Enjoy Leslie Sachs’s amazing column: Personality Matters and Bob Aiello’s column: Behaviorally Speaking.

We will also report on major incidents where organizations clearly need to improve their ability to develop and deliver software effectively, including the recent Southwest systems glitches which resulted in thousands of flights being cancelled and thousands more being delayed. We will also bring you exciting technical product announcements, such as JFrog’s new Xray, which scans your runtime objects, including Docker images, for vulnerabilities. This is an exciting time to be in the technology field. Join the revolution!

You can submit your articles for publication to share your own knowledge and experience!

Bob Aiello
http://www.linkedin.com/in/BobAiello
@bobaiello @agilealmdevops @cmbestpractices
bob.aiello@ieee.org

P.S. You received this announcement because you registered on the CM Best Practices website. If for any reason you do not wish to be on our mailing list, just drop me a note and I promise that I will remove your contact information immediately.

Personality Matters- Introducing Positive Psychology

By Leslie Sachs

Many of my articles focus on identifying and dealing with dysfunctional behaviors in the workplace, such as paranoia or the learned helplessness that we have seen in many IT operations shops when systems went down even after the technical personnel warned that there was a problem. Psychology has long focused on pathologies in a valiant effort to identify and cure mental illness. However, one limitation with this approach is that focusing on the negative issues can sometimes become a self-fulfilling prophecy. First year medical students are notorious for thinking that they have almost every illness that they learn about in their classes. If you want an effective and healthy organization, then it seems obvious that it is essential to focus on promoting healthy organizational behavior. Psychologists Martin Seligman and Mihaly Csikszentmihalyi have pioneered a new focus on a positive view of psychology and this article will help you to understand and begin to apply these exciting and very effective techniques.

Technology leaders from CTOs to Scrum Masters work every day to foster the optimal behaviors that lead to improved productivity and quality. That said, we all know that dealing with difficult people and dysfunctional behaviors can be very challenging and sometimes disheartening. Seligman and Csikszentmihalyi wrote that “psychology has become a science largely about healing. Therefore, its concentration on healing largely neglects the fulfilled individual and thriving community.” [1] Instead of concentrating so much energy on remediation, it would be better to empower technology leaders to focus on and encourage positive and effective behaviors in the workplace. Seligman and Csikszentmihalyi note that “the aim of positive psychology is to begin to catalyze a change in the focus of psychology from preoccupation only with repairing the worst things in life to also building positive qualities” [2]. Seligman delineates twenty-four strengths, ranging from curiosity and interest in the world to zest, passion and enthusiasm, which he suggests are the fundamental traits of a positive and effective individual [3]. Notably, playfulness and humor, along with valor, bravery and a sense of justice, are also listed among these traits. So, how do we apply this knowledge to the workplace, and how can we use this information to be more effective managers? The fact is that we all know people whom we admire, and we have all had more than a few employers who seemed less than completely effective.

Effective leaders do indeed exhibit valor, bravery and a sense of justice in identifying barriers to organizational success. The best leaders are not afraid to deliver a tough message and also use their positional power to help teams achieve success. Technology leaders are often particularly motivated by curiosity and exhibit interest in the world, as well as a love of learning. Most also display considerable enthusiasm and passion for their work. Other traits frequently observed in strong leaders include kindness and generosity, along with integrity and honesty. Successful leaders usually also exhibit perseverance and diligence. It hardly comes as a surprise that so many of these strengths are specified as beneficial traits. In fact, many of these aspects had been discussed earlier by Abraham Maslow and Carl Rogers in their work on Humanistic Psychology, a discipline which focuses on helping people to achieve success and realize their full potential. Positive psychology is providing a useful framework for understanding the traits which lead to success both at an organizational level, and also for each of us individually. Much of what positive psychology advocates aligns well with agile methodologies and the agile mindset which many organizations are finding to be so effective, especially in creating an environment where each stakeholder feels empowered to do the right thing and to speak up when there are problems or barriers to success.

Quality management guru W. Edwards Deming noted long ago the importance of healthy behaviors, such as driving out fear, in ensuring that your employees are willing to speak up and warn of potential issues [4]. Clearly, positive behaviors lead to highly effective teams and successful organizations.

Positive psychology cannot solve every problem and there is no doubt that many organizations have cultures and environments that just do not foster success. However, if you are a technical leader (or wish to emerge as a technical leader), then understanding the significance and impact potential of encouraging positive traits is essential for your success. In future articles, we will discuss strategies for employing these techniques in the workplace. Helping your organization to embrace and cultivate positive and effective behaviors will increase the productivity and success of every endeavor!

References:
[1] Seligman, M. E. P. and M. Csikszentmihalyi. 2000. “Positive Psychology: An Introduction.” American Psychologist, 55, 5–14.
[2] Seligman, Martin. 2002. Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment. Free Press.
[3] Abramson, L. Y., M. E. P. Seligman and J. D. Teasdale. 1978. “Learned Helplessness in Humans: Critique and Reformulation.” Journal of Abnormal Psychology, 87.
[4] Deming, W. Edwards. 1986. Out of the Crisis. MIT Press.

Aiello, Bob and Leslie Sachs. 2011. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.

Aiello, Bob and Leslie Sachs. 2016. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional.

System Outages Cast a Cloud Over Amazon


By Bob Aiello

Banking and financial services firms are among those companies required to provide secure and reliable platforms, both to comply with regulatory requirements and to meet their own business objectives. It has become common for many of these firms to embrace cloud-based service providers, which offer highly scalable and elastic services, often at a significant cost savings. Such choices are not without their own dangers, though, as banks and other firms recently discovered in Sydney, Australia. According to published reports, a number of firms, including banking services and retailers, were impacted by an Amazon Web Services outage. It was reported that “Amazon Web Services, a web hosting platform popular with several banks and retailers, has been blamed for many of the ATM and Eftpos problems which have affected a number of banks and their customers, including ME Bank and Commonwealth Bank”. Millions of Australian banking customers were unable to access ATM and Eftpos services … after severe storms on the east coast caused widespread damage, including a major outage at a cloud computing service used by several banks. While the banks blamed the weather and their cloud service provider, Amazon itself focused on its platform’s ability to provide consistent and reliable service. As reported, Amazon blamed the clouds (this time, the real clouds). Amazon provided additional details, saying “that bad weather meant that… our utility provider suffered a loss of power at a regional substation as a result of severe weather in the area. This failure resulted in a total loss of utility power to multiple AWS facilities.” Amazon was quoted as explaining that its backups employ a “diesel rotary uninterruptable power supply (DRUPS), which integrates a diesel generator and a mechanical UPS” [1]. Amazon’s supposedly uninterruptable power supply was consequently interrupted.

Amazon was not the only company which suffered outages due, in part, to inclement weather, and no one can say that the impacted banks would not have suffered outages if they had not been using Amazon cloud services. But this much is certain: banks and other financial services firms have regulatory requirements to provide secure and reliable platforms. When a storm, hurricane or other natural disaster hits, consumers want to be able to get access to their money. Most regulatory experts would agree that banks cannot just blame their cloud service providers when their systems are not available. In the banking industry, this is pretty close to a “dog ate my homework” excuse. The cloud has many advantages, but firms may want to maintain an “on-prem” [2] disaster recovery capability – just in case.


[1] Additional technical detail included “a set of breakers responsible for isolating the DRUPS from utility power failed to open quickly enough.” That was bad because these breakers should “assure that the DRUPS reserve power is used to support the datacenter load during the transition to generator power.”

[2] On-prem means “on premises,” i.e., hosted in-house rather than with an outsourced cloud-based provider.

Behaviorally Speaking – DevOps Drives Cybersecurity

by Bob Aiello

DevOps focuses on improving communication and collaboration between the development and operations teams. This is no easy task, as they each have very different priorities, and the truth is that DevOps impacts other groups as well, including QA, testing and, most importantly, information security (InfoSec). At a time when cyberattacks can originate anywhere from a lone high school student to organized crime, cybersecurity is becoming a major concern, and this is where you need to focus on ensuring that your InfoSec function has all the necessary support from the other members of the team. DevOps is, above all else, a set of principles and practices that focus on improving communication between all stakeholders. Information security is a key stakeholder within any organization and depends upon effective DevOps for its success. This article will help you use DevOps to drive your InfoSec initiatives.

Information security is responsible for establishing policies that help ensure a secure environment. Understanding your threat level is an essential first step. You may be dealing with hackers from faraway lands or a lone antagonist trying to show that he knows how to break into systems and post his message on your website. Other, more serious forms of cyber attack can involve state-sponsored cyberterrorism designed to sabotage critical infrastructure, from electrical grids to nuclear facilities. InfoSec encompasses a set of practices that help maintain a secure environment. The completely secure environment is known as the trusted base, and it includes the underlying hardware platform, the operating system and the applications which provide your end-user functionality. The National Institute of Standards and Technology (NIST) publishes a series of standards, including the Guide for Security-Focused Configuration Management of Information Systems (NIST 800-128). Several other security-related standards also impact configuration and release management. But the interesting fact is that information security and the NIST-related standards actually depend upon Configuration Management (CM).

Configuration Management best practices are described in industry standards including IEEE 828, EIA 649 and ISO 10007, along with frameworks such as CMMI, COBIT and ITIL. The security standards include the NIST and ISO 27000 families of standards, and they reference and depend upon the aforementioned configuration management standards and frameworks. InfoSec could not possibly be effective without CM, and we increasingly see that DevOps facilitates InfoSec too. To understand the real-life application of these standards and frameworks, it is best to start by examining how application code is built, packaged and deployed.

DevOps helps to ensure that we know exactly what configuration items (CIs) need to be built and that we use the correct source code baselines to build them. These best practices, based upon industry standards and frameworks, enable you to build fully verifiable CIs and to embed immutable version IDs as part of the automated build procedure. Immutable version IDs are essential for conducting a physical configuration audit, which is an essential function for complying with audit and regulatory requirements. InfoSec also relies upon the configuration audit to verify that the correct configuration items were built and deployed as planned. Release packaging is also a key aspect of this process.
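One way to make a version ID both immutable and auditable is to derive it from the source baseline itself. The sketch below is a hedged illustration, not any particular build tool's mechanism: it hashes the baseline names and contents so that the same sources always yield the same ID, which a physical configuration audit can later re-derive and compare.

```python
import hashlib

def make_version_id(baseline: dict, build_number: str) -> str:
    """Derive an immutable version ID from a source baseline.

    `baseline` maps file names to their exact contents (bytes).
    Hashing names and contents ties the ID to the baseline, so an
    audit can rebuild the hash and verify what was actually built.
    """
    digest = hashlib.sha256()
    for name in sorted(baseline):
        digest.update(name.encode("utf-8"))
        digest.update(baseline[name])
    return f"{build_number}-{digest.hexdigest()[:12]}"

# The same baseline always produces the same ID; any change produces
# a different one, which is what makes the ID verifiable.
v1 = make_version_id({"app.java": b"class App {}"}, "build-42")
v2 = make_version_id({"app.java": b"class App {}"}, "build-42")
v3 = make_version_id({"app.java": b"class App { }"}, "build-42")
```

In a real pipeline the build number and baseline would come from the version control system rather than literals, but the determinism shown here is the property the audit depends on.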

Many build tools, such as Ant, Maven and Make, provide routines to automate the creation of release packages such as Java JARs, WARs and EARs. These automated procedures also enable you to create a manifest that contains essential information about the configuration items in the container and the release package itself. Another important best practice is the use of cryptography to ensure the integrity of the release package and its contents.
If you have ever downloaded software from the internet, then you have likely come across packages that have been signed and verified using cryptographic hashes such as SHA-1 and MD5. Cryptographic hashes can be used to ensure the authenticity of the source, providing what is known as non-repudiation. They can also be used to verify that the package itself has not been tampered with. Cryptography can help by maintaining secure baselines and alerting the authorities to unauthorized changes. These practices enable you to create what is known as the trusted base.
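As a concrete, deliberately simplified illustration of a release manifest, the sketch below builds a zip-style package in Python rather than with Ant or Maven (all file names here are illustrative assumptions). It records a SHA-256 digest for each configuration item so the package contents can be checked during a later audit:

```python
import hashlib
import io
import json
import zipfile

def package_release(archive, items: dict) -> dict:
    """Write a release package plus a manifest describing each CI.

    `items` maps archive member names to file contents (bytes); the
    manifest stores a SHA-256 digest per member for later auditing.
    """
    manifest = {name: hashlib.sha256(data).hexdigest()
                for name, data in items.items()}
    with zipfile.ZipFile(archive, "w") as zf:
        for name, data in items.items():
            zf.writestr(name, data)
        zf.writestr("MANIFEST.json", json.dumps(manifest, indent=2))
    return manifest

# Build an in-memory package for demonstration purposes.
buf = io.BytesIO()
manifest = package_release(buf, {
    "app.war": b"<war bytes>",
    "config.xml": b"<config/>",
})

# Read the package back, as an audit would.
buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    member_names = set(zf.namelist())
```

The manifest travels inside the package, so anyone who receives the archive can recompute each digest and confirm that every CI is exactly what the build produced.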

The trusted base is the secure and verifiable runtime environment built using these security-focused best practices, which ensure that you know exactly which CIs were built and that the correct source code baseline was used in the build itself. There have been recent incidents where banks, exchanges and other financial institutions have suffered serious system glitches because of security issues, including attacks by hackers. Knight Capital Group reportedly suffered a $440 million loss due to the wrong version of networking software on its servers. These incidents highlight the need for robust configuration management best practices, including DevOps, starting early in the development process. DevOps focuses on implementing automated application build, package and deployment for development, QA, integration, pre-production and production deployments. In all of these situations, testing is a must-have.
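Verifying that a deployed artifact matches its baseline is the operational heart of the trusted base. A minimal sketch, using SHA-256 and Python's standard `hmac` module (the artifact bytes and key handling are purely illustrative), might look like this:

```python
import hashlib
import hmac

def verify_integrity(artifact: bytes, expected_digest: str) -> bool:
    """Has the artifact been altered since it was baselined?"""
    actual = hashlib.sha256(artifact).hexdigest()
    return hmac.compare_digest(actual, expected_digest)

def sign_artifact(artifact: bytes, key: bytes) -> str:
    """An HMAC tag only the key holder could produce (authenticity)."""
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def verify_signature(artifact: bytes, key: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign_artifact(artifact, key), tag)

# Baseline an artifact at build time, then re-check it at deploy time.
artifact = b"payment-service build 42"
baseline_digest = hashlib.sha256(artifact).hexdigest()
tag = sign_artifact(artifact, key=b"release-signing-key")
```

The plain digest catches accidental or malicious changes to the bytes; the keyed tag additionally proves who produced the package, which is the non-repudiation property discussed above.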

Application testing is an essential best practice, including smoke testing, which should always be completed as the last step in an application deployment. Testing is fundamental, and automated build procedures should include unit tests as part of the automated build stream. From a security perspective, effective source code management and automated application builds enable information security by facilitating the scanning of source code for security vulnerabilities using automated code analysis tools. You can also build variants of the code that enable specialized testing to detect potential security problems. DevOps helps to establish the core CM best practices, including application build, package and deployment, that are essential for establishing the trusted base.
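A post-deployment smoke test can be as simple as a named set of fast checks that must all pass before the deployment is declared good. The sketch below is a generic runner; the check names and the idea that a failure triggers rollback are assumptions for illustration, not a prescription from the article.

```python
def run_smoke_tests(checks: dict):
    """Run each named post-deployment check and report the outcome.

    `checks` maps a check name to a zero-argument callable returning
    True on success. A check that crashes counts as a failure, so a
    bad deployment is flagged rather than silently passed.
    """
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False  # a crash counts as a failure
    return all(results.values()), results

# Example: two fast checks that might follow a web application deploy.
ok, report = run_smoke_tests({
    "version_endpoint": lambda: True,   # e.g. /version returned the new ID
    "db_connection": lambda: 1 / 0,     # simulated crashing check
})
```

In practice each callable would hit a health endpoint or query a dependency; the runner's job is only to make the pass/fail decision explicit and auditable.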

DevOps implements information security best practices, including source code baselining and automated build, package and deployment. Embedding immutable version IDs and the effective use of cryptography are also essential and very effective at ensuring the trusted base. If you are using DevOps best practices effectively, you will be able to drive your information security capabilities to new levels, provide secure and reliable services and, most importantly, delight your customers!


Personality Matters- The DevOps Divide
By Leslie Sachs

IT organizations often face challenges that range from complex technology to even more complex personalities. When I work with consultants in our CM Best Practices Consulting practice, I often hear of the many types of problems that plague organizations. For example, build and release engineers often have to interface with software development, data security, project management and IT operations professionals. DevOps tries to address the dynamics between IT operations and highly skilled software and systems delivery teams. Recently, I had an opportunity to read The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win [1] by Gene Kim, Kevin Behr and George Spafford and my first reaction was that many of the situations described in the pages of that novel were ones we frequently encountered. Organizations are often challenged by the divisions between development and operations, what I have termed the “DevOps Divide.” Read on if you would like to improve your skills in dealing with these challenging dynamics.

Organizations evolve within a complex ecosystem in which many forces impact all of the stakeholders. Some of these forces originate and are controlled from within the organization; external forces, however, can also affect an organization in very significant ways. For example, pressure from competitors and other market forces may require that companies take on significant risk just to stay competitive. With complex new technology systems emerging every three to six months, many businesses are hard-pressed to handle rapid deployments. While automation is an absolute must, collaborative communication is also essential. The development team is a key stakeholder in the DevOps divide.

Development bears the brunt of ill-defined and rapidly changing requirements. But development also gets to learn the new technology and come up to speed ahead of anyone else on the team. For this reason, build engineers and IT operations often need to depend upon the technical expertise of the developers who, in turn, can sometimes feel like they are the only stakeholders who truly understand the technical details. In fact, these dynamics often create friction between development and other stakeholders such as IT operations. Improving communication can help solve these problems, but in practice there is often a significant gap between the development and operations teams. While development forges ahead, operations ensures that the systems stay online and all services remain reliable.

Much is made of the fact that operations has a very different agenda than development. Operations is responsible for providing reliable services and, consequently, can sometimes be averse to constant change. While development is always pressed to have the next release completed and deployed, operations focuses on minimizing risk and ensuring continuous and reliable IT services. These opposing forces are at the heart of the DevOps divide and you must address them if you want your team to be more effective. The data security group also often faces similar challenges.

The data security group needs to set policy, determine standards and establish best practices to ensure that the systems are reliable and protected from intrusion. However, the data security group can also find itself challenged to safeguard complex technology that changes rapidly and is not well-understood. Once again, communication is essential or else the security team may find itself without the necessary expertise to be effective. There are other important stakeholders in the complex ecosystem as well.

The QA and testing function needs to understand how the system works, including business requirements that are often less than perfectly defined. Project managers are often challenged to provide the governance required by senior management to answer the classic question of “are we there yet?” While project managers can be highly skilled at planning, they often do not fully understand all of the tasks described in the project plan, especially in terms of duration, complexity and risk. Let’s not forget the end users either, whether they be internal corporate users or external customers. The DevOps divide, while most obvious between development and operations, can be viewed as a paradigm for similar communication impasses between any of the stakeholders involved.

At the heart of any organization is a culture and family structure that may be highly effective but may also, at times, be dysfunctional. We discuss many of these factors in my book on Configuration Management Best Practices [2]. You have probably worked in organizations that handle communication well, and you have probably worked in more than a few that are sorely challenged in this crucial area. DevOps brings a set of principles and best practices that help the team improve both quality and productivity. The most essential of these practices is effective communication. Understanding the organizational dynamics, as well as the personalities of each of the team members, can help you comprehend and effectively manage the DevOps divide.

References
[1] Kim, Gene, Kevin Behr, and George Spafford. The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win. IT Revolution Press, 2013
[2] Aiello, Robert and Leslie Sachs. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley, 2010

Wendy’s Fast Food Chain Credit Card Data Breach
Hackers Pay a Visit to Fast Food Chain and It Wasn’t Just to Grab a Hamburger
By Bob Aiello

Fast food chain Wendy’s has provided additional information on the data breach, first disclosed in January, in which customer credit information (including cardholder name, credit or debit card number, expiration date, cardholder verification value, and service code) was stolen by hackers from what is believed to be as many as 1,000 stores. It was previously reported that only 300 stores had been impacted. According to the company press release, the data breach is believed to have “resulted from a service provider’s remote access credentials being compromised, allowing access – and the ability to deploy malware – to some franchisees’ Point-of-Sale (POS) systems”. Wendy’s claims that “soon after detecting the malware, Wendy’s identified a method of disabling it and thereafter has disabled the malware in all franchisee restaurants where it has been discovered. The investigation has confirmed that criminals used malware believed to have been effectively deployed on some Wendy’s franchisee systems starting in late fall 2015.”

Wendy’s, the third largest hamburger fast-food business, has over a billion dollars in revenue annually and over 6,000 franchise locations. In May of 2016, the company confirmed discovery of evidence of malware being installed on some restaurants’ point-of-sale systems, and worked with their investigator to disable it. On June 9th, they reported that they had discovered additional malicious cyber activity involving other restaurants. The company believes that the malware has also been disabled in all franchisee restaurants where it has been discovered. “We believe that both criminal cyberattacks resulted from service providers’ remote access credentials being compromised, allowing access – and the ability to deploy malware – to some franchisees’ point-of-sale systems.”

In a July 7th statement, Todd Penegor, President and CEO of the Wendy’s Company, stated that, “in a world where malicious cyberattacks have unfortunately become all too common for merchants, we are committed to doing what is necessary to protect our customers. We will continue to work diligently with our investigative team to apply what we have learned from these incidents and further strengthen our data security measures. Thank you for your continued patience, understanding and support.”

Commentary by Bob Aiello

The following is my opinion; feel free to contact me if you disagree.

I believe that too many companies are not accepting responsibility for ensuring that their systems are completely safe and reliable. Wendy’s does over a billion dollars in sales annually. They have the resources to create completely secure IT systems that will ensure that customer data is safe. There are PCI regulatory requirements in place, and organizations that can help companies create secure and reliable systems. Yet many companies continue to rely upon “experts” who can find malware that has been put onto their servers by hackers. This approach is, at best, like trying to find a needle in a haystack. We should be building systems to be completely secure and reliable from the ground up. As consumers, we need to demand that corporations implement their systems, which hold our personal data, using techniques such as the secure trusted application base.

Behaviorally Speaking – CM and Traceability
by Bob Aiello

Software and systems development can often be a very complex endeavor, so it is no wonder that important details sometimes get lost in the process. My own work involves implementing CM tools and processes across many technology platforms, including mainframes, Linux/Unix, and Windows. I may be implementing an enterprise Application Lifecycle Management (ALM) solution one day and supporting an open source version control system (VCS) the next. It can be difficult to remember all of the details of each implementation, and yet that is precisely what I need to do. The only way to ensure that I don’t lose track of my own changes is to use the very same tools and processes that I teach developers for my own implementation work – thereby ensuring history and traceability of everything that I do. I have known lots of very smart developers who made mistakes due to forgetting details that should have been documented and stored for future reference. It often seems like developers are great at running up the mountain the first time, but it takes process and repeatable procedures to ensure that each and every member of the team can run up the same mountain without tripping. This article will discuss how to implement CM and traceability in a practical and realistic way.

The most basic form of traceability is establishing baselines to record a specific milestone in time. For example, when you are checking changes into a version control tool, there is always a point at which you believe that all of the changes are complete. To record this baseline, you should label or tag the version control repository at that point in time. This is basic version control, and it is essential in order to be able to rebuild a release from a specific point in time (usually when the code was released to QA for testing). But how do you maintain traceability once the code has been deployed and is no longer in the version control solution? In fact, you also need to maintain baselines in the production runtime area and ensure that you can verify that the right code has been deployed. You must also ensure that unauthorized changes have not occurred, whether through malicious intent or just an honest mistake. Maintaining a baseline in a runtime environment takes a little more effort than just labeling the source code in a version control repository, because you need to actually verify that the correct binaries and other runtime dependencies have been deployed and have not been modified without authorization. It is also sometimes necessary to find the exact version of the source code that was used to build the release running in production in order to make a support change, such as an emergency bug fix. Best practices in version control and deployment engineering are very important, but there is more to traceability than just labeling source code and tracking binaries.
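Verifying a runtime baseline usually comes down to comparing cryptographic hashes of the deployed files against a manifest recorded at build time. The sketch below assumes a simple path-to-digest manifest; the file names are illustrative, not from any particular deployment tool:

```python
# Sketch: detect unauthorized changes in a runtime environment by
# comparing deployed files against a hash manifest recorded at build
# time. File names here are illustrative assumptions.
import hashlib
import pathlib
import tempfile

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def verify_baseline(manifest):
    """manifest maps deployed file path -> digest recorded at build time.
    Returns the paths that no longer match (drift or tampering)."""
    return [p for p, digest in manifest.items() if sha256_of(p) != digest]

# Demo with a temporary file standing in for a deployed binary.
with tempfile.TemporaryDirectory() as runtime_dir:
    artifact = pathlib.Path(runtime_dir) / "app.jar"
    artifact.write_bytes(b"release 1.2.3 binary")
    manifest = {str(artifact): sha256_of(artifact)}  # captured at build time
    clean = verify_baseline(manifest)    # empty list: deployment intact
    artifact.write_bytes(b"tampered")
    drifted = verify_baseline(manifest)  # the modified file is reported
```

Running the same comparison on a schedule turns the manifest into a simple audit trail: any file in the report was changed outside the authorized deployment process.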

When software is being developed, it is important to write the requirements in enough detail that developers can understand the exact functionality to be built. Requirements change frequently, and it is essential that you can track and version control the requirements themselves. In many industries, such as medical devices and other mission-critical systems, there is often a regulatory requirement to ensure that all requirements have been reviewed and were included in the release. If you were developing a life support system, then obviously requirements tracking could be a matter of life or death. Dropping an essential requirement for a high-speed trading firm can also have serious consequences, and it is just not feasible to rely upon testing to ensure that all requirements have been met. As Deming noted many years ago, quality has to be built in from the beginning [1]. There are also times when all requirements cannot be included in a release, and choices have to be made, often based upon risk and the return on investment of including a specific feature. This is when it is often essential to know who requested a specific requirement and who has the authority to decide which requirements will be approved and delivered. Traceability is essential in these circumstances.

Robust version control solutions allow you to automatically track sets of changes, known as changesets, to the workitems that described and authorized the change. Tracking workitems to changesets is known by some authors as task-based development [2]. In task-based development, you define the workitems up front and then assign them to resources to complete the work. Workitems may be defects, tasks, requirements or, for agile enthusiasts, epics and stories. Some tools allow you to specify linkages between workitems, such as a parent-child relationship. This is very handy when a defect comes in from the help desk that results in other workitems, such as tasks and even test cases, to ensure that the problem does not happen again in the next release. Traceability helps to document these relationships and also links the workitems to the changesets themselves. Establishing traceability does not really take much more time, and it helps to organize and implement iterative development. In fact, it is much easier to plan and implement agile Scrum-based development if your version control tool implements task-based development with the added benefit of providing traceability.
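The changeset-to-workitem linkage can be sketched with nothing more than a commit-message convention. The "WI-123" ID format below is an assumption for illustration, not a standard from any particular tool:

```python
# Sketch: task-based development via a commit-message convention that
# links each changeset to the workitem(s) authorizing it. The "WI-123"
# ID format is an illustrative assumption.
import re

WORKITEM_ID = re.compile(r"\bWI-\d+\b")

def workitems_for(commit_message):
    """Return workitem IDs referenced by a commit message; an empty list
    means the changeset cannot be traced to an authorized task."""
    return WORKITEM_ID.findall(commit_message)

def traceability_report(commits):
    """Map each workitem ID to the changesets (commit SHAs) implementing it."""
    report = {}
    for sha, message in commits:
        for workitem in workitems_for(message):
            report.setdefault(workitem, []).append(sha)
    return report

commits = [
    ("a1b2c3", "WI-101: fix login timeout"),
    ("d4e5f6", "WI-101 WI-205: refactor session store"),
]
report = traceability_report(commits)
```

A pre-receive hook could reject any commit for which `workitems_for` returns an empty list, enforcing the rule that every changeset traces back to an authorized workitem.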

Traceability can help you and your team manage the entire CM process by organizing and tracking the essential details that must be completed in order to successfully deliver the features that your customers want to see. Picking the right tools and processes can help you implement effective CM and maintain much needed traceability. It can also help you develop software that has better quality while meeting those challenging deadlines which often change due to unforeseen circumstances. Use traceability effectively to accelerate your agile development!

References
[1] Deming, W. Edwards. Out of the Crisis. MIT Press, 1986
[2] Hüttermann, Michael. Agile ALM: Lightweight Tools and Agile Strategies. Manning Publications, 2011
[3] Aiello, Bob and Leslie Sachs. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional, 2011
[4] Aiello, Bob and Leslie Sachs. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional, 2016