Personality Matters- Positive Psychology and Learning from Mistakes
By Leslie Sachs

Mistakes happen. But too often, team members engage in very dysfunctional behavior after they have made mistakes. Even though mistakes are often the best learning experiences, many organizations suffer serious consequences not just because a mistake was made, but as a direct result of the attempt to save face and cover up after an employee has made a mistake. W. Edwards Deming wisely said that organizations need to “drive out fear”; addressing mistakes in an open and honest manner is essential for any organization striving to excel in today’s competitive business environment. Here is what positive psychology teaches us about creating an environment where employees are empowered to address their mistakes openly and honestly.

Positive psychology teaches us that most people want to cultivate what is best within themselves and to enhance their experiences of love, work, and play. The trick is to guide your employees toward behaviors that accomplish these goals. Otherwise, you may see very dysfunctional behaviors such as hiding mistakes, denial, and even blaming others, actions that disrupt the workforce and can adversely impact the business in many ways. Many organizations have siloed infrastructures and cultures, which further detract from the goal of addressing mistakes and resolving problems as efficiently as possible. DevOps principles and practices can help by encouraging teams to work in a collaborative, cross-functional way; supportive teamwork is essential when addressing mistakes. Highly effective teams need to embrace more effective and proactive ways of dealing with challenges, including human error.

Positive Psychology focuses on positive emotions, positive individual traits, and positive institutions. This approach is a refreshing change from the many schools of psychology which focus more on analyzing the reasons for a variety of anti-social and other problematic personality types that often result in dysfunctional behavior. No doubt some folks do indeed have personality problems which predispose them to managing problems – such as handling their own mistakes – in a way that is not very constructive. But it is equally true that focusing on positive individual traits helps us to see and appreciate the strengths and virtues, such as personal integrity, self-knowledge, self-control, courage, and wisdom, that come from experience and from being nourished in a positive environment. The individual is very important in this context, but it is equally important to consider the organization holistically. Understanding positive behaviors within the company itself entails studying the strengths that empower team members to address challenges in an effective and creative way. Some examples of issues that should be discussed are social responsibility, civility, tolerance, diversity, work ethic, leadership, and honesty.

Not surprisingly, the best leaders actually exhibit these behaviors themselves and lead by example, which brings us back to how specific individuals handle mistakes. When mistakes occur, does your organization foster a safe and open environment where people can feel that their best course of action is to admit what they did wrong? Do team members assume that their colleagues will drop what they are doing to help out in resolving any problems? Does the team avoid finger-pointing and the blame game to focus instead on problem resolution?

One manager mentioned that he did not focus so much on the unavoidable reality that mistakes will occur. Instead, he focused on encouraging his employees to acknowledge errors freely and then rated the entire team on their ability to work together and address problems productively, regardless of who may have been involved. Positive psychology gives us an effective framework for actually following through on Deming’s direction to “drive out fear.” The most successful organizations take mistakes and make them learning experiences, leading to employees who feel a renewed sense of loyalty and commitment to achieving excellence. Mistakes happen – your challenge is to ensure that, rather than demoralizing or paralyzing people, these missteps instead empower your team to be more effective and successful!


References

[1] Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55, 5–14.

[2] Seligman, M. E. P. (2002). Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment. New York: Free Press.

[3] Abramson, L. Y., Seligman, M. E. P., & Teasdale, J. D. (1978). Learned helplessness in humans: Critique and reformulation. Journal of Abnormal Psychology, 87.

[4] Deming, W. Edwards (1986). Out of the Crisis. MIT Press.

[5] Aiello, Bob, and Leslie Sachs (2010). Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.

Groundbreaking Technology for DevOps Omniscience; JFrog Announces Immediate Availability of JFrog Xray

SANTA CLARA, CA, July 5, 2016 (as announced on Marketwired)

JFrog, the worldwide leader in infrastructure for software management and distribution, today announced the immediate availability of JFrog Xray, its pioneering product for accelerating software delivery. This powerful new resource provides organizations unprecedented insight into their software packages and containers.

As the industry’s first universal artifact analysis product, JFrog Xray works with all software package formats and a multitude of databases as it deeply and recursively scans every type of binary component ever used in a software project and points out changes or issues impacting the production environment.

“The early response to JFrog Xray has been phenomenal,” said Shlomi Ben Haim, founder and CEO of JFrog. “We’re excited that more and more organizations will now be able to benefit from this pioneering technology for gaining radical transparency into the huge volume and variety of binary components used in development. The combination of validating with security databases, along with acquiring metadata from JFrog Artifactory makes JFrog Xray the only tool in the world that not only scans the container or software package, but also provides a full dependency graph and impact analysis to the user. Our goal is to address the real DevOps pain and not just to send another scanner to the market!”

As organizations increasingly transform from isolated teams and role-specific tools to common delivery pipelines shared by global teams and driven by seamlessly inter-operating tools, they need to understand all the binary artifacts they produce, across all product lines and geographies, taking into account changes to global application deployment and distribution over time.

JFrog Xray addresses this critical need by providing deep recursive scanning to repeatedly peel back the layers of software components and their accompanying metadata to uncover security vulnerabilities or other issues down to the most fundamental binary component, no matter what binary packaging format the organization uses. This deep scanning of the dependency graph provides organizations the ability to perform impact analysis on changes to their package structure.

JFrog Xray is a fully-automated platform with a powerful REST API, enabling integration with an organization’s CI/CD pipeline as well as with all current and any possible future types of component-scanning technology.

It integrates with a universal range of databases and security platforms so that critical necessities such as security vulnerability analysis, license compliance and component version analysis and assurance become possible not only at build time, but across all of the enterprise’s binary digital assets as well.

JFrog Xray is available now at https://www.jfrog.com/xray/free-trial with a special webinar on JFrog Xray on July 14 — http://bit.ly/23gL143.

More information about JFrog Editions — https://www.jfrog.com/pricing
Resources
Company: https://www.jfrog.com
Open Positions: https://join.jfrog.com
JFrog Artifactory: https://www.jfrog.com/artifactory
JFrog Bintray: https://www.jfrog.com/bintray
JFrog Mission Control: https://www.jfrog.com/mission-control
JFrog Xray: https://www.jfrog.com/xray
Customer testimonials: https://www.jfrog.com/customers
Twitter: https://twitter.com/jfrog
LinkedIn: https://www.linkedin.com/company/jfrog-ltd
JFrog Training Webinars: https://www.jfrog.com/webinars

About JFrog
More than 2,500 paying customers, 60,000 installations and millions of developers globally rely on JFrog’s world-class infrastructure for software management and distribution. Customers include some of the world’s top brands, such as Amazon, Google, LinkedIn, MasterCard, Netflix, Tesla, Barclays, Cisco, Oracle, Adobe and VMware. JFrog Artifactory, the Universal Artifact Repository, JFrog Bintray, the Universal Distribution Platform, JFrog Mission Control, for Universal Repository Management, and JFrog Xray, Universal Component Analyser, are used by millions of developers and DevOps engineers around the world and available as open-source, on-premise and SaaS cloud solutions. The company is privately held and operated from California, France and Israel. More information can be found at www.jfrog.com.

Behaviorally Speaking—Putting Ops Into DevOps
by Bob Aiello
DevOps is a set of principles and practices that helps development and operations teams work more effectively together. DevOps focuses on improving communication and collaboration, with the technical goal of providing reliable and flexible release and deployment automation. Much of what is written about DevOps is delivered from the perspective of developers. We see many articles about continuous delivery and deployment describing the virtues of good tooling, microservices, and very popular practices around virtualization and containers. In my opinion, DevOps also needs to take an operations view in order to achieve the right balance. This article is all about putting Operations back into DevOps.

Operations professionals are responsible for ensuring that IT services are available without interruption or even degradation. IT operations is a tough job, and I have worked with many technology professionals who were truly gifted in its functions and competencies. Many IT operations staff perform day-to-day tasks that can be very repetitive, although essential to keeping critical systems online and operational. In some organizations, operations engineers are not as highly skilled as their development counterparts. Historically, mainframe operators focused on punch cards and mounting tapes while programmers focused on implementing complex business logic. Today we still come across operations engineers who lack strong software engineering skills and training, and this can be a very serious problem. When developers observe that operations technicians are not highly skilled, they often stop providing technical information because they conclude that the operations folks cannot understand the technical details. This dynamic can have disastrous consequences for the company, the most common being that developers feel they should bypass operations as often as possible. I have also worked with top-notch Unix/Linux gurus in operations who focused on keeping complex systems up and running on a continuous basis. IT operations professionals often embrace the itSMF ITIL v3 framework to ensure that they are implementing industry best practices for reliable IT services. If you are not already aware of ITIL v3, you probably should be.

The ITIL v3 framework describes a robust set of industry best practices designed to ensure continuous operation of IT services. The ISACA COBIT and SEI CMMI frameworks are also used by many organizations to improve their IT processes, but ITIL is by far the most popular set of guidelines for IT operations. CM professionals should particularly focus on the guidance in the service transition section of the ITIL framework, which covers change management, build and release, and configuration management systems (including the CMDB). With all of this guidance, do not forget to start at the beginning with an understanding of the application and systems architecture.

The first thing that I always require is a clear description of the application and systems architecture. This information is essential for understanding the system as a whole, or, in DevOps terminology, for having a full end-to-end systems view. For build and release engineers, understanding the architecture is fundamental because all of our build, release, and deployment scripts must be created with an understanding of the architecture involved. In fact, development needs to build applications that are designed for IT Operations.

Many developers focus on Test-Driven Development (TDD), where code is designed and written to be testable, often beginning with writing the unit test classes even before the application code itself. I have run several large-scale automated testing projects in my career, and I have always tried to work with the developers to design the systems to be more easily testable. In some cases this actually included hooks to ensure that the test tools could work without flagging too many cosmetic issues (which we usually call false positives). Test-Driven Development is very effective, and it is my view that applications also need to be designed and written with operations in mind. One reason to design applications with IT Operations in mind is to enable the implementation of IT process automation.

Effective IT operations teams rely upon tools that include the automated collection of events, alerts, and incident management. When an alert is raised or an incident is reported to the IT service desk, the IT operations team must be able to rely upon IT process automation to facilitate detection and resolution of the incident, preferably before there is customer impact. IT process automation must include automated workflows that enable each member of the team to respond in a clear and consistent way. In practice, it is very common for organizations to have one or two highly skilled subject matter experts who are able to troubleshoot almost any production issue. The problem is that these folks don't work twenty-four hours a day, seven days a week, and in fact are usually on vacation when problems occur. IT process automation, including workflow automation, gives the operations team well-documented and repeatable processes that help ensure IT services keep working in a reliable and consistent way. Getting these procedures right must always start with the application build.
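Before turning to the build, here is a minimal Python sketch of the kind of workflow automation just described: an alert dispatcher that maps alert types to documented first-response steps. The alert names, commands, and host name are hypothetical placeholders rather than a reference to any particular monitoring product.

    import subprocess
    from datetime import datetime, timezone

    # Hypothetical mapping of alert types to documented, pre-approved first-response commands.
    RUNBOOK = {
        "disk_space_low": ["df", "-h"],                       # capture current usage for the incident record
        "service_down":   ["systemctl", "restart", "myapp"],  # attempt the documented restart procedure
    }

    def handle_alert(alert_type: str, host: str) -> str:
        """Run the documented first-response step and return a log entry for the incident record."""
        timestamp = datetime.now(timezone.utc).isoformat()
        command = RUNBOOK.get(alert_type)
        if command is None:
            return f"{timestamp} {host} {alert_type}: no automated runbook step, escalating to on-call"
        result = subprocess.run(command, capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else f"failed (rc={result.returncode})"
        return f"{timestamp} {host} {alert_type}: ran {' '.join(command)} -> {status}"

    print(handle_alert("disk_space_low", "app-server-01"))

Even a small dispatcher like this makes the team's response repeatable: the first diagnostic step is the same whether the on-call engineer is the resident guru or someone covering a vacation.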

Effective build automation often includes key procedures such as embedding immutable version IDs into configuration items to facilitate the physical configuration audit. For example, a C#/.NET application should have a version identifier embedded into the assembly. You can embed version IDs via an MSBuild script or the Visual Studio IDE, and the Microsoft .NET MSIL Disassembler (Ildasm.exe) can be used to look inside an assembly and display the version ID. There are similar techniques in Java, C, and C++, along with almost every other software development technology. These techniques are essential for IT operations to be able to confirm that the correct binary configuration items are in place and that there have not been any unauthorized changes. Builds are important, but continuously deploying code starting very early in the development lifecycle is also a critical DevOps function that helps IT operations be more effective.
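The same principle applies outside the .NET toolchain. The following Python sketch, which assumes a hypothetical package layout and a VERSION.json file name, stamps a package with an immutable version ID at build time and then audits it after deployment; it illustrates the idea rather than any particular build tool's syntax.

    import json
    import subprocess
    from pathlib import Path

    def stamp_version(package_dir: str, build_number: str) -> dict:
        """At build time: record the commit and build number inside the package itself."""
        commit = subprocess.run(["git", "rev-parse", "HEAD"],
                                capture_output=True, text=True, check=True).stdout.strip()
        version = {"build": build_number, "commit": commit}
        Path(package_dir, "VERSION.json").write_text(json.dumps(version))
        return version

    def audit_version(deploy_dir: str, expected: dict) -> bool:
        """After deployment: confirm the deployed package carries the expected version ID."""
        deployed = json.loads(Path(deploy_dir, "VERSION.json").read_text())
        if deployed != expected:
            print(f"Configuration audit FAILED: expected {expected}, found {deployed}")
            return False
        print("Configuration audit passed: deployed package matches the release record.")
        return True

Recording the version at build time and checking it after deployment is, in miniature, the physical configuration audit described above.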

Application deployment automation is a key competency in any effective DevOps environment. Continuous delivery enables the IT operations team to rehearse and streamline the entire deployment process; done right, the operations team can support many deployments while still maintaining a high level of service and support. The best practice is to move the application build, package, and deployment process upstream and begin by supporting development test environments. These automated procedures are not trivial, and it will take some time to get them right, so the sooner in the lifecycle you begin this effort, the sooner your procedures will be mature and reliable. Since organizations have to pay someone to deploy code to the development and testing environments anyway, it is a great idea to have the person who will deploy to production do this work and get the experience and training to understand and help evolve the deployment automation. This practice of involving operations from the beginning of the lifecycle has become known as "shift left." IT operations depends upon a reliable automated deployment framework for success, and getting Ops involved from the beginning helps you get that work done.

IT operations is a key stakeholder in any DevOps transformation. It is all too common for development to miss the importance of partnering effectively with operations to develop effective procedures that ensure uninterrupted IT services. If you want to excel then you need to keep Operations in your DevOps transformation!

Personality Matters- Learned Complacency and Systems Glitches
By Leslie Sachs

System glitches have impacted many high-profile trading systems, such as the July 2015 New York Stock Exchange (NYSE) systems outage. Initially feared to be the result of a cyber attack, the outage was ultimately determined to be the result of a faulty software upgrade. The NYSE is not the only trading venue to suffer outages during systems upgrades. In April 2013, the Chicago Board Options Exchange (CBOE) also suffered a high-profile systems glitch, which shut down the CBOE trading system and was among a series of incidents impacting large trading firms, including NASDAQ, that are expected to be highly secure and reliable. It is often believed that these, and similar, outages are the result of the complexity and challenges inherent in upgrading mission-critical financial trading systems. Given that similar outages occurred at other major stock exchanges and trading firms, one might be tempted to think that the CBOE debacle was unremarkable. What is striking is that there was a published report that employees knew in advance that the system was not working correctly, and yet the CBOE nonetheless chose not to fail over to its backup system. In our consulting practice, we often come across technology professionals who try to warn their management about risks and the possibility of systems outages that could impact essential services. During the CM assessments that we conduct, we often find ourselves being the voice for validating and communicating these concerns to those who are in a position to take appropriate action. What is troubling, however, is that we have seen many companies where employees have essentially learned to no longer raise their concerns because no one is willing to listen and, even worse, they may have suffered consequences in the past for being the bearer of bad tidings. We refer to this phenomenon as learned complacency.

Some people are more passive than others. This may come from a personality trait where the person feels that getting along with others is more important than blazing a new trail and standing up for one’s own convictions. Many people strongly desire to just go along with the crowd, and psychologists often refer to this personality trait as agreeableness, one of the primary personality traits in the well-known Big Five [1]. This trait can be very problematic in certain situations, and some people who like to avoid conflict at all costs display a dysfunctional behavior known as passive-aggressiveness. A passive-aggressive person typically refuses to engage in conflict, choosing instead to outwardly go along with the group while inwardly deeply resenting the direction that they feel is being forced upon them. People with a passive-aggressive personality trait may outwardly appear to be agreeable, but deep down they are usually frustrated and dissatisfied and may engage in behaviors that appear to demonstrate acquiescence, yet actually do nothing or even obstruct progress, albeit in a subtle manner. Some IT professionals who have a passive (or passive-aggressive) personality trait may be less than willing to warn their managers about systems problems that could cause a serious outage.

We have seen people who simply felt that although they were close enough to the technology to identify problems, they could not escalate a serious issue to their management because it simply was not their job. In some cases, we have come across folks who tried to warn of pending problems but were counseled by their managers not to be so outspoken. Bob Aiello describes one manager who frequently used the phrase “smile and wave” to encourage his staff to tone down their warnings, since no one really wanted to hear them anyway. Not surprisingly, that organization has experienced serious systems outages which impacted thousands of customers. But not everyone is afraid to stand up and be heard. What often distinguishes employees is their own natural personality traits, including those associated with being a strong leader.

Technology leaders know how to maintain a positive demeanor and focus on teamwork, while still having the courage to communicate risks that could potentially impact the firm. The recent rash of serious systems outages certainly demonstrates the need for corporations to reward and empower their technical leaders to communicate problems without fear of retribution. Deming said, “drive out fear” and there is certainly no greater situation where we need leaders to be fearless than when warning of a potential problem that could have a significant impact upon large-scale production IT systems.

While some people may be predisposed to avoid conflict, the greater problem is when a corporation develops a culture where employees learn to maintain silence even when they are aware of potential problems. The IT industry needs leaders who are accountable, knowledgeable, and empowered to create working environments where those who protect the long-term best interests of the firm are rewarded and those who take short-sighted risks are placed in positions where they cannot adversely impact the well-being of the firm. We will see fewer systems outages when each member of the team understands their own role in the organization and feels completely safe and empowered to speak truthfully about risks and potential problems that may impact their firm’s critical systems infrastructure. There are times when risk-taking is appropriate and may result in significant rewards. However, firms which take unnecessary risks endanger not only their own corporation but may impact thousands of other people as well. Those firms with thoughtful IT leadership and a strong, truthful, and open culture will achieve success while still managing and addressing risk in an appropriate and effective way.


References

[1] http://www.psychometric-success.com/personality-tests/personality-tests-big-5-aspects.htm

[2] Byrne, Donn. 1974. An Introduction to Personality: Research, Theory, and Applications. Prentice-Hall Psychology Series.

[3] Appelo, Jurgen. 2011. Management 3.0: Leading Agile Developers, Developing Agile Leaders. Addison-Wesley Signature Series.

[4] Aiello, Bob and Leslie Sachs. 2011. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.

[5] Aiello, Bob and Leslie Sachs. 2016. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional.


Leslie Sachs is a New York state-certified school psychologist and the COO of Yellow Spider. She is the co-author of Configuration Management Best Practices: Practical Methods that Work in the Real World. Leslie has more than twenty years of experience in the psychology field and has worked in a variety of clinical and business settings, where she has provided many effective interventions designed to improve the social and educational functioning of both individuals and groups. She has an MS in school psychology from Pace University and interned in Bellevue Psychiatric Center in New York City. A firm believer in the uniqueness of every individual, she has recently done advanced training with Mel Levine’s All Kinds of Minds Institute. She may be reached at LeslieASachs@gmail.com, or link with her at http://www.linkedin.com/in/lesliesachs.


Behaviorally Speaking – Why Retailers Need A Secure Trusted Application Base
by Bob Aiello

Target’s well-publicized disclosure that customers’ Personally Identifiable Information (PII) had been compromised was a stark reminder that retailers need to ensure that their systems are secure and reliable. The 2013 incident resulted in a settlement of over 39 million dollars, which Target had to pay banks and other financial services firms to compensate them for losses stemming from the cybersecurity breach. Target is not alone, as other retailers are stepping forward and admitting that they, too, have been “victims” of a cyber-attack. Target CEO Gregg Steinhafel called for chip-enabled credit cards while also admitting that there was malware on his point-of-sale machines. Mr. Steinhafel ultimately lost his job after the incident, which underscores that corporate leaders are being held accountable for their systems’ security and reliability. In Target’s case, the malware was not discovered on the retailer’s machines despite the use of malware scanning services. The fact that retailers rely upon security software to detect the presence of a virus, Trojan, or other malware is exactly what is wrong with the way these executives are looking at this problem.

The problem is that malicious hackers do not give us a copy of their code in advance so that virus protection software vendors can make security products capable of recognizing the “signature” of an attack. This means that we are approaching security in a reactive manner only after the malicious code is already on our systems. What we need to be doing is building secure software in the first place, and to do this you need a secure trusted application base which frankly is not really all that difficult to accomplish. Creating secure software has more to do with the behavior and processes of your development, operations, information security and QA testing teams than the software or technology you are using. We need to be building, packaging and deploying code in a secure and trusted way such that we know exactly what code should be on a server. Furthermore, we also need to be able to detect unauthorized changes which occur, either through human error or malicious intent. The reason that so much code is not secure and reliable is that we aren’t building it to be secure and reliable and it is about time that we fixed this readily-addressed problem. We discuss how to create an effective Agile ALM using DevOps in our new book.

Whether your software system is running a nuclear power plant, grandpa’s pacemaker, or the cash register at your favorite retailer, software should be built, packaged, and deployed using verifiable automated procedures that have built-in tests to ensure that the correct code was deployed and that it is running as it should. In the IEEE standards, this is known as a physical and functional configuration audit and is among the most essential configuration management procedures required by most regulatory frameworks, and for very good reason. If you use Ant, Maven, Make, or MSBuild to compile and package your code, you can also compute cryptographic hashes of your artifacts and sign them with a private key, a technique commonly known as asymmetric cryptography. This isn’t actually all that difficult to do; many build frameworks have these functions already built in, and there are many reliable free and open source libraries available to help automate these tasks. It is unfortunate, not to mention rather costly, that many companies don’t take the time to implement these procedures and best practices as they rush their updates to market without the most basic security built in from the beginning of the lifecycle.
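As a minimal sketch of the approach (not a drop-in for any particular build framework), the fragment below hashes the artifacts produced by a build and signs the resulting manifest with a private key, using the third-party Python cryptography package; the build directory and key handling are simplified for the example.

    import hashlib
    import json
    from pathlib import Path
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def hash_artifacts(build_dir: str) -> dict:
        """Record a SHA-256 digest for every artifact produced by the build."""
        return {
            str(path.relative_to(build_dir)): hashlib.sha256(path.read_bytes()).hexdigest()
            for path in Path(build_dir).rglob("*") if path.is_file()
        }

    # In practice the private key is generated once and kept in a secured key store;
    # it is generated inline here only to keep the example self-contained.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    manifest = json.dumps(hash_artifacts("build/output"), sort_keys=True).encode()
    signature = private_key.sign(manifest)

    # Anyone holding the public key can later confirm the manifest (and therefore the
    # recorded hashes) has not been tampered with; verify() raises if it has been altered.
    public_key.verify(signature, manifest)
    print("Release manifest signed and verified.")

The signed manifest then travels with the release package, so anyone downstream can confirm that what arrived is exactly what the build produced.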

We have had enough trading firms, stock exchanges and big banks suffer major outages that impacted their customers and shareholders. It is about time that proper strategies be employed to build in software reliability, quality and security from the beginning of the lifecycle instead of just trying to tack it on at the end – if there is enough time. The Obamacare healthcare.gov website has also been cited as having serious security flaws and there are reports that the security testing was skipped due to project timing constraints. The DevOps approach of building code through automated procedures and deploying to a production-like environment early in the lifecycle is essential in enabling information security, QA, testing and other stakeholders to participate in helping to build quality systems that are verifiable down to the binary code itself. If you have put in place the procedures needed to detect any unauthorized changes then your virus detection software should not need to detect the signature of a specific virus, Trojan or other malware.

Using cryptography, I can create a secure record of the baseline that allows me to proactively ascertain when a binary file or other configuration item has been changed. When I baseline production systems, I sometimes find that, to my surprise, there are files changing in the environment that I did not expect to change. Often, there is a good explanation. For example, some software frameworks spawn additional processes and related configuration files to handle additional volume. This is particularly an issue with frameworks commonly used to write code faster; these frameworks are often very helpful, but sometimes they are not completely understood by the technology team using them. Baselining your codelines will actually help you understand and support your environment more effectively as you learn what is occurring on a day-to-day basis. There is some risk of false positives, in which you think that you have a virus or other malware when in fact there is a logical explanation for the changed files (and that information can be stored in your knowledge management system for next time).
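A minimal sketch of this baselining technique appears below; the directory path and baseline file name are illustrative, and a real audit would also cover permissions, ownership, and installed package inventories.

    import hashlib
    import json
    from pathlib import Path

    def snapshot(root: str) -> dict:
        """Compute a SHA-256 digest for every file under the directory being baselined."""
        return {
            str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
            for path in Path(root).rglob("*") if path.is_file()
        }

    def compare_to_baseline(root: str, baseline_file: str) -> None:
        """Report files that were added, removed, or modified since the recorded baseline."""
        baseline = json.loads(Path(baseline_file).read_text())
        current = snapshot(root)
        for name in sorted(set(baseline) | set(current)):
            if name not in baseline:
                print(f"ADDED    {name}")
            elif name not in current:
                print(f"REMOVED  {name}")
            elif baseline[name] != current[name]:
                print(f"MODIFIED {name}")

    # Record the baseline once after a verified deployment, then compare on a schedule:
    # Path("prod-baseline.json").write_text(json.dumps(snapshot("/opt/myapp")))
    # compare_to_baseline("/opt/myapp", "prod-baseline.json")

Every change the comparison reports should either map to an approved deployment or become an entry in the knowledge base explaining why that file legitimately changes.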

The Target point-of-sale (POS) devices should have been provisioned using automated procedures that could also immediately identify code on a machine (or networking device) that was not placed there by the deployment team. Identifying malware is great, but identifying that your production baseline has been compromised is infinitely more helpful. When companies start embracing DevOps best practices, large enterprise systems will become more reliable and secure, helping the organization better achieve its business goals!

Pick up a copy of our new book on Agile Application Lifecycle Management to learn more about exactly what you need to do in order to create the secure trusted application base!

Personality Matters- Type “A’s” in DevOps
By Leslie Sachs

Technology teams often attract some “interesting” personalities. Some of these folks may simply exhibit odd, perhaps eccentric, behaviors unrelated to their work responsibilities, while others may engage in behaviors that undermine the effectiveness of the team or, perhaps conversely, actually stimulate teamwork and contribute to success. The personalities of the people on your team certainly impact not only how happy you are to show up for work, but often they directly impact the overall success (or failure) of the organization as well. But what happens when members of your team exhibit overly aggressive or downright combative behaviors, such as insisting that the team adopt their approach or showing a lack of teamwork and collaborative behavior? You may find yourself on a team with some “interesting” personalities one day. Successful managers give careful consideration to the personalities on their teams. Since you’re unlikely to change your colleagues’ MOs, it is wise to consider instead how everyone’s styles interact.

DevOps efforts can benefit from some typical “Type A” or “Type B” behaviors. First, a quick overview of the history: Dr. Meyer Friedman was a cardiologist, trained at Johns Hopkins University, who developed a theory that certain behaviors increase the risk of heart attacks [1]. Dr. Friedman developed this theory with another cardiologist, Dr. Ray Rosenman. These two physicians suggested that people who exhibited type “A” behaviors (TAB), including being overly competitive, hard-driving, and achievement-oriented, were at higher risk for developing coronary artery disease. Fascinating, and not without some controversy in the medical establishment, this research also makes one ponder how other members of the team might react to interacting regularly with a type “A” personality on the team.

Software development is largely a “Type A” endeavor. The fact is that many highly-effective teams have lots of members who are very aggressive, intense and highly competitive. One important mitigating factor is that technology professionals also need to be right. You can exhibit whatever personality traits you want and the software just won’t work if you didn’t get it right. The next issue is that technology is so complex that few people, if any, in today’s global organizations, are able to work entirely on their own. High-performing teams often have specialists who depend upon each other and must collaborate. Even though some degree of competition may be common, frequent collaboration is just not optional. If you have ever been in a meeting with someone who just stuck to their point despite objections from other team members (and seemingly oblivious to any sense of logic) then you probably have seen this type of behavior. Still, many technology teams often struggle to overcome a fair amount of conflict and drama. In the midst of a highly confrontational meeting, it might be tempting to consider what life would be like with the more easy going “Type B” personalities. However, Harvey Robbins and Michael Finley point out that some teams don’t work well when their leaders are unwilling to fight for the team [2].

So, how exactly can one determine what is the right amount of “Type A” versus “Type B” behavior in a DevOps team? As noted in previous articles, there is a natural tension between the aggressive behavior of highly motivated software developers and the operations professionals who are charged with ensuring that we maintain consistent and continuously available services. Operations often focuses on maintaining the status quo while development presses hard for introducing new features based upon customer demand. It shouldn’t surprise you that both types of behavior are essential for a successful DevOps organization. You need to have aggressive personalities with a strong achievement-focused drive to create new features and improved systems. But you also need to temper this aggressiveness with the more balanced “Type B” behaviors that encourage review and careful analysis.

This balance is exactly what the burgeoning DevOps movement brings to the table. DevOps brings together the views of folks engaged in QA activities, software development, operations and also information security. Keep in mind that many people are attracted to each of these essential disciplines, in part, due to their personalities as well as by how these roles fit into the goals and objectives of their respective teams. DevOps brings together and improves communications between teams; it also brings together stakeholders with different viewpoints and often very different personalities. Successful teams harness this diversity to help ensure that their cross-functional teams are more effective. The most effective managers understand the basic personalities and communication styles that are often found in cross-functional teams and become adept at developing strategies which utilize these differences productively. With encouragement, competitive “Type A’s” and more laid-back “Type B’s” can learn to “play nice” so that each of their strengths are incorporated and contribute to overall team success!

References:
[1] Meyer Friedman, Type A Behavior: Its Diagnosis and Treatment (Prevention in Practice Library). Springer, 1996
[2] Harvey Robbins and Michael Finley, Why Teams Don’t Work – What Went Wrong and How to Make it Right, Peterson’s Pacesetter Books, 1995
[3] Bob Aiello and Leslie Sachs, Configuration Management Best Practices: Practical Methods that Work, Addison-Wesley Professional, 2011
[4] Bob Aiello and Leslie Sachs, Agile Application Lifecycle Management – Using DevOps to Drive Process Improvement, Addison-Wesley Professional, 2016

Behaviorally Speaking – Software Safety
by Bob Aiello

Software impacts our world in many important ways. Almost everything that we touch from the beginning to the end of our day relies upon software. For example, airline flight controls and nuclear power plants all rely upon complex software that must be updated from time to time, tested, and supported. Past incidents have impacted the 911 emergency dispatch system, and that was not the only time that emergency dispatch systems have suffered outages affecting response times for police, ambulance, and fire department services. The software that enables the anti-missile defense system known as the Iron Dome in Israel has been credited with saving lives and underwent an extensive testing and validation effort. But the number of software glitches impacting trading systems and other complex financial systems could cause us to question whether our software configuration management capability is really where it should be.

Many years ago, I was interviewed by a very smart technology manager for a position supporting a major New York based stock exchange. I went into the interview feeling pretty confident that I had the requisite skills and actually had been recommended by a manager who I had worked for previously at another company. I was surprised when, during the interview, I was asked a very pointed question about my capabilities. The manager asked me to imagine that I was supporting the software for a life support system which a loved one depended upon. He then asked me whether I was confident that I would never make a mistake that could potentially impact the person (presumably my child, parent, or spouse) who was dependent upon the life support system. I was pretty shocked at this question being posed during a job interview, but I managed to stay positive and told the manager that my methods worked and that, yes, I would trust them on a life support system that could potentially impact someone I cared about. But the question stayed with me for years to come. The truth is that someone has to upgrade the software used by life support systems, and I am not completely confident that our industry has completely reliable methods to handle this work.

Some time ago I gave a full-day class at the Nuclear Information Technology Strategic Leadership (NITSL) conference. NITSL is a nuclear industry group of nuclear generation utilities that exchange information related to information technology management and quality issues. I am pleased to say that these colleagues valued software safety to such a degree that it was an ingrained aspect of their culture, one which impacted every aspect of their daily work.

From a configuration management perspective, the first step in software safety must be to establish the trusted base, from the systems software to the applications that are integrated with the hardware devices. The trusted base must start from the lowest levels of the system, including the firmware, the operating system, and even the hardware itself. Applications must be built, packaged, and deployed deterministically to the trusted base in a manner that ensures we know exactly what code is to be deployed and that we can verify that the correct code was indeed deployed to the target environment. Equally important is verifying that no unauthorized changes have occurred and that the trusted base is verifiable and fully tested. If you had a pacemaker that required software updates, it would obviously be essential that you could rely upon there being a trusted base that enables the pacemaker to function reliably and correctly.

Past outages at major stock exchanges and trading firms have shown that many complex financial systems obviously do not have an established trusted computing base and that has directly resulted in very steep losses for some firms and impacted thousands of people. The good news is that we actually do know how to build, package and deploy software reliably. We also know how to verify that the right code was deployed and that there are no unauthorized changes. These best practices are precisely what we discuss in application build, package and deployment including DevOps, although many firms struggle with their successful implementation. The key to success is to start from the beginning.

In my consulting work, I often find that companies actually do know what has to be done to reliably build, package and deploy software successfully. The problem is that they often begin doing the right thing much too late in the application lifecycle. Deming teaches us that quality must be built in from the beginning. The same is especially true when considering software safety.

Successful build and release engineers understand that smoke testing after a deployment is essential for a successful build and release process. When the software matters, you need to be verifying and validating the code from the very beginning to the end of the lifecycle. This means that your build stream should include unit testing, functional and non-functional testing (e.g., performance testing), and of course comprehensive regression testing. Good configuration management practices allow you to build a version of the code that can be instrumented for comprehensive code analysis and exhaustive automated testing. The truth is that these best practices are most successful when they are supported from the very beginning of the lifecycle and are a fundamental part of the culture of the organization. Don't forget that the build and deploy pipeline must also be verifiable and trusted.
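A post-deployment smoke test does not need to be elaborate to be valuable. The sketch below assumes a hypothetical health endpoint that reports the running version; it is the kind of check that can run automatically at the end of every deployment.

    import json
    import sys
    from urllib.request import urlopen

    def smoke_test(base_url: str, expected_version: str) -> bool:
        """Verify that the service answers and is running the build we just deployed."""
        try:
            with urlopen(f"{base_url}/health", timeout=10) as response:  # hypothetical endpoint
                payload = json.loads(response.read())
        except Exception as error:
            print(f"Smoke test failed: service did not answer ({error})")
            return False
        if payload.get("version") != expected_version:
            print(f"Smoke test failed: expected {expected_version}, got {payload.get('version')}")
            return False
        print("Smoke test passed.")
        return True

    if __name__ == "__main__":
        sys.exit(0 if smoke_test("http://localhost:8080", "2016.07.1-b42") else 1)

Because the check compares the reported version against the version the pipeline just built, it doubles as a lightweight configuration audit.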

When I create an automated build and deployment system, I start from the ground up, verifying the operating system itself and all of the system dependencies. I only trust the trusted base if I am able to verify it on a continuous basis, and this becomes for me part of environment management (and monitoring). For example, the Center for Internet Security (CIS) provides an excellent consensus standard that explains in great detail exactly how to create a secure Linux operating system. You will also find that the consensus standard provides example code for verifying that the security baseline is configured as it should be. Successful security engineering involves both configuring the operating system correctly and verifying on an ongoing basis that it stays configured in a secure way. This is fundamentally a core aspect of environment monitoring and is essential for ensuring the trusted base.
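To give a feel for that kind of continuous verification, the fragment below checks just two illustrative hardening items, restrictive permissions on /etc/shadow and PermitRootLogin disabled in sshd_config; it is a tiny subset chosen for the example and in no way a substitute for the full CIS benchmark or its published audit scripts.

    import os
    import re
    import stat

    def shadow_permissions_ok(path: str = "/etc/shadow") -> bool:
        """The shadow password file should not be accessible by group or other."""
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    def root_ssh_login_disabled(path: str = "/etc/ssh/sshd_config") -> bool:
        """PermitRootLogin should be explicitly set to 'no'."""
        with open(path) as config:
            return any(re.match(r"^\s*PermitRootLogin\s+no\b", line) for line in config)

    for name, check in [("shadow file permissions", shadow_permissions_ok),
                        ("root SSH login disabled", root_ssh_login_disabled)]:
        print(f"{'PASS' if check() else 'FAIL'}: {name}")

Running checks like these on a schedule, and alerting when a previously passing check starts failing, is what turns a one-time hardening exercise into ongoing environment monitoring.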

Software safety requires that systems be built and configured in a secure and reliable way. Changes need to be tracked and verified, which is essentially the purpose of the physical configuration audit. There's more to software safety, and I hope that you will contact me to share your views on software safety best practices and get involved with the community-based efforts to update software safety standards!

Call for Articles!

Hi Everyone!

I am excited to invite you to get involved with the Agile ALM Journal by contributing your own articles on Agile ALM and DevOps along with all aspects of software and systems development. The Agile ALM Journal provides guidance on Application Lifecycle Management which means that we have a strong focus on DevOps, Configuration Management and software methodology throughout the entire ALM. Articles are typically 900 – 1200 words and should explain how to do some aspect of software methodology. Contact me directly to get involved with submitting your articles and I will help you with getting started, forming your ideas and editing your article for publication.

Common topics include:

  • Software development approaches including agile
  • DevOps throughout the entire software process
  • Configuration Management (including the CMDB)
  • Build and Release Engineering
  • Source Code Management including branching and streams
  • Deployment Engineering (DevOps)
  • Continuous Testing
  • Development in the Cloud
  • Continuous Integration and Deployment
  • Environment Management
  • Change Management

and much more!

Bob Aiello
Editor
bob.aiello@ieee.org
http://www.linkedin.com/in/BobAiello

Monitoring Your Environment

Monitoring your runtime environment is an essential function that will help you proactively identify potential issues before they escalate into incidents and outages. But environment monitoring can be pretty challenging to do well. Unfortunately, environment management is often overlooked and, even when addressed, usually only handled in the simplest way. Keeping an eye on your environment is actually one of the most important functions for IT operations. If you spend the time understanding what really needs to be monitored and establish effective ways of communicating events, then your systems will be much more reliable—and you will likely get a lot more sleep without so many of those painful calls in the middle of the night. Here’s how to get started with environment management.

The ITIL v3 framework provides pretty good guidance on how to implement an effective environment management function. The first step is to identify which events should be monitored and establish an automated framework for communicating the information to the stakeholders who are responsible for addressing problems when they occur. The most obvious environment dependencies are basic resources such as available memory, disk space, and processor capacity. If you are running low on memory, disk space, or any other physical resource, then obviously your IT services may be adversely impacted. Most organizations understand that employees need to monitor key processes and identify and respond to abnormal process termination. Nagios is one of the most popular tools for monitoring processes and communicating events that may be related to processes terminating unexpectedly.
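As a minimal illustration of this kind of scheduled check (using the third-party psutil package rather than Nagios itself, and with thresholds and process names chosen purely as examples), a monitoring script might look like this:

    import psutil

    # Example thresholds and watched processes; tune these for your own environment.
    DISK_PATH, DISK_LIMIT = "/", 90      # warn when the filesystem is more than 90% full
    MEMORY_LIMIT = 90                    # warn when more than 90% of memory is in use
    REQUIRED_PROCESSES = {"nginx", "postgres"}

    def check_environment() -> list:
        """Return human-readable warnings for anything outside normal bounds."""
        warnings = []
        if psutil.disk_usage(DISK_PATH).percent > DISK_LIMIT:
            warnings.append(f"Disk usage on {DISK_PATH} is above {DISK_LIMIT}%")
        if psutil.virtual_memory().percent > MEMORY_LIMIT:
            warnings.append(f"Memory usage is above {MEMORY_LIMIT}%")
        running = {proc.info["name"] for proc in psutil.process_iter(["name"])}
        for process in REQUIRED_PROCESSES - running:
            warnings.append(f"Required process '{process}' is not running")
        return warnings

    for warning in check_environment():
        print(warning)  # in practice, raise an alert or open an incident instead of printing

A dedicated monitoring tool adds scheduling, escalation, and dashboards on top, but the underlying checks are no more mysterious than this.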

There are many other environment dependencies, such as required network ports being open, that also need to be monitored on a constant basis. I have seen production outages caused by a security group closing a port because there was no record that the port was needed for a particular application. These are fairly obvious dependencies, and most IT shops are well aware of these requirements. But what about the more subtle environment dependencies that need to be addressed?
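Before turning to those subtler dependencies, it is worth noting that a check for the obvious ones, such as a required port being reachable, can be very small; the hosts and port numbers below are placeholders for whatever your applications actually depend on.

    import socket

    # Placeholder list of (host, port) pairs that your applications depend on.
    REQUIRED_PORTS = [("db-server-01", 5432), ("app-server-01", 8080)]

    for host, port in REQUIRED_PORTS:
        try:
            # Attempt a TCP connection; failure means the port is closed, filtered, or the host is down.
            with socket.create_connection((host, port), timeout=3):
                print(f"OK      {host}:{port}")
        except OSError as error:
            print(f"FAILED  {host}:{port} ({error})")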

I have seen situations where databases stopped working because the user account used by the application to access the database was locked out. Upon investigation, we found that the UAT user account was the same account used in production. In most ways you want UAT and production to match, but in this case locking the user account in UAT took down production. You certainly don't want to use the same account for both UAT and production, and it is also a good idea to set up a job that checks that the database account is always working.
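Such a job can be as small as a connection attempt with the application's account followed by a trivial query. The sketch below uses the psycopg2 PostgreSQL driver as an example; the connection details are placeholders, and the same idea applies to any database.

    import psycopg2

    def database_account_ok() -> bool:
        """Confirm the application account can still log in and run a trivial query."""
        try:
            # Placeholder connection details; in practice they come from a secured configuration store.
            connection = psycopg2.connect(host="db-server-01", dbname="orders",
                                          user="app_user", password="example-only",
                                          connect_timeout=5)
        except psycopg2.OperationalError as error:
            print(f"Database account check failed: {error}")  # e.g., account locked or password expired
            return False
        with connection, connection.cursor() as cursor:
            cursor.execute("SELECT 1")
            cursor.fetchone()
        connection.close()
        return True

    if __name__ == "__main__":
        print("Database account OK" if database_account_ok() else "Database account NOT OK")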

Market data feeds are another example of an environment dependency that may impact your system. This one can be tricky because you may not have control over a third-party vendor who supplies you with data. This is all the more reason why you want to monitor your data feeds and notify the appropriate support people if there is a problem. Cloud-based services may also present some challenges because you may not always be in control of the environment and might have to rely on a third party for support. Establishing a service-level agreement (SLA) is fundamental when you are dependent on another organization for services. You may also find yourself trying to figure out how your cloud-based resources actually work and what you need to do when your service provider makes changes that may be unexpected and not completely understood. I had this experience myself when trying to puzzle my way through all of the options for the Amazon cloud. In fact, it took me a few tries to figure out how to turn off all of the billable options, such as storage and fixed IPs, when the project was over. I am not intending to criticize Amazon per se, but even their own help desk had trouble locating what I needed to remove so that I would stop getting charged for resources that I wasn't using.

To be successful with environment management, you need to establish a knowledge base to gather the essential technical information that may be understood by a few people on the team. Documenting and communicating this information is an important task and often requires effective collaboration among your development, data security, and operations teams.

Many organizations, including financial services firms, are working to establish a configuration management database (CMDB) to facilitate environment management. The ITIL framework provides a considerable amount of guidance on how to establish a CMDB and the supporting configuration management system (CMS), which helps to provide some structure for the information in the CMDB. The CMDB and the CMS must be supported by tools that monitor the environment and report on the status of key dependencies on a constant basis. These capabilities are essential for ensuring that your critical infrastructure is safe and secure.

Many organizations monitor port-level scans and attacks. Network intrusion detection tools such as Snort can help to monitor and identify port-level activity that may indicate an attempt to compromise your system is underway. Ensuring that your runtime environment is secure is essential for maintaining a trusted computing environment. There have been many high-profile incidents that resulted in serious system outages related to port-level system attacks. Monitoring and recognizing this activity is a first step in addressing these concerns.

In complex technology environments, you may find it difficult to really understand all of the environment requirements. This is where tying your support processes into the application lifecycle is essential. When bad things happen, your help desk will receive the calls. Reviewing and understanding incidents can help the entire team identify and address environment-related issues. Make sure that you never have the same problem twice by having reported incidents fully investigated, with any new environmental dependencies identified and monitored on an ongoing basis.

Conclusion

Environment management is a key capability that can help your entire team be more effective. You need to provide a structure to identify environment dependencies and then work with your technical resources to implement tools to monitor those dependencies. If you get this right, then your organization will benefit from reliable systems, and your development and operations teams will be able to spend their time delivering value instead of fighting fires.


ALMtoolbox presents smart performance monitoring and alerting tool, including Free Community Edition

Tel Aviv, Israel – June 28, 2016 – IBM Champion ALMtoolbox, Inc., a firm with offices in the United States and Israel, today announced the availability of a free Community edition of its product ALM Performance, based upon ALM Performance Pro, the company's award-winning commercial environment monitoring solution.

The Community edition of ALM Performance provides a comprehensive set of over twenty environment-monitoring features, including monitoring of your ClearCase VOB, Jenkins, and ClearQuest servers along with their respective JVMs. The product also monitors available storage, required ports, memory, and CPU load, while also checking components of the application itself, such as Jenkins jobs that may be running too long or other possible Jenkins application problems. You can even write your own custom scripts and integrate them with the ALM Performance dashboard. The user interface allows for email alerts, filtering of notifications, and other custom alerts.

Easily upgraded to the Commercial Pro edition for additional scalability and convenience, the Community Edition alone offers the following features.

ALM Performance Highlights – 3 main components:

  • Settings application – a configuration tool for ALM Performance that allows you to add or delete monitored servers and configure each server's checks and parameters
  • Graphical component – the graphical dashboard of the system
  • Monitoring service – the heart of the system; the component that schedules and runs the checks and analyzes the results

ALM Performance can monitor all Linux, UNIX, Mac OS, and Windows versions, and it does so in a non-intrusive and secure manner using the SSH protocol. ALM Performance is installed on a Windows host and can be run on-premise, or it can be run as a cloud service where we run and manage the system while it remotely monitors your servers.

“Over the years, we have provided a variety of robust tools that share our techniques and expertise with the user community, and now we are doing so with performance monitoring. We wanted to share our knowledge with the users and help them improve their skills so that they can respond more effectively when they have to cope with malfunctions or systems suffering from slow response and other forms of latency,” says Tamir Gefen, ALMtoolbox CEO.

“We have built this tool after many years of experience with SCM administration, IT management, and DevOps, and we created it by envisioning a tool that is made a priori for Jenkins, ClearCase, or ClearQuest rather than just another off-the-shelf monitoring tool that users have to spend months planning what to monitor and how to customize. Using this tool, it takes only an hour to start getting status data and insights from your monitored hosts,” says Gefen.

“We always strive to benefit the users’ communities and provide a version that can provide the essential features for each company that uses Jenkins, ClearCase or ClearQuest” says David Cohen, the product manager of the new ALM Performance monitoring tool.

“Since it’s software with an easy installation, we are excited that we are able to provide the Community version, including self-installation, for free”, says Cohen.

To download the product, visit ALMtoolbox and click the Download link.

Updates and support via email, phone and desktop sharing are available with either product.