Personality Matters- Positive Psychology and Learning from Mistakes

By Leslie Sachs

Mistakes happen. But, too often, team members engage in very dysfunctional behavior after they have made mistakes. Even though mistakes are often the best learning experiences, many organizations suffer serious consequences not just because a mistake was made, but often as a direct result of the attempt to save face and cover up after an employee has made a mistake. W. Edwards Deming wisely said that organizations need to “drive out fear”; addressing mistakes in an open and honest manner is essential for any organization striving to excel in today’s competitive business environment. Here’s what we learn from positive psychology on creating an environment where employees can be empowered to address their mistakes in an open and honest way.

Positive psychology teaches us that most people want to cultivate what is best within themselves, and to enhance their experiences of love, work, and play. The trick is to guide your employees into exhibiting appropriate behaviors to accomplish these goals. Otherwise, you may find very dysfunctional behaviors such as hiding mistakes, denial and even blaming others, actions that disrupt the workforce and can adversely impact the business in many ways. Many organizations have siloed infrastructures and cultures which further detract from the organization’s goal of addressing mistakes and resolving problems in the most efficient way possible. DevOps principles and practices can help by encouraging teams to work in a collaborative cross-functional way; supportive teamwork is essential when addressing mistakes. Highly effective teams really need to embrace much more effective and proactive ways of dealing with challenges, including human error.

Positive Psychology focuses on positive emotions, positive individual traits, and positive institutions. This approach is a refreshing change from many schools of psychology which focus more on analyzing the reasons for a variety of anti-social and other problematic personality types which often result in dysfunctional behavior. No doubt some folks do indeed have personality problems which predispose them to managing problems – such as handling their own mistakes – in a way that is not very constructive. But it is equally true that focusing on positive individual traits helps us to see and appreciate the strengths and virtues, such as personal integrity, self-knowledge, self-control, courage and wisdom that come from experience and being nourished in a positive environment. The individual is very important in this context, but it is equally important to consider the organization as a holistic being. Understanding positive behaviors within the company itself entails the study of the strengths that empower team members to address challenges in an effective and creative way. Some examples of issues that should be discussed are social responsibility, civility, tolerance, diversity, work ethic, leadership, and honesty.

Not surprisingly, the best leaders actually exhibit these behaviors themselves and lead by example, which brings us back to how specific individuals handle mistakes. When mistakes occur, does your organization foster a safe and open environment where people can feel that their best course of action is to admit what they did wrong? Do team members assume that their colleagues will drop what they are doing to help out in resolving any problems? Does the team avoid finger-pointing and the blame game to focus instead on problem resolution?

One manager mentioned that he did not focus so much on the unavoidable reality that mistakes will occur. Instead, he focused on encouraging his employees to acknowledge errors freely and then rated the entire team on their ability to work together and address problems productively, regardless of who may have been involved. Positive psychology gives us an effective framework for actually following through on Deming’s direction to “drive out fear.” The most successful organizations take mistakes and make them learning experiences, leading to employees who feel a renewed sense of loyalty and commitment to achieving excellence. Mistakes happen – your challenge is to ensure that, rather than demoralizing or paralyzing people, these missteps instead empower your team to be more effective and successful!

 

[1] Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55, 5–14

[2] Seligman, Martin, Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment, Free Press, New York 2002

[3] Abramson, L. Y.; Seligman, M. E. P.; Teasdale, J. D. (1978). “Learned helplessness in humans: Critique and reformulation”. Journal of Abnormal Psychology 87

[4] Deming, W. Edwards (1986). Out of the Crisis. MIT Press

[5] Aiello, Bob and Leslie Sachs. 2010. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.

Groundbreaking Technology for DevOps Omniscience; JFrog Announces Immediate Availability of JFrog Xray


SANTA CLARA, CA, July 5, 2016, as announced on Marketwired

JFrog, the worldwide leader in infrastructure for software management and distribution, today announced the immediate availability of JFrog Xray, its pioneering product for accelerating software delivery. This powerful new resource provides organizations unprecedented insight into their software packages and containers.

As the industry’s first universal artifact analysis product, JFrog Xray works with all software package formats and a multitude of databases as it deeply and recursively scans every type of binary component ever used in a software project and points out changes or issues impacting the production environment.

“The early response to JFrog Xray has been phenomenal,” said Shlomi Ben Haim, founder and CEO of JFrog. “We’re excited that more and more organizations will now be able to benefit from this pioneering technology for gaining radical transparency into the huge volume and variety of binary components used in development. The combination of validating with security databases, along with acquiring metadata from JFrog Artifactory makes JFrog Xray the only tool in the world that not only scans the container or software package, but also provides a full dependency graph and impact analysis to the user. Our goal is to address the real DevOps pain and not just to send another scanner to the market!”

As organizations increasingly transform from isolated teams and role-specific tools to common delivery pipelines shared by global teams and driven by seamlessly inter-operating tools, they need to understand all the binary artifacts they produce, across all product lines and geographies, and to take into account changes to global application deployment and distribution over time.

JFrog Xray addresses this critical need by providing deep recursive scanning to repeatedly peel back the layers of software components and their accompanying metadata to uncover security vulnerabilities or other issues down to the most fundamental binary component, no matter what binary packaging format the organization uses. This deep scanning of the dependency graph provides organizations the ability to perform impact analysis on changes to their package structure.

JFrog Xray is a fully-automated platform with a powerful REST API, enabling integration with an organization’s CI/CD pipeline as well as with all current and any possible future types of component-scanning technology.

It integrates with a universal range of databases and security platforms so that critical necessities such as security vulnerability analysis, license compliance and component version analysis and assurance become possible not only at build time, but across all of the enterprise’s binary digital assets as well.

JFrog Xray is available now at https://www.jfrog.com/xray/free-trial with a special webinar on JFrog Xray on July 14 — http://bit.ly/23gL143.

More information about JFrog Editions — https://www.jfrog.com/pricing
Resources
Company: https://www.jfrog.com
Open Positions: https://join.jfrog.com
JFrog Artifactory: https://www.jfrog.com/artifactory
JFrog Bintray: https://www.jfrog.com/bintray
JFrog Mission Control: https://www.jfrog.com/mission-control
JFrog Xray: https://www.jfrog.com/xray
Customer testimonials: https://www.jfrog.com/customers
Twitter: https://twitter.com/jfrog
LinkedIn: https://www.linkedin.com/company/jfrog-ltd
JFrog Training Webinars: https://www.jfrog.com/webinars

About JFrog
More than 2,500 paying customers, 60,000 installations and millions of developers globally rely on JFrog’s world-class infrastructure for software management and distribution. Customers include some of the world’s top brands, such as Amazon, Google, LinkedIn, MasterCard, Netflix, Tesla, Barclays, Cisco, Oracle, Adobe and VMware. JFrog Artifactory, the Universal Artifact Repository, JFrog Bintray, the Universal Distribution Platform, JFrog Mission Control, for Universal Repository Management, and JFrog Xray, Universal Component Analyser, are used by millions of developers and DevOps engineers around the world and available as open-source, on-premise and SaaS cloud solutions. The company is privately held and operated from California, France and Israel. More information can be found at www.jfrog.com.

Behaviorally Speaking—Putting Ops Into DevOps

by Bob Aiello
DevOps is a set of principles and practices that helps development and operations teams work more effectively together. DevOps focuses on improving communication and collaboration with the technical goal of providing reliable and flexible release and deployment automation. Much of what is written about DevOps is delivered from the perspective of developers. We see many articles about continuous delivery and deployment describing the virtues of good tooling, microservices, and popular practices around virtualization and containers. In my opinion, DevOps needs to also take an operations view in order to achieve the right balance. This article is all about putting Operations back into DevOps.

Operations professionals are responsible for ensuring that IT services are available without interruption or even degradation in services. IT operations is a tough job and I have worked with many technology professionals who were truly gifted in IT operations with all of its functions and competencies. Many IT operations staff perform essential day-to-day operations tasks that can be very repetitive, although essential in keeping critical systems online and operational. In some organizations, operations engineers are not as highly skilled as their development counterparts. Historically, mainframe operators were focused on punch cards and mounting tapes while programmers were focused on implementing complex business logic. Today we do come across operations engineers who lack strong software engineering skills and training and this can be a very serious problem. When developers observe that operations technicians are not highly skilled then they often stop providing technical information because the developers come to the conclusion that the operations folks cannot understand the technical details. This dynamic can result in consequences that are disastrous for the company, with the most common challenge being that developers feel they should try to bypass operations as often as possible. I have also worked with top notch Unix/Linux gurus in operations who focused on keeping complex systems up and running on a continuous basis. IT operations professionals often embrace the itSMF ITIL v3 framework to ensure that they are implementing industry best practices that ensure reliable IT services. If you are not already aware of ITIL v3 you probably should be.

The ITIL v3 framework describes a robust set of industry best practices designed to ensure continuous operation of IT services. ISACA's COBIT and the SEI's CMMI are also frameworks that are used by many organizations to improve their IT processes, but ITIL is by far the most popular set of guidelines for IT operations. CM professionals should particularly focus on the guidance in the Service Transition section of the ITIL framework, which describes change management, build and release, and configuration management systems (including the CMDB). With all of this guidance, do not forget to start at the beginning with an understanding of the application and systems architecture.

The first thing that I always require is a clear description of the application and systems architecture. This information is essential for gaining a clear understanding of the system as a whole or, in DevOps terminology, for having a full end-to-end systems view. For build and release engineers, understanding the architecture is fundamental because all of our build, release and deployment scripts must be created with an understanding of the architecture involved. In fact, development needs to build applications that are designed for IT Operations.

Many developers focus on Test Driven Development (TDD) where code is designed and written to be testable, often beginning with writing the unit test classes even before the application code itself is written. I have run several large scale automated testing projects in my career and I have always tried to work with the developers to design the systems to be more easily testable. In some cases this actually included hooks to ensure that the test tools could work without finding too many cosmetic superficial issues (which we usually call false positives). Test Driven Development is very effective and it is my view that applications also need to be designed and written with operations in mind. One reason to design applications with IT Operations in mind is to enable the implementation of IT process automation.
Effective IT operations teams rely upon tools including the automated collection of events, alerts and incident management. When an alert is raised or incident reported to the IT Service Desk, the IT Operations team must be able to rely upon IT process automation to facilitate detection and resolution of the incident, preferably before there is customer impact. IT process automation must include automated workflows to enable each member of the team to respond in a clear and consistent way. In practice, it is very common for organizations to have one or two highly skilled subject matter experts who are able to troubleshoot almost any production issue. The problem is that these folks don't always work twenty-four hours a day, seven days a week, and in fact are usually on vacation when problems occur. IT process automation, including workflow automation, enables the operations team to have well-documented and repeatable processes that help ensure IT services are delivered in a reliable and consistent way. Getting these procedures right must always start with the application build.

Effective build automation often includes key procedures such as embedding immutable version IDs into configuration items to facilitate the physical configuration audit. For example, a C#/.NET application should have a version identifier embedded into the assembly. You can embed version IDs via an MSBuild script or using the Visual Studio IDE. The Microsoft .NET MSIL Disassembler (Ildasm.exe) can be used to look inside of a .NET assembly and display the version ID. There are similar techniques in Java/C/C++ along with almost every other software development technology. These techniques are essential for IT operations to be able to confirm that the correct binary configuration items are in place and that there have not been any unauthorized changes. Builds are important, but continuously deploying code starting from very early in the development lifecycle is also a critical DevOps function that helps IT operations to be more effective.
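As a minimal, language-neutral sketch of the same idea, the build could stamp a version record into a small properties file that is packaged with the release and read back during the physical configuration audit. The file name, environment variable, and use of Git below are illustrative assumptions, not part of any specific build framework.

```python
#!/usr/bin/env python3
"""Sketch: stamp an immutable version ID into a build artifact so a physical
configuration audit can later verify exactly what was shipped. The file name
'version.properties' and the BUILD_NUMBER variable are illustrative assumptions."""

import os
import subprocess
from datetime import datetime, timezone

def stamp_version(output_path="version.properties"):
    # Capture the commit hash from source control (Git assumed here).
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()
    build_number = os.environ.get("BUILD_NUMBER", "0")  # e.g., set by the CI server
    with open(output_path, "w") as f:
        f.write(f"commit={commit}\n")
        f.write(f"build={build_number}\n")
        f.write(f"built_at={datetime.now(timezone.utc).isoformat()}\n")
    return output_path

def read_version(path="version.properties"):
    # Operations can read the stamped IDs back from the deployed package
    # and compare them against the release record during an audit.
    with open(path) as f:
        return dict(line.strip().split("=", 1) for line in f if "=" in line)

if __name__ == "__main__":
    stamp_version()
    print(read_version())
```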

Application automation is a key competency in any effective DevOps environment. Continuous Delivery enables the IT operations team to rehearse and streamline the entire deployment process. If this is done right then the operations team can support many deployments while still maintaining a high level of service and support. The best practice is to move the application build, package and deployment process upstream and begin the effort with supporting development test environments. These automated procedures are not trivial and it will take some time to get them right. The sooner in the lifecycle you begin this effort, the sooner your procedures will be mature and reliable. Since organizations have to pay someone to deploy code to the development and testing environments, it is a great idea to have the person who will deploy to production do this work and get the experience and training to understand and help evolve the deployment automation. The practice of involving operations from the beginning of the lifecycle has become known as shift-left. IT operations depends upon a reliable automated deployment framework for success, and getting Ops involved from the beginning helps you get that work done.
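One way this could look in practice is a single deployment routine driven by per-environment configuration, so the automation exercised in development and QA is exactly what eventually runs in production. The hosts, paths, service name, and use of rsync/ssh below are assumptions for illustration only.

```python
#!/usr/bin/env python3
"""Illustrative sketch: one deployment routine, parameterized by environment,
so production deploys are rehearsed many times before release.
Hosts, paths, and the rsync/ssh commands are placeholder assumptions."""

import subprocess
import sys

ENVIRONMENTS = {
    "dev":  {"host": "dev-app01",  "path": "/opt/myapp"},
    "qa":   {"host": "qa-app01",   "path": "/opt/myapp"},
    "prod": {"host": "prod-app01", "path": "/opt/myapp"},
}

def deploy(env_name: str, artifact: str) -> None:
    env = ENVIRONMENTS[env_name]
    target = f"{env['host']}:{env['path']}/"
    # Copy the versioned artifact to the target host; the same command is used
    # for every environment, which is the point of shifting the work left.
    subprocess.run(["rsync", "-av", artifact, target], check=True)
    # Restart the service remotely (service name is a placeholder).
    subprocess.run(["ssh", env["host"], "sudo", "systemctl", "restart", "myapp"], check=True)

if __name__ == "__main__":
    deploy(sys.argv[1], sys.argv[2])   # e.g., deploy.py qa myapp-1.4.2.tar.gz
```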

IT operations is a key stakeholder in any DevOps transformation. It is all too common for development to miss the importance of partnering effectively with operations to develop effective procedures that ensure uninterrupted IT services. If you want to excel then you need to keep Operations in your DevOps transformation!

Personality Matters- Learned Complacency and Systems Glitches

By Leslie Sachs

System glitches have impacted many high-profile trading systems, including the July 2015 New York Stock Exchange (NYSE) systems outage. That outage was initially feared to be the result of a cyber attack, but investigation determined it to be the result of a faulty software upgrade. The NYSE is not the only trading venue suffering outages during systems upgrades. In April 2013, the Chicago Board Options Exchange (CBOE) also suffered a high-profile systems glitch, which shut down the CBOE trading system and was among a series of incidents impacting large trading firms, including NASDAQ, which are expected to be highly secure and reliable. It is often believed that these, and similar, outages are the result of the complexity and challenges inherent in upgrading complex mission critical financial trading systems. Given that similar outages occurred at other major stock exchanges and trading firms, one might be tempted to think that the CBOE debacle was unremarkable. What is striking is that there was a published report that employees knew in advance that the system was not working correctly and yet the CBOE nonetheless chose not to fail over to its backup system. In our consulting practice, we often come across technology professionals who try to warn their management about risks and the possibility of systems outages that could impact essential services. During the CM Assessment that we conduct, we often find ourselves being the voice for validating and communicating these concerns to those who are in a position to take appropriate action. What is troubling, however, is that we have seen many companies where employees have essentially learned to no longer raise their concerns because there is no one willing to listen and, even worse, they may have suffered consequences in the past for being the bearer of bad tidings. We refer to this phenomenon as learned complacency.

Some people are more passive than others. This may come from a personality trait where the person feels that getting along with others is more important than blazing a new trail and standing up for one's own convictions. Many people strongly desire to just go along with the crowd and psychologists often refer to this personality trait as agreeableness, one of the primary personality traits in the well-known Big Five [1]. This personality trait can be very problematic in certain situations and some people who like to avoid conflict at all costs display a dysfunctional behavior known as passive-aggressiveness. A passive-aggressive person typically refuses to engage in conflict, choosing instead to outwardly go along with the group, while inwardly deeply resenting the direction that they feel is being forced upon them. People with a passive-aggressive personality trait may outwardly appear to be agreeable, but deep down they are usually frustrated and dissatisfied and may engage in behaviors that appear to demonstrate acquiescence, yet actually do nothing or even obstruct progress, albeit in a subtle manner. Some IT professionals who have a passive (or passive-aggressive) personality trait may be less than willing to warn their managers about systems problems that could cause a serious outage.

We have seen people who simply felt that although they were close enough to the technology to identify problems, they could not escalate a serious issue to their management, because it simply was not their job. In some cases, we have come across folks who tried to warn of pending problems, but were counseled by their managers to not be so outspoken. Bob Aiello describes one manager who frequently used the phrase, “smile and wave” to encourage his staff to tone down their warnings since no one really wanted to hear them anyway. Not surprisingly, that organization has experienced serious systems outages which impacted thousands of customers. But not everyone is afraid to stand and be heard. What often distinguishes employees is their own natural personality traits, including those associated with being a strong leader.

Technology leaders know how to maintain a positive demeanor and focus on teamwork, while still having the courage to communicate risks that could potentially impact the firm. The recent rash of serious systems outages certainly demonstrates the need for corporations to reward and empower their technical leaders to communicate problems without fear of retribution. Deming said, “drive out fear” and there is certainly no greater situation where we need leaders to be fearless than when warning of a potential problem that could have a significant impact upon large-scale production IT systems.

While some people may be predisposed to avoid conflict, the greater problem is when a corporation develops a culture where employees learn to maintain silence even when they are aware of potential problems. The IT industry needs leaders who are accountable, knowledgeable and empowered to create working environments where those who protect the long-term best interests of the firm are rewarded and those who take short-sighted risks are placed in positions where they cannot adversely impact the well-being of the firm. We will see fewer systems outages when each member of the team understands their own role in the organization and feels completely safe and empowered to speak truthfully about risks and potential problems that may impact their firm's critical systems infrastructure. There are times when risk-taking is appropriate and may result in significant rewards. However, firms which take unnecessary risks endanger not only their own corporation, but may impact thousands of other people as well. Those firms with thoughtful IT leadership and a strong truthful and open culture will achieve success while still managing and addressing risk in an appropriate and effective way.

References

[1] http://www.psychometric-success.com/personality-tests/personality-tests-big-5-aspects.htm

[2] Byrne, Donn. 1974. An Introduction to Personality: Research, Theory, and Applications. Prentice-Hall Psychology Series.

[3] Appelo, Jurgen. 2011. Management 3.0: Leading Agile Developers, Developing Agile Leaders. Addison-Wesley Signature Series.

[4] Aiello, Bob and Leslie Sachs. 2011. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.

[5] Aiello, Bob and Leslie Sachs. 2016. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional.

Leslie Sachs is a New York state-certified school psychologist and the COO of Yellow Spider. She is the co-author of Configuration Management Best Practices: Practical Methods that Work in the Real World. Leslie has more than twenty years of experience in the psychology field and has worked in a variety of clinical and business settings, where she has provided many effective interventions designed to improve the social and educational functioning of both individuals and groups. She has an MS in school psychology from Pace University and interned at Bellevue Psychiatric Center in New York City. A firm believer in the uniqueness of every individual, she has recently done advanced training with Mel Levine's All Kinds of Minds Institute. She may be reached at LeslieASachs@gmail.com, or link with her at http://www.linkedin.com/in/lesliesachs.

Behaviorally Speaking – Why Retailers Need A Secure Trusted Application Base

by Bob Aiello

Target's well-publicized disclosure that customers' Personally Identifiable Information (PII) had been compromised was a stark reminder that retailers need to ensure that their systems are secure and reliable. The 2013 incident resulted in a settlement of over 39 million dollars, which Target had to pay banks and other financial services firms to compensate them for losses stemming from the cybersecurity breach. Target is not alone, as other retailers are stepping forward and admitting that they, too, have been "victims" of a cyber-attack. Target CEO Gregg Steinhafel called for chips on credit cards while also admitting that there was malware on his point-of-sale machines. Mr. Steinhafel ultimately lost his job after the incident, which should emphasize that corporate leaders are being held accountable for their corporate systems' security and reliability. In Target's case, the malware was not discovered on the retailer's machines despite the use of malware scanning services. The fact that retailers rely upon security software to detect the presence of a virus, Trojan or other malware is exactly what is wrong with the manner in which these executives are looking at this problem.

The problem is that malicious hackers do not give us a copy of their code in advance so that virus protection software vendors can make security products capable of recognizing the “signature” of an attack. This means that we are approaching security in a reactive manner only after the malicious code is already on our systems. What we need to be doing is building secure software in the first place, and to do this you need a secure trusted application base which frankly is not really all that difficult to accomplish. Creating secure software has more to do with the behavior and processes of your development, operations, information security and QA testing teams than the software or technology you are using. We need to be building, packaging and deploying code in a secure and trusted way such that we know exactly what code should be on a server. Furthermore, we also need to be able to detect unauthorized changes which occur, either through human error or malicious intent. The reason that so much code is not secure and reliable is that we aren’t building it to be secure and reliable and it is about time that we fixed this readily-addressed problem. We discuss how to create an effective Agile ALM using DevOps in our new book.

Whether your software system is running a nuclear power plant, grandpa’s pacemaker or the cash register at your favorite retailer, software should be built, packaged and deployed using verifiable automated procedures that have built-in tests to ensure that the correct code was deployed and that it is running as it should. In the IEEE standards, this is known as a physical and functional configuration audit and is among the most essential configuration management procedures required by most regulatory frameworks, and for very good reason. If you use Ant, Maven, Make or MSBuild to compile and package your code, you can also use cryptographic hashes to sign your code using a private key in a technique that is commonly known as asymmetric cryptography. This isn’t actually all that difficult to do and many build frameworks have the functions already built into the language. Plus, there are many reliable free and open source libraries available to help automate these tasks. It is unfortunate, not to mention rather costly, that many companies don’t take the time to implement these procedures and best practices as they rush their updates to market without the most basic security built in from the beginning of the lifecycle.
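For illustration, here is a minimal sketch of the kind of manifest-signing step the paragraph describes: hash every build output, then sign the manifest with a private key so that anyone holding the public key can later verify the release. The directory names, key path, and use of Python's hashlib plus the third-party cryptography package are assumptions, not a description of any particular vendor's tooling.

```python
#!/usr/bin/env python3
"""Sketch of hashing and signing build outputs (requires 'pip install cryptography').
File names and key locations are placeholder assumptions."""

import hashlib
import json
from pathlib import Path
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def build_manifest(root: str) -> dict:
    # Record a SHA-256 digest for every file produced by the build.
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def sign_manifest(manifest: dict, key_path: str) -> bytes:
    payload = json.dumps(manifest, sort_keys=True).encode()
    private_key = serialization.load_pem_private_key(
        Path(key_path).read_bytes(), password=None
    )
    # Asymmetric signature: only the build system holds the private key,
    # while anyone with the public key can verify the manifest.
    return private_key.sign(
        payload,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

if __name__ == "__main__":
    manifest = build_manifest("build/output")
    signature = sign_manifest(manifest, "keys/release-signing.pem")
    Path("build/manifest.json").write_text(json.dumps(manifest, indent=2))
    Path("build/manifest.sig").write_bytes(signature)
```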

We have had enough trading firms, stock exchanges and big banks suffer major outages that impacted their customers and shareholders. It is about time that proper strategies be employed to build in software reliability, quality and security from the beginning of the lifecycle instead of just trying to tack it on at the end – if there is enough time. The Obamacare healthcare.gov website has also been cited as having serious security flaws and there are reports that the security testing was skipped due to project timing constraints. The DevOps approach of building code through automated procedures and deploying to a production-like environment early in the lifecycle is essential in enabling information security, QA, testing and other stakeholders to participate in helping to build quality systems that are verifiable down to the binary code itself. If you have put in place the procedures needed to detect any unauthorized changes then your virus detection software should not need to detect the signature of a specific virus, Trojan or other malware.

Using cryptography, I can create a secure record of the baseline that allows me to proactively ascertain when a binary file or other configuration item has been changed. When I baseline production systems, I sometimes find that, to my surprise, there are files changing in the environment that I do not expect to be changing. Often, there is a good explanation. For example, some software frameworks spawn off additional processes and related configuration files to handle additional volume. This is particularly a problem with frameworks that are commonly used to write code faster. These frameworks are often very helpful, but sometimes they are not necessarily completely understood by the technology team using them. Baselining your codelines will actually help you understand and support your environment more effectively when you learn what is occurring on a day-to-day basis. There is some risk that you might have some false positives in which you think that you have a virus or other malware when in fact you can determine that there is a logical explanation for the changed files (and that information can be stored in your knowledge management system for next time).
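A simple sketch of that baseline comparison, under the assumption that a previously recorded JSON file of SHA-256 hashes is available, might look like the following; the paths are illustrative only.

```python
#!/usr/bin/env python3
"""Sketch of baseline drift detection: compare the current state of a production
directory against a stored baseline of SHA-256 hashes. Paths are assumptions."""

import hashlib
import json
from pathlib import Path

def snapshot(root: str) -> dict:
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def compare(baseline: dict, current: dict) -> dict:
    added    = sorted(set(current) - set(baseline))
    removed  = sorted(set(baseline) - set(current))
    modified = sorted(f for f in baseline.keys() & current.keys()
                      if baseline[f] != current[f])
    return {"added": added, "removed": removed, "modified": modified}

if __name__ == "__main__":
    baseline = json.loads(Path("baseline.json").read_text())
    drift = compare(baseline, snapshot("/opt/myapp"))
    # Anything reported here is either an expected framework behavior to record
    # in the knowledge base, or an unauthorized change to investigate.
    print(json.dumps(drift, indent=2))
```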

The Target point-of-sale (POS) devices should have been provisioned using automated procedures that could also be used to immediately identify any code on the machine (or networking device) that was not placed there by the deployment team. Identifying malware is great, but identifying that your production baseline has been compromised is infinitely more helpful. When companies start embracing DevOps best practices then large enterprise systems will become more reliable and secure, thus helping the organization better achieve their business goals!

Pick up a copy of our new book on Agile Application Lifecycle Management to learn more about exactly what you need to do in order to create the secure trusted application base!

Personality Matters- Type “A’s” in DevOps

By Leslie Sachs

Technology teams often attract some "interesting" personalities. Some of these folks may simply exhibit odd, perhaps eccentric, behaviors unrelated to their work responsibilities while others may engage in behaviors that undermine the effectiveness of the team or, perhaps conversely, actually stimulate teamwork and contribute to success. The personalities of the people on your team certainly impact not only how happy you are to show up for work, but often they directly impact the overall success (or failure) of the organization as well. But what happens when members of your team exhibit overly aggressive or downright combative behaviors, such as insisting that the team adopt their approach or showing a lack of teamwork and collaborative behavior? You may find yourself on a team with some "interesting" personalities one day. Successful managers give careful consideration to the personalities on their teams. Since you're unlikely to change your colleagues' MOs, it is wise to consider instead how everyone's styles interact.

DevOps efforts can benefit from some typical "Type A" or "Type B" behaviors. First, a quick overview of the history: Dr. Meyer Friedman was a cardiologist, trained at Johns Hopkins University, who developed a theory that certain behaviors increased the risk of heart attacks [1]. Dr. Friedman developed this theory with another cardiologist by the name of Dr. Ray Rosenman. These two physicians suggested that people who exhibited type "A" behaviors (TAB), including being overly competitive, hard driving and achievement-oriented, were at higher risk for developing coronary artery disease. Fascinating, and not without some controversy in the medical establishment, this research also makes one ponder how other members of the team might react to interacting regularly with a type "A" personality on the team.

Software development is largely a “Type A” endeavor. The fact is that many highly-effective teams have lots of members who are very aggressive, intense and highly competitive. One important mitigating factor is that technology professionals also need to be right. You can exhibit whatever personality traits you want and the software just won’t work if you didn’t get it right. The next issue is that technology is so complex that few people, if any, in today’s global organizations, are able to work entirely on their own. High-performing teams often have specialists who depend upon each other and must collaborate. Even though some degree of competition may be common, frequent collaboration is just not optional. If you have ever been in a meeting with someone who just stuck to their point despite objections from other team members (and seemingly oblivious to any sense of logic) then you probably have seen this type of behavior. Still, many technology teams often struggle to overcome a fair amount of conflict and drama. In the midst of a highly confrontational meeting, it might be tempting to consider what life would be like with the more easy going “Type B” personalities. However, Harvey Robbins and Michael Finley point out that some teams don’t work well when their leaders are unwilling to fight for the team [2].

So, how exactly can one determine what is the right amount of “Type A” versus “Type B” behavior in a DevOps team? As noted in previous articles, there is a natural tension between the aggressive behavior of highly motivated software developers and the operations professionals who are charged with ensuring that we maintain consistent and continuously available services. Operations often focuses on maintaining the status quo while development presses hard for introducing new features based upon customer demand. It shouldn’t surprise you that both types of behavior are essential for a successful DevOps organization. You need to have aggressive personalities with a strong achievement-focused drive to create new features and improved systems. But you also need to temper this aggressiveness with the more balanced “Type B” behaviors that encourage review and careful analysis.

This balance is exactly what the burgeoning DevOps movement brings to the table. DevOps brings together the views of folks engaged in QA activities, software development, operations and also information security. Keep in mind that many people are attracted to each of these essential disciplines, in part, due to their personalities as well as by how these roles fit into the goals and objectives of their respective teams. DevOps brings together and improves communications between teams; it also brings together stakeholders with different viewpoints and often very different personalities. Successful teams harness this diversity to help ensure that their cross-functional teams are more effective. The most effective managers understand the basic personalities and communication styles that are often found in cross-functional teams and become adept at developing strategies which utilize these differences productively. With encouragement, competitive “Type A’s” and more laid-back “Type B’s” can learn to “play nice” so that each of their strengths are incorporated and contribute to overall team success!

References:
[1] Meyer Friedman, Type A Behavior: Its Diagnosis and Treatment (Prevention in Practice Library). Springer, 1996
[2] Harvey Robbins and Michael Finley, Why Teams Don’t Work – What Went Wrong and How to Make it Right, Peterson’s Pacesetter Books, 1995
[3] Bob Aiello and Leslie Sachs, Configuration Management Best Practices: Practical Methods that Work, Addison-Wesley Professional, 2011
[4] Bob Aiello and Leslie Sachs, Agile Application Lifecycle Management – Using DevOps to Drive Process Improvement, Addison-Wesley Professional, 2016

Behaviorally Speaking – Software Safety

by Bob Aiello

Software impacts our world in many important ways. Almost everything that we touch from the beginning to the end of our day relies upon software. For example, airline flight controls and nuclear power plants all rely upon complex software code that must be updated from time to time, tested and supported. Past incidents have impacted the 911 emergency dispatch system, and that was not the only time that emergency dispatch systems have suffered outages affecting response times for police, ambulance and fire department services. The software that enables the anti-missile defense system known as the Iron Dome in Israel has been credited with saving lives and underwent an extensive testing and validation effort. But the number of software glitches impacting trading systems and other complex financial systems could cause us to question whether our capability to manage software configuration is really where it should be.

Many years ago, I was interviewed by a very smart technology manager for a position supporting a major New York based stock exchange. I went into the interview feeling pretty confident that I had the requisite skills and actually had been recommended by a manager who I had worked for previously at another company. I was surprised when during the interview I was asked a very pointed question about my capabilities. The manager asked me to imagine that I was supporting the software for a life support system which my loved one depended upon. He then asked me if I was confident that I would never make a mistake that could potentially impact the person (presumably my child, parent or spouse) who was dependent upon the life support system. I was pretty shocked at this question being posed during a job interview, but I managed to stay positive and told the manager that my methods worked and that, yes, I would trust them on a life support system that could potentially impact someone I cared about. But the question stayed with me for years to come. The truth is that someone has to upgrade the software used by life support systems and I am not completely confident that our industry has completely reliable methods to handle this work.

Some time ago I gave a full-day class at a Nuclear Information Technology Strategic Leadership (NITSL) conference. The NITSL is an industry group of nuclear generation utilities that exchange information related to information technology management and quality issues. I am pleased to say that these colleagues valued software safety to such a degree that it was an ingrained aspect of their culture which impacted every aspect of their daily work.

From a configuration management perspective, the first step in software safety must be to establish the trusted base from the systems software to applications that are integrated with the hardware devices. The trusted base must start from the lowest levels of the system including the firmware, operating system and even the hardware itself. Applications must be built, packaged and deployed deterministically to the trusted base in a manner that ensures that we know exactly what code is to be deployed and that we can verify that the correct code actually was indeed deployed to the target environment. Equally important is verifying that no unauthorized changes have occurred and that the trusted base is verifiable and fully tested. If you had a pacemaker that required software updates, obviously it would be essential that you can rely upon there being a trusted base that enables the pacemaker to function reliably and correctly.

Past outages at major stock exchanges and trading firms have shown that many complex financial systems obviously do not have an established trusted computing base and that has directly resulted in very steep losses for some firms and impacted thousands of people. The good news is that we actually do know how to build, package and deploy software reliably. We also know how to verify that the right code was deployed and that there are no unauthorized changes. These best practices are precisely what we discuss in application build, package and deployment including DevOps, although many firms struggle with their successful implementation. The key to success is to start from the beginning.

In my consulting work, I often find that companies actually do know what has to be done to reliably build, package and deploy software successfully. The problem is that they often begin doing the right thing much too late in the application lifecycle. Deming teaches us that quality must be built in from the beginning. The same is especially true when considering software safety.

Successful build and release engineers understand that smoke testing after a deployment is essential for a successful build and release process. When the software matters then you need to be verifying and validating the code from the very beginning to the end of the lifecycle. This means that your build stream should include unit testing, functional and non-functional (e.g. performance testing) and of course comprehensive regression testing. Good configuration management practices allow you to build a version of the code that can be instrumented for comprehensive code analysis and exhaustive automated testing. The truth is that these best practices are most successful when they are supported from the very beginning of the lifecycle and are a fundamental part of the culture of the organization. Don’t forget that the build and deploy pipeline must also be verifiable and trusted.
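As a small illustration of the post-deployment smoke test mentioned above, a pipeline step could confirm that the service answers and that the version it reports matches the release that was just deployed. The /health and /version endpoints, the hostname, and the JSON shape are assumptions made purely for the sketch.

```python
#!/usr/bin/env python3
"""Illustrative post-deployment smoke test. Endpoints, host, and the expected
version string are placeholder assumptions."""

import json
import sys
import urllib.request

def smoke_test(base_url: str, expected_version: str) -> bool:
    # First, confirm the service is up at all.
    with urllib.request.urlopen(f"{base_url}/health", timeout=10) as resp:
        if resp.status != 200:
            print(f"FAIL: health check returned {resp.status}")
            return False
    # Then confirm the running version matches what was just released.
    with urllib.request.urlopen(f"{base_url}/version", timeout=10) as resp:
        deployed = json.load(resp).get("version")
    if deployed != expected_version:
        print(f"FAIL: expected {expected_version}, found {deployed}")
        return False
    print("PASS: service is up and running the expected release")
    return True

if __name__ == "__main__":
    ok = smoke_test("https://qa-app01.example.com", sys.argv[1])
    sys.exit(0 if ok else 1)
```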

When I create an automated build and deployment system, I start from the ground up, verifying the operating system itself and all of the system dependencies. I only trust the trusted base if I am able to verify it on a continuous basis, and this becomes for me part of environment management (and monitoring). For example, the Center for Internet Security (CIS) provides an excellent consensus standard that explains in great detail exactly how to create a secure Linux operating system. You will also find that the consensus standard provides example code for verifying that the security baseline is configured as it should be. Successful security engineering involves both configuring the operating system correctly and verifying on an ongoing basis that it stays configured in a secure way. This is fundamentally a core aspect of environment monitoring and is essential for ensuring the trusted base.
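To give a feel for what such ongoing verification can look like, here are two illustrative checks in the spirit of a CIS-style baseline audit. They are examples only, not the actual CIS benchmark content, and the specific settings chosen are assumptions.

```python
#!/usr/bin/env python3
"""Two example OS-baseline checks, illustrative of (not taken from) a CIS-style audit."""

import stat
from pathlib import Path

def check_sshd_no_root_login(config="/etc/ssh/sshd_config") -> bool:
    # Expect 'PermitRootLogin no' to be set (simplified parsing for the sketch).
    for line in Path(config).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "PermitRootLogin" and parts[1].lower() == "no":
            return True
    return False

def check_passwd_permissions(path="/etc/passwd") -> bool:
    # /etc/passwd should not be writable by group or other.
    mode = Path(path).stat().st_mode
    return not (mode & (stat.S_IWGRP | stat.S_IWOTH))

if __name__ == "__main__":
    results = {
        "sshd_root_login_disabled": check_sshd_no_root_login(),
        "passwd_permissions_ok": check_passwd_permissions(),
    }
    for check, passed in results.items():
        print(f"{check}: {'PASS' if passed else 'FAIL'}")
```

Run on a schedule, checks like these report whether the baseline still matches the configuration that was originally verified.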

Software safety requires that systems be built and configured in a secure and reliable way. Changes need to be tracked and verified, which is essentially the purpose of the physical configuration audit. There's more to software safety, and I hope that you will contact me to share your views on software safety best practices and get involved with the community-based efforts to update software safety standards!

Call for Articles!


Hi Everyone!

I am excited to invite you to get involved with the Agile ALM Journal by contributing your own articles on Agile ALM and DevOps along with all aspects of software and systems development. The Agile ALM Journal provides guidance on Application Lifecycle Management which means that we have a strong focus on DevOps, Configuration Management and software methodology throughout the entire ALM. Articles are typically 900 – 1200 words and should explain how to do some aspect of software methodology. Contact me directly to get involved with submitting your articles and I will help you with getting started, forming your ideas and editing your article for publication.

Common topics include:

  • Software development approaches including agile
  • DevOps throughout the entire software process
  • Configuration Management (including the CMDB)
  • Build and Release Engineering
  • Source Code Management including branching and streams
  • Deployment Engineering (DevOps)
  • Continuous Testing
  • Development in the Cloud
  • Continuous Integration and Deployment
  • Environment Management
  • Change Management

and much more!

Bob Aiello
Editor
bob.aiello@ieee.org
http://www.linkedin.com/in/BobAiello

Monitoring Your Environment


Monitoring your runtime environment is an essential function that will help you proactively identify potential issues before they escalate into incidents and outages. But environment monitoring can be pretty challenging to do well. Unfortunately, environment management is often overlooked and, even when addressed, usually only handled in the simplest way. Keeping an eye on your environment is actually one of the most important functions for IT operations. If you spend the time understanding what really needs to be monitored and establish effective ways of communicating events, then your systems will be much more reliable—and you will likely get a lot more sleep without so many of those painful calls in the middle of the night. Here’s how to get started with environment management.

The ITIL v3 framework provides pretty good guidance on how to implement an effective environment management function. The first step is to identify which events should be monitored and establish an automated framework for communicating the information to the stakeholders who are responsible for addressing problems when they occur. The most obvious environment dependencies are basic resources such as available memory, disk space, and processor capacity. If you are running low on memory, disk space, or any other physical resource, then obviously your IT services may be adversely impacted. Most organizations understand that employees need to monitor key processes and identify and respond to abnormal process termination. Nagios is one of the popular tools to monitor processes and communicate events that may be related to processes being terminated unexpectedly.
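For a concrete flavor of the resource checks described above, the following is a minimal sketch of a Nagios-style plugin that follows the standard plugin exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL). The thresholds are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Sketch of a Nagios-style disk space check; thresholds are illustrative."""

import shutil
import sys

def check_disk(path="/", warn_pct=80, crit_pct=90) -> int:
    usage = shutil.disk_usage(path)
    used_pct = 100 * usage.used / usage.total
    if used_pct >= crit_pct:
        print(f"CRITICAL - {path} is {used_pct:.1f}% full")
        return 2
    if used_pct >= warn_pct:
        print(f"WARNING - {path} is {used_pct:.1f}% full")
        return 1
    print(f"OK - {path} is {used_pct:.1f}% full")
    return 0

if __name__ == "__main__":
    sys.exit(check_disk())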

There are many other environmental dependencies, such as ports being opened, that also need to be monitored on a constant basis. I have seen production outages caused by a security group closing a port because there was no record that the port was needed for a particular application. These are fairly obvious dependencies, and most IT shops are well aware of these requirements. But what about the more subtle environment dependencies that need to be addressed?
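A small port-reachability check of this kind is easy to automate, so a firewall or security change that closes a required port is noticed before users report an outage. The host and port list below is an assumption for illustration.

```python
#!/usr/bin/env python3
"""Sketch of a required-port reachability check; the host/port list is a placeholder."""

import socket

REQUIRED_PORTS = [("app01.example.com", 8443), ("db01.example.com", 5432)]

def port_open(host: str, port: int, timeout=5) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in REQUIRED_PORTS:
        status = "open" if port_open(host, port) else "CLOSED"
        print(f"{host}:{port} {status}")
```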

I have seen situations where databases stopped working because the user account used by the application to access the database locked up. Upon investigation, we found that the UAT user account was the same account used in production. In most ways, you want UAT and production to match, but in this case locking up the user account in UAT took down production. You certainly don’t want to use the same account for both UAT and production, and it may be a good idea to set up a job that checks to ensure that the database account is always working.
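A scheduled job along these lines might simply attempt to log in with the application's own account and raise an alert if the connection fails. The sketch below assumes PostgreSQL accessed via psycopg2, with credentials supplied through environment variables; all of those details are assumptions, not part of the incident described above.

```python
#!/usr/bin/env python3
"""Sketch of a scheduled check that the application's database account can still
log in, so a locked or expired account is caught before it takes down production.
Assumes PostgreSQL via psycopg2; the environment variables are placeholders."""

import os
import sys
import psycopg2

def check_app_account() -> bool:
    try:
        conn = psycopg2.connect(
            host=os.environ["APP_DB_HOST"],
            dbname=os.environ["APP_DB_NAME"],
            user=os.environ["APP_DB_USER"],
            password=os.environ["APP_DB_PASSWORD"],
            connect_timeout=10,
        )
        conn.close()
        print("OK - application database account can log in")
        return True
    except psycopg2.OperationalError as exc:
        # A locked, expired, or mis-provisioned account shows up here;
        # alert the on-call DBA / operations team.
        print(f"CRITICAL - application account cannot connect: {exc}")
        return False

if __name__ == "__main__":
    sys.exit(0 if check_app_account() else 2)
```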

Market data feeds are another example of an environment dependency that may impact your system. This one can be tricky because you may not have control over a third-party vendor who supplies you with data. This is all the more reason why you want to monitor your data feeds and notify the appropriate support people if there is a problem. Cloud-based services may also provide some challenges because you may not always be in control of the environment and might have to rely on a third party for support. Establishing a service-level agreement (SLA) is fundamental when you are dependent on another organization for services. You may also find yourself trying to figure out how your cloud-based resources  actually work and what you need to do when your service provider makes changes that may be unexpected and not completely understood. I had this experience myself when trying to puzzle my way through all of the options for Amazon Cloud. In fact, it took me a few tries to figure out how to turn off all of the billable options such as storage and fixed IPs when the project was over. I am not intending to criticize Amazon per se but even their own help desk had trouble locating what I needed to remove so that I would stop getting charged for resources that I wasn’t using.

To be successful with environment management, you need to establish a knowledge base to gather the essential technical information that may be understood by a few people on the team. Documenting and communicating this information is an important task and often requires effective collaboration among your development, data security, and operations teams.

Many organizations including financial services are working to establish a configuration management database (CMDB) to facilitate environment management.  The ITIL framework provides a considerable amount of guidance on how to establish a CMDB and the supporting configuration management system (CMS), which helps to provide some structure for the information in the CMDB. The CMDB and the CMS  must be supported by tools that monitor the environment and report on the status of key dependencies on a constant basis. These capabilities are essential for ensuring that your critical infrastructure is safe and secure.

Many organizations monitor port level scans and attacks. Network intrusion detection tools such as SNORT can help to monitor and identify port-level activity that may indicate an attempt to compromise your system is underway. Ensuring that your runtime environment is secure is essential for maintaining a trusted computing environment. There have been many high-profile incidents that resulted in serious system outages related to port-level system attacks. Monitoring and recognizing this activity is a first step in addressing these concerns.

In complex technology environments you may find it difficult to really understand all of the environment requirements. This is where tying your support processes into the application lifecycle is essential. When bad things happen, your help desk will receive the calls. Reviewing and understanding incidents can help the entire team identify and address environment-related issues. Make sure that you never have the same problem twice by having reported incidents fully investigated with new environmental dependencies identified and monitored on an ongoing basis.
Conclusion

Environment management is a key capability that can help your entire team be more effective. You need to provide a structure to identify environment dependencies and then work with your technical resources to implement tools to monitor them. If you get this right, then your organization will benefit from reliable systems, and your development and operations teams will be able to work together far more effectively.

ALMtoolbox presents smart performance monitoring and alerting tool, including Free Community Edition


Tel Aviv, Israel – June 28, 2016 –  IBM Champion ALMtoolbox, Inc., a firm with offices in the United States and Israel, today announced availability of a free Community edition product called ALM Performance, based upon ALM Performance Pro, their award-winning environment monitoring commercial solution.

The Community edition of ALM Performance provides a comprehensive set of over twenty environment-monitoring features, including monitoring your ClearCase VOB, Jenkins, and ClearQuest servers along with their respective JVMs. The product also monitors available storage, required ports, memory and CPU load, while also checking components of the application itself, such as Jenkins jobs which may be running too long or other possible Jenkins application problems. You can even write your own custom scripts and integrate them with the ALM Performance dashboard. The user interface allows for email alerts, filtering notifications and other custom alerts.

Easily upgraded to the Commercial Pro edition for additional scalability and convenience, the Community Edition alone offers the following features.

ALM Performance Highlights – 3 main components:

  • Settings application – configuration tool for the ALM Performance, allows you to add or delete monitored servers and configure the server’s checks and parameters
  • Graphical component – graphical dashboard of the system.
  • Monitoring service – the heart of the system; this component schedules checks, runs the tests, and analyzes the results.

ALM Performance can monitor all Linux, UNIX, Mac OS and Windows versions and it does so in a non-intrusive and secure manner, using advanced SSH protocol. ALM Performance is installed on a Windows host and can be run on-premise or it can be run as a cloud service where we run and manage the system while it remotely monitors your servers.

“Over the years, we have provided a variety of robust tools that share our techniques and expertise with the user community, and now we are doing so with performance monitoring issues. We wanted to share our knowledge with the users, and help them improve their skills so that they can respond more effectively when they have to cope with malfunctions or systems suffering from slow response and other forms of latency,” says Tamir Gefen, ALMtoolbox CEO.

“We have built this tool after many years of experience with SCM administration, IT management and DevOps, and we created it by envisioning a tool that’s made a priori for Jenkins, ClearCase or ClearQuest rather than just another off-the-shelf monitoring tool that forces users to spend months planning what to monitor and how to customize it. Using this tool, it takes only an hour to start getting status data and insights from your monitored hosts,” says Gefen.

“We always strive to benefit the users’ communities and provide a version that can provide the essential features for each company that uses Jenkins, ClearCase or ClearQuest” says David Cohen, the product manager of the new ALM Performance monitoring tool.

“Since it’s software with an easy installation, we are excited that we are able to provide the Community version, including self-installation, for free”, says Cohen.

To download the product, visit ALMtoolbox and click the Download link.

Updates and support via email, phone and desktop sharing are available with either product.

Personality Matters- Anxiety and Dysfunctional Ops

By Leslie Sachs

As software professionals, we might find ourselves calling a help desk from time to time. Our needs may be as simple as dealing with a malfunctioning cell phone or as complex as navigating banking or investment systems. The last time you called a help desk, you may have been pleased with the outcome or disappointed in the service provided. The last time I called a help desk, I found myself trying to navigate what was obviously a dysfunctional organization. While the ITIL framework provides guidance on establishing an effective service desk, many organizations still struggle to provide excellent service.  The root cause may have much to do with a personality trait known as anxiety and the often-dysfunctional defense mechanisms people resort to in an attempt to deal with its discomfort. If you want your IT operations (IT Ops) group to be successful, then you need to consider the personality issues at individual as well as group levels that may impact their performance and your success. In order for you to understand a dysfunctional help desk, you need to know the personality traits that lead to the negative behaviors preventing you from receiving good service. Often, callers experience frustration and anger when they find themselves unable to elicit the response that they desire. Sometimes IT Ops professionals provide less than perfect service and support because they just don’t know how to solve the problem, and they lack the training and expertise needed to be effective and successful in their jobs. If you are frustrated as a customer, imagine how stressful it is for the support person who cannot successfully handle your request or the IT Ops professional who lacks the necessary technical expertise required to solve the problem. When your job is on the line, you may indeed feel extreme anxiety.

Anxiety is defined as an emotional state in which there is a vague, generalized feeling of fear [1]. Operations staff often find themselves under extreme stress and anxiety, especially when dealing with a systems outage. Some folks handle stress well, while others engage in disruptive behavior that may be as simple as blaming others for the outage or as complex as avoidance behaviors that could
potentially impact the organization. Sigmund Freud discussed many defense mechanisms that people often employ to deal with and reduce anxiety, and he conceptualized that many people develop behavior problems when they have difficulties learning [1]. We often see this phenomenon being triggered when employees are required to complete tasks for which they have not been properly prepared. IT Ops team members must learn a considerable amount of information in order to understand how to support complex systems and deal with technology challenges that often arise when they face a critical systems outage.

Developers are often at an advantage because they get to learn new technologies first and sometimes get to choose the technical architecture and direction of a project. However, IT Ops team members must struggle with getting up to speed, and they are wholly dependent upon the information that they are given during the stage in which the technology transitions from development to operations. This knowledge transfer effort impacts the entire support organization. Organizations which fail to implement adequate knowledge transfer processes will have support personnel who are ill-equipped to handle situations which depend on familiarity and competence with the knowledge base. American psychologist Harry Harlow proposed that a relationship exists between the evolutionary level of a species and the rate at which members of that species are able to learn [1]. Similarly, an
organization’s ability to transfer knowledge is an indication of how it will successfully deal with supporting complex technologies. The entire team may be adversely impacted when an organization cannot manage its essential institutional knowledge. As Jurgen Appelo notes, “knowledge is built from the continuous input of information from the environment in the form of education and learning, requests and requirements, measurements and feedback, and the steady accumulation of experience. In short, a software team is the kind of system that consumes and transforms information and produces
innovation”[2].

All this means that development and operations must share knowledge in order for the organization to be successful. Quality Management expert, W. Edwards Deming, aptly noted that it is essential to “drive
out fear”[3]. To remove fear and anxiety from the work environment, all members need the knowledge and skills to be able to perform their duties. Technology professionals cannot function optimally when they are not adequately trained and informed. Successful organizations reduce anxiety by properly training their teams and establishing a culture of knowledge and excellence.
References
[1] Byrne, Donn. 1974. An Introduction to Personality: Research, Theory, and Applications. Prentice-Hall Psychology Series.
[2] Appelo, Jurgen. 2011. Management 3.0: Leading Agile Developers, Developing Agile Leaders. Addison-Wesley Signature Series.
[3] Aiello, Bob and Leslie Sachs. 2011. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.
[4] Aiello, Bob and Leslie Sachs. 2016. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional.

Behaviorally Speaking – Building Reliable Systems
by Bob Aiello

Anyone who follows technology news is keenly aware that there have been a remarkable number of high-profile system glitches over the last several years, at times with catastrophic results. Major trading exchanges in both the US and Tokyo have suffered serious outages that call into question the reliability of the world financial system itself. Knight Capital Group has essentially ceased to exist as a corporate entity after what was reported to be a configuration management error that resulted in a one-day loss of $440 million. These incidents highlight the importance of effective configuration management best practices and place a strong focus on the need for reliable systems. But what exactly makes a system reliable, and how do we implement reliable systems? This article describes some of the essential techniques necessary to ensure that systems can be upgraded and supported while enabling the business by providing frequent and continuous delivery of new system features.

Mission-critical and enterprise-wide computer systems today are often very complex, with many moving parts and even more interfaces between components that present special challenges even for expert configuration management engineers. These systems are getting more complex as the demand for features and rapid time to market presents challenges that many technology professionals could not have envisioned even a few years ago. Computer systems do more today, and many seem to learn more about us each and every day, evolving into complex knowledge management systems that seem to anticipate our every need. High-frequency trading systems are just one example of complex computer systems that must be supported by industry best practices that can ensure rapid and reliable system upgrades and the implementation of market-driven new features. These same systems can have severe consequences when glitches occur, especially as a result of a failed systems upgrade.

FINRA, a highly respected regulatory authority, recently issued a targeted examination letter to ten firms that support high-frequency trading systems. The letter requests that the firms provide information about their “software development lifecycle for trading algorithms, as well as controls surrounding automated trading technology” [1]. Some organizations may find it challenging to demonstrate adequate IT controls, although the real goal should be implementing effective IT controls that help ensure systems reliability. Many industries enjoy a very strong focus on quality and reliability.

A few years ago, I had the opportunity to teach configuration management best practices at an NITSL conference for nuclear power plant engineers and quality assurance professionals. Everyone in the room was committed to software safety, including reliable safety systems. In the IEEE, we have working groups which help update the related industry standards that define software reliability, measures of dependability, and safety. Make sure that you contact me directly if you are interested in hearing more about participating in these worthwhile endeavors. Standards and frameworks are valuable, but it takes more than just guidelines to make reliable software. Most professionals focus on the importance of accurate requirements and well-written test scripts, which are essential but not sufficient to create truly reliable software. What really needs to happen is that we build in quality from the very beginning, which is an essential teaching that many of us learned from quality management guru W. Edwards Deming [2].

The key to success is to build the automated deployment pipeline from the very beginning of the application development lifecycle. We all know that software systems must be built with quality in mind from the beginning, and this includes the deployment framework itself. Using effective source code management practices along with automated application build, package and deployment is only the beginning. You also need to understand that building a deployment factory is a major systems development effort in itself. It has been my experience that many CM professionals forget to build automated build, package and deployment systems with the same rigor that they would apply to a trading system. As the old adage says, “the chain is only as strong as its weakest link,” and inadequate deployment automation is indeed a very weak link.
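One way to apply that rigor is to test the deployment tooling with the same discipline as application code. The sketch below assumes a hypothetical configuration-templating helper and uses pytest-style tests; it illustrates the idea rather than any particular toolchain.

# test_deploy.py - treat deployment tooling with the same rigor as application code (run with pytest).

def render_config(template, env):
    """Hypothetical deployment helper: substitute the target environment into a config template."""
    return template.replace("{{ENV}}", env)

def test_render_config_substitutes_environment():
    assert render_config("db.host={{ENV}}-db.example.com", "qa") == "db.host=qa-db.example.com"

def test_render_config_leaves_other_text_untouched():
    assert render_config("port=8080", "prod") == "port=8080"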

Successful organizations understand that quality has to be a cultural norm. This means that development teams must take seriously everything from requirements management to version control of test scripts and release notes. Organizations that take the time to train and support developers in the use of robust version control solutions and automated build tools such as Ant, Maven, Make, and MSBuild reap the benefits in both quality and productivity. The tools and plumbing to build, package and deploy the application must be a first-class citizen and a fundamental component of the application development effort.

Agile development and DevOps are providing some key concepts and methodologies for achieving success but the truth is that every organization has its own unique requirements, challenges and critical success factors. If you want to be successful then you need to approach this effort with the knowledge and perspective that critical systems are complex to develop and also complex to support. Building the automated deployment framework should not be an afterthought or an optional task started late in the process. Building quality into the development of complex computer systems requires what Deming described in the first of 14 points as “create constancy of purpose for continual improvement of products and service to society” [2].

We all know that nuclear power plants, medical life support systems and missile defense systems must be reliable, and they obviously must be upgraded from time to time, often due to uncontrollable market demands. Efforts by responsible regulatory agencies such as FINRA are essential for helping financial services firms realize the importance of creating reliable systems. DevOps and configuration management best practices are fundamental to the successful creation of reliable software systems. You need to start this journey from the very beginning of the software and systems delivery effort. Make sure that you drop me a line and let me know what you are doing to develop reliable software systems!

References

[1] http://www.finra.org/Industry/Regulation/Guidance/TargetedExaminationLetters/P298161
[2] Deming, W. Edwards (1986). Out of the Crisis. MIT Press
[3] Bob Aiello and Leslie Sachs, Configuration Management Best Practices: Practical Methods that Work, Addison-Wesley Professional, 2011
[4] Bob Aiello and Leslie Sachs, Agile Application Lifecycle Management – Using DevOps to Drive Process Improvement, Addison-Wesley Professional, 2016

Could Your Airplane Safety System Be Hacked?
by Bob Aiello

Flying is, by and large, considered one of the safest modes of transportation, with industry regulatory authorities and engineering experts working together to establish the safest and most reliable technology possible. However, the aviation industry itself came under fire last year when, according to a published report, security researcher Chris Roberts divulged that he had hacked the in-flight entertainment system, or IFE, on an airplane and overwritten code on the plane’s Thrust Management Computer while aboard the flight. According to the article published on Wired.com, Roberts was able to issue a climb command and make the plane briefly change course. The FBI responded by issuing a warrant for his arrest which, according to published reports, stated “that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights,” FBI Special Agent Mark Hurley wrote in his warrant application. “He also stated that he used Vortex software after comprising/exploiting or ‘hacking’ the airplane’s networks. He used the software to monitor traffic from the cockpit system.”

Roberts is not the only person reporting that in-flight wifi lacks adequate security controls; another journalist reported that his personal machine had been compromised via onboard wifi that was determined to have very weak security.

The most important issue is whether the vulnerable wifi systems are connected to the onboard safety and navigation systems, or whether there is proper network segregation that protects the onboard safety and navigation systems from being accessed via a compromised in-flight entertainment system. The good news is that U.S. aviation regulators have teamed up with their European counterparts to develop common standards aimed at harnessing wireless signals for a potentially wide array of aircraft-safety systems. Their goal is to make widespread use of wifi and reduce the amount of physical wiring required, but an essential byproduct of this effort could potentially be better safety standards.

The Wall Street Journal article goes on to say that nearly a year after Airbus Group SE unsuccessfully urged Federal Aviation Administration officials to join in such efforts, Peggy Gilligan, the agency’s senior safety official, has set up an advisory committee to cooperate with European experts specifically to “provide general guidance to industry” on the topic.

Network segregation can certainly be improved, but the real issue is that software onboard an aircraft should be built, packaged and deployed using DevOps best practices which can ensure that you have a secure trusted application base. Let’s hope that the folks writing those standards and guiding the industry are familiar with configuration management and DevOps best practices or at least involve those of us who are. See you on my next flight!

JFrog Showcases Repository and Distribution Platform at DockerCon 2016

SEATTLE, WA – According to a press release published by Marketwired, JFrog announced on June 20, 2016 that it would showcase JFrog Artifactory and JFrog Bintray, its repository and distribution platform, at DockerCon 16, which took place June 20-21 in Seattle. JFrog also presented a session on Docker container lifecycles. Additionally, the company will lead a webinar on JFrog Artifactory and JFrog Bintray on July 7, followed by a second webinar covering JFrog Xray on July 14.

DockerCon is the community and industry event for makers and operators of next-generation distributed apps built with containers. The two-and-a-half-day conference provides talks by practitioners, hands-on labs, an expo of Docker ecosystem innovators and valuable opportunities to share experiences with peers in the industry.

“JFrog has been a valuable partner in supporting and expanding our developer community,” said David Messina, vice president of marketing at Docker. “JFrog’s universal repository and distribution solution not only supports Docker technology, but also the open ecosystem of software development tools, and we are glad to be working together to solve the industry’s biggest challenges.”

Behaviorally Speaking: DevOps Across the Enterprise
by Bob Aiello

Agile software development practices are undeniably effective. However, even the most successful companies can face challenges when trying to scale agile practices to the enterprise level. Two popular approaches to implementing agile across the enterprise are Scott Ambler’s Disciplined Agile 2.0 and Dean Leffingwell’s Scaled Agile Framework, also known as SAFe. These approaches may work well for agile, but how do we implement DevOps across the enterprise?

DevOps is going through many of the same growing pains. Many small teams are very successful at implementing DevOps, but trying to implement DevOps best practices on an enterprise level can be very challenging. This article will help you understand how to successfully implement DevOps across the entire enterprise. Agile development and DevOps focus on a number of important principles, including focusing on individuals and interactions over processes and tools, prioritizing working software over volumes of documentation, valuing customer collaboration over contract negotiation, and responding to change over following a plan. All these practices are familiar to anyone who adheres to the Agile Manifesto, but DevOps and agile development have a lot more in common than just a set of principles. Agile development usually requires rapid iterative development, generally using fixed timebox sprints. Agile development highlights the value of rapid and effective application development practices that are fully automated, repeatable and traceable. It is no surprise, then, that DevOps has been especially popular in agile environments.

DevOps focuses on improved communication between development and operations with an equally essential focus on other stakeholders such as QA. DevOps at scale may require that you consider organizational structure and communicate effectively with many levels of management. You can expect that each organizational unit will want to understand how DevOps will impact them. You may have to navigate some barriers and even organizational and political challenges. There are other key requirements that often come when we consider scaling best practices.

The first consideration is that larger organizations often want to establish a centralized support function, which may sit at the divisional level or be a centralized corporate-wide entity. This may mean that you have to establish a corporate budget and align with the corporate structure and, obviously, the culture too. You may be required to create corporate policies, a mission statement, and project plans consistent with other efforts of similar size and scope. Even just purchasing the tools that are essential for effectively implementing DevOps may require that you adhere to corporate requirements for evaluating and selecting tools. I have seen some of my colleagues become frustrated with these efforts because they felt that they already knew which tools should be implemented, while organizations usually want to see a structured tools evaluation with transparency and participation by all stakeholders. These efforts, including a proof of concept (POC), can help to overcome the resistance to change that is often seen in larger efforts to implement any best practice, including DevOps.

My own approach is to pick a high-visibility pilot project and handle it correctly right from the beginning. In practice, I have often had to juggle day-to-day duties supporting source code management or automated application build, package and deployment. With a challenging “day job,” it can be difficult to also have a “star” project that shows the value of doing things the best way from the beginning, but this is exactly how to get started and demonstrate the success that the organization can enjoy enterprise-wide once you attain stakeholder buy-in.

Once the pilot project has been shown to be successful, it is time to consider rolling out DevOps throughout the enterprise. For DevOps practices to be effective in the enterprise, they must be repeatable, scalable, and fully traceable. An important consideration is establishing a support function to help each of the teams with training (including tools) and ongoing support. The implementation of these practices must adhere to all of the corporate policies and align with the organizational structure and culture. DevOps also must address the requirements of the organization’s technology platform. In practical terms, this usually brings me right into the cloud.

Most large organizations are embracing cloud-based technology and cloud-based development. Implementing DevOps must also include support for the cloud in several very important ways. Provisioning servers in the cloud is an initial step that allows DevOps to truly show its value. In fact, managing cloud-based development is much more difficult without the improved communication and deployment automation that have become synonymous with DevOps. DevOps in the enterprise does require some specific organizational skills, including an understanding of the organizational and structural requirements that are essential to implementing DevOps in the enterprise.
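To make the provisioning step concrete, here is a minimal sketch using Python and the AWS boto3 library; the AMI ID, instance type, and tags are hypothetical placeholders, and any cloud provider's API could serve the same purpose.

# Hedged sketch: provisioning a single server with boto3 (the AWS SDK for Python).
import boto3

def provision_build_server():
    """Launch one EC2 instance and return its instance ID."""
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical machine image
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "build-server"}],
        }],
    )
    return response["Instances"][0]["InstanceId"]

print(provision_build_server())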

 

Docker Announces Docker Engine 1.12 With Container Orchestration

Built-in Orchestration Features Announced

DockerCon – Seattle, Wash. – June 20, 2016 – Docker announced Docker Engine 1.12 with built-in orchestration, a powerful combination that provides Developers and IT Operations with a simplified and automated experience to deploy and manage Dockerized distributed applications – both traditional apps and microservices – at scale in production. By adding this additional intelligence to Docker Engine, it becomes the orchestration building block, creating a model for engines to form a self-organizing, self-healing pool of machines on which to run multi-container distributed applications. When integrated into Docker Engine, these new capabilities optimize ease of use, resiliency, performance-at-scale and security – all key requirements that are missing in other orchestration systems. As a result, organizations can be assured that their dev and ops teams are aligned on unifying the software supply chain to release applications into production more rapidly and frequently.

Read the complete press release

Securing The Trusted Base
By Bob Aiello
Over the last several years, there have been many reported incidents where hackers have attacked banks, government agencies and financial services firms. Corporate security experts are hard-pressed to react in a timely manner to each and every attack. DevOps provides many techniques, including application baselining, build, release and deployment engineering, which are essential for detecting and dealing with these problems. This article discusses how to use CM Best Practices to establish secure application baselines which help verify that the correct code is running in production. Just as importantly, this fundamental practice enables IT personnel to discover if there are any unauthorized changes, whether they be caused by unintended human error or malicious intent.

Ken Thompson won the coveted ACM Turing award in 1983[1] for his contributions to the field of computing. His acceptance speech was entitled “Reflections on Trusting Trust” [2] and asked the question “to what extent  should one trust a statement that a program is free of Trojan horses?” After discussing the ease with which a programmer can create a program that replicates itself (and could potentially also contain malicious code), Thompson’s comments highlight the need to ensure that we can create secure trusted application baselines. This article will help you get started delivering systems that can be verified and supported, while continuously being updated as needed.

The secure trusted application base needs to start with an operating system that is properly configured and verified. Deploying applications to an untrusted platform is obviously an unacceptable risk. But applications themselves also need to be built, packaged and deployed in a way that is fully verifiable. The place to start is baselining source code in a reliable version control system (VCS) that has the capability to reliably track the history of all changes. A common source of errors is missing or ill-defined requirements, so traceability of requirements to changesets is fundamental. Baselines provide for the management of multiple variants (e.g. bugfixes) in the code and, more importantly, the ability to reliably support milestone releases without having to resort to heroic efforts to find and fix your code base.
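As a minimal sketch of baselining with traceability, assuming Git is the VCS, the snippet below records which requirements a baseline satisfies in an annotated tag; the tag name and requirement IDs are hypothetical.

# Minimal sketch: record a baseline as an annotated Git tag that names the requirements it satisfies.
import subprocess

def create_baseline(tag, requirement_ids):
    """Tag the current commit and list the requirement IDs covered by this baseline."""
    message = "Baseline %s\nRequirements: %s" % (tag, ", ".join(requirement_ids))
    subprocess.run(["git", "tag", "-a", tag, "-m", message], check=True)
    subprocess.run(["git", "push", "origin", tag], check=True)

create_baseline("release-2.3.0", ["REQ-101", "REQ-204"])  # hypothetical tag and requirement IDs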

The application build itself must be fully automated, with each configuration item (CI) built with an embedded immutable version ID; these secure IDs facilitate the physical configuration audit, which is essential for ensuring that the correct code can be verified as having been successfully deployed. The CIs should themselves be packaged into a container which is then cryptographically signed to verify the identity of the source and also verify that the container has not been compromised. The only way to get this right is to build the package in a secure way and deploy it in an equally secure manner. Deming was right when he pointed out that quality must be built in from the beginning, and this is a fundamental aspect of the application build used to deploy a secure trusted base. The deployed application must be baselined in the runtime environment and constantly verified to be free of unauthorized changes. This approach fundamentally provides a runtime environment that is verifiable and actually facilitates the automated application deployment pipeline. The secure trusted base must be frequently updated to provide new functionality and also be capable of retiring obsolete or unwanted code. Creating the automation to reliably build, package and deploy code on a constant basis is essential to implementing a secure trusted base. While you cannot test quality into a system, building systems that are fully verifiable is very important. The deployment pipeline must be designed to be fully testable, so that any automation issues are immediately discovered and resolved.
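The sketch below illustrates the packaging step in simplified form: each configuration item's digest and an immutable version ID are written into a manifest inside the package, and the package is then signed. An HMAC with a shared key stands in here for a real code-signing mechanism such as GPG or X.509 certificates; the key and paths are placeholders.

# Simplified sketch: embed a version ID and per-file digests in a manifest, then sign the package.
# The HMAC key below stands in for a real code-signing mechanism (e.g., GPG or X.509).
import hashlib
import hmac
import json
import pathlib
import zipfile

SIGNING_KEY = b"replace-with-a-real-signing-key"

def package_and_sign(build_dir, version_id, out="release.zip"):
    base = pathlib.Path(build_dir)
    manifest = {"version_id": version_id, "files": {}}
    with zipfile.ZipFile(out, "w") as zf:
        for path in base.rglob("*"):
            if path.is_file():
                name = str(path.relative_to(base))
                manifest["files"][name] = hashlib.sha256(path.read_bytes()).hexdigest()
                zf.write(path, arcname=name)
        zf.writestr("MANIFEST.json", json.dumps(manifest, indent=2))
    signature = hmac.new(SIGNING_KEY, pathlib.Path(out).read_bytes(), hashlib.sha256).hexdigest()
    pathlib.Path(out + ".sig").write_text(signature)

package_and_sign("build/output", "2.3.0-build.57")  # hypothetical build directory and version ID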

Complex systems today also have many interfaces that must be understood and verified. This may involve checking the feed of market data or simply ensuring that correct ports have been provisioned and remain available for interprocess communication. In fact, monitoring the application environment is another crucial aspect of ensuring the secure trusted base. You do not want unnecessary ports opened, possibly resulting in a security risk, and you also do not want application issues due to a port, required by the application system, being closed. There are similar procedures to provision and verify the operating system
itself.
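A simple connectivity probe can verify that the ports an application depends on remain reachable; the host names and port numbers below are hypothetical examples.

# Simple connectivity probe for ports the application depends on (hypothetical hosts and ports).
import socket

REQUIRED_PORTS = {"app-server-01": [443, 8443], "market-data-feed": [9001]}

def verify_ports():
    for host, ports in REQUIRED_PORTS.items():
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=3):
                    print("OK   %s:%d reachable" % (host, port))
            except OSError:
                print("FAIL %s:%d not reachable" % (host, port))

verify_ports()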

In the secure trusted base, we automate the build of all code components and embed identifiers into the code to make it possible to audit and verify the configuration. We also provide a reliably packaged container that cannot be compromised, and we deploy in a way that is fully verifiable. Most importantly, we ensure that unauthorized changes can be detected and then remediated if they do occur. This may sound like a lot of effort, and it actually does take a lot of work, but the costs of failing to take these preventive measures are frequently much higher, as those firms who have been caught with “their code down” have painfully learned. The DevOps approach enables the development of these automated procedures from the very beginning of the application lifecycle, which is really the only viable approach. More importantly, the DevOps approach to creating an automated deployment pipeline enables you to rapidly build, package and deploy applications using the same procedures in every environment.
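Detecting unauthorized changes can be as straightforward as comparing the runtime environment against the baseline manifest captured at deployment time. The following is a minimal sketch; the paths are placeholders, and a production implementation would also address scheduling, permissions, and alerting.

# Minimal sketch: compare the deployed files against the baseline manifest to detect drift.
import hashlib
import json
import pathlib

def snapshot(root):
    """Map each file under root (relative path) to its SHA-256 digest."""
    base = pathlib.Path(root)
    return {str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in base.rglob("*") if p.is_file()}

def detect_drift(manifest_file, root):
    """Report files modified, added, or removed since the baseline was recorded."""
    baseline = json.loads(pathlib.Path(manifest_file).read_text())
    current = snapshot(root)
    return {
        "modified": [p for p in baseline if p in current and current[p] != baseline[p]],
        "added": [p for p in current if p not in baseline],
        "removed": [p for p in baseline if p not in current],
    }

# Example with hypothetical paths:
# print(detect_drift("/var/releases/app-2.3.0.manifest.json", "/opt/app"))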

My career has often been focused on joining a team that is having problems releasing its code. The first thing that I do is push for more frequent releases that are, whenever possible, smaller in scope. We automate each step and build in code to test and verify the deployment pipeline itself. A codebase that can be quickly and reliably deployed is inherently more secure. Ideally, you want to start from the very beginning of the software and system development effort. However, more often than not, I have had to jump onto a moving train. Taking an iterative approach to scripting and automating the application build, package and deployment will help you create a secure and reliable trusted application base!

References
[1] http://amturing.acm.org/award_winners/thompson_4588371.cfm
[2] https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf

Behaviorally Speaking – Process and Quality
By Bob Aiello

Configuration Management places a strong focus on process and quality, so I am sometimes shocked to learn that CM experts know so little about the underlying principles of CM that so clearly impact process and quality. I find this to be the case in organizations whether they embrace agile methodologies or so-called non-agile approaches such as waterfall. While many of my colleagues are expert build, release or deployment engineers, there are still those who do not understand the underlying principles that focus so heavily on process and quality. This article will help you enhance your existing CM best practices by applying the core principles that deliver excellent process and quality. We will also take a moment to make sure that we get our terminology straight.

So what exactly does it mean to have a process? There are many times when I hear colleagues say, “well our process is …” while referring to a set of activities that I would never really consider to be a “process”. My favorite authoritative resource is the free IEEE online dictionary called Sevocab [1] which describes a process
as a “set of interrelated or interacting activities which transforms inputs into outputs.” Sevocab also notes that the term “activities” covers use of resources and that a process may have multiple starting points and multiple end points. I like to use Sevocab because it also notes the standard (e.g., IEEE) or framework (e.g., ITIL) where the term is used, which can be very helpful for understanding the term within a specific context. I usually describe a process as a well-defined way of doing a particular set of activities, and it is worth noting that a well-defined process is implicitly repeatable.

In configuration management, we often focus on creating processes to support application build, release and deployment engineering. It is essential that all processes be automated or at least “semi-automated” – which means that the script proceeds through each step, although perhaps requiring someone to verify each step based upon the information on the screen. Scripting the entire release process ensures that the process is repeatable and that errors are avoided. An error-free process also helps ensure that we achieve a quality result. Manual procedures will always be sources of errors and mistakes. Automating each step, even if you need some human intervention, will go a long way towards improving your process and quality.
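As an illustration of a semi-automated process, the sketch below runs each scripted step and then pauses so that a person can verify the on-screen result before continuing; the Maven and deployment commands are hypothetical stand-ins for your own steps.

# Sketch of a semi-automated release: run each scripted step, then pause for human verification.
import subprocess

def run_step(step, command):
    """Run one release step, show its output, and wait for confirmation before continuing."""
    print("--- %s ---" % step)
    subprocess.run(command, check=True)
    answer = input("%s finished. Continue? [y/N] " % step)
    if answer.strip().lower() != "y":
        raise SystemExit("Release halted after: %s" % step)

# Hypothetical steps; substitute your own build and deployment commands.
run_step("Build", ["mvn", "-B", "package"])
run_step("Deploy to staging", ["./deploy.sh", "staging"])
run_step("Deploy to production", ["./deploy.sh", "production"])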

Sevocab defines quality as the degree to which a system, component, or process meets specified requirements and the ability of a product, service, system, component, or process to meet customer or user needs, expectations, or requirements. Configuration management principles help to ensure quality. Here are some of the guiding principles for source code management [2]:

1. Code is locked down and can never be lost
2. Code is baselined marking a specific milestone or other point in time
3. Managing variants in the code should be easy with proper branching
4. Code changed on a branch (variant) can be merged back onto the main trunk (or another variant)
5. Source Code Management processes are repeatable
6. Source Code Management provides traceability and tracking of all changes
7. Source Code Management best practices help improve productivity and quality

Regardless of the version control tool that you are using, these principles will help you manage your code better, and the result is better quality. Here are some principles as they relate to build engineering [2]:

1. Builds are understood and repeatable
2. Builds are fast and reliable
3. Every configuration item is identifiable
4. The source and compile dependencies can be easily determined
5. Code should be built once and deployed anywhere
6. Build anomalies are identified and managed in an acceptable way
7. The cause of broken builds is quickly and easily identified (and fixed)

You will find principles for each of the core configuration management functions in my book on Configuration Management Best Practices [2], but the ones listed above for source code management and build engineering will help you get started improving both process and quality.

Process and Quality are essential for the success of any technology development effort. Implementing the core configuration management best practices of source code management, build engineering, environment management, change control, release management and deployment will help you successfully develop
software and systems while maximizing productivity and quality!

References
[1] www.computer.org/sevocab
[2] Aiello, Robert and Leslie Sachs. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley, 2011.

Personality Matters- The Three Pillars of Positive Psychology

Positive psychology is an emerging wellness approach developed by noted psychologists Martin Seligman and Mihaly Csikszentmihalyi that focuses on encouraging effective pro-active behaviors [1]. The principles apply to many business and technical situations as well as to the clinical settings in which they were originally introduced. Dr. Seligman noted in his writings that there are essentially three pillars that make up the scientific endeavor of positive psychology. The first two relate to individual behavior and the third is the study of positive institutions which Seligman suggested was “beyond the guild of psychology” [2]. This article will focus on that third pillar which falls within the realm of organizational psychology and is likely of great interest to anyone who wants to be part of an effective organization.

The first two pillars of positive psychology focus on positive emotion and positive character which each contribute to the development of a sense of self-efficacy and personal effectiveness, both of which are very important to individual success. Organizations, not  unlike the people who comprise them, often have unique and complex personalities. Individuals who join the army or the police force certainly experience the culture of the organization in a very real way. When people fail in their jobs, it is sometimes due to factors beyond their direct control; perhaps they could not fit into the culture and expectations of the organization itself or the culture itself made success very difficult to attain. What are the traits that we might want to highlight when looking at an organization from a positive psychology perspective?

Organizations that encourage curiosity, interest in the world, and a general love of learning provide an environment that is consistent with what Dr. Seligman had in mind with his first cluster, which he termed wisdom. Technology professionals may understand these traits in terms of organizations which encourage learning new technologies and frameworks and provide opportunities for professionals to constantly improve their skills. Judiciousness, critical thinking, and open-mindedness, along with ingenuity, originality, and practical street smarts, are other valuable attributes found among employees in effective organizations. Social, personal and emotional intelligence describe organizations which encourage their members to respectfully understand both individual and group differences, including cultural diversity.

Organizations which encourage employees to feel safe when speaking up or taking the initiative can be understood to exhibit valor and courage, which is the cluster that Seligman termed bravery. Integrity and honesty, along with perseverance and diligence, are also grouped with these positive traits. The degree to which these characteristics and their active expression are valued in an organization will significantly impact that firm’s functioning and results. Positive organizations encourage their employees to take initiative and ensure that employees feel safe – even when reporting a potential problem or issue. Dysfunctional organizations punish the whistleblower, while effective organizations not only recognize the importance of being able to evaluate the risks or problems that have been brought to their attention, they actively solicit such self-monitoring efforts.

The cluster of humanity and love consists of kindness, generosity and an intrinsic sense of justice. Organizations with leadership that encourages a genuine sense of delivering value to customers and giving back to their community, especially firms with managers who actively model these behaviors, are more likely to see employees living these values on a daily basis. Of paramount importance is good citizenship and teamwork as well as a strong culture of leadership. While many organizations may have individuals who exhibit these strengths, highly effective organizations make these values a cultural norm, which in turn then becomes the personality of the organization itself.

The cluster of temperance includes self-control, humility and modesty, which can be understood in terms of delivering quality to all stakeholders, including ensuring real value to stockholders instead of simply advertising and marketing hype. Gratitude is a fundamental trait of many successful organizations which model positive behaviors and actively participate in helping the communities that support them. These are often the same organizations which have a strong sense of hope and optimism and are mindful of the future – again, all traits found in Seligman’s view of positive psychology. Some organizations have a culture that exhibits spirituality, faith and even religiousness which aligns with their personality. Most importantly, playfulness and humor, along with passion and enthusiasm, all make for a corporate environment that breeds successful and loyal employees.

Over the years, many organizations have unfortunately become associated with greed and dysfunctional behavior. However, the study of positive psychology provides an effective, comprehensive and attainable model for those companies seeking to create a healthy, growth-oriented culture that will encourage and nurture the positive behaviors which research indicates lead to success and profitability!

References
[1] Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55, 5–14
[2] Seligman, Martin, Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment, Free Press, New York 2002
[3] Abramson, L. Y.; Seligman, M. E. P.; Teasdale, J. D. (1978). “Learned helplessness in humans: Critique and reformulation”. Journal of Abnormal Psychology 87
[4] Deming, W. Edwards (1986). Out of the Crisis. MIT Press
[5] Aiello, Bob and Leslie Sachs. 2010. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.