Wendy’s Fast Food Chain Credit Card Data Breach
Hackers Pay a Visit to the Fast Food Chain, and It Wasn’t Just to Grab a Hamburger
By Bob Aiello

Fast food chain Wendy’s has provided additional information on the data breach it first disclosed in January, in which customer payment card information (including cardholder name, credit or debit card number, expiration date, cardholder verification value, and service code) was stolen by hackers from what is now believed to be as many as 1,000 stores; it was previously reported that only 300 stores had been impacted. According to the company’s press release, the breach is believed to have “resulted from a service provider’s remote access credentials being compromised, allowing access – and the ability to deploy malware – to some franchisees’ Point-of-Sale (POS) systems”. Wendy’s states that “soon after detecting the malware, Wendy’s identified a method of disabling it and thereafter has disabled the malware in all franchisee restaurants where it has been discovered. The investigation has confirmed that criminals used malware believed to have been effectively deployed on some Wendy’s franchisee systems starting in late fall 2015.”

Wendy’s, the third largest hamburger fast-food chain, has over a billion dollars in annual revenue and more than 6,000 franchise locations. In May of 2016, the company confirmed that it had discovered evidence of malware installed on some restaurants’ point-of-sale systems and worked with its investigators to disable it. On June 9th, it reported that it had discovered additional malicious cyber activity involving other restaurants. The company believes that this malware has also been disabled in all franchisee restaurants where it has been discovered: “We believe that both criminal cyberattacks resulted from service providers’ remote access credentials being compromised, allowing access – and the ability to deploy malware – to some franchisees’ point-of-sale systems.”

In a July 7th statement, Todd Penegor, President and CEO of The Wendy’s Company, stated, “In a world where malicious cyberattacks have unfortunately become all too common for merchants, we are committed to doing what is necessary to protect our customers. We will continue to work diligently with our investigative team to apply what we have learned from these incidents and further strengthen our data security measures. Thank you for your continued patience, understanding and support.”

Commentary by Bob Aiello

The following is my opinion; feel free to contact me if you disagree.

I believe that too many companies are not accepting responsibility for ensuring that their systems are safe and reliable. Wendy’s does over a billion dollars in sales annually; it has the resources to build secure IT systems that keep customer data safe. PCI regulatory requirements are in place, and there are organizations that can help companies create secure and reliable systems. Yet many companies continue to rely upon “experts” to find malware after hackers have already placed it on their servers. This approach is, at best, like trying to find a needle in a haystack. We should be building systems to be secure and reliable from the ground up. As consumers, we need to demand that corporations implement the systems which hold our personal data using techniques such as the secure trusted application base.

Behaviorally Speaking – CM and Traceability
by Bob Aiello

Software and systems development can be a very complex endeavor, so it is no wonder that important details sometimes get lost in the process. My own work involves implementing CM tools and processes across many technology platforms, including mainframes, Linux/Unix, and Windows. I may be implementing an enterprise Application Lifecycle Management (ALM) solution one day and supporting an open source version control system (VCS) the next. It can be difficult to remember all of the details of each implementation, and yet that is precisely what I need to do. The only way to ensure that I don’t lose track of my own changes is to use the very same tools and processes that I teach developers for my own implementation work, thereby ensuring history and traceability of everything that I do. I have known plenty of very smart developers who made mistakes because they forgot details that should have been documented and stored for future reference. It often seems that developers are great at running up the mountain the first time, but it takes process and repeatable procedures to ensure that each and every member of the team can run up the same mountain without tripping. This article discusses how to implement CM and traceability in a practical and realistic way.

The most basic form of traceability is establishing baselines to record a specific milestone in time. For example, when you are checking changes into a version control tool, there is always a point at which you believe that all of the changes are complete. To record this baseline you should label or tag the version control repository at that point in time. This is basic version control and essential if you want to be able to rebuild a release from a specific point in time (usually when the code was released to QA for testing). But how do you maintain traceability once the code has been deployed and is no longer in the version control solution? In fact, you also need to maintain baselines in the production runtime area and ensure that you can verify that the right code has been deployed. You must also ensure that unauthorized changes have not occurred, whether through malicious intent or just an honest mistake. Maintaining a baseline in a runtime environment takes a little more effort than labeling the source code in a version control repository, because you need to actually verify that the correct binaries and other runtime dependencies have been deployed and have not been modified without authorization. It is also sometimes necessary to find the exact version of the source code that was used to build the release running in production in order to make a support change such as an emergency bug fix. Best practices in version control and deployment engineering are very important, but there is more to traceability than just labeling source code and tracking binaries.
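
To make the runtime baseline idea concrete, here is a minimal sketch in Python (the paths and file names are hypothetical, and a real deployment tool would add signing and access controls). It records a manifest of cryptographic hashes when a release is deployed and later verifies that nothing in the runtime area has been added, removed, or modified without authorization.

import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a single file."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot(deploy_dir: str) -> dict:
    """Hash every file under the deployed release directory."""
    root = Path(deploy_dir)
    return {str(p.relative_to(root)): hash_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def record_baseline(deploy_dir: str, manifest_file: str) -> None:
    """Record the baseline manifest at release time."""
    Path(manifest_file).write_text(json.dumps(snapshot(deploy_dir), indent=2))

def verify_baseline(deploy_dir: str, manifest_file: str) -> list:
    """Report files that are missing, modified, or unexpectedly present."""
    expected = json.loads(Path(manifest_file).read_text())
    actual = snapshot(deploy_dir)
    problems = []
    for name, digest in expected.items():
        if name not in actual:
            problems.append("MISSING: " + name)
        elif actual[name] != digest:
            problems.append("MODIFIED: " + name)
    for name in actual:
        if name not in expected:
            problems.append("UNEXPECTED: " + name)
    return problems

# Example usage (hypothetical paths):
# record_baseline("/opt/myapp/release-2.3.1", "release-2.3.1.manifest.json")
# print(verify_baseline("/opt/myapp/release-2.3.1", "release-2.3.1.manifest.json"))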

When software is being developed, it is important to capture the requirements in enough detail that developers can understand the exact functionality to be built. Requirements change frequently, so it is essential that you can track and version the requirements themselves. In many industries, such as medical devices and other mission-critical systems, there is often a regulatory requirement to ensure that all requirements have been reviewed and included in the release. If you were developing a life support system, then obviously requirements tracking could be a matter of life or death. Dropping an essential requirement for a high-speed trading firm can also result in serious consequences, and it is simply not feasible to rely upon testing to ensure that all requirements have been met. As Deming noted many years ago, quality has to be built in from the beginning [1]. There are also times when not every requirement can be included in the release, and choices have to be made, often based upon risk and the return on investment for including a specific feature. This is when it is often essential to know who requested a specific requirement and who has the authority to decide which requirements will be approved and delivered. Traceability is essential in these circumstances.

Robust version control solutions allow you to automatically track sets of changes, known as changesets, to the workitems that described and authorized the change. Tracking workitems to changesets is known by some authors as task-based development [2]. In task-based development, you define the workitems up front and then assign them to resources to complete the work. Workitems may be defects, tasks, requirements or, for agile enthusiasts, epics and stories. Some tools allow you to specify linkages between workitems, such as a parent-child relationship. This is very handy when a defect comes in from the help desk and results in other workitems, such as tasks and even test cases, to ensure that the problem does not happen again in the next release. Traceability helps to document these relationships and also links the workitems to the changesets themselves. Establishing traceability does not really take much more time, and it helps to organize and implement iterative development. In fact, it is much easier to plan and implement agile Scrum-based development if your version control tool implements task-based development with the added benefit of providing traceability.
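
As a simple illustration of the traceability that task-based development provides, the following Python sketch assumes a hypothetical convention in which every commit message references a workitem ID such as WI-1234; it groups changesets by the workitem that authorized them and flags commits that cannot be traced. Most commercial ALM tools capture this linkage automatically; the sketch only shows how little information is needed to establish it.

import re
import subprocess
from collections import defaultdict

WORKITEM_PATTERN = re.compile(r"\b(WI-\d+)\b")  # hypothetical workitem ID convention

def changesets_by_workitem(repo_path: str) -> dict:
    """Group commit hashes by the workitem IDs mentioned in their messages."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%H|%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    trace = defaultdict(list)
    for line in log.splitlines():
        commit_hash, _, subject = line.partition("|")
        ids = WORKITEM_PATTERN.findall(subject) or ["UNTRACEABLE"]
        for workitem in ids:
            trace[workitem].append(commit_hash)
    return dict(trace)

# Example: report any commits that cannot be traced back to a workitem.
# report = changesets_by_workitem("/path/to/repo")
# print(report.get("UNTRACEABLE", []))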

Traceability can help you and your team manage the entire CM process by organizing and tracking the essential details that must be completed in order to successfully deliver the features that your customers want to see. Picking the right tools and processes can help you implement effective CM and maintain much needed traceability. It can also help you develop software that has better quality while meeting those challenging deadlines which often change due to unforeseen circumstances. Use traceability effectively to accelerate your agile development!

[1] Deming, W. Edwards. 1986. Out of the Crisis. MIT Press.
[2] Hüttermann, Michael. 2011. Agile ALM: Lightweight Tools and Agile Strategies. Manning Publications.
[3] Aiello, Bob and Leslie Sachs. 2011. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.
[4] Aiello, Bob and Leslie Sachs. 2016. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional.

Personality Matters- Positive Psychology and Learning from Mistakes
By Leslie Sachs

Mistakes happen. But, too often, team members engage in very dysfunctional behavior after they have made mistakes. Even though mistakes are often the best learning experiences, many organizations suffer serious consequences not just because a mistake was made, but often as a direct result of the attempt to save face and cover up after an employee has made a mistake. W. Edwards Deming wisely said that organizations need to “drive out fear”; addressing mistakes in an open and honest manner is essential for any organization striving to excel in today’s competitive business environment. Here is what positive psychology teaches us about creating an environment where employees are empowered to address their mistakes openly and honestly.

Positive psychology teaches us that most people want to cultivate what is best within themselves, and to enhance their experiences of love, work, and play. The trick is to guide your employees into exhibiting appropriate behaviors to accomplish these goals. Otherwise, you may find very dysfunctional behaviors such as hiding mistakes, denial and even blaming others, actions that disrupt the workforce and can adversely impact the business in many ways. Many organizations have siloed infrastructures and cultures which further detract from the organization’s goal of addressing mistakes and resolving problems in the most efficient way possible. DevOps principles and practices can help by encouraging teams to work in a collaborative cross-functional way; supportive teamwork is essential when addressing mistakes. Highly effective teams really need to embrace much more effective and proactive ways of dealing with challenges, including human error.

Positive Psychology focuses on positive emotions, positive individual traits, and positive institutions. This approach is a refreshing change from the many schools of psychology which focus more on analyzing the reasons for a variety of anti-social and other problematic personality types that often result in dysfunctional behavior. No doubt some folks do indeed have personality problems which predispose them to managing problems, such as handling their own mistakes, in a way that is not very constructive. But it is equally true that focusing on positive individual traits helps us to see and appreciate the strengths and virtues, such as personal integrity, self-knowledge, self-control, courage and the wisdom that comes from experience and being nourished in a positive environment. The individual is very important in this context, but it is equally important to consider the organization as a whole. Understanding positive behaviors within the company itself entails studying the strengths that empower team members to address challenges in an effective and creative way. Some examples of issues that should be discussed are social responsibility, civility, tolerance, diversity, work ethic, leadership, and honesty.

Not surprisingly, the best leaders actually exhibit these behaviors themselves and lead by example, which brings us back to how specific individuals handle mistakes. When mistakes occur, does your organization foster a safe and open environment where people can feel that their best course of action is to admit what they did wrong? Do team members assume that their colleagues will drop what they are doing to help out in resolving any problems? Does the team avoid finger-pointing and the blame game to focus instead on problem resolution?

One manager mentioned that he did not focus so much on the unavoidable reality that mistakes will occur. Instead, he focused on encouraging his employees to acknowledge errors freely and then rated the entire team on their ability to work together and address problems productively, regardless of who may have been involved. Positive psychology gives us an effective framework for actually following through on Deming’s direction to “drive out fear.” The most successful organizations take mistakes and make them learning experiences, leading to employees who feel a renewed sense of loyalty and commitment to achieving excellence. Mistakes happen – your challenge is to ensure that, rather than demoralizing or paralyzing people, these missteps instead empower your team to be more effective and successful!

[1] Seligman, M. E. P. and M. Csikszentmihalyi. 2000. “Positive Psychology: An Introduction.” American Psychologist 55, 5–14.

[2] Seligman, Martin. 2002. Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment. New York: Free Press.

[3] Abramson, L. Y., M. E. P. Seligman, and J. D. Teasdale. 1978. “Learned Helplessness in Humans: Critique and Reformulation.” Journal of Abnormal Psychology 87.

[4] Deming, W. Edwards. 1986. Out of the Crisis. MIT Press.

[5] Aiello, Bob and Leslie Sachs. 2010. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.

Groundbreaking Technology for DevOps Omniscience; JFrog Announces Immediate Availability of JFrog Xray

SANTA CLARA, CA, July 5th, 2016 as announced on Marketwired

JFrog, the worldwide leader in infrastructure for software management and distribution, today announced the immediate availability of JFrog Xray, its pioneering product for accelerating software delivery. This powerful new resource provides organizations unprecedented insight into their software packages and containers.

As the industry’s first universal artifact analysis product, JFrog Xray works with all software package formats and a multitude of databases as it deeply and recursively scans every type of binary component ever used in a software project and points out changes or issues impacting the production environment.

“The early response to JFrog Xray has been phenomenal,” said Shlomi Ben Haim, founder and CEO of JFrog. “We’re excited that more and more organizations will now be able to benefit from this pioneering technology for gaining radical transparency into the huge volume and variety of binary components used in development. The combination of validating with security databases, along with acquiring metadata from JFrog Artifactory makes JFrog Xray the only tool in the world that not only scans the container or software package, but also provides a full dependency graph and impact analysis to the user. Our goal is to address the real DevOps pain and not just to send another scanner to the market!”

As organizations increasingly transform from isolated teams and role-specific tools to common delivery pipelines shared by global teams and driven by seamlessly inter-operating tools, they need to understand all the binary artifacts they produce, across all product lines and geographies, while taking into account changes to global application deployment and distribution over time.

JFrog Xray addresses this critical need by providing deep recursive scanning to repeatedly peel back the layers of software components and their accompanying metadata to uncover security vulnerabilities or other issues down to the most fundamental binary component, no matter what binary packaging format the organization uses. This deep scanning of the dependency graph provides organizations the ability to perform impact analysis on changes to their package structure.

JFrog Xray is a fully-automated platform with a powerful REST API, enabling integration with an organization’s CI/CD pipeline as well as with all current and any possible future types of component-scanning technology.

It integrates with a universal range of databases and security platforms so that critical necessities such as security vulnerability analysis, license compliance and component version analysis and assurance become possible not only at build time, but across all of the enterprise’s binary digital assets as well.

JFrog Xray is available now at https://www.jfrog.com/xray/free-trial with a special webinar on JFrog Xray on July 14 — http://bit.ly/23gL143.

More information about JFrog Editions — https://www.jfrog.com/pricing
Resources
Company: https://www.jfrog.com
Open Positions: https://join.jfrog.com
JFrog Artifactory: https://www.jfrog.com/artifactory
JFrog Bintray: https://www.jfrog.com/bintray
JFrog Mission Control: https://www.jfrog.com/mission-control
JFrog Xray: https://www.jfrog.com/xray
Customer testimonials: https://www.jfrog.com/customers
Twitter: https://twitter.com/jfrog
LinkedIn: https://www.linkedin.com/company/jfrog-ltd
JFrog Training Webinars: https://www.jfrog.com/webinars

About JFrog
More than 2,500 paying customers, 60,000 installations and millions of developers globally rely on JFrog’s world-class infrastructure for software management and distribution. Customers include some of the world’s top brands, such as Amazon, Google, LinkedIn, MasterCard, Netflix, Tesla, Barclays, Cisco, Oracle, Adobe and VMware. JFrog Artifactory, the Universal Artifact Repository, JFrog Bintray, the Universal Distribution Platform, JFrog Mission Control, for Universal Repository Management, and JFrog Xray, Universal Component Analyser, are used by millions of developers and DevOps engineers around the world and available as open-source, on-premise and SaaS cloud solutions. The company is privately held and operated from California, France and Israel. More information can be found at www.jfrog.com.

Behaviorally Speaking—Putting Ops Into DevOps
by Bob Aiello
DevOps is a set of principles and practices that helps development and operations teams work together more effectively. DevOps focuses on improving communication and collaboration, with the technical goal of providing reliable and flexible release and deployment automation. Much of what is written about DevOps is delivered from the perspective of developers. We see many articles about continuous delivery and deployment describing the virtues of good tooling, microservices and the very popular practices around virtualization and containers. In my opinion, DevOps also needs to take an operations view in order to achieve the right balance. This article is all about putting Operations back into DevOps.

Operations professionals are responsible for ensuring that IT services are available without interruption or even degradation. IT operations is a tough job, and I have worked with many technology professionals who were truly gifted in IT operations with all of its functions and competencies. Many IT operations staff perform day-to-day tasks that can be very repetitive, although essential in keeping critical systems online and operational. In some organizations, operations engineers are not as highly skilled as their development counterparts. Historically, mainframe operators were focused on punch cards and mounting tapes while programmers were focused on implementing complex business logic. Today we still come across operations engineers who lack strong software engineering skills and training, and this can be a very serious problem. When developers observe that operations technicians are not highly skilled, they often stop providing technical information because they conclude that the operations folks cannot understand the technical details. This dynamic can have consequences that are disastrous for the company, the most common being developers feeling that they should try to bypass operations as often as possible.

I have also worked with top-notch Unix/Linux gurus in operations who focused on keeping complex systems up and running on a continuous basis. IT operations professionals often embrace the itSMF ITIL v3 framework to ensure that they are implementing industry best practices for reliable IT services. If you are not already aware of ITIL v3, you probably should be.

The ITIL v3 framework describes a robust set of industry best practices designed to ensure continuous operation of IT services. ISACA’s COBIT and the SEI’s CMMI are also frameworks used by many organizations to improve their IT processes, but ITIL is by far the most popular set of guidelines for IT operations. CM professionals should particularly focus on the guidance in the service transition section of the ITIL framework, which describes change management, build and release, and configuration management systems (including the CMDB). With all of this guidance, do not forget to start at the beginning with an understanding of the application and systems architecture.

The first thing that I always require is a clear description of the application and systems architecture. This information is very important for gaining a clear understanding of the system as a whole or, in DevOps terminology, having a full end-to-end systems view. For build and release engineers, understanding the architecture is fundamental, because all of our build, release and deployment scripts must be created with an understanding of the architecture involved. In fact, development needs to build applications that are designed for IT operations.

Many developers focus on Test-Driven Development (TDD), where code is designed and written to be testable, often beginning with writing the unit test classes even before the application code itself is written. I have run several large-scale automated testing projects in my career, and I have always tried to work with the developers to design the systems to be more easily testable. In some cases this actually included hooks to ensure that the test tools could work without flagging too many superficial issues (which we usually call false positives). Test-Driven Development is very effective, and it is my view that applications also need to be designed and written with operations in mind. One reason to design applications with IT operations in mind is to enable the implementation of IT process automation.

Effective IT operations teams rely upon tools including the automated collection of events, alerts and incident management. When an alert is raised or an incident is reported to the IT service desk, the IT operations team must be able to rely upon IT process automation to facilitate detection and resolution of the incident, preferably before there is customer impact. IT process automation must include automated workflows that enable each member of the team to respond in a clear and consistent way. In practice, it is very common for organizations to have one or two highly skilled subject matter experts who are able to troubleshoot almost any production issue. The problem is that these folks don’t work twenty-four hours a day, seven days a week and, in fact, are usually on vacation when problems occur. IT process automation, including workflow automation, gives the operations team well-documented and repeatable processes that help ensure IT services keep working in a reliable and consistent way. Getting these procedures right must always start with the application build.

Effective build automation often includes key procedures such as embedding immutable version IDs into configuration items to facilitate the physical configuration audit. For example, a C#/.NET application should have a version identifier embedded into the assembly. You can embed version IDs via an MSBuild script or using the Visual Studio IDE, and the Microsoft .NET MSIL Disassembler (Ildasm.exe) can be used to look inside a .NET assembly and display the version ID. There are similar techniques in Java and C/C++, along with almost every other software development technology. These techniques are essential for IT operations to be able to confirm that the correct binary configuration items are in place and that there have not been any unauthorized changes. Builds are important, but continuously deploying code, starting from very early in the development lifecycle, is also a critical DevOps function that helps IT operations to be more effective.
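
The same pattern can be illustrated outside the .NET toolchain. The sketch below (Python, with hypothetical artifact and file names) stamps an immutable version record into a packaged artifact at build time and reads it back later, which is the essence of a physical configuration audit regardless of platform.

import json
import zipfile
from datetime import datetime, timezone

def stamp_version(package_path: str, version_id: str) -> None:
    """Embed an immutable version record inside the packaged artifact."""
    record = {
        "version": version_id,
        "built_at": datetime.now(timezone.utc).isoformat(),
    }
    with zipfile.ZipFile(package_path, "a") as pkg:
        pkg.writestr("BUILD-INFO.json", json.dumps(record))

def read_version(package_path: str) -> dict:
    """Read the embedded version record back out, e.g. during an audit."""
    with zipfile.ZipFile(package_path, "r") as pkg:
        return json.loads(pkg.read("BUILD-INFO.json"))

# Example usage (hypothetical artifact name):
# stamp_version("myapp-2.3.1.zip", "2.3.1+build.1742")
# print(read_version("myapp-2.3.1.zip"))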

Automated application deployment is a key competency in any effective DevOps environment. Continuous Delivery enables the IT operations team to rehearse and streamline the entire deployment process. If this is done right, then the operations team can support many deployments while still maintaining a high level of service and support. The best practice is to move the application build, package and deployment process upstream and begin the effort by supporting development test environments. These automated procedures are not trivial, and it will take some time to get them right; the sooner in the lifecycle you begin this effort, the sooner your procedures will be mature and reliable. Since organizations have to pay someone to deploy code to the development and testing environments anyway, it is a great idea to have the person who will deploy to production do this work and get the experience and training to understand and help evolve the deployment automation. The practice of involving operations from the beginning of the lifecycle has become known as shift-left. IT operations depends upon a reliable automated deployment framework for success, and getting Ops involved from the beginning helps you get that work done.

IT operations is a key stakeholder in any DevOps transformation. It is all too common for development to miss the importance of partnering effectively with operations to develop effective procedures that ensure uninterrupted IT services. If you want to excel then you need to keep Operations in your DevOps transformation!

Personality Matters- Learned Complacency and Systems Glitches
By Leslie Sachs

System glitches have impacted many high-profile trading systems, such as the July 2015 New York Stock Exchange (NYSE) systems outage. That outage was initially feared to be the result of a cyberattack, but the investigation determined it to be the result of a faulty software upgrade. The NYSE is not the only trading venue to suffer outages during systems upgrades. In April 2013, the Chicago Board Options Exchange (CBOE) also suffered a high-profile systems glitch, which shut down the CBOE trading system and was among a series of incidents impacting large trading firms, including NASDAQ, that are expected to be highly secure and reliable. It is often believed that these, and similar, outages are the result of the complexity and challenges inherent in upgrading complex mission-critical financial trading systems. Given that similar outages occurred at other major stock exchanges and trading firms, one might be tempted to think that the CBOE debacle was unremarkable. What is striking is that there was a published report that employees knew in advance that the system was not working correctly, and yet the CBOE nonetheless chose not to fail over to its backup system. In our consulting practice, we often come across technology professionals who try to warn their management about risks and the possibility of systems outages that could impact essential services. During the CM assessments that we conduct, we often find ourselves being the voice for validating and communicating these concerns to those who are in a position to take appropriate action. What is troubling, however, is that we have seen many companies where employees have essentially learned to no longer raise their concerns because no one is willing to listen and, even worse, they may have suffered consequences in the past for being the bearer of bad tidings. We refer to this phenomenon as learned complacency.

Some people are more passive than others. This may come from a personality trait where the person feels that getting along with others is more important than blazing a new trail and standing up for one’s own convictions. Many people strongly desire to just go along with the crowd, and psychologists often refer to this personality trait as agreeableness, one of the primary traits in the well-known Big Five [1]. This trait can be very problematic in certain situations, and some people who like to avoid conflict at all costs display a dysfunctional behavior known as passive-aggressiveness. A passive-aggressive person typically refuses to engage in conflict, choosing instead to outwardly go along with the group while inwardly deeply resenting the direction that they feel is being forced upon them. People with a passive-aggressive personality trait may outwardly appear to be agreeable, but deep down they are usually frustrated and dissatisfied and may engage in behaviors that appear to demonstrate acquiescence yet actually do nothing, or even obstruct progress, albeit in a subtle manner. Some IT professionals who have a passive (or passive-aggressive) personality trait may be less than willing to warn their managers about systems problems that could cause a serious outage.

We have seen people who simply felt that although they were close enough to the technology to identify problems, they could not escalate a serious issue to their management because it simply was not their job. In some cases, we have come across folks who tried to warn of pending problems but were counseled by their managers not to be so outspoken. Bob Aiello describes one manager who frequently used the phrase “smile and wave” to encourage his staff to tone down their warnings, since no one really wanted to hear them anyway. Not surprisingly, that organization has experienced serious systems outages which impacted thousands of customers. But not everyone is afraid to stand up and be heard. What often distinguishes employees is their own natural personality traits, including those associated with being a strong leader.

Technology leaders know how to maintain a positive demeanor and focus on teamwork, while still having the courage to communicate risks that could potentially impact the firm. The recent rash of serious systems outages certainly demonstrates the need for corporations to reward and empower their technical leaders to communicate problems without fear of retribution. Deming said, “drive out fear” and there is certainly no greater situation where we need leaders to be fearless than when warning of a potential problem that could have a significant impact upon large-scale production IT systems.

While some people may be predisposed to avoid conflict, the greater problem is when a corporation develops a culture where employees learn to remain silent even when they are aware of potential problems. The IT industry needs leaders who are accountable, knowledgeable and empowered to create working environments where those who protect the long-term best interests of the firm are rewarded and those who take short-sighted risks are placed in positions where they cannot adversely impact the well-being of the firm. We will see fewer systems outages when each member of the team understands their own role in the organization and feels completely safe and empowered to speak truthfully about risks and potential problems that may impact their firm’s critical systems infrastructure. There are times when risk-taking is appropriate and may result in significant rewards. However, firms which take unnecessary risks endanger not only their own corporation, but may impact thousands of other people as well. Those firms with thoughtful IT leadership and a strong, truthful and open culture will achieve success while still managing and addressing risk in an appropriate and effective way.

References

[1] http://www.psychometric-success.com/personality-tests/personality-tests-big-5-aspects.htm

[2] Byrne, Donn. 1974. An Introduction to Personality: Research, Theory, and Applications. Prentice-Hall Psychology Series.

[3] Appelo, Jurgen. 2011. Management 3.0: Leading Agile Developers, Developing Agile Leaders. Addison-Wesley Signature Series.

[4] Aiello, Bob and Leslie Sachs. 2011. Configuration Management Best Practices: Practical Methods that Work in the Real World. Addison-Wesley Professional.

[5] Aiello, Bob and Leslie Sachs. 2016. Agile Application Lifecycle Management: Using DevOps to Drive Process Improvement. Addison-Wesley Professional.

Leslie Sachs is a New York state-certified school psychologist and the COO of Yellow Spider. She is the co-author of Configuration Management Best Practices: Practical Methods that Work in the Real World. Leslie has more than twenty years of experience in the psychology field and has worked in a variety of clinical and business settings, where she has provided many effective interventions designed to improve the social and educational functioning of both individuals and groups. She has an MS in school psychology from Pace University and interned in Bellevue Psychiatric Center in New York City. A firm believer in the uniqueness of every individual, she has recently done advanced training with Mel Levine’s All Kinds of Minds Institute. She may be reached at LeslieASachs@gmail.com, or link with her at http://www.linkedin.com/in/lesliesachs.

Behaviorally Speaking – Why Retailers Need A Secure Trusted Application Base
by Bob Aiello

Target’s well-publicized disclosure that customers’ Personally Identifiable Information (PII) had been compromised was a stark reminder that retailers need to ensure that their systems are secure and reliable. The 2013 incident resulted in a settlement of over 39 million dollars, which Target had to pay banks and other financial services firms to compensate them for losses stemming from the cybersecurity breach. Target is not alone, as other retailers are stepping forward and admitting that they, too, have been “victims” of a cyber-attack. Target CEO Gregg Steinhafel called for chips on credit cards while also admitting that there was malware on his point-of-sale machines. Mr. Steinhafel ultimately lost his job after the incident, which underscores that corporate leaders are being held accountable for the security and reliability of their corporate systems. In Target’s case, the malware was not discovered on the retailer’s machines despite the use of malware scanning services. The fact that retailers rely upon security software to detect the presence of a virus, Trojan or other malware is exactly what is wrong with the manner in which these executives are looking at this problem.

The problem is that malicious hackers do not give us a copy of their code in advance so that virus protection software vendors can make security products capable of recognizing the “signature” of an attack. This means that we are approaching security reactively, only after the malicious code is already on our systems. What we need to be doing is building secure software in the first place, and to do this you need a secure trusted application base, which frankly is not all that difficult to accomplish. Creating secure software has more to do with the behavior and processes of your development, operations, information security and QA testing teams than with the software or technology you are using. We need to be building, packaging and deploying code in a secure and trusted way such that we know exactly what code should be on a server. Furthermore, we also need to be able to detect unauthorized changes that occur, either through human error or malicious intent. The reason that so much code is not secure and reliable is that we aren’t building it to be secure and reliable, and it is about time that we fixed this readily addressed problem. We discuss how to create an effective Agile ALM using DevOps in our new book.

Whether your software system is running a nuclear power plant, grandpa’s pacemaker or the cash register at your favorite retailer, software should be built, packaged and deployed using verifiable automated procedures that have built-in tests to ensure that the correct code was deployed and that it is running as it should. In the IEEE standards, this is known as a physical and functional configuration audit, and it is among the most essential configuration management procedures required by most regulatory frameworks, for very good reason. If you use Ant, Maven, Make or MSBuild to compile and package your code, you can also sign your code with cryptographic hashes and a private key, a technique commonly known as asymmetric cryptography. This isn’t actually all that difficult to do, and many build frameworks have these functions already built in. Plus, there are many reliable free and open source libraries available to help automate these tasks. It is unfortunate, not to mention rather costly, that many companies don’t take the time to implement these procedures and best practices as they rush their updates to market without the most basic security built in from the beginning of the lifecycle.
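
As a rough sketch of the signing step described above, here is one way it might look in Python using the pyca/cryptography package (an assumption on my part; any comparable library, or the signing support built into your build tool, would serve the same purpose). The build signs the packaged artifact with a private key so that anyone holding the matching public key can later verify that the deployed binary is byte-for-byte what the build produced.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def sign_artifact(private_key, artifact_path: str) -> bytes:
    """Sign the artifact's contents with the build team's private key."""
    with open(artifact_path, "rb") as f:
        data = f.read()
    return private_key.sign(data, padding.PKCS1v15(), hashes.SHA256())

def verify_artifact(public_key, artifact_path: str, signature: bytes) -> bool:
    """Verify at deploy time that the artifact has not been tampered with."""
    with open(artifact_path, "rb") as f:
        data = f.read()
    try:
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Example usage (in practice the key pair would come from a managed keystore):
# key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
# sig = sign_artifact(key, "myapp-2.3.1.zip")
# assert verify_artifact(key.public_key(), "myapp-2.3.1.zip", sig)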

We have had enough trading firms, stock exchanges and big banks suffer major outages that impacted their customers and shareholders. It is about time that proper strategies be employed to build in software reliability, quality and security from the beginning of the lifecycle instead of just trying to tack it on at the end – if there is enough time. The Obamacare healthcare.gov website has also been cited as having serious security flaws and there are reports that the security testing was skipped due to project timing constraints. The DevOps approach of building code through automated procedures and deploying to a production-like environment early in the lifecycle is essential in enabling information security, QA, testing and other stakeholders to participate in helping to build quality systems that are verifiable down to the binary code itself. If you have put in place the procedures needed to detect any unauthorized changes then your virus detection software should not need to detect the signature of a specific virus, Trojan or other malware.

Using cryptography, I can create a secure record of the baseline that allows me to proactively ascertain when a binary file or other configuration item has been changed. When I baseline production systems, I sometimes find that, to my surprise, there are files changing in the environment that I do not expect to be changing. Often, there is a good explanation. For example, some software frameworks spawn off additional processes and related configuration files to handle additional volume. This is particularly a problem with frameworks that are commonly used to write code faster. These frameworks are often very helpful, but sometimes they are not necessarily completely understood by the technology team using them. Baselining your codelines will actually help you understand and support your environment more effectively when you learn what is occurring on a day-to-day basis. There is some risk that you might have some false positives in which you think that you have a virus or other malware when in fact you can determine that there is a logical explanation for the changed files (and that information can be stored in your knowledge management system for next time).

The Target point-of-sale (POS) devices should have been provisioned using automated procedures that could also be used to immediately identify code on the machine (or networking device) that was not placed there by the deployment team. Identifying malware is great, but identifying that your production baseline has been compromised is infinitely more helpful. When companies start embracing DevOps best practices, large enterprise systems will become more reliable and secure, helping organizations better achieve their business goals!

Pick up a copy of our new book on Agile Application Lifecycle Management to learn more about exactly what you need to do in order to create the secure trusted application base!

Personality Matters- Type “A’s” in DevOps
By Leslie Sachs

Technology teams often attract some “interesting” personalities. Some of these folks may simply exhibit odd, perhaps eccentric, behaviors unrelated to their work responsibilities, while others may engage in behaviors that undermine the effectiveness of the team or, conversely, actually stimulate teamwork and contribute to success. The personalities of the people on your team certainly impact not only how happy you are to show up for work; they often directly impact the overall success (or failure) of the organization as well. But what happens when members of your team exhibit overly aggressive or downright combative behaviors, such as insisting that the team adopt their approach or showing a lack of teamwork and collaborative behavior? You may well find yourself on a team with some of these personalities one day. Successful managers give careful consideration to the personalities on their teams. Since you’re unlikely to change your colleagues’ MOs, it is wise to consider instead how everyone’s styles interact.

DevOps efforts can benefit from some typical “Type A” or “Type B” behaviors. First, a quick overview of the history: Dr. Meyer Friedman was a cardiologist, trained at Johns Hopkins University, who developed a theory that certain behaviors increase the risk of heart attacks [1]. Dr. Friedman developed this theory with another cardiologist, Dr. Ray Rosenman. These two physicians suggested that people who exhibited type “A” behaviors (TAB), including being overly competitive, hard driving and achievement-oriented, were at higher risk for developing coronary artery disease. This research, fascinating and not without some controversy in the medical establishment, also makes one ponder how other members of the team might react to interacting regularly with a type “A” personality on the team.

Software development is largely a “Type A” endeavor. The fact is that many highly-effective teams have lots of members who are very aggressive, intense and highly competitive. One important mitigating factor is that technology professionals also need to be right. You can exhibit whatever personality traits you want and the software just won’t work if you didn’t get it right. The next issue is that technology is so complex that few people, if any, in today’s global organizations, are able to work entirely on their own. High-performing teams often have specialists who depend upon each other and must collaborate. Even though some degree of competition may be common, frequent collaboration is just not optional. If you have ever been in a meeting with someone who just stuck to their point despite objections from other team members (and seemingly oblivious to any sense of logic) then you probably have seen this type of behavior. Still, many technology teams often struggle to overcome a fair amount of conflict and drama. In the midst of a highly confrontational meeting, it might be tempting to consider what life would be like with the more easy going “Type B” personalities. However, Harvey Robbins and Michael Finley point out that some teams don’t work well when their leaders are unwilling to fight for the team [2].

So, how exactly can one determine what is the right amount of “Type A” versus “Type B” behavior in a DevOps team? As noted in previous articles, there is a natural tension between the aggressive behavior of highly motivated software developers and the operations professionals who are charged with ensuring that we maintain consistent and continuously available services. Operations often focuses on maintaining the status quo while development presses hard for introducing new features based upon customer demand. It shouldn’t surprise you that both types of behavior are essential for a successful DevOps organization. You need to have aggressive personalities with a strong achievement-focused drive to create new features and improved systems. But you also need to temper this aggressiveness with the more balanced “Type B” behaviors that encourage review and careful analysis.

This balance is exactly what the burgeoning DevOps movement brings to the table. DevOps brings together the views of folks engaged in QA activities, software development, operations and also information security. Keep in mind that many people are attracted to each of these essential disciplines, in part, due to their personalities as well as by how these roles fit into the goals and objectives of their respective teams. DevOps brings together and improves communications between teams; it also brings together stakeholders with different viewpoints and often very different personalities. Successful teams harness this diversity to help ensure that their cross-functional teams are more effective. The most effective managers understand the basic personalities and communication styles that are often found in cross-functional teams and become adept at developing strategies which utilize these differences productively. With encouragement, competitive “Type A’s” and more laid-back “Type B’s” can learn to “play nice” so that each of their strengths are incorporated and contribute to overall team success!

References:
[1] Meyer Friedman, Type A Behavior: Its Diagnosis and Treatment (Prevention in Practice Library). Springer, 1996
[2] Harvey Robbins and Michael Finley, Why Teams Don’t Work – What Went Wrong and How to Make it Right, Peterson’s Pacesetter Books, 1995
[3] Bob Aiello and Leslie Sachs, Configuration Management Best Practices: Practical Methods that Work, Addison-Wesley Professional, 2011
[4] Bob Aiello and Leslie Sachs, Agile Application Lifecycle Management – Using DevOps to Drive Process Improvement, Addison-Wesley Professional, 2016

Behaviorally Speaking – Software Safety
by Bob Aiello

Software impacts our world in many important ways. Almost everything that we touch from the beginning to the end of our day relies upon software. Airline flight controls and nuclear power plants, for example, rely upon complex software that must be updated from time to time, tested and supported. Past incidents have taken down 911 emergency dispatch systems, and more than once such outages have slowed the response times of police, ambulance and fire department services. The software that enables the anti-missile defense system known as the Iron Dome in Israel has been credited with saving lives and underwent an extensive testing and validation effort. But the number of software glitches impacting trading systems and other complex financial systems could cause us to question whether our software configuration management capabilities are really where they should be.

Many years ago, I was interviewed by a very smart technology manager for a position supporting a major New York based stock exchange. I went into the interview feeling pretty confident that I had the requisite skills, and I had actually been recommended by a manager whom I had worked for previously at another company. I was surprised when, during the interview, I was asked a very pointed question about my capabilities. The manager asked me to imagine that I was supporting the software for a life support system which a loved one depended upon. He then asked me if I was confident that I would never make a mistake that could potentially impact the person (presumably my child, parent or spouse) who was dependent upon the life support system. I was pretty shocked at this question being posed during a job interview, but I managed to stay positive and told the manager that my methods worked and that, yes, I would trust them on a life support system that could potentially impact someone I cared about. But the question stayed with me for years to come. The truth is that someone has to upgrade the software used by life support systems, and I am not completely confident that our industry has completely reliable methods to handle this work.

Some time ago I gave a full-day class at the Nuclear Information Technology Strategic Leadership (NITSL) conference. NITSL is an industry group of nuclear generation utilities that exchanges information related to information technology management and quality issues. I am pleased to say that these colleagues valued software safety to such a degree that it was an ingrained aspect of their culture and impacted every aspect of their daily work.

From a configuration management perspective, the first step in software safety must be to establish the trusted base, from the systems software to the applications that are integrated with the hardware devices. The trusted base must start from the lowest levels of the system, including the firmware, the operating system and even the hardware itself. Applications must be built, packaged and deployed deterministically to the trusted base in a manner that ensures that we know exactly what code is to be deployed and that we can verify that the correct code was indeed deployed to the target environment. Equally important is verifying that no unauthorized changes have occurred and that the trusted base is verifiable and fully tested. If you had a pacemaker that required software updates, it would obviously be essential to rely upon a trusted base that enables the pacemaker to function reliably and correctly.

Past outages at major stock exchanges and trading firms have shown that many complex financial systems do not have an established trusted computing base, and this has directly resulted in very steep losses for some firms and impacted thousands of people. The good news is that we actually do know how to build, package and deploy software reliably. We also know how to verify that the right code was deployed and that there are no unauthorized changes. These best practices are precisely what we discuss in the context of application build, package and deployment, including DevOps, although many firms struggle with their successful implementation. The key to success is to start from the beginning.

In my consulting work, I often find that companies actually do know what has to be done to reliably build, package and deploy software successfully. The problem is that they often begin doing the right thing much too late in the application lifecycle. Deming teaches us that quality must be built in from the beginning. The same is especially true when considering software safety.

Successful build and release engineers understand that smoke testing after a deployment is essential for a successful build and release process. When the software matters, you need to be verifying and validating the code from the very beginning of the lifecycle to the end. This means that your build stream should include unit testing, functional and non-functional testing (e.g., performance testing) and, of course, comprehensive regression testing. Good configuration management practices allow you to build a version of the code that can be instrumented for comprehensive code analysis and exhaustive automated testing. The truth is that these best practices are most successful when they are supported from the very beginning of the lifecycle and are a fundamental part of the culture of the organization. Don’t forget that the build and deploy pipeline itself must also be verifiable and trusted.
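
As a toy illustration of such a build stream, the sketch below (Python, with hypothetical stage commands that you would replace with your real build, test and scan tools) runs each verification stage in order and fails fast, so a broken build never reaches the later, more expensive stages.

import subprocess
import sys

# Hypothetical stage commands; substitute your actual build and test tooling.
STAGES = [
    ("unit tests", ["pytest", "tests/unit"]),
    ("functional tests", ["pytest", "tests/functional"]),
    ("performance tests", ["python", "perf/run_benchmarks.py"]),
    ("regression tests", ["pytest", "tests/regression"]),
]

def run_pipeline() -> None:
    """Run every verification stage, failing fast on the first error."""
    for name, command in STAGES:
        print(f"--- running {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"FAILED at stage: {name}")
            sys.exit(result.returncode)
    print("All verification stages passed.")

if __name__ == "__main__":
    run_pipeline()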

When I create an automated build and deployment system, I start from the ground up, verifying the operating system itself and all of the system dependencies. I only trust the trusted base if I am able to verify it on a continuous basis, and this becomes, for me, part of environment management (and monitoring). For example, the Center for Internet Security (CIS) provides an excellent consensus standard that explains in great detail exactly how to configure a secure Linux operating system. You will also find that the consensus standard provides example code for verifying that the security baseline is configured as it should be. Successful security engineering involves both configuring the operating system correctly and verifying on an ongoing basis that it stays configured in a secure way. This is fundamentally a core aspect of environment monitoring and is essential for ensuring the trusted base.
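
To give a flavor of what continuous verification of the trusted base can look like, here is a minimal Python sketch; the specific checks are illustrative examples of my own choosing rather than items taken from the CIS benchmark itself, and the expected values vary by distribution. A real implementation would cover the full consensus standard and feed its findings into your environment monitoring.

import os
import stat
from pathlib import Path

# Illustrative expectations only; a real baseline comes from the CIS benchmark
# for your specific distribution.
EXPECTED_MODES = {
    "/etc/passwd": 0o644,
    "/etc/shadow": 0o640,
}

def check_file_modes() -> list:
    """Report files whose permissions drift from the recorded baseline."""
    findings = []
    for path, expected in EXPECTED_MODES.items():
        actual = stat.S_IMODE(os.stat(path).st_mode)
        if actual != expected:
            findings.append(f"{path}: mode {oct(actual)} != expected {oct(expected)}")
    return findings

def check_sshd_root_login(config_path: str = "/etc/ssh/sshd_config") -> list:
    """Flag the sshd configuration if root login is not explicitly disabled."""
    for line in Path(config_path).read_text().splitlines():
        parts = line.strip().split()
        if parts and parts[0].lower() == "permitrootlogin":
            if len(parts) > 1 and parts[1].lower() == "no":
                return []
            return [f"{config_path}: PermitRootLogin is not set to 'no'"]
    return [f"{config_path}: PermitRootLogin is not set at all"]

if __name__ == "__main__":
    for finding in check_file_modes() + check_sshd_root_login():
        print("BASELINE DRIFT:", finding)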

Software safety requires that systems be built and configured in a secure and reliable way. Changes need to be tracked and verified, which is essentially the purpose of the physical configuration audit. There’s much more to software safety, and I hope that you will contact me to share your views on software safety best practices and get involved with the community-based efforts to update software safety standards!

Call for Articles!

Hi Everyone!

I am excited to invite you to get involved with the Agile ALM Journal by contributing your own articles on Agile ALM and DevOps, along with all aspects of software and systems development. The Agile ALM Journal provides guidance on Application Lifecycle Management, which means that we have a strong focus on DevOps, Configuration Management and software methodology throughout the entire ALM. Articles are typically 900–1,200 words and should explain how to do some aspect of software methodology. Contact me directly to get involved with submitting your articles, and I will help you get started, form your ideas and edit your article for publication.

Common topics include:

  • Software development approaches including agile
  • DevOps throughout the entire software process
  • Configuration Management (including the CMDB)
  • Build and Release Engineering
  • Source Code Management including branching and streams
  • Deployment Engineering (DevOps)
  • Continuous Testing
  • Development in the Cloud
  • Continuous Integration and Deployment
  • Environment Management
  • Change Management

and much more!

Bob Aiello
Editor
bob.aiello@ieee.org
http://www.linkedin.com/in/BobAiello