Archive for the ‘test automation’ Category
Regular reader Gianni Pucciani wrote me recently to discuss risk based testing in an Agile world:
Risk based testing has been discussed in several places, and from different perspectives, including in your books and online resources. There is little doubt about the benefits it can bring. What is still missing, in my opinion, is a good discussion of adopting risk based testing in an agile environment.
In this case risk identification and assessment should be performed at the beginning of each sprint, analyzing the risks connected to the features that will be developed in the coming sprint. Another point worth mentioning is the importance of test automation for regression testing, but what about a situation where most of the tests are manual? I would like to hear from you and the readers of your blogs if you have any experience/suggestions to share.
Yes, Gianni, this is exactly how risk based testing works in an Agile environment. Here’s a summary of a risk based testing process document we created for a client who uses the Scrum methodology:
1. At the beginning of the planning period for a release, identify the participants for the project’s risk-based analysis team.
2. Schedule risk meetings (90-120 minutes each).
3. Prepare interview documents.
4. Hold interviews with all risk identification participants.
5. Analyze risk items.
6. Normalize risks.
7. Review project Quality Risk Analysis with stakeholders.
8. Apply RPN values to test planning and test case development.
9. Review and revise the risk analysis at major project milestones: when release backlogs are being determined and when sprint backlogs are being revised.
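Steps 5 through 8 of the process above can be sketched in a few lines of code. This is a hypothetical illustration, not taken from any client document: the risk items and the 1-5 likelihood/impact scales are invented, and the RPN (risk priority number) is computed here as likelihood times impact, one common convention.

```python
# Score quality risks and derive RPN values to drive test planning.
# Risk items and the 1-5 scales below are illustrative assumptions.

risks = [
    # (risk item, likelihood 1-5, impact 1-5)
    ("Payment fails on concurrent checkout", 4, 5),
    ("Report totals rounded incorrectly", 3, 3),
    ("Help text misaligned on mobile", 2, 1),
]

# RPN = likelihood x impact; the highest-RPN risks get the most
# test effort and the earliest test cases.
prioritized = sorted(
    ((item, likelihood * impact) for item, likelihood, impact in risks),
    key=lambda pair: pair[1],
    reverse=True,
)

for item, rpn in prioritized:
    print(f"RPN {rpn:2d}: {item}")
```

In step 8, the sorted list drives both the order of test case development and the depth of coverage each risk receives.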
In terms of manual regression testing: given all the great tool support for test automation in Agile environments, I’m not sure why an organization would choose to rely on manual regression tests. Automated regression testing is the best way to manage regression risk in an Agile environment.
Many of us got into the computer business because we were fascinated by the prospect of using computers to build better ways to get work done. (That and the almost magical way we could command a complex machine to do something simply through the force of words coming off our fingers, into a keyboard, and onto a screen.) Ultimately, those of us who consider ourselves software engineers, like all engineers, are in the business of building useful things.
Of course, engineers need tools. Civil engineers have dump trucks, trenching machines, and graders. Mechanical engineers have CAD/CAM software. And we have integrated development environments (IDEs), configuration management tools, automated unit testing and functional regression testing tools, and more. Many great software testing tools are available, and some of them are even free. But just because you can get a tool doesn’t mean that you need the tool.
When you get beyond the geek factor of some tool, you come to the practical questions: What is the business case for using a tool? There are so many options, so how do I pick one? How should I introduce and deploy the tool? How can I measure the return on investment for the tool? This article will help you uncover answers to these questions as you contemplate tools.
Let’s start with the business case. Remember: without a business case, it’s not a tool, it’s a toy. Often, the business case comes down to one or more of the following:
- There’s no way to perform some activity without a tool, or, if it is done without a tool, it won’t be done very well. If the benefits and opportunities of performing that activity exceed the costs and the risks associated with the tool, there’s a business case.
- The tool will allow you to substantially accelerate some activity you need to perform as part of some project or operation. If that activity is on the critical path for completion of that project or operation, and the benefits and opportunities of accelerating the completion of that project or operation exceed the costs and the risks associated with the tool, there’s a business case.
- The tool will allow you to reduce the manual effort associated with carrying out some activity. If the benefits and opportunities from reducing the effort (over some period of time) exceed the costs and the risks associated with the tool (including the effort associated with acquiring, implementing, and maintaining the tool and its various enabling components), there’s a business case.
There can be other business cases, but one or more of these will frequently apply. Sometimes the business case masquerades as something else, such as improving consistency of tasks or reducing repetitive work, but notice that these two are actually the first and last bullet items above, respectively, if you consider them carefully.
Once you’ve established a business case, you can select a tool. With the internet, it is easy to find candidate tools. Before you start that, consider the fact that you are going to live with the tool you select for a long time—if it works—and potentially spend a lot of money on it. So, I recommend that you consider tool selection as a special project, and manage it that way. Form a team to carry out the tool selection. Identify requirements, constraints, and limitations. At this point, start searching the Internet to prepare an inventory of suitable tools. If you can’t find any, perhaps you can find some open source or freeware constituent pieces that could be used to build the tool you need. Assuming you do find some candidate tools, you should perform an evaluation and, ideally, run a proof-of-concept with your actual business problem. (Remember, the vendor’s demo will always work, but you don’t learn much from a demo about how the tool will solve your problems.) With that information in hand, you’re ready to choose a tool.
Once you’ve chosen the tool, it’s time to pilot the tool and then deploy it. In the pilot, select a project that can absorb the risk associated with the piloting of a tool. Your goals for the pilot should include the following:
- To learn more about the tool and how to use it.
- To adapt the tool, and any processes associated with it, to fit your other tools and your organization.
- To devise standard ways of using, managing, storing and maintaining the tool and its assets.
- To assess the return on investment (more on that later).
Based on what you learned from the pilot, you’ll want to make some adjustments. Once those adjustments are in place, you’ll want to proceed to deployment of the tool. Here are some important ideas to remember for deployment:
- Deploy the tool to the rest of the organization incrementally, rather than all at once, if at all possible. In some cases, as for governance tools required for regulatory compliance, you might not have this luxury, but be sure to manage the risks associated with a rapid roll-out if you must do so.
- Adapt and improve the software engineering processes to fit use of the tool. The tool should effect changes in your processes; otherwise, how could you become more effective and efficient?
- Provide training and mentoring for new users. Be sensitive to the possible learning-curve issues that a new tool can create, and manage the risks that would be created by misuse of the tool.
- Define tool usage guidelines. Some simple explanations—say on a company wiki or in a recorded internal webinar-style lunch-and-learn—can really help people use the tool properly.
- Learn ways to improve use of the tool continuously. Especially in early deployment, you’ll find opportunities and problems the pilot didn’t reveal. Be ready to address those, and to gather a repository of lessons learned (perhaps again in wikis or recorded webinars).
Finally, let’s address this question of return on investment (ROI). For process improvements (including introduction of tools), we can define ROI as follows:
ROI = (net benefit of improvement)/(cost of improvement)
This question of net benefit returns us to where we started: business objectives. Any meaningful measure of return on investment has a strong relationship with the objectives initially established for the tool. Let’s look at an example. Suppose you have developers who currently use manual approaches for code integration and unit testing. This consumes 5,000 person-hours per year. With the tool, one developer will spend 50% of their time as integration/test toolsmith, using Hudson and other associated tools to automate the process. By doing so, developer effort for this process will shrink to 500 person-hours (plus the 50% of the person-year for the toolsmith). So, ROI is:
ROI = (net benefit from investment)/(cost of investment) = (5000 - (500 + 1000))/1000 = 350%
Notice that, in this case, since the tools are free, I did the calculation entirely using person hours. Sometimes, with commercial tools, you have to perform this whole calculation in dollars or whatever your local currency is.
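The worked example above fits in a few lines. The figures match the text; the only added assumption is a 2,000-hour person-year, so that the toolsmith’s 50% works out to the 1,000 person-hours used in the calculation.

```python
# ROI for the unit-test/integration automation example in the text.
# Assumes a 2,000-hour person-year; all other figures are from the text.

def roi_percent(net_benefit, cost):
    """ROI as a percentage: net benefit divided by cost."""
    return 100 * net_benefit / cost

manual_effort = 5000                 # person-hours per year, manual process
tool_effort = 500                    # person-hours per year with the tool
toolsmith = 0.5 * 2000               # 50% of a 2,000-hour person-year

cost = toolsmith                     # the tools are free; cost is toolsmith time
net_benefit = manual_effort - (tool_effort + toolsmith)  # hours saved

print(f"ROI = {roi_percent(net_benefit, cost):.0f}%")
```

With commercial tools, the same function works if you feed it currency amounts instead of person-hours.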
As software engineers, we want to build useful things, and tools can make us more effective and efficient in doing so. Before we start to use a tool, we should understand the business objectives the tool will promote. Understanding the business case will allow us to properly select a tool. With the tool selected we can then go through one or more pilot projects with the tool, followed by a wider deployment of the tool. As we deploy—and after we deploy—we should plan to measure the return on investment, based on the business case. By following this simple process, you can not only achieve success with tools—you can prove it, using solid ROI numbers.
Often, software engineering processes–including but not limited to software testing processes–are made more efficient by tools, or in some cases are only enabled by the use of a tool. When the tool is missing, the process breaks down. The dependency–and thus the breakdown–might not be as obvious as shown in the picture below; sometimes you have to think harder about the problem.
What Is Missing?
I’ve made some comments, both on this blog and in various speeches/webinars/courses, about Agile development processes and how they affect testing. However, I haven’t addressed the entire set of Agile principles at once. I haven’t seen others who I would call “Agile agnostics” do so either. (By “Agile agnostics” I mean those who do not cast themselves as proponents for or opponents of Agile.) So, in this post, I make some test-centric observations about the Agile principles from the Agile manifesto. These observations are based on my experiences working on Agile projects and working with Agile teams.
- “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” Like any iterative lifecycle, Agile breaks the development work to be done into iterations. Each iteration should create a collection of features that are potentially valuable to customers. I say “potentially” because not all iterations do result in the delivery of features to customers, but, when practiced properly, each iteration’s features could be delivered to customers; i.e., each iteration is sufficiently complete and of sufficient quality. This focus on regularly assuring quality (in the most holistic meaning of the term “quality assurance”) is helpful to the test team.
- “Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.” As a practical matter, this principle is one of the most challenging for testing, because there is a point in each iteration at which changes become highly disruptive in terms of complete testing of the change and the associated possible regressions. Balancing quality and agility in the face of desired changes is important.
- “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.” Most Agile teams and projects seem to have settled on iterations of between two to four weeks. At the end of each iteration, as mentioned above, the software should work and be potentially deliverable to customers. This principle is challenging to testers, because of the short timeframes for test preparation and execution, but it also provides the benefit of limiting the number of features delivered for testing at any one time. Short iterations also help to contain the number of bugs that could accumulate in the code prior to test execution, a distinct advantage to the test team.
- “Business people and developers must work together daily throughout the project.” This principle, while laudable in theory, is difficult. Often, surrogates represent the users or customers, and business people are too busy to participate daily. In addition, the absence of the word “testers” from the list above can create challenges. However, accessibility of the business stakeholders to the project team is certainly helpful to the testers, and good Agile practices can make it easier for test teams to resolve questions about expected behavior.
- “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.” This principle simply re-states what is called “theory Y management.” Simply put, the theory is that people are essentially self-motivated and want to get work done. Tom DeMarco and Tim Lister, in their books on management, are probably the leading current proponents of theory Y management in the software engineering discipline. From a testing point of view, to the extent that individuals are motivated not simply to produce large volumes of features, but to produce quality features, this principle supports the testing process when realized.
- “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.” Certainly, no one can argue with the idea that excessive documentation can reduce the efficiency of a project. However, this principle can pose some challenges for the test team if the face-to-face conversations happen when testers are not in the room, thus leaving them disconnected from decisions about how the system should work, what features it should contain, etc. The use of brief daily meetings to re-synchronize the team can help manage this challenge, but it’s essential that these meetings not expand to the point that they become simply a different form of inefficiency. It’s also important that Agile teams remember that “less documentation” does not mean “no documentation;” essential documentation, including for testing, must still be prepared.
- “Working software is the primary measure of progress.” While this principle is also laudable from a testing point of view—most testers would accept that working software speaks for itself—this principle is sometimes stretched to the point that smart metrics, including metrics of quality and testing progress, are abandoned. Good practices in terms of testing and quality metrics, measurement and management apply in Agile projects as with any other project, though the specific forms of the metrics tend to differ from the metrics used on sequential lifecycles.
- “Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.” Working a normal workweek, without overtime, weekends, excessive pressure or stress, is certainly a desirable goal. In some cases, I’ve seen Agile teams avoid the “death march” behaviors that arise at the end of some projects when large feature backlogs and enormous numbers of bugs overwhelm the teams. However, I’ve seen plenty of situations where overtime is a regular feature at the end of every iteration, and where this burden falls disproportionately on testers. This principle remains under-fulfilled in practice, to the detriment of testers.
- “Continuous attention to technical excellence and good design enhances agility.” Music to the testers’ ears, indeed. Of course, some programmers find it hard to make the transition from big chunks of development work, done on rushed schedules (with the concomitant quality compromises) to the smaller chunks of work, done carefully, that are proposed by this principle. I hope to see better realization of this principle on actual projects as software engineering professionals internalize the practice of Agile development.
- “Simplicity—the art of maximizing the amount of work not done—is essential.” For testers, this principle is also a joy to hear, and to see. Simplicity implies a small set of high-quality, working features, the opposite of the untestable, complex, sprawling applications that are so hard to cover in any reasonable sense. Again, though, this is a major mental shift for many software engineering professionals, and this principle is under-realized in practice.
- “The best architectures, requirements, and designs emerge from self-organizing teams.” As with the theory Y management topic discussed earlier, this is an assertion about the nature of human psychology and capabilities that is beyond the scope of this post. However, I have observed that Agile processes don’t always scale smoothly to large, complex, and especially distributed projects and teams. I have also seen and heard of instances where the reality of “emergent design” and “emergent architecture” was considerably less satisfactory than this principle might lead us to expect; the cliché about “painting oneself into a corner” can apply. With complex applications, testers should watch carefully for problems with performance, maintainability, and reliability, because these can reflect fundamental design and architecture problems that are difficult to fix after too many iterations have gone by.
- “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.” Of course, project retrospectives are hardly an Agile innovation, but all testers would agree that such periodic reflection is a great idea. Testers should encourage the actualization of this principle, and ensure that becoming more effective at producing quality is part of the agenda.
These observations are reflections on a work-in-progress. Software engineering teams are still learning how to apply Agile approaches. Agile approaches have not (yet?) been successfully applied to all types of projects or products. Some tester challenges remain to be surmounted with respect to Agile development. However, Agile methodologies are starting to show promising results in terms of both development efficiency and quality of the delivered code.
So, what do you think about Agile methodologies and testing? I’d be happy to discuss this topic with interested readers of this blog.
We have spent the last couple years in an economic downturn, and no one seems to know how much longer it will last. For the foreseeable future, management will exhort testers and test teams to do more with less. A tedious refrain, indeed, but you can improve your chances of weathering this economic storm if you take steps now to address this efficiency fixation. In this blog, I’ll give you four ideas you can implement to improve test efficiency. All can show results quickly, within the next six months. Better yet, none require sizeable investments which you could never talk your managers into making in this current economic situation. By achieving quick, measurable improvements, you will position yourself as a stalwart supporter of the larger organizational cost-cutting goals, always smart in a down economy.
Know Your Efficiency
The first idea—and the foundation for the others—is that you should know your efficiency to know what to improve. All too often, test teams have unclear goals. Without clear goals, how can you measure your efficiency? Efficiency at what? Cost per what? Here are three common goals for test teams:
- Find bugs
- Reduce risk
- Build confidence
You should work with your stakeholders—not just the people on the project, but others in the organization who rely on testing—to determine the right goals for your team. With the goals established, ask yourself, can you measure your efficiency in each area? What is the average cost of detecting and repairing a bug found by your test team, and how does that compare with the cost of a bug found in production? (I describe this method of measuring test efficiency in detail in my article, “Testing ROI: What IT Managers Should Know.”) What risks do you cover in your testing, and how much does it cost on average to cover each risk? What requirements, use cases, user stories, or other specification elements do you cover in your testing, and how much does it cost on average to cover each element? Only by knowing your team’s efficiency can you hope to improve it.
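The "average cost of a bug" comparison described above is a simple ratio, but it is worth seeing the arithmetic. The sketch below is purely illustrative: all the dollar figures and bug counts are invented, and in practice each input would come from your defect tracking and accounting data.

```python
# Compare the average cost per bug found in test versus in production.
# All figures below are hypothetical, for illustration only.

def cost_per_bug(total_cost, bugs_found):
    """Average cost of detecting and repairing one bug."""
    return total_cost / bugs_found

# Annual test team cost, and bugs the team found and got fixed:
test_cost = cost_per_bug(total_cost=200_000, bugs_found=400)

# Cost of handling field-reported bugs, and how many escaped to production:
prod_cost = cost_per_bug(total_cost=150_000, bugs_found=30)

print(f"Per bug in test:       ${test_cost:,.0f}")
print(f"Per bug in production: ${prod_cost:,.0f}")
```

The same ratio works for the other goals: cost per risk covered, or cost per requirement, use case, or user story covered.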
Institute Risk-Based Testing
I mentioned risk reduction as a key testing goal. Many people agree, but few people can speak objectively about how they serve this goal. However, those people who have instituted analytical risk-based testing strategies can. Let me be clear on what I mean by analytical risk-based testing. Risk is the possibility of a negative or undesirable outcome, so a quality risk is a possible way that something about your organization’s products or services could negatively affect customer, user, or stakeholder satisfaction. Through testing, we can reduce the overall level of quality risk. Analytical risk-based testing uses an analysis of quality risks to prioritize tests and allocate testing effort. We involve key technical and business stakeholders in this process. Risk-based testing provides a number of efficiency benefits:
- You find the most important bugs earlier in test execution, reducing risk of schedule delay.
- You find more important bugs than unimportant bugs, reducing the time spent chasing trivialities.
- You provide the option of reducing the test execution period in the event of a schedule crunch without accepting unduly high risks.
You can learn more about how to implement risk-based testing in Chapter 3 of my book, Advanced Software Testing: Volume II. You can also read the article I co-wrote with an RBCS client, CA, on our experiences with piloting risk-based testing at one of their locations.
Tighten Up Your Test Set
With many of our clients, RBCS assessments reveal that they are dragging around heavy, unnecessarily large regression test sets. Once a test is written, it goes into the regression test set, never to be removed. However, in the absence of complete test automation, this leads to inefficient, prolonged test execution periods. The scope of the regression test work will increase with each new feature, each bug fix, each patch, eventually overwhelming the team. Once you have instituted risk-based testing, you can establish traceability between risks and test cases, identifying those risks which you are over-testing. You can then remove or consolidate certain tests. You can also apply fundamental test design principles to identify redundant tests. We had one client that, after taking our Test Engineering Foundation course, applied the ideas in that course to reduce the regression test set from 800 test cases to 300 test cases. Since regression testing made up most of the test execution effort for this team, you can imagine the kind of immediate efficiency gain that occurred.
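The traceability analysis described above is mechanical once you have the risk-to-test mapping. The sketch below is a hypothetical illustration: the test and risk identifiers are invented, and the "more than three tests per risk" threshold is an arbitrary stand-in for whatever coverage target your risk analysis sets.

```python
# Invert a test-to-risk traceability mapping to spot over-tested risks,
# i.e. candidates for consolidation or removal. IDs and the threshold
# below are illustrative assumptions.

from collections import defaultdict

traceability = {
    "TC-001": ["RISK-PAY"],
    "TC-002": ["RISK-PAY"],
    "TC-003": ["RISK-PAY"],
    "TC-004": ["RISK-PAY"],
    "TC-005": ["RISK-RPT"],
}

tests_per_risk = defaultdict(list)
for test, risks in traceability.items():
    for risk in risks:
        tests_per_risk[risk].append(test)

# Flag risks covered by more tests than the coverage target warrants.
over_tested = {r: ts for r, ts in tests_per_risk.items() if len(ts) > 3}
print(over_tested)
```

The same inverted mapping also reveals the opposite problem: risks with no tests traced to them at all.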
Introduce Lightweight Test Automation
I mentioned complete test automation above. That’s sometimes seen as an easy way to improve test efficiency. However, for many of our clients, that approach proves chimerical. The return on the test automation investment some of our clients see is low, zero, or even negative. Even when the return is strongly positive, for many traditional forms of GUI-based test automation, the payback period is too far in the future and the initial investment is too high. However, there are cheap, lightweight approaches to test automation. We helped one of our clients, Arrowhead Electronic Healthcare, create a test automation tool called a dumb monkey. It was designed and implemented using open source tools, so the tool budget was zero. It required a total of 120 person-hours to create. Within four months, it had already saved almost three times that much in testing effort. For more information, see the article I co-wrote with our client.
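The Arrowhead tool itself is not public, but the "dumb monkey" idea is simple enough to sketch: fire random inputs at the system under test and record anything that crashes. Everything below is a stand-in, including the toy target function with a deliberately hidden failure:

```python
# A minimal "dumb monkey" sketch: random inputs, crash detection.
# The target function and input range are invented for illustration.

import random

def system_under_test(value):
    # Stand-in for the real interface being exercised; it hides
    # a crash when value == 42.
    return 100 / (value - 42)

random.seed(1)                  # fixed seed so failures are reproducible
failures = []
for _ in range(1000):
    value = random.randint(0, 50)
    try:
        system_under_test(value)
    except Exception as exc:
        failures.append((value, repr(exc)))

print(f"{len(failures)} failing inputs found")
```

The monkey knows nothing about expected outputs; it only detects crashes and hangs, which is exactly why it is cheap to build and maintain.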
In this blog, I’ve shown you four ideas you can implement quickly to improve your efficiency. Start by clearly defining your team’s goals, then derive efficiency metrics for those goals and measure your team now. With that baseline measurement, move on to put risk-based testing in place, ensuring the right focus for your effort. Next, apply risk-based testing and other test fundamentals to reduce the overall size of your test set while not increasing the level of regression risk on release. Finally, use dumb monkeys and other lightweight test automation tools to tackle manual, repetitive test tasks, saving your people for other more creative tasks. With these changes in place, measure your efficiency again six months or a year from now. If you are like most of our clients, you’ll have some sizeable improvements to show off for your managers.
Businesses spend millions of dollars annually on software test automation. A few years back, while doing some work in Israel (birthplace of the Mercury toolset), someone told me that Mercury Interactive had a billion dollars in a bank in Tel Aviv. Probably an urban legend, but who knows? Mercury certainly made a lot of money selling tools over the years, which is why HP bought them.
That’s nice for Mercury and Hewlett Packard, but so what, right? I don’t know about your company, but none of RBCS’ clients buy software testing tools so that they can help tool vendors make money. Our clients buy software testing tools because they expect those tools will help them make money.
Unfortunately, it’s often the case that there’s a real lack of clarity in terms of the business case for software test automation at some organizations. Without a clear business case, there’s no clear return on investment. This leads to a lack of clear success (or failure) of the automation effort. Efforts that should be cancelled continue too long, and efforts that should continue are cancelled.
So, one of the pre-requisites of software test automation success is a clear business case, leading to clear measures of success. Here are the top three business cases for software test automation that we’ve observed with our clients:
- Automation is the only practical way to address some critical set of quality risks. The two most common examples are reliability and performance, which generally cannot be tested manually.
- Automation is used to shorten test execution time. This is particularly true in highly competitive situations where time-to-market is critical, and at the same time customers have a low tolerance for quality problems.
- Automation is used to reduce the effort required to achieve a desired level of quality risk. This is often the case in large, complex products where regression, especially regression across interconnected features, is considered unacceptable.
This list is not exhaustive, and, in some cases, two or more reasons may apply. One of the particularly nice aspects of each of these three business cases is that the return on investment is clearly quantifiable. That makes achieving success in one or more of these areas easy to measure and to demonstrate. It also makes it easy to determine which tests should be automated and which should not.
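For the effort-reduction business case, the automate-or-not decision for an individual test is a break-even calculation. The sketch below is one simple way to frame it; the effort figures are hypothetical, and a real model would also include ongoing script maintenance per run.

```python
# Decide whether a test is worth automating: automate when the manual
# effort you expect to spend over the tool's remaining life exceeds
# the cost of automating. All figures below are illustrative.

def worth_automating(manual_minutes, runs_remaining, automation_minutes):
    """True if expected manual effort exceeds the automation cost."""
    return manual_minutes * runs_remaining > automation_minutes

# A 30-minute manual regression test, run in each of 50 remaining
# sprints, versus 480 minutes (8 hours) to automate it:
print(worth_automating(30, 50, 480))

# The same test on a product with only 10 sprints left:
print(worth_automating(30, 10, 480))
```

Tests that fail this check, or that serve none of the three business cases above, are the ones to leave manual.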