Risk based testing is a phrase we hear often in testing. Many people know many facts and have many opinions about it. The trouble is, in many cases, those facts are actually wrong or based on a poor understanding of risk based testing, and thus many opinions about risk based testing are incorrect. There are many risk based testing fallacies. In this first post (in what is likely to be a series of occasional posts on this topic), I’ll start with five frequently encountered fallacies.
- Risk based testing is just a method to cut corners (part 1). This whole idea for a series of blog posts came about when someone said to me, “Well, risk based testing means not testing everything.” Well, right. So does every kind of testing. There are an infinite number of tests you could run, and you are going to select a finite subset from that infinite set. The only question is whether you are going to select that subset intelligently, with an understanding of the likelihood and impact associated with potential problems. Risk based testing allows you to do that.
- Risk based testing is just a method to cut corners (part 2). Sometimes when people say this, they mean that risk based testing does not cover all the requirements. Unfortunately, some people have promoted an approach, which they call risk based or risk driven testing, that involves exactly that: selecting which requirements not to test based on risk. While in some cases it is appropriate to skip testing some requirements, as a general rule we want to cover not only the important risks but all the requirements (at least those which are specified). You can do so by ensuring that every requirement has at least one associated risk item and at least one associated test case (the first sketch after this list shows the idea). This is an example of a blended strategy of risk based and requirements based testing.
- Risk based testing is all about technical risk. Some people have put forward the idea that risk based testing is a form of reactive testing where we wait to see what the system does (i.e., no planning, analysis, or up-front test development), then use experience, defect taxonomies, and other aids to predict and find as many bugs as we can in a limited period of time. To me, that approach is just a big geeky bug hunt; it does not cover all of the strategic objectives most organizations have for test teams. Yes, we should consider defect likelihood when analyzing quality risks, but we should consider the impact of potential defects as well.
- Risk based testing can be done entirely by the test team. Those who believe this fallacy simply analyze requirements or other information, in isolation from other project and product stakeholders, and then test based on that analysis. Sorry, but that’s just a risk-aware form of requirements based testing. What makes risk based testing truly powerful is the consideration of input from a cross-functional team of project and product stakeholders. When we help clients start doing risk based testing, I always emphasize that getting the right quality risk analysis team together is more important than having the right process or templates.
- Risk based testing only influences selection of test cases. It’s true that one major benefit of risk based testing is the smart selection of test cases. However, with risk based testing you can also report test results in terms of residual risk, which makes test status truly clear to non-test project team members. You can also run tests in risk priority order, which maximizes the likelihood of finding important bugs first. And, if you do get squeezed for time, you can triage your test cases based on risk, ensuring that the most important tests get run (see the second sketch after this list).
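To make the blended strategy from the second fallacy concrete, here is a minimal sketch in Python. The requirement IDs, risk items, and test cases are made-up illustrations, not anyone’s real data, and in practice this traceability usually lives in a test management tool; the point is simply the coverage rule that every requirement needs at least one risk item and at least one test case.

```python
# Minimal sketch of the blended risk based / requirements based strategy.
# All identifiers below are hypothetical, for illustration only.

requirements = {"REQ-1", "REQ-2", "REQ-3"}  # illustrative requirement IDs

# Risk items and test cases, each traced to the requirements they cover.
risk_items = {
    "RISK-01": {"covers": {"REQ-1"}},
    "RISK-02": {"covers": {"REQ-2", "REQ-3"}},
}
test_cases = {
    "TC-001": {"covers": {"REQ-1", "REQ-2"}},
    "TC-002": {"covers": {"REQ-3"}},
}

def uncovered(reqs, items):
    """Return the requirements not covered by any item's 'covers' set."""
    covered = set().union(*(item["covers"] for item in items.values()))
    return reqs - covered

missing_risk = uncovered(requirements, risk_items)
missing_tests = uncovered(requirements, test_cases)

if missing_risk or missing_tests:
    print("Requirements without a risk item:", sorted(missing_risk))
    print("Requirements without a test case:", sorted(missing_tests))
else:
    print("Every requirement has at least one risk item and one test case.")
```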
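And here is a similar sketch of risk priority ordering and triage, as mentioned in the last fallacy. The likelihood-times-impact score is just one common convention (not the only way to rate risk), and the test cases, ratings, and time budget are invented for illustration.

```python
# Minimal sketch of running tests in risk priority order and triaging them
# under time pressure. All data and the scoring scheme are illustrative.

# Each test case is traced to a quality risk with likelihood and impact
# ratings (here on a 1-5 scale, higher means worse) plus an estimated duration.
test_cases = [
    {"id": "TC-001", "likelihood": 4, "impact": 5, "minutes": 30},
    {"id": "TC-002", "likelihood": 2, "impact": 5, "minutes": 20},
    {"id": "TC-003", "likelihood": 3, "impact": 2, "minutes": 10},
    {"id": "TC-004", "likelihood": 1, "impact": 1, "minutes": 15},
]

def risk_score(tc):
    """One common convention: risk priority = likelihood x impact."""
    return tc["likelihood"] * tc["impact"]

# Run in risk priority order: highest score first, so important bugs surface early.
ordered = sorted(test_cases, key=risk_score, reverse=True)

# Triage: if the schedule gets squeezed, keep the highest-risk tests that fit.
budget_minutes = 60
selected, spent = [], 0
for tc in ordered:
    if spent + tc["minutes"] <= budget_minutes:
        selected.append(tc["id"])
        spent += tc["minutes"]

print("Execution order:", [tc["id"] for tc in ordered])
print(f"Selected within {budget_minutes} minutes:", selected)
```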
I hope this blog entry has helped to dispel some of these fallacies. I’ll return to this topic in a later post to dispel more of them. In the meantime, you might want to check out the videos on risk based testing, found in our Digital Library, for more information about what risk based testing really is and how to make it work for you.