Archive for the ‘best practices’ Category
Reader Steven Moore wrote to ask the following:
I have been asked to document the benefits of early engagement of QA in the SDLC and am looking for any qualitative or quantitative information, articles, papers, or industry experts I can reference. Any advice or guidance you could provide would be greatly appreciated. Thanks.
Steven, I’d recommend that you pick up a copy of Capers Jones’ recent book, The Economics of Software Quality. This is an excellent resource for what you’re trying to accomplish. Anecdotally, I can mention that we did a study for a client where we found that the average cost of removing a defect in reviews was $37, while the cost to remove a defect in system testing was $3,700. Due to those relative costs, and the number of defects that escaped to system testing, this client was losing between $100,000,000 and $250,000,000 per year on a $1,000,000,000 annual IT budget.
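To make that arithmetic concrete, here is a small sketch using the per-defect costs from that study. The defect counts in the example calls are hypothetical back-calculations for illustration, not figures from the study itself.

```python
# Per-defect removal costs from the client study described above.
COST_IN_REVIEW = 37          # USD to remove a defect in reviews
COST_IN_SYSTEM_TEST = 3_700  # USD to remove a defect in system testing

def cost_of_escape(defects_escaped: int) -> int:
    """Extra cost incurred because defects were found in system
    testing rather than removed earlier, in reviews."""
    return defects_escaped * (COST_IN_SYSTEM_TEST - COST_IN_REVIEW)

# With these per-defect costs, roughly 27,000 to 68,000 escaped
# defects per year would account for a $100M-$250M annual loss.
print(cost_of_escape(27_303))  # roughly 100 million USD
print(cost_of_escape(68_258))  # roughly 250 million USD
```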
I received a question from a reader of the blog:
I have a query with respect to the levels and types of testing and how they are carried out. Can the development team execute all the system integration test cases as part of their unit test cases? If so, when the testing team re-executes the same test cases later, they will be left with no defects to report.
In general, Krishna, this would be neither possible nor desirable. It’s not possible because properly designed system integration testing will focus on interoperability and other emergent behaviors (e.g., performance, security, reliability) that do not manifest themselves or are indeed not even testable at a unit level. It’s not desirable because different levels of testing should focus on covering different things.
Basically, the test levels should function as a sequence of filters. If you wanted to filter water, you wouldn’t use five identical filters to do that, but rather would use a sequence of different filters, each designed to catch particular types of impurities. In the case of testing, each test level is designed to cover certain aspects of the system, to mitigate certain types of quality risks, and to catch certain types of bugs.
After a webinar a couple weeks ago, I had a question from a listener:
Most of my experience as a QA tester was in V-model / waterfall projects. Now I work on a project that uses Scrum. I’ve looked everywhere for a testing process within the Scrum methodology, but testing is not mentioned at all. Are test cases necessary in Scrum or not? And if so, when and how should they be written: right after the two-week Scrum planning meeting where the stories are defined? Also, I guess a test plan in this case makes no sense… right? Some information specifically about testing processes in Scrum would be really helpful.
Thank you in advance for your answer.
Some interesting questions here, Maria, thanks. First, yes, the “where’s the testing in Agile?” question is a good one. Many of the people involved in defining Agile methodologies seemed very focused on automated unit testing, and perhaps missed the fact that unit testing by itself is not enough. According to Capers Jones’ studies, even the best unit testing removes only 50% of defects, so testing needs to be added to the methodology.
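The compounding effect of adding more test levels can be shown with a little arithmetic. In this sketch, only the 50% unit-testing figure comes from the text (attributed to Capers Jones’ studies); the other per-level removal rates are hypothetical, chosen just to show how sequential levels filter out defects.

```python
# Test levels as a sequence of filters: each level removes some
# fraction of the defects that survived the previous levels.
removal_rates = {
    "unit testing": 0.50,         # from the text: even the best is ~50%
    "integration testing": 0.40,  # hypothetical rate
    "system testing": 0.45,       # hypothetical rate
    "acceptance testing": 0.30,   # hypothetical rate
}

remaining = 1.0  # fraction of original defects still present
for level, rate in removal_rates.items():
    remaining *= 1 - rate
    print(f"after {level}: {1 - remaining:.0%} of defects removed")
```

Even with unit testing stuck at 50%, the cumulative removal across all four levels comes to roughly 88% in this example, which is why no single level suffices.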
Do you need a test plan? Well, certainly not in the sense of writing a new plan for each iteration. Instead, the test plan should be written during inception, and the plan should describe the way testing will occur for each iteration. Then, in each iteration, testers should participate in estimation (e.g., in planning poker sessions) based on the user stories proposed for the iteration. The accepted set of user stories make up the iteration-specific instantiation of the test plan.
Do you need test cases? Well, test cases in Agile environments come in four main types: 1) automated unit tests (e.g., using JUnit); 2) automated feature verification tests (e.g., using FitNesse); 3) automated functional/regression tests (e.g., using Selenium or QuickTest Pro); and 4) manual tests. For types 1-3, of course, there are specified automated test cases. For type 4, the test cases tend to be logical (or high-level) test cases, which require a higher level of skill and domain knowledge in the testers. Also, a significant amount of experience-based testing, such as exploratory testing, defect-taxonomy-based testing, etc., tends to occur.
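As a minimal illustration of type 1, here is an xUnit-style automated unit test written with Python’s unittest module (the same pattern JUnit follows in Java). The discount function under test is purely hypothetical.

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # One behavior per test, so a failure pinpoints the defect.
    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)
```

Tests like these run on every build, which is what makes them effective as the first filter in the sequence of test levels.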
I hope that is helpful.
A new reader of the RBCS website sent some kind words that we want to share here:
The RBCS web site is outstanding. Only a few days ago, I was introduced to RBCS’ work via a CAI webinar. I’m delighted to learn from the online articles and publications about the many facets of testing. Enlightening concepts include: establishing relationships and involving other members of the project team in the testing process, aligning testing goals with the business client’s goals via risk-based testing, and examining the topic of measuring to ensure that we fully understand what we’re measuring and why.
Bill Minckler, MBA, PMP, CLSSBB
Senior Project Manager, State of Ohio
Thanks, Bill. I’m glad you find the resources useful. I hope you’ll subscribe to the RBCS monthly webinars, which are also free resources, focused on content, not advertising. You can catch them on our YouTube channel, or better yet, attend live to earn PMI PDUs and participate directly in the 30-minute Q&A at the end.
As people familiar with my books, webinars, and training on test design techniques know, there are a few situations where pairwise and other combinatorial testing can make a lot of sense, especially for higher-risk systems. Following a webinar, listener Terry Croskrey sent the following useful links:
I first got introduced to you on your ITUNES Podcasts and then your books. You are an excellent writer and presenter and I appreciated your enormous contribution to the Profession of Software Testing!
I wanted to pass on some recent software I discovered for ALL Pairs and Orthogonal Array creation…that is easy to use.
NIST free software: http://csrc.nist.gov/staff/Kuhn/kuhn_rick.html
ACTS GUI software: http://csrc.nist.gov/groups/SNS/acts/index.html
I’d add to this the link http://www.pairwise.org.
My Agile Testing Opportunities webinar continues to stir up discussion. A listener, Rana Zoghbi, commented:
I have a small question concerning this presentation:
You have talked about the opportunities of agile in testing, but what about the pitfalls that a tester can encounter in an agile environment? I am pursuing the Professional tester course – Foundation level, and it mentions that usually an agile mode is hostile towards independent testing. Can you please elaborate further on this?
Thanks for the opportunity to clarify some points, Rana. First, in terms of pitfalls, yes, there are testing pitfalls associated with any lifecycle model, and Agile is no exception. My presentation yesterday was focused on how Agile lifecycles offer testers certain opportunities (also present with any lifecycle model), but, if you want to hear about the pitfalls, you can listen to my previous webinar, Agile Testing Challenges.
Second, whether Agile teams are “hostile” to independent test teams is something that varies widely. We have a number of clients that are using Agile methods and have independent test teams. In a number of those situations, the independent test team is well-respected and well-established as a complement to the Agile approach. The way in which the independent test team interacts with the Agile teams tends to vary, so I’d recommend that you check out my previous webinar on Test Organization Options for more details.
Historically, there certainly was some bad blood between Agile methodologists and professional testers. One early sign of trouble occurred when Kent Beck, the originator of Extreme Programming, gave a keynote at the now-defunct Quality Week conference in San Francisco in 1999. In his talk, he was reported to have said that the entire concept of independent testing was going to fade away, since it would be made irrelevant by the Agile approach to creating software.
Thirteen or so years later, we (the professional testers) are still here and still relevant. There are still some dogmatists on the extreme fringes of the Agile world who reject the concept of professional, independent testers, sure. However, my sense is that pragmatic software practitioners who are adopting Agile–and often adapting it in the process–see the value of independent test teams, staffed with professional testers, and providing testing services to their Agile teams. I certainly see a lot of clients that are successfully doing so.
I received the following message from Helen Huang. My comments are found below, inline…
Dear Rex,
You are my idol, I am your fan
Thanks, Helen. I appreciate your trust.
I am a tester. I have worked in this job for 5 years in China. Now I am meeting some problems in my work.
1. In China, I find many companies always talk about the tester’s value. What is the tester’s value? E.g., is it the number of bugs found, or finding the important bugs?
This is a common problem. Test organizations often do not have clearly defined objectives. This makes it very hard to demonstrate value. The first step to demonstrating the value of testing is to work with stakeholders to define what testing should contribute. I wrote about this in Chapter 2 of Beautiful Testing, which I’d encourage you to use as a way to make the value of testing measurable.
2. In China, many companies treat automation and performance testing as standard test KPIs. Is this right?
Performance is often a key value for testing. It’s important to understand the need for performance testing, as it is quite expensive to do well. I’d encourage you to check the RBCS Library, especially the Digital Library, for my thoughts on performance testing.
Automated regression testing is also a major potential source of value. It’s very hard to do this well, so you need to understand how to make this effort succeed. About half of automation efforts fail, often due to poor planning.
3. Many testers in China are only halfway decent, and there is no specific test theory behind their work, so many people think testers lack ability. Testing is not taken seriously in projects.
Again, I think this is a matter of not having clearly defined objectives. I’d encourage you to work with stakeholders to understand what testing can contribute.
Please help me work out these questions.
I hope this is helpful, Helen. Please feel free to respond to this post to continue the discussion.
Helen .w. Huang
I recently flew to China on a 747. The entertainment system was a complete failure, and no one in the upper deck cabin (where I sat) had any access to it. They gave us all free frequent flier miles to make up for it. When they gave me the voucher, I said, “I hope the avionics software is better tested than the entertainment system.”
Yes, that was a somewhat rhetorical comment, as I know about the DO-178B standard and how that would affect testing of avionics software versus entertainment system software. However, I have some concerns about that standard and what compliance to it really means. For example, that standard, being white-box based, only requires that you test code that’s there to some degree. What about code that shouldn’t be there, or code that should be there but isn’t (due to bad requirements or design)?
Shortly after I arrived, I received this link via e-mail: http://bit.ly/sNYQBa
Now, this is alarming. Suppose, in all of the resetting of the entertainment systems that was done to try to restore working order, some bad signals had gotten sent to the avionics system through leakage over the network? How could that happen, you might say, since this link talks about deliberate hacking into the network? Well, if it can be done deliberately, and you have dozens of in-seat computer systems booting over the network from the storage system, isn’t it possible that all of that network traffic could leak due to a bug in the way the entertainment system is implemented? Even if the traffic doesn’t interfere with the avionics directly, if there’s a lot of it, you have the possibility of denial of service effects.
Of course, this can’t happen with newer planes, you might say. Oh, really? I believe I read articles, when the A380 and the B787 were being built, saying that both use a single network infrastructure. I can only assume they use this same “virtual network” approach to try to keep traffic separated.
Long time reader ML Gregory wrote me with the following concern:
Subject: To QA or not to QA, that is the question
I was looking for an article about QA in an agile world. I’m uneasy about a change our company is doing – eliminating QA. Testers who choose to stay must work in development as a developer writing tests. No test cases will be written, no bugs will be written. I’ve seen it before and was hoping to find some authoritative material on this type of change.
I’ve written about various ways Agile methodologies can challenge testing (most recently in the blog, but also in an article and a webinar). What you’re describing here, though, is a variant of the problem that is actually independent of Agile per se.
What you are describing is a problem I refer to as the Christmas Tree Ornament problem. I would be willing to bet that these three changes–no independent testers, no bug reports, no written test cases–have been changes that people have wanted for some time. When the change to an Agile methodology came along, people hung these wishes onto Agile like ornaments on a Christmas tree.
It’s not unusual for these particular problems to occur with Agile, especially the “no bug reports” problem. Some Agile advocates do push for no tracking of bugs found during the iteration, which of course has the effect of making it impossible to gather metrics about the software process’s quality capability. However, I’ve seen other organizations that are definitely not using Agile have this same problem.
There’s not necessarily a lot you can do to resist these changes. They usually come with a lot of momentum. Standing in the way of an approaching train is not a good career move. Perhaps passing on the links I provided above, along with this blog post, can help.
I’d be interested in other thoughts from you and from other readers.
I received a request from Martin for help on how to apply orthogonal arrays. He wrote:
I am working on putting together testing at work using an orthogonal array. This is the first time I’ve used this method. I read about it in detail in Rex’s books. But I’m having difficulty remembering all the steps. I need a little help to jog my memory. Could you please send me the high-level, bulleted steps for creating the orthogonal array with my test case data? Thank you.
Sure, Martin, you can take a quick listen to the key parts of my webinar on pairwise testing techniques. This webinar addresses how to use orthogonal arrays and other pairwise tools. After you listen to that, I’d suggest you go to www.pairwise.org to find a free downloadable tool. Last I heard, the US Department of Commerce had funded a project to create a pairwise tool (your tax dollars at work!) that will actually do classification trees as well, and I think you might find it there.
In general, Martin, if you are looking for a refresher on test design techniques, a good first stop is the RBCS Digital Library.
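To give a feel for what pairwise tools do under the hood, here is a minimal greedy all-pairs sketch in Python. It is not the algorithm of any particular tool (real tools are far more sophisticated, and orthogonal arrays are constructed differently), and the parameters are made-up examples.

```python
from itertools import combinations, product

def pairs_of(test, names):
    """All (parameter, value) pairs that a single test covers."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

def all_pairs_suite(params):
    """Greedy pairwise generation: repeatedly pick the candidate test
    that covers the most not-yet-covered value pairs."""
    names = list(params)
    # Every pair of values, across every pair of parameters, must be covered.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    candidates = [dict(zip(names, combo)) for combo in product(*params.values())]
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_of(t, names) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best, names)
    return suite

# Hypothetical test parameters.
parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS"],
    "locale": ["en", "de", "ja"],
}

suite = all_pairs_suite(parameters)
print(f"{len(suite)} pairwise tests instead of 18 exhaustive combinations")
```

Every two-way combination of values is guaranteed to appear in at least one test, which is the core property the NIST and pairwise.org tools deliver at much larger scale.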