Archive for December, 2011
Reader Art Salwin asks a good question about an interesting distinction:
There seems to be quite a bit of contradiction in the literature between test conditions and test cases. Some places say the conditions have expected results (IEEE 829, for example). Other places say the cases have the expected results (some of the ISTQB study guides). There’s also disagreement about how high level a condition is. Some put it at a pretty high level (test electronic payment, for example), while others get down to details like “if good credit card, accept; if bad credit card, reject.”
Well, Art, there certainly is a lot of variation in the usage of certain terms in testing. A number of terms, such as “test case,” “test condition,” “test plan,” “test strategy,” etc., are in very common use, but almost everyone seems to mean something a little different by them. In addition, we have people using different terms to describe things that are the same; e.g., “test procedure” and “test script” and “test case.” So, we get a lot of miscommunication in our profession.
To clarify communication, one thing I suggest, when talking to someone about testing, is to make sure that you’re speaking the same language. If someone uses one of these common test terms, ask them what they mean by that term, and what specific contents they are referring to. Variation exists even within organizations, so a quick check to make sure you’re talking to each other, not past each other, is always a good idea.
Within an organization, it’s a good idea to adopt a single glossary for testing terms. The ISTQB glossary isn’t perfect, but it is being constantly perfected and it’s certainly the most comprehensive testing glossary out there. So, that’s probably a good place to start.
It’s also important to adopt templates for common work products, because a definition by itself will not suffice. For example, you and I could both agree on the ISTQB definition of the phrase “test strategy” and yet write different documents. It’s important to remember that templates should be useful outlines that help people remember important topics to include in work products; templates are not straightjackets or substitutes for thinking.
So, going back to your original question. If we take the ISTQB terminology and templates, then the test conditions are the higher level statements of what is to be tested. In risk based testing, the risk items identified during a quality risk analysis are the test conditions (for more on how that works, see the video series on risk based testing on the Digital Library). Examples of risk based test conditions would be “system responds too slowly to user input” and “system calculates incorrect report totals.” In requirements based testing, test conditions are identified by an analysis of the requirements. Examples of requirements based test conditions would be “check input field validation” and “check tax calculations.” As you can see, these are at a high level.
The test cases are then developed to cover the test conditions. One or more test cases should be associated with each test condition. In risk based testing, the number of test cases for each test condition is determined by the level of risk associated with the risk item. In requirements based testing, well, determining how many tests to have for a test condition is one of the problems with a purely requirements based testing strategy. If you use a blended risk based and requirements based testing strategy, you can associate levels of risk with requirements to determine how much to test.
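The relationship described above can be sketched in code. The following is a minimal illustration, not from any ISTQB material: the condition names come from the examples earlier in this post, while the risk levels and the cases-per-risk allocation policy are assumptions made up for the sketch.

```python
# Assumed allocation policy: higher risk -> more test cases per condition.
CASES_PER_RISK = {"high": 5, "medium": 3, "low": 1}

# Test conditions (from the examples above) tagged with hypothetical risk levels.
conditions = [
    {"condition": "system responds too slowly to user input", "risk": "high"},
    {"condition": "system calculates incorrect report totals", "risk": "medium"},
    {"condition": "check input field validation", "risk": "low"},
]

def allocate_tests(conditions):
    """Pair each test condition with the number of test cases to design for it."""
    return {c["condition"]: CASES_PER_RISK[c["risk"]] for c in conditions}

for cond, n in allocate_tests(conditions).items():
    print(f"{n} test case(s) for: {cond}")
```

The point of the sketch is only the shape of the mapping: one test condition to one or more test cases, with the count driven by risk rather than guessed per requirement.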
The test cases themselves can be at various levels of detail. If you check Managing the Testing Process, 3e, you’ll find a discussion there about the level of detail in test cases. If enough people are interested, I can post some of that information on the blog.
I received the following message from Helen Huang. My comments are found below, inline…
Dear Rex,
You are my idol; I am your fan.
Thanks, Helen. I appreciate your trust.
I am a tester. I have worked in this job for 5 years in China. Now I am running into some problems in my work.
1. In China, I find many companies always talk about the tester’s value. What is the tester’s value? For example, is it the number of bugs found, or finding an important bug?
This is a common problem. Test organizations often do not have clearly defined objectives. This makes it very hard to demonstrate value. The first step to demonstrating the value of testing is to work with stakeholders to define what testing should contribute. I wrote about this in Chapter 2 of Beautiful Testing, which I’d encourage you to use as a way to make the value of testing measurable.
2. In China, many companies consider automation and performance testing to be standard test KPIs. Is that right?
Performance is often a key value for testing. It’s important to understand the need for performance testing, as it is quite expensive to do well. I’d encourage you to check the RBCS Library, especially the Digital Library, for my thoughts on performance testing.
Automated regression testing is also a major potential source of value. It’s very hard to do this well, so you need to understand how to make this effort succeed. About half of automation efforts fail, often due to poor planning.
3. Many Chinese testers today are only halfway decent. There is no specific testing theory, so a lot of people currently think that testers’ abilities fall short. Testing is not taken seriously in projects.
Again, I think this is a matter of not having clearly defined objectives. I’d encourage you to work with stakeholders to understand what testing can contribute.
Please help me work out these questions.
I hope this is helpful, Helen. Please feel free to respond to this post to continue the discussion.
Helen W. Huang
E-learning Advanced Test Analyst student Donna Hungarter asks:
[An exam question] shows a state transition diagram, sets some criteria, and then asks how many tests are needed for this level of coverage. If we were looking for the fewest number of tests needed to cover all states and transitions, I believe the answer is 6 (not one of the answer options). If all 4 different credit cards need to be tested for each state and transition, I can get to 24, but I do not understand how the correct answer is 26. Can someone please explain? Thank you, Donna
Here’s the question:
You are testing a computerized gas pump that allows users to pay using credit cards. The pump is initially in the waiting-for-customer state. A transaction is initiated when a customer first inserts a credit card. Four types of cards are accepted: Visa, MasterCard, Discover, and American Express. The pump will reject any other type of card. If given an accepted credit card, the pump validates the card. If the card is valid, the pump is turned on and the customer is told to begin pumping gas. The pump remains on until the transaction ends. The transaction ends when one of the following events occurs:
• Pump handle is returned to pump
• Amount reaches transaction limit for card
• No gas is pumped within two minutes of validation of card
• The station attendant throws the emergency shut-off switch
Once the transaction ends, the pump processes the transaction, charging the credit card and producing a receipt. It then returns to the waiting-for-customer state.
Assume a test always starts and may only end in the waiting-for-customer state, a test must end after a transaction-ending event, and a test input consists of (initial state, event[condition], next state, event[condition], …, event[condition], initial state). Design tests covering every unique sequence of up to four states/three events. How many tests are needed for this level of coverage?
The state transition diagram is shown here:
Gas Pump State Transition Diagram
The correct answer is 26. The reason is that the question is asking for two-switch coverage. You have to create the two-switch table. You can shortcut the work by noticing that many of the two-switch elements can’t occur in a test, because they involve returning to the initial state as an intermediate step. You can see the answer in the image below:
Switch Table and Tests
You might need to expand the image to read it. This picture was taken in Kuala Lumpur, Malaysia, after a long 15-minute session to solve the problem. It’s a true brain-twister, and almost certainly harder than anything you’d see on a real exam.
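The mechanical part of switch coverage, enumerating every event sequence of up to N transitions from the initial state, can be done in code. Below is a sketch using a deliberately simplified pump model that I made up for illustration; the real diagram in the question has more transitions (four card types, four transaction-ending events), so this toy model does not reproduce the count of 26.

```python
# Simplified gas-pump state machine (an assumption for illustration only).
# Each state maps to a list of (event, next_state) transitions.
TRANSITIONS = {
    "waiting": [("insert_valid_card", "validating"),
                ("insert_invalid_card", "waiting")],
    "validating": [("card_ok", "pumping"),
                   ("card_bad", "waiting")],
    "pumping": [("end_event", "processing")],
    "processing": [("print_receipt", "waiting")],
}

def sequences(start, max_events):
    """Enumerate every event sequence of 1..max_events transitions from
    `start`, i.e., every path of up to max_events+1 states."""
    results = []
    frontier = [(start, [])]
    for _ in range(max_events):
        next_frontier = []
        for state, path in frontier:
            for event, nxt in TRANSITIONS[state]:
                new_path = path + [(event, nxt)]
                results.append(new_path)       # record the path itself
                next_frontier.append((nxt, new_path))
        frontier = next_frontier
    return results

# Sequences of up to three events = up to four states (two-switch coverage).
for path in sequences("waiting", 3):
    print(" -> ".join(event for event, _ in path))
```

For the exam question you would then prune these raw sequences by the stated rules (a test must start in, and may only end in, the waiting-for-customer state, and must end after a transaction-ending event), which is exactly the shortcut described above of discarding sequences that pass through the initial state as an intermediate step.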
Long-time reader Gianni Pucciani commented on today’s webinar on Agile testing opportunities:
Today I followed your webinar at SmartBear, and I was quite surprised when you mentioned that testers, especially for automation purposes, should do their tasks outside the sprint. Following agile methods like Scrum, the team produces a potentially shippable, and hence tested, product at the end of each sprint. I think with agile processes, one should strive to complete automation scripts (I am talking about functional tests, at the system level) inside the team, within the sprint schedule, to cover the stories implemented in the given sprint. Maybe I misunderstood your point; could you please clarify your thoughts on this subject? Thanks a lot!
Perhaps I misspoke, or maybe I didn’t make the point clearly enough. Yes, I agree that testers with responsibilities for testing the user stories in the Agile sprints need to be embedded with the Agile teams. However, that doesn’t mean that all testers should be embedded. For example, when Agile teams are working on systems that will be integrated with other systems (as many of our clients are), system integration testing needs to be separated from the Agile teams, as this role spans multiple systems and requires a perspective different from that needed to system test the content produced by any of the Agile teams. In addition, long-term projects such as building maintainable test automation frameworks should also span individual sprints. These roles need to be part of the test organization, but not embedded within a sprint.
The best approach that I’ve seen is that a central test team continues to exist. Testers are delegated–with dotted-line reporting structures–to the Agile teams where appropriate, at least one tester per team. (Having a tester work on multiple teams creates problems with focus, as I discussed today.) Testers that have roles that span sprints or systems should remain in the central team, providing a supporting role.