Archive for July, 2011
I received the following question from a reader of this blog, Kathleen Marrs. She wrote:
I have a colleague who swears up and down about a standard test case and process document style guide that is supposedly endorsed by you and the ISTQB and ASTQB. The style that he has introduced makes extensive use of quotation marks to denote controls, fields, and form names.
E.g. Click on the “Orders” form, then click on the “Current Orders” tab, enter “Bob Smith” in the “Customer Name” field and click on the “Find” button.
To me this is not grammatically correct as quotation marks should be used for actual quotes, user entered text, single character entries or familiar words that are used in an unfamiliar context. Have you seen this writing style and do you endorse its use or has my colleague been misled?
I am a proponent of the standard that is used by MIT and Microsoft and others which looks like:
Go to Orders -> Current Orders tab -> Customer Name and enter “Bob Smith” and click OK
Your input on this matter would be greatly appreciated.
In my book, Managing the Testing Process, I discuss various styles and templates for documenting test cases or procedures. The ISTQB, in the Foundation and current Advanced syllabus, discusses the use of the IEEE 829 templates. However, I don’t recall addressing the stylistic point raised above in any of these documents. To me, it’s really a matter of choice. The test organization should standardize on a single approach and everyone should follow it. Otherwise, you get into the “Tower of Babel” problem.
What I would comment on is that the example given above is what is called a concrete test case, in the sense that all of the inputs and outputs are explicitly specified. This is certainly necessary for automated tests. For manual tests, you can consider what degree of detail must be captured in the test cases. In some situations, using logical test cases–where the type of input and output are described generally and the tester is allowed to use discretion, white-box and black-box test design techniques, and exploratory testing to select the specific inputs and to evaluate the results–is often more lightweight, effective, and maintainable. This issue of the degree of specificity in test cases is something covered in Managing the Testing Process.
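To make the distinction tangible, here is a minimal, hypothetical sketch (pytest-style Python, with a stand-in function for the system under test, not drawn from the book): the concrete test pins down every input and expected output, as an automated check must, while the logical test describes the kind of input and leaves the specific values to the tester's or data generator's discretion.

```python
import pytest

def find_order(customer_name):
    # Stand-in for the system under test (hypothetical).
    return {"customer": customer_name, "status": "Current"}

def test_find_order_concrete():
    # Concrete test case: explicit input and explicit expected result.
    result = find_order("Bob Smith")
    assert result == {"customer": "Bob Smith", "status": "Current"}

@pytest.mark.parametrize("customer_name", ["Bob Smith", "Ana María", "O'Brien"])
def test_find_order_logical(customer_name):
    # Logical test case: "any valid customer name should return that
    # customer's current orders." The specific names here merely stand in
    # for values a tester would choose at their own discretion.
    result = find_order(customer_name)
    assert result["customer"] == customer_name
    assert result["status"] == "Current"
```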
Regular reader Gianni Pucciani wrote me recently to discuss risk based testing in an Agile world:
Risk based testing has been discussed in several places and from different perspectives, including in your books and online resources. There are almost no doubts about the benefits it can bring. What is still missing, in my opinion, is a good discussion about adopting risk based testing in an agile environment.
In this case risk identification and assessment should be performed at the beginning of each sprint, analyzing the risks connected to the features that will be developed in the coming sprint. Another point worth mentioning is the importance of test automation for regression testing, but what about a situation where most of the tests are manual? I would like to hear from you and the readers of your blogs if you have any experience/suggestions to share.
Yes, Gianni, this is exactly how risk based testing works in an Agile environment. Here's a summary of a risk based testing process document we created for a client who uses the Scrum methodology:
1. At the beginning of the planning period for a release, identify the project's risk-based analysis team participants.
2. Schedule risk meetings (90-120 minutes each).
3. Prepare interview documents.
4. Hold interviews with all risk identification participants.
5. Analyze risk items.
6. Normalize risks.
7. Review project Quality Risk Analysis with stakeholders.
8. Apply risk priority number (RPN) values to test planning and test case development (see the sketch after this list).
9. When release backlogs are being determined, when sprint backlogs are being revised, and at major project milestones, review and revise the risk analysis.
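As a rough illustration of step 8, here is a minimal sketch, not taken from the client's actual process document, of how RPN values might drive the extent of testing. It assumes each quality risk item carries likelihood and impact ratings and that RPN is simply likelihood times impact; the rating scale and bands are hypothetical, and a real team would calibrate them to its own scheme.

```python
from dataclasses import dataclass

@dataclass
class QualityRisk:
    name: str
    likelihood: int  # e.g., 1 (very likely) .. 5 (very unlikely) -- assumed scale
    impact: int      # e.g., 1 (severe) .. 5 (negligible) -- assumed scale

    @property
    def rpn(self) -> int:
        # Risk priority number: lower means higher risk on this assumed scale.
        return self.likelihood * self.impact

def extent_of_testing(rpn: int) -> str:
    # Hypothetical bands mapping RPN to the amount of testing planned.
    if rpn <= 5:
        return "extensive"
    if rpn <= 12:
        return "broad"
    if rpn <= 20:
        return "cursory"
    return "opportunity testing / report bugs only"

risks = [
    QualityRisk("Order lookup returns wrong customer", likelihood=2, impact=1),
    QualityRisk("Report footer misaligned", likelihood=4, impact=5),
]

# Prioritize the backlog of test work for the sprint: riskiest items first.
for r in sorted(risks, key=lambda r: r.rpn):
    print(f"{r.name}: RPN={r.rpn}, extent={extent_of_testing(r.rpn)}")
```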
In terms of manual regression testing, given all the great tool support for test automation in Agile environments, I'm not sure why an organization would choose to rely on manual regression tests. Automation is the best way to manage regression risk in an Agile environment.
RBCS Advanced e-learning courseware student Angee Tong asks the following question:
Question: What is a Review?
Isn’t the answer supposed to be the following? An independent evaluation of software products and processes to ascertain compliance to standards, guidelines, specifications, test procedures, etc…
Actually, this is the definition of an audit, which is a special form of review focused on compliance and conducted by a third party. To quote the IEEE 1028 standard’s definition (cited in the ISTQB Glossary):
An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify: (1) the form or content of the products to be produced; (2) the process by which the products shall be produced; and, (3) how compliance to standards or guidelines shall be measured. [IEEE 1028]
The correct answer in the e-learning course is the following, again based on IEEE 1028:
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [IEEE 1028]
While I would not expect a pure definition question such as this one to occur on the ISTQB Advanced exams, it might well be that the correct answer to an Advanced exam question would depend on knowing the definition of a cited term such as “review.” I’d encourage people studying for Advanced exams to carefully review the definitions of all the terms called out in the Advanced syllabus.
We have some interesting questions about our Advanced Test Manager e-learning course, which also apply to the Advanced Software Testing: Volume 2 book (since it is the main source for the course), from Patricia Osorio Aristizabal. She asks the following questions, with my answers interspersed below.
Could you please help me with more questions about chapter 3? I understand this chapter is quite important for a test manager (for the certification exam and for his/her job), as you tell us:
RB: Yes, this really is a critical chapter in the book, the course, and the syllabus.
· Does the exam include questions about test effort estimation using test point analysis (TPA)? I mean, is it possible there will be a question in which I have to calculate a test effort estimation using TPA? I would like to know if I have to do more practice on that (more exercises)
RB: I make it a rule never to speculate about what specific questions will be on the exams. I will say this: It would be very wise to make sure, before taking any Advanced exam, that you are ready to answer any question on any of the learning objectives defined for that module or for the Foundation syllabus. That includes combinations of learning objectives (i.e., one question covering multiple learning objectives), cross-section questions (i.e., questions that cover material and learning objectives from two or more sections, including sections in the Foundation), and Foundation review questions (i.e., questions about any of the six chapters of the Foundation syllabus).
For the specific technique you are concerned about, TPA, that is in Chapter 3 section 4. There are two learning objectives:
· (K3) Estimate the testing effort for a small sample system using a metrics-based and an experience-based approach, considering the factors that influence cost, effort, and duration
· (K2) Understand and give examples of the factors listed in the syllabus which may lead to inaccuracies in estimates
TPA would fall under the first learning objective.
· In slide 188 you talk about colors (green, maybe red), but for me it appears just in black
RB: That is very strange if you are not seeing colors. I can understand that the hardcopy would not show colors, but the web should. I’d suggest checking your browser settings. If that still doesn’t work, please send a screen shot of the offending slide to firstname.lastname@example.org and we’ll open a defect report for it.
· In slide 190, the rolling closure period value is not clear to me. Could you please tell me where I could find more information about these kinds of charts? They are not familiar to me. It is the same case for the graph in slide 198
RB: Rolling closure period is the average time from reporting to resolution for all defects resolved on and before the date shown. The daily closure period is the average time from reporting to resolution for all defects resolved on the date shown. This metric is described further in Managing the Testing Process, 3e. You can download the bug analysis charts from the RBCS Basic Library and Digital Library to see exactly how this is calculated.
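For readers who want to see the arithmetic, here is a minimal sketch assuming a simple defect record with a reported date and a resolved date (illustrative data only, not from the course charts): the daily closure period averages report-to-resolution time over defects resolved on a given date, while the rolling closure period averages over all defects resolved on or before that date.

```python
from datetime import date

# (reported, resolved) pairs -- illustrative data only
defects = [
    (date(2011, 7, 1), date(2011, 7, 4)),  # 3 days to resolve
    (date(2011, 7, 2), date(2011, 7, 4)),  # 2 days to resolve
    (date(2011, 7, 3), date(2011, 7, 8)),  # 5 days to resolve
]

def daily_closure_period(defects, day):
    # Average age of defects resolved on the given date.
    ages = [(resolved - reported).days for reported, resolved in defects if resolved == day]
    return sum(ages) / len(ages) if ages else None

def rolling_closure_period(defects, day):
    # Average age of all defects resolved on or before the given date.
    ages = [(resolved - reported).days for reported, resolved in defects if resolved <= day]
    return sum(ages) / len(ages) if ages else None

print(daily_closure_period(defects, date(2011, 7, 4)))    # (3 + 2) / 2 = 2.5
print(rolling_closure_period(defects, date(2011, 7, 8)))  # (3 + 2 + 5) / 3 ≈ 3.33
```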
· In slide 195, I found the following definitions. Could you please help me understand the difference between them?
· Plan Effort (or Planned Effort): The number of person-hours planned for this test. That might be more than the test effort, providing time for additional exploration in this area, or it might be less, meaning the tester is supposed to triage the conditions covered during testing.
· Actual Effort: The number of person-hours ultimately expended on the test. This might not match the planned effort, particularly if the test failed and the tester needed to expend significant effort to isolate and report the problems observed.
RB: Planned Effort is the number of person-hours of effort planned by the test manager for a test. It should be based on an estimate by the tester who designed and implemented the test or by actual effort from the last time the test was run. Planned Effort is an estimated target that the tester should try to stay within when executing the test. Actual Effort is the actual amount of person-hours of effort that was spent by the tester who executed the test. For various reasons, especially test failure, the Actual Effort might exceed the Planned Effort.
There are a lot of questions, sorry about that
Regards, Patricia Osorio Aristizabal
No problem. I hope these answers were helpful.
I recently had an experience with a couple of important lessons for software quality and software testing. I ordered a pair of identical products from an e-commerce retailer. I'll not distract from the discussion by mentioning the specific products, or the problems with them, as it would entail a long, technical digression that detracts from, rather than reinforces, my point.
Anyway, the order fulfillment process went reasonably well. The retailer did an acceptable job of keeping me posted on the status of the order, which apparently involved working directly with the vendor making the products.
The two products finally arrived, and I immediately tested them out. I was not impressed with the quality of the products. Both exhibited an annoying failure each time they were used, and about 2% of the time they exhibited a failure that made the items not fit for purpose under certain circumstances. However, the products hadn’t cost much and I could imagine using them in non-critical situations, so I decided not to bother with returning them.
A week later, an e-mail popped into my inbox, from the retailer, asking me to review the product. Okay, it’s worth five minutes warning other potential customers about these problems. I write a review, click to submit it, and immediately see an error message. I go back, copy the review, and try it with another browser. Exact same problem.
So, now I’m a bit irritated. I send an e-mail to the retailer, advising them of the problem with their site and with the product. I get back a terse response from them, via their online support web page, saying, “It sounds to me like you got a defective [product]. What I would suggest is contacting the manufacturer of the item and most likely they will replace it for you.”
I then asked the retailer whether they were able to post my review on my behalf, as their system didn’t work. Here’s another terse response: “I don’t have a way to do that on my end, you can attempt that again.” I did try again–and this is now three days after the initial failure of their site–and it still doesn’t work.
Okay, I concluded, enough nonsense. I’ve now flipped the bozo bit on the retailer. A textbook case of how not to handle customer complaints about your website and the products you sell.
Okay, let’s compare this to best-of-breed processes for handling customer quality complaints. When I recently had a problem with a Blue Snowball microphone, the vendor worked with me to repair the microphone. When their repairs failed, Blue then sent me a new one for free, and apologized for my inconvenience.
As another example–not to blow our own horn too loudly–we (RBCS) have a policy that, whenever a customer has a problem using our website, we give them a discount code, even when the problem is not our fault (e.g., the recent failures due to a stealth software update by Pair, our site host). In fact, when people attending our free webinars have had problems with attending the webinar, we have given them discount codes.
So, what lessons can we learn as IT people, software professionals, and software testers specifically?
1. If you provide a means for customers to provide online feedback on your product or service, make sure it works! Testing should cover that feedback mechanism, and compatibility testing should be part of it, given browser diversity (see the sketch after this list).
2. If a customer reports a problem via an online feedback mechanism, make sure that the workflow includes tracking that problem to resolution, and test that workflow. Just telling the customer, “Tough crackers, take it up with the vendor,” is actually worse than not having an online feedback mechanism at all.
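As a concrete, and entirely hypothetical, illustration of the first lesson, here is a minimal pytest/Selenium sketch of a cross-browser check on a review-submission form; the URL, field names, and confirmation text are assumptions for the sake of the example, not the retailer's actual site.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

REVIEW_URL = "https://example.com/product/123/review"  # hypothetical page

# Run the same check against more than one browser to catch compatibility defects.
BROWSERS = {"chrome": webdriver.Chrome, "firefox": webdriver.Firefox}

@pytest.fixture(params=sorted(BROWSERS))
def browser(request):
    driver = BROWSERS[request.param]()
    yield driver
    driver.quit()

def test_review_submission(browser):
    browser.get(REVIEW_URL)
    browser.find_element(By.NAME, "review_text").send_keys("Works as described.")
    browser.find_element(By.NAME, "submit").click()
    # Hypothetical confirmation text; the point is to assert the submission succeeded.
    assert "Thank you" in browser.page_source
```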
All in all, for me this experience has gone from the “that’s too bad, but no big deal” category to “I’ll never do business with that retailer again.” I’m sure that’s not the customer experience the retailer had in mind when they were designing their website, and, had they bothered to test it sufficiently, it probably wouldn’t have been the experience I had.
Last week, as some of you will know, I gave a webinar on the psychological and political aspects of software test management. You can find the recorded webinar here (http://www.rbcs-us.com/software-testing-resources/163) and the PDF version of the slides here (http://www.rbcs-us.com/images/documents/July-6-2011-Psychopolitics-of-Testing.pdf).
Patricia Ensworth wrote me to comment on the webinar:
Just wanted to let you know I thought your webinar was superb. As someone who has experienced many of the pressures and dilemmas you described, I found your analysis insightful and your advice on target. Well done!
With all due respect, there is one point you made about which I’d like to offer an alternative perspective. I completely agree with you that when test managers are labeled Quality Assurance Managers and put in charge of enforcing standard practices in a vague quest for product quality, it’s a one-way ticket to nowhere (except maybe martyrdom). However, I have occasionally seen situations where, to strengthen the organizational position of the testing group and to leverage the testers’ holistic perspective, senior IT management has given the test manager other kinds of Quality Assurance responsibilities. For example, the so-called QA manager might be aligned with Compliance/Security initiatives mandated by regulators, or with Business Analysis projects to re-engineer processes or services. With strong enough support from matrixed senior managers, it can sometimes be a workable arrangement.
In any event, thanks for a thought-provoking, useful session.
I agree with Patricia’s point. I have indeed seen that work successfully. Thanks for mentioning it, Patricia.
I’d be interested in hearing from other readers and listeners to the webinar. What is your experience with psychology and politics in software test management?
Advanced Test Manager e-learning attendee Patricia Osorio asks about the following sample exam question in the course:
You are in charge of integration testing for a system that consists of three major software subsystems: browser-side, application server, and database server. Each subsystem likewise consists of a number of components. The integration strategy is to integration test each subsystem by adding one component at a time, working on each of the three subsystems at once. You plan to begin integration testing interfaces between subsystems as soon as two or more subsystems are able to communicate through some interface. In other words, component integration starts first and proceeds in parallel for all three subsystems, and subsystem integration testing starts as soon as component integration will allow it to start.
Which of the following statements accurately describes a portion of the test logging information you’ll want to capture?
A. Cross-referencing component integration test cases against the tested component versions in the subsystem under test.
B. Cross-referencing subsystem integration test cases against the tested component versions in two or more subsystems under test.
C. The information in both A and B should be captured.
D. Component version information is typically not captured during integration testing.
The answer is C. To understand the meaning of test results, we have to know which versions of the test items were tested by which test cases. The cross-referencing provides that information. We need that information for both the component integration tests and the subsystem integration tests.
This question draws upon not only the discussion in Chapter 2 of the Advanced syllabus, but also on the discussion in Chapter 5 of the Foundation syllabus, about configuration management.
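As a minimal illustration (not from the course materials), here is the kind of cross-referencing record the answer describes, with hypothetical component names and version numbers: each log entry ties a test case to the exact versions it ran against, so a result can be traced back to the configuration that was tested.

```python
# A single test log entry, sketched as a plain dictionary.
test_log_entry = {
    "test_case": "SUBSYS-INT-042",            # hypothetical test case ID
    "level": "subsystem integration",
    "date": "2011-07-15",
    "result": "fail",
    "components_under_test": {                 # versions actually integrated in this run
        "browser-side/ui-shell": "1.4.2",
        "app-server/order-service": "0.9.7",
        "db-server/schema": "2011.06",
    },
}
```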
Advanced Test Manager e-learning attendee Patricia Osorio asked about the following sample exam question:
You are working as a test manager on a project where experienced users will do most of the test execution, with the guidance of professional testers. Which of the following statements is most likely to be true?
A. Users should not be involved in testing, since they will add new requirements during test execution
B. Test cases must be written with detailed specification of the expected results
C. Test cases may be written without detailed specification of the expected results
D. No written test cases are needed, as users can identify all important test conditions during test execution
The correct answer is C. A is incorrect, because the testing can be organized such that requirements won’t be added. B is incorrect, because users have enough knowledge to understand the expected results. D is incorrect, because, in spite of users knowing expected results (as in answer C), users do need some guidance in terms of what to test.
Advanced Test Manager e-learning attendee Patricia Osorio asked a question about a sample exam question:
You are managing the testing of a hospital information system project that will integrate off-the-shelf software from three vendors. Which of the following gives a reasonable sequence of test levels that you will execute?
A. Component integration test; system integration test; user acceptance test
B. System acceptance test; system integration test; user acceptance test.
C. System integration test; system acceptance test; user acceptance test
D. System test; system integration test; user acceptance test
The correct answer is B. Each of the vendors’ systems should undergo an acceptance test first. Then, the systems should be integrated and tested by the test team. Finally, the users should do an acceptance test of the integrated system of systems.
As long-time listeners–or even brand new listeners, for that matter–of the RBCS webinars know, we use Citrix’s GoToWebinar service for our free monthly webinars. Now, I’ve been fairly satisfied with GoToWebinar. I’ve used one or two of the competing services, and been less happy with those. Of course, webinar listeners (and readers of this blog) might remember I chided Citrix back in May for the ungraceful way the system handles audio drop-outs by the presenter.
So, during the June webinar, attendee Keith Stobie reported an inability to see the presentation using Internet Explorer 9. He said that Chrome (not sure which version) worked just fine. I reported the problem to Citrix on Wednesday of last week. Five days later, I received the following reply, quoted in its entirety (minus the links provided at the end):
Thank you for contacting Citrix Online Global Customer Support,
Dear Rex Black,
IE 9 has not been tested with any of our products as of yet. we will try to help fix any issues the best we can, but cannot guarantee anything. Hopefully we should get this done as soon as possible.
If you have any additional questions or need further clarification regarding this matter, please feel free to reply directly to this email. For any other product inquiries or technical assistance, please visit us at our Support Centers listed at the bottom of this email. Our Support Centers include Self Help files and our Global Customer Support Contact Information.
Richard Carrel | Global Customer Support
So, I appreciate the reply, though I have to say that five days isn’t quick turnaround for a customer complaint about a browser-based service that’s incompatible with a major vendor’s browser.
More surprising to me is the admission that Citrix hadn’t tested IE9. I don’t keep up with the browser wars, so I’m not sure what share of the browsing action IE9 has, but I’m pretty sure that Microsoft’s IE family of browsers remains at least one of the 800 pound gorillas in the room.
Putting myself in the position of the Director of Quality or VP of Testing or whatever the head-testing-honcho’s title is at Citrix, I understand that there are constraints on compatibility testing. I wouldn’t bother to test four-year-old versions of Opera, for example. But come on, not testing IE9? If I were in charge of testing for any SaaS provider, compatibility would be one of my top quality risks, and testing browser/OS/malware configuration combinations would receive a fair amount of time, money, and attention. Of course, functionality, reliability, performance, and security would also be high on the list of risk categories, too.
Here’s some free consulting advice to my fellow test professionals who work at Citrix: Spend a little time getting ramped up on how to do quality risk analysis and risk based testing. You can find lots of free resources on our web site, especially in the articles and the Digital Library. You’ll notice that compatibility is one of the quality risk categories included in our free quality risk checklist. If you need more help, let me know, as we can provide a one-week risk based testing bootstrapping service that will get you headed in the right direction.
Moral of the story: If you are in charge of testing at any SaaS vendor, and you’re not testing for compatibility, it’s only a matter of time before someone writes a blog post like this one about your product and the degree to which you aren’t testing it.