Archive for the ‘reviews’ Category
My friend and colleague from Colombia, Patricia Osorio, asks the following question:
Could you please help me with the following question? Is it right to say that the order of reviews according to their formality, from most formal to least formal, is: inspection, walkthrough, technical review, and informal review? When I read the Foundation Level syllabus to take the exam (2005), it was very clear. Now, when I reread the Foundation Level syllabus in its 2010 version, it seems to me that it has changed to inspection, technical review, walkthrough, and informal review.
Patricia Osorio Aristizabal
Patricia, the latest version of the Foundation syllabus is 2011, but I think the text is much the same as the 2010 version. I have checked some of the previous versions of the syllabus, and can’t find any explicit mention of the spectrum of formality.
I believe this idea of the spectrum of formality (informal, technical review, walkthrough, inspection) comes from IEEE 1028. In that standard, metrics and review-based process improvement are not specified for the technical review, and they are for walkthroughs and inspections. So, this makes the technical review less formal. The inspection is more formal than a walkthrough due to the separation of moderator and author.
This leads to an interesting question: Does this spectrum of formality actually matter in the real world? In my experience, it really doesn’t. Here’s why I say that. Most companies don’t do reviews, or at least don’t do them anywhere near as often and as thoroughly as they should. So, when I’m talking to a client about doing reviews, I don’t get into the issue of what level of formality to use. Instead, my focus is on motivating them to start doing more reviews and doing them better. I can’t remember working with any clients whose main problem with reviews was that they weren’t using the right level of formality.
The other reason this doesn’t matter much is because of the naming issue. People use the terms “review,” “technical review,” “inspection,” “JAD session,” “walkthrough,” and more, and whenever I hear those terms I ask people to tell me, specifically, what such an event is, who is involved, what the process is, and–if people mention more than one type of review–what the differences are. I very rarely get clear answers to those questions, which tells me that any particular session where people sit down to discuss a work product could have any one of a dozen or so different names attached to it.
Personally, I don’t see this as a problem to worry about. I’m more worried about whether my clients are doing reviews, regularly and with good benefit, than about what they call them.
I was recently in Moscow to give a presentation on testing and quality at the Microsoft QA Day event there. After my talk, I had a conversation with a software tester, who followed up with an e-mail. I’ll respond here to Nataly’s questions.
First of all, I want to thank you for your speech at the Microsoft QA Day in Moscow. Do you like the snow in late March?
Well, it was a bit cold while I was there. However, having my expectations shaped by books like War and Peace, Crime and Punishment, Ten Days that Shook the World, and Doctor Zhivago, the wintry landscape was exactly what I imagined.
You may remember, I came up to you at the end of the day with a question about the participation of the test team in reviews and risk analysis. But because I have not had sufficient practice in conversational English, I could not properly ask the question.
Certainly, Nataly, your English–written or conversational–is vastly superior to my Russian!
I will try to formulate the question more correctly. I’ll be very grateful if you could respond to my letter. Imagine that there is a test team with the appropriate skills, which should participate in review or risk analysis. Also imagine that the team was trained before the participation: they read the corresponding books about the review process or risk analysis process.
Well, Nataly, I’m not sure I would consider just reading a book on risk analysis or reviews sufficient training. It will be for some people, but many of our clients find that they need a little more help than that before they can become truly effective.
Despite the preliminary training, we can expect that the result of the team’s first participation will be low or zero, because of the lack of experience. However, management expects that the money invested in training will pay off immediately. Therefore, it would be wise to prepare them in advance for the fact that the benefits of the test team’s participation will not be visible immediately.
It seems that, if all management did was pay for two books, one describing risk analysis and another describing software reviews, they can hardly have high expectations about how much behavioral and capability change is going to result from that extremely small investment. What they are going to pay for, in that situation, is the low efficiency associated with the series of practical review or risk analysis sessions necessary to learn by doing. This is exactly what you are describing. It’s often more efficient to pay for a training session, since, if the training is carefully chosen and the participants apply themselves to it, the participants will leave the training ready to be effective in their first review.
Could you please tell me, based on your experience, how much time on average usually passes before the testers’ participation brings tangible benefit? One review? Two reviews? Three reviews…?
So, we have a one-day training for risk analysis, and a two-day training for reviews. When we train people in how to do reviews or risk based testing, we find that they are immediately effective upon leaving that training. That’s not to say that they don’t continue to get better over time, but they are immediately capable of participating in an active and contributing fashion. However, if all the people do is read a book, it’s hard to say what level of effectiveness they would have.
The reason training is so much more effective than just reading a book is that training–at least good training–includes hands-on, practical, realistic exercises. This is true for any subject. Our training on risk based testing and reviews involves actually doing a risk analysis or reviewing a requirements specification. The instructor is involved during the exercise, to guide the participants to success. That way, the participants leave the training having actually, effectively and efficiently, carried out the process.
With a book, even a book that includes exercises, there is no instructor there to help guide the reader through the process. So, if the reader gets confused, or gets stuck, or thinks they know what they are doing but is actually wrong, the capability gained may be low.
I fear that managers might say something like: “The testers spent so much time studying the theory, and they spend time and money on participating in reviews, so when can we see the benefit of their participation?”
Yes, this is a significant risk. If testers get involved in activities like reviews and risk analysis, and they are not capable of carrying those activities out, that can cause significant damage to the test team’s credibility and lead to managers deciding not to include them in the future.
Thanks in advance!
Thanks for getting in touch with me, Nataly. I hope my response has been helpful. If you do decide to pursue the avenue of reading books, please be sure that you arrange some exercises, just with the testers, before you try to participate in a real review or to initiate risk based testing. However, I would recommend that you seek out good training, training that includes hands-on exercises and instructor support, to improve the odds of success. Alternatively, if you have a person in the organization who can mentor the testers, someone with experience in reviews or risk analysis, cross-training can also work.
I had a query come in about a sample exam question in our Advanced Test Manager course. Shukti asked me to confirm the answer to the following:
A given organization is using reviews for development work products like code, requirements, and design specifications; test work products like test plans, quality risk analyses, and test design specifications; and documentation and help screens. The review processes have been in place for two years and are delivering excellent financial, quality, and schedule benefits.
You are attending a management team meeting. A senior executive raises the need to update the objectives by which the individual contributors are measured on their yearly performance evaluations. He suggests using defect counts from review meetings. He circulates a draft plan. Under the plan, people will be rewarded based on the number of defects they find in reviews. Further, people will be penalized if items they have produced incur too many defects during reviews.
Which of the following is a psychological factor affecting review success and failure that is likely to cause such an initiative to undermine the current success of the reviews?
- Scrutinize the document and not the author.
- Focus all participants on delivering high-quality items. [Correct]
- Try to find as many defects as possible.
- Assemble the right team of reviewers.
Here’s why that answer is correct. Bonuses and other financial incentives/disincentives are based on the assumption that people are basically rational economic actors who will behave in a way to maximize their financial situation. (Now, we can have a whole separate discussion about whether this assumption holds perfectly in all situations, but it really doesn’t need to be perfect, as long as more often than not it is true.)
So, in this scenario, what will the reviewers be focused on? Not on delivering the highest-quality items, but on finding the maximum number of bugs in each item and “claiming” those bugs for themselves (i.e., squabbling with other reviewers over who should get credit). What will the authors be focused on? Arguing about every single bug that reviewers report, trying to insist that the document is perfect. None of these behaviors is supportive of increased quality, and the authors’ behavior is directly contrary to that goal.
In short, a really bad idea. I wish I could say that I never saw organizations make this kind of mistake with process metrics (i.e., mistaking a software process metric for a people metric), but unfortunately it is all too common.
Last week, I had a question arrive from a reader of one of my books, Pragmatic Software Testing. The reader, Akhila, wrote:
I picked up your book with a hope to learn more about testing, as I’m new to testing. I’m through with a nice course in testing, but I’m yet to land a testing job. All said, I’m glad that your book is serving my appetite for learning more. I’m thoroughly enjoying reading your book. My little concern, though, is that I’m still not finding the prototypes K.1, K.2, etc., which you mention in the appendices. By the way, I’m going through the Indian edition of the book.
Hope to get a response.
Thank you for a wonderful bundle of knowledge.
Akhila, you are referring to the screen prototypes mentioned in the Omninet Marketing Requirements Document, but not provided in the requirements or anywhere else. This is actually a bug in the requirements, one that sharp-eyed readers (such as you) typically find when they do the review exercise in the book.
Like most of the other bugs in that document, this is not an unusual problem in requirements specifications. Referencing information that cannot be found, or, when found, turns out to be marked “to be determined” or just an empty template, is a problem that many testers (and others) encounter with technical documents of various kinds, requirements included.
In this case, if you don’t find the bug in a requirements review, you’ll certainly run into it when you go to create tests. This is one reason why early test design is so useful. The attempt to create tests often reveals problems in a requirements specification, sometimes even in specifications that were carefully reviewed.