Archive for March, 2012
I had a query come in about a sample exam question in our Advanced Test Manager course. Shukti asked me to confirm the answer to the following:
A given organization is using reviews for development work products like code, requirements, and design specifications; test work products like test plans, quality risk analyses, and test design specifications; and documentation and help screens. The review processes have been in place for two years and are delivering excellent financial, quality, and schedule benefits.
You are attending a management team meeting. A senior executive raises the need to update the objectives by which the individual contributors are measured on their yearly performance evaluations. He suggests using defect counts from review meetings. He circulates a draft plan. Under the plan, people will be rewarded based on the number of defects they find in reviews. Further, people will be penalized if items they have produced incur too many defects during reviews.
Which of the following is a psychological factor affecting review success and failure that is likely to cause such an initiative to undermine the current success of the reviews?
- Scrutinize the document and not the author.
- Focus all participants on delivering high-quality items. [Correct]
- Try to find as many defects as possible.
- Assemble the right team of reviewers.
Here’s why that answer is correct. Bonuses and other financial incentives/disincentives are based on the assumption that people are basically rational economic actors who will behave in a way to maximize their financial situation. (Now, we can have a whole separate discussion about whether this assumption holds perfectly in all situations, but it really doesn’t need to be perfect, as long as more often than not it is true.)
So, in this scenario, what will the reviewers be focused on? Not on delivering the highest-quality items, but on finding the maximum number of bugs in each item and "claiming" those bugs for themselves (i.e., squabbling with other reviewers over who should get credit). What will the authors be focused on? Arguing about every single bug that reviewers report, trying to insist that the document is perfect. None of these behaviors is supportive of increased quality, and the authors' behaviors are directly contrary to that goal.
In short, a really bad idea. I wish I could say that I never saw organizations make this kind of mistake with process metrics (i.e., mistaking a software process metric for a people metric), but unfortunately it is all too common.
Last week, I had a question arrive from a reader of one of my books, Pragmatic Software Testing. The reader, Akhila, wrote:
I picked up your book with a hope to learn more about testing, as I’m new to testing. I’m through with a nice course in testing, but I’m yet to land a testing job. All said, I’m glad that your book is serving my appetite for learning more. I’m thoroughly enjoying reading your book. Little concern though is that, I’m still not finding prototype K.1, K.2 etc which you mention in the appendices. By the way, I’m going through the Indian edition of the book.
Hope to get a response.
Thank you for a wonderful bundle of knowledge.
Akhila, you are referring to the screen prototypes mentioned in the Omninet Marketing Requirements Document, but not provided in the requirements or anywhere else. This is actually a bug in the requirements, one that sharp-eyed readers (such as you) typically find when they do the review exercise in the book.
Like most of the other bugs in that document, this is not an unusual problem in requirements specifications. Referencing information that cannot be found, or, when found, turns out to be marked “to be determined” or just an empty template, is a problem that many testers (and others) encounter with technical documents of various kinds, requirements included.
In this case, if you don’t find the bug in a requirements review, you’ll certainly run into it when you go to create tests. This is one reason why early test design is so useful. The attempt to create tests often reveals problems in a requirements specification, sometimes even in specifications that were carefully reviewed.
Are you a happy software tester? According to this article, those of us in this profession are among the happiest, most satisfied employees. I certainly enjoy the intellectual challenges of this career, and it’s rewarding, too, to be involved in such important work.
What makes you happy about being a software tester? Do you consider yourself a happy tester? Share your comments with me and with other readers of this blog.
My Agile Testing Opportunities webinar continues to stir up discussion. A listener, Rana Zoghbi, commented:
I have a small question concerning this presentation:
You have talked about the opportunities of agile in testing, but what about the pitfalls that a tester can encounter in an agile environment? I am pursuing the Professional tester course – Foundation level, and it mentions that usually an agile mode is hostile towards independent testing. Can you please elaborate further on this?
Thanks for the opportunity to clarify some points, Rana. First, in terms of pitfalls, yes, there are testing pitfalls associated with any lifecycle model, and Agile is no exception. My presentation yesterday focused on how Agile lifecycles offer testers certain opportunities (as any lifecycle model does), but, if you want to hear about the pitfalls, you can listen to my previous webinar, Agile Testing Challenges.
Second, whether Agile teams are “hostile” to independent test teams is something that varies widely. We have a number of clients that are using Agile methods and have independent test teams. In a number of those situations, the independent test team is well-respected and well-established as a complement to the Agile approach. The way in which the independent test team interacts with the Agile teams tends to vary, so I’d recommend that you check out my previous webinar on Test Organization Options for more details.
Historically, there certainly was some bad blood between Agile methodologists and professional testers. One early sign of trouble occurred when Kent Beck, the originator of Extreme Programming, gave a keynote at the now-defunct Quality Week conference in San Francisco in 1999. In his talk, he was reported to have said that the entire concept of independent testing was going to fade away, since it would be made irrelevant by the Agile approach to creating software.
Thirteen or so years later, we (the professional testers) are still here and still relevant. There are still some dogmatists on the extreme fringes of the Agile world who reject the concept of professional, independent testers, sure. However, my sense is that pragmatic software practitioners who are adopting Agile–and often adapting it in the process–see the value of independent test teams, staffed with professional testers, and providing testing services to their Agile teams. I certainly see a lot of clients that are successfully doing so.
Vaibhavee K attended my webinar on Agile Testing Opportunities this evening, and asked an interesting follow up question:
I have a query on Test Estimations techniques for Agile testing.
Which is the best method – Estimation should be done based on Sprints OR overall User story-Goal?
What kind of Estimation techniques can be used?
Certainly some of the estimation on Agile projects needs to be based on user stories. However, there are also systemic testing concerns associated with any application. We recommend that our clients--both in Agile and traditional lifecycles--use risk-based testing to identify these systemic quality risks. These should be considered as part of the estimation for each sprint.
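To make the idea concrete, here is a minimal sketch of one common risk-based estimation approach: rate each quality risk item for likelihood and impact, multiply to get a risk priority number, and allocate the sprint's test budget in proportion to each item's share of total risk. The risk items, rating scale, and hour figures below are illustrative assumptions, not from the webinar.

```python
# Hypothetical quality risk items, each rated for likelihood and impact
# on a 1 (low) to 5 (high) scale. These names and numbers are examples only.
risk_items = [
    {"name": "payment processing errors", "likelihood": 4, "impact": 5},
    {"name": "slow response under peak load", "likelihood": 3, "impact": 4},
    {"name": "minor layout glitches", "likelihood": 2, "impact": 1},
]

TOTAL_TEST_HOURS = 80  # hypothetical test budget for the sprint

# Risk priority number (RPN): likelihood times impact.
for item in risk_items:
    item["rpn"] = item["likelihood"] * item["impact"]

total_rpn = sum(item["rpn"] for item in risk_items)

# Allocate test effort in proportion to each item's share of total risk.
for item in risk_items:
    item["hours"] = round(TOTAL_TEST_HOURS * item["rpn"] / total_rpn, 1)
    print(f'{item["name"]}: RPN={item["rpn"]}, ~{item["hours"]} hours')
```

The same table of risk items can then feed both the sprint estimate and the test design work, since the highest-RPN items are the ones that warrant the most thorough testing.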