Software testing training, consulting, and outsourcing from the experts: Rex Black Consulting Services (RBCS)
Software Testing Articles

This page includes synopses of some recently published software testing articles and commentary from RBCS experts regarding the software testing industry and other topics. To view a full article, click on the provided link.

Test Management: Creating and Building Relationships
By Rex Black, President, RBCS, Inc.

When I do assessments for clients, I talk to a lot of people, both inside and outside the testing group. In the opening moments of each interview, I try to engage in a friendly exchange, where I break the ice between the interviewee and myself. Not only is it more pleasant to have a friendly conversation than a tense one, but people are more open and honest with someone with whom they have some kind of positive relationship, compared to a complete stranger—or someone they see as hostile, cold, or inscrutable. Most of the time, I succeed, and I get to spend an interesting hour or so with someone who gives me the benefit of their insights and opinions.

The same is true, on a much larger and longer scale, for test managers. Testing is a matter of providing useful services to stakeholders. If those stakeholders have a good relationship with you and the other test managers in your test group, information will flow more smoothly in both directions. The job of the test group will become easier because it has better access to information it needs. The test group will also become more valuable because the information the group produces will flow more smoothly to the recipients of that information. It’s just human nature: We listen to and value the communications we receive from people we are comfortable with, and we are happy to reciprocate that flow of information.

It’s not that you must be a personal friend to every stakeholder with whom you work, but a good professional relationship with those stakeholders is a major factor in the success of a test manager. How well you and the other managers in the test group initiate, cultivate, and sustain these relationships will strongly influence the flow of information, as well as the support, you obtain from your colleagues.

A relationship is necessarily a two-way affair. You and the test group can’t be the only beneficiaries from a relationship, at least not a good one. Once, a person with whom I worked on a project described the CEO of one vendor as follows: “Every time I meet with that guy, I want to take a shower afterward,” meaning that he felt soiled just by being in the same room. Later in the project, when my colleague legitimately but accidentally came into possession of a memo that was certainly not in the vendor’s interests to disclose to its client, my colleague felt no compunction about copying the document before returning it in a way that did not disclose that he had seen it. The relationship had become two-way, but not in a good way.
As a contrast, I had an excellent relationship with this same vendor’s test manager. Across a significant cultural difference—the same difference my colleague and the CEO had not bridged—he and I forged a relationship of honesty and trust. I felt I could tell him the truth about what was happening on my side of the project, and he felt the same. We shared information to advance our mutual goals of a successful project and high-quality deliverable while at the same time respecting the limits on communication imposed by our different positions in terms of who our employers were. Even when the relationship between the two companies became testy, he and I were always able to communicate as friends with a good relationship of mutual respect.

I note that this anecdote does not represent an isolated incident but rather a truth that has become plain to me throughout my career in testing. The successful test manager, perhaps more than any other manager in the software business, must cultivate strong relationships with stakeholders, continuously reinforce those relationships with mutual benefits, and maintain the relationships through good times and bad. In the next few subsections, let’s look more closely at how.


Matching Test Techniques to the Extent of Testing
By Rex Black, President, RBCS, Inc.

In the Pragmatic Risk Analysis and Management process described in books such as Managing the Testing Process, Pragmatic Software Testing, and Advanced Software Testing: Volume 2, I define the following extents of testing, in decreasing order of thoroughness:

  • Extensive
  • Broad
  • Cursory
  • Opportunity
  • Report bugs only
  • None

Risk based testing does not prescribe specific test design techniques to mitigate quality risks based on the level of risk, as the selection of test design technique for a given risk item is subject to many factors. These factors include the suspected defects (what Boris Beizer called the “bug hypothesis”), the technology of the system under test, and so forth. However, risk based testing does give guidance in terms of the level of test design, implementation, and execution effort to expend, and that does influence the selection of test design techniques. This sidebar provides heuristic guides to help test managers and engineers select appropriate test techniques based on the extent of testing indicated for a risk item by the quality risk analysis process. These guides apply to testing during system and system integration testing by independent test teams.
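As an illustration only, a heuristic guide of this kind can be captured as a simple lookup from extent of testing to candidate test design techniques. The particular pairings below are hypothetical examples, not the actual guidance from the sidebar:

```python
# Illustrative mapping from extent of testing to candidate test design
# techniques. The pairings here are hypothetical, for demonstration only.
EXTENT_TECHNIQUES = {
    "extensive": ["decision tables", "state-based testing",
                  "pairwise testing", "boundary value analysis"],
    "broad": ["equivalence partitioning", "boundary value analysis"],
    "cursory": ["exploratory testing", "basic equivalence partitioning"],
    "opportunity": ["time-boxed exploratory testing"],
    "report bugs only": [],
    "none": [],
}

def techniques_for(extent: str) -> list[str]:
    """Return candidate techniques for a risk item's assigned extent of testing."""
    return EXTENT_TECHNIQUES.get(extent.lower(), [])

print(techniques_for("Broad"))
```

A test manager could apply such a table during quality risk analysis, overriding it where the bug hypothesis or the technology of the system under test suggests a different technique.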


Why Most Unit Testing is Waste
By James O. Coplien

Unit testing was a staple of the FORTRAN days, when a function was a function and was sometimes worthy of functional testing. Computers computed, and functions and procedures represented units of computation. In those days the dominant design process composed complex external functionality from smaller chunks, which in turn orchestrated yet smaller chunks, and so on down to the level of well-understood primitives. Each layer supported the layers above it. You actually stood a good chance that you could trace the functionality of the things at the bottom, called functions and procedures, to the requirements that gave rise to them out at the human interface. There was hope that a good designer could understand a given function's business purpose. And it was possible, at least in well-structured code, to reason about the calling tree. You could mentally simulate code execution in a code review. 

Object orientation slowly took the world by storm, and it turned the design world upside-down. First, the design units changed from things-that-computed to small heterogeneous composites called objects that combine several programming artefacts, including functions and data, together inside one wrapper. The object paradigm used classes to wrap several functions together with the specifications of the data global to those functions. The class became a cookie cutter from which objects were created at run time. In a given computing context, the exact function to be called is determined at run-time and cannot be deduced from the source code as it could in FORTRAN. That made it impossible to reason about run-time behaviour of code by inspection alone. You had to run the program to get the faintest idea of what was going on.

So, testing became in again. And it was unit testing with a vengeance. The object community had discovered the value of early feedback, propelled by the increasing speed of machines and by the rise in the number of personal computers. Design became much more data-focused because objects were shaped more by their data structure than by any properties of their methods. The lack of any explicit calling structure made it difficult to place any single function execution in the context of its execution. What little chance there might have been to do so was taken away by polymorphism. So integration testing was out; unit testing was in. System testing was still somewhere there in the background but seemed either to become someone else's problem or, more dangerously, was run by the same people who wrote the code as kind of a grown-up version of unit testing.

Classes became the units of analysis and, to some degree, of design. CRC cards (popularly representing Classes, Responsibilities, and Collaborators) were a popular design technique where each class was represented by a person. Object orientation became synonymous with anthropomorphic design. Classes additionally became the units of administration, design focus, and programming, and their anthropomorphic nature gave the master of each class a yearning to test it. And because few class methods came with the same contextualization that a FORTRAN function did, programmers had to provide context before exercising a method (remember that we don't test classes and we don't even test objects: the unit of functional test is a method). Unit tests provided the drivers to take methods through their paces. Mocks provided the context of the environmental state and of the other methods on which the method under test depends. And test environments came with facilities to poise each object in the right state in preparation for the test.
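The pattern described above, a unit test driving a single method with a mock supplying the collaborator context, might look like the following Python sketch. The OrderProcessor class and its names are hypothetical, invented for illustration:

```python
from unittest.mock import Mock

class OrderProcessor:
    """Hypothetical class: the method under test depends on a collaborator."""
    def __init__(self, inventory):
        self.inventory = inventory  # collaborator providing environmental state

    def can_ship(self, item, quantity):
        # The unit of functional test is this method, not the class.
        return self.inventory.stock_level(item) >= quantity

# The mock supplies the context the method depends on, so the method
# can be exercised without a real inventory system behind it.
inventory = Mock()
inventory.stock_level.return_value = 5

processor = OrderProcessor(inventory)
assert processor.can_ship("widget", 3)
assert not processor.can_ship("widget", 9)
inventory.stock_level.assert_called_with("widget")
```

The mock both poises the object in the right state before the test and records how the method under test interacted with its collaborator.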


Interview with Rex Black in the August 2013 issue of “Tester’s Life” Magazine

An article featuring an interview with Rex Black was recently published in “Tester’s Life” magazine. Click here to read the interview in English or, to see the original source article in its entirety in Russian, visit www.testers-life.ru.


I Take It (Almost) All Back
By Rex Black

Exploratory Testing: Snake Oil, Snake Bite, or Smart Idea?


Defining Test Mission, Policy, and Metrics of Success
By Rex Black and Debbie Friedenberg

The International Software Testing Qualifications Board (ISTQB) program defines testing expansively. Static testing (via reviews and static analysis) is included as well as all levels of dynamic testing, from unit testing through to the various forms of acceptance testing. The test process is defined to include all necessary activities for these types of testing: planning; analysis, design, and implementation; execution, monitoring, and results reporting; and closure.


Are You Using Your ISTQB Certification?
By Judy McKay

An ISTQB certification is a nice thing to have. It decorates your office. It tells your co-workers that you have pursued education and have achieved a level of knowledge in testing. At the Advanced level, it shows that you have selected an area of specialization and have demonstrated your knowledge. At the Expert level, it provides evidence that you have set yourself apart from others: that you have acquired and demonstrated expertise in a particular area. But are you really using your knowledge and expertise? Let’s look at how the benefits of the certification can be used in your regular job and your career.


Why You Should Consider Software Testing
By Rex Black

Rex Black was asked recently by a client to write a brief article regarding why to become a software tester. The target audience was high school students. RBCS is committed to "spreading the word" about testing, and we all enjoy the opportunity to encourage future testers.

Talking to high school students about testing requires that we communicate on their level and in a way that might intrigue them. As any parent or educator knows, this is not always an easy task. Read on to see how Rex Black manages just that.

I bet this week you used more software than you even know. Did you use a smart phone? Did you ride in a car? Did you play a video game? Did you send an e-mail on your tablet or PC? Did you fly on an airplane? Did you go SCUBA diving? Did you go from one floor to another in an elevator while at the mall? Every one of these activities involves using software. In fact, most of the software used in those activities is more complex and sophisticated than the software used in the Apollo moon mission, and often even more so than the software used by the Space Shuttle.


Challenges of Testing with Production Data
By Rex Black

This article is excerpted from Chapter 3 of Rex Black’s book Managing the Testing Process, 3e.

A number of RBCS clients find that obtaining good test data poses many challenges. For any large-scale system, testers usually cannot create sufficient and sufficiently diverse test data by hand; i.e., one record at a time. While data-generation tools exist and can create almost unlimited amounts of data, the data so generated often do not exhibit the same diversity and distribution of values as production data. For these reasons, many of our clients consider production data ideal for testing, particularly for systems where large sets of records have accumulated over years of use with various revisions of the systems currently in use, and systems previously in use. 

However, to use production data, we must preserve privacy. Production data often contains personal data about individuals which must be handled securely. However, requiring secure data handling during testing activities imposes undesirable inefficiencies and constraints. Therefore, many organizations want to anonymize (scramble) the production data prior to using it for testing.

This anonymization process leads to the next set of challenges, though. The anonymization process must occur securely, in the sense that it is not reversible should the data fall into the wrong hands. For example, simply substituting the next digit or the next letter in sequence would be obvious to anyone; it doesn’t take long to deduce that “Kpio Cspxo” is actually “John Brown”, which makes the de-anonymization process trivial.
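To see why such a scheme is trivially reversible, the next-letter substitution and its inverse can be sketched in a few lines of Python:

```python
def shift(text, k):
    """Shift each letter by k positions in the alphabet, preserving case.

    Non-letter characters (spaces, digits, punctuation) pass through unchanged.
    """
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr(base + (ord(ch) - base + k) % 26))
        else:
            out.append(ch)
    return ''.join(out)

print(shift("John Brown", 1))   # the naive "anonymization"
print(shift("Kpio Cspxo", -1))  # reversed just as easily
```

Any attacker who guesses the scheme recovers the original data with the one-line inverse, which is why real anonymization must use an irreversible transformation.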



Organizing Manual Testing on a Budget
By Capers Jones, Vice President and Chief Technology Officer
Namcook Analytics LLC

RBCS is pleased to feature a special guest author for our newsletter article, Capers Jones. Capers Jones is, of course, a long-standing force for improving the software engineering industry, and has published a number of books that I consider essential reading for software professionals who seek to truly understand, through data and facts, what happens on software projects. Recently, he published an important book on software quality, The Economics of Software Quality. So, I asked Capers if he'd be willing to contribute a guest article, and he graciously agreed. This article, on software quality today and tomorrow, gives us a sobering view of our current situation, but also provides clear direction on what we need to do to get better. The good news is that we already have many of the tools we need to improve software quality. -- Rex Black

In 2012, large software projects are hazardous business undertakings. More than half of software projects larger than 10,000 function points (about 1,000,000 lines of code) are either cancelled or run late by more than a year.

When we examine troubled software projects, we find that the main reason for delay or termination is excessive volumes of serious defects. Conversely, large software projects that are successful are always characterized by excellence in both defect prevention and defect removal. We can conclude that achieving state-of-the-art software quality control is the single most important objective of software process improvement.

Quality control is on the critical path for advancing software engineering from its current status as a skilled craft to become a true profession.



How to Pick Testing Tools
By Rex Black


Many of us got into technology because we were fascinated by the prospect of using computers to build better ways to get work done. (That and the almost magical way we could command a complex machine to do something simply through the force of words coming off our fingers, into a keyboard, and onto a screen.) Ultimately, those of us who consider ourselves software engineers, like all engineers, are in the business of building useful things.

Of course, engineers need tools. Civil engineers have dump trucks, trenching machines, and graders. Mechanical engineers have CAD/CAM software. And we have integrated development environments (IDEs), configuration management tools, automated unit testing and functional regression testing tools, and more. Many great testing tools are available, and some of them are even free. But just because you can get a tool, doesn’t mean that you need the tool. 

When you get beyond the geek factor of some tool, you come to the practical questions: What is the business case for using a tool? There are so many options, but how do I pick one? How should I introduce and deploy the tool? How can I measure the return on investment for the tool? This article will help you uncover answers to these questions as you contemplate tools.



Measuring Confidence along the Dimensions of Test Coverage
By Rex Black

When I talk to senior project and product stakeholders outside of test teams, confidence in the system—especially, confidence that it will have a sufficient level of quality—is one benefit they want from a test team involved in system and system integration testing. Another key benefit such stakeholders commonly mention is providing timely, credible information about quality, including our level of confidence in system quality.

Reporting their level of confidence in system quality often proves difficult for many testers. Some testers resort to reporting confidence in terms of their gut feel. Next to major functional areas, they draw smiley faces and frowny faces on a whiteboard, and say things like, “I’ve got a bad feeling about function XYZ.” When management decides to release the product anyway, the hapless testers either suffer the Curse of Cassandra if function XYZ fails in production, or watch their credibility evaporate if there are no problems with function XYZ in production.



How to Build Quality Applications
By Rex Black
 
Testing is an excellent means to build confidence in the quality of software before it’s deployed in a data center or released to customers. It’s good to have confidence before you turn an application loose on the users, but why wait until the end of the project? The most efficient form of quality assurance is building software the right way, right from the start. What can software testing, software quality, and software engineering professionals do, starting with the first day of the project, to deliver quality applications?



Metrics for Software Testing: Managing with Facts Part 4: Product Metrics
By Rex Black

In the previous article in this series, we moved from a discussion of process metrics to a discussion of how metrics can help you manage projects. I talked about the use of project metrics to understand the progress of testing on a project, and how to use those metrics to respond and guide the project to the best possible outcome. We looked at the way to use project metrics, and how to avoid the misuse of these metrics.

In this final article in the series, we’ll look at one more type of metric. In this article, we examine product metrics. Product metrics are often forgotten, but having good product metrics helps you understand the quality status of the system under test. This article will help you understand how to use product metrics properly. I’ll also offer some concluding thoughts on the proper use of metrics in testing, as I wind up this series of articles.

As I wrote above, product metrics help us understand the current quality status of the system under test. Good testing allows us to measure the quality and the quality risk in a system, but we need proper product metrics to capture those measures. These product metrics provide the insights to guide where product improvements should occur, if the quality is not where it should be (e.g., given the current point on the schedule). As mentioned in the first article in this series, we can talk about metrics as relevant to effectiveness, efficiency, and elegance.

Effectiveness product metrics measure the extent to which the product is achieving desired levels of quality. Efficiency product metrics measure the extent to which a product achieves that desired level of quality in an economical fashion. Elegance product metrics measure the extent to which a product effectively and efficiently achieves those results in a graceful, well-executed fashion.
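As one concrete illustration (the formulas below are common industry examples, not metrics prescribed by this article), an effectiveness metric such as defect detection percentage and a simple efficiency-style metric can be computed like this:

```python
def defect_detection_percentage(found_in_test, found_in_production):
    """Effectiveness: share of total known defects caught before release."""
    total = found_in_test + found_in_production
    return 100.0 * found_in_test / total if total else 0.0

def cost_per_defect(total_test_cost, found_in_test):
    """Efficiency-style metric: testing cost per defect found (illustrative)."""
    return total_test_cost / found_in_test if found_in_test else float("inf")

# Hypothetical figures: 90 defects found in test, 10 escaped to production,
# against a testing spend of 45,000.
print(defect_detection_percentage(90, 10))  # 90.0
print(cost_per_defect(45000, 90))           # 500.0
```

Tracked over releases, such figures let you answer "is our quality where it should be, given where we are in the schedule?" with numbers rather than gut feel.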



Advanced Risk Based Test Results Reporting: Putting Residual Quality Risk Measurement in Motion
By Rex Black and Nagata Atsushi

Analytical risk based testing offers a number of benefits to test teams and organizations that use this strategy.  One of those benefits is the opportunity to make risk-aware release decisions.  However, this benefit requires risk based test results reporting, which many organizations have found particularly challenging.  This article describes the basics of risk based testing results reporting, then shows how Rex Black (of RBCS) and Nagata Atsushi (of Sony) developed and implemented new and ground-breaking ways to report test results based on risk.

Testing can be thought of as (one) way to reduce the risks to system quality prior to release.  Quality risks typically include possible situations like slow system response to user input, incorrect calculations, corruption of customer data, and difficulty in understanding system interfaces.  All testing strategies, competently executed, will reduce quality risks.  However, analytical risk based testing, a strategy that allocates testing effort and sequences test execution based on risk, minimizes the level of residual quality risk for any given amount of testing effort.

There are various techniques for risk based testing, including highly formal techniques like Failure Mode and Effect Analysis (FMEA).  Most organizations find this technique too difficult to implement, so RBCS typically recommends, and helps clients to implement, a technique called Pragmatic Risk Analysis and Management (PRAM).  You can find a case study of PRAM implementation at another large company, CA, at http://www.rbcs-us.com/images/documents/A-Case-Study-in-Risk-Based-Testing.pdf. While this article describes the implementation of the technique for projects following a sequential lifecycle, a similar approach has been implemented by organizations using Agile and iterative lifecycle models.
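As a rough, hypothetical sketch (not the specific reporting method Black and Nagata implemented), residual quality risk per risk item can be estimated by weighting the fraction of planned tests not yet passed by the item's risk priority:

```python
# Illustrative risk based results reporting: each tuple is
# (risk item, priority where 1 = highest risk, tests planned, tests passed).
risk_items = [
    ("incorrect calculations",      1, 20, 20),
    ("slow response to user input", 2, 10,  6),
    ("corruption of customer data", 1, 15, 12),
]

def residual_risk(priority, planned, passed):
    """Estimate residual risk as the uncovered fraction, weighted by priority.

    Higher-priority items (lower rank number) contribute more residual risk
    for the same uncovered fraction. Purely a sketch for illustration.
    """
    uncovered = (planned - passed) / planned if planned else 1.0
    return uncovered / priority

for name, prio, planned, passed in risk_items:
    print(f"{name}: residual risk {residual_risk(prio, planned, passed):.2f}")
```

A report built from such numbers lets stakeholders see, per risk item, how much quality risk would remain if the product were released today, which is the basis for a risk-aware release decision.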

This article was originally published in Software Test and Quality Assurance (www.softwaretestpro.com) in their December 2010 edition.



Metrics for Software Testing: Managing with Facts: Part 3: Project Metrics
Written by Rex Black

In the previous article in this series, we moved from general observations about metrics to a specific discussion about how metrics can help you manage processes. We talked about the use of metrics to understand and improve test and development process capability with facts.  We covered the proper development of process metrics, starting with objectives for the metrics and ultimately setting industry-based goals for those metrics. We looked at how to recognize a good set of process metrics, and trade-offs for those metrics.

In this and the next article in the series, we’ll look at two more specific types of metrics.  In this article, we turn from process to project metrics.  Project metrics can help us understand our status in terms of the progress of testing and quality on a project.  Understanding current project status is a pre-requisite to rational, fact-driven project management decisions.  In this article, you’ll learn how to develop, understand, and respond to good project metrics.

Click here to read the article in its entirety.



Metrics for Software Testing: Managing with Facts: Part 2: Process Metrics 
Written by Rex Black

In the previous article in this series, I offered a number of general observations about metrics, illustrated with examples. We talked about the use of metrics to manage testing and quality with facts. We covered the proper development of metrics, top-down (objective-based) not bottom-up (tools-based). We looked at how to recognize a good set of metrics.

In the next three articles in the series, we’ll look at specific types of metrics. In this article, we will take up process metrics. Process metrics can help us understand the quality capability of the software engineering process as well as the testing capability of the software testing process. Understanding these capabilities is a pre-requisite to rational, fact-driven process improvement decisions. In this article, you’ll learn how to develop and understand good process metrics.

Click here to read the article in its entirety.






Metrics for Software Testing:  Managing with Facts, Part 1:  The How and Why of Metrics
By Rex Black


At RBCS, a growing part of our consulting business is helping clients with metrics programs. We’re always happy to help with such engagements, and I usually try to do the work personally, because I find it so rewarding. What’s so great about metrics? Well, when you use metrics to track, control, and manage your testing and quality efforts, you can be confident that you are managing with facts and reality, not opinions and guesswork.


When clients want to get started with metrics, they often have questions. How can we use metrics to manage testing? What metrics can we use to measure the test process?  What metrics can we use to measure our progress in testing a project? What do metrics tell us about the quality of the product? We work with clients to answer these questions all the time. In this article, and the next three articles in this series, I’ll show you some of the answers.






Critical Testing Processes: An Open Source, Business Driven Framework for Improving the Testing Process
By Rex Black


When I wrote my book Critical Testing Processes in the early 2000s, I started with the premise that some test processes are critical, some are not. I designed this lightweight framework for test process improvement in order to focus the test team and test manager on a few test areas that they simply must do properly. This contrasts with the more expansive and complex models inherent in TPI and TMM.  In addition, the Critical Testing Processes (CTP) framework eschews the prescriptive elements of TMM and TPI since it does not impose an arbitrary, staged maturity model.

What’s the problem with prescriptive models?  In my consulting work, I have found that businesses want to make improvements based on the business value of the improvement and the organizational pain that improvement will alleviate. A simplistic maturity rating might lead a business to make improvements in parts of the overall software process or test process that are actually less problematic or less important than other parts of the process simply because the model listed them in order.

CTP is a non-prescriptive process model. It describes the important software processes and what should happen in them, but it doesn’t put them in any order of improvement. This makes CTP a very flexible model. It allows you to identify and deal with specific challenges to your test processes. It identifies various attributes of good processes, both quantitative and qualitative. It allows you to use business value and organizational pain to select the order and importance of improvements. It is also adaptable to all software development lifecycle models.



Challenges in Agile Translation

Polish translation

By Rex Black


A popular article translated into Polish! A number of our clients have adopted Scrum and other Agile methodologies. Every software development lifecycle model, from sequential to spiral to Agile, has testing implications. Some of these implications ease the testing process. We don't need to worry about these implications here.


Some of these testing implications challenge testing. In this case study, I discuss those challenges so that our client can understand the issues created by the Scrum methodology, and distinguish those from other types of testing issues that our client faces.


Click here for the English version.



A Few Thoughts on Test Data
By Rex Black


This article is excerpted from Chapter 3 of Rex Black's popular book Managing the Testing Process, 3e.


A number of RBCS clients find that obtaining good test data poses many challenges. For any large-scale system, testers usually cannot create sufficient and sufficiently diverse test data by hand; i.e., one record at a time. While data-generation tools exist and can create almost unlimited amounts of data, the data so generated often do not exhibit the same diversity and distribution of values as production data. For these reasons, many of our clients consider production data ideal for testing, particularly for systems where large sets of records have accumulated over years of use with various revisions of the systems currently in use, and systems previously in use.


However, to use production data, we must preserve privacy. Production data often contains personal data about individuals which must be handled securely. However, requiring secure data handling during testing activities imposes undesirable inefficiencies and constraints. Therefore, many organizations want to anonymize (scramble) the production data prior to using it for testing.


This anonymization process leads to the next set of challenges, though. The anonymization process must occur securely, in the sense that it is not reversible should the data fall into the wrong hands. For example, simply substituting the next digit or the next letter in sequence would be obvious to anyone; it doesn’t take long to deduce that “Kpio Cspxo” is actually “John Brown”, which makes the de-anonymization process trivial.



Using Domain Analysis for Testing
By Rex Black


Many of you are probably familiar with basic test techniques like equivalence partitioning and boundary value analysis. In this article, Rex presents an advanced technique for black-box testing called domain analysis. Domain analysis is an analytical way to deal with the interaction of factors or variables within the business logic layer of a program. It is appropriate when you have a number of interacting factors to deal with. These factors might be input fields, output fields, database fields, events, or conditions. They should interact to create two or more situations in which the system will process data differently. Those situations are the domains. In each domain, the value of one or more factors influences the values of other factors, the system's outputs, or the processing performed.

In some cases, the number of possible test cases becomes very large due to the number of variables or factors and the potentially interesting test values or options for each variable or factor. For example, suppose you have 10 integer input fields that each accept a number from 0 to 99. That gives 100^10, or a hundred billion billion, valid input combinations.

Equivalence class partitioning and boundary value analysis on each field will reduce but not resolve the problem. You have four boundary values for each field. The illegal values are easy, because you have only 20 tests for those. However, to test each legal combination of fields, you have 1,024 test cases. But do you need to do so? And would testing combinations of boundary values necessarily make for good tests? Are there smarter options for dealing with such combinatorial explosions?
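The arithmetic behind these figures is worth making explicit. This short sketch (the variable names are ours) computes the numbers quoted above:

```python
FIELDS = 10
LOW, HIGH = 0, 99

# Every combination of valid values: 100 choices per field, 10 fields.
valid_combinations = (HIGH - LOW + 1) ** FIELDS        # 100^10 = 10^20

# Boundary value analysis yields four values per field:
boundaries = [LOW - 1, LOW, HIGH, HIGH + 1]            # [-1, 0, 99, 100]

# Invalid boundaries can be tested one field at a time, two per field.
invalid_tests = 2 * FIELDS                             # 20

# Testing every combination of the two *legal* boundaries per field:
legal_boundary_combinations = 2 ** FIELDS              # 1,024
```

The jump from 20 invalid tests to 1,024 legal combinations, against a background of 10^20 possible inputs, is exactly the combinatorial explosion that domain analysis is designed to tame.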


This article was originally published in Quality Matters.



Advanced Software Test Design Techniques:  Use Cases
By Rex Black


The following is an excerpt from my recently-published book, Advanced Software Testing: Volume 1. This is a book for test analysts and test engineers. It is especially useful for ISTQB Advanced Test Analyst certificate candidates, but contains detailed discussions of test design techniques that any tester can—and should—use. In this third article in a series of excerpts, I discuss the application of use cases to testing workflows.

At the start of this series, I said we would cover three techniques that would prove useful for testing business logic, often more useful than equivalence partitioning and boundary value analysis. First, we covered decision tables, which are best in transactional testing situations. Next, we looked at state-based testing, which is ideal when we have sequences of events that occur and conditions that apply to those events, and the proper handling of a particular event/condition situation depends on the events and conditions that have occurred in the past. In this article, we’ll cover use cases, where preconditions and postconditions help to insulate one workflow from the previous workflow and the next workflow. With these three techniques in hand, you have a set of powerful techniques for testing the business logic of a system.


This article was originally published in Testing Experience Magazine.  Subscribe today!



Advanced Software Test Design Techniques: State Diagrams, State Tables, and Switch Coverage
By Rex Black


In this article, we look at state-based testing.  State-based testing is ideal when we have sequences of events that occur and conditions that apply to those events, and the proper handling of a particular event/condition situation depends on the events and conditions that have occurred in the past.  In some cases, the sequences of events can be potentially infinite, which of course exceeds our testing capabilities, but we want to have a test design technique that allows us to handle arbitrarily-long sequences of events. Read this article to learn more about state-based testing.
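A minimal example shows why history matters in state-based testing. The state machine below (an illustrative ATM-style model, not from the article) encodes which event/state pairs are legal, so a test can walk arbitrarily long event sequences and check that handling depends on what came before:

```python
# Valid transitions: (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "insert_card"): "awaiting_pin",
    ("awaiting_pin", "correct_pin"): "ready",
    ("awaiting_pin", "cancel"): "idle",
    ("ready", "withdraw"): "dispensing",
    ("dispensing", "take_cash"): "idle",
}

def run(events, state="idle"):
    """Drive the machine through a sequence of events, rejecting illegal ones."""
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {state!r}")
        state = TRANSITIONS[key]
    return state

# The same event is legal or illegal depending on the prior sequence:
assert run(["insert_card", "correct_pin", "withdraw"]) == "dispensing"
```

Covering every entry in the transition table once gives 0-switch coverage; covering every pair of consecutive transitions gives 1-switch coverage, and so on for longer sequences.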


This article was originally published in Testing Experience Magazine.  Subscribe today!



Advanced Software Test Design Techniques: Decision Tables and Cause-Effect Graphs
By Rex Black


This article is an excerpt from Rex Black's recently-published book, Advanced Software Testing: Volume 1. This is a book for test analysts and test engineers. It is especially useful for ISTQB Advanced Test Analyst certificate candidates, but contains detailed discussions of test design techniques that any tester can, and should, use. In this first article in a series of excerpts, Black starts by discussing the related concepts of decision tables and cause-effect graphs.


Equivalence partitioning and boundary value analysis are very useful techniques.  They are especially useful when testing input field validation at the user interface.  However, lots of testing that we do as test analysts involves testing the business logic that sits underneath the user interface.  We can use boundary values and equivalence partitioning on business logic, too, but three additional techniques, decision tables, use cases, and state-based testing, will often prove handier and more effective.  Read this article to learn more about these powerful techniques. 
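A decision table translates naturally into test code. The sketch below is an illustrative example of the technique (the discount rules are invented, not from the book): each row of the table pairs a combination of conditions with an expected action, and each row becomes one test.

```python
# A decision table as data: one entry per column of the table.
RULES = [
    # conditions: member?, order total >= 100?  ->  action: discount %
    {"member": True,  "big_order": True,  "discount": 15},
    {"member": True,  "big_order": False, "discount": 10},
    {"member": False, "big_order": True,  "discount": 5},
    {"member": False, "big_order": False, "discount": 0},
]

def discount_for(member, total):
    """Business logic under test: look up the matching rule."""
    big_order = total >= 100
    for rule in RULES:
        if rule["member"] == member and rule["big_order"] == big_order:
            return rule["discount"]
    raise ValueError("no rule matched")  # a complete table never reaches here

# One test per decision-table column:
assert discount_for(member=True, total=150) == 15
assert discount_for(member=False, total=50) == 0
```

Laying the rules out as a table makes gaps and contradictions in the business logic visible before any tests are run, which is much of the technique's value.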


This software testing article was originally published in the June 2009 edition of Testing Experience Magazine.



Risk-Based Testing: What It Is and How You Can Benefit
By Rex Black


Rex Black’s pioneering Managing the Testing Process was both the first test management book and the first to discuss risk-based testing.  In this software testing article, Rex explains:
• The benefits of risk-based testing. 
• Why adding risk analysis to the test team’s responsibilities actually reduces their workload.
• The importance of stakeholder participation.
• Common mistakes that can occur in risk-based testing.

Rex illustrates these points, not through hypothetical discussion, but by examining a case study where RBCS helped a client launch risk-based testing.  Read this article to learn how to analyze risks to quality, and use that analysis to be a smarter test professional.
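One common way to operationalize a quality risk analysis, sketched below with invented risks and scores for illustration, is to rate each risk for likelihood and impact and test in descending order of the resulting priority number:

```python
# Hypothetical quality risks with stakeholder-assigned ratings (1-5).
risks = [
    {"risk": "slow search under load", "likelihood": 4, "impact": 5},
    {"risk": "bad data import",        "likelihood": 3, "impact": 4},
    {"risk": "misaligned report logo", "likelihood": 2, "impact": 1},
]

# Risk priority number: likelihood times impact.
for r in risks:
    r["priority"] = r["likelihood"] * r["impact"]

# Allocate test effort to the biggest risks first.
ordered = sorted(risks, key=lambda r: r["priority"], reverse=True)
```

If the schedule is cut short, the tests dropped are those for the lowest-priority risks, which is precisely the point of risk-based test sequencing.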



Quality Goes Bananas
By Rex Black, Daniel Derr and Michael Tyszkiewicz


You're familiar with test automation, but what is a dumb monkey? How can it help you automate your testing and explore very large screen flows and input sets? Is it true that you can build dumb monkeys from freeware with no tool budget? What kind of quality risks can dumb monkeys address? Read this article to learn the answers to these and other test automation questions.
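The essence of a dumb monkey fits in a few lines. This is a toy sketch, not the authors' tool: it fires random printable strings at a stand-in system under test and checks only that nothing crashes.

```python
import random

def system_under_test(text):
    """Stand-in for the real application entry point (replace with your SUT)."""
    return text.upper()

def monkey(iterations=1000, seed=42):
    """Throw random inputs at the SUT; report the first input that crashes it."""
    rng = random.Random(seed)  # seeded, so any failure can be reproduced
    for _ in range(iterations):
        text = "".join(chr(rng.randrange(32, 127))
                       for _ in range(rng.randrange(0, 50)))
        try:
            system_under_test(text)
        except Exception as exc:   # any unhandled exception is a defect
            return (text, exc)     # the reproducing input is the bug report
    return None

assert monkey() is None  # no crashes in this run
```

Because the inputs are generated rather than hand-written, the monkey can explore far more of the input space overnight than a tester could in weeks, at the cost of only finding crashes rather than subtle wrong answers.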



How Outsourcing Affects Testing
By Rex Black

This software testing article is excerpted from Chapter 10 of Rex Black's upcoming book Managing the Testing Process, 3e. Over the last twenty years, outsourced development of one or more key components in the system has come to dominate software and hardware systems engineering. The trend started in hardware in the 1990s. RBCS clients like Dell, Hitachi, Hewlett Packard, and other computer systems vendors took advantage of cheap yet educated labor overseas to compete effectively in an increasingly commoditized market. By the end of 2002, three years into a spectacular IT downturn that saw computer science enrollments in the United States fall to less than half of their 1999 levels, price had become the primary determinant in most IT project decisions. Mass outsourcing of software projects took hold, and it continues unabated to this day... Read this article to get a picture of the effects of outsourcing on software testing.



Intelligent Use of Testing Service Providers
By Rex Black

In this software testing article, Rex Black analyzes the use of outsourcing in testing, based on some twenty years of experience with outsourcing of testing in one form or another. First, Mr. Black enumerates the key differences between in-house and outsourced test teams. Next, driven by these key differences, he analyzes which tasks fit better with outsourced testing service providers, followed by a similar analysis for in-house test teams. Then, Mr. Black lists some of the technical, managerial, and political challenges that confront a company trying to make effective use of outsourced testing. Finally, he addresses some of the processes needed to use testing service providers effectively and with the least amount of trouble. Read this article to significantly improve your use of outsourced testing.



A Case Study in Successful Risk-Based Testing at CA
By Rex Black, Peter Nash and Ken Young

This article presents a case study of a risk-based testing pilot project at CA, the world's leading independent IT management software company. The development team chosen for this pilot is responsible for a widely-used mainframe software product called CA SYSVIEW® Performance Management, an intuitive tool for proactive management and real-time monitoring of z/OS environments. By analyzing a vast array of performance metrics, CA SYSVIEW can help organizations identify and resolve problems quickly.
CA piloted risk-based testing as part of our larger effort to ensure the quality of the solutions we deliver.  The pilot consisted of six main activities:
• Training key stakeholders on risk-based testing
• Holding a quality risk analysis session
• Analyzing and refining the quality risk analysis
• Aligning the testing with the quality risks
• Guiding the testing based on risks
• Assessing benefits and lessons
This article addresses each of these areas, as well as some of the broader issues associated with risk-based testing. Click here to read the version of this software testing article as published in Better Software magazine.



Four Ideas for Improving Software Test Efficiency

"Do more with less. Work smarter not harder. Same coverage, fewer testers." If you're like a lot of testers and test managers, you'll be hearing statements like those a lot in 2009, since we appear headed for another tight economic period. If you need a way to demonstrate quick, measurable efficiency gains in your test operation, read this  short article to learn four great ideas that will help you improve your software test efficiency.


A Simplified Automation Solution Using WATIJ
By Steven Troy, Jamie Mitchell and Rex Black

One of our clients, CA, has continued to impress us with innovative ways to go about their testing. An upcoming software testing article will discuss how we are helping them institute risk-based testing. Read this article to learn how one of their teams is using a leading-edge open source testing tool, WATIJ, to help contain regression risk.



A Story about User Stories and Test-Driven Development
By Gertrud Bjørnvig, James O. Coplien, and Neil Harrison

Test-Driven Development, or TDD, is a term used for a popular collection of development techniques in wide use in the Agile community. While testing is part of its name, and though it includes tests, and though it fits in that part of the life cycle usually ascribed to unit testing activity, TDD pundits universally insist that it is not a testing technique, but rather a technique that helps one focus one’s design thinking. The idea is that you write your tests first, and your code second.
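The tests-first, code-second rhythm looks like this in miniature (an illustrative example of ours, not from the article; Python's standard unittest module stands in for whatever framework a team uses):

```python
import unittest

# Step 1: write the tests first. At this point, fizz() does not yet exist,
# so running the suite fails -- that failing test drives the design.
class TestFizz(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizz(9), "fizz")

    def test_other_numbers(self):
        self.assertEqual(fizz(7), "7")

# Step 2: only now write the code, and only enough to make the tests pass.
def fizz(n):
    return "fizz" if n % 3 == 0 else str(n)
```

Running `python -m unittest` against this module executes both tests; in TDD the cycle then repeats, with each new failing test pulling the next small piece of design into existence.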

Read this article to explore some subtle pitfalls of TDD that have come out of our experience, our consultancy on real projects (all of the conjectured problems are things we have actually seen in practice), and a bit of that rare commodity called common sense.

The original two-part version of this article about Test-Driven Development was published in Better Software magazine. Click here to read the first part and click here for the second part.



The ISTQB Advanced Syllabus: Guiding the Way to Better Software Testing
By Rex Black

The International Software Testing Qualification Board (ISTQB) has already effected profound change in the software testing field, with almost 100,000 people having attained Foundation certification. But a Foundation certification is just that: only a Foundation. With the release of the new Advanced syllabus in October 2007, the ISTQB has expanded and improved the next rung on the ladder of test professionalism. In the slides from this tutorial, Rex Black, President of the ISTQB, shows how the ISTQB Advanced syllabus can guide you, your testing colleagues, and your organization toward better testing, reduced risk, and higher quality.



The IT Professional on the Outsourced Project
By Rex Black

More and more IT professionals work on projects where some or all of the development is done by third parties, often overseas. While cost savings make such arrangements attractive to executives, individual contributors and managers on such projects face some significant challenges. What does outsourcing mean for IT professionals? In this talk, Rex Black offers insights from his extensive involvement in outsourced projects, both successful and not-so-successful. Rex will illustrate his points with case studies, and share humorous and scary anecdotes along the way.



The Right Stuff: Four Small Steps for Testers, one Giant leap for Risk Mitigation
By Rex Black and Barton Layne

Recently, we worked on a high-risk, high-visibility system where performance testing ("Let's just make sure it handles the load") was the last item on the agenda. As luck would have it, the system didn't handle the load, and very long days and nights ensued. Why does it have to be this way? Read this article about risk mitigation to ensure this doesn't happen to you.



Empirix's QAZone with Rex Black
By Marina Gil Santamaria

From certification to automation, expert thoughts on where the testing industry is and where it's headed.



Quality Risk Analysis: Which Quality Risks Should We Worry About?
By Rex Black

Since it is not possible to test everything, it is necessary to pick a subset of the overall set of tests to be run. Read this article to discover how quality risk analysis can help one focus the test effort.



Component Outsourcing, Quality Risks, and Testing: Factors and Strategies for Project Managers
By Rex Black

More and more projects involve integration of custom-developed or commercial-off-the-shelf (COTS) components, rather than in-house development or enhancement of software. In effect, these two approaches constitute direct or indirect outsourcing, respectively, of some or all of the development work for a system. While some project managers see such outsourcing of development as reducing the overall risk, each integrated component can bring with it significantly increased risks to system quality. Read this software testing article to learn about the factors that lead to these risks, and strategies you can use to manage them.




 