Tuesday, September 15, 2009

How to do effective testing in an Agile Development Methodology

To understand the testing process in an Agile Development Methodology, it is important to first understand the Agile Development paradigm.
The Agile Development paradigm is not new. Although the Agile Software Development Manifesto came into existence in February 2001, the underlying concepts existed long before that and were expressed in different ways. The Spiral Development Methodology is one such example.

Understanding Agile Software Development:
Agile Software Development primarily focuses on an iterative method of development and delivery.
Developers and end users communicate closely while the software is built. A working piece of software is delivered in a short span of time, and more features and capabilities are added based on feedback.
The focus is on satisfying the customer by quickly delivering working software with minimum features and then improving it based on feedback. The customer is thus closely involved in the software design and development process.
The delivery timelines are short, and new code is built on top of the previous code.
Despite this, the high quality of the product cannot be compromised.

This creates a different set of challenges for Software Testing.

How is Testing approach different in an Agile Development Scenario?

The Testing Strategy and Approach in Agile Development can be very different from traditional, process-heavy methods. In fact, it can vary with project needs and the project team.
In many scenarios, it may make sense not to have a separate testing team.
The above statement should be understood carefully. By not having a testing team, we do not consider testing to be any less important. In fact, testing can be done more effectively within short turnaround times by people who know the system and its objectives very well.

For example, in certain teams, Business Analysts could do a few rounds of testing each time a software version is released. Business Analysts understand the Business Requirements of the software and test it to verify that it meets those requirements.

Developers may also test the software. They tend to understand the system better and can verify the test results more reliably. Testing in Agile Software Development requires innovative thinking, and the right mix of people should be chosen to do the testing.

What to test?

Given the relatively short turnaround times in this methodology, it is important that the team is clear on what needs to be tested. Even though close interaction and innovation are advocated over rigid processes, sufficient emphasis is still given to the testing effort.

While each team may have its own group dynamics based on the context, every piece of code has to be unit tested. The developers do the unit testing to ensure that the software unit functions correctly.
Since development itself is iterative, each new release of code is typically built by modifying the previous one. Hence Regression Testing gains significant importance in these situations.

The team tests if the newly added functionality works correctly and that the previously released functionality still works as expected.

Test Automation also gains importance due to the short delivery timelines. It can prove effective in ensuring that everything that needs to be tested is covered.
It is not necessary to purchase costly tools to automate testing. Test Automation can be achieved in a relatively cost-effective way by utilizing various open source tools or by creating in-house scripts. These scripts can run one or more test cases to exercise a unit of code and verify the results, or to test several modules together.
How far this goes will vary with the complexity of the project and the experience of the team.
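As a hedged sketch of such an in-house script, the following uses Python's built-in unittest module; the `apply_discount` function is a hypothetical unit under test, invented here for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class RegressionSuite(unittest.TestCase):
    # Re-run on every build to confirm previously released behavior still works.
    def test_existing_functionality(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_boundary_values(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)
        self.assertEqual(apply_discount(100.0, 100), 0.0)

    def test_invalid_input_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

A script like this can be wired into the build so that every release automatically re-exercises the old functionality.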

What bugs are typically found in Agile testing?

Although nothing is typical about an Agile Development project, and each project has its own set of complexities, the very nature of the paradigm means that bugs may be introduced whenever a piece of code is modified, enhanced, or changed by one or more developers.

Whenever a piece of code is changed, it is possible that new bugs have been introduced or that previously working code is now broken. New defects can appear with every change, and old defects may be reopened.

Steps Taken to Effectively Test in Agile Development Methodology:

As a wise person once said, there is no substitute for hard work.
The only way one can test effectively is by ensuring sufficient Test Coverage and testing effort, automated or otherwise.
The challenge could be lack of documentation, but the advantage could be close communication between team members thereby resulting in greater clarity of thought and understanding of the system.

Each time code is changed, Regression Testing is done. Test Coverage is ensured by having automated scripts and the right mix of people executing the test cases.
Exploratory Testing may also be encouraged. Exploratory tests are not pre-designed or pre-defined; they are designed and executed on the spot. Similarly, ad hoc testing, which is done based on the tester's experience and skills, may also be encouraged.

While automated test cases ensure that the scripted tests are executed as defined, the team may not have enough time to design and script all the test cases.

Ensuring software test coverage

To ensure that the delivered product meets the end user's requirements, it is important that sufficient testing is done and all scenarios are covered.

Achieving sufficient Test Coverage in an Agile Development scenario may be tricky, but with close cooperation and the right team dynamics it is achievable.

The objectives of the project should be clear to the entire team. Many teams advocate Test-Driven Development. At every stage, the software is tested to check whether it meets the requirements. Every requirement is translated into a test case, and the software is validated and verified against it. While processes and documentation are not stressed, sufficient steps are taken to ensure that the software is delivered as per user expectations. This implies that each software delivery should be tested thoroughly before it is released.
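As a hedged illustration of translating a requirement into a test case, consider the following sketch; the requirement text, the `is_valid_username` function, and the sample values are all hypothetical:

```python
# Hypothetical requirement: "A username must be 3-12 alphanumeric characters."
import re

def is_valid_username(name):
    """Validate a username against the stated requirement."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{3,12}", name))

# Each clause of the requirement becomes a verifiable test case.
def test_requirement_username():
    assert is_valid_username("alice99")        # within the allowed length
    assert not is_valid_username("ab")         # too short
    assert not is_valid_username("a" * 13)     # too long
    assert not is_valid_username("bad name!")  # non-alphanumeric rejected

test_requirement_username()
print("requirement validated")
```

Writing the tests first, straight from the requirement's wording, is the essence of the Test-Driven approach described above.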

Because the timelines are short, the person testing the software must have sufficient knowledge of the system and its objectives.

Thursday, August 27, 2009

Challenges of Test management

There are many challenges associated with test management. I describe a few of them below and will try to provide the best solution to each of these challenges one by one in my next blogs.

Not enough time to test

Except for certain specialized or highly mission-critical applications, very few software projects include sufficient time in the development lifecycle to achieve a high level of quality. Very often, the almost inevitable delays in a software project get assigned to the already short "testing cycle". Even the best projects are very likely to have difficult time constraints on testing tasks. The effects of this obstacle on test management are constantly changing priorities and shifting tasks, as well as reduced data for test results and quality metrics.

Not enough resources to test

In addition to the shortages in time, there is quite often a difficulty getting the right resources available to perform required testing activities. Resources may be shared on other tasks or other projects. While hardware resources for testing can add delays and difficulties, a shortage of human resources can be even more difficult to resolve. The effects of this obstacle on test management are roughly the same as those for time shortages.

Testing teams are not always in one place

More often these days, testing resources might be available but not at the same geographic location. Leveraging talent around the globe to reduce costs has become commonplace, but this introduces considerable technical obstacles. How do teams on another continent share artifacts and stay coordinated without delays and discord affecting the overall project? How can a project maximize efficiency with geographically distributed development?

Difficulties with requirements

While there are many testing strategies, validating requirements is typically the primary, highest priority testing that needs to be completed. To do this requires complete, unambiguous, and testable requirements. Less-than-perfect requirements management can lead to more profound issues in the testing effort. Using a tool such as RequisitePro can significantly help improve requirements management and facilitate the development of good requirements.

For test management to be effective, there must be seamless access to the latest changing system and business requirements. This access must be not only to the wording of the requirements, but also to the priority, status, and other attributes. In addition, this requires the utmost coordination and communication between the teams developing the requirements and the teams performing the testing. This communication must go in both directions to ensure quality.

Keeping in synch with development

Another team coordination that is required to allow for software quality is between testers and developers. Aside from critical defects, it is almost a tradition in software development that the testing team's work is only the tester's concern. However, it is essential for everyone, especially the developers, to understand both the current level of quality and what has and has not yet been tested.

For testing teams to use their precious time efficiently, they have to keep up with constant changes in code, builds, and environments. Test management must identify precisely what build to test, as well as the proper environments in which to test. Testing the wrong builds (or functions) results in wasted time, and can severely impact the project schedule. Testers must also know what defects are already known, and should therefore not be re-tested, and which are expected to be fixed. Testers must then communicate the defects found, along with sufficient information to facilitate resolution, back to the developers.

Reporting the right information

A testing effort is only useful if it can convey testing status and some measures of quality for the project. Generating reports is simple enough, but presenting the right information (at the right time, to all the appropriate people) can be trickier than it seems for several reasons:

  • If there is too little information, then the project stakeholders will not fully understand the issues affecting quality, and the perceived value of the testing team will be reduced.
  • If there is too much information, then the meaning and impact of key information becomes obscured.
  • There are often technical hurdles that can impede how to share information to different roles in different locations.

Another consideration in reporting results is exactly how the information is arranged, and in what formats (that is, the information can be tool based, browser-based, or in documents). Project stakeholders' understanding of the testing and quality information will be reduced if there are technical or other restrictions limiting the arrangement or format of reports. Data should be presented in a clear and logical design that conveys the appropriate meaning, not in a layout constrained by tools or technology. It is therefore essential for test management to consider the need for flexibility and capability in providing a wide range of reporting formats.

What are the quality metrics?

One of the primary goals of a testing team is to assess and determine quality, but how exactly do you measure quality? There are many means of doing this, and it varies for the type of system or application as well as the specifics of the development project. Any such quality metrics need to be clear and unambiguous to avoid being misinterpreted. More importantly, metrics must be feasible to capture and store, otherwise they might not be worth the cost or could be incomplete or inaccurate.
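As a hedged sketch, two metrics that are usually clear, unambiguous, and cheap to capture are defect density and test pass rate; the figures below are invented for illustration:

```python
def defect_density(defects_found, size_kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def pass_rate(passed, executed):
    """Fraction of executed test cases that passed."""
    return passed / executed

# Illustrative numbers only.
print(defect_density(42, 12.0))           # → 3.5 defects per KLOC
print(round(pass_rate(180, 200), 2))      # → 0.9
```

Whatever metrics are chosen, defining them as simply as this makes them hard to misinterpret and trivial to automate.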

Recommendations for better Test management

The following are general recommendations that can improve software test management.

Start test management activities early

While this may seem like the most obvious suggestion, few software projects truly apply this concept. It is easy and not uncommon to begin identification of test resources at an early stage. However, many test analysis activities (such as the identification of critical, high-priority test cases) can and should begin as soon as possible. As soon as use cases are developed enough to have a flow of events, then test procedures can be derived. If a project is not utilizing use case requirements, then tests can still be derived from the validation of initial requirements. Developing tests as soon as possible helps alleviate the inevitably forthcoming time constraints.

Test iteratively

Software testing should be an iterative process, one that produces valuable testing artifacts and results early on in the overall project lifecycle. This follows the first recommendation of starting the testing process early: an iterative testing process forces early attention to test management. Test management guides this by organizing the various artifacts and resources into iterations. This risk-based approach helps ensure that changes, delays, and other unforeseen obstacles that may come up in the project timeline can be dealt with in the most effective manner.

Reuse test artifacts

Reusing test artifacts within a project, or across projects, can greatly improve the efficiency of a testing team. This can greatly relieve the pressure of limited time and limited resources. Reusable artifacts include not only test automation objects, but also test procedures and other planning information. In order to reuse artifacts efficiently, test management must do a good job of organizing and delineating the various testing-related information used for a given project. Reuse always requires some forethought when creating artifacts, and this principle can be applied in test management generally.

Utilize requirements-based testing

Testing can be broken down into two general approaches:

  • Validating that something does what it is supposed to do
  • Trying to find out what can cause something to break

While the latter, exploratory testing, is important for discovering hard-to-predict scenarios and situations that lead to errors, validating the base requirements is perhaps the most critical assessment a testing team performs.

Requirements-based testing is the primary way of validating an application or system, and it applies to both traditional and use case requirements. Requirements-based testing tends to be less subjective than exploratory testing, and it can provide other benefits as well. Other parts of the software development team may question or even condemn results from exploratory testing, but they cannot dispute carefully developed tests that directly validate requirements. Another advantage is that it can be easier to calculate the testing effort required (as opposed to exploratory testing, which is often only bounded by available time).

It can also provide various statistics that may be useful quality metrics, such as test coverage. Deriving test cases from requirements, and more importantly tracking those relationships as things change, can be a complex task that should be handled through tooling. RequisitePro and the test management capabilities in ClearQuest provide an out-of-the-box solution that addresses this need.

The constraint to this process is that it depends on good system requirements and a sound requirements management plan to be highly effective. Exploratory testing, on the other hand, can be more ad hoc. It relies less on other parts of the software development team, and this can sometimes lead to testing efforts focusing less on the important tasks of validating requirements. While a superior testing effort should include a mix of different approaches, requirements-based testing should not be ignored.

Leverage remote testing resources

To help alleviate resource shortages, or to simply maximize the utilization of personnel, you should leverage whatever resources you can, wherever they are located. These days resources are likely to exist in multiple geographic locations, often on different continents. This requires careful and effective coordination to make the most of the far-flung testers and other people involved with test management. There can be considerable technical challenges for this to be efficient, and therefore proper tooling is needed. The test management capabilities in ClearQuest with MultiSite simplify the complexities of geographically distributed test coordination.

Should you utilize a Web client or automatically replicated data? These are two solutions available that make collaboration with remote practitioners possible. The former is simple and relatively easy, but there is still a potential constraint of network latency, especially if accessed across the globe. For remote access by a limited number of people or with limited functionality, this is a good solution. However, for situations where a number of people in different locations make up an overall virtual testing team, you will need to have data copied on local servers to maximize the speed at which they can work. This also means that you will need an easy and seamless way to automatically synchronize the data across each location. This is where ClearQuest MultiSite can be essential for test management.

Defining and enforcing a flexible testing process

A good, repeatable process can help you understand a project's current status and, by being more predictable, where it's headed. However, different projects will have different specific needs for the testing effort, so a test management process that automates workflows needs to be flexible and customizable. The process should be repeatable (to provide predictability), but more importantly, it must allow for improvements. It has to be easy enough to make revisions, including adjustments during the course of an iterative project, so that it can be optimized through changing needs.

Defining a process with workflows to guide team members doesn't do much good if it can't be enforced in any way. How strongly it needs to be enforced will vary with different organizations and projects. Software projects in many organizations now need to comply with various regulations, such as SOX and HIPAA. Some have a need for auditability of changes, project history, and other strict compliance validation such as e-signatures. Whether your project's test management requires strict process enforcement or a more casual approach, you need a mechanism for defining and enforcing something. The test management capabilities in ClearQuest provide all of these.

Coordinate and integrate with the rest of development

Software testing has traditionally been kept highly separated from the rest of development. Part of this comes from the valid need to keep the assessment unbiased and increase the odds of finding defects that development may have missed. This need is especially apparent with acceptance testing, where the best testers are ones who are blind to the design and implementation factors. However, this specific need only represents one of many aspects of software testing, and it should not create the barrier and impediments to developing quality software that it usually winds up doing.

Software testing must be integrated with the other parts of software development, especially disciplines such as requirements management and change management. This includes vital collaboration between the different process roles and activities, maximum communication of important information, and integrated tooling to support this. Without this coordination, quality will be reduced from missed or misunderstood requirements, untested code, missed defects, and a lack of information about the actual software quality level.

Communicate status

An effort is only as valuable as it is perceived to be, and how it is perceived depends on what is communicated to the stakeholders. Good test management must provide complete and proper reporting of all pertinent information. Ongoing real-time status, measurement of goals, and results should be made available, in all the appropriate formats, to all relevant team members on the software development project.

Reporting should also be more than just traditional static documents. Given the constant changes going on, it is necessary to have easily updatable output in a variety of formats to properly communicate information. All of this will enable different project roles to make the right decisions on how to react to changes as the project progresses.

Information from the different software disciplines is not entirely separated. This article has already mentioned the important relationships between test management and other disciplines such as requirements, change and configuration management, and development. It is therefore crucial that the outputs coming from test management can be easily combined with other project data. Current technology makes possible consolidated views combining all project metrics into a dashboard so that the overall project health can be determined. Tools also make it possible to clearly show and assess the relationships between test, development, and other project artifacts.

Focus on goals and results

Decide on quality goals for the project and determine how they might be effectively and accurately measured. Test management is where the goals, the metrics used to measure such goals, and how the data for them will be collected are defined. Many tasks in testing may not have obvious completion criteria. Defining specific outputs and measures of ongoing progress and changes will more accurately define the activities and tasks of the testing effort. Keeping specific goals and metrics for testing in mind not only helps track status and results, but also avoids the last-second scramble to pull together necessary reports.

Storing test management results in a single, common repository or database will ensure that they can be analyzed and used more easily. This also facilitates version control of artifacts (including results), which will prevent problems with out-of-date or invalid information. All of this will help project members to understand the progress made, and to make decisions based on the results of the testing effort.

Automate to save time

There is a lot to test management, and its many tasks can be very time consuming. To help save time, tools can be used to automate, or at least partially automate, many tasks. While simple tools like word processors and spreadsheets provide great flexibility, specialized test automation tools are much more focused and provide a greater time-saving benefit. Tasks that benefit greatly from automation include:

  • Tracking the relationship of testing to requirements and other test motivators
  • Test case organization and reuse
  • Documentation and organization of test configurations
  • Planning and coordination of test execution across multiple builds and applications
  • Calculating test coverage
  • Various reporting tasks

Proper tooling and automation of the right tasks in test management will greatly improve its value and benefits.
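As one illustration of the coverage-calculation task listed above, here is a hedged sketch of computing requirements test coverage from a requirement-to-test-case mapping; all IDs and data are invented:

```python
# Hypothetical traceability data: requirement ID -> test cases that validate it.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # no test case yet -> uncovered requirement
    "REQ-004": ["TC-04"],
}

covered = [req for req, tests in traceability.items() if tests]
coverage = len(covered) / len(traceability)

print(f"requirements coverage: {coverage:.0%}")   # → 75%
for req, tests in traceability.items():
    if not tests:
        print(f"uncovered: {req}")                # flags REQ-003
```

A real tool maintains this mapping as requirements change; the arithmetic of the coverage report stays this simple.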

Friday, May 22, 2009

Significance of Test Metrics Management

If you are a Test Manager, or any part of a testing group in an organization, then you will surely have encountered queries like the ones below:


How much time will it take to finish the test cycle?
How stable is the functionality you are testing?
How much of testing is remaining to be done in test areas assigned to you?
What’s the status of reviews you are doing?
What percentage of the Build is testable?

The list of such queries can go on. The important thing here is that these queries form a part of the daily routine of a tester's job.

More often than not, answering metrics-related queries is an uncomfortable experience for the tester. The reasons for this are:

1. The tester is not prepared to present the requested data.
2. The data available with the tester is inadequate.
3. The tester was not aware at all that he/she had to be prepared with the requested metrics.

In the absence of data, which may be due to one or all of the above reasons, a tester is often prompted to leave the task at hand and start from scratch to collect the data and put it into a presentable form. This activity, which stems primarily from a lack of planning and ineffective management, may take a long time to finish and causes a loss of precious testing time.

Considering that testing time is often squeezed during the life cycle of a product in order to meet important product deadlines, there is a pressing need to save time and focus on the important tasks at hand. Managing the metrics effectively will help achieve this objective in a definite way.

As testers are the people most exposed to the software before its actual release or beta release, there is always a possibility that a tester will be asked to present various kinds of data during different stages of the life cycle.

Thus, metrics form an important part of a tester's job. Metrics therefore need to be managed efficiently, which in turn helps enhance the productivity of a tester. Both managers and testers have a role to play in managing metrics.

Manager’s Role in Test Metrics Management

Indeed, Managers have a vital role to play in managing Test metrics effectively.
What Managers can do is - PLAN BETTER

There is always scope for improvement in this area. Yes, this is definitely an area for improvement, as metrics are often missed or not handled with priority at the planning stage.

Most QA Plan or Master Test Plan templates don't say much about the metrics that will be used during the whole life cycle of product testing.


This is where managers can act and plan the usage of metrics beforehand, i.e., the details of the data that will be expected from testers during the various stages of the product life cycle.

Of course, all the metrics-related requirements are difficult to think of so early, but even some meaningful inputs at the planning stage will definitely help testers be prepared. This will make testers' work more focused.

Tester’s Role in Test Metrics Management

To start with, the following listed attributes/ skills can help testers in this area-


1. Sense of anticipation
2. Discipline
3. Usage of Tools

A sense of anticipation is a quality that definitely helps testers manage themselves better. It is simply thinking in advance about what kind of metrics would be expected at a particular time or phase of the product. In addition, a sense of anticipation can be gained from one's own experience or from somebody else's.

To cite an example: during the testing phase, one can always expect to be asked for the time required to execute the test cases, or for the status of the component under test.
Judging this, a tester can be prepared with the required data well before it is asked for. Anticipation can help testers a great deal in escaping phrases such as “You were supposed to be ready with this data”.

Metrics collection and management is often a tedious activity, and it sometimes requires a tester to perform repetitive tasks. Discipline is an attribute that can help the tester in this respect. The tester's motive should be to find a better way of doing things, and this should be taken as a challenge.

Usage of appropriate tools can help a great deal in managing things better. Data storage, data retrieval, and data presentation are the important aspects that need to be considered while selecting a tool. Note that data presentation is especially important, as an effective presentation helps anyone reviewing the data to save a lot of time and to gather and analyze more in much less time. Existing small-scale tools such as Microsoft Access or Microsoft Excel can assist a lot.

One more tip in this area: if a tool is already in place, then the queries for retrieving data should be planned and made available for use at any time, by anyone.
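As a hedged sketch of such pre-planned queries, the following uses Python's built-in sqlite3 module; the schema, table, and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a shared database file in practice
conn.execute("""CREATE TABLE test_runs (
    test_case TEXT, component TEXT, result TEXT, run_date TEXT)""")
conn.executemany(
    "INSERT INTO test_runs VALUES (?, ?, ?, ?)",
    [("TC-01", "login",  "pass", "2009-05-20"),
     ("TC-02", "login",  "fail", "2009-05-20"),
     ("TC-03", "search", "pass", "2009-05-21")])

# Pre-planned query: status of each component under test, ready whenever asked.
QUERY = """SELECT component,
                  SUM(result = 'pass') AS passed,
                  COUNT(*)             AS executed
           FROM test_runs GROUP BY component"""
for component, passed, executed in conn.execute(QUERY):
    print(f"{component}: {passed}/{executed} passed")
```

With the query saved alongside the data, anyone on the team can answer "how stable is this component?" in seconds instead of collecting numbers from scratch.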

Friday, May 8, 2009

Reengineer The Test Management

Most organizations don’t have a standard process for defining, organizing, managing, and documenting their testing efforts. Often testing is conducted as an ad hoc activity, and it changes with every new project. Without a standard foundation for test planning, development, execution, and defect tracking, testing efforts are nonrepeatable, nonreusable, and difficult to measure.
Generating a test status report is very time consuming, and the result is often unreliable. It is difficult to procure testing information such as:
How much testing needs to be done?
How much testing has been completed to date?
What are the results of these tests?
Who tested what, and when was it last tested?
What is the defect number for this failed test case?
Do we have test results for this build?
Do we have a history of test case results across different builds?
How can we share the test cases with a remote team?
Does our product meet the requirements that we originally set?
What is the requirements test coverage?
Is our product ready for release?
Getting this information fast is critical for software product and process quality. But it is often difficult to get, depending on how test cases and execution results are defined, organized, and managed.
Most organizations still use word processing tools or spreadsheets to define and manage test cases. There are many problems associated with defining and storing test cases in decentralized documents:
1. Tracking. Testing is a repetitive task. Once a test case has been defined, it should be reusable until the application is changed. Unstructured testing without following any standard process can result in creation of tests, designs, and plans that are not repeatable and cannot be reused for future iterations of the test. It is difficult to locate and track decentralized test documents.
2. Reuse. Because it is difficult to locate test cases for execution, they are seldom reused in day-to-day test execution.
3. Duplication of test cases and effort. Because it is difficult to locate a test case, the same case may be created again, wasting testing effort.
4. Version control. Since there is no central repository, version control becomes difficult, and individual team members may use different versions of test cases.
5. Changes and maintenance. Changes to product features can happen many times during a product development lifecycle. In such scenarios, test cases can become obsolete, rendering the whole effort in test planning a fruitless exercise. It is important to keep the test case list updated to reflect changes to product features; otherwise, for the next phase of testing there will be a tendency to discard the current test case list and start over again.
6. Execution results—logging and tracking. The test execution result history is difficult to maintain. It is difficult to know what testing has been done, which test cases have been executed, results of each test case that is executed, and if a problem report has been written against this failed test case.
7. Incomplete and inconsistent information for decision-making. A defect database provides only one side of the information necessary for knowing the quality of a product. It tells what is broken in a product, and what has been fixed. It does not tell what has been tested and what works. This is almost as important as the defect information.
8. Test metrics. If we cannot have a history of test results, it is difficult to generate test metrics like functional test coverage, defect detection effectiveness, test execution progress, etc.
9. Difficult to associate related information. We also need to deal with huge quantities of information (documents, test data, images, test cases, test plans, results, staffing, timelines, priorities, etc.).
10. Nonpreservation of testware. It is essential that testware (test plans, test cases, test data, test results, and execution details) be stored and preserved for reuse on subsequent versions of a single application or sharing between applications. Not only does this testware save time, but over a period of time it gives the organization a pattern, knowledge base, and maturity to pinpoint the error-prone areas in the code development cycle, fix them, and prevent the errors from recurring.
11. Inconsistent processes. Organizations are not static. People move from project to project. If the testing job is performed differently for each project or assignment, the knowledge gained on one assignment is not transferable to the next. Time is then lost on each assignment as the process is redefined.
12. Requirements traceability and coverage. In the ideal world, each requirement must be tested at least once, and some requirements will be tested several times. With decentralized documents, it is difficult to cross-link test cases with requirements. Test efforts are ineffective if we have thousands of test cases but don’t know which one tests which requirement.
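The traceability problem in point 12 can be made concrete with a small sketch: given a mapping from test cases to the requirements they verify, the untested requirements fall out directly. The requirement and test case IDs below are hypothetical, chosen only for illustration.

```python
# A minimal requirements-to-test-case traceability sketch.
# All IDs here are hypothetical examples.

# Each test case lists the requirement(s) it verifies.
test_cases = {
    "TC-001": ["REQ-LOGIN-1"],
    "TC-002": ["REQ-LOGIN-1", "REQ-LOGIN-2"],
    "TC-003": ["REQ-REPORT-1"],
}

all_requirements = {"REQ-LOGIN-1", "REQ-LOGIN-2", "REQ-REPORT-1", "REQ-EXPORT-1"}

def coverage_report(test_cases, requirements):
    """Return (covered, untested) requirement sets."""
    covered = {req for reqs in test_cases.values() for req in reqs}
    return covered & requirements, requirements - covered

covered, untested = coverage_report(test_cases, all_requirements)
print(sorted(untested))  # requirements with no test case at all
```

With decentralized documents this cross-linking has to be rebuilt by hand every time; a central repository keeps the mapping alive as requirements and test cases change.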

The problems due to unstructured, decentralized test management can be solved by reengineering the test management process. A testing project starts by building a test plan and proceeds to creating test cases, implementing test scripts, executing tests, and evaluating and reporting on results.
The objectives of reengineering test management are to
1. assist in defining, managing, maintaining, and archiving testware
2. assist in test execution and maintaining the results log over different test runs and builds
3. centralize all testing documentation, information, and access
4. enable test case reuse
5. provide detailed and summarized information about the testing status for decision support
6. improve tester productivity
7. track test cases and their relationship with requirements and product defects
A reengineered test management process can help improve key processes in test definition, tracking, execution, and reporting.
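Objectives 2 and 5 above, maintaining a results log over different runs and builds and summarizing status for decision support, can be sketched in a few lines. The class and field names here are illustrative, not the API of any particular test management tool.

```python
# A minimal sketch of a centralized execution log that keeps results
# per test case per build, so history survives across test runs.
from collections import defaultdict

class TestResultLog:
    def __init__(self):
        # build -> test case id -> "pass" / "fail"
        self.results = defaultdict(dict)

    def record(self, build, test_id, outcome):
        self.results[build][test_id] = outcome

    def pass_rate(self, build):
        """Fraction of recorded test cases that passed in a build."""
        outcomes = self.results[build].values()
        if not outcomes:
            return 0.0
        return sum(1 for o in outcomes if o == "pass") / len(outcomes)

    def history(self, test_id):
        """Outcome of one test case over all builds, for trend analysis."""
        return {b: r[test_id] for b, r in self.results.items() if test_id in r}

log = TestResultLog()
log.record("build-41", "TC-001", "fail")
log.record("build-41", "TC-002", "pass")
log.record("build-42", "TC-001", "pass")
log.record("build-42", "TC-002", "pass")
print(log.pass_rate("build-42"))  # 1.0
print(log.history("TC-001"))      # {'build-41': 'fail', 'build-42': 'pass'}
```

Even this toy version answers the questions the decentralized approach cannot: what has been tested, what works, and how a given test case has behaved over time.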

Thursday, May 7, 2009

Four Keys to Better Test Management

When I started working as an onsite coordinator for various vendors (NIIT, Keane, etc.) of SEI in the USA, there seemed to be a disconnect between the offshore development and onsite test groups.

Four things became obvious to me as necessary to get better organized:

1. Have a common set of ground rules on test progress, defect reporting, and verification.
2. Be able to convey how testing is going on a frequent basis.
3. Be able to determine what needs to be tested and stand behind the reasons why.
4. Maintain good communication with the technical leads to help move the product through the development phases by being proactive rather than reactive.

Wednesday, May 6, 2009

Provide Test Expertise to Test Group

The test manager should provide some test expertise to the test group. This expertise comes in a variety of forms:

Ability to discuss, mentor, and/or train current employees on general testing techniques.
Ability to look ahead at products in development and planned for development, to foresee what new expertise and training the group will need.
Ability to hire people based on current and future testing needs.

Note that this does not mean the test manager has to provide specific low-level tests for the product. Developing tests is the test lead's job, not the test manager's. As a test manager, you provide leverage to all the testers and to corporate management. You need to define for yourself the difference between mentoring and doing other people's work.

Develop Test Strategies

The ship criteria should have a significant impact on the test strategy for the product. For example, if the product is an operating system that has to run on multiple hardware platforms, ship criteria that emphasize the number of supported platforms, and de-emphasize specific features of the operating system, will push your strategy toward hardware-combination testing rather than depth-first testing of many features on one particular platform. The test manager does not necessarily develop the entire test strategy, but should have input to it and participate in reviewing it with the test group.

Define and Verify Product Ship Criteria

As the test manager, you have an opportunity to negotiate the criteria with marketing and the development groups. You want to verify the criteria with customers or representatives of the customers. The development group's job is to figure out how to achieve what the company wants to achieve. Using the customer requirements to figure out what to do gives the developers a complete view of the product and how it works. Once the product is defined in terms of what and how, the testing effort can verify how well the product meets the real requirements.
It's important for testers to prioritize their testing so that the product ship criteria can be met. Since very few projects have enough time to do everything, getting the testers enough information for what and when to test is a very important role for the test manager.
Corporate managers need to understand the product ship criteria well enough to judge whether the product is ready to ship at a given date. I don't believe the test group should be the holder of product approval or rejection; that role belongs to corporate management. Having pre-negotiated, agreed-upon ship criteria helps corporate managers judge for themselves whether or not a product is ready to ship. Pre-negotiated criteria also help the project team decide what product quality should be while no one is stressed from the project work.

Gather Product Information

The successful test manager also gathers product information, in the form of defect counts, test pass rates, and other meaningful data. The test manager defines the data, and then gathers it for presentation to corporate management. For example, I gather what I consider to be the "basic" metrics:
Defect find and close rates by week, normalized against level of effort (are we finding defects, and can developers keep up with the number found and the ones necessary to fix?)
Number of tests planned, run, passed by week (do we know what we have to test, and are we able to do so?)
Defects found per activity vs. total defects found (which activities find the most defects?)
Schedule estimates vs. actuals (will we make the dates, and how well do we estimate?)
People on the project, planned vs. actual by week or month (do we have the people we need when we need them?)
Major and minor requirements changes (do we know what we have to do, and does it change?)
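As a sketch, the first two of these "basic" metrics can be computed from simple weekly counts. The numbers below are made up for illustration; a real project would pull them from the defect tracker and test result log.

```python
# Hypothetical weekly data:
# (week, defects_found, defects_closed, tests_planned, tests_run, tests_passed)
weekly = [
    ("W1", 12,  5, 100, 30, 25),
    ("W2", 18, 14, 100, 60, 52),
    ("W3",  7, 15, 100, 95, 90),
]

rows = []
for week, found, closed, planned, run, passed in weekly:
    backlog_growth = found - closed      # positive: developers falling behind
    execution_pct = 100 * run / planned  # are we able to run what we planned?
    pass_pct = 100 * passed / run        # of what ran, how much works?
    rows.append((week, backlog_growth, execution_pct, pass_pct))
    print(f"{week}: backlog {backlog_growth:+d}, "
          f"executed {execution_pct:.0f}%, passed {pass_pct:.0f}%")
```

Trends matter more than any single week: in the sample data the defect backlog shrinks and execution climbs by W3, which is the shape you want to see as a ship date approaches.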

Measure what you’ve done

A few days ago, at a testing conference, I met a number of incredibly busy test managers. They were taking conference calls, sending and receiving email, and still trying to attend sessions at the conference. I found that one of the reasons these managers are so busy is that some of them don't know whether they've gotten anything done. They're all crazy-busy, but most of them feel as if they've accomplished nothing.

When your efforts have more to do with team performance and avoiding disasters than a tangible product, how can you measure what you’ve done in a concrete way? If you are a test manager, here are some ways to measure what you’ve done:

How many people have you unwedged or unblocked this week?

Sometimes our staff may not realize they are stuck trying to solve problems. If you are able to recognize that someone is stuck and point them in a more useful direction, then you’ve accomplished valuable work.

How many of your staff were able to get more work done this week, based on what you said?

Managers leverage the work other people do. When you facilitate other people’s work, then you’re doing management work. If you discussed test plans or your staff’s choices about which tests to implement, then you’ve helped people leverage their work.

How much strategic work did you do this week?

Managers set the strategy for their groups, and then put the tactics in place to make it happen. Maybe you’re working on improving your test capability with a new test lab, or arranging ongoing education for your testers. Maybe you’re working on decreasing time to market by changing how you plan your projects, or how the developers and testers work together. Strategic goals are long-term goals, so you’d expect to make only a little progress on strategic work each week. But the effort you invest and the progress you make certainly count.

How many crises did you prevent?

Every time you solve a problem and prevent a disaster or crisis, you’ve allowed your technical staff to go forward. For example, if you realized early enough in the project that the developers and testers were working toward different goals, and you started developing release criteria, you’ve averted plenty of problems and a potential ship crisis.

How many meetings were you able to cancel?

Unfortunately, too many of us work in meeting-happy organizations. A meeting that doesn’t include the following characteristics is not likely to help your project or your team move forward:
It starts on time
It ends on time or early
You can leave the meeting if you’re not needed
Action items come out of the meeting
The meeting has an agenda and minutes
If you are able to cancel meetings that don’t actually help move a project forward, then you’ve done wonderful management work. Look for these opportunities and take advantage of them when you see them.


These are just five ways to measure your managerial output. You may want to create a short checklist that you review each week to keep track of your efforts. Any work you do that helps your staff and the long-term health of your organization is good, tangible work. Count it.