Tuesday, April 27, 2010

How to manage Quality for a project based on new technology

A project based on new technology holds many problems that you did not anticipate. Many of these problems may require new tools, new skills, or substantial time to solve.

A process that is optimized to handle problems you understand may be wrong for problems you don’t yet understand. That’s why such a project needs a more exploratory, risk-oriented, incremental form of process, as opposed to one that emphasizes pre-planned, predictable tasks.

The key to working in unfamiliar territory is to change your test management focus from task fulfillment to risk management. The project team should continue to devote attention to planning and fulfilling project tasks, but that should not be the focus of the test team. Instead, the test manager must assume that any given prediction, at any given time, may be wrong.

A risk-managed testing project implies specific strategies to deal with that assumption. Here’s my list:


· Produce a risk list for the project. Even if it is not a printed list, any leader on the team should be able to tell you what the major risks are. All project leaders should be in general agreement about the list.

· The team should practice talking about risk areas, rather than discouraging such discussion as “negative thinking.” The risk that you don’t discuss is the one you won’t guard against. However, you should also stress that a new technology project requires you to be bold and to accept a certain amount of risk. The real problem is an inconsistent attitude toward risk across the team.

· There should be some process whereby anyone on the project can learn about the current status of risk areas. There should be frequent project reviews (every few weeks) that involve a level of management one step higher (at least) than day-to-day management.

· There should be a change control process that prevents the careless introduction of new risk drivers after the scope of the project has been set. A clear charter for the project can help in limiting scope creep. Too much change control, too early, can also hurt the project. Consider establishing a progressive change control strategy, whereby the closer the product is to shipping, the stricter changes are controlled.

· Avoid specific promises to upper management about revenue dates, or anything else, unless you are in a position to fulfill those promises without compromising the project. Until the risks of the project have been explored and mitigated, such promises are premature. Also, look for an exit strategy that allows the project to be cancelled if it becomes apparent that the risks are too large relative to the rewards.

· For each risk driver, look for a strategy that will allow that element to be dropped from the project if it turns out to be too difficult to solve the problems that come up. Consider moving especially risky components to a follow-on release, or making them optional to install. I call this an “ejection seat” strategy. If risky components cannot be ejected, they can drag the whole project down.

· Adopt a prototyping strategy, or some other way to explore risk drivers and risks while there’s still time to do something about them, or while it’s still early enough to cancel the project without huge losses.

· Adopt a test management process that dynamically reallocates test effort to focus on risk areas, as risk areas emerge. It helps to have an explicitly risk-based test plan. Look for strategies that minimize the amount of time between defect creation and resolution.

· The project should have a larger test team than normal, in order to react quickly to new builds. The development team does not have to be larger (in fact, many say it should be smaller), but it must have the ideas, motivation, and skills needed to do the job. Special training or outside support should be considered.

· Draft a test plan that includes iteration and synchronization. Think of it like rock climbing or crossing a fast river by jumping from rock to rock: plan ahead, but revise the plan as new challenges arise and new information comes to light. Don’t let the bug list spiral out of control, but rather fix bugs at each stage. You might have to scrap some component of your technology and re-design it, once you have some experience with it in field testing. Don’t be surprised by that. It’s perfectly normal, and it’s the people who refuse to rewrite bad code who suffer more, in the end. Go through a test-triage-bug fix-release end game cycle for each iteration.

· Pay attention to how the product will be tested. Assure that the designers make every reasonable effort to maximize the controllability and visibility of each element of the product. These can include such features as log files, hidden test menus, internal assertion statements, alternative command-line interfaces, etc. Sometimes just a few minutes of consideration for the test process can lead to profound improvements in testing.

· Don’t put all your eggs into the risk-based basket; you may overlook a risky area simply because you didn’t anticipate the risk. So, devote a portion of your test strategy, maybe a third, to a variety of test approaches that are not explicitly risk-based. Also, establish a process for encouraging new approaches to be suggested and tried.

· Look for ways of making contact with the field about the emerging product. In speculative projects, problems often emerge that are invisible to the developing organization, yet obvious to customers. Also, assure that technical support, product management, and technical writers are well connected to the test team, so they can feed information to the testers without undue effort.

· Look for records of schedules, schedule slips, meeting notes, and decisions made. Any metrics or general information that can be collected unobtrusively will help you understand the dynamics of the new technology and the team. That, in turn, helps plan the next project.
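The risk-based reallocation of test effort described above can be sketched in a few lines. This is a minimal illustration, not a prescribed method: the risk areas, the 1–5 scores, and the likelihood-times-impact weighting are all invented for the example.

```python
# Minimal sketch of risk-based test prioritization: order test areas by
# exposure = likelihood x impact, so test effort flows to the riskiest
# areas first. Area names and scores are invented for illustration.

def prioritize(features):
    """Return test areas sorted by risk exposure, highest first."""
    return sorted(features, key=lambda f: f["likelihood"] * f["impact"],
                  reverse=True)

risk_list = [
    {"area": "new sync engine",   "likelihood": 4, "impact": 5},  # exposure 20
    {"area": "login flow",        "likelihood": 2, "impact": 5},  # exposure 10
    {"area": "report formatting", "likelihood": 3, "impact": 2},  # exposure 6
]

for item in prioritize(risk_list):
    print(item["area"], item["likelihood"] * item["impact"])
```

As risk areas emerge during the project, re-scoring the list and re-sorting is all it takes to reallocate effort, which is exactly the dynamic quality a risk-managed test plan needs.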
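The controllability and visibility features mentioned above (log files, internal assertions, and the like) need not be elaborate. Here is a minimal sketch of two of them; the function, the discount rule, and the “app.log” file name are hypothetical.

```python
# Sketch of two cheap testability hooks: a debug log file and an internal
# assertion statement. Everything here is a made-up example.
import logging

logging.basicConfig(filename="app.log", level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")

def apply_discount(price, rate):
    # Internal assertion: catches bad callers loudly during testing.
    assert 0.0 <= rate <= 1.0, f"discount rate out of range: {rate}"
    result = round(price * (1 - rate), 2)
    # Log file: gives testers visibility into intermediate behaviour.
    logging.debug("apply_discount(%s, %s) -> %s", price, rate, result)
    return result

print(apply_discount(100.0, 0.2))  # 80.0
```

A few minutes spent adding hooks like these is often the difference between a product testers can probe and one they can only poke at from the outside.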

Thursday, February 11, 2010

SOA Testing Considerations

Service-Oriented Architectures (SOA) are getting a lot of attention as businesses look for ways to be more responsive and flexible in meeting customer needs through new and existing technology. The adoption of SOA as both a technology and a business strategy is on the increase. For the past couple of years, people have focused on the development aspects of SOA. However, the need for testing SOA has become increasingly apparent as companies deploy SOAs and learn that they differ from other architectures in key ways. Traditionally, testing strategies lag behind development strategies by a matter of months. In the case of SOA, the tools have been around for some time, but awareness of the need for SOA testing has been lagging.

Randy Rice is a leading author, speaker and consultant in the field of software testing and software quality. I got a chance to study his articles on SOA testing and would like to share these excerpts, which might be of interest to some of you.

Whenever a new technology emerges, one of the first things to do is to define what makes testing that technology unique. The uniqueness of SOA is seen in how services are built and how they support business processes. Therefore, an SOA testing strategy needs to consider both structural and functional perspectives.

The Structural Perspective

This is the perspective of testing that focuses on the internals of the SOA. This can include the code used to create services as well as the architecture itself. For many people, this has been the focus of their SOA testing. One reason is that the creation of services is the first opportunity for the SOA to be tested. Although the structural testing perspective is very important, it is only one aspect of the SOA; we will explore the other types of testing later in this article. One example of structural testing for SOA is examining the Web Services Description Language (WSDL) to learn how data elements are supplied to services.
WSDL uses XML to describe network services as collections of communication endpoints capable of exchanging messages. By learning how to read and understand WSDL, testers can learn many important things to test. However, some testers may not be suited to spend time at the WSDL level, so having developers involved at this level of testing can be helpful.

The Functional Perspective

To ensure that the services support the business processes, functional testing is needed. Services can and should be tested individually, but the most critical type of testing is the integration testing of services and business processes. Unless services are tested in the context of business processes, you lack confidence that they will perform correctly when used to perform business functions. An example of this type of testing is to model tests based on transactions and processes, normally called scenario-driven or process-driven testing. In this approach, business processes are described so that distinct scenarios can be identified. Services that support each scenario are also identified so they can be tested in relation to each other.

When defining a test strategy, critical success factors define the risks associated with the technology or project and the types of testing that can help reduce those risks. For SOA, some major critical success factors include:

· Correctness – Do the services and architecture deliver correct results?
· Performance – Does the SOA deliver results quickly and efficiently?
· Security – Is the SOA adequately protected against external attacks? Is data protected from unauthorized users?
· Interoperability – Do the services work together in the context of business processes to deliver correct results?

There are other success factors that you may choose to address, depending on your project.
These include usability, maintainability, reliability and portability.

Tools

Tools add leverage to the testing of any technology, but SOA has unique characteristics that almost require the use of tools. The big issue in test tools for SOA is whether traditional test tools can be extended, or whether totally new tools are needed. Much depends on the type of tools you may currently have in place and their ability to handle SOA testing needs. Before investing a lot of money and time in acquiring an “SOA-specific” tool, perform some tests to make sure your existing tools can’t handle the job. People can have large investments in tools and automated testware, so make sure you consider that investment before switching to a new tool set. Tools are needed for SOA because:

· They can provide a harness for accessing services that may otherwise not be easily tested. It may be impractical to test services manually because they often lack a user interface. This is called “headless testing” because there is no single access point to the services.
· They can test faster than humans once the automation has been created.
· They can provide precision in tests such as regression and performance testing.

Fortunately, SOA test tools appeared early in the emergence of SOA, so the tools currently on the market have had time to mature.

People

Testing is a very human-based activity. When you list the major problems people face in testing any software, the great majority are human in nature. While SOA has a major technical element, you will still need to consider who will plan, perform and evaluate the test. Developers have a great perspective on structural testing, but may not be the most willing participants. However, with some encouragement, tools and management support for testing, developers can test their own work. Stakeholders, especially end users, can add the business perspective to the test.
The issue with end-user testing is getting enough of their time to adequately focus on the test. Testers can bring a wealth of testing knowledge to the project. They can adapt previously defined tests for SOA, and can also help acquire or adapt the right tools for the job. You may need to get specialists involved in tests such as performance and security testing.

Controlling the SOA Test Environment

Your test is only as good as your test environment. If you can’t identify what is in the test environment, such as the services, you will not be able to trust the test results. With SOAs, governance is needed to control when services are created or modified, and when they should be placed into the test SOA environment. Unfortunately, people often fail to appreciate the importance of test environment control until they experience a failure due to an incorrect test.

In a nutshell, SOA brings new test considerations, but the good news is that many of the techniques can be adapted from past technologies. Effective testing is a matter of getting the right balance of people, processes and tools, all working together in an integrated test environment. SOA is no different. By taking time at the start of your first SOA project to define the uniqueness of the technology, you can approach the project knowing that the major points have been considered.
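To make the WSDL idea above concrete, here is a rough sketch of reading a service description with the Python standard library. The tiny WSDL document and its operation names are made up; against a real service, the WSDL would be fetched from the service endpoint instead.

```python
# Rough sketch of structural SOA testing: inspect a WSDL document to
# list the operations a service exposes. The WSDL string below is a
# made-up example, not a real service description.
import xml.etree.ElementTree as ET

WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="Quotes">
  <portType name="QuotePort">
    <operation name="GetQuote"/>
    <operation name="ListSymbols"/>
  </portType>
</definitions>"""

ns = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}
root = ET.fromstring(WSDL)
operations = [op.get("name") for op in root.findall(".//wsdl:operation", ns)]
print(operations)
```

Each operation name discovered this way becomes a candidate test point, which is exactly the kind of information the structural perspective is after.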

Tuesday, September 15, 2009

How to do effective testing in Agile Development Methodology

To understand the Testing Process in an Agile Development Methodology, it is important to understand the Agile Development paradigm.
Agile Development paradigm is not very new. Although the Agile Software Development Manifesto came into existence in February 2001, the concepts existed long before that and were expressed in different ways. Spiral Development Methodology is one such example.

Understanding Agile Software Development:
The Agile Software Development primarily focuses on an iterative method of development and delivery.
The developers and end users communicate closely and the software is built. A working piece of software is delivered in a short span of time and based on the feedback more features and capabilities are added.
The focus is on satisfying the customer by delivering working software quickly with minimum features and then improvising on it based on the feedback. The customer is thus closely involved in the Software Design and Development Process.
The delivery timelines are short and the new code is built on the previous one.
Despite this, the high quality of the product cannot be compromised.

This creates a different set of challenges for Software Testing.

How is Testing approach different in an Agile Development Scenario?

The Testing Strategy and Approach in Agile Development could be very different from traditional bureaucratic methods. In fact it could vary with project needs and the project team.
In many scenarios, it may make sense to not have a separate testing team.
The above statement should be understood carefully. By not having a testing team, we do not consider testing to be any less important. In fact, testing can be done more effectively, in short turnaround times, by people who know the system and its objectives very well.

For example, in certain teams Business Analysts could do a few rounds of testing each time a software version is released. Business Analysts understand the business requirements of the software and test it to verify that it meets those requirements.

Developers may test the software. They tend to understand the system better and can verify test results more effectively. Testing in Agile software development requires innovative thinking, and the right mix of people should be chosen to do the testing.

What to test?

Given the relatively short turnaround times in this methodology, it is important that the team is clear on what needs to be tested. Even though close interaction and innovation are advocated rather than processes, sufficient emphasis is given to the testing effort.

While each team may have its own group dynamics based on the context, every piece of code has to be unit tested. The developers do the unit testing to ensure that each software unit is functioning correctly.
Since the development itself is iterative, the next release of code is often built by modifying the previous one. Hence Regression Testing gains significant importance in these situations.

The team tests if the newly added functionality works correctly and that the previously released functionality still works as expected.

Test Automation also gains importance due to short delivery timelines. Test Automation may prove effective in ensuring that everything that needs to be tested was covered.
It is not necessary that costly tools be purchased to automate testing. Test Automation can be achieved in a relatively cost effective way by utilizing the various open source tools or by creating in-house scripts. These scripts can run one or more test cases to exercise a unit of code and verify the results or to test several modules.
This would vary with the complexity of the project and the experience of the team.
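As an illustration of the in-house scripts mentioned above, the following sketch uses only Python's standard unittest module. The function under test and the phone-number requirement are stand-ins, not from any particular project.

```python
# A minimal in-house regression script using only the standard library.
# normalize_phone is a stand-in for the unit under test; the expected
# values encode behaviour carried over from the previous release.
import unittest

def normalize_phone(raw):
    """Unit under test: keep only the digits of a phone number."""
    return "".join(ch for ch in raw if ch.isdigit())

class RegressionSuite(unittest.TestCase):
    def test_strips_punctuation(self):
        self.assertEqual(normalize_phone("(555) 123-4567"), "5551234567")

    def test_previous_release_behaviour_unchanged(self):
        self.assertEqual(normalize_phone("555.123.4567"), "5551234567")

if __name__ == "__main__":
    unittest.main(argv=["regression"], exit=False)
```

Run after every release of new code, a suite like this catches the case where modifying the previous code base broke previously working functionality.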

Typical bugs found when doing agile testing?

Although nothing is typical about any Agile development project, and each project may have its own set of complexities, by the very nature of the paradigm bugs may be introduced whenever a piece of code is modified, enhanced, or changed by one or more developers.

Whenever a piece of code is changed, it is possible that bugs have been introduced into it or that previously working code is now broken. New bugs/defects can be introduced with every change, or old bugs/defects may be reopened.

Steps Taken to Effectively Test in Agile Development Methodology:

As a wise person once said, there is no substitute for hard work.
The only way one can test effectively is by ensuring sufficient test coverage and testing effort, automated or otherwise.
The challenge could be lack of documentation, but the advantage could be close communication between team members thereby resulting in greater clarity of thought and understanding of the system.

Each time code is changed, regression testing is done. Test coverage is ensured by having automated scripts and the right mix of people executing the test cases.
Exploratory Testing may also be encouraged. Exploratory tests are not pre-designed or pre-defined; they are designed and executed on the spot. Similarly, ad hoc testing may be encouraged. Ad hoc testing is done based on the tester’s experience and skills.

While automated test cases will ensure that the scripted tests are executed as defined, the team may not have enough time to design and script all the test cases.

Ensuring software test coverage

To ensure that the delivered product meets the end user’s requirements, it is important that sufficient testing is done and all scenarios are tested.

Sufficient test coverage in an Agile development scenario may be tricky, but with close cooperation and the right team dynamics it is not impossible.

The objectives of the project should be clear to the entire team. Many teams advocate Test-Driven Development: at every stage the software is tested against the requirements. Every requirement is translated to a test case, and the software is validated/verified against it. While processes and documentation are not stressed, sufficient steps are taken to ensure that the software is delivered as per user expectations. This implies that each software delivery should be tested thoroughly before it is released.
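The idea that every requirement is translated to a test case can be sketched very simply. The requirement IDs, the order_total function, and the tax figures below are invented for illustration; the point is the one-to-one mapping and the release gate.

```python
# Sketch of "every requirement is translated to a test case": each
# requirement ID maps to an executable check, and the release gate is
# that all checks pass. IDs, function, and numbers are made up.

def order_total(prices, tax_rate):
    """Unit under test: total of an order including tax."""
    return round(sum(prices) * (1 + tax_rate), 2)

requirement_tests = {
    "REQ-001 totals include tax":     lambda: order_total([10.0, 5.0], 0.10) == 16.5,
    "REQ-002 empty order costs zero": lambda: order_total([], 0.10) == 0.0,
}

results = {req: check() for req, check in requirement_tests.items()}
assert all(results.values()), f"failing requirements: {results}"
print(f"{sum(results.values())}/{len(results)} requirements validated")
```

Because each check is named after a requirement, a failing gate immediately tells the team which user expectation the delivery would violate.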

The short timelines require that the person testing the software has sufficient knowledge of the system and its objectives.

Thursday, August 27, 2009

Challenges of Test management

There are many challenges associated with test management. I describe a few of them below, and will try to provide the best solutions to these challenges one by one in my next blogs.

Not enough time to test

Except for certain specialized or highly mission-critical applications, very few software projects include sufficient time in the development lifecycle to achieve a high level of quality. Very often, the almost inevitable delays in a software project get assigned to the already short "testing cycle". Even the best projects are very likely to have difficult time constraints on testing tasks. The effects of this obstacle on test management are constantly changing priorities and shifting tasks, as well as reduced data for test results and quality metrics.

Not enough resources to test

In addition to the shortages in time, there is quite often a difficulty getting the right resources available to perform required testing activities. Resources may be shared on other tasks or other projects. While hardware resources for testing can add delays and difficulties, a shortage of human resources can be even more difficult to resolve. The effects of this obstacle on test management are roughly the same as those for time shortages.

Testing teams are not always in one place

More often these days, testing resources might be available but not at the same geographic location. Leveraging talent around the globe to reduce costs has become commonplace, but this introduces considerable technical obstacles. How do teams on another continent share artifacts and stay coordinated without delays and discord affecting the overall project? How can a project maximize efficiency with geographically distributed development?

Difficulties with requirements

While there are many testing strategies, validating requirements is typically the primary, highest priority testing that needs to be completed. To do this requires complete, unambiguous, and testable requirements. Less-than-perfect requirements management can lead to more profound issues in the testing effort. Using a tool such as RequisitePro can significantly help improve requirements management and facilitate the development of good requirements.

For test management to be effective, there must be seamless access to the latest changing system and business requirements. This access must be not only to the wording of the requirements, but also to the priority, status, and other attributes. In addition, this requires the utmost coordination and communication between the teams developing the requirements and the teams performing the testing. This communication must go in both directions to ensure quality.

Keeping in synch with development

Another coordination required for software quality is between testers and developers. Aside from critical defects, it is almost a tradition in software development that the testing team's work is only the tester's concern. However, it is essential for everyone, especially the developers, to understand both the current level of quality and what has and has not yet been tested.

For testing teams to use their precious time efficiently, they have to keep up with constant changes in code, builds, and environments. Test management must identify precisely what build to test, as well as the proper environments in which to test. Testing the wrong builds (or functions) results in wasted time, and can severely impact the project schedule. Testers must also know what defects are already known, and should therefore not be re-tested, and which are expected to be fixed. Testers must then communicate the defects found, along with sufficient information to facilitate resolution, back to the developers.

Reporting the right information

A testing effort is only useful if it can convey testing status and some measures of quality for the project. Generating reports is simple enough, but presenting the right information (at the right time, to all the appropriate people) can be trickier than it seems for several reasons:

  • If there is too little information, then the project stakeholders will not fully understand the issues affecting quality, and the perceived value of the testing team will be reduced.
  • If there is too much information, then the meaning and impact of key information becomes obscured.
  • There are often technical hurdles that impede sharing information with different roles in different locations.

Another consideration in reporting results is exactly how the information is arranged, and in what formats (that is, the information can be tool-based, browser-based, or in documents). Project stakeholders' understanding of the testing and quality information will be reduced if there are technical or other restrictions limiting the arrangement or format of reports. Data should be presented in a clear and logical design that conveys the appropriate meaning, not in a layout constrained by tools or technology. It is therefore essential for test management to consider the need for flexibility and capability in providing a wide range of reporting formats.

What are the quality metrics?

One of the primary goals of a testing team is to assess and determine quality, but how exactly do you measure quality? There are many means of doing this, and it varies for the type of system or application as well as the specifics of the development project. Any such quality metrics need to be clear and unambiguous to avoid being misinterpreted. More importantly, metrics must be feasible to capture and store, otherwise they might not be worth the cost or could be incomplete or inaccurate.
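One hedge against ambiguous metrics is to pin each one to an explicit formula. The two below (defect density per thousand lines of code, and requirements coverage) are common conventions, and the numbers fed to them are made up for illustration.

```python
# Two illustrative quality metrics with explicit, unambiguous formulas.
# The input figures are invented; only the formulas matter here.

def defect_density(defects, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return round(defects / kloc, 2)

def requirements_coverage(tested, total):
    """Percentage of requirements that have at least one test."""
    return round(100.0 * tested / total, 1)

print(defect_density(42, 12.5))        # defects per KLOC
print(requirements_coverage(88, 104))  # percent of requirements tested
```

Metrics defined this concretely are cheap to capture from existing defect and test records, which addresses the feasibility concern above.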

Recommendations for better Test management

The following are general recommendations that can improve software test management.

Start test management activities early

While this may seem like the most obvious suggestion, few software projects truly apply this concept. It is easy and not uncommon to begin identification of test resources at an early stage. However, many test analysis activities (such as the identification of critical, high-priority test cases) can and should begin as soon as possible. As soon as use cases are developed enough to have a flow of events, then test procedures can be derived. If a project is not utilizing use case requirements, then tests can still be derived from the validation of initial requirements. Developing tests as soon as possible helps alleviate the inevitably forthcoming time constraints.

Test iteratively

Software testing should be an iterative process, one that produces valuable testing artifacts and results early on in the overall project lifecycle. This follows the first recommendation of starting the testing process early: an iterative testing process forces early attention to test management. Test management guides this by organizing the various artifacts and resources to iterations. This risk-based approach helps ensure that changes, delays, and other unforeseen obstacles that may come up in the project timeline can be dealt with in the most effective manner.

Reuse test artifacts

Reusing test artifacts within a project, or across projects, can greatly improve the efficiency of a testing team. This can greatly relieve the pressure of limited time and limited resources. Reusable artifacts include not only test automation objects, but also test procedures and other planning information. In order to reuse artifacts efficiently, test management must do a good job of organizing and delineating the various testing-related information used for a given project. Reuse always requires some forethought when creating artifacts, and this principle can be applied in test management generally.

Utilize requirements-based testing

Testing can be broken down into two general approaches:

  • Validating that something does what it is supposed to do
  • Trying to find out what can cause something to break

While the latter, exploratory testing, is important for discovering hard-to-predict scenarios and situations that lead to errors, validating the base requirements is perhaps the most critical assessment a testing team performs.

Requirements-based testing is the primary way of validating an application or system, and it applies to both traditional and use case requirements. Requirements-based testing tends to be less subjective than exploratory testing, and it can provide other benefits as well. Other parts of the software development team may question or even condemn results from exploratory testing, but they cannot dispute carefully developed tests that directly validate requirements. Another advantage is that it can be easier to calculate the testing effort required (as opposed to exploratory testing, which is often only bounded by available time).

It can also provide various statistics that may be useful quality metrics, such as test coverage. Deriving test cases from requirements, and more importantly tracking those relationships as things change, can be a complex task that should be handled through tooling. RequisitePro and the test management capabilities in ClearQuest provide an out-of-the-box solution that addresses this need.
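At small scale, the requirement-to-test relationships can be sketched as a simple traceability map. The identifiers below are hypothetical; a tool such as RequisitePro or ClearQuest would manage these relationships as things change.

```python
# Lightweight traceability sketch: map requirements to test cases and
# flag requirements with no tests. All identifiers are invented.

traceability = {
    "REQ-10 user can log in":       ["TC-01", "TC-02"],
    "REQ-11 password reset emails": ["TC-03"],
    "REQ-12 audit log of sign-ins": [],   # no test yet
}

untested = [req for req, tests in traceability.items() if not tests]
coverage = 100 * (len(traceability) - len(untested)) / len(traceability)
print(f"coverage: {coverage:.0f}%; untested: {untested}")
```

Even this toy version yields the test-coverage statistic mentioned above and makes the gaps, the untested requirements, impossible to overlook.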

The constraint to this process is that it depends on good system requirements and a sound requirements management plan to be highly effective. Exploratory testing, on the other hand, can be more ad hoc. It relies less on other parts of the software development team, and this can sometimes lead to testing efforts focusing less on the important tasks of validating requirements. While a superior testing effort should include a mix of different approaches, requirements-based testing should not be ignored.

Leverage remote testing resources

To help alleviate resource shortages, or simply to maximize the utilization of personnel, you should leverage whatever resources you can, wherever they are located. These days resources are likely to exist in multiple geographic locations, often on different continents. This requires careful and effective coordination to make the most of the far-flung testers and other people involved with test management. There can be considerable technical challenges for this to be efficient, and therefore proper tooling is needed. The test management capabilities in ClearQuest with MultiSite simplify the complexities of geographically distributed test coordination.

Should you utilize a Web client or automatically replicated data? These are two solutions available that make collaboration with remote practitioners possible. The former is simple and relatively easy, but there is still a potential constraint of network latency, especially if accessed across the globe. For remote access by a limited number of people or with limited functionality, this is a good solution. However, for situations where a number of people in different locations make up an overall virtual testing team, you will need to have data copied on local servers to maximize the speed at which they can work. This also means that you will need an easy and seamless way to automatically synchronize the data across each location. This is where ClearQuest MultiSite can be essential for test management.

Defining and enforcing a flexible testing process

A good, repeatable process can help you understand a project's current status and, by being more predictable, where it's headed. However, different projects will have different specific needs for the testing effort, so a test management process that automates workflows needs to be flexible and customizable. The process should be repeatable (to provide predictability), but more importantly, it must allow for improvements. It has to be easy enough to make revisions, including adjustments during the course of an iterative project, so that it can be optimized through changing needs.

Defining a process with workflows to guide team members doesn't do much good if it can't be enforced in any way. How strongly it needs to be enforced will vary with different organizations and projects. Software projects in many organizations now need to comply with various regulations, such as SOX and HIPAA. Some have a need for auditability of changes, project history, and other strict compliance validation such as e-signatures. Whether your project's test management requires strict process enforcement or a more casual approach, you need a mechanism for defining and enforcing the process. The test management capabilities in ClearQuest provide one such mechanism.

Coordinate and integrate with the rest of development

Software testing has traditionally been kept highly separated from the rest of development. Part of this comes from the valid need to keep the assessment unbiased and to increase the odds of finding defects that development may have missed. This need is especially apparent in acceptance testing, where the best testers are those blind to the design and implementation. However, this need represents only one of many aspects of software testing, and it should not create the barriers to developing quality software that it usually winds up creating.

Software testing must be integrated with the other parts of software development, especially disciplines such as requirements management and change management. This includes vital collaboration between the different process roles and activities, maximum communication of important information, and integrated tooling to support this. Without this coordination, quality will be reduced from missed or misunderstood requirements, untested code, missed defects, and a lack of information about the actual software quality level.

Communicate status

An effort is only as valuable as it is perceived to be, and how it is perceived depends on what is communicated to the stakeholders. Good test management must provide complete and proper reporting of all pertinent information. Ongoing real-time status, measurement of goals, and results should be made available, in all the appropriate formats, to all relevant team members on the software development project.

Reporting should also be more than just traditional static documents. Given the constant changes going on, it is necessary to have easily updatable output in a variety of formats to properly communicate information. All of this will enable different project roles to make the right decisions on how to react to changes as the project progresses.

Information from the different software disciplines is not entirely separated. This article has already mentioned the important relationships between test management and other disciplines such as requirements, change and configuration management, and development. It is therefore crucial that the outputs coming from test management can be easily combined with other project data. Current technology makes possible consolidated views combining all project metrics into a dashboard so that the overall project health can be determined. Tools also make it possible to clearly show and assess the relationships between test, development, and other project artifacts.

Focus on goals and results

Decide on quality goals for the project and determine how they might be effectively and accurately measured. Test management is where the goals, the metrics used to measure such goals, and how the data for them will be collected are defined. Many tasks in testing may not have obvious completion criteria. Defining specific outputs and measures of ongoing progress and changes will more accurately define the activities and tasks of the testing effort. Keeping specific goals and metrics for testing in mind not only helps track status and results, but also avoids the last-second scramble to pull together necessary reports.

Storing test management results in a single, common repository or database will ensure that they can be analyzed and used more easily. This also facilitates version control of artifacts (including results), which will prevent problems with out-of-date or invalid information. All of this will help project members to understand the progress made, and to make decisions based on the results of the testing effort.

Automate to save time

There is a lot to test management, and its many tasks can be very time consuming. To help save time, tools can be used to automate, or at least partially automate, many tasks. While simple tools like word processors and spreadsheets provide great flexibility, specialized test automation tools are much more focused and provide a greater time-saving benefit. Tasks that benefit greatly from automation include:

  • Tracking the relationship of testing to requirements and other test motivators
  • Test case organization and reuse
  • Documentation and organization of test configurations
  • Planning and coordination of test execution across multiple builds and applications
  • Calculating test coverage
  • Various reporting tasks

Proper tooling and automation of the right tasks in test management will greatly improve its value and benefits.
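As a small illustration of one task on the list above, the sketch below computes requirements test coverage from a mapping of test cases to the requirements they exercise. All the identifiers and data here are hypothetical, and a real tool would pull this mapping from its repository rather than a literal:

```python
# Sketch: computing requirements test coverage from a test-to-requirement
# mapping. All IDs and data below are hypothetical examples.

def coverage_report(requirements, test_cases):
    """requirements: list of requirement IDs.
    test_cases: dict mapping a test case ID to the requirement IDs it covers.
    Returns (coverage percentage, list of untested requirement IDs)."""
    covered = set()
    for reqs in test_cases.values():
        covered.update(reqs)
    untested = [r for r in requirements if r not in covered]
    pct = 100.0 * (len(requirements) - len(untested)) / len(requirements)
    return pct, untested

requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
test_cases = {
    "TC-01": ["REQ-1", "REQ-2"],
    "TC-02": ["REQ-2"],
    "TC-03": ["REQ-4"],
}

pct, untested = coverage_report(requirements, test_cases)
print(f"Coverage: {pct:.0f}%  Untested: {untested}")
# Coverage: 75%  Untested: ['REQ-3']
```

The same traversal also answers the duplication question in reverse: requirements covered by many test cases are candidates for consolidation.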

Friday, May 22, 2009

Significance of Test Metrics Management

If you are a test manager, or part of a testing group in an organization, then you have surely encountered queries such as the following:


  • How much time will it take to finish the test cycle?
  • How stable is the functionality you are testing?
  • How much testing remains to be done in the test areas assigned to you?
  • What is the status of the reviews you are doing?
  • What percentage of the build is testable?

The list of such queries can go on. The important thing is that they form part of a tester's daily routine.
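Most of these routine questions can be answered in seconds if test-case status is kept in a structured form rather than scattered notes. A minimal sketch, with hypothetical status values:

```python
# Sketch: answering routine status queries from structured test-case data.
# The status list below is hypothetical example data.

statuses = ["pass", "pass", "fail", "blocked", "not_run", "not_run", "pass"]

# Test cases that have actually been run (neither waiting nor blocked).
executed = [s for s in statuses if s not in ("not_run", "blocked")]
pct_executed = 100.0 * len(executed) / len(statuses)
pass_rate = 100.0 * executed.count("pass") / len(executed)
remaining = statuses.count("not_run")

print(f"Executed: {pct_executed:.0f}%, pass rate: {pass_rate:.0f}%, "
      f"remaining: {remaining}")
# Executed: 57%, pass rate: 75%, remaining: 2
```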

More often than not, answering metrics-related queries is an uncomfortable experience for the tester. The reasons for this are:

1. The tester is not prepared to present the requested data.
2. The data available to the tester is inadequate.
3. The tester was not aware that he or she had to be prepared with the requested metrics.

In the absence of data, which may be due to one or all of the above reasons, a tester is often prompted to leave the task at hand, collect the data from scratch, and put it into presentable form. This activity, which stems primarily from a lack of planning and ineffective management, can take a long time to finish and costs precious testing time.

Considering that testing time is often squeezed during the life cycle of a product in order to meet important product deadlines, there is a pressing need to save time and focus on the important tasks at hand. Managing metrics effectively will help achieve this objective.

As testers are the people most exposed to the software before its actual release or beta release, there is always a possibility that a tester will be asked to present various kinds of data during different stages of the life cycle.

Thus, metrics form an important part of a tester's job. Metrics need to be managed efficiently, which in turn will enhance the productivity of the tester. Both managers and testers have a role to play in managing metrics.

Manager’s Role in Test Metrics Management

Indeed, Managers have a vital role to play in managing Test metrics effectively.
What Managers can do is - PLAN BETTER

There is always scope for improvement in this area, as metrics are often missed, or not handled with priority, at the planning stage.

Most QA plan or master test plan templates say little about the metrics that will be used during the whole life cycle of product testing.


This is where managers can act: plan the usage of metrics beforehand, i.e., the details of the data that will be expected from testers during the various stages of the product life cycle.

Of course, it is difficult to anticipate every metrics requirement so early, but even some meaningful inputs at the planning stage will help testers be prepared, and will make their work more focused.

Tester’s Role in Test Metrics Management

To start with, the following attributes and skills can help testers in this area:


1. Sense of anticipation
2. Discipline
3. Usage of Tools

A sense of anticipation is a quality that would definitely help a tester manage himself or herself better. It is simply thinking in advance about what kind of metrics will be expected at a particular time or phase of the product. This sense of anticipation can be gained from one's own experience or from the experiences of others.

To cite an example: during the testing phase, one can always expect to be asked for the time required to execute the test cases, or for the status of the component under test. Knowing this, a tester can prepare the required data well before it is asked for. Anticipation can go a long way toward helping testers avoid remarks such as "You were supposed to have this data ready."

Metrics collection and management is often a tedious activity, and it sometimes requires a tester to perform repetitive tasks. Discipline is an attribute that can help here. The tester's motive should be to find a better way of doing things, and this should be taken as a challenge.

Usage of appropriate tools can help a great deal in managing things better. Data storage, data retrieval, and data presentation are the important aspects to consider when selecting a tool. Presentation matters because effective presentation helps anyone reviewing the data to save time and to gather and analyze more in less time. Existing small-scale tools such as Microsoft Access and Microsoft Excel can assist a lot.

One more tip: if a tool is already in place, then queries for retrieving the data should be planned in advance and made available for use at any time, by anyone.
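The idea of pre-planned, reusable queries can be sketched with SQLite from the Python standard library. The schema, build names, and data below are hypothetical; the point is that once results live in a database, a canned query answers a status question on demand:

```python
# Sketch: storing test results in SQLite and exposing a canned query.
# The schema and the inserted rows are hypothetical example data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results (
    test_id TEXT, build TEXT, status TEXT, run_date TEXT)""")
conn.executemany(
    "INSERT INTO results VALUES (?, ?, ?, ?)",
    [("TC-01", "build-42", "pass", "2009-05-20"),
     ("TC-02", "build-42", "fail", "2009-05-20"),
     ("TC-01", "build-41", "fail", "2009-05-18")])

# A canned query anyone can run: status counts for a given build.
def status_summary(conn, build):
    rows = conn.execute(
        "SELECT status, COUNT(*) FROM results WHERE build = ? GROUP BY status",
        (build,))
    return dict(rows.fetchall())

print(status_summary(conn, "build-42"))
```

A spreadsheet can serve the same role for small teams; the database simply makes the retrieval step repeatable and shareable.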

Friday, May 8, 2009

Reengineer The Test Management

Most organizations don’t have a standard process for defining, organizing, managing, and documenting their testing efforts. Often testing is conducted as an ad hoc activity, and it changes with every new project. Without a standard foundation for test planning, development, execution, and defect tracking, testing efforts are nonrepeatable, nonreusable, and difficult to measure.
Generating a test status report is very time consuming, and the result is often unreliable. It is difficult to procure testing information such as:
  • How much testing needs to be done?
  • How much testing has been completed to date?
  • What are the results of these tests?
  • Who tested what, and when was it last tested?
  • What is the defect number for this failed test case?
  • Do we have test results for this build?
  • Do we have a history of test case results across different builds?
  • How can we share the test cases with a remote team?
  • Does our product meet the requirements that we originally set?
  • What is the requirements test coverage?
  • Is our product ready for release?
Getting this information fast is critical for software product and process quality. But many times it is difficult to get, depending on how test cases and execution results are defined, organized, and managed.
Most organizations still use word processing tools or spreadsheets to define and manage test cases. There are many problems associated with defining and storing test cases in decentralized documents:
1. Tracking. Testing is a repetitive task. Once a test case has been defined, it should be reusable until the application is changed. Unstructured testing without following any standard process can result in creation of tests, designs, and plans that are not repeatable and cannot be reused for future iterations of the test. It is difficult to locate and track decentralized test documents.
2. Reuse. Because it is difficult to locate test cases for execution, they are seldom reused in day-to-day test execution.
3. Duplication of test cases and efforts. Because it is difficult to locate an existing test case, the same test case may be created again, wasting testing effort.
4. Version control. Since there is no central repository, version control becomes difficult, and individual team members may use different versions of test cases.
5. Changes and maintenance. Changes to product features can happen many times during a product development lifecycle. In such scenarios, test cases can become obsolete, rendering the whole effort in test planning a fruitless exercise. It is important to keep the test case list updated to reflect changes to product features; otherwise, for the next phase of testing there will be a tendency to discard the current test case list and start over again.
6. Execution results—logging and tracking. The test execution result history is difficult to maintain. It is difficult to know what testing has been done, which test cases have been executed, the result of each executed test case, and whether a problem report has been written against a failed test case.
7. Incomplete and inconsistent information for decision-making. A defect database provides only one side of the information necessary for knowing the quality of a product. It tells what is broken in a product, and what has been fixed. It does not tell what has been tested and what works. This is almost as important as the defect information.
8. Test metrics. If we cannot have a history of test results, it is difficult to generate test metrics like functional test coverage, defect detection effectiveness, test execution progress, etc.
9. Difficult to associate related information. We also need to deal with huge quantities of information (documents, test data, images, test cases, test plans, results, staffing, timelines, priorities, etc.).
10. Nonpreservation of testware. It is essential that testware (test plans, test cases, test data, test results, and execution details) be stored and preserved for reuse on subsequent versions of a single application or sharing between applications. Not only does this testware save time, but over a period of time it gives the organization a pattern, knowledge base, and maturity to pinpoint the error-prone areas in the code development cycle, fix them, and prevent the errors from recurring.
11. Inconsistent processes. Organizations are not static. People move from project to project. If the testing job is performed differently for each project or assignment, the knowledge gained on one assignment is not transferable to the next. Time is then lost on each assignment as the process is redefined.
12. Requirements traceability and coverage. In the ideal world, each requirement must be tested at least once, and some requirements will be tested several times. With decentralized documents, it is difficult to cross-link test cases with requirements. Test efforts are ineffective if we have thousands of test cases but don’t know which one tests which requirement.
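With a preserved execution and defect history (items 8 and 10 above), a metric such as defect detection effectiveness reduces to simple arithmetic. A sketch with hypothetical counts:

```python
# Sketch: defect detection effectiveness (DDE), the share of total defects
# caught before release. The counts passed in below are hypothetical.

def detection_effectiveness(found_in_test, found_after_release):
    total = found_in_test + found_after_release
    return 100.0 * found_in_test / total if total else 0.0

dde = detection_effectiveness(45, 5)  # 45 caught in test, 5 escaped
print(f"Defect detection effectiveness: {dde:.0f}%")
# Defect detection effectiveness: 90%
```

Without a centralized history of what was tested and what escaped, the two input counts to this formula are exactly the numbers an organization cannot produce.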

The problems due to unstructured, decentralized test management can be solved by reengineering the test management process. A testing project starts by building a test plan and proceeds to creating test cases, implementing test scripts, executing tests, and evaluating and reporting on results.
The objectives of reengineering test management are to
1. assist in defining, managing, maintaining, and archiving testware
2. assist in test execution and maintaining the results log over different test runs and builds
3. centralize all testing documentation, information, and access
4. enable test case reuse
5. provide detailed and summarized information about the testing status for decision support
6. improve tester productivity
7. track test cases and their relationship with requirements and product defects
A reengineered test management process can help improve key processes in test definition, tracking, execution, and reporting.