Thursday, July 22, 2010

Continuation of my last post on relationships with customers...

I recently got an assignment to look into the flaws of one of my new client's departments, so I thought I would share some tips from that experience here...

Finding the flaws in something is the area where we test managers treat ourselves as experts. But in order to get on with a new client, the skills needed to build rapport and the basic consulting goals of selling, influencing, guiding and managing are much more important than simply finding their faults and presenting them.

It is never a good idea to tell the client that their "baby is ugly", even though they have hired you for exactly that. Initial meetings with the client are not the time to bring out all the (reasonably obvious, at least to you) flaws in their organization's approach to the consulting assignment, or in their management style. It's not necessarily the time to lord it over the client about how much you know. It's the time to be affable. Being affable is about being in rapport with your consulting client. It requires you to be able to walk a mile in their shoes. When you and a client get on well, when you are in rapport, they are happy to take on your ideas and follow your lead. They will be more receptive to your suggestions, and what happens next is far more likely to be a successful engagement.

Sunday, June 6, 2010

How to build great relationships with customers

For all you test managers, I would like to share a few of the most important aspects of building great relationships with your customers, which I have learnt from my own experience:


Don’t just understand the customer’s needs, understand their business:
Do you know why a customer wants us to ensure the quality of his or her application? How does it fit into the larger picture of the customer’s business? How does it generate money for the customer? These are important questions for understanding the context. When you serve your customer, you are helping them address at least one of their business objectives. Understanding what works for the customer helps you align your actions with those objectives. That is a sure way to add value, because the customer no longer looks at you as a ‘vendor’ but as a ‘partner’. This is something most folks in technical roles critically need to understand.

Communicating one-on-one, frequently:
Great relationships are built one conversation at a time. Open and transparent conversations are opportunities – to understand and to convey. Iterations of understanding and conveying the right things result in a credible relationship. In an outsourced world, I cannot emphasize enough the value of ‘face time’ with customers. Most customers will not open up over a weekly status meeting. Communicating frequently with your customer and understanding the changes in their business helps. Phone calls and emails are great tools to ensure continuous communication.

Ship Results:
All said and done, it all boils down to results. Great results delivered consistently over a period of time are the best strategy for building a strong relationship. Results build long-lasting credibility. When you have a deeper understanding of the client’s ‘business’ and have ‘communicated’ frequently to manage expectations, you are in a much better position to deliver meaningful results that delight the customer. The key is to manage expectations, make realistic promises and deliver on them.

Thursday, May 27, 2010

Why is domain expertise required in an effective test team?

In the current industry scenario, testers are expected to have technical testing skills and also either come from a domain background or have gathered domain knowledge. First of all, I would like to mention the three dimensions of a testing career. There are three categories of skill that need to be judged before hiring any software tester:

1) Testing skill
2) Domain knowledge
3) Technical expertise

No doubt any tester should have basic testing skills like manual testing and automation testing. A tester with common sense can find most of the obvious bugs in the software. But would you say that this much testing is sufficient? Would you release the product on the basis of this much testing? Certainly not. You will certainly have the product looked at by a domain expert before it goes to market. While testing any application you should think like an end user. But every human being has limitations, and one cannot be an expert in all three of the dimensions mentioned above, so you cannot assure that you will think 100% like the end user who is going to use your application. The user who is going to use your application may have a good understanding of the domain he or she works in. You need to balance all these skills so that all product aspects get addressed.

Nowadays you can see that many of the professionals being hired by companies are domain experts more than technical experts. The software industry is also seeing a healthy trend of professional developers and domain experts moving into software testing. There is one more reason why domain experts are in such demand: when you hire fresh engineers who are just out of college, you cannot expect them to compete with experienced professionals. Why? Because experienced professionals have the advantage of domain and testing experience, they have a better understanding of different issues and can deliver the application better and faster.

Here are some of the examples where you can see the distinct edge of domain knowledge:

1) Mobile application testing
2) Wireless application testing
3) VoIP applications
4) Protocol testing
5) Banking applications
6) Network testing

How will you test such applications without knowledge of the specific domain? Are you going to test BFSI (banking, financial services and insurance) applications just for UI, functionality, security, load or stress? You should know the user requirements in banking, the working procedures, the commerce background, the exposure to brokerage and so on, and should test the application accordingly; only then can you say that your testing is adequate. This is where subject-matter experts come in.

When you know the functional domain better, you can write and execute more effective test cases and realistically simulate end-user actions, which is a distinct advantage.
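To make this concrete, here is a small, purely hypothetical sketch in Python. The interest routine and the business rules checked against it are invented for illustration; the point is that the second group of checks only occurs to a tester who understands the banking domain.

# A hypothetical simple-interest routine standing in for the application under test.
def savings_interest(balance, annual_rate, days):
    return round(balance * annual_rate * days / 365, 2)

# A tester without domain knowledge may stop at the obvious happy-path check.
assert savings_interest(10000, 0.04, 365) == 400.00

# A tester with banking domain knowledge also checks rules the business cares about.
assert savings_interest(10000, 0.04, 366) == 401.10  # leap-year accrual
assert savings_interest(0, 0.04, 365) == 0.00         # dormant, zero-balance account
assert savings_interest(1, 0.04, 1) == 0.00           # rounding at the smallest currency unit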

For freshers in the testing field:

My basic list of the required testing knowledge:
* Testing skill
* Bug hunting skill
* Technical skill
* Domain knowledge
* Communication skill
* Automation skill
* Some programming skill
* Quick grasping
* Ability to work under pressure

That is a huge list, so you will certainly ask: do I need all of these skills? It depends on you. You can stick to one skill, or be an expert in one skill with a good understanding of the others, or take a balanced approach to all of them. This is a competitive market and you should definitely take advantage of it. Make sure you are an expert in at least one domain before making any move.

And, what if you don’t have enough domain knowledge?
You may be posted to any project, and the company can assign any work to you. So what if you don’t have enough domain knowledge for that project? You need to grasp as many concepts as quickly as you can. Try to understand the product as if you were the customer, and think about what the customer will do with the application. Visit the customer site if possible to see how they work with the product, read online resources about the domain of the application you are going to test, participate in events on that domain, and meet the domain experts. Alternatively, the company may provide all of this as in-house training before assigning any domain-specific task to testers.

Tuesday, April 27, 2010

How to manage quality for a project based on new technology

A project based on new technology holds many problems that you did not anticipate. Many of these problems may require new tools, new skills, or substantial time to solve.

A process that is optimized to handle problems you understand may be wrong for problems you don’t yet understand. That’s why such a project needs a more exploratory, risk-oriented, incremental form of process, as opposed to one that emphasizes pre-planned, predictable tasks.

The key to working in unfamiliar territory is to change your test management focus from task fulfillment to risk management. The project team should continue to devote attention to planning and fulfilling project tasks, but that should not be the focus of the test team. Instead, the test manager must assume that any given prediction, at any given time, may be wrong.

A risk managed testing project implies specific strategies to deal with that assumption. Here’s my list:


· Produce a risk list for the project. Even if it is not a printed list, any leader on the team should be able to tell you what the major risks are, and all project leaders should be in general agreement about the list.

· The team should practice talking about risk areas, rather than discouraging such discussion as “negative thinking.” The risk that you don’t discuss is the one that you won’t guard against. However, you should also stress that a new technology project requires you to be bold and to accept a certain amount of risk. The real problem is an inconsistent attitude about risk across the team.

· There should be some process whereby anyone on the project can learn about the current status of risk areas. There should be frequent project reviews (every few weeks) that involve a level of management one step higher (at least) than day-to-day management.

· There should be a change control process that prevents the careless introduction of new risk drivers after the scope of the project has been set. A clear charter for the project can help in limiting scope creep. Too much change control, too early, can also hurt the project. Consider establishing a progressive change control strategy, whereby the closer the product is to shipping, the stricter changes are controlled.

· Avoid specific promises to upper management about revenue dates, or anything else, unless you are in a position to fulfill those promises without compromising the project. Until the risks of the project have been explored and mitigated, such promises are premature. Also, look for an exit strategy that allows the project to be cancelled if it becomes apparent that the risks are too large relative to the rewards.

· For each risk driver, look for a strategy that will allow that element to be dropped from the project if it turns out to be too difficult to solve the problems that come up. Consider moving especially risky components to a follow-on release, or making them optional to install. I call this an “ejection seat” strategy. If risky components cannot be ejected, they can drag the whole project down.

· Adopt a prototyping strategy, or some other means by which risk drivers and risks can be explored while there’s still time to do something about them, or while it’s still early enough to cancel the project without huge losses.

· Adopt a test management process that dynamically reallocates test effort to focus on risk areas as they emerge. It helps to have an explicitly risk-based test plan; a small sketch of this idea follows this list. Look for strategies that minimize the amount of time between defect creation and resolution.

· The project should have a larger test team than normal, in order to react quickly to new builds. The development team does not have to be larger (in fact, many say it should be smaller), but it must have the ideas, motivation, and skills needed to do the job. Special training or outside support should be considered.

· Draft a test plan that includes iteration and synchronization. Think of it like rock climbing or crossing a fast river by jumping from rock to rock: plan ahead, but revise the plan as new challenges arise and new information comes to light. Don’t let the bug list spiral out of control; fix bugs at each stage. You might have to scrap some component of your technology and redesign it once you have some experience with it in field testing. Don’t be surprised by that. It’s perfectly normal, and it’s the people who refuse to rewrite bad code who suffer more in the end. Go through a test, triage, bug-fix and release end-game cycle for each iteration.

· Pay attention to how the product will be tested. Assure that the designers make every reasonable effort to maximize the controllability and visibility of each element of the product. These can include such features as log files, hidden test menus, internal assertion statements, alternative command-line interfaces, etc. Sometimes just a few minutes of consideration for the test process can lead to profound improvements in testing.

· Don’t put all your eggs into the risk-based basket, because you may inadvertently overlook an area that was risky, just because you didn’t anticipate that risk. So, devote a portion of your test strategy, maybe a third, to a variety of test approaches that are not explicitly risk-based. Also, establish a process for encouraging new approaches to be suggested and tried.

· Look for ways of making contact with the field about the emerging product. In speculative projects, problems often emerge that are invisible to the developing organization, yet obvious to customers. Also, assure that technical support, product management, and technical writers are well connected to the test team, so they can feed information to the testers without undue effort.

· Look for records of schedules, schedule slips, meeting notes, and decisions made. Any metrics or general information that can be collected unobtrusively will help you understand the dynamics of the new technology and the team. That, in turn, helps plan the next project.
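As promised above, here is a minimal sketch in Python of the risk-list-driven reallocation of test effort. The risk areas, the 1-5 likelihood and impact scores, and the hour budget are all invented for illustration; a real list would come from the project's own risk reviews.

# Keep a live risk list, score each risk, and split the available test hours
# toward the highest-scoring areas as they emerge.
RISKS = [
    # (risk area, likelihood 1-5, impact 1-5) -- illustrative values only
    ("new messaging middleware", 4, 5),
    ("data migration scripts",   3, 5),
    ("third-party payment API",  3, 4),
    ("report layout changes",    2, 2),
]

TOTAL_TEST_HOURS = 120  # hypothetical budget for the next iteration

def allocate(risks, total_hours):
    # Weight each risk by likelihood x impact and divide the hours proportionally.
    scored = [(area, likelihood * impact) for area, likelihood, impact in risks]
    total_score = sum(score for _, score in scored)
    plan = [(area, round(total_hours * score / total_score, 1)) for area, score in scored]
    return sorted(plan, key=lambda item: item[1], reverse=True)

for area, hours in allocate(RISKS, TOTAL_TEST_HOURS):
    print(f"{area:28s} {hours:6.1f} h")

Rerunning the same allocation at each project review, with refreshed scores, is one simple way to make the reallocation dynamic rather than a one-time plan.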

Thursday, February 11, 2010

SOA Testing Considerations

Service-Oriented Architectures (SOA) are getting a lot of attention as businesses look for ways to be more responsive and flexible in meeting customer needs through new and existing technology. The adoption of SOA as both a technology and a business strategy is on the increase. For the past couple of years, people have focused on the development aspects of SOA. However, the need for testing SOA has become increasingly apparent as companies deploy it and learn that it differs from other architectures in key ways. Traditionally, testing strategies lag behind development strategies by a matter of months. In the case of SOA, the tools have been around for some time, but the awareness of the need for SOA testing has been lagging. Randy Rice is a leading author, speaker and consultant in the field of software testing and software quality. I got a chance to study his articles on SOA testing and would like to share these excerpts, which might be of interest to some of you.

Whenever a new technology emerges, one of the first things to do is to define what makes the testing of that technology unique. The uniqueness of SOA is seen in how services are built and how they support business processes. Therefore, an SOA testing strategy needs to consider both structural and functional perspectives.

The Structural Perspective:
This is the perspective of testing that focuses on the internals of the SOA. It can include the code used to create services as well as the architecture itself. For many people, this has been the focus of their SOA testing. One reason is that as services are created, this is the first opportunity for the SOA to be tested. Although the structural testing perspective is very important, it is only one aspect of the SOA; we will explore the other types of testing later in this article. One example of structural testing for SOA is examining the Web Services Description Language (WSDL) to learn how data elements are supplied to services. WSDL uses XML to describe network services as collections of communication endpoints capable of exchanging messages. By learning how to read and understand WSDL, testers can learn many important things to test. However, some testers may not be suited to spending time at the WSDL level, so having developers involved at this level of testing can be helpful.

The Functional Perspective:
To ensure that the services support the business processes, functional testing is needed. Services can and should be tested individually, but the most critical type of testing is the integration testing of services and business processes. Unless services are tested in the context of business processes, you lack the confidence that they will perform correctly when used to perform business functions. An example of this type of testing is to model tests based on transactions and processes. This is normally called scenario-driven or process-driven testing. In this approach, business processes are described so that distinct scenarios can be identified. The services that support each scenario are also identified so they can be tested in relation to each other.

When defining a test strategy, critical success factors define the risks associated with the technology or project and the types of testing to be performed that can help reduce those risks. For SOA, some major critical success factors include:

* Correctness – Do the services and architecture deliver correct results?
* Performance – Does the SOA deliver results quickly and efficiently?
* Security – Is the SOA adequately protected against external attacks? Is data protected to keep it from unauthorized users?
* Interoperability – Do the services work together in the context of business processes to deliver correct results?

There are other success factors that you may choose to address, depending on your project. These include usability, maintainability, reliability and portability.

Tools:
Tools add leverage to the testing of any technology, but SOA has unique characteristics that almost require the use of tools. The big issue in test tools for SOA is whether traditional test tools can be extended, or whether totally new tools are needed. Much depends on the type of tools you currently have in place and their ability to handle SOA testing needs. Before investing a lot of money and time in acquiring an “SOA-specific” tool, perform some tests to make sure your existing tools can’t handle the job. People can have large investments in tools and automated testware, so consider that investment before switching to a new tool set. Tools are needed for SOA because:

* They can provide a harness for accessing services that may otherwise not be easily tested. It may be impractical to test services by hand because they often lack a user interface. This is called “headless testing” because there is no single access point to the services.
* They can test faster than humans once the automation has been created.
* They can provide precision in tests such as regression and performance testing.

Fortunately, SOA test tools appeared early in the emergence of SOA, so the tools currently on the market have had time to mature.

People:
Testing is a very human-based activity. When you list the major problems people face in testing any software, the great majority are human in nature. While SOA has a major technical element, you still need to consider who will plan, perform and evaluate the tests. Developers have a great perspective on structural testing, but may not be the most willing participants. However, with some encouragement, tools and management support for testing, developers can test their own work. Stakeholders, especially end users, can add the business perspective to the test; the issue with end-user testing is getting enough of their time to adequately focus on the test. Testers can bring a wealth of testing knowledge to the project. They can adapt previously defined tests for SOA and can also help acquire or adapt the right tools for the job. You may also need to get specialists involved in areas such as performance and security testing.

Controlling the SOA Test Environment:
Your test is only as good as your test environment. If you can’t identify what is in the test environment, such as the services, you will not be able to trust the test results. With SOA, governance is needed to control when services are created or modified, and when they should be placed into the test SOA environment. Unfortunately, people often fail to appreciate the importance of test environment control until they experience a failure due to an incorrect test.

In a gist, SOA brings new test considerations, but the good news is that many of the techniques can be adapted from past technologies.
Effective testing is a matter of getting the right balance of people, processes and tools, all working together in an integrated test environment. SOA is no different. By taking time at the start of your first SOA project to define the uniqueness of the technology, you can approach the project knowing that the major points have been considered.
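To close with something concrete, here is a minimal, hypothetical sketch in Python of the “headless” functional testing described above: driving a service directly, with no user interface, and checking the business result. The endpoint URL, operation name and XML fields are invented for illustration; in practice they would come from the service's actual WSDL.

import requests

# Hypothetical SOAP endpoint and request; real values would be taken from the WSDL.
ENDPOINT = "https://example.com/services/OrderService"

SOAP_BODY = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ord="http://example.com/orders">
  <soapenv:Body>
    <ord:GetOrderStatus>
      <ord:orderId>12345</ord:orderId>
    </ord:GetOrderStatus>
  </soapenv:Body>
</soapenv:Envelope>"""

response = requests.post(
    ENDPOINT,
    data=SOAP_BODY,
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": "GetOrderStatus"},
    timeout=10,
)

# Structural check: did the service answer at all?
assert response.status_code == 200

# Functional check: does the payload carry the business result this scenario expects?
assert "<ord:status>SHIPPED</ord:status>" in response.text

A scenario-driven test would chain several such calls (create order, pay, query status) and verify the outcome of the business process as a whole, not just each service in isolation.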