Monday, August 17, 2009

Software Testing on an Ad Hoc Basis?

Many companies tend to test on an ad hoc or heuristic basis, which means testing typically ONLY comes into play when one or more of the following symptoms appear (and testing is never truly cultivated):

  • bad results during (pre)production or bad feedback from customers after shipment
  • process or compliance criteria (e.g. ISO, IEEE, ITIL, SOX, SPICE)
  • pressure to save money overall

So to sum this up, ad hoc testing typically comes into play when a project is forced to test or forced to save money (at the cost of software testing).

Which brings us to the point that testing can easily become a true money burner. Testing on an ad hoc basis means you will certainly get into testing very late in your software lifecycle, with a minimum of human or technical resources (the classic killer phrases).
The classic project requirement is to cover some kind of smoke, alpha, or preproduction test with your test team (as quick and dirty as possible).
Depending on the project, there is a realistic chance that the topic of unit testing (or developer unit testing) was touched on at some point, so the chances that critical technical errors occur are minimized. But that's only a side note.
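To make that side note concrete: even a tiny developer unit test can catch a critical technical error long before a late, rushed test phase ever sees the application. A minimal sketch (the `parse_price` function and its values are made up purely for illustration):

```python
import unittest

# Hypothetical production helper: parse a price string like "12.50 EUR".
def parse_price(text: str) -> float:
    amount, currency = text.split()
    if currency != "EUR":
        raise ValueError("unsupported currency: " + currency)
    return float(amount)

class ParsePriceTest(unittest.TestCase):
    def test_valid_price(self):
        self.assertEqual(parse_price("12.50 EUR"), 12.5)

    def test_unsupported_currency(self):
        # The kind of edge case a quick-and-dirty late test phase easily misses.
        with self.assertRaises(ValueError):
            parse_price("12.50 USD")

# Run the tests programmatically instead of via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParsePriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

Cheap checks like these run on every build, so the interface and module errors described below never pile up until the end of the project.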

So depending on that, there are two ways your test project will typically "run":

1)
Testers will find a lot of defects in an astoundingly short time (as we said, this is typically near the end of the project; it leaves a bad taste because it sometimes creates the illusion that the test approach was correct). Those defects can be roughly separated into two groups.
The first group covers highly critical malfunctions of the application. Detecting such defects at the very end of the project simply leads to very high fixing costs (e.g. linked modules, interface errors, dropouts among your staff members and the resulting loss of know-how..) and may bring your project to the edge of failure. Furthermore, these defects will either be classified as "known bugs" to be fixed in some unknown future release, or the fixing costs will be very high and a serious project delay may occur (sink or swim).
The other class of defects will be false positives: defect reports that turn out not to be real defects. Due to factors like the late entry of testing into the project and the complexity of the nearly finished application, the testers will have a hard time understanding every (important) aspect of the application in a short time. Furthermore, the test bureaucracy (test case design, planning, reports) has to be handled simultaneously, so the human error rate is not a factor to underestimate at this point.

2)
Testers will not find many defects and cannot decide whether the quality criteria are fulfilled or not. Typically, due to the late start of testing, it is rather difficult to address the correct requirements with software testing, or to understand the complexity of the application under test well enough to provide good test coverage (see point 1).
In the worst case, the test efforts and results may look like a waste of time from the project management's or the customer's point of view.
The reasons are:
  • late beginning of testing
  • again, lack of planning
  • the question of know-how acquisition
  • typically, complexity rises in the last phases of a project
  • insufficient quality of the written documentation, which is the key criterion for functional testing (something to test against)


Depending on your project and the specific skills of your team, either one of the above scenarios or a mix of both can hit your project really hard. Expert testers with a lot of experience in a certain technology or special field may push the pendulum towards scenario one.
But as a matter of fact, both ways lead to high project costs, with either high fixing efforts or wasted project efficiency within testing.

To solve that problem, I can only recommend leaving ad hoc testing out of your project completely. It may be a good testing technique in specific test environments (e.g. regression testing, or a simple or very small set of requirements) or under specific preconditions (e.g. test experts in specific fields of interest).
But to come over to the light side of the force, it is necessary to give quality time. That means bringing a small amount of testing into the project early and giving it a chance to grow with the project. In the long run, there is a good chance that critical defects will be detected early, while the fixing costs are still rather small and the ability to steer the project with the help of early test results is better than it would be otherwise.
Furthermore, there is a chance that test experts will grow within your project, which allows you to benefit from them later on (e.g. in follow-up projects). It is also cheaper this way than buying them in at the end (unless you have a heart for all those consulting companies out there..).
So to fully benefit from quality assurance, you have to bring it in early. It is not necessary to have it at full force right from the start, but it seems a good idea to slowly fade it in early on.

It is also important to understand that testing is not a necessary evil but rather an overall improvement to your development process and output. Testers can provide early feedback to all kinds of project departments (depending on the test focus), help steer the project (test KPIs), help improve quality from a functional or non-functional point of view (again depending on the test focus), and provide a test framework that helps minimize test efforts in the long run (reuse of test cases, a test basis for automation).
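To illustrate the KPI point: one common steering metric is the Defect Detection Percentage, the share of all defects that the test team caught before release. A minimal sketch (the numbers are made up for illustration):

```python
# Defect Detection Percentage (DDP):
# DDP = defects found in test / (defects found in test + defects escaped to production)

def defect_detection_percentage(found_in_test: int, escaped_to_production: int) -> float:
    """Return the DDP as a percentage between 0 and 100."""
    total = found_in_test + escaped_to_production
    if total == 0:
        return 100.0  # no defects anywhere, so nothing escaped
    return 100.0 * found_in_test / total

# Hypothetical release: 45 defects found by the test team, 5 slipped through.
ddp = defect_detection_percentage(45, 5)
print(ddp)  # -> 90.0
```

Tracked per release, a falling DDP is exactly the kind of early warning signal an ad hoc test approach cannot deliver, because it has no consistent baseline to measure against.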

None of these factors is achievable with an ad hoc only test approach, so the money saved here will simply be pumped into some other corner/incident of the project that could have been prevented otherwise…
