
                         Software Quality Guild:
                           Testing Guidelines

     Testing is the process of executing a program with the intent of 
     finding errors.  
                                  - Glenford J. Myers

There are several good books on software testing, some of which are 
included in the recommended reading section at the end of this document.  
These guidelines are not intended to replace those books.  And in 
software where safety is an issue, these guidelines certainly don't 
replace the rigorous testing methodologies needed to certify that class 
of code.

Rather, these guidelines are a quick reference list of practical 
techniques.  They might act as a checklist or, more likely, an 
abbreviated refresher course to remind you whether you're using all the 
testing tools at your disposal.

Always keep in mind that "testing" is not a "phase" tacked on at the 
end of the development cycle.  Testing is an integral part of code 
implementation and is tightly coupled with design.  Some aspects of 
testing extend all the way up into the definition of requirements.  
Testing should not be an afterthought, nor done only if you have the 
time.  You should be testing your code and design constantly throughout 
the development cycle.

     There are three steps to fixing any given bug:

       1) Put the bug in the code.
       2) Find the bug.
       3) Fix the bug.

     Software bugs do not sneak into code on their own.  Some 
     programmer put them there.  Rather than treating bugs as 
     inevitable, treat them as a Massive Foul Up (MFU).  
                                  - David Thielen

As David Thielen points out, every bug in a program was put there by 
some programmer.  Since the goal is to release defect-free software, 
the ideal would be to not put bugs in the code to begin with.  Some of 
the books in our recommended reading address that issue.  But since 
programming is still a very human activity, that ideal will probably 
never be achievable.  

Therefore, at some point a module is going to have to be tested, by 
itself and with the rest of the program.  When designing your tests, 
keep in mind the difference between verification and testing.  
Verification shows a module or program meets its design goals.  Testing 
tries to break a module or program.  

The difference in point of view is crucial.  Tests designed to show 
that a module works tend to be very different from tests designed to 
attempt to break the module.  The latter generally reveal a greater 
number of errors and more subtle errors.

Again, these guidelines are only a synopsis.  And testing itself is 
only one small piece of the development cycle.  Producing quality 
software is an all-encompassing goal, which requires attention to all 
phases of the development cycle.  

As Kevin Weeks put it so graphically:

                 Testing can't replace implementation
                  Implementation can't replace design
                     Design can't replace analysis
                     Analysis can't replace design
                  Design can't replace implementation
                 Implementation can't replace testing
 
                                  - Kevin Weeks

                        -------------------------

Design comes first.  No amount of testing will save a bad design.

When designing a module, give thought to how you will test it.  At first, it
seems that most modules are too complicated to develop a thorough test
suite.  But if you think of testing during design, you may come up with a
design that meets the goals and is also testable.

Where possible, design tests that are compiler/platform-independent.  This
will ease verification if a module is ported to another environment.

Include documentation with each module detailing the procedures that will
exercise the module in the final program.  Write this documentation when
you design the module and update it as you implement the module.  If you
wait until the end, the job will be overwhelming.

Have others look over your work.  One popular method is code reading:  the 
author reads the code aloud to two other programmers.  A similar technique 
is to have a programmer other than the author read the code to verify 
legibility and estimate maintainability.  That second programmer also 
verifies that the testing appears complete, and that the tests already in 
place appear valid and correct.  The main point is that the review be 
non-confrontational.  Some studies indicate that this step finds the 
highest number of bugs and the greatest variety of bugs.

Use assert, or the equivalent for your language.  Any should-not-occur 
situation should be handled gracefully (if possible) by the production code.  
But development code should not hide those types of errors.  On the
contrary, development code should immediately inform the developer/tester 
that an error has occurred.  Include enough information to track down where
and (hopefully) how the error occurred.

Use Lint or its equivalent.  Don't ignore a warning unless you thoroughly
understand it.

Set your compiler to maximum warning levels and treat each warning as a 
syntax error that must be fixed.

If feasible, compile under multiple compilers at maximum warning level.

Implement a Revision Control System and use it. If you don't know what 
you're testing, it negates the test.  A working RCS combined with a 
known (and repeatable) build process will help ensure that you're 
testing against a known code set.  To enforce consistency, automate the 
RCS and build processes.  At a minimum, write down a procedure to 
formalize the process used to produce a given code-set.  The procedure 
should include all configuration settings that could alter the 
executable program. All configuration settings should be made 
explicitly.  Don't rely on default settings.

Single-step through every piece of new code with a debugger.  Do this for 
every pathway through that piece of code.

Where possible, compare different algorithms.  Often, a first pass at a
problem will produce an algorithm that works but is not optimized enough in 
either speed or size.  When optimizing, retain the original under a new 
name.  During development, execute both the original and the optimized 
versions and compare the results.  If they don't match within bounds defined
by the problem, inform the developer/tester.

Test as you go.  Treat each module of a program as a mini-project.  A 
project isn't complete if it hasn't been tested.

Include a standalone test suite with each module, where practical.  This
should allow the module to be compiled as a standalone program.  The suite 
should run a series of tests and compare the results to predictions/previous 
runs.  If changes are made, rerun the test suite.  Results should be 
archived.

Automate as much of the testing process as possible.

Use all the tools you can find.  Debuggers, automated pointer/heap
verification libraries and programs, automated execution tools, etc.

Capture all input and save to a file.  Implement code to use that file in
place of user input.  A standalone testing tool may reduce the need for
this.

Always have a runnable program.  Start with a simple main program and build 
from there.  When a module is complete, integrate it with the rest.  Perform 
what testing you can with the partially complete program.  This is one way 
to find integration errors very early in the implementation phase.

During integration, test error reporting and handling from low-level 
functions.  Enumerate every error condition that can occur in each 
low-level function (this should have been done when the function was 
designed).  Then deny a resource to the low-level function, or 
short-circuit it and force it to return each of those errors.

Test in as many variants of the target environment as possible.  Use 
different video, sound, network hardware and drivers.  Use different memory 
managers, if applicable.  Example:  For an MS-DOS program, test in a Windows 
DOS box, OS/2 DOS box, DR-DOS, etc.  Include unrelated hardware, like 
scanner interface, image capture board, other oddball hardware.  Obviously, 
test low memory and low disk scenarios.

Allow enough time in your beta test cycle for at least one major revision and
more than one minor revision.  A thorough and vigorously pursued beta test
will turn up bugs.  Don't put yourself into a situation where you can't
respond appropriately to those bugs.

Track classes of bugs that show up in your software.  If a particular class
of bug appears regularly, make team members aware of that class of bug and 
try to help everyone adjust their style to reduce the chance of continuing 
to repeat that bug.  (Author's note:  Large developers have commercial tools 
that address this subject.  We'd like to include practical suggestions which 
will help the smaller developers actively track and occasionally review the 
bugs that appear in their code.  If you have any ideas, please let us know.)

When a test indicates a bug, don't forget that the code doing the testing
could contain the bug.

Finally, remember that testing is only capable of proving the presence 
of errors, not the absence of errors.

                        -------------------------

                           Recommended reading

_The Art of Software Testing_, Glenford Myers, John Wiley and Sons, 1979, 
ISBN 0-471-04328-1.

_Software Testing Techniques_, Boris Beizer, Van Nostrand, 1990, ISBN 
0-442-20672-0.

_Debugging Techniques in Large Systems_, Rustin, R. (ed.), Courant Computer
Science Symposium 1, Prentice-Hall, 1971, ISBN 0-13-197319-3.

_Practical Strategies for Developing Large Software Systems_, Horowitz, E. 
(ed.), Addison-Wesley, 1975, ISBN 0-201-02977-4.

_Program Test Methods_, Hetzel, W. (ed.), Prentice-Hall, 1973, ISBN 
0-13-729624-X.

"Is Your Code Done Yet," Kevin Weeks. Computer Language Magazine, April 1992,
pg 63.

"Glass Box Testing," Kevin Weeks. The C Users Journal, October 1992, pg 47.



While not directly discussing testing, some of these guidelines came from:

_Writing Solid Code_, Steve Maguire, Microsoft Press, 1993, ISBN 
1-55615-551-4.
  
_No Bugs_,  David Thielen, Addison-Wesley Publishing Co., 1992, ISBN 
0-201-60890-1.

