[Mono-list] Regression tests: we need contributors.
John Duncan
jddst19@mac.com
Mon, 12 May 2003 22:04:51 -0400
Sorry I did not respond to this earlier. I know I'm going to sound like
a silly naysayer, but hear me out.
Unit tests written after the fact tend to be brittle and unwieldy. (My
opinion.) This is because they are not the right tool for the job. They
function best as a design tool. They set goals for writing the code.
You unit test a piece of code that does one thing, for example,
calculating an offset. It's like writing a specification. But a unit
test is harder to write for a function that integrates several of those
one-step functions. Sometimes you can mock out the underlying
functions. This is the basis of the mock-object testing pattern: you
identify the call sequence you expect on a collaborating object and
write a test that substitutes a mock for that object. The mock can
verify that the integrating code makes its calls in the right order. A
good rule: a unit test should not fail because of a defect already
covered by another, more specific test (which also fails).
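To make this concrete, here is a minimal sketch in C# with NUnit,
built on the offset example above. The IRecordReader interface,
OffsetCalculator class, and MockRecordReader are hypothetical names of
my own invention; the point is just that a hand-rolled mock records
the call sequence so the test can check how the integrating code
drives its collaborator.

using System;
using NUnit.Framework;

// Hypothetical collaborator interface: the code under test reads
// record lengths through this, so a test can substitute a mock.
public interface IRecordReader
{
    int ReadLength (int recordIndex);
}

// Hypothetical code under test: integrates the one-step
// ReadLength calls into an offset calculation.
public class OffsetCalculator
{
    IRecordReader reader;

    public OffsetCalculator (IRecordReader reader)
    {
        this.reader = reader;
    }

    public int OffsetOf (int recordIndex)
    {
        int offset = 0;
        for (int i = 0; i < recordIndex; i++)
            offset += reader.ReadLength (i);
        return offset;
    }
}

// Hand-rolled mock: records every call made against it.
class MockRecordReader : IRecordReader
{
    public string Calls = "";

    public int ReadLength (int recordIndex)
    {
        Calls += "ReadLength(" + recordIndex + ");";
        return 10; // every record is 10 bytes in this fixture
    }
}

[TestFixture]
public class OffsetCalculatorTest
{
    [Test]
    public void OffsetOfThirdRecord ()
    {
        MockRecordReader mock = new MockRecordReader ();
        OffsetCalculator calc = new OffsetCalculator (mock);

        Assert.AreEqual (20, calc.OffsetOf (2));
        // The mock verifies the call sequence, not just the result.
        Assert.AreEqual ("ReadLength(0);ReadLength(1);", mock.Calls);
    }
}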
I think what you're looking for in regression tests is another type of
testing, one that exercises portions of functionality in a rigorous and
useful way. These tests really must be designed, not written ad hoc.
They should be written in order of how likely the functionality they
cover is to be used, and they should exercise that functionality the
way it will be used in the real world.
There are two approaches I like. One is to identify the borders of the
input's sub-domains and place one test in each sub-domain.
These tests would have hard-coded data. Most QA teams use this
approach. Sometimes they make a distinction between "positive" and
"negative" tests. Positive tests show that good inputs produce desired
results. Negative tests show that bad inputs produce errors. I make no
such distinction because bad inputs should produce defined and expected
error conditions. This makes them positive tests. Let's call this
"directed testing".
The other approach is similar, but instead of using one test per
sub-domain, you define classes of tests and a way of generating them.
Then you assign a probability of use to each class. The testing engine
(of course, you need one) then runs until a test from the
lowest-probability class has been executed. You end up with a large
number of tests run, the majority of them exercising the most important
parts of the code. This is called "stochastic testing" because of the
probabilities. The advantage of this sort of testing is that it is not
deceived by accidental quirks of hard-coded data. The disadvantage is
that the testing infrastructure is harder to write.
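Here is a toy version of such an engine in C#. The two test classes,
their 0.9/0.1 weights, and the exact stopping rule (run until the
lowest-probability class has executed at least once) are my own
reading of the scheme, not any existing tool:

using System;

public class StochasticEngine
{
    // A test class: generates one random case and throws on failure.
    public delegate void TestClass (Random rng);

    // Common case: Abs of a positive int is the identity.
    static void PositiveInput (Random rng)
    {
        int n = rng.Next (1, Int32.MaxValue);
        if (Math.Abs (n) != n)
            throw new Exception ("Abs failed on " + n);
    }

    // Rarer case: Abs of a negative int flips the sign.
    static void NegativeInput (Random rng)
    {
        int n = -rng.Next (1, Int32.MaxValue);
        if (Math.Abs (n) != -n)
            throw new Exception ("Abs failed on " + n);
    }

    static void Main ()
    {
        TestClass[] tests = { new TestClass (PositiveInput),
                              new TestClass (NegativeInput) };
        double[] probs = { 0.9, 0.1 }; // assumed likelihoods of real use
        int rarest = 1;                // index of the least likely class

        Random rng = new Random ();
        int[] runs = new int [tests.Length];

        // Keep drawing weighted-random test classes until the
        // lowest-probability class has been exercised at least once.
        while (runs [rarest] == 0) {
            double r = rng.NextDouble (), acc = 0.0;
            int chosen = tests.Length - 1; // fallback if rounding leaves a gap
            for (int i = 0; i < probs.Length; i++) {
                acc += probs [i];
                if (r < acc) { chosen = i; break; }
            }
            tests [chosen] (rng);
            runs [chosen]++;
        }
        Console.WriteLine ("ran {0} common and {1} rare cases",
                           runs [0], runs [1]);
    }
}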
Thoughts?
John
On Saturday, May 10, 2003, at 09:32 PM, Miguel de Icaza wrote:
> Hello everyone!
>
> Although our class libraries are moving along very quickly, we need
> volunteers to help continue the development of class library unit
> tests.
>
> If you do this, you get to use the fancy Nunit-Gtk# tool, which is
> really nice (Screenshot attached). Notice how System.Text has only
> tests for two classes!
>
> It is also the best way of learning C# and the .NET API.
>
> Miguel
>
> <Screenshot-Nunitgtk.png>