Software testing strategy

Software tests are most valuable when stress levels rise – when people are working too much and human judgment starts to fail. So the tests must be automatic, returning an unqualified “thumbs up/thumbs down” indication of whether the system is behaving as it should.
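
As a minimal sketch of what “automatic” means here – the Account class and its deposit method are invented purely for illustration – a test boils down to assertions that either pass silently or fail loudly, with no human judgment in the loop:

    import unittest


    class Account:
        """Hypothetical class under test, invented for illustration."""
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount


    class AccountTest(unittest.TestCase):
        def test_deposit_increases_balance(self):
            account = Account()
            account.deposit(100)
            # This assertion is the "thumbs up/thumbs down": it either
            # passes silently or fails loudly.
            self.assertEqual(account.balance, 100)


    if __name__ == "__main__":
        unittest.main()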

It is impossible to test absolutely everything, without the tests being as complicated and error-prone as the code. Conversely, it is suicide to test nothing (in the sense of isolated, automatic tests).

So, of all the things you can imagine testing, what should you test?

You should test things that might break. If the code is so simple that it can’t possibly break, and you find that the code in question doesn’t actually break in practice, then you shouldn’t write a test for it. If I told you to test absolutely everything, then pretty soon you would realize that most of the tests you were writing were valueless, and, if you were at all like me, you would stop writing them. “This testing stuff is for the birds.”

Testing is a bet. The bet pays off when your expectations are violated. One way a test can pay off is when a test works that you didn’t expect to work. Then you had better go find out why it works, because clearly, the code is smarter than you are. Another way a test can pay off is when a test breaks when you expected it to work. In either case, you learn something. And software development is learning. The more you learn, the better you develop.

So, if you could, you would only write those tests that pay off. However, since you can’t know which tests would pay off (if you did, then you would already know and you wouldn’t be learning anything), you write tests that you think might pay off. As you test, you reflect on which kinds of tests tend to pay off and which don’t, and you write more of the ones that do pay off, and fewer of the ones that don’t.

Who should write tests?

The tests can come from two sources:

  • Programmers
  • Customers

The programmers

Write tests method-by-method. A programmer writes a test under the following circumstances (a short sketch follows the list):

  • If the interface for a method is at all unclear, you write a test before you write the method.
  • If the interface is clear, but you imagine that the implementation will be the least bit complicated, you write a test before you write the method.
  • If you think of an unusual circumstance in which the code should work as written, you write a test to communicate the circumstance.
  • If you find a problem later, you write a test that isolates the problem.
  • If you are about to redevelop some code, and you aren’t sure how it’s supposed to behave, and there isn’t already a test for the aspect of the behaviour in question, you write a test first.
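
For instance, the first three circumstances above might play out like the following sketch. The parse_duration function is hypothetical, and its implementation is included only so the example runs; the point is that the tests are written to pin down the unclear interface and to record the unusual circumstance:

    import unittest


    def parse_duration(text):
        """Hypothetical function under test; a minimal implementation."""
        units = {"h": 3600, "m": 60, "s": 1}
        total, number = 0, ""
        for ch in text:
            if ch.isdigit():
                number += ch
            elif ch in units and number:
                total += int(number) * units[ch]
                number = ""
            else:
                raise ValueError("invalid duration: " + repr(text))
        if number:
            raise ValueError("invalid duration: " + repr(text))
        return total


    class ParseDurationTest(unittest.TestCase):
        # Written before the method: the test decides what the interface
        # means ("1h30m" parses to a number of seconds).
        def test_hours_and_minutes(self):
            self.assertEqual(parse_duration("1h30m"), 5400)

        # An unusual circumstance, written down as a test so the decision
        # is communicated to the next reader.
        def test_rejects_garbage(self):
            with self.assertRaises(ValueError):
                parse_duration("soon")


    if __name__ == "__main__":
        unittest.main()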

The programmer-written unit tests always run at 100%. If one of the unit tests is broken, no one on the team has a more important job than fixing the tests. Because if a test is broken, you have an unknown amount of work to do to fix it. It might only take a minute. But it might take a month. You don’t know. And because the programmers control the writing and execution of the unit tests, they can keep the tests and the code completely in sync.

The customers

Write tests story-by-story. The question they need to ask themselves is, “What would have to be checked before I would be confident that this story was done?” Each scenario they come up with turns into a test – in this case, a functional test.
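
For example, suppose a story says that orders over $50 ship free (a hypothetical story and rule, used only for illustration). The scenarios the customer names might become functional tests along these lines:

    import unittest


    def shipping_cost(order_total):
        """Hypothetical rule under test: orders over $50 ship free."""
        return 0 if order_total > 50 else 5


    class FreeShippingStoryTest(unittest.TestCase):
        # Each method answers one of the customer's "what would have to be
        # checked before I'd call this story done?" scenarios.

        def test_order_over_threshold_ships_free(self):
            self.assertEqual(shipping_cost(60), 0)

        def test_order_under_threshold_pays_shipping(self):
            self.assertEqual(shipping_cost(20), 5)

        def test_order_exactly_at_threshold(self):
            # The customer decided that an order of exactly $50 still pays.
            self.assertEqual(shipping_cost(50), 5)


    if __name__ == "__main__":
        unittest.main()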

The functional tests don’t necessarily run at 100% all the time. Because they come from a different source than the code itself, I haven’t figured out a way to synchronize the tests and the code in the same way that the code and the unit tests are synchronized. So, while the measure of the unit tests is binary—100% or bust—the measure of functional tests is by necessity based on percentages. Over time you expect the functional test scores to rise to near 100%. As you get close to a release, the customer will need to categorize the failing functional tests. Some will be more important to fix than others.

Customers typically can’t write functional tests by themselves. They need the help of someone who can first translate their test data into tests, and over time can create tools that let the customers write, run, and maintain their own tests. That’s why an XP (Extreme Programming) team of any size carries at least one dedicated tester. The tester’s job is to translate the sometimes-vague testing ideas of the customer into real, automatic, isolated tests. The tester also uses the customer-inspired tests as the starting point for variations that are likely to break the software.
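
One way that might look is sketched below, continuing the hypothetical shipping rule from the story example and assuming the customer supplies scenarios as simple rows of data (the CSV-style format is an illustrative choice, not something prescribed here). The customer edits the rows; the tester owns the harness and adds the variations.

    import csv
    import io
    import unittest


    # Rows the customer writes and maintains (in practice, a spreadsheet
    # or CSV file they edit themselves); the tester supplies the harness.
    CUSTOMER_SCENARIOS = (
        "order_total,expected_shipping\n"
        "60,0\n"
        "20,5\n"
        "50,5\n"
    )


    def shipping_cost(order_total):
        """Hypothetical rule under test, same as in the story example."""
        return 0 if order_total > 50 else 5


    class CustomerScenarioTest(unittest.TestCase):
        def test_all_customer_scenarios(self):
            # The harness turns each customer-written row into an
            # automatic, isolated check.
            reader = csv.DictReader(io.StringIO(CUSTOMER_SCENARIOS))
            for row in reader:
                with self.subTest(row=row):
                    self.assertEqual(
                        shipping_cost(float(row["order_total"])),
                        float(row["expected_shipping"]),
                    )

        def test_tester_variation_just_above_threshold(self):
            # A tester-added variation probing a boundary the customer
            # did not mention, hoping it pays off by exposing a surprise.
            self.assertEqual(shipping_cost(50.01), 0)


    if __name__ == "__main__":
        unittest.main()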

Even if you have a dedicated tester, someone whose joy comes from breaking software that is supposed to be working already, that tester works within the same economic framework as the programmers writing tests. The tester is placing bets, hoping for a test that succeeds when it should fail or that fails when it should succeed. So, the tester is also learning to write better and better tests over time, tests that are more likely to pay off. The tester is certainly not there to just churn out as many tests as possible.
