How to Develop a Good Acceptance Test

Developing an Acceptance Test is something that many (most?) developers avoid. It’s a pain to do, hard to fit into a busy schedule, and boring besides. Plus, no one likes to find bugs in their own system. But heartache and stress can be reduced by following some steps to ensure that a useful Acceptance Test is performed on your system prior to release.

Developing a Good Acceptance Test – Requirements

The most important influencer of a good Acceptance Test (AT) is a set of good requirements. Without good requirements, you can’t know what to test the application against. Without requirements, how else would you know the set of expected features that the application or system must fulfill? (Let’s use the term system, since application sounds like a software-only thing, whereas we all develop systems with both hardware and software.)

I’ve seen requirements ranging from lists of very detailed items to broad features. The very detailed lists, such as “the XML command must contain the following elements”, are especially important when your system needs to interface with other systems being defined elsewhere or you need to deliver specific functionality to your end-users. A list of broad features, such as “save the acquired data from all channels”, works well when you are the end-user or the end-user has flexible needs.

When to Develop the Acceptance Test Plan

Once the requirements are known (and I’d also say the corresponding design is done, so that you already know the requirements are achievable), write the Acceptance Test Plan (ATP). Having an ATP before implementation begins will get you thinking about steps and features you may need to include in the system to enable testing. There’s more discussion below on the reason for waiting until design is done before starting to think about the ATP.

Unit Testing

When performing an AT on a system, a successful outcome requires proper operation of a lot of internal system components. Testing these internal pieces is called Unit Testing. Unlike the AT, unit testing subjects (chosen) internal components to a detailed range of inputs and verifies each component’s outputs.
Formal unit testing is uncommon in typical LabVIEW development, but all developers do unit testing to some extent. For example, most people at least manually test a VI prior to using it. But few create a formal set of unit testing procedures.
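Since LabVIEW VIs are graphical, a text sketch of the idea is easier in Python. Here is a minimal example of what a formal unit test looks like: a low-level component (the hypothetical `scale_reading` conversion function is my invention for illustration) is exercised over a range of inputs, independent of the rest of the system, and its outputs are verified.

```python
import unittest

def scale_reading(raw, gain, offset):
    """Hypothetical low-level component: convert a raw ADC count
    to engineering units. This is the kind of small, well-bounded
    algorithm that unit testing handles well."""
    return raw * gain + offset

class TestScaleReading(unittest.TestCase):
    """Subject the component to a range of inputs and verify outputs."""

    def test_nominal(self):
        self.assertAlmostEqual(scale_reading(100, 0.5, 1.0), 51.0)

    def test_zero_raw(self):
        self.assertAlmostEqual(scale_reading(0, 0.5, 1.0), 1.0)

    def test_negative_offset(self):
        self.assertAlmostEqual(scale_reading(10, 1.0, -2.0), 8.0)

# Run with: python -m unittest <this_file>
```

The value of writing the checks this way, rather than probing the VI by hand, is that the whole set can be re-run automatically every time the component changes.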

Does system development always benefit from formal unit testing? In my opinion, no, except in certain situations. In a software-only environment, it is at least possible to control the inputs and outputs of the low-level software algorithms, which makes those components good candidates for unit tests.

But higher level code is harder to unit test because the combinatorial count of possible paths through the code becomes a huge number to support. Furthermore, when independent loops and event-driven techniques are used, unit testing is challenged to capture all possible scenarios. Finally, when hardware is included, unit tests confront components that don’t act exactly the same every time, making verification more complicated.

Perhaps the best reason to use unit tests is that a collection of them can be run automatically against components of a system, so the whole set can be replayed often as the system changes. In other words, you can automate some of the system testing. Implementing unit tests takes time, so the extra effort is best spent on components that will be reused, where the time savings from automated testing compensate for the additional time spent in unit test development.

See http://en.wikipedia.org/wiki/Unit_test for more discussion.

System Testing and the Traceability Matrix

Once the system has been developed and unit tested as much as makes sense, the AT can be executed. A good ATP needs complete coverage of all the requirements. The mapping between ATP steps and requirements can be 1-to-1, 1-to-many, or many-to-1, depending on the way the requirements are written and the way the system design is implemented.

For example, if the requirement is written as “the XML message needs to contain the elements X, Y, and Z with values A, B, and C respectively for ramping the voltage from 0 to 5 V over 3 seconds”, then the ATP could be written in several ways depending on the implementation of the system. Some examples:

Here’s a 1-to-1:

  • Run the ‘Voltage Ramp’ test step from 0 V to 5 V over a 3 second period and verify that the XML message has the elements X, Y, and Z with values A, B, and C. (1-to-1)

Here’s a many-to-1:

  • Open the ‘Set Start Voltage’ configuration screen and set the voltage to 0 V. Verify that the XML message has the element X with value A.
  • Open the ‘Set End Voltage’ configuration screen and set the voltage to 5 V. Verify that the XML message has the element Y with value B.
  • Open the ‘Set Ramp Rate’ configuration screen and set the duration to 3 s. Verify that the XML message has the element Z with value C.

In this example, the first item relies on the fact that there is a test step that sends the entire message at once. The second set of items deals with a system design that lets the user configure individual elements of the voltage ramp. Thus, in the end, the ATP steps must cover the requirements, but the specific ATP step(s) chosen to cover a particular requirement reflect(s) the design of the system as well, which is why the design should be done prior to writing the ATP.
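The “verify that the XML message has the elements X, Y, and Z” check in these steps can itself be automated. Here is a Python sketch, assuming a hypothetical message layout (element names and values taken from the example above; the real message format would come from your interface definition):

```python
import xml.etree.ElementTree as ET

# Hypothetical message produced by the 'Voltage Ramp' test step.
message = "<ramp><X>A</X><Y>B</Y><Z>C</Z></ramp>"

def verify_ramp_message(xml_text, expected):
    """Check that each required element is present with its expected
    value; return a list of failure descriptions (empty = pass)."""
    root = ET.fromstring(xml_text)
    failures = []
    for element, value in expected.items():
        node = root.find(element)
        if node is None:
            failures.append(f"missing element {element}")
        elif node.text != value:
            failures.append(f"{element}: expected {value!r}, got {node.text!r}")
    return failures

print(verify_ramp_message(message, {"X": "A", "Y": "B", "Z": "C"}))  # → []
```

Returning a list of failures, rather than stopping at the first one, gives the ATP record a complete picture of what went wrong in a failing step.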

By writing the requirements in the leftmost column of a table and each ATP step at the top of a column, you can trace coverage by marking a cell of the table (matrix) when the ATP step validates a specific requirement. When every row of the matrix has one or more marks, each requirement can be traced to one or more ATP steps.
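The coverage check on such a matrix is mechanical, so it can be scripted. A minimal sketch, with invented requirement and step names, representing each ATP step’s column as the set of requirements it marks:

```python
# Requirements are the rows of the matrix; each ATP step is a column,
# represented here as the set of requirements that step marks.
requirements = ["REQ-1 ramp voltage", "REQ-2 save data", "REQ-3 report faults"]
atp_steps = {
    "ATP-1": {"REQ-1 ramp voltage"},                      # 1-to-1
    "ATP-2": {"REQ-2 save data", "REQ-3 report faults"},  # 1-to-many
}

def uncovered(requirements, atp_steps):
    """Return the requirements whose row has no marks, i.e. the ones
    no ATP step traces to."""
    covered = set().union(*atp_steps.values())
    return [r for r in requirements if r not in covered]

print(uncovered(requirements, atp_steps))  # → []
```

An empty result means every requirement traces to at least one ATP step; anything listed is a coverage gap to close before release.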

Check for Changes

Many years ago, I learned an important lesson when developing systems subject to FDA regulations. It’s not enough to check that the system covers the requirements. It is also very important to validate the system against the chance that a requirement is covered in unexpected and invalid ways.

You never want the system to create a false positive.

As a simple example, suppose you run the 1-to-1 ATP step above and find that the system runs correctly. You might then run the same step with values of 2 V to 5 V over 3 seconds and find that the XML has the same content as before, suggesting that the 2 V input is being ignored and the default value of 0 V is always being used. Thus, a good ATP varies inputs over a range to ensure that an expected output is not simply a fluke.
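That false-positive check is easy to express in code: run the step at two different inputs and confirm the outputs actually differ. A sketch, with a hypothetical `ramp_message` standing in for the system under test:

```python
def ramp_message(start_v, end_v, duration_s):
    """Hypothetical stand-in for the system under test. A correct
    implementation must reflect its inputs in the message it emits."""
    return f"<ramp><X>{start_v}</X><Y>{end_v}</Y><Z>{duration_s}</Z></ramp>"

# Run the same ATP step over two input values; if the output is
# identical, the input is probably being ignored (a false positive).
baseline = ramp_message(0, 5, 3)
varied = ramp_message(2, 5, 3)
assert varied != baseline, "2 V input ignored -- possible false positive"
```

Checking that outputs change when inputs change is cheap insurance against a step that “passes” for the wrong reason.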

Check the Faults Too

For robustness, the ATP should also check failure behavior. For example, what happens when the device handling the voltage ramp XML is powered off? It’s important that the ATP covers the situations where the requirements cannot be met due to atypical operating conditions. Turning this observation around, the requirements must define the system behavior in these atypical conditions. What should the system do when power goes out, or a device fails, or the data rates are too high? The ATP should validate the required behaviors in fault conditions too.
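A fault-path ATP step asserts on the required failure behavior, not just the happy path. A minimal sketch, assuming the requirement is “report a clear error when the ramp device is unreachable” (the `send_ramp` function and `DeviceOffline` error are hypothetical):

```python
class DeviceOffline(Exception):
    """Raised when the ramp device cannot be reached."""

def send_ramp(device_online):
    """Hypothetical sender. The required behavior is a clear error,
    not a silent success, when the device is powered off."""
    if not device_online:
        raise DeviceOffline("ramp device unreachable")
    return "sent"

# ATP fault step: power the device off and verify the required behavior.
try:
    send_ramp(device_online=False)
    result = "no error"  # silent success here would be a defect
except DeviceOffline:
    result = "fault reported"

print(result)  # → fault reported
```

The point is that the fault case has an explicit expected outcome written into the ATP, so “it didn’t crash” is never mistaken for a pass.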

Dealing with Hardware

Both the “check for changes” and “check for faults” can be interesting to implement when hardware is part of the system.
For example, when testing the system against requirements that depend on hardware, you need to show that changes in outputs (control signals) to a hardware device or inputs (measurements) from a hardware device are as expected. And when fault conditions arise in hardware, the system needs to be able to respond appropriately.
It can be difficult to handle these conditions because 1) the atypical conditions might destroy the hardware and 2) you might need to destroy the hardware to create an atypical condition.
Usually, there are two approaches. First, the hardware is simulated in software, and the simulation responds with faults or injects faults back into the system. Second, the actual hardware offers ways to disconnect physical signals and route them to hardware that handles the atypical situations.
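The first approach, simulating the hardware with a fault-injection hook, might look like this Python sketch (the class, fault names, and readings are all invented for illustration):

```python
import math

class SimulatedRampDevice:
    """Software stand-in for the real hardware: nominal readings by
    default, plus a hook to inject faults that would be destructive
    (or impossible) to create on the real device."""

    def __init__(self):
        self.fault = None

    def inject_fault(self, fault):
        self.fault = fault

    def read_voltage(self):
        if self.fault == "open_circuit":
            return float("nan")  # broken cable: no valid reading
        if self.fault == "power_loss":
            raise IOError("device not responding")
        return 5.0  # nominal reading

sim = SimulatedRampDevice()
sim.inject_fault("open_circuit")
print(math.isnan(sim.read_voltage()))  # → True
```

Because the simulation swaps in behind the same interface the real device uses, the fault-path ATP steps can run without risking (or requiring) damaged hardware.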
Viewpoint makes breakout boards for this second approach for several NI DAQ (e.g., 68 pin) and other common header connections (e.g., 9 and 37 pin). Many customers and our engineers find these handy for performing complete ATPs (in addition to debugging the system!).

Summary

A system (i.e., a hardware and software conglomeration) needs good requirements to develop an effective Acceptance Test (AT). Plus, these requirements need to describe how the system should react to atypical situations (such as power outage or broken cables) and to failure modes of both system components and subsystems to which the system connects. A complete Acceptance Test Plan (ATP) covers these requirements in ways that are compatible with the system design, making it necessary to complete the system design before developing the ATP. In the end, the ATP should be written to show that your system works as expected when things go well and not so well. If you’d like help with your test system, you can reach out here. If you’d like more useful info on automated test systems, check out our resources page.

2018-04-26T13:20:29+00:00