
Monday 16 April 2012

In the face of failure - part 1

Sometimes we fail to achieve what we have set our minds to. That is not always a bad thing, as we might learn something from the process of failing. The failure itself may require us to change our perspective or our approach to the task at hand. In this first part I cover what happens when a test fails. In the second part I'll look into people failing in their efforts to do something. Throughout, I try to offer insights and learning opportunities on both subjects.

When a test fails (or doesn’t fail)…

As testers we do tests (DUH!). A test may be a mission, a scenario, a flow, a check: basically any amount of work done in order to achieve some testing goal. It may also be a check conducted by a machine of some kind (a test automation script, you name it). The point is that there is a test, and we may or may not have assertions (assumptions, expectations) regarding its outcome. There may be tests whose outcome we have no idea about ("What happens if I press this blank button in the GUI?"), but even those may result in a future assertion regarding the same test object.

Tests may fail or they may pass. That is the binary nature of a test. They may, however, produce a whole other class of results, for example "indefinable", "false negative", "false positive", or "what the bloody hell is that?". What happens if a test results in a "test passed"? What does it mean? Can it still fail? Can it mean something more?
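As a minimal sketch of this non-binary reality, consider the hypothetical Python unittest suite below. The test object (`parse_amount`) is invented for illustration. Note how the runner distinguishes a failure (an assertion that does not hold) from an error (the test blowing up before any assertion is evaluated), which is exactly the "what the bloody hell is that?" kind of result that deserves analysis.

```python
import unittest

def parse_amount(text):
    # Invented test object: parses "12.50 EUR" style strings.
    value, currency = text.split()
    return float(value), currency

class ParseAmountTests(unittest.TestCase):
    def test_valid_amount(self):
        # Outcome: PASS, the assertion holds.
        self.assertEqual(parse_amount("12.50 EUR"), (12.5, "EUR"))

    def test_wrong_expectation(self):
        # Outcome: FAIL, the assertion does not hold.
        self.assertEqual(parse_amount("12.50 EUR"), (1250, "EUR"))

    def test_unanticipated_input(self):
        # Outcome: ERROR, not FAIL; the test object raises ValueError
        # before the assertion is ever evaluated.
        self.assertEqual(parse_amount("free"), (0.0, ""))

if __name__ == "__main__":
    unittest.main()
```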

A test fails because the test object doesn't pass the assertion

This is what we want to happen when a test fails: the test fails because there is something broken in the test object. But by wanting this to be the case, we might close our eyes to something important. The result simply states that the test object does not pass the assertions, for reasons unknown.

We think that the problem lies in the test object, but are we sure? We need to eliminate the "think" and move towards the "know". We must analyse the test itself to determine whether the result is in fact correct. The result may be a false negative, so we need to determine whether the test itself is faulty. We may be testing the wrong thing or asserting the wrong things.
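As a hedged sketch of that analysis (both functions below are invented stand-ins): the test fails not because the test object is broken, but because the expectation encoded in the assertion is wrong. Reading this failure as a product defect would be the false negative.

```python
def vat_inclusive_price(net_price):
    # Invented test object: adds 24% VAT to a net price.
    return round(net_price * 1.24, 2)

def test_vat_inclusive_price():
    # This assertion encodes a stale expectation (an old 23% rate),
    # so the test FAILS even though the test object is correct.
    # The defect is in the test, not in the product: a false negative.
    assert vat_inclusive_price(100.0) == 123.0

if __name__ == "__main__":
    try:
        test_vat_inclusive_price()
        print("pass")
    except AssertionError:
        print("fail: now analyse the test as well as the test object")
```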

After we have determined that the test object in fact is not built to match the test, we must ask: "have we built our test incorrectly?" The answer may yield information about the behaviour of the test object just as well as about the behaviour of the TEST! The behaviour may be incorrect in both cases, but the newly found behaviour may be the thing the customer/user/stakeholder (the one who is important enough) wanted or truly needed. The current behaviour may also be a better solution than the intended/planned one. In any case, this information must be revealed with more testing and analysing.

The test result may also uncover a risk that was not taken into account or was ignored before. This raises important questions about the test object and about the processes of development and testing. If there is a hole in our risk mitigation strategy, could there be a need to revise the process or processes? Could there be more areas that we have not yet covered?

Lastly, we may need to re-test some parts of the test object, as the unwanted behaviour may need to be fixed (or the test, if we so choose). In any case the test requires revising, possibly some fixing and definitely more analysing. Do we need more tests to cover the area where the behaviour was found? Do we need to revise MORE tests? Is the test extensive enough to remain feasible after the fix?

False negatives and positives

In a false positive/negative case we have already done some analysis to reach the conclusion that the test gave faulty information. Both cases may reveal something important about the test object, but more than that, they tell us that there is something wrong with the testing itself. Do we have enough information about the test object to be making statements based on the test results? Do we need to do more research on the test object to make our tests better? Have we missed something important in the process? These results always require analysis of why the tests give false results in the first place.

A false positive is basically a situation where we think that the test object is behaving the way we expect it to. The result is corrupted by defects in the test itself, and thus the test gives a "pass" result. It could be that the test is concentrating on the wrong area or function, thus giving false information. Therein could lie a risk that we have not covered a critical part of the product with sufficient testing. The assertion in the test itself could be missing or not strict enough ("Check if there is a response of any kind." -> SOAP error message). To correct this situation, usually both the test and the test object need to be analysed and possibly fixed.
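To make the SOAP example concrete, here is a hedged sketch; the response is hard-coded where a real test would fetch it from a service. The weak assertion passes as long as any response comes back, even a SOAP Fault, while the stricter version actually inspects the payload.

```python
# Hypothetical response; in a real system this would come over HTTP.
# A SOAP Fault is still "a response of some kind".
response = """<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Server</faultcode>
      <faultstring>Internal error</faultstring>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>"""

# Weak assertion: "Check if there is a response of any kind."
# This PASSES even though the service returned a fault: a false positive.
assert response

# Stricter assertion: the response must not contain a SOAP Fault.
# This FAILS, which is the information the weak test was hiding.
assert "Fault" not in response, "service returned a SOAP Fault"
```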

A false negative is a situation where the test gives a falsely negative result. Almost everything said about the false positive case applies here as well. We may need to back up the result with additional tests to rule out the possibility that the test object really is behaving incorrectly, as sketched below. We may be lacking in skill, we may have ignored something, or there may simply be a defect in the assertions.
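One way to back up a suspicious failure, sketched here with invented functions, is to cross-check the test object against an independent oracle. If the object agrees with a second, differently built computation across many inputs, suspicion shifts towards the original test's assertions.

```python
def discount_price(price, percent):
    # Invented test object under suspicion after a failing test.
    return price * (1 - percent / 100)

def discount_price_oracle(price, percent):
    # Independent oracle: the same behaviour computed a different way.
    return price - price * percent / 100

# Cross-check over a range of inputs. If the two agree everywhere,
# the earlier failure more likely points at a defective assertion
# (a false negative) than at the test object itself.
for price in (0.0, 9.99, 100.0, 250.0):
    for percent in (0, 10, 24, 50):
        assert abs(discount_price(price, percent)
                   - discount_price_oracle(price, percent)) < 1e-9
print("oracle agrees: re-examine the original test's assertions")
```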

Tests always reveal something, and it may be important

By doing testing we uncover information. The goal of testing is to give the decision-makers enough knowledge to make the right decisions. Even a poor test can reveal important information. If critical information is revealed at this stage (early or late), have we done something wrong in the previous stages? Is the poorly constructed test a waste? Do we need to construct better tests, or are we just aiming for fast results at this stage?

It could also be that because of some flaw in some process we have stumbled upon the wrong area to test. It could be that there are communication problems, ignorance or some other reason why we are testing an area we are not supposed to test. It could be that the feature is still under development, the environment is not ready, etc. Did the testing we did provide any value, or was it waste? Do we value learning? Did we learn anything about the test object?

Even if we have the most beautifully constructed tests and the test object is in good shape, we may encounter problems if we do not know how to interpret the results of the tests. Results sometimes contain false results (positives/negatives) that must be uncovered and examined, so that we have correct information about the test object at all times. The one interpreting the results (the tester during testing, a test automation specialist, etc.) should have enough competence and critical thinking to question the results. We may ignore all "pass" results and just focus on the "fail" ones, leading to false information.

We may think we know enough about a defect. We did find it, didn't we? By analysing the failure itself and its root cause, we can uncover more information about the test object and its surroundings. By questioning the test results just as we question the test object, we may reveal information about our testing methods, processes, tools, etc. and be able to improve our testing.

Read also part 2.
