Wednesday 4 June 2008

Why XFAIL?

A couple of weeks ago I implemented a feature called --XFAIL-- in the PHP test runner (run-tests.php). The idea was not mine; it was Pierre's. I admit I had a few reservations when he first suggested it, but I convinced myself that it might be useful. There was also some helpful discussion on the PHP QA list.



In this post I'll explain what I have done and give a couple of instances in which I think it might be of some use.



In the following example I have added an XFAIL section to a test called cos_basic1.phpt in ext/standard/tests/math:



--XFAIL--
Expected to fail because I've messed with expected output to make it fail

I have also messed with the expected output to ensure that the test really does fail.
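
To show where the section sits, here is a sketch of what the modified test might look like. This is an illustration of the .phpt layout rather than the literal contents of cos_basic1.phpt, with a deliberately wrong --EXPECT-- value standing in for my real edit:

--TEST--
Test return type and value for expected input cos()
--XFAIL--
Expected to fail because I've messed with expected output to make it fail
--FILE--
<?php
// var_dump() shows the return type and value for a simple known input
var_dump(cos(M_PI));
?>
--EXPECT--
float(99)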

When I use run-tests.php to execute all the tests in the math directory, the final section of the report looks like this:



=====================================================================
Number of tests :  110               109
Tests skipped   :    1 (  0.9%) --------
Tests warned    :    0 (  0.0%) (  0.0%)
Tests failed    :    0 (  0.0%) (  0.0%)
Expected fail   :    1 (  0.9%) (  0.9%)
Tests passed    :  108 ( 98.2%) ( 99.1%)
---------------------------------------------------------------------
Time taken      :    1 seconds
=====================================================================

=====================================================================
EXPECTED FAILED TEST SUMMARY
---------------------------------------------------------------------
Test return type and value for expected input cos() [math/cos_basic1.phpt]
=====================================================================

The test cos_basic1.phpt fails and the usual .out, .exp, etc. files are generated; the only difference is in the way that the failure is reported. There is a new line in the summary data (Expected fail:) and a new section called EXPECTED FAILED TEST SUMMARY.


The intention of XFAIL is to help people working on developing PHP. Consider first the situation where you (as a PHP implementer) are working through a set of failing tests. You do some analysis on one test but can't fix the implementation until something else is fixed; you don't want to lose the analysis, and it might be some time before you can get back to the failing test. In this case I think it's reasonable to add an XFAIL section with a brief description of the analysis. This takes the test out of the list of reported failures, making it easier to see what is really on your priority list, while still leaving it as a failing test.
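
In that case the XFAIL section is really just a short note of the analysis, something along these lines (the wording here is hypothetical, just to show the idea):

--XFAIL--
Fails until the underlying implementation bug is fixed; the analysis so far shows
that the test and its expected output are correct, so the fix belongs in the
implementation rather than in the test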


The second place where I can see XFAIL might be useful is when a group of people are working on the same development project. One person on the project finds a missing feature or capability, but it isn't something they can add immediately, or perhaps another person has agreed to implement it. A really good way to document the omission is to write a test case that is expected to fail but will pass once the feature is implemented. This assumes that there is general agreement that the implementation is a good idea and needs to be done at some stage.
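
To make that concrete, here is an entirely made-up example (my_new_function() doesn't exist; I'm just showing the shape of such a test). It fails today because the function is undefined, and it will start passing as soon as someone implements the agreed behaviour:

--TEST--
my_new_function() returns the agreed result for a simple input
--XFAIL--
my_new_function() is not implemented yet; this test documents the agreed behaviour
--FILE--
<?php
// Agreed (hypothetical) behaviour: my_new_function() doubles its argument
var_dump(my_new_function(2));
?>
--EXPECT--
int(4)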


Both of these situations have more to do with what is useful for a developer than a tester, so XFAIL is probably not a feature that I'll use much myself. One person also raised the possibility that this functionality is already covered by the SKIPIF section. I don't think it is; the distinction is simply that something in a SKIPIF section is never expected to work in that environment, like some of the file system tests on Windows. I also can't think of a good reason for there ever to be XFAILing tests in released code, whereas we often use SKIPIF sections in released levels of PHP.
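
For contrast, this is the sort of thing that belongs in a SKIPIF section. It's the standard skip pattern (a sketch, not taken from any particular test) for a file system test that is never expected to work on Windows:

--SKIPIF--
<?php
// Skip on Windows: this behaviour is never expected to work there
if (substr(PHP_OS, 0, 3) == 'WIN') {
    die('skip not valid on Windows');
}
?>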


The XFAIL feature is only implemented in PHP 5.3 and PHP 6. It's documented in the usual place.