2013-03-17 Note: I started writing this post months ago, and was about to fill in the test report sections when I discovered that pytest-moztrap was not actually working against the production MozTrap. Getting it fixed involves having truly stand-alone test cases, rather than cases that only work against one of the environments because their pre-conditions must already exist. Having stand-alone test cases depends upon MozTrap having a CRUD API. That is what I have been working on for the past few months. pytest-moztrap will get fixed once I am done. In the meantime, I'm publishing this post because the rest of its content is relevant even before the tool works.
I have not been a QA Manager, but I've certainly been asked by various QA Managers "How much of the testing have you finished for this release?" Mozilla needs to coordinate between staff members writing test cases, volunteers running them on a wide variety of platforms, and reporting the results to management, all for a number of projects. To meet this need, Mozilla created MozTrap.
As a test automator, I am also interested in statistics like:
- How many test cases can be automated?
- How many of the test cases have been automated?
Neither DOM-inspecting nor bitmap-comparing test frameworks can adequately test every case in a project. Some tests simply require a human eye, and others are virtually impossible for a person to perform without automated tools. Knowing how many test cases there are and what resources you have for running them is an essential part of test schedule planning.
- Did the automation really run?
At one former workplace, the nightly test run was set up to send email to the dev team if there were failures. After three occasions of discovering that it had stopped running entirely while the developers assumed all was well, I changed the setup to always mail me the results. My record for noticing when they stopped arriving was not perfect, but it was an improvement. An automated mechanism that expected a message within the past 24-48 hours would have been better.
- Are there any patterns to the results of the automation?
Jenkins is great at providing green/red, good/bad indicators, but it won't tell you whether it's the same test failing each time, or whether one particular environment is flaky.
- Are the manual testers expending energy on test cases that are already covered by automation?
While some duplication may discover UI bugs that would otherwise go unnoticed, directing manual testers toward things not covered by automation is a better use of their time.
The Proposal
In January of this year, Dave Hunt made a proposal for a py.test (automation framework) plugin to talk to MozTrap (then called CaseConductor), and provided a spike for the project. In code reviews, he had started asking people to mark automated test cases using this approach, but the plugin hadn't been hooked up yet.
It was a project awaiting my attention. When I approached him in late August about working on this project, he gave me his blessing to fork and proceed, as there was no time available for him to work on it.
Implemented Features:
- I extended the moztrap-connect API library.
- I translated between py.test statuses and MozTrap statuses, including xpass, xfail, and skipped.
- I made sure that if the same test was run more than once, the most relevant result was reported, such as with parameterized tests.
- I included the AssertionError, skip reason, or xfail reason in the result's notes field.
- I ensured reporting would work in concert with pytest-xdist's -n option.
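The status translation and "most relevant result" behavior above can be sketched roughly as follows. This is a hypothetical illustration, not the actual pytest-moztrap code: the relevance ordering and the mapping of skipped results to MozTrap's invalidated status are my assumptions.

```python
# Order py.test outcomes from least to most relevant: if a parameterized
# test runs several times, a single failure should win over any number
# of passes or skips.
RELEVANCE = ["skipped", "passed", "xfailed", "xpassed", "failed"]

# Assumed translation from py.test outcomes to MozTrap statuses.
PYTEST_TO_MOZTRAP = {
    "passed": "passed",
    "failed": "failed",
    "skipped": "invalidated",  # skip reason goes in the notes field
    "xfailed": "passed",       # an expected failure counts as a pass
    "xpassed": "failed",       # an unexpected pass is suspicious
}


def most_relevant(outcomes):
    """Pick the single outcome to report for a test that ran several times."""
    return max(outcomes, key=RELEVANCE.index)
```

For example, `most_relevant(["passed", "failed", "passed"])` yields `"failed"`, which is then translated through the mapping before being sent to MozTrap.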
Un-implemented Features:
- A link to the MozTrap results has not been added to the HTMLReport generated by pytest-moztrap.
- No coverage report has been generated.
- No marker has been added to MozTrap to indicate that a test case has been automated. Use of Tags might be appropriate.
The project was not without its trials. I had not intended to check it in as one huge commit, but the thing I should have changed first (hard-coded credentials) I didn't actually fix until late in the game, and squashing the commits was a better strategy than editing each commit in turn.
Command Line Options
$ py.test --help
moztrap:
--mt-url=url url for the moztrap instance. (default:
moztrap.mozilla.org)
--mt-username=str moztrap username
--mt-apikey=str Ask your MozTrap admin to generate an API key in the
Core / ApiKeys table and provide it to you.
--mt-product=str product name
--mt-productversion=str
version name
--mt-run=str test run name
--mt-env=str test environment name
--mt-coverage show the coverage report. (default False)
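For the curious, an option group like the one above is registered through py.test's `pytest_addoption` hook. The following is a minimal sketch of that pattern, not the plugin's actual source, and it shows only a few of the options:

```python
def pytest_addoption(parser):
    """Register a 'moztrap' option group (illustrative subset only)."""
    group = parser.getgroup("moztrap")
    group.addoption("--mt-url", dest="mt_url",
                    default="moztrap.mozilla.org",
                    help="url for the moztrap instance")
    group.addoption("--mt-username", dest="mt_username",
                    help="moztrap username")
    group.addoption("--mt-apikey", dest="mt_apikey",
                    help="MozTrap API key")
```

py.test collects these hooks from installed plugins, which is why the options show up under their own heading in `py.test --help`.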
MozTrap Results Report
Verbose MozTrap Results Report
In any case, as of late September, pytest-moztrap is available at https://github.com/klrmn/pytest-moztrap/.
I ask the Mozilla community: what additional work does it need in order to become part of the workflow?