While unit tests are important, they alone can't complete the testing effort, because they don't verify that the units are well integrated. By measuring the code coverage exercised by selenium (or other integration / system) tests, you would get a better idea of the health of the entire system. In that post I also mentioned that one of the reasons it's hard to get developers to run selenium tests as part of the Continuous Integration process is that they take so long. When I have spare minutes, I've been looking into the feasibility of running a code coverage report on a project's selenium tests. I've run into a number of issues, but nothing unsolvable.
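To make the idea concrete, here's a minimal sketch of measuring which lines a test exercises, using the stdlib `trace` module as a stand-in for coverage.py (the `view_homepage` "view" and the `test_browse_homepage` "selenium test" are hypothetical placeholders, not real project code):

```python
import trace

# Hypothetical application code that a selenium test would exercise
# end-to-end; in a real django project this would be a view function.
def view_homepage():
    return "<html>welcome</html>"

# Hypothetical stand-in for a selenium test driving a browser.
def test_browse_homepage():
    assert "welcome" in view_homepage()

# Run the "test" under the tracer, counting executed lines.
tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(test_browse_homepage)

# results().counts maps (filename, lineno) -> execution count,
# which is the raw data a coverage report is built from.
executed = sorted(lineno for (fname, lineno) in tracer.results().counts)
print(executed)
```

In a real setup the application server and the selenium client are separate processes, so the server itself would need to be launched under the coverage utility, which is one of the issues alluded to above.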
At the same time, running every single test for every single change either demands more and more hardware to run tests on, or lengthens the feedback cycle.
At a former employer, the collection of unit tests had gotten so big that it was no longer feasible for a developer to run all of the tests locally (sequentially) before checking in. They developed a 'run relevant tests' mechanism: a weekly process determined which unit tests exercised which code, and the resulting database was made available so that developers could run only the relevant tests before checking in (after which their code would be run in CI against the full unit test suite in parallel).
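The selection logic at the heart of such a mechanism is small; here is a sketch in which the weekly job's output is hard-coded as a dict (the file and test names are invented for illustration):

```python
# A weekly job would run each test under coverage and record which
# source files it executed. Here that mapping is hard-coded with
# hypothetical file and test names.
test_to_files = {
    "test_checkout": {"cart/models.py", "cart/views.py", "payments/api.py"},
    "test_signup":   {"accounts/models.py", "accounts/views.py"},
    "test_homepage": {"core/views.py"},
}

def relevant_tests(changed_files, mapping):
    """Return the tests whose recorded coverage touches any changed file."""
    changed = set(changed_files)
    return sorted(t for t, files in mapping.items() if files & changed)

# A diff touching only the cart app selects only the checkout test.
print(relevant_tests(["cart/models.py"], test_to_files))  # prints ['test_checkout']
```

The mapping is necessarily stale between weekly refreshes, which is the trade-off that makes the full suite in CI still necessary.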
It occurred to me this morning to leverage a 'run relevant tests' mechanism to run selected selenium tests as part of CI.
This idea would need the following implementation layers:
- ability to run the selenium suite under the coverage utility
- selenium tests that run equally well on an un-provisioned development machine and the production instance
- ability to measure coverage on a per-test-file basis
- [environment] a (virtual) machine capable of running both the application under test (with its associated back-end) and a graphical UI with a common browser
- a database with API to track the correspondence
- ability of the coverage tool to talk to the database
- a test-runner plugin that would consult with the database to determine what tests to run for a given diff
Have any of these pieces already been built (say, for a python/django environment)? Has the entire thing been built, and I just haven't heard about it?