Thursday, March 13, 2014

secure ldap on centos

In the unlikely event that you were wondering how to fix the fact that the LDAP client can't talk to the server using ldaps / STARTTLS even though you added the CA cert using certutil: you need to create /etc/openldap/cacerts and add the cert there instead of to /etc/openldap/certs.
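For reference, the client-side setting that controls this lives in /etc/openldap/ldap.conf. A minimal sketch (the TLS_CACERTDIR directive is real; the value is just the directory from this post):

```
# /etc/openldap/ldap.conf
# Point the LDAP client at the directory holding the CA cert(s).
TLS_CACERTDIR /etc/openldap/cacerts
```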

Friday, January 10, 2014

From nose to testr: Output Capture

The thing that drove me most crazy when I was trying to write nose-testresources (a plugin for nose that would put ResourcedTestCases inside OptimisedTestSuites) was that I lost nose's default behaviour of capturing log output and stdout. I put a lot of work into logging information that would help me troubleshoot failures, and was flailing without it.

I talked to Robert Collins, the primary maintainer of this chain of tools, and he suggested I use testtools.TestCase, which supports useFixture(), together with python-fixtures, which includes FakeLogger. Now, this transition turned out to be a little bit sticky - both testtools.TestCase and testresources.ResourcedTestCase inherit from unittest.TestCase, so multiple inheritance should have worked, but I was running into a strange problem where resources were never being cleaned up. It turned out that testtools.TestCase calls setUp() but doesn't call tearDown() - it uses addCleanup() for that purpose. As part of the debugging process for this issue, I axed the multiple inheritance and just created a ResourcedTestCase that follows testtools.TestCase norms as follows:


from testresources import setUpResources, tearDownResources, _get_result
import testtools
import fixtures

class ResourcedTestCase(testtools.TestCase):
    """Inherit from testtools.TestCase instead of unittest.TestCase in order
    to have self.useFixture()."""

    # Subclasses declare their [(name, ResourceManager), ...] pairs here.
    resources = []

    def setUp(self):
        # capture output
        FORMAT = '%(asctime)s [%(levelname)s] %(name)s %(lineno)d: %(message)s'
        self.useFixture(fixtures.FakeLogger(format=FORMAT))
        # capture stdout
        stdout = self.useFixture(fixtures.StringStream('stdout')).stream
        self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
        # capture stderr
        stderr = self.useFixture(fixtures.StringStream('stderr')).stream
        self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
        super(ResourcedTestCase, self).setUp()
        setUpResources(self, self.resources, _get_result)
        self.addCleanup(tearDownResources, self, self.resources, _get_result)


With this new base class, my tests now capture logging, stdout, and stderr under both testr and nose, with and without nose-testresources.
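For the curious, what FakeLogger does under the hood is roughly the following stdlib-only analogue (a sketch of the technique, not the fixtures implementation): attach a handler backed by an in-memory stream to the root logger for the duration of a test, then detach it.

```python
import io
import logging

class LogCapture:
    """Rough stdlib analogue of fixtures.FakeLogger (for illustration only)."""

    def __init__(self, fmt):
        self.stream = io.StringIO()
        self.handler = logging.StreamHandler(self.stream)
        self.handler.setFormatter(logging.Formatter(fmt))

    def __enter__(self):
        root = logging.getLogger()
        self._old_level = root.level
        root.addHandler(self.handler)
        root.setLevel(logging.DEBUG)
        return self

    def __exit__(self, *exc):
        root = logging.getLogger()
        root.removeHandler(self.handler)
        root.setLevel(self._old_level)

    def output(self):
        return self.stream.getvalue()

with LogCapture('[%(levelname)s] %(name)s: %(message)s') as cap:
    logging.getLogger('demo').info('hello')
print(cap.output())
```

fixtures.FakeLogger does the attach/detach as a fixture with addCleanup(), which is exactly why it composes so well with testtools.TestCase.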

Thursday, January 9, 2014

From nose to testr: More Flexible Test Discovery

testrepository requires a results stream in subunit format, and the Python implementation of subunit is built on testtools, which uses the standard unittest.TestLoader. This loader will let you discover tests from one specific directory, or run tests from multiple fully specified test modules. I am accustomed to nose, which lets you specify an arbitrary number of directories and modules, so I modified the TestLoader to bend it to my will.

import os
import unittest

class TestLoader(unittest.TestLoader):
    """Test loader that extends unittest.TestLoader to:

    * support names that can be a combination of modules and directories
    """

    def loadTestsFromNames(self, names, module=None):
        """Return a suite of all tests cases found using the given sequence
        of string specifiers. See 'loadTestsFromName()'.
        """
        suites = []
        for name in names:
            if os.path.isdir(name):
                top_level = os.path.split(name)[0]
                suites.extend(self.discover(name, top_level_dir=top_level))
            else:
                suites.extend(self.loadTestsFromName(name, module))
        return self.suiteClass(suites)
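To see the mixed-names behaviour in action, here is a self-contained sketch (the package and module names are invented for the demo, and the loader is recreated inline so the snippet runs on its own): it builds a throwaway test module on disk, then loads it once by directory and once by dotted module name in the same call.

```python
import os
import sys
import tempfile
import textwrap
import unittest

class MixedNamesLoader(unittest.TestLoader):
    """Same override as above, recreated so this sketch is standalone."""

    def loadTestsFromNames(self, names, module=None):
        suites = []
        for name in names:
            if os.path.isdir(name):
                top_level = os.path.split(name)[0]
                suites.extend(self.discover(name, top_level_dir=top_level))
            else:
                suites.extend(self.loadTestsFromName(name, module))
        return self.suiteClass(suites)

# Build a tiny 'tests' package containing one passing test.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, 'tests')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'test_demo.py'), 'w') as f:
    f.write(textwrap.dedent('''
        import unittest

        class Demo(unittest.TestCase):
            def test_ok(self):
                pass
    '''))

sys.path.insert(0, tmp)
# One directory name plus one dotted module name in the same call.
suite = MixedNamesLoader().loadTestsFromNames([pkg, 'tests.test_demo'])
print(suite.countTestCases())  # 2: once via discover, once via module name
```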

Then it just became a matter of borrowing from the subunit runner script and instantiating SubunitTestProgram with the new loader.

import sys

from subunit.run import SubunitTestProgram, SubunitTestRunner

SubunitTestProgram(module=None, argv=sys.argv, testRunner=SubunitTestRunner,
                   stdout=sys.stdout, testLoader=TestLoader())

From nose to testr: Rationale

When I started at SwiftStack, my first priority was to write as many automated tests as I could. I wanted a framework/runner that was already familiar to the developers, had good online support and community, and was unittest-compatible, because it was almost certain I'd have to change frameworks eventually. I chose nose.

Over the course of the next 6 months, I was a little bit too successful at writing automated tests. I arrived at the point where a full test run in the default environment took 13 hours, there was not enough time in the day to run the tests against all of the different environments, and I was unable to effectively test the effect of changes to the automation infrastructure before merging.

I needed to speed up the tests.

While there were some tests where the test itself was slow, the bigger problem was setup for the tests. The fixtures I needed for many of the tests were very expensive, and even if nose let me set them up on a per-class basis rather than a per-test basis, it was still adding time. Enter testresources, whose OptimisedTestSuite analyses the resources required by each test, orders them according to those resources, and shares resources across tests.
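As a toy illustration of the idea (this is not testresources' actual algorithm, just the intuition): if each test declares the resources it needs, ordering tests by their resource signature keeps tests that share an expensive fixture adjacent, so the fixture can be built once per group rather than once per test.

```python
# Toy illustration (not testresources' code) of ordering tests by the
# resources they declare, so shared fixtures are set up once per group.
def order_by_resources(tests):
    """tests: list of (name, frozenset_of_resource_names) pairs."""
    return sorted(tests, key=lambda t: sorted(t[1]))

tests = [
    ('test_upload', frozenset({'cluster'})),      # needs the big fixture
    ('test_login', frozenset({'db'})),            # needs only the database
    ('test_replicate', frozenset({'cluster'})),   # needs the big fixture too
]
for name, _resources in order_by_resources(tests):
    print(name)
# test_upload and test_replicate end up adjacent, so the 'cluster'
# resource is built once for both.
```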

I tried for quite some time to get testresources to work with nose...I wrote a nose plugin that allowed me to get by, but there were some distinct problems, and the primary maintainer for nose was not available (he has recently requested hand-off to a new maintainer). I eventually decided to go with what the OpenStack community had been telling me all along - that I should use the testrepository (testr) / subunit / testtools / testresources collection of tools.

There have been challenges. I'm still not done. But I've got insights to share, and more to come.


Tuesday, April 23, 2013

lessons learned from my first professional conference

I've attended a number of small Mozilla work-weeks, but I recently attended my first full-blown professional conference. I learned a lot of things, most of them not specifically about the conference's topic.

  • Conference facilities will have food I can eat about half the time, but that doesn't mean it will be good food.
  • Just because the better hotels charge for wifi doesn't mean their wifi is any good. Buy an air card for my laptop or a cellphone that supports tethering before my next conference.
  • Denny's can make pretty much the same breakfast food as the hotel, at half (or less) the price.
  • A lot of people prefer meeting someone in-person before working with them online. I do better if I have an opportunity to work with someone online before I meet them in-person (I remember personalities rather than faces).
  • Even though I'm an introvert, it's important for me to have people I (already) know to talk to. I was missing out on at least a couple of hours a day of interaction with my partner, and it really wreaked havoc on my mood.
  • I shouldn't be required to talk to strangers before breakfast...the day I attended a breakfast meeting, I couldn't pronounce the name of my company until 10.
  • Whether or not I have time to exercise in the morning has a huge effect on how allergies impact me. (Increased CO2 load dilates the sinuses.)
  • I need to re-arrange the linens on the bed so that I have sheets up to my chin, but blanket only up to my waist. Temperature control makes a big difference in how well I sleep.
  • 2500 people will overwork whatever cellular network capacity the venue has. Turn my phone to airplane mode to keep it from chewing through its battery in 2 hours rather than my usual 2 days.
  • Hotels don't have public spaces where people can sit and work together. Suggest the company get at least one multi-bedroom suite so people don't have to ask coworkers into their 'bedroom'.
  • Anything that keeps me away from my code for days running is going to make me grumpy. Find a reason to actually contribute to the open source project about which the conference is being held, at the conference. Try to enable other people to contribute too.
  • Company logo t-shirts are not enough insulation against hyperactive A/C. Foment a fashion revolution and demand a company logo jacket instead.
While it would have been good had I been able to participate more in the conference, sometimes this kind of learning can be even more important.

Sunday, April 21, 2013

announcing nose-selenium

The Mozilla automated web test team is pretty much standardized on using py.test to run selenium related tests and Dave Hunt has written pytest-mozwebqa for their use. The SwiftStack development team is standardized on nose, so rather than hard-coding webdriver details into my tests, I spent the weekend writing nose-selenium and just published it to pypi, to make it pip installable.

nose-selenium lets you specify whether you're running your tests locally, on a remote selenium server, on a selenium grid, or at Sauce Labs; which browser you want to use; which OS / browser version you want (if you're running on a grid or Sauce Labs); and set an implicit timeout. I recommend running 'nosetests --with-nose-selenium --browser-help' to get a list of which browsers, etc. are available in which environment.

Enjoy.

Sunday, April 7, 2013

Visualizing My Job

I could call it my 'Dream Job', except I feel like the process of getting it was very much a conscious one.

About six months ago I realized that while I knew a lot about what I didn't want in a job, that isn't a very good answer to the question "What are you looking for in a job?". I also observed that my technical education had been rather general, and while I knew some things about a lot of areas, I didn't know any one area with particular depth. This did not leave me in a position of strength when looking for a job outside of my prior core-competency in network management. I needed to improve my skills, and I needed to do it in a targeted manner.

A short while after I had these realizations, I started writing a post entitled "A Visualization Exercise" where I wrote short statements about what I want my job and career to look like, in the present tense. I never published this post, but every few weeks I'd have a new experience and learn something new about what I want, and it would end up in that draft.

I am happy to announce that I am now gainfully employed again. The Software Developer in Test position at SwiftStack very closely matches my visualization. (The primary bullets were from the original list. The secondary bullets provide additional detail regarding the current situation, and may apply to more than one primary element in the group.)



  • I take classes related to Data Storage, Linux Administration, Security, and Performance.
    • It looks like it's not a great idea logistically for me to continue taking classes at UCSC-Extension in Santa Clara, and I wasn't terribly impressed with the level of their classes. I have, however, discovered similar certificate programs out of UC Berkeley, and they have a program in Virtual and Cloud Computing that neatly maps to what I want to learn and feel would be useful at work.
  • I work with people who help me learn.
    • My manager will continuously challenge me to solve his quality related problems.
  • I work in / on / with open platforms, languages, and standards.
  • I code in object oriented, hardware independent scripting languages.
    • Python, Linux, Django, RESTful APIs
    • OpenStack architecture is predicated on the idea of running a Cloud on commodity hardware running Linux and the freedom from proprietary lock-in.
  • I contribute to open source projects.
  • The company I work for is a significant part of the open source ecosystem.
    • All SwiftStack employees are expected to be involved in the OpenStack family of projects.
    • Things I learned while volunteering at Mozilla have been instrumental in my being able to hit the ground running and make valuable contributions to SwiftStack even on my first day. I plan to remain involved, particularly in MozTrap development.
    • I expect to build out 90% of our automation infrastructure using open source tools and hope to be able to contribute my solutions to the problems we face back to that community.
  • I work on projects whose customers I can identify with. Particular areas of interest are medicine, exercise, education, government, economics, software development infrastructure, and quality assurance tools.
  • My career is concentrated on specialties that endure and are universal, like quality, dev-ops, data storage and manipulation, and security.
  • My work involves design, implementation, testing, documentation, and deployment.
  • I share my discoveries and methods of solving problems with my colleagues, both inside and outside the company.
  • I have an office-mate (or study buddy) with whom I can chat about whatever problem I am trying to solve. This person doesn't necessarily know the answer, but is familiar with the tools I am using and is on the same side of any applicable confidentiality fence.
    • This one is still pending. Most of the engineering staff live with their headphones on, but we're also hiring so the right person may come along yet.
  • I follow a number of professional blogs, and I use IRC or IM to communicate with my coworkers and colleagues.
  • My coworkers enjoy the work we do.
    • I work with people I can call friends.
  • My team uses Agile development methodologies.
  • My Acceptance Tests are run in a Continuous Integration environment.
  • My test code is well-factored and modular.
  • I participate in (both sides of) code reviews.
  • I have time to take care of my health and fitness.
    • Our office building has easily-accessible 5-story stairs. Instant exercise.
    • I'm still looking for lunch-time Zumba without a contract.
    • I'm still looking for / might build an open-source exercise tracking tool to manage my workouts and progress.

My career is clearly progressing.

Tuesday, April 2, 2013

5 Things To Do Each Workday / Workweek

Today J.T. O'Donnell's post 10 Things To Do Every Workday hit my inbox and hit home. I've only recently pulled my career out of a stall, and I attribute that stall to the fact that I was focused so much on completing sprint / iteration tasks for my employer that I wasn't doing anything similar to keep my career and professional relationships on track.

So, more for my benefit than anyone else's, here are my To Do lists:

5 Things To Do Every Workday:

  1. Read at least 2 articles on Quality
  2. Read at least 2 articles on relevant technologies
  3. Have short non-work related conversations with each coworker
  4. Check in with each coworker regarding our progress towards our goals
  5. Take a break and get some exercise (without exercise, you can't be healthy; without health, you can't do a good job)

5 Things To Do Each Workweek:
  1. Have lunch with a coworker or colleague
  2. Go to a class or meetup
  3. Make a post to this blog or other similar venue
  4. Try something new even if it means failing
  5. Evaluate the work I've done to see if it supports my growth

5 Things To Do Each Quarter:
  1. Update LinkedIn
  2. Take a professional level class 
  3. Contribute something substantial to Open Source
  4. Attend a conference
  5. Evaluate my goals and progress towards them

Sunday, March 17, 2013

Tracking Manual and Automated Test Results


2013-03-17 Note: I started writing this post months ago, and was about to fill in the test report sections when I discovered that pytest-moztrap was not actually working against the production MozTrap. Getting it fixed involves having truly stand-alone test cases, rather than cases that only work against one of the environments because their pre-conditions already exist there. Having stand-alone test cases depends upon MozTrap having a CRUD API, which is what I have been working on for the past few months. pytest-moztrap will get fixed once I am done. In the meantime, I'm publishing this post because the rest of its content is relevant even before the tool works.


I have not been a QA Manager, but I've certainly been asked by various QA Managers "How much of the testing have you finished for this release?" Mozilla needs to coordinate staff members writing test cases, volunteers running those cases on a wide variety of platforms, and the reporting of results to management, all across a number of projects. To meet this need, Mozilla created MozTrap.

As a test automator, I am also interested in statistics like: 

  • How many test cases can be automated?
  • How many of the test cases have been automated?
    Neither DOM-inspecting nor bitmap-comparing test frameworks can adequately test every case in a project. Some tests simply require a human eye, and others are virtually impossible for a person to perform without automated tools. Knowing how many test cases there are and what resources you have for running them is an essential part of test schedule planning.
  • Did the automation really run?
    At one former workplace, the nightly test run was set up to send email to the dev team if there were failures. After three occasions of discovering it had stopped running entirely while the developers assumed all was well, I changed the setup to always mail me the results. My record for noticing when they stopped arriving was not perfect, but it was an improvement. An automated mechanism that expected a message within the past 24-48 hours would have been better.
  • Are there any patterns in the results of the automation?
    Jenkins is great at providing Green/Red Good/Bad indicators, but it won't tell you if it's the same test failing each time, or if one particular environment is flaky.
  • Are the manual testers expending energy on test cases that are already covered by automation?
    While some duplication may discover UI bugs that would otherwise not have been found, directing the manual resources to test things not covered by automation is a better use of those resources.
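The staleness check described under "Did the automation really run?" can be sketched in a few lines (the results path and threshold here are invented examples, not from any of my setups); cron it and alert whenever it returns True:

```python
import os
import tempfile
import time

def results_are_stale(path, max_age_hours=48, now=None):
    """Return True if `path` is missing or older than `max_age_hours`."""
    if not os.path.exists(path):
        return True
    now = time.time() if now is None else now
    age_hours = (now - os.path.getmtime(path)) / 3600.0
    return age_hours > max_age_hours

# A freshly-written results file is not stale...
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'1 passed\n')
print(results_are_stale(f.name))                               # False
# ...but the same file "checked" three days later is.
print(results_are_stale(f.name, now=time.time() + 72 * 3600))  # True
```

The key design point is that the check watches for the *absence* of evidence rather than for failure emails, so a silently dead test runner can't hide.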

The Proposal


In January of this year, Dave Hunt made a proposal for a py.test (automation framework) plugin to talk to MozTrap (then called CaseConductor). He also provided a spike for this project. He had started asking people to mark automated test cases using this approach in code reviews, but it hadn't been hooked up yet.

It was a project awaiting my attention. When I approached him in late August about working on this project, he gave me his blessing to fork and proceed, as there was no time available for him to work on it.

Implemented Features:

  • I extended the moztrap-connect API library.
  • I translated between py.test statuses and MozTrap statuses, including xpass, xfail, and skipped.
  • I made sure that if the same test was run more than once, the most relevant result was reported, such as with parameterized tests.
  • I include the AssertionException, skip reason, or xfail reason in the result's notes field.
  • I ensured reporting would work in concert with pytest-xdist's -n option. 

Un-implemented Features:

  • A link to the MozTrap results has not been added to the HTMLReport generated by pytest-moztrap.
  • No coverage report has been generated.
  • No marker has been added to MozTrap to indicate that a test case has been automated. Use of Tags might be appropriate.

The project was not without its trials. I had not intended to check it in as one huge commit, but the thing I should have changed first (hard-coded credentials) I didn't actually fix until late in the game, and squashing the commits was a better strategy than editing each commit in turn.

Command Line Options


$ py.test --help

moztrap:
  --mt-url=url        url for the moztrap instance. (default:
                      moztrap.mozilla.org)
  --mt-username=str   moztrap username
  --mt-apikey=str     Ask your MozTrap admin to generate an API key in the
                      Core / ApiKeys table and provide it to you.
  --mt-product=str    product name
  --mt-productversion=str
                      version name
  --mt-run=str        test run name
  --mt-env=str        test environment name
  --mt-coverage       show the coverage report. (default False)

MozTrap Results Report


Verbose MozTrap Results Report



In any case, as of late September, pytest-moztrap is available at https://github.com/klrmn/pytest-moztrap/.

I ask the Mozilla community: what additional work does it need in order to become part of the workflow?

More test coverage running fewer tests

I said the other day in Convergence of roles in software development that, in my opinion, with today's agile software processes, code coverage tools should be used as a primary method of determining whether test coverage is complete. But sometimes, in the wild race for the elusive 100% code coverage, we test more, or less, than we need.

While unit tests are important, they alone cannot complete the testing effort, because they don't test whether the units are well integrated. By measuring the code coverage exercised by selenium (or other integration / system) tests, you get a better idea of the health of the entire system. I also mentioned in that post that one of the reasons it's hard to get developers to run selenium tests as part of the Continuous Integration process is that they take so long. When I have spare minutes, I've been looking into the feasibility of running a code coverage report on a project's selenium tests. I've run into a number of issues, but nothing unsolvable.

At the same time, running every single test for every single change either demands more and more hardware to run tests on, or lengthens the feedback cycle.

At a former employer, the collection of unit tests had gotten so big that it was no longer feasible for a developer to run all of the tests locally (sequentially) before checking in. They developed a 'run relevant tests' mechanism: once a week, a process determined which unit tests exercised which code, and the resulting database was made available so that developers could run only the relevant tests before checking in (their code would still be run against the full unit test suite, in parallel, in CI).
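The core of that lookup can be sketched in a few lines (module and file names here are invented): given a map from each test module to the source files it exercised, intersect with the files touched by a diff.

```python
# Hedged sketch of a 'run relevant tests' lookup (names are invented).
# A weekly job would populate coverage_map from per-test-file coverage
# data; developers query it with their diff before checking in.
def relevant_tests(coverage_map, changed_files):
    """coverage_map: {test_module: set of source files it exercises}."""
    changed = set(changed_files)
    return sorted(test for test, sources in coverage_map.items()
                  if sources & changed)

coverage_map = {
    'tests/test_auth.py': {'app/auth.py', 'app/models.py'},
    'tests/test_billing.py': {'app/billing.py', 'app/models.py'},
    'tests/test_ui.py': {'app/views.py'},
}
print(relevant_tests(coverage_map, ['app/models.py']))
# ['tests/test_auth.py', 'tests/test_billing.py']
```

The hard part, of course, is not this lookup but keeping coverage_map fresh and cheap to regenerate, which is why the weekly batch job matters.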

It occurred to me this morning to leverage a 'run relevant tests' mechanism to run selected selenium tests as part of CI.

This idea would need the following implementation layers:

  • ability to run the selenium suite under the coverage utility
  • selenium tests that run equally well on an un-provisioned development machine and the production instance
  • ability to measure coverage on a per-test-file basis
  • [environmental] (virtual) machine capable of both running the application under test with its associated back-end and a graphical UI with a common browser
  • a database with API to track the correspondence
  • ability of the coverage tool to talk to the database
  • a test-runner plugin that would consult the database to determine which tests to run for a given diff

Have any of these pieces been built (say, for a python/django environment)? Has the entire thing been built and I just haven't heard about it?