Tuesday, April 23, 2013

lessons learned from my first professional conference

I've attended a number of small Mozilla work-weeks, but I recently attended my first full-blown professional conference. I learned a lot of things, most of them not specifically about the conference's topic.

  • Conference facilities will have food I can eat about half the time, but that doesn't mean it will be good food.
  • Just because the better hotels charge for wifi doesn't mean their wifi is any good. Buy an air card for my laptop or a cellphone that supports tethering before my next conference.
  • Denny's can make pretty much the same breakfast food as the hotel, at half (or less) the price.
  • A lot of people prefer meeting someone in person before working with them online. I do better if I have an opportunity to work with someone online before I meet them in person (I remember personalities rather than faces).
  • Even though I'm an introvert, it's important for me to have people I (already) know to talk to. I was missing out on at least a couple of hours a day of interaction with my partner, and it really wreaked havoc on my mood.
  • I shouldn't be required to talk to strangers before breakfast...the day I attended a breakfast meeting, I couldn't pronounce the name of my company until 10.
  • Whether or not I have time to exercise in the morning has a huge effect on how allergies impact me. (Increased CO2 load dilates the sinuses.)
  • I need to re-arrange the linens on the bed so that I have sheets up to my chin, but blanket only up to my waist. Temperature control makes a big difference in how well I sleep.
  • 2500 people will overwhelm whatever cellular network capacity the venue has. Turn my phone to airplane mode to keep it from chewing through its battery in 2 hours rather than my usual 2 days.
  • Hotels don't have public spaces where people can sit and work together. Suggest that the company get at least one multi-bedroom suite so people don't have to invite coworkers into their 'bedroom'.
  • Anything that keeps me away from my code for days running is going to make me grumpy. Find a reason to actually contribute to the open source project about which the conference is being held, at the conference. Try to enable other people to contribute too.
  • Company logo t-shirts are not enough insulation against hyperactive A/C. Foment a fashion revolution and demand a company logo jacket instead.
While it would have been good had I been able to participate more in the conference, sometimes this kind of learning can be even more important.

Sunday, April 21, 2013

announcing nose-selenium

The Mozilla automated web test team has pretty much standardized on using py.test to run selenium-related tests, and Dave Hunt has written pytest-mozwebqa for their use. The SwiftStack development team is standardized on nose, so rather than hard-coding webdriver details into my tests, I spent the weekend writing nose-selenium and just published it to PyPI, to make it pip-installable.

nose-selenium lets you specify whether you're running your tests locally, on a remote selenium server, on a selenium grid, or at Sauce Labs; what browser you want to use; what OS / browser version you want (if you're running on a grid or Sauce Labs); and set an implicit timeout. I recommend running 'nosetests --with-nose-selenium --browser-help' to get a list of which browsers, etc., are available in which environment.
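Conceptually, those command-line options select a target environment for the webdriver before any test runs. The sketch below is purely illustrative; the function name, parameters, and returned structure are mine, not nose-selenium's actual internals:

```python
# Illustrative sketch only: how browser/host/Sauce Labs options might map
# onto a target environment. Not nose-selenium's real implementation.
def build_driver_config(browser='firefox', host=None, port=4444,
                        sauce=False, browser_version=None, os_name=None):
    """Describe where and how the tests will run, based on options
    analogous to nose-selenium's (all names here are hypothetical)."""
    if sauce:
        return {'target': 'saucelabs', 'browser': browser,
                'version': browser_version, 'os': os_name}
    if host:
        # a remote selenium server or grid node
        return {'target': 'remote', 'host': host, 'port': port,
                'browser': browser, 'version': browser_version,
                'os': os_name}
    return {'target': 'local', 'browser': browser}
```

With no options at all, you'd get a local firefox run; adding a host or the Sauce Labs flag redirects the same tests elsewhere without touching test code.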

Enjoy.

Sunday, April 7, 2013

Visualizing My Job

I could call it my 'Dream Job', except I feel like the process of getting it was very much a conscious one.

About six months ago I realized that while I knew a lot about what I didn't want in a job, that wasn't a very good answer to the question "What are you looking for in a job?". I also observed that my technical education had been rather general: while I knew some things about a lot of areas, I didn't know any one area with particular depth. This did not leave me in a position of strength when looking for a job outside of my prior core competency in network management. I needed to improve my skills, and I needed to do it in a targeted manner.

A short while after I had these realizations, I started writing a post entitled "A Visualization Exercise" where I wrote short statements about what I want my job and career to look like, in the present tense. I never published this post, but every few weeks I'd have a new experience and learn something new about what I want, and it would end up in that draft.

I am happy to announce that I am now gainfully employed again. The Software Developer in Test position at SwiftStack very closely matches my visualization. (The primary bullets were from the original list. The secondary bullets provide additional detail regarding the current situation, and may apply to more than one primary element in the group.)



  • I take classes related to Data Storage, Linux Administration, Security, and Performance.
    • It looks like it's not a great idea logistically for me to continue taking classes at UCSC-Extension in Santa Clara, and I wasn't terribly impressed with the level of their classes. I have, however, discovered similar certificate programs out of UC Berkeley, and they have a program in Virtual and Cloud Computing that neatly maps to what I want to learn and feel would be useful at work.
  • I work with people who help me learn
    • My manager will continuously challenge me to solve his quality related problems.
  • I work in / on / with open platforms, languages, and standards.
  • I code in object oriented, hardware independent scripting languages.
    • Python, Linux, Django, RESTful APIs
    • OpenStack's architecture is predicated on running a cloud on commodity hardware running Linux, free of proprietary lock-in.
  • I contribute to open source projects 
  • The company I work for is a significant part of the open source ecosystem.
    • All SwiftStack employees are expected to be involved in the OpenStack family of projects.
    • Things I learned while volunteering at Mozilla have been instrumental in my being able to hit the ground running and make valuable contributions to SwiftStack even on my first day. I plan to remain involved, particularly in MozTrap development.
    • I expect to build out 90% of our automation infrastructure using open source tools and hope to be able to contribute my solutions to the problems we face back to that community.
  • I work on projects whose customers I can identify with. Particular areas of interest are medicine, exercise, education, government, economics, software development infrastructure, and quality assurance tools.
  • My career is concentrated on specialties that endure and are universal, like quality, dev-ops, data storage and manipulation, and security.
  • My work involves design, implementation, testing, documentation, and deployment.
  • I share my discoveries and methods of solving problems with my colleagues, both inside and outside the company.
  • I have an office-mate (or study buddy) with whom I can chat about whatever problem I am trying to solve. This person doesn't necessarily know the answer, but is familiar with the tools I am using and is on the same side of any applicable confidentiality fence.
    • This one is still pending. Most of the engineering staff live with their headphones on, but we're also hiring so the right person may come along yet.
  • I follow a number of professional blogs, and I use IRC or IM to communicate with my coworkers and colleagues.
  • My coworkers enjoy the work we do.
    • I work with people I can call friends.
  • My team uses Agile development methodologies.
  • My Acceptance Tests are run in a Continuous Integration environment.
  • My test code is well-factored and modular.
  • I participate in (both sides of) code reviews.
  • I have time to take care of my health and fitness.
    • Our office building has easily-accessible 5-story stairs. Instant exercise.
    • I'm still looking for lunch-time Zumba without a contract.
    • I'm still looking for / might build an open-source exercise tracking tool to manage my workouts and progress.

My career is clearly progressing.

Tuesday, April 2, 2013

5 Things To Do Each Workday / Workweek

Today J.T. O'Donnell's post 10 Things To Do Every Workday hit my inbox and hit home. I've only recently pulled my career out of a stall, and I attribute that stall to the fact that I was focused so much on completing sprint / iteration tasks for my employer that I wasn't doing anything similar to keep my career and professional relationships on track.

So, more for my benefit than anyone else's, here are my To Do lists:

5 Things To Do Every Workday:

  1. Read at least 2 articles on Quality
  2. Read at least 2 articles on relevant technologies
  3. Have short non-work related conversations with each coworker
  4. Check in with each coworker regarding our progress towards our goals
  5. Take a break and get some exercise (without exercise, you can't be healthy; without health, you can't do a good job)
5 Things To Do Each Workweek:
  1. Have lunch with a coworker or colleague
  2. Go to a class or meetup
  3. Make a post to this blog or other similar venue
  4. Try something new even if it means failing
  5. Evaluate the work I've done to see if it supports my growth
5 Things To Do Each Quarter:
  1. Update LinkedIn
  2. Take a professional level class 
  3. Contribute something substantial to Open Source
  4. Attend a conference
  5. Evaluate my goals and progress towards them

Sunday, March 17, 2013

Tracking Manual and Automated Test Results


2013-03-17 Note: I started writing this post months ago, and was about to fill in the test report sections when I discovered that pytest-moztrap was not actually working against the production MozTrap. Getting it fixed involves having truly stand-alone test cases, rather than cases that only work against one of the environments because pre-conditions must already exist. Having stand-alone test cases depends upon MozTrap having a CRUD API. That is what I have been working on for the past few months. pytest-moztrap will get fixed once I am done. In the meantime, I'm publishing this post because the rest of its content is relevant even before the tool works.


I have not been a QA Manager, but I've certainly been asked by various QA Managers "How much of the testing have you finished for this release?" Mozilla needs to coordinate between staff members writing test cases, volunteers running them on a wide variety of platforms, and reporting the results to management, all for a number of projects. To meet this need, Mozilla created MozTrap.

As a test automator, I am also interested in statistics like: 

  • How many test cases can be automated?
  • How many of the test cases have been automated?
Neither DOM-inspecting nor bitmap-comparing test frameworks can adequately test every case in a project. Some tests simply require a human eye, and others are virtually impossible for a person to perform without automated tools. Knowing how many test cases there are and what resources you have for running them is an essential part of test schedule planning.
  • Did the automation really run?
At one former workplace, the nightly test run was set up to send email to the dev team if there were failures. After three occasions of discovering that it had stopped running entirely while the developers assumed all was well, I changed the setup to always mail me the results. My record for noticing when they stopped arriving was not perfect, but it was an improvement. An automated mechanism that expected a message within the past 24-48 hours would have been better.
  • Are there any patterns to the results of the automation?
Jenkins is great at providing Green/Red Good/Bad indicators, but it won't tell you if it's the same test failing each time, or if one particular environment is flaky.
  • Are the manual testers expending energy on test cases that are already covered by automation?
While some duplication may surface UI bugs that would otherwise have been missed, directing manual testers toward things not covered by automation is a better use of resources.
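The "did the automation really run?" heartbeat could be as simple as checking the age of the last results artifact. A minimal sketch (the 48-hour window and file-based approach are my assumptions):

```python
import os
import time

STALE_AFTER = 48 * 60 * 60  # seconds; tune to your nightly run cadence

def results_are_stale(path, now=None):
    """Return True if the results file hasn't been updated within the
    window, i.e. the automation has probably stopped running."""
    now = time.time() if now is None else now
    try:
        mtime = os.path.getmtime(path)
    except OSError:  # results file was never written at all
        return True
    return (now - mtime) > STALE_AFTER
```

Run from cron, this turns "I forgot to notice the emails stopped" into an active alert.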

The Proposal


In January of this year, Dave Hunt made a proposal for a py.test (automation framework) plugin to talk to MozTrap (then called CaseConductor). He also provided a spike for this project. He had started asking people to mark automated test cases using this approach in code reviews, but it hadn't been hooked up yet.

It was a project awaiting my attention. When I approached him in late August about working on this project, he gave me his blessing to fork and proceed, as there was no time available for him to work on it.

 Implemented Features:

  • I extended the moztrap-connect API library.
  • I translated between py.test statuses and MozTrap statuses, including xpass, xfail, and skipped.
  • I made sure that if the same test was run more than once, the most relevant result was reported, such as with parameterized tests.
  • I include the AssertionException, skip reason, or xfail reason in the result's notes field.
  • I ensured reporting would work in concert with pytest-xdist's -n option. 
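The "most relevant result" rule for repeated runs of the same test can be sketched as a severity ordering. The ordering below is my illustration, not pytest-moztrap's actual table:

```python
# Hedged sketch: when a parameterized test produces several py.test outcomes
# for one MozTrap case, report only the most significant one. This severity
# ordering is an assumption for illustration.
SEVERITY = ['passed', 'skipped', 'xfailed', 'xpassed', 'failed']

def most_relevant(statuses):
    """Collapse several outcomes for one test case into the single most
    significant result (rightmost in SEVERITY wins)."""
    return max(statuses, key=SEVERITY.index)
```

So a case that passed for nine parameter sets but failed for one is reported as failed.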

Un-implemented Features:

  • A link to the MozTrap results has not been added to the HTMLReport generated by pytest-moztrap.
  • No coverage report has been generated.
  • No marker has been added to MozTrap to indicate that a test case has been automated. Use of Tags might be appropriate.

The project was not without its trials. I had not intended to check it in as one huge commit, but the thing I should have changed first (hard-coded credentials) I didn't actually fix until late in the game, and squashing the commits was a better strategy than editing each commit in turn.

Command Line Options


$ py.test --help

moztrap:
  --mt-url=url        url for the moztrap instance. (default:
                      moztrap.mozilla.org)
  --mt-username=str   moztrap username
  --mt-apikey=str     Ask your MozTrap admin to generate an API key in the
                      Core / ApiKeys table and provide it to you.
  --mt-product=str    product name
  --mt-productversion=str
                      version name
  --mt-run=str        test run name
  --mt-env=str        test environment name
  --mt-coverage       show the coverage report. (default False)

MozTrap Results Report


Verbose MozTrap Results Report



In any case, as of late September, pytest-moztrap is available at https://github.com/klrmn/pytest-moztrap/.

I ask the Mozilla community, what additional work does it need in order to become part of the workflow?






More test coverage running fewer tests

I said the other day in Convergence of roles in software development that it is my opinion that with today's agile software processes, code coverage tools should be used as a primary method to determine whether the test coverage is complete. But sometimes in the wild race for the elusive 100% code coverage, we test more, or less, than we need.

While unit tests are important, they alone cannot complete the testing effort because they don't test whether the units are well integrated. By measuring the code coverage exercised by selenium (or other integration / system) tests, you would get a better idea of the health of the entire system. I also mentioned in the above-mentioned post that one of the reasons it's hard to get developers to run selenium tests as part of the Continuous Integration process is because they take so long. When I have spare minutes, I've been looking into the feasibility of running a code coverage report on a project's selenium tests. I've run into a number of issues but nothing unsolvable.

At the same time, running every single test for every single change either demands more and more hardware to run tests on, or lengthens the feedback cycle.

At a former employer, the collection of unit tests had gotten so big that it was no longer feasible for a developer to run all of the tests locally (sequentially) before checking in. They developed a 'run relevant tests' mechanism: once a week, a process determined which unit tests exercised which code, and that database was made available so developers could run only the relevant tests before checking in (after which their code would be run in CI against the full unit test suite in parallel).

It occurred to me this morning to leverage a 'run relevant tests' mechanism to run selected selenium tests as part of CI.

This idea would need the following implementation layers:

  • ability to run the selenium suite under the coverage utility
  • selenium tests that run equally well on an un-provisioned development machine and the production instance
  • ability to measure coverage on a per-test-file basis
  • [environmental] (virtual) machine capable of both running the application under test with its associated back-end and a graphical UI with a common browser
  • a database with API to track the correspondence
  • ability of the coverage tool to talk to the database
  • a test-runner plugin that would consult with the database to determine what tests to run for a given diff
Have any of these pieces been built (say, for a Python/Django environment)? Has the entire thing been built, and I just haven't heard about it?
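The database-consulting piece could be as small as intersecting a coverage map with a diff. This is a sketch under assumed data shapes (a real map might be built from coverage.py's per-test data):

```python
def relevant_tests(coverage_map, changed_files):
    """Select the tests worth running for a given diff.

    coverage_map: {test_file: set of source files it exercises}, as might
    be produced by a weekly coverage-instrumented full run (the format is
    my assumption for illustration).
    changed_files: iterable of files touched by the diff.
    """
    changed = set(changed_files)
    return sorted(test for test, covered in coverage_map.items()
                  if covered & changed)
```

A CI plugin would call this with the diff's file list and hand the result to the test runner, falling back to the full suite when a changed file appears in no test's coverage set.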


Noodling on an Idea: Massively Parallel Simultaneous Testing

One of the issues in testing many applications and software systems is the time it takes to execute an adequate set of combinations. I propose that it is possible to create a system that could simultaneously test all of the most important combinations in an inexpensive massively parallel processing system.

Requirements: The system would have to be affordable. It would have to run an open source or other adaptable OS. There would have to be integration between the testing software and the operating system.

Hardware: Parallella enables massively parallel systems inexpensively. It's an open source system, which would allow it to be customized to the needs of such a testing environment. I should state here that I am not experienced with this system.

Low level software: Low level calls would have to be provided which give testing tools the ability to branch an execution, such that one code path continues along decision path A, while another is spawned that follows decision path B. The testing tools would also require services that let them identify these branchings for reporting.

Testing Software: The testing software itself would have to be able to trigger these branchings, track them, and report on their progress, as well as fail out gracefully, with logging, in the event of an error. I would propose that these services be created at the lower level and provided so that many competing tools could be built for different needs.

Benefits and Limitations: Even with such a system, not all systems would be suited to this kind of testing, and for most there may well be too many combinations to test them all within a given hardware budget. However, by creating systems targeted at the most important subset of combinations, such a system could provide much more comprehensive testing in a given amount of time than existing solutions.

In Summary: While such a system would have limitations, it could be built inexpensively, with existing components in an open source manner. This would create a broadly applicable and inexpensive solution for the projects it was suited for. In short, once built, a lot of benefit for a remarkably small cost. And a technically interesting problem as well.

Wednesday, March 13, 2013

Convergence of roles in software development

Mike Brown's What’s the Difference Between Testers and Developers? came across my RSS feed today, and it has prompted me to write about convergence of roles in software development.

I see these reasons why Testing and Development are converging:

  1. Automated tests will only be run if they are fast, and fast tests require all of the tricks of the trade involved in unit testing.
  2. Automated tests will only be maintained if the entire team is invested in them, so they must be reviewed by the team and understood by the team, which requires the whole team to have development skills.
  3. Developers will only run and maintain automated tests written by testers when they change the features under test if the scripts use the same language, frameworks and tools as their automated unit tests.
  4. Automated tests are more efficient than humans at the highly repetitive testing that uses combinatorics to cover huge matrices of contingencies.
  5. Quality is not just 'Does the feature work?' but is also 'Does it leak memory?', 'Is it fast?', 'Is it secure?', and 'Is it scalable?'.  Questions like that that require large numbers of datapoints spread out over a great deal of time (or very small units of time) are better measured by software than by humans.
  6. In an agile world where there are no written requirements documents (or tracking documents get lost or outdated within 2-3 sprints), you don't measure coverage by matching requirements to test cases; you measure coverage with code coverage tools.
  7. The human perspective provided by QA Engineers doing exploratory or acceptance testing is important, but it does not allow for much career growth.
  8. In small teams where the same tester would end up manually testing the same feature over multiple releases, the benefits of human eyes would be decreased and automation would very likely be more thorough and less error prone than human testing.

I also think QA Management and Product Management are converging. For big projects, QA Teams need to not only provide test plans and report defects originating from the test cases, but also create automation suites that meet the following requirements:

  1. Ensure exact pre-conditions and clean post-conditions, even in the event of failure.
  2. Can be run by Continuous Integration, other teams, or people across the globe.
  3. Are re-usable / can be maintained over multiple release cycles, by other teams.
  4. Are tracked by the same version control process being used by development.
  5. Provide results that can be interpreted by contractors, new hires, and/or people that stay behind when the writer goes on vacation.
  6. Can be run in parallel on the same machine or over different machines.
  7. Do not conflict with other tests written by other teams being run at the same time or on the same equipment.
  8. Interface with other systems in the SDLC (bug trackers, requirements trackers).
  9. Can be multiplexed to provide a variety of load scenarios.
  10. Run within the time limits imposed by the release cycle.
These requirements may be more complicated than those for some commercial development projects, and leading a team that can deliver an automation suite like that is going to be a lot like being a PM on a software project.


I might also be able to make a case for the QA Engineer and Tech Writer roles converging. Writing acceptance test plans that are detailed enough to be outsourced, and keeping them up to date, is a great deal of effort. Keeping customer-facing documentation up to date is also challenging. These two efforts could be combined: acceptance tests would be written as a list of workflows the software needs to support, the QA Engineer responsible for the tests would ensure that the software is well documented, and the contractors running the tests would verify that a customer could learn to perform those workflows from the information provided in the user guide.

Monday, March 11, 2013

GitHub Pull Request PSA

I was surprised this morning to get a pull request on a GitHub fork I have. I thought it was only possible to submit PRs to the upstream repo. Upon discussing it with the developer in question, I learned that it is part of his normal workflow for collaboration when the upstream project does its reviews via Gerrit. I failed to ask how he did it before he went offline for the day, so when I finished my work, I turned to Google and #github for the answer. Google found a lot of descriptions of the plain-jane pull request, but not what I was looking for. It was a #github user who told me to have another look at the pull request form. It now lets you choose not only which branch you want your changes applied to, but also which user's fork. Win!

Saturday, February 16, 2013

now which change caused that infinite loop?

Today I volunteered to do a pep8 clean-up on one of the projects I'm working on. I committed the changes and was about to push them up to GitHub when I realized I hadn't run the tests. So I ran the tests and something threw a "RuntimeError: maximum recursion depth exceeded..." exception. Well crap. I only touched just about every file in the project.

After checking whether I had changed any files mentioned in the stack trace (nope), or parent classes of any classes mentioned (yes, but only two 'two spaces before inline comment' fixes), I started looking for other options.

I know that git has the 'bisect' command to do a binary search on commits, but I committed all of the files at the same time, and I've never heard of a facility to do a binary search on changed files. Really, changes like PEP 8 clean-ups, which are not supposed to affect functionality and are not inter-related, would be about the only use for it.

So here's the process I'm following:

If the tests fail:

  1. un-commit the changes (git reset HEAD^)
  2. stage half of the changes and check them in
  3. stash the other half of the changes
  4. run the tests
If the tests pass, 
  1. un-stash the stashed changes
  2. stage and commit half of them
  3. re-stash the other half
  4. run the tests
Repeat.
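The halving process above is just a binary search over the changed files. A sketch, assuming exactly one independent change is responsible; `tests_pass` stands in for any callable that applies only a subset of the changes and runs the suite (in practice, the stage/stash/commit dance above):

```python
def find_breaking_file(changed_files, tests_pass):
    """Binary-search a list of independently-changed files for the one
    whose change breaks the tests.

    tests_pass(subset) should apply only that subset of changes and run
    the suite, returning True on success. Assumes exactly one file is
    responsible and the changes don't interact.
    """
    candidates = list(changed_files)
    while len(candidates) > 1:
        first_half = candidates[:len(candidates) // 2]
        if tests_pass(first_half):
            # this half is innocent; the breakage is in the other half
            candidates = candidates[len(candidates) // 2:]
        else:
            candidates = first_half
    return candidates[0]
```

With n changed files this takes about log2(n) test runs, versus n runs for checking files one at a time.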

After two cycles of this, I did what I should have done in the first place: run the tests on master, at which point I discovered that the failing tests were an unintended consequence of some logging I had added to one of the project's dependencies. Not my most brilliant moment, but at least now I know what I will do if I run into a similar problem.

Monday, January 28, 2013

I'm Geeky, She's Geeky, We're All Geeky

I spent this weekend at the She's Geeky conference at the Computer History Museum in Mountain View. The event was recommended to me by Anca Mosoiu, the owner of Tech Liminal.

The She's Geeky conference is an 'unconference'; as such, you never really know what you're going to find until you've found it. Each morning, volunteers from among the participants wrote topics they'd be willing to teach, discussions they'd be willing to facilitate, or questions they wanted answered on sheets of paper, and then used a large wall space to schedule them into the available time slots and session locations.

I originally thought the (un)conference principles posted on the wall were common to all unconferences, but I have been unable to find the list on unconference.net, shesgeeky.org, or the Wikipedia entry on unconferences (or the linked entries on different methods of facilitating them). So I'm going to try to drag them out of my memory and hope someone will come along with corrections and/or a reference. In any case, these principles made the event one of the least stressful events I've ever attended.

  • Whoever shows up are just the right people.
  • The session starts when it starts and ends when it ends.
  • If you aren't either learning or contributing to the conversation, it is your responsibility to go somewhere where you will.
  • Butterflies and Bumblebees are encouraged to flit from session to session, contributing to each and cross-pollinating between them.
  • Others...


Over the course of the weekend, I attended the following workshops (names may be abbreviated / combined):
  • Behavioral Interviewing (Lab126 flavored)
  • Personal Data Ecosystems
  • Wordpress
  • Open Organizations
  • Data Visualization
  • Quantified Self
  • Technical Interviewing (Google flavored)
  • Django, APIs
  • Online Learning
  • Recruiter Wisdom (Groupon flavored)
  • Mentoring
  • Uncomfortable Personalities at Work
  • Impostor Syndrome
My big take-aways from the conference are the following:

A framework for practicing for interviews that doesn't involve me trying to be something I'm not.

A hardware-y project (that I don't feel comfortable making public yet).

A project that is a cross between things I know I need to do for self care, quantified self, data visualization, and programming, that may lead to a commercial product.

A plan to learn math at Khan Academy, from wherever I'm currently at up through at least statistics, so that I can stop being limited in my educational choices by lack of prerequisites.

and

5 questions to ask people who ask me to be their mentor

  • Where do you want to go?
  • Where are you now?
  • What challenges / obstacles are in the way?
  • What have you tried so far?
  • How much time commitment are you looking for?
I plan to encourage other women in my network to attend She's Geeky next year. I was not the only one who arrived overwhelmed and anxious and left inspired, enthusiastic, and refreshed. If you're in Seattle, you're female (I suspect those who identify as female would be welcome as well), and you are any kind of geek, there's a She's Geeky scheduled there in late April. Don't miss it!

Sunday, January 27, 2013

Setting up Python on Windows

I don't develop on a Windows machine myself, but while volunteering at Mozilla, a number of prospective contributors have asked me for help setting up Python under Windows. If anyone has differing instructions for Windows versions not mentioned here, I would be happy to include them.

Install Python

Please find out what version of Python is used by the project you plan to work on. If you're starting a brand-new project, you probably want to install 2.7.3, unless you already know you need 3.x. (Many Mozilla projects are still using 2.6.) The version number in the following instructions will be referred to as "XX".
  1. Download a Windows installer from http://python.org/download/
  2. Run Install (take note of the install location. the default location is C:\PythonXX)

Update your Path

If you are installing multiple versions of Python, be aware that the one that appears first on the Path is the one that will be used when calling "python.exe". If you need a specific version of Python for your project, either re-arrange the Path or call the interpreter by its full path (e.g. "C:\Python27\python.exe").

Windows XP

  1. Start Menu > Settings > Control Panel > System
  2. Advanced (tab) > Environment Variables (button near bottom)
  3. In the System variables list, find the entry for Path and click Edit

Windows 7

  1. Start Menu > Control Panel > System
  2. Advanced Settings (left pane)
  3. Environment Variables (button near bottom)

Both / All Versions

  1. Find 'Path' in the system variables and click Edit.
  2. Scroll to the end of the value, add a semicolon as a separator, and add "C:\PythonXX" (or the other place you installed python to), another semicolon, and "C:\PythonXX\Scripts".

Open a DOS / cmd window for entering command line instructions

On all Windows versions, this can be done from Start Menu > Run "cmd.exe". The default prompt in a command window is "C:\WINDOWS>". This will be used in instructions to specify that the command should be run in this window.

Install setuptools

The easy_install script is a part of setuptools, and it is used to install other packages from online sources.
  1. Download ez_setup.py from http://peak.telecommunity.com/dist/ez_setup.py by visiting the address in your browser and doing 'Save As' on the file. The Desktop is a reasonable place to save it to, you won't need it long.
  2. C:\WINDOWS>python.exe c:\Users\<username>\Desktop\ez_setup.py (replace the path if you saved it in a different location).

Install pip

pip is a 'second generation' wrapper for easy_install which is more flexible and has more options. It will automatically put files in the C:\PythonXX path hierarchy so that python.exe will be able to find them.

  C:\WINDOWS>easy_install.exe pip

Source Control

If you're working on an existing project, find out what kind of source / version control system it's using. Some common source control systems (with their Windows applications) include git / GitHub, subversion / TortoiseSVN, and mercurial / TortoiseHg. Find out if you should 'fork' (make a personal copy of) the repository before you download / check out / clone it.

Install virtualenv

If there is any chance you are going to be developing multiple Python applications on this machine, you should install virtualenv. It allows you to install differing versions of the same library for different projects.

  C:\WINDOWS>pip.exe install virtualenv

Possible Next Steps

If you installed virtualenv in the previous step, then you want to create and activate a virtual (python) environment. Virtual environments are generally not checked into source control, so be sure to set one up in a location outside of your repository. I recommend putting them in a folder called 'environments' in your home directory (c:\Users\<username>).

  C:\WINDOWS>mkdir c:\Users\<username>\environments
  C:\WINDOWS>virtualenv.exe c:\Users\<username>\environments\<project name>
  C:\WINDOWS>c:\Users\<username>\environments\<project name>\Scripts\activate

From here what you do is going to depend on your project. If it is an existing project and there is a "setup.py" file in the main directory, then you probably want to:

    C:\WINDOWS>python.exe setup.py develop

If there is no setup.py file but there is a requirements.txt file, then:

    C:\WINDOWS>pip.exe install -Ur requirements.txt

If neither of these apply to your project, consult with its documentation to find out how to install the project's pre-reqs.