I have really enjoyed writing this series on UI Tests with Cucumber. I have been able to share the things I learned while working with diverse teams, helping them adopt an Acceptance Test Driven approach to delivering quality software. So far in this series we have covered:
- Introducing the concept of Page Objects
- Using modules to represent partial pages and having Page Objects return Page Objects
- The introduction of a simple DSL to make page objects easier to write
- Writing high-level tests and providing default data
This post is about workflow. How does automated acceptance testing fit into the software development lifecycle? Who is impacted and how? Who should run the tests and when?
Who are we talking about?
Let’s begin by defining the roles for this discussion. The three primary roles are:
Product Owner: This is the person who understands what is to be built. It may be somebody filling the role of product owner on an agile team, or it may be an analyst working with business people to define the requirements.
Tester: This is the person who automates the Cucumber features. It may be a formal tester (my preference) or it may be a developer on the team.
Developer: This is the person writing the application code.
I will be describing the workflow I teach teams as it relates to Acceptance Test Driven Development. This is an Agile workflow but it can be easily adapted to fit non-agile teams.
It begins with a prioritized backlog of stories created by the product owner. These stories have very little detail – just a title and maybe a sentence or two. The fun begins a week or so before the sprint or iteration starts. At that point the product owner should try to write a set of scenarios for each story we plan to bring into the next iteration.
If there is a designer or UX person on the team, I generally try to have them create low-fidelity prototypes (or high-fidelity ones if absolutely necessary) at this time. If not, the product owner can just create low-fidelity screen mock-ups on paper. Flip charts work particularly well. If it is the product owner creating the mock-ups, it might also be beneficial to have a developer collaborate on the design.
Once the screen mock-ups are complete the tester can create new page objects or modify existing ones. Initially the tester can just use WatirHelper to identify the items that appear on the web page. This should take very little time – usually under 15 minutes.
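To make this concrete, here is a hedged sketch of what that first pass might look like for a login form taken from a mock-up. The `PageDsl` module below stands in for the WatirHelper-style accessors introduced earlier in this series; the module name, the `text_field`/`button` methods, and the locators are illustrative assumptions, not the actual API:

```ruby
# Illustrative stand-in for the accessor-generating helpers from earlier
# posts. Each call defines instance methods that delegate to the browser.
module PageDsl
  # Defines a reader and a writer for a text field found by the locator.
  def text_field(name, locator)
    define_method("#{name}=") { |value| browser.text_field(locator).set(value) }
    define_method(name)       { browser.text_field(locator).value }
  end

  # Defines a method that clicks a button found by the locator.
  def button(name, locator)
    define_method(name) { browser.button(locator).click }
  end
end

# First pass at the page object -- just naming what we saw on the mock-up.
class LoginPage
  extend PageDsl
  attr_reader :browser

  def initialize(browser)
    @browser = browser
  end

  text_field :username, id: 'username'
  text_field :password, id: 'password'
  button     :login,    id: 'login'
end
```

The locators here are just guesses made from the mock-up; they can be updated in a few minutes once the tester and developer agree on the real element ids.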
Around the same time I would have the tester sit down with the product owner and review the scenarios. The tester can suggest missing scenarios with the goal of agreeing on a set of scenarios that completely define the scope of the story. In other words, when all of the scenarios pass the story should be considered functionally complete.
Between the completion of the pre-iteration scenarios and the time the developer picks up the story the tester should try to have the first one or two scenarios automated. This will give the developer a starting point when they begin work on the story.
When a developer picks up a story in the iteration I suggest they have a brief discussion with the product owner and/or tester to understand all of the acceptance tests / scenarios. Be ready to add or modify scenarios based on this three-way discussion. Once everybody is happy with the acceptance tests it is time to start coding.
While the developer is building the application the tester should automate all of the remaining scenarios. In my experience this takes a small fraction of the time it takes to write the application code. While doing this the tester can also talk with the developer to agree on the names and ids for the elements on the page. This might require a simple update to the page objects created earlier, but it should take five minutes or less.
The developer should run the cucumber scripts throughout development to see how they are progressing. When all of the scenarios pass I have the developer ask the tester to join them for a final run of the acceptance tests. If all of the tests pass then we consider the story functionally complete. At that time the new feature is placed into the suite of features that run continuously.
Running the cukes
I like to have my acceptance tests running as part of my continuous integration, but there are two problems with this approach. First, they usually take a long time to run. Second, they often cannot run on the CI server due to environment issues. Let’s see how we can address each of these problems.
Running the tests continuously
Tests that drive through the user interface take much longer to run than unit tests. And yet we still want them to run all of the time.
Here is what I do. I want as many tests as possible to run with my continuous integration build. I usually select a small subset of the tests that runs in two to three minutes and have it run with the developers' continuous integration build. These tests run every time a developer checks in code. I then create a scheduled build on the CI server that runs the entire suite. If it takes two hours for the entire suite to run, I have the scheduled build run the tests every two and a half hours. In this manner all of the tests run frequently, and the tests that are most likely to break run very frequently.
How do we determine which features should run in the developers' continuous integration build? My experience is that the tests for the stories we are currently working on are the most likely to fail. The stories we finished a short while ago are the second most likely to fail. I usually flag the features for the current iteration and the features for the previous iteration to run in my CI build. If it is taking too long then I just run the current iteration's features.
Which environment do the tests run against?
In a perfect world our cukes would run against a fully deployed application on the continuous integration server. The build would simply deploy the application and then invoke cucumber and finally publish the report. Unfortunately this is not always possible. Sometimes the system requirements are such that you must deploy your application to a formal test environment prior to testing. If this is the case we cannot have our CI features begin running until the corresponding version of the software is deployed. You should put every effort into automating this process.
Using tags and profiles
I use tags and profiles to define what runs when. At the top of every feature I place two tags: `@NR` and an iteration tag such as `@I_6`. The `@NR` tag indicates that the feature is not yet ready to be run; `@I_6` indicates that it belongs to iteration 6. I then create the following profiles in my cucumber.yml file:
```yaml
default: --no-source
all_features: --format html --out report.html --no-source --tags ~@NR
ci_features: --format html --out report.html --no-source --tags ~@NR --tags @I_6,@I_5
```
The default profile is what the developers run when they execute the tests on their local computer. They usually just provide the path to the feature file on the command line. The ci_features profile is what runs with the developers' continuous integration build. In this example it would run all scripts that don't have the @NR tag and that are tagged with @I_5 or @I_6 for iterations five and six. (Note that repeating the --tags option ANDs the conditions together, while tags separated by commas within a single --tags option are ORed.) Each iteration we update this profile to include the correct tags. The all_features profile runs all features that don't have the @NR tag and is what we run in the scheduled build on the continuous integration server. This final profile serves as our regression test.
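As one way of wiring these profiles into the two builds described above, here is a hedged sketch of a pair of Rake tasks the CI server could invoke. The task names (`cucumber:ci`, `cucumber:full`) are my own, not from any particular tool:

```ruby
# Sketch only: the require/extend lines let this run as a standalone
# script; inside a real Rakefile the Rake DSL is already available.
require 'rake'
extend Rake::DSL

namespace :cucumber do
  desc 'Fast subset (two to three minutes) run on every developer check-in'
  task :ci do
    sh 'cucumber --profile ci_features'
  end

  desc 'Full regression suite run by the scheduled CI build'
  task :full do
    sh 'cucumber --profile all_features'
  end
end
```

The per-commit build calls `rake cucumber:ci`, and the scheduled build calls `rake cucumber:full`.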
The @NR tag is removed from the feature when all of the scenarios pass and it is therefore considered feature complete. This usually happens when the tester and developer successfully run the feature against the application. If the application needs to be deployed to a test environment first then we wait until that is finished to remove this tag.