How to Create Maintainable Acceptance Tests

This post summarizes what I've learned from various sources about making acceptance (or black-box) tests maintainable. The topic is of great interest to me because I believe in the benefits that acceptance tests can bring (such as living documentation), but I'm also very much aware that it is all too easy to create an unmaintainable monster whose weight eventually crushes you. So the question is: how do you navigate the minefield to get to the golden apple?

The key elements that contribute to the maintainability of acceptance tests are:

  1. Aligned business, software, and test models => a small change in the business requires only a similarly small change in the software and a small change in the tests (Gojko Adzic explains this very well in his JavaZone 2012 talk Long-term value of acceptance tests)
    • The key to gaining this alignment is to use the business language in all three models from the very start, building them around business concepts and relationships
  2. Testing below the surface (UI) level, if possible
    • Prefer to test your application via the service layer, or at worst the servlet layer; only test at the UI level if you really have to, and then as little as possible, because the UI is much more brittle (and also more difficult to test)
    • The more you want to test, the more you have to pay for it in terms of maintenance effort. Usually you decide to cover the part(s) of the application where the risk is highest; the best approach is a cost-benefit evaluation.
  3. Isolating tests from implementation by layers of test abstraction
    • Top layer: Acceptance tests should only describe "what" is tested, never "how" to test it. You must avoid writing scripts instead of specifications.
    • Layer 2: Instrumentation - right below the acceptance test sits an instrumentation layer, which extracts the input/output data from the test and defines how to perform the test via a high-level API provided by the next layer (we could call it a test DSL), e.g. "logInUser(X); openAccountPage();"
    • Layer 3: High-level test DSL - this layer contains all the implementation details and exposes to the layer above high-level primitives that it can use to compose tests without depending on implementation details (e.g. logInUser may use HtmlUnit to load a page, fill in a form, and post it). See the PageObject example below; a minimal sketch of the three layers also follows this list.
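
To make the layering concrete, here is a minimal Java sketch of how the three layers might fit together. Apart from logInUser and openAccountPage, which are mentioned above, all the names (ShopDriver, accountPageShowsOwner, ...) are my own invention, not code from any of the cited sources:

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Layer 1, the acceptance test: describes only *what* is verified.
public class AccountOverviewAcceptanceTest {

    // Entry point to the layer-3 DSL (the name ShopDriver is invented).
    private final ShopDriver shop = new ShopDriver();

    @Test
    public void loggedInUserCanSeeHerAccountPage() {
        // Layer 2, instrumentation: feeds the test's data into the DSL.
        shop.logInUser("jane");
        shop.openAccountPage();
        assertTrue(shop.accountPageShowsOwner("jane"));
    }
}

// Layer 3, the test DSL: all implementation details live here and can
// change freely without touching the test above. Stubbed for the sketch:
class ShopDriver {
    void logInUser(String name) {
        // e.g. use HtmlUnit to load the login page, fill in the form, post it
    }
    void openAccountPage() {
        // e.g. navigate to the account overview page
    }
    boolean accountPageShowsOwner(String name) {
        return true; // would really inspect the rendered page or service response
    }
}
```

In tools such as Concordion or Cucumber, the top layer would be a plain-text specification rather than Java code, and the instrumentation would be the fixture or step-definition classes that bind it to the DSL.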


(And of course many, if not all, of the rules for creating maintainable unit tests apply as well.)



Maintainability vs. ease of writing



As James Coplien points out, it might be better to focus on making it easy to write a test quickly rather than on maintainability, for example by making it easy to get the system into the state where you can access/test a feature, and by adding hooks that enable you to observe the state of the system (and thus simplify the verification of results). You can, for example, also break a system into a set of smaller systems. In other words: does the architecture support the right kind of granularity and level of product-increment changes?
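
A hedged sketch of what such affordances might look like, reusing the free-delivery rule quoted later in this post; the OrderSystem and all method names are invented for illustration, not code from Coplien:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the system under test; purely illustrative.
class OrderSystem {
    private final List<String> cart = new ArrayList<>();

    void addToCart(String customer, String item) { cart.add(item); }

    List<String> deliveryOptionsFor(String customer) {
        List<String> options = new ArrayList<>();
        options.add("STANDARD");
        if (cart.size() >= 2) options.add("FREE"); // the free-delivery rule
        return options;
    }
}

// Test-support hooks in the spirit of the advice above: make it cheap
// to *reach* an interesting state and to *observe* the result.
class OrderSystemTestHarness {
    private final OrderSystem system = new OrderSystem();

    // Setup hook: jump straight to the state under test instead of
    // clicking through the whole UI flow to get there.
    void givenCustomerWithBooksInCart(String customer, int count) {
        for (int i = 0; i < count; i++) system.addToCart(customer, "book-" + i);
    }

    // Observation hook: expose internal state so verification stays simple.
    boolean offersFreeDelivery(String customer) {
        return system.deliveryOptionsFor(customer).contains("FREE");
    }
}
```

A test can then establish its precondition in one line and verify the outcome in another, no matter how convoluted the real path to that state is.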

What others have to say



On the three layers of test isolation:

  1. Business rule or functionality level: what is this test demonstrating or exercising. Ideally illustrated with realistic key examples. For example: Free delivery is offered to customers who order two or more books, illustrated with an example of a customer who orders one book and doesn't get free delivery and an example of a customer who orders two books and gets free delivery.
  2. User interface workflow level: what does a user have to do to exercise the functionality through the UI, on a higher activity level. For example, put the specified number of books in a shopping cart, enter address details, verify that delivery options include or not include free delivery as expected.
  3. Technical activity level: what are the technical steps required to exercise the functionality. For example, open the shop homepage, log in with testuser and testpassword, go to the /book page, click on the first image with the book CSS class, wait for the page to load, click on the 'Buy now' link and so on.
(The Secret Ninja Cucumber Scrolls, pages 109-110)
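
In Cucumber-JVM terms, the first level is the Gherkin scenario, the second level lives in the step definitions, and the third level hides behind whatever objects the step definitions delegate to. A hedged sketch, with all names invented for illustration (using the modern io.cucumber annotations):

```java
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.List;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;

// Level 1 lives in the .feature file, e.g.:
//   Given a customer with 2 books in the cart
//   Then the customer is offered free delivery

public class DeliverySteps {

    // Level 2: user-workflow actions; no URLs or CSS selectors here.
    private final ShopWorkflow shop = new ShopWorkflow();

    @Given("a customer with {int} books in the cart")
    public void aCustomerWithBooksInTheCart(int count) {
        shop.putBooksInCart(count);
    }

    @Then("the customer is offered free delivery")
    public void theCustomerIsOfferedFreeDelivery() {
        assertTrue(shop.deliveryOptions().contains("Free delivery"));
    }
}

// Level 3 hides behind this workflow object: opening pages, clicking
// links, waiting for loads - the technical steps from the quote above.
class ShopWorkflow {
    void putBooksInCart(int count) {
        // drive the UI (or, better, the service layer) to fill the cart
    }
    List<String> deliveryOptions() {
        return Arrays.asList("Free delivery"); // stubbed for the sketch
    }
}
```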


On isolating tests from implementation in Selenium web UI tests by modelling pages with PageObjects, exposing services (e.g. "hasUnreadEmail", "loginExpectingFailure") and hiding the low-level implementation details:

PageObjects can be thought of as facing in two directions simultaneously. Facing towards the developer of a test, they represent the services offered by a particular page. Facing away from the developer, they should be the only thing that has a deep knowledge of the structure of the HTML of a page (or part of a page).
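
A minimal Selenium WebDriver sketch in the spirit of this quote; the page structure (the field ids, the .unread class) and the InboxPage are invented for illustration:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Faces the test author: offers the *services* of the login page.
// Faces the page: the only place that knows its HTML structure.
public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // A service of the page: a successful login lands on the inbox.
    public InboxPage loginAs(String username, String password) {
        typeCredentialsAndSubmit(username, password);
        return new InboxPage(driver);
    }

    // Another service: a login we expect to be rejected, so we
    // stay on (a fresh instance of) the login page.
    public LoginPage loginExpectingFailure(String username, String password) {
        typeCredentialsAndSubmit(username, password);
        return new LoginPage(driver);
    }

    // Knowledge of the HTML is confined to private helpers like this one,
    // so a markup change touches only this class, not the tests.
    private void typeCredentialsAndSubmit(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username); // assumed id
        driver.findElement(By.id("password")).sendKeys(password); // assumed id
        driver.findElement(By.id("login")).submit();              // assumed id
    }
}

// The page reached after a successful login; hasUnreadEmail from
// above would be one of its services.
class InboxPage {
    private final WebDriver driver;

    InboxPage(WebDriver driver) { this.driver = driver; }

    public boolean hasUnreadEmail() {
        return !driver.findElements(By.cssSelector(".unread")).isEmpty(); // assumed class
    }
}
```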


The ThoughtWorks Technology Radar 3/2012 promotes testing at the appropriate level (marked as suitable for adoption) while putting test recorders "on hold", with these comments:

[...], has encouraged widespread use of acceptance testing at the browser level. This unfortunately encouraged doing the bulk of testing where the cost to run the tests is the greatest. Instead, we should test at the appropriate level, as close to the code as possible, so that tests can be run with maximum efficiency. Browser-level tests should be the icing on the cake, supported by acceptance and unit tests executed at appropriate layers.

Test recorders seem invaluable as they provide a quick way to capture navigation through an application. However, we strongly advise against their regular use, as it tends to result in brittle tests which break with small changes to the UI. The test code they produce tends to be relatively poor and riddled with unnecessary duplication. Most importantly, test recorders tend to cut channels of communication between the test automation and development teams. When faced with an application that is difficult to test through the user interface, the solution is to have a critical conversation between the teams to build a more testable UI.


On aligned business, software and test models:

The principle of symmetric change

The best software is the one where the technical software model is aligned with the relevant business domain model. This ensures that one small change in the business domain (new requirements or changes to existing features) results in one small change in software. The same is true for the other artefacts produced for software — tests and documentation. When the tests are aligned with the business domain model, one small change in business will result in one small change in tests, making the tests easy to maintain. Tests described at a low level technical detail, in the language of technical UI interactions, are everything but aligned with a business model. If anything, they are aligned with the current design and layout of user interfaces. A small change in business requirements can have a huge impact on such tests, requiring hours of updates to tests after just a few minutes of changes to the code.

(The Secret Ninja Cucumber Scrolls, page 111)
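
To make the contrast concrete, here is my own made-up illustration (not from the book), reusing its free-delivery rule; ShopDsl stands in for a real business-language test DSL:

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class FreeDeliveryTest {

    @Test
    public void orderingTwoBooksEarnsFreeDelivery() {
        // Business-language version: aligned with the domain model,
        // so a UI redesign leaves it untouched.
        ShopDsl shop = new ShopDsl();
        shop.orderBooks(2);
        assertTrue(shop.isOfferedFreeDelivery());

        // The misaligned version would instead hard-code the current UI,
        // e.g. "click #cart > button.add twice, then read #shipping-cost",
        // and would break with every change to the page layout.
    }
}

// Invented one-class DSL, standing in for the real implementation:
class ShopDsl {
    private int booksOrdered;

    void orderBooks(int count) { booksOrdered += count; }

    boolean isOfferedFreeDelivery() { return booksOrdered >= 2; }
}
```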


Resources



This post is very much based on the following resources:

  1. The Secret Ninja Cucumber Scrolls (free PDF) by David de Florinier and Gojko Adzic, 2011-03-16
  2. Concordion (http://concordion.org/) by David Peterson
  3. Specification by Example, Gojko Adzic, Manning, 2011, ISBN 978-1617290084
  4. Long-term value of acceptance tests, Gojko Adzic, a talk at JavaZone 2012


You might also enjoy other posts on effective development.








Tags: testing opinion

