
Over time, the introduction of bugs into an active software project is inevitable. There is simply no way around this fact of life for software projects. The responsible, forward-thinking professional software developer reduces the occurrence and recurrence of bugs by implementing a thoughtful automated testing strategy.

In part 1 of this series we focused on defining the flavors of tests. In part 2 we will apply the Test Pyramid strategy to an example software project.

Putting it all together: The Test Pyramid

The Test Pyramid is a way to visualize and define the layers of a fully developed testing architecture, seen in Figure 1 below. The visualization shows the intended breadth of coverage of each layer, with Unit Tests forming the broad base of the testing strategy and Functional Tests covering the least breadth at the top.


Figure 1: The Test Pyramid

Other considerations when implementing the Test Pyramid include each test layer's concerns and how often the test suite for a given layer should be executed. Moving from the base of the pyramid upwards, a layer's focus shifts from primarily the codebase itself (with Unit Tests) to the client (with Functional/Behavioral Tests). Additionally, the test suite for a given layer should range from being executed everywhere (with Unit Tests) to being executed only on merge or deployment (with Functional Tests).

Implementation: supporting a project with the Test Pyramid

Given the separate flavors of test cases each layer of the Test Pyramid addresses, it is possible for a team of developers to lay the foundation of a low-coverage Test Pyramid implementation in just a single Agile sprint.

  • Unit Tests: confirming the inner workings of Components – Ideally, every Component code repository should include Unit Tests from the moment the repository is initialized.  If not, standard unit testing frameworks for a given Component's language are trivial to adopt.  Sometimes less trivial is the inclusion of code coverage and mocking packages.  A coverage package is crucial for ensuring that Unit Test code coverage stays consistently at 80% or greater, and mocks allow relevant units of code to be tested in isolation (see the Unit Test sketch after this list).
  • Component Tests: confirming Components communicate effectively – Component Test cases are often best implemented using the same test framework as the Unit Tests.  Because Component Tests should live in the same repository as the Component itself, sharing a single test framework keeps the tooling consistent.  Coverage is not a relevant concern for Component Tests; however, developers must be able to easily execute Component Tests separately from Unit Tests.  Component Tests should therefore be organized in a directory structure that makes it easy to discern between the two test layers (see the Component Test sketch after this list).
  • Interface Tests: confirming a Component's deployed environment configuration – As stated previously, Interface Tests are optional depending on the architecture and environment configuration of the software project.  Because these tests are intended to exercise a system in a deployed environment, Interface Tests should generally be placed in their own code repository.  In cases where Interface Tests share the same code repository as Component and Unit Tests, care should be taken to ensure that each test suite can be executed independently (see the Interface Test sketch after this list).
  • System Tests: confirming end-to-end data transfer – Like Interface Tests, System Tests are intended to exercise a system in a deployed environment and should therefore be placed in their own code repository.  One of the challenges of implementing System Tests is the added latency inherent in accessing the system in the same manner as a client application.  For example, System Tests that validate a deployed web API incur the latency of HTTP request/response cycles.  It is therefore recommended to use test frameworks that employ some form of concurrent or parallel processing so that test execution is not blocked on each HTTP request/response cycle (see the System Test sketch after this list).
  • Functional/Behavioral Tests: confirming end-user functionality – Like Interface and System Tests, Functional Tests are intended to exercise a system in a deployed environment.  Unlike those layers, however, Functional Tests emulate client devices and software, and therefore require special software packages for manipulating the client UIs.  For this reason Functional Tests should always exist in a separate code repository.  Just like System Tests, latency is an issue for Functional Tests, so employing some form of concurrent or parallel processing is recommended.  For web and mobile applications, specialized services such as SauceLabs and TestingBot exist specifically to parallelize Functional Test cases for faster execution (see the Functional Test sketch after this list).
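
Unit Test sketch – a minimal example of the approach described above, assuming a Python Component tested with pytest, unittest.mock for mocking, and pytest-cov for coverage; the PaymentProcessor class, its payment module, and its gateway collaborator are hypothetical names used only for illustration:

# test_payment_unit.py -- hypothetical unit test with a mocked collaborator
from unittest.mock import Mock

from payment import PaymentProcessor  # hypothetical Component under test

def test_charge_uses_gateway_and_returns_receipt():
    # Mock the external gateway so only PaymentProcessor logic is exercised.
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok", "id": "txn-123"}

    processor = PaymentProcessor(gateway=gateway)
    receipt = processor.charge(amount_cents=1000)

    # Assert against the mock rather than any real payment backend.
    gateway.charge.assert_called_once_with(1000)
    assert receipt["id"] == "txn-123"

With pytest-cov installed, a command such as pytest --cov=payment --cov-fail-under=80 can enforce the 80% coverage threshold on every run.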
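
Component Test sketch – one way to keep the layers separable is a layout with tests/unit/ and tests/component/ directories, so each suite can be run on its own (for example, pytest tests/unit versus pytest tests/component). The OrderService and OrderRepository Components below are hypothetical:

# tests/component/test_orders_with_repository.py -- hypothetical component test
import sqlite3

from orders import OrderService, OrderRepository  # hypothetical Components

def test_order_service_persists_through_repository(tmp_path):
    # Use a real local SQLite database instead of a mock, so the two
    # Components are exercised together across their actual boundary.
    db = sqlite3.connect(str(tmp_path / "orders.db"))
    repository = OrderRepository(db)
    service = OrderService(repository)

    order_id = service.place_order(sku="ABC-1", quantity=2)

    assert repository.find(order_id).quantity == 2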
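
Interface Test sketch – a minimal check against a deployed Component using the requests library; the SERVICE_BASE_URL environment variable, the /health endpoint, and the JSON fields are all assumptions for illustration:

# interface_tests/test_service_config.py -- hypothetical interface test
import os

import requests

BASE_URL = os.environ.get("SERVICE_BASE_URL", "https://staging.example.com")

def test_health_endpoint_reports_expected_environment():
    # Exercise the Component as deployed, verifying its environment wiring.
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
    body = response.json()
    assert body.get("environment") == "staging"
    assert body.get("database") == "connected"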
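
System Test sketch – one way to reduce the impact of HTTP latency is to issue requests in parallel with Python's concurrent.futures; the API_BASE_URL variable and /orders endpoint are assumptions:

# system_tests/test_end_to_end_orders.py -- hypothetical system test
import os
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = os.environ.get("API_BASE_URL", "https://staging-api.example.com")

def fetch_order(order_id):
    return requests.get(f"{BASE_URL}/orders/{order_id}", timeout=30)

def test_orders_are_retrievable_end_to_end():
    order_ids = ["1001", "1002", "1003", "1004"]
    # Issue the HTTP calls in parallel so the suite is not blocked
    # waiting on each request/response cycle in turn.
    with ThreadPoolExecutor(max_workers=4) as pool:
        responses = list(pool.map(fetch_order, order_ids))
    assert all(r.status_code == 200 for r in responses)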
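
Functional Test sketch – a minimal browser-driven example using Selenium WebDriver, which could equally be pointed at a remote grid such as SauceLabs or TestingBot via webdriver.Remote; the URL and element IDs are assumptions:

# functional_tests/test_login_flow.py -- hypothetical functional test
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_user_can_log_in():
    # A local Chrome session; a remote driver would be used for a cloud grid.
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/login")  # assumed URL
        driver.find_element(By.ID, "username").send_keys("demo-user")
        driver.find_element(By.ID, "password").send_keys("demo-pass")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()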

Conclusion

In this series of posts we have defined the flavors of testing and how they fit into the Unit, Component, Interface, System, and Functional Test layers that compose the Test Pyramid.  Additionally, we covered how the Test Pyramid visualization itself lends itself well to denoting the breadth, concerns, and execution cadence of each layer's test suite.  Finally, suggestions were given for breaking up the layers of the Test Pyramid across code repositories within the software project's ecosystem.


Sources:
http://www.artima.com/forums/flat.jsp?forum=106&thread=204677
http://www.martinfowler.com/articles/consumerDrivenContracts.html
http://martinfowler.com/bliki/TestPyramid.html
https://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid
