Over time, it is inevitable that bugs will be introduced into an active software project. There is simply no way around this fact of life for software projects. The responsible, forward-thinking professional software developer reduces the occurrence and recurrence of bugs by implementing a thoughtful automated testing strategy.

Tests for software come in several flavors, which can be summed up nicely as layers of the Test Pyramid. The aim of this series is to describe these testing flavors and map them to layers of the Test Pyramid. Part 1 focuses on defining the flavors of tests; part 2 will apply the Test Pyramid strategy to an example software project.

Test Flavors

It is important we define the term “Component” as it applies to the different flavors of tests. A software Component is a unit of software that is independently replaceable and upgradeable. Admittedly, there is a lot of gray area in this definition. An independently replaceable and upgradeable unit of software could apply to something as large as an ecommerce website’s shopping cart service or as small as the individual functions comprising that service. For the purposes of this discussion, we define a Component at the shopping cart service level, given that the service encompasses a much more independent piece of software. Additional example Components include other ecommerce functions such as payments, order management, and the site’s numerous client applications. In microservice architectures, Components typically have a 1:1 ratio with the code repositories that comprise the project’s architecture.

Unit Tests: confirming the inner workings of Components

With the above understanding of a Component, we can ask ourselves the question “how do we ensure the inner workings of this Component function as we intend?” The answer to that question is Unit Tests. Unit Tests are the flavor of tests that exercise all of the smallest functional areas of a Component while relying on no other dependencies. All intra-Component control flow logic, data manipulation, and side effect initiation should be covered with individual Unit Tests. Components are typically comprised of several files, and Unit Test files should have a near 1:1 ratio with those files. The percentage of Component code that Unit Tests cover should remain above 80% throughout the software project’s life cycle. This coverage percentage is somewhat arbitrary and, as Alberto Savoia’s famous allegory on code coverage demonstrates, is less important than having as much functionality and logic covered as possible.

Unit Tests rely on no other dependencies, which includes not only other Components in a system, but also other units of code within the same Component. For example, if a shopping cart service Component has a piece of functionality for getting a user’s current cart information, and that functionality itself relies on a set of utilities for manipulating raw user data, then the utilities should be mocked in order to narrow the Unit Test to the “get cart” functionality, as seen in Figure 1 below. Unit Tests for the mocked utility functions should live elsewhere.


Figure 1: Example of Unit Test with no reliance on any dependency
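As a minimal sketch in Python (the get_cart function and normalize_user_data utility here are hypothetical, and unittest.mock stands in for whatever mocking library a project uses), such a Unit Test might look like this:

# cart/service.py -- hypothetical "get cart" functionality
from cart import utils

def get_cart(raw_user_record):
    # Relies on an intra-Component utility to clean up raw user data.
    user = utils.normalize_user_data(raw_user_record)
    return {"user_id": user["id"], "items": user.get("cart_items", [])}

# tests/test_cart_service.py
from unittest.mock import patch
from cart import service

@patch("cart.service.utils.normalize_user_data")
def test_get_cart_returns_normalized_items(mock_normalize):
    # The utility is mocked so the test exercises only the "get cart" logic.
    mock_normalize.return_value = {"id": 42, "cart_items": ["sku-1"]}
    cart = service.get_cart({"messy": "raw data"})
    assert cart == {"user_id": 42, "items": ["sku-1"]}
    mock_normalize.assert_called_once_with({"messy": "raw data"})

The utility’s behavior is pinned to a known return value, so a failure here points at the “get cart” logic itself rather than at the utility.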

Component Tests: confirming Components can communicate effectively

Since Unit Tests give the software developer high confidence that the inner workings of a Component are valid, we can now turn our attention to communication between Components. Validating communication between Components is the focus of Component Tests. To facilitate this, it is critical that each Component exposes a reliable interface. Relevant ways a Component’s interface may be exercised should be enumerated and reflected in individual Component Tests.

In Component Tests, only the Component under test is exercised, while the other Components that interface with it are mocked. By doing so, the inner workings of the Component that depend upon one another are exercised together and the Component’s interface is validated. For example, consider a checkout service that gets a user’s cart in order to proceed through checkout. When testing the cart service Component, the checkout service that interfaces with it is mocked, narrowing the Component Test to the cart service’s interface, as seen in Figure 2 below. A separate set of Component Tests would be created in order to specifically exercise the checkout service Component’s interface.


Figure 2: Example of Component Test with no reliance on other Components
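As a sketch of one way to do this (assuming a hypothetical Flask-based cart service; the inventory service named here is an invented downstream Component for illustration), the cart service’s real HTTP interface is driven the way a consumer such as the checkout service would drive it, while the cart service’s own cross-Component call is mocked:

# cart_service.py -- hypothetical cart Component exposing an HTTP interface
import requests
from flask import Flask, jsonify

app = Flask(__name__)
INVENTORY_SERVICE_URL = "http://inventory.internal"  # hypothetical downstream Component

def fetch_items(user_id):
    # In production this call crosses a Component boundary.
    return requests.get(f"{INVENTORY_SERVICE_URL}/items/{user_id}").json()

@app.route("/carts/<int:user_id>")
def get_cart(user_id):
    return jsonify({"user_id": user_id, "items": fetch_items(user_id)})

# test_cart_component.py
from unittest.mock import patch
import cart_service

@patch("cart_service.fetch_items", return_value=["sku-1"])
def test_cart_interface_returns_cart_json(mock_fetch):
    # Exercise the Component's real HTTP interface; the other Component is mocked.
    response = cart_service.app.test_client().get("/carts/42")
    assert response.status_code == 200
    assert response.get_json() == {"user_id": 42, "items": ["sku-1"]}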

Interface Tests: confirming Components’ deployed environment configuration

While Component Tests address the first concern of communication, “is this Component’s interface reliable?”, Interface Tests address an additional concern: “are deployed Components configured to communicate properly?” This flavor of testing may be optional depending on how Components are deployed in a system. For microservice architectures, Interface Tests are relevant as a sanity check that the various services deployed in the architecture are communicating as expected. If a monolithic application deploys all Components on a single server, Interface Tests are not necessary. Each relevant configuration of a deployed system should have Interface Tests validating that the deployed environment is properly connected and its Components are communicating.
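A minimal Interface Test sketch follows, assuming each deployed service exposes a health endpoint that verifies connectivity to its dependencies and that the environment’s service URLs arrive via configuration; both the endpoint and the environment variables are assumptions, not a prescribed convention:

# test_interfaces.py -- sanity-check that deployed Components are wired together
import os
import requests
import pytest

# Hypothetical configuration for the deployed environment under test.
SERVICES = {
    "cart": os.environ.get("CART_SERVICE_URL", "http://cart.staging.example.com"),
    "checkout": os.environ.get("CHECKOUT_SERVICE_URL", "http://checkout.staging.example.com"),
}

@pytest.mark.parametrize("name,base_url", SERVICES.items())
def test_service_reports_healthy_dependencies(name, base_url):
    # Assumes each service's /health endpoint checks the connections
    # to the Components it depends on.
    response = requests.get(f"{base_url}/health", timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"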

System Tests: confirming end-to-end data transfer

Since Interface Tests give confidence that communication between Components in a deployed environment is possible, it is now time to address the bird’s-eye view of the system. System Tests exercise and validate the full end-to-end behavior of deployed services in a way that simulates how client applications interact with the system. Relevant ways the endpoints of the system may be exercised should be enumerated and reflected in individual System Tests. This is not entirely unlike exercising the relevant endpoints exposed by the system’s boundary Components in Component Tests, with the exception that now all Components are exercised without mocks.

In System Tests, the deployed system is exercised in a way that is similar to a client application, even though the client application itself is not exercised directly. For example, a website may expose an API that a client application makes requests against in order to populate and render data for user consumption. A System Test suite’s harness would be capable of generating sessions that in turn generate requests to the deployed API application and validate that application’s responses, as seen in Figure 3 below. In this example, the API Service is none the wiser that the System Tests, and not an actual Client Application, are accessing its services (aside perhaps from User-Agent or other header-checking mechanisms that may be helpful for expanding or limiting the services available to the System Tests).


Figure 3: Example of System Tests simulating Client Application requests to an API Service
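A sketch of such a harness in Python (the deployed base URL and the /login and /carts/me endpoints are hypothetical) generates a session and issues requests exactly as a client application would, with no mocks anywhere in the stack:

# test_system_cart_flow.py -- end-to-end against a deployed environment
import requests

BASE_URL = "https://api.staging.example.com"  # hypothetical deployed API

def test_user_can_view_cart_end_to_end():
    session = requests.Session()
    # Establish a session the way a client application would.
    login = session.post(f"{BASE_URL}/login", json={"user": "test", "password": "secret"})
    assert login.status_code == 200
    # Every Component behind the API is exercised for real -- no mocks.
    cart = session.get(f"{BASE_URL}/carts/me")
    assert cart.status_code == 200
    assert "items" in cart.json()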

Functional/Behavioral Tests: confirming end-user functionality

All of the Unit, Component, Interface, and System Tests in the world ultimately fail to answer one of the most important and fundamental questions: “does my software work for the end-user on the device they’ll be using in a deployed environment?” For this crucial concern, Functional Tests (sometimes referred to as Behavioral Tests) are a layer of testing that instantiates or emulates real versions of the end-user devices or software in which the client application is rendered, as seen in Figure 4 below. Given the high cost of the resources necessary to instantiate or emulate end-user devices and software, Functional Tests tend to cover only the happy-path and common-failure cases of business-critical functionality.


Figure 4: Example of Functional Tests simulating a real user interacting with a Client Application

Automating Functional Tests requires collaboration between the developers of the automated tests and the client application developers in order to agree on a strategy for UI interactions that remains consistent between client application builds, such as stable, test-specific element identifiers. These collaborations ensure Functional Tests are reliable and durable.
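For example, if the client application’s developers agree to attach stable data-testid attributes to key elements (one common convention among several; the site URL and attribute names here are hypothetical), a Selenium-based Functional Test might look like this:

# test_functional_cart.py -- drives a real browser against the deployed site
from selenium import webdriver
from selenium.webdriver.common.by import By

SITE_URL = "https://www.staging.example.com"  # hypothetical deployed client app

def test_user_can_add_item_to_cart():
    driver = webdriver.Chrome()  # a real (or emulated) end-user browser
    try:
        driver.get(SITE_URL)
        # Selectors target the stable, test-specific attributes agreed upon
        # with the client application developers.
        driver.find_element(By.CSS_SELECTOR, "[data-testid='add-to-cart']").click()
        badge = driver.find_element(By.CSS_SELECTOR, "[data-testid='cart-count']")
        assert badge.text == "1"
    finally:
        driver.quit()

Because the selectors are decoupled from styling and layout, the test survives visual redesigns of the client application as long as the agreed-upon identifiers remain in place.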

Conclusion

We have defined the flavors of testing and how they fit into the Unit, Component, Interface, System, and Functional Test layers that compose the Test Pyramid. In part 2 we will explore how these layers come together to form the overall Test Pyramid.


Sources:
Alberto Savoia on code coverage (Artima Developer forum): http://www.artima.com/forums/flat.jsp?forum=106&thread=204677
Ian Robinson, “Consumer-Driven Contracts”: http://www.martinfowler.com/articles/consumerDrivenContracts.html
Martin Fowler, “TestPyramid”: http://martinfowler.com/bliki/TestPyramid.html
Mike Cohn, “The Forgotten Layer of the Test Automation Pyramid”: https://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid
