Why integration test?
Unit tests are an essential part of any resilient software solution. Whilst unit tests ensure that the various components of our software each work correctly in isolation, they don't always guarantee that the outcome is sound once all the different pieces of functionality are connected together.
We may want to ensure that a provider layer containing business logic works correctly when dealing with a real database connection implemented by one of its dependencies. Or perhaps we'd like to ensure that our API delivers a valid response to a request constructed from real data, i.e. data that is neither faked nor mocked. Tools like Postman can help us invoke our service whilst it's running or being debugged; however, it would be great to have a project included in the solution which performs this kind of test automatically.
The benefits of integration testing
Integration testing lets us quickly evaluate whether changes to our solution's components and dependencies have broken the end-to-end behaviour of our application. It provides automation and allows us to test the full request and response pipeline, and to trace faults that may not be so easy to identify using more manual methods. We can use integration tests to assess speed under load when working with real data rather than mocked repositories, for example. In short, this kind of test gives us an extra measure of confidence in our solution.
Separation is good
Unit tests and integration tests are different beasts. As such they should be kept separate, and it is good practice to house integration tests in a separate project from the unit tests within a solution. This allows us to control which tests are run and when. For example, on a build server we probably only require that all unit tests pass, and are not interested in the integration tests, as that environment is likely very different from a development or production machine. It also gives us flexibility when using something like the test explorer in Visual Studio.
A working case study
Here at cap hpi we have been developing microservices that will provide functionality for a larger API. One of these services has been developed to provide forecast valuations - short term forecasts and future residual values. This microservice is intended to be fast and lightweight, with a single purpose. The API exposes a method on a controller which accepts vehicle information and returns valuation data as a response.
The integration tests required revolve around two aspects of the solution - the data layer, which returns valuation data from a database, and the API. Unit tests already exercise the provider layer using mocked data layer repositories. What we want here are integration tests ensuring that the repositories function correctly when fetching data from a real database. In addition, it would be good to test the microservice end to end by invoking the API with a proper HTTP request and validating the response - by definition this also exercises all of the business logic in the provider layer.
An existing solution providing similar data sets required a separate instance of the underlying service to be running before its integration tests could be run. Let's look at how this was improved upon.
First, let's look at a few requirements and some details of the solution. The microservice is an ASP.NET Core solution with projects for the API, the business layer, the data/repository layer and a models project. We are going to use NUnit as our test framework.
We'll need to ensure we reference the following packages:
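The original package list is not reproduced here, but for a .NET Core 2.1 API tested with NUnit and the in-memory test host, the integration test project would typically reference something along the following lines (package names are the usual suspects for this setup; the versions shown are illustrative assumptions, not taken from the actual solution):

```xml
<ItemGroup>
  <!-- In-memory ASP.NET Core test host (provides TestServer) -->
  <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.1.1" />
  <!-- NUnit framework plus the adapter so tests appear in Test Explorer -->
  <PackageReference Include="NUnit" Version="3.10.1" />
  <PackageReference Include="NUnit3TestAdapter" Version="3.10.0" />
  <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.7.2" />
</ItemGroup>
```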
Our solution targets the .NET Core 2.1 framework, and the integration tests project uses the Microsoft.NET.Sdk.Web SDK
A look at the code
In this simple example we make use of NUnit's OneTimeSetUp attribute to create a method that fires once before any of our tests run and puts the necessary pieces in place. We create an instance of the TestServer class and pass in a WebHostBuilder - the key here is the Startup class. The Startup class is the entry point for our API and sets up configuration for the required services; it is where we have configured the dependency injection that's now built into .NET Core.
The TestServer class (found in the Microsoft.AspNetCore.TestHost namespace) exposes a CreateClient method which returns an HttpClient object. We can use this client to make asynchronous calls to our test server object.
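The setup described above can be sketched as follows. The fixture and field names here are hypothetical stand-ins; Startup is the API's real startup class referenced in the text:

```csharp
using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using NUnit.Framework;

[TestFixture]
public class ValuationApiIntegrationTests
{
    private TestServer _server;
    private HttpClient _client;

    [OneTimeSetUp]
    public void Setup()
    {
        // Build an in-memory host from the API's real Startup class, so the
        // genuine DI configuration and middleware pipeline are exercised.
        _server = new TestServer(new WebHostBuilder()
            .UseStartup<Startup>());

        // The client issues HTTP requests directly against the in-memory server.
        _client = _server.CreateClient();
    }
}
```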
Now for the actual integration test method. We will use NUnit's TestCase attribute to configure a couple of test invocations of the API controller method using some sample request JSON files and expected results. We will use the typical AAA (Arrange, Act, Assert) pattern. If all is well, our integration test will pass having used all of the real components of our solution, without mocking any dependencies such as the repositories.
Notice that we call EnsureSuccessStatusCode on the response to ensure a valid 200-299 range response code, which will throw an exception if this isn't the case. We can deserialize the response string from the result of the call into a model and make some relevant assertions.
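A test method following that pattern might look like the sketch below. The route, sample file names, and the ValuationResponse model are hypothetical, and the deserialization assumes Newtonsoft.Json - none of these specifics come from the original solution:

```csharp
// Requires: System.IO, System.Text, System.Threading.Tasks,
// Newtonsoft.Json and NUnit.Framework, inside the fixture shown earlier.
[TestCase("Requests/sample-vehicle-1.json", 3)]
[TestCase("Requests/sample-vehicle-2.json", 3)]
public async Task GetValuation_ReturnsExpectedForecasts(
    string requestFile, int expectedForecastCount)
{
    // Arrange: load a sample request body from a JSON file on disk.
    var json = File.ReadAllText(requestFile);
    var content = new StringContent(json, Encoding.UTF8, "application/json");

    // Act: invoke the controller through the in-memory test server.
    var response = await _client.PostAsync("/api/valuations", content);

    // Assert: a 2xx status code, then deserialize and check the payload.
    response.EnsureSuccessStatusCode();
    var body = await response.Content.ReadAsStringAsync();
    var result = JsonConvert.DeserializeObject<ValuationResponse>(body);
    Assert.That(result.Forecasts, Has.Count.EqualTo(expectedForecastCount));
}
```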
We can also include a method marked with the OneTimeTearDown attribute to perform any clean up if necessary.
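For this fixture, tear-down amounts to disposing the client and server created in the one-time setup - a minimal sketch:

```csharp
[OneTimeTearDown]
public void TearDown()
{
    // Release the HttpClient and the in-memory server once all tests have run.
    _client?.Dispose();
    _server?.Dispose();
}
```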
The tests didn't go according to plan on the first pass - our API project had several dependencies which didn't make it into the integration test project's output folder at build time, such as the nLog.config file required for our logging, and the appsettings.json file, which is of course used for a lot of initial setup and configuration in the Startup class. These problems were solved with some simple post-build events.
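A post-build event of this kind simply copies the missing files into the test project's output folder; the paths below are illustrative and would need adjusting to the actual project layout:

```xml
<PropertyGroup>
  <!-- Copy config files the API needs at runtime into the test output folder -->
  <PostBuildEvent>
    xcopy /Y "$(SolutionDir)Api\nLog.config" "$(TargetDir)"
    xcopy /Y "$(SolutionDir)Api\appsettings.json" "$(TargetDir)"
  </PostBuildEvent>
</PropertyGroup>
```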
Another unexpected piece of behaviour was the actual inferred application name. Our API project on startup had some swagger documentation configuration which made an assumption that an XML comments file would be present, with the name derived from:
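The original snippet is not shown, but a common pattern for deriving the Swagger XML comments path from the application name looks something like this (a reconstruction, not the actual code; IncludeXmlComments is Swashbuckle's API):

```csharp
// hostingEnvironment is the injected IHostingEnvironment.
// Under TestServer, ApplicationName resolves to "TestHost", so this
// file won't exist - which is exactly the problem described below.
var xmlFile = Path.Combine(AppContext.BaseDirectory,
    $"{hostingEnvironment.ApplicationName}.xml");
options.IncludeXmlComments(xmlFile);
```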
It turns out that, when using the TestServer object, the application name actually resolves to TestHost rather than the assumed full project namespace. With these small issues resolved we are able to run our full integration tests, along with all the other unit tests, with a click of the Run All button in Test Explorer. We have ensured our solution works end to end with its real components, such as the database, and can give it another once-over by running the service and hitting it with a Postman call for good measure.
It is extremely useful to include a full integration test within the solution itself - it allows another developer maintaining or enhancing our code to confirm that the application works correctly on their system without having to use any external tools like Postman.
There is a limit to the range of tests we might want to include in this type of integration test project. We don't want to write test scenarios that cover every data eventuality - that isn't the goal; we ensure that the basic functionality works soundly. We can invoke methods on the API designed to report a HealthCheckResponse, which might validate that the database is alive and that the service can connect to it. We create simple tests that make basic assertions not dependent on changing data.
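A health-check test of that sort can stay very small. The route and the response text asserted here are hypothetical examples:

```csharp
[Test]
public async Task HealthCheck_ReportsServiceIsHealthy()
{
    // Act: hit the health endpoint through the in-memory server.
    var response = await _client.GetAsync("/api/health");

    // Assert: a 2xx status and a healthy payload - data-independent checks.
    response.EnsureSuccessStatusCode();
    var body = await response.Content.ReadAsStringAsync();
    StringAssert.Contains("Healthy", body);
}
```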
It's relatively straightforward to implement a full integration test in ASP.NET Core using the WebHostBuilder and TestServer constructs.