In the previous post, we discussed why continuous integration is important and what makes a good CI tool, and then we set up a basic TeamCity project to build our application using Docker.

But continuous integration is much more than that, so in this article, we are going to learn a bit about more advanced features of TeamCity. We are also going to write an integration test to make sure our application is communicating with the database every time we build it.

The starting point for this part is the docker-series-prepare-ci-end branch of the docker-series repo on our GitHub.

Important note: This time around, if you haven’t done so already, you need to fork the repo to follow along, since you’ll be using it in your TeamCity builds. This is really important if you want to follow the steps of the article and get the most out of it, because we’ll be making incremental changes to the branch.

Here’s what we are going to learn this time:

  • How to add an integration test to the ASP.NET Core app
  • How to run the integration tests locally with Docker Compose
  • How to upgrade the TeamCity agent with Docker Compose support
  • How to run the integration tests with TeamCity and connect build configurations
  • How to remove hardcoded image tags and make test results clearly visible

Let’s get down to it.

How to Add an Integration Test to the ASP.NET Core App

First things first. We are going to add some integration tests to our application.

We already have a unit test, but since we are using TeamCity now, this is a great opportunity to introduce integration tests. This is something every production-grade application should have because integration tests ensure that all of your application parts communicate properly.

In our case, we are going to make sure that our app communicates with the database.

For that purpose, we are going to create a new project for integration tests, and add some simple tests.

Adding an Integration Tests Project

Like we did with our unit tests project in part 1 of the series, we are going to add a new one for integration tests.

With a little twist.

So we start by navigating to the project root and adding a new xUnit project:
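Something along these lines should do the trick (the Integration folder name is our assumption based on how the project is used later; adjust it to your own layout):

```bash
dotnet new xunit -o Integration
```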

After that, we need to add the newly created project to our solution:
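For example, assuming the solution file sits in the root and the project lives in the Integration folder:

```bash
dotnet sln add Integration/Integration.csproj
```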

And, like the previous time, add and restore the required packages:
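Running from the Integration folder, the commands would look roughly like this (the test host package is the only extra one we assume we need here):

```bash
dotnet add package Microsoft.AspNetCore.TestHost
dotnet restore
```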

As you can see, instead of using the Moq library, we’ve added the Microsoft.AspNetCore.TestHost package because it provides a nice interface for writing integration tests in ASP.NET Core projects.

We don’t mock stuff in integration tests.

This time around, we need to reference our main project AccountOwnerServer instead of just the Contracts project:
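Assuming the same folder layout as above (the exact paths may differ in your fork), the reference can be added like this:

```bash
dotnet add Integration/Integration.csproj reference AccountOwnerServer/AccountOwnerServer.csproj
```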

We need it because we have to use the real server and configuration.

That’s all we need to set up the project. You can quickly run dotnet build to check that everything compiles.

Great! Let’s proceed to the integration test.

First, we should rename the UnitTest1.cs file to something more appropriate, like IntegrationTests.cs.

Then we can add our test:
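A minimal sketch of such a test (the Startup class and the /api/owner route come from earlier parts of the series; the exact namespaces may differ in your fork):

```csharp
using System.Net;
using System.Threading.Tasks;
using AccountOwnerServer;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Xunit;

public class IntegrationTests
{
    [Fact]
    public async Task GetAllOwners_ReturnsOkResponse()
    {
        // Arrange: spin up an in-memory test server using the real Startup class
        var server = new TestServer(new WebHostBuilder().UseStartup<Startup>());
        var client = server.CreateClient();

        // Act: hit the owners endpoint
        var response = await client.GetAsync("/api/owner");

        // Assert: we expect 200 OK if the app can reach the database
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```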

Ok, so here we have the simplest test possible. The Microsoft.AspNetCore.TestHost library helps us create the test server and client.

Then, we simply hit the /api/owner endpoint and assert that the HTTP response status is 200 OK, which it should be if we’ve set up our database correctly.

Important note: If you get an error stating that appsettings.json or nlog.config cannot be found while running the test, you can solve it by going to the properties of those files and setting the Copy to Output Directory property to Copy always or Copy if newer.

Essentially, your AccountOwnerServer.csproj should contain:
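Roughly something like this (the Update syntax assumes the Web SDK already picks these files up as content items):

```xml
<ItemGroup>
  <Content Update="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
  <Content Update="nlog.config">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```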

Separating the Test Initialization

As easy and simple as it is, this way of writing tests is plain wrong.

Why?

Because we would need to set up the test server and test client in every test.

So let’s make a helper class, TestContext.cs, and extract the initialization logic into it:
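A sketch of what that class could look like (namespaces are omitted for brevity, and Startup again comes from the main project):

```csharp
using System;
using System.Net.Http;
using AccountOwnerServer;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;

public class TestContext : IDisposable
{
    private TestServer _server;

    // The server stays private, the client is exposed to the tests
    public HttpClient Client { get; private set; }

    public TestContext()
    {
        SetUpClient();
    }

    private void SetUpClient()
    {
        _server = new TestServer(new WebHostBuilder().UseStartup<Startup>());
        Client = _server.CreateClient();
    }

    public void Dispose()
    {
        // Clean up the client and server once the tests are done
        Client?.Dispose();
        _server?.Dispose();
    }
}
```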

Now, this looks much better. We’ve implemented the logic, made the server part private, and the client part public. This makes more sense.

We also implemented the IDisposable interface to make sure the context is cleaned up after we’ve finished testing.

Let’s have a look at how our IntegrationTests.cs class looks now:
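Something along these lines (the Owner model namespace and the Newtonsoft.Json-based deserialization are our assumptions; adjust them to your project):

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;
using Entities.Models;
using Newtonsoft.Json;
using Xunit;

public class IntegrationTests : IDisposable
{
    private readonly TestContext _context;

    public IntegrationTests()
    {
        // The Arrange part now lives entirely in TestContext
        _context = new TestContext();
    }

    [Fact]
    public async Task GetAllOwners_ReturnsOkResponse()
    {
        var response = await _context.Client.GetAsync("/api/owner");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }

    [Fact]
    public async Task GetAllOwners_ReturnsListOfOwners()
    {
        var response = await _context.Client.GetAsync("/api/owner");
        response.EnsureSuccessStatusCode();

        var owners = JsonConvert.DeserializeObject<List<Owner>>(
            await response.Content.ReadAsStringAsync());

        Assert.NotEmpty(owners);
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}
```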

So, as you can see, we’ve initialized the context in the constructor. This helped us clean up our methods (Facts) to the extent that we don’t even need the Arrange part in them.

We’ve also separated our test logic into two different methods. The first one checks if the status code is 200 OK, and the second one is for testing if that particular endpoint returns the list of owners.

So clean, much wow.

For extensive testing, there are even better ways to do this, but for our purposes, this will do.

Let’s go on.

Running Integration Tests Locally

Now let’s try out our integration tests by navigating to the Integration folder and running the dotnet test command.

Both of our tests fail!

integration tests failed

Can you guess why?

The reason is simple. We don’t have a MySQL database locally, so the requests to the endpoint in the tests fail with the HTTP response 500 Internal Server Error.

Up until now, we’ve run our application using the docker-compose up command like we’ve learned in part 4 of the series.

And since Docker Compose creates both the containers and the network in which they reside, we need to include our integration tests into docker-compose.yml somehow.

But, we don’t want the integration tests to run with each build since they are much slower than unit tests. That being the case, we need to make a new Dockerfile and a new docker-compose file specifically for the integration tests.

So let’s start with that.

Let’s create a Dockerfile for the Integration project in the solution root and name it Dockerfile.Integration. This file will just be the modified version of our existing Dockerfile:
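A sketch of what Dockerfile.Integration could look like, assuming the same working directory layout as the original Dockerfile from part 3 (paths are assumptions):

```dockerfile
FROM microsoft/dotnet:2-sdk

# Copy the whole solution and restore packages
WORKDIR /home/app
COPY . .
RUN dotnet restore

# Switch to the integration tests project and make the image executable
WORKDIR /home/app/Integration
ENTRYPOINT ["dotnet", "test"]
```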

So we’ve changed the base image to microsoft/dotnet:2-sdk to be able to run dotnet test, set the working directory to the Integration folder and made this image executable by setting the entry point to the dotnet test command.

If any of these concepts are unfamiliar to you, go back to part 3 of the series where we go into detail on how to configure a Dockerfile.

Next, we want to create a new and slightly modified docker compose file docker-compose.integration.yml in the solution root:
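A sketch of the file, based on the docker-compose.yml from the earlier parts of the series (the service names, MySQL settings, init script path, and dbdata volume are taken from the series, so double-check them against your own file):

```yaml
version: '3.0'

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "1"
    volumes:
      - dbdata:/var/lib/mysql
      - ./_MySQL_Init_Script:/docker-entrypoint-initdb.d
    restart: always

  accountownerapp:
    # Use the image TeamCity already built instead of building it here
    image: my-registry:50000/codemazeblog/accountownerapp:build-2
    depends_on:
      - db

  integration:
    # Runs the integration workflow defined in Dockerfile.Integration
    build:
      context: .
      dockerfile: Dockerfile.Integration
    depends_on:
      - accountownerapp

volumes:
  dbdata:
```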

This file is pretty similar to the docker-compose.yml we already used to build our project, but we made a few important changes. The first one is that we use the pre-built image for our ASP.NET Core app instead of building it, and the second one is the addition of the integration service, which is responsible for running the integration workflow defined in the Dockerfile.Integration file.

We’ve also made the integration service dependent on the accountownerapp service, to make sure the database and application are up and running before we start testing them.

For now, we are going to use the fixed tag for our application image (build-2), but we’ll learn how to change that to depend on the latest TeamCity build.

To run the integration tests we simply need to run:
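Assuming the file is named docker-compose.integration.yml, the command looks like this:

```bash
docker-compose -f docker-compose.integration.yml up
```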

Since we want to run a non-default docker compose file, we can use the -f option to specify which file to run.

After a while, you should see your integration service starting, and the tests should pass.

tests passed

Cool, huh?

Both of our integration tests run locally, but let’s see if that’s the case with TeamCity too.

Running Tests with TeamCity

If we tried to run our docker-compose commands with TeamCity, we would get an error like this:

TeamCity Agent doesn’t come with the docker compose runner out of the box. This will probably get fixed soon enough, but for now, we need to do a bit of Docker magic to make this work.

So let’s quickly make our own TeamCity agent image with docker compose.

Upgrading TeamCity Agent with Docker Compose

First, let’s navigate to the /Infrastructure/TeamCity directory and make a new folder agent.

Inside it, we are going to make a new Dockerfile:
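One way to do it is sketched below: start from the official agent image and install a docker-compose binary on top of it (the pinned version and the installation method are our assumptions; the exact command in the original setup may differ):

```dockerfile
FROM jetbrains/teamcity-agent

# Install docker-compose on top of the official TeamCity agent image
RUN apt-get update && apt-get install -y curl \
    && curl -L "https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m)" \
         -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose
```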

This complicated command does one simple thing, and that’s the installation of the latest docker compose on top of the TeamCity agent image.

Now, let’s build our own agent. Let’s navigate to the agent folder and type:
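For example (the image name is just a suggestion, use whatever fits your registry or Docker Hub account):

```bash
docker build -t codemazeblog/teamcity-agent .
```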

And after that, we are going to push the image to our local registry or Docker Hub, since it will probably come in handy another time:
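Again, a sketch with the same assumed image name:

```bash
docker push codemazeblog/teamcity-agent
```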

Finally, we are going to navigate to the /Infrastructure/TeamCity directory again and stop the TeamCity agent and the TeamCity server with docker-compose stop.

If you do docker-compose down, you’ll lose your volumes and everything you configured so far, so be careful.

Now, let’s change the docker-compose.yml to use our own image instead of the official one:
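The relevant part of the change could look like this (the service name depends on your existing docker-compose.yml):

```yaml
  teamcity-agent:
    image: codemazeblog/teamcity-agent   # previously jetbrains/teamcity-agent
```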

And bring everything up again with:
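That is, from the /Infrastructure/TeamCity directory:

```bash
docker-compose up -d
```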

If everything went smoothly this is what our agent should look like when we open the Build Runners tab:

tc agent docker compose

Excellent, we can proceed with our steps now.

Changing the Docker Build Step

Before we add our Integration build configuration, let’s change the existing Command Line step we prepared in the previous part to something more intuitive.

TeamCity offers a Docker Build build step, which helps a lot while building Docker images.

Let’s select it instead of the Command Line step and add our stuff:

Docker Build step

We’ve chosen the file we want to build and the image we want to produce.

Now, let’s add a Command Line build step next, to push our image to the local registry we created.

Docker Push step

It’s a pretty simple build step that pushes the image we made in the previous step to the local registry.
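If you are recreating it from scratch, the command it runs would look roughly like this (the registry address and image name follow the ones used later in this article, and %build.number% is TeamCity’s build number parameter):

```bash
docker push my-registry:50000/codemazeblog/accountownerapp:build-%build.number%
```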

You can use this opportunity to change the VCS branch to your own in the Version Control Settings. (If you are not sure how to do it, refer to the previous part)

Now, run the build once to check if everything still works.

Thanks to the Docker caching, we should have a new image in a matter of seconds, and our registry should look like this:

registry

Awesome!

Now that we got that out of the way, let’s proceed to the main event.

Adding the Integration Build Configuration

Like with the build configuration for our docker images, we need to make a build configuration for our Integration Tests:

create integration build configuration

Be sure to use the forked repo branch in this build configuration too.

And this time, we’ll type the exact command we used while testing our build locally, followed by docker-compose down to dispose of our containers:
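So the Command Line step script looks roughly like this:

```bash
docker-compose -f docker-compose.integration.yml up
docker-compose -f docker-compose.integration.yml down
```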

Now, let’s boldly go where no man has gone before and run these tests.

And would you believe it, our build fails and we get two failed tests!

tests error message

tests failed tc

And on top of that, the build is still hanging! Outrageous! 😀

Can you guess why this is happening?

Well, it seems that volumes don’t work the same way they do when we run the images locally, and our init.sql script fails to mount during MySQL container initialization (the docker-entrypoint-initdb.d folder).

So let’s make some modifications to our command line build step.

Fixing the MySQL Volumes Problem

Part of the problem lies in our TeamCity agent configuration. We need to add two more volumes to make Docker daemon available inside our builds.

So let’s stop TeamCity for a moment again with docker-compose stop and add two more lines to the docker-compose.yml in the /Infrastructure/TeamCity directory.
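The idea is to hand the host’s Docker socket and the agent’s work directory through to the agent container, so that paths referenced by docker-compose volumes actually exist on the host. A sketch of the kind of lines we mean (the exact paths in the original setup may differ):

```yaml
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/buildagent/work:/opt/buildagent/work
```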

After that just spin up TeamCity again with docker-compose up -d.

The other part of the problem is an incorrect usage of the docker-compose down command. To release the resources properly, we need to stop the right services (containers) and add the -v option to make sure that even named volumes are removed after the build finishes. In our case, that matters mainly because of MySQL’s dbdata volume.
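So the cleanup command in the build step becomes something like:

```bash
docker-compose -f docker-compose.integration.yml down -v
```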

This should take care of our database initialization.

Fixing the Hanging Build Problem

To fix this problem and to ensure that our Integration Tests build from scratch each time, we are going to add a few flags to our docker-compose up command:
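The full command in the build step then looks like this:

```bash
docker-compose -f docker-compose.integration.yml up --force-recreate --abort-on-container-exit --build
```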

What these flags do:

  • --force-recreate: Forces the recreation of the containers, even if their configuration hasn’t changed
  • --abort-on-container-exit: Stops all containers if any container stops. Useful in our case, because once we finish the tests, we don’t need other containers to run anymore
  • --build: Forces the build of images before the containers start

These flags fix the hanging build problem.

Running the Integration Tests Again

Now let’s run the tests again and see if they pass this time:

tests passed tc

And what do you know, they do! Hurray!

Connecting Build Configurations

So what is our endgame here?

We want to build our main project, AccountOwner ASP.NET Core application, on every commit, and then use the newly built image to run integration tests.

In order to do this, we need to connect our build configurations somehow.

Currently, we have two build configurations:

build configurations

Let’s connect them.

Navigate to the Integration build configuration settings and in the Dependencies section add the dependency to the Build build configuration:

create dependency

Now, this dependency means that our Build build configuration will trigger every time we trigger the Integration build configuration. TeamCity is even smart enough to resolve triggers, so we can safely remove the VCS trigger from our Build build configuration, and when we commit our changes, TeamCity will figure out whether it needs to trigger it or not, depending on the changes we made to the project.

How awesome is that?

Now that our builds are connected, let’s figure out how to remove the hardcoded tags we’ve been using so far.

Removing Hardcoded Tags from Images

Last but not least, we need to remove the hardcoded tag from our docker-compose.integration.yml file.

We have already used TeamCity’s %build.number% parameter to create a new image on every build. Now that our builds are connected, we can use that value and put it right into our docker-compose.integration.yml file so we can run integration tests on the newest image possible.

Now if we run the Integration build, we can see how the builds are connected in the Dependencies tab:

build chain

So far, we’ve used my-registry:50000/codemazeblog/accountownerapp:build-2 as a base image. Let’s change it so it dynamically adds the right tag using something called “Environment Variable Substitution”.

First, we need to find out how TeamCity resolves the build numbers of dependent projects. Since we connected the builds, we can find all sorts of useful information in the Parameters section of our Integration build.

For example, we are particularly interested in the build number of the dependent build.

So if we scroll down a bit to the Parameters from Dependencies section within the Parameters tab we can see that information:

dependency build number

We are interested in build.number here, not build.counter, which is just the incrementing counter.

But to make it available to the current build, the Integration build, we need to promote it to an environment variable.

We can do this in the Parameters section of our Integration build configuration:

create env variable

Now we can substitute the hardcoded value in the docker-compose.integration.yml file:
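Assuming the environment variable we just created is named BUILD_NUMBER (the name is our choice), the accountownerapp service changes to something like:

```yaml
  accountownerapp:
    image: my-registry:50000/codemazeblog/accountownerapp:build-${BUILD_NUMBER}
    depends_on:
      - db
```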

To test this out you can set the environment variable in PowerShell with:
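For example (BUILD_NUMBER is the assumed variable name from the previous step):

```powershell
$env:BUILD_NUMBER = "2"
```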

And check the configuration with:
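The config command prints the compose file with all variables resolved:

```bash
docker-compose -f docker-compose.integration.yml config
```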

The result should look like this:

config result

As our configuration seems to work, let’s commit it, and see how TeamCity resolves everything now.

build process finished

And the build finished successfully!

To make sure this is not a mistake, let’s make one of our integration tests fail by changing Assert.NotEmpty to Assert.Empty.

And run it again:

build process failed

And our test failed as expected. You can navigate to the build log to see the reason.

But whether our tests pass or fail, we don’t want to dig through the build log to find the test results. We want them to be clearly visible on the project overview screen.

Here’s a little trick that can help us with that.

How to Make Test Results Visible

TeamCity has a cool way of helping us display the test results when using xUnit.

xUnit runner detects the presence of TeamCity by looking for the TEAMCITY_PROJECT_NAME environment variable. So, let’s add it to our integration service in the docker-compose.integration.yml file.
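The addition to the integration service could look like this:

```yaml
  integration:
    build:
      context: .
      dockerfile: Dockerfile.Integration
    depends_on:
      - accountownerapp
    environment:
      # No value means the value is passed through from the host
      - TEAMCITY_PROJECT_NAME
```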

If you are wondering what this means, it is just shorthand for TEAMCITY_PROJECT_NAME=${TEAMCITY_PROJECT_NAME}.

And there is one more thing we need to do to make it work. By default, the xUnit verbosity in TeamCity is set to minimal.

We need to ramp it up by changing the entry point in our Dockerfile.Integration a bit:
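For example, by passing the verbosity flag straight to dotnet test:

```dockerfile
ENTRYPOINT ["dotnet", "test", "--verbosity", "normal"]
```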

That’s it, let’s commit and wait for the build to finish.

Once it finishes, we can check the build results either on the project page or the Dependencies tab of our Integration build configuration:

tests failed tc visible

Now we can clearly see that the tests have failed instead of a generic Success/Error message. And we can click on that message to go to the stack trace and see exactly why the tests failed.

Isn’t that just great?

There is one thing remaining, and that’s to configure our unit test results to show up in a similar manner.

In order to do that, we need to tweak the Dockerfile a bit.
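A sketch of the relevant lines, assuming the unit tests run in a RUN step during the image build and the environment variable is passed in as a build argument (your Dockerfile may wire this up slightly differently, and the Tests project path is an assumption):

```dockerfile
# Let xUnit know it is running under TeamCity
ARG TEAMCITY_PROJECT_NAME
ENV TEAMCITY_PROJECT_NAME=${TEAMCITY_PROJECT_NAME}

# Run the unit tests with increased verbosity so the results get reported
RUN dotnet test --verbosity normal ./Tests/Tests.csproj
```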

We ramped up the verbosity level to normal and, in a similar manner as before, added the environment variable that helps xUnit recognize that it is running inside TeamCity.

Now our results look like this:

tests passed tc visible

Great stuff!

Conclusion

In this lengthy part, we’ve gone through a lot of concepts and tweaks. TeamCity is a powerful tool and becomes even more powerful when combined with Docker. Although we used it for a simple pipeline, TeamCity is flexible enough to support even the most complex projects. Add to that the cross-platform nature of Docker, and you get monster-like tooling for anything you might need. Ever.

Although we used an ASP.NET Core application as our base app, these concepts and configurations are applicable to any other project type or language. Now that you can integrate Docker with TeamCity, there are no boundaries to what you can do with it.

Although TeamCity is an on-premises tool, using these methods, you can make it cloud-like by hosting it on a remote machine, no matter which platform you choose.

Full source code with the modifications we made throughout this article can be found on the docker-series-continuous-integration-end branch of our docker-series repo.

Hopefully, you found this article useful. There are a lot of puzzle pieces in it, so don’t hesitate to leave a comment or ask for help.


If you have enjoyed reading this article, please leave a comment in the comments section below, and if you want to receive the notifications about the freshly published content we encourage you to subscribe to our blog.