This post was originally published on the codecentric blog.
A few days ago I wrote about a library called Testcontainers. It helps you run software that your application depends on in a test context by providing an API to start Docker containers. Testcontainers comes with a few pre-configured database and Selenium containers but, most importantly, it also provides a generic container that you can use to start whatever Docker image you need.
In my current project we are using Infinispan for distributed caching. For some of our integration tests caching is disabled, but others rely on a running Infinispan instance. Up until now we have been using a virtual machine to run Infinispan and other software on developer machines and build servers. The way we are handling this poses a few problems, and isolated Infinispan instances would help mitigate them. This post shows how you can get Infinispan running in a generic container. I'll also try to come up with a useful abstraction that makes running Infinispan as a test container easier.
Configuring a generic container for Infinispan
Docker Hub provides a readymade Infinispan image: `jboss/infinispan-server`. We'll be using the latest version at this time, which is `9.1.3.Final`. Our first attempt to start the server using Testcontainers looks like this:
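The original code snippet is not included in this excerpt, so here is a minimal sketch of what such a first attempt could look like, based on the description below. The test class name and method names are assumptions; the `getServerAddress()` helper is referenced later in the post:

```java
import static org.junit.Assert.assertNotNull;

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.GenericContainer;

public class InfinispanIntegrationTest {

    @ClassRule
    public static GenericContainer infinispan =
            new GenericContainer("jboss/infinispan-server:9.1.3.Final")
                    .withExposedPorts(11222); // the Hotrod port

    private RemoteCacheManager cacheManager;

    @Before
    public void setup() {
        // connect to the Infinispan server running inside the container
        cacheManager = new RemoteCacheManager(new ConfigurationBuilder()
                .addServers(getServerAddress())
                .build());
    }

    @Test
    public void should_get_existing_cache() {
        // retrieve the unnamed default cache from the server
        assertNotNull(cacheManager.getCache());
    }

    private String getServerAddress() {
        // container ip address plus the host port mapped to the Hotrod port
        return infinispan.getContainerIpAddress() + ":" + infinispan.getMappedPort(11222);
    }
}
```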
You can see a few things here:
- We’re configuring our test class with a class rule that will start a generic container. As a parameter, we use the name of the Infinispan Docker image alongside the required version. You could also use `latest` if you always want the most recent version.
- There’s a setup method that creates a `RemoteCacheManager` to connect to the Infinispan server running inside the Docker container. We extract the network address from the generic container, retrieving the container IP address and the mapped port number for the Hotrod port in `getServerAddress()`.
- Then there’s a simple test that will make sure we are able to retrieve an unnamed cache from the server.
Waiting for Infinispan
If we run the test, however, it doesn’t work: it throws a `TransportException` mentioning an error code that hints at a connection problem. Looking at other pre-configured containers, we see that they have some kind of waiting strategy in place. This is important so that the test only starts after the container has fully loaded. The `PostgreSQLContainer`, for example, waits for a log message. There are other wait strategies available, and you can implement your own as well. One of the default strategies is the `HostPortWaitStrategy`, and it seems like a straightforward choice. With the Infinispan image, at least, it doesn’t work though: one of the commands that is used to determine the readiness of the tcp port has a subtle bug in it, and another relies on the `netcat` command line tool being present in the Docker image. We’ll stick to the same approach as the `PostgreSQLContainer` rule and check for a suitable log message to appear on the container’s output. We can determine a message by manually starting the Docker container on the command line using:
```
docker run -it jboss/infinispan-server:9.1.3.Final
```
The configuration of our rule then changes to this:
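A sketch of the changed rule, waiting for the "started" message from the server's startup output; the exact wait-strategy API differs between Testcontainers versions (here I'm assuming `org.testcontainers.containers.wait.strategy.Wait`), and the regular expression is an assumption based on the log line the server prints:

```java
import org.junit.ClassRule;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;

public class InfinispanIntegrationTest {

    @ClassRule
    public static GenericContainer infinispan =
            new GenericContainer("jboss/infinispan-server:9.1.3.Final")
                    .withExposedPorts(11222)
                    // only consider the container started once the server
                    // has logged its "started in ..." message
                    .waitingFor(Wait.forLogMessage(".*Infinispan Server.*started in.*\\s", 1));
}
```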
After this change, the test still doesn’t work correctly, but at least it behaves differently: it waits for a considerable amount of time and again throws a `TransportException` before the test finishes. Since the underlying `TcpTransportFactory` swallows exceptions on startup and returns a cache object anyway, the test will still be green. Let’s address this first. I don’t see a way to ask the `RemoteCacheManager` or the `RemoteCache` about the state of the connection, so my approach here is to work with a timeout:
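One way to sketch such a timeout is to retrieve the cache on a separate thread via a `Future` and bound the wait; the field and method names are assumptions carried over from the earlier sketch:

```java
import static org.junit.Assert.assertNotNull;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import org.infinispan.client.hotrod.RemoteCache;
import org.junit.Test;

public class InfinispanIntegrationTest {

    @Test
    public void should_get_existing_cache() throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<RemoteCache<Object, Object>> result =
                executor.submit(() -> cacheManager.getCache());
        // fail the test if the cache cannot be retrieved within 1500 milliseconds
        assertNotNull(result.get(1500, TimeUnit.MILLISECONDS));
    }
}
```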
The test will now fail if we are not able to retrieve the cache within 1500 milliseconds. Unfortunately, the resulting `TimeoutException` will not be linked to the causing `TransportException`. I’ll take suggestions on how to better write a failing test and leave it at that for the time being.
Running Infinispan in standalone mode
Looking at the stacktrace of the `TransportException`, we see the following output:
```
INFO: ISPN004006: localhost:33086 sent new topology view (id=1, age=0) containing 1 addresses: [172.17.0.2:11222]
Dez 14, 2017 19:57:43 AM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo
INFO: ISPN004014: New server added(172.17.0.2:11222), adding to the pool.
```
It looks like the server is running in clustered mode and the client gets a new server address to talk to. The IP address and port number seem correct, but looking more closely we notice that the Hotrod port `11222` refers to a port number inside the Docker container; it is not reachable from the host. That’s why Testcontainers gives you the ability to easily retrieve port mappings. We already use this in our `getServerAddress()` method. Infinispan, or rather the Hotrod protocol, however, is not aware of the Docker environment and communicates the internal port to its clients, overwriting our initial configuration.
To confirm this analysis we can have a look at the output of the server when we start the image manually:
```
19:12:47,368 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-6) ISPN000078: Starting JGroups channel clustered
19:12:47,371 INFO  [org.infinispan.CLUSTER] (MSC service thread 1-6) ISPN000094: Received new cluster view for channel cluster: [9621833c0138|0] (1) [9621833c0138]
Dez 14, 2017 19:12:47,376 AM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo
INFO: ISPN004016: Server not in cluster anymore(localhost:33167), removing from the pool.
```
The server is indeed starting in clustered mode and the documentation on Docker Hub also confirms this. For our tests we need a standalone server though. On the command line we can add a parameter when starting the container (again, we get this from the documentation on Docker Hub):
```
$ docker run -it jboss/infinispan-server:9.1.3.Final standalone
```
The output now tells us that Infinispan is no longer running in clustered mode. In order to start Infinispan as a standalone server using Testcontainers, we need to add a command to the container startup. Once more we change the configuration of the container rule:
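Passing the same `standalone` argument through Testcontainers could look like this; `withCommand()` overrides the command the image is started with (the wait strategy is carried over from the earlier sketch):

```java
import org.junit.ClassRule;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;

public class InfinispanIntegrationTest {

    @ClassRule
    public static GenericContainer infinispan =
            new GenericContainer("jboss/infinispan-server:9.1.3.Final")
                    .withExposedPorts(11222)
                    .waitingFor(Wait.forLogMessage(".*Infinispan Server.*started in.*\\s", 1))
                    // start the server in standalone instead of clustered mode
                    .withCommand("standalone");
}
```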
Now our test has access to an Infinispan instance running in a container.
Adding a specific configuration
The applications in our project use different caches; these can be configured in the Infinispan standalone configuration file, and for our tests we need them to be present. One solution is to use the `withClasspathResourceMapping()` method to link a configuration file from the (test) classpath into the container. This configuration file contains the cache configurations. Knowing the location of the configuration file in the container, we can once again change the Testcontainers configuration:
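A sketch of the mapping; the resource name `infinispan-standalone.xml` and the target path inside the `jboss/infinispan-server` image are assumptions and may need to be adapted to your image version:

```java
import org.junit.ClassRule;
import org.testcontainers.containers.BindMode;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;

public class InfinispanIntegrationTest {

    @ClassRule
    public static GenericContainer infinispan =
            new GenericContainer("jboss/infinispan-server:9.1.3.Final")
                    .withExposedPorts(11222)
                    .waitingFor(Wait.forLogMessage(".*Infinispan Server.*started in.*\\s", 1))
                    // link our test configuration (containing the cache definitions)
                    // over the server's standalone configuration file
                    .withClasspathResourceMapping(
                            "infinispan-standalone.xml",
                            "/opt/jboss/infinispan-server/standalone/configuration/standalone.xml",
                            BindMode.READ_ONLY)
                    .withCommand("standalone");
}
```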
Now we can retrieve and work with a cache from the Infinispan instance in the container.
Simplifying the configuration
You can see how it can be a bit of a pain to get an arbitrary Docker image running correctly using a generic container. For Infinispan we now know what we need to configure, but I really don’t want to think about all of this every time I need an Infinispan server for a test. Instead, we can create our own abstraction similar to the `PostgreSQLContainer`. It contains the configuration bits that we discovered in the first part of this post, and since it is an implementation of `GenericContainer`, we can also use everything the latter provides.
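Such an abstraction could look roughly like this; it bundles the standalone command and the wait strategy from above, and the default-to-`latest` constructor mirrors what the pre-configured containers do (the exact constructors and defaults are assumptions):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;

public class InfinispanContainer extends GenericContainer<InfinispanContainer> {

    private static final String IMAGE_NAME = "jboss/infinispan-server";

    public InfinispanContainer() {
        this(IMAGE_NAME + ":latest");
    }

    public InfinispanContainer(final String imageName) {
        super(imageName);
        // everything we discovered in the first part of this post
        withExposedPorts(11222);
        withCommand("standalone");
        waitingFor(Wait.forLogMessage(".*Infinispan Server.*started in.*\\s", 1));
    }
}
```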
In our tests we can now create an Infinispan container like this:
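Assuming the `InfinispanContainer` abstraction sketched above, the test setup shrinks to a single line:

```java
import org.junit.ClassRule;

public class InfinispanIntegrationTest {

    @ClassRule
    public static InfinispanContainer infinispan = new InfinispanContainer();
}
```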
That’s a lot better than dealing with a generic container.
Adding easy cache configuration
You may have noticed that I left out the custom configuration part here. We can do better by providing builder methods to create caches programmatically using the `RemoteCacheManager`. Creating a cache is as easy as this:
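A sketch using the client's administration API (available in recent Hotrod clients; the cache name is an example, and passing `null` as the template falls back to a default configuration):

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CreateCacheExample {

    public static void main(String[] args) {
        RemoteCacheManager cacheManager = new RemoteCacheManager(new ConfigurationBuilder()
                .addServers("localhost:11222")
                .build());
        // create a cache on the server using the default template
        cacheManager.administration().createCache("testCache", null);
    }
}
```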
In order to let the container create caches automatically, we use the callback method `containerIsStarted()`. We can override it in our abstraction, create a `RemoteCacheManager`, and use its API to create the caches that we configured upfront:
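A sketch of the relevant parts of the abstraction; `containerIsStarted()` is the Testcontainers callback invoked once the container is up, and the `withCaches()` builder method is an assumption:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;

import com.github.dockerjava.api.command.InspectContainerResponse;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.testcontainers.containers.GenericContainer;

public class InfinispanContainer extends GenericContainer<InfinispanContainer> {

    private final Collection<String> cacheNames = new ArrayList<>();
    private RemoteCacheManager cacheManager;

    // builder method to configure the caches that should be created on startup
    public InfinispanContainer withCaches(final String... caches) {
        this.cacheNames.addAll(Arrays.asList(caches));
        return this;
    }

    @Override
    protected void containerIsStarted(final InspectContainerResponse containerInfo) {
        cacheManager = new RemoteCacheManager(new ConfigurationBuilder()
                .addServers(getContainerIpAddress() + ":" + getMappedPort(11222))
                .build());
        // create the configured caches via the admin API (requires Hotrod >= 2.0)
        cacheNames.forEach(name -> cacheManager.administration().createCache(name, null));
    }

    public RemoteCacheManager getCacheManager() {
        return cacheManager;
    }
}
```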
You can also retrieve the `CacheManager` from the container and use it in your tests. There’s one problem with this approach: you can only create caches through the API if you use Hotrod protocol version 2.0 or above. I’m willing to accept that, as it makes the usage in tests really comfortable:
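Assuming the `withCaches()` builder and `getCacheManager()` accessor sketched above, a test could then look like this:

```java
import static org.junit.Assert.assertEquals;

import org.infinispan.client.hotrod.RemoteCache;
import org.junit.ClassRule;
import org.junit.Test;

public class InfinispanCachesTest {

    @ClassRule
    public static InfinispanContainer infinispan =
            new InfinispanContainer().withCaches("testCache");

    @Test
    public void should_work_with_configured_cache() {
        RemoteCache<String, String> cache =
                infinispan.getCacheManager().getCache("testCache");
        cache.put("key", "value");
        assertEquals("value", cache.get("key"));
    }
}
```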
If you need to work with a protocol version below 2.0, you can still use the approach from above, linking a configuration file into the container.
While it sounds very easy to run any Docker image using Testcontainers, there are a lot of configuration details to know, depending on the complexity of the software that you need to run. In order to effectively work with such a container, it’s a good idea to encapsulate this in your own specific container. Ideally, these containers will end up in the Testcontainers repository so that others can benefit from your work as well.
I hope this will be useful for others. If you want to see the full code, have a look at this repository.