I am sure everyone would agree that hardly any application architecture these days could survive without relying on some kind of data store (relational or NoSQL), messaging middleware, or external cache, just to name a few. Testing such applications becomes a real challenge.
Luckily, if you are a JVM developer, things are not so bad. Most of the time you have the option to fall back to an embedded version of the data store or message broker in your integration or component test scenarios. But what if the solution you are using is not JVM-based? Great examples are RabbitMQ, Redis, Memcached, MySQL and PostgreSQL, which are extremely popular choices these days, and for very good reasons. Even better, what if your integration testing strategy is set to exercise the component (read: microservice) in an environment as close to production as possible? Should we give up here? Or should we write a bunch of flaky shell scripts to orchestrate the test runs and scare most developers to death? Let us see what we can do here ...
Many of you are already screaming at this point: just use Docker, or CoreOS! And this is exactly what we are going to talk about in this post, more precisely, how to use Docker to back integration / component testing. I think Docker does not need an introduction anymore. Even those of us who spent the last couple of years in a cave on a deserted island in the middle of the ocean have heard about it.
Our sample application is going to be built on top of the Spring projects portfolio, heavily relying on Spring Boot magic to wire all the pieces together (who doesn't, right? it works pretty well indeed). It implements a very simple workflow: publish a message to the RabbitMQ exchange app.exchange (using the app.queue routing key), consume the message from the RabbitMQ queue app.queue, and store it in a Redis list under the key messages. The three self-explanatory code snippets below show how each functional piece is implemented:
@Component
public class AppQueueMessageSender {
    @Autowired private RabbitTemplate rabbitTemplate;

    public void send(final String message) {
        rabbitTemplate.convertAndSend("app.exchange", "app.queue", message);
    }
}
@Component
public class AppQueueMessageListener {
    @Autowired private AppMessageRepository repository;

    @RabbitListener(
        queues = "app.queue",
        containerFactory = "rabbitListenerContainerFactory",
        admin = "amqpAdmin"
    )
    public void onMessage(final String message) {
        repository.persist(message);
    }
}
@Repository
public class AppMessageRepository {
    @Autowired private StringRedisTemplate redisTemplate;

    public void persist(final String message) {
        redisTemplate.opsForList().rightPush("messages", message);
    }

    public Collection<String> readAll() {
        return redisTemplate.opsForList().range("messages", 0, -1);
    }

    public long size() {
        return redisTemplate.opsForList().size("messages");
    }
}
As you can see, the implementation deliberately does the bare minimum; we are more interested in the fact that quite a few interactions with RabbitMQ and Redis are happening here. The configuration class includes only the necessary beans; everything else is figured out by Spring Boot's automatic discovery from the classpath dependencies.
@Configuration
@EnableAutoConfiguration
@EnableRabbit
@ComponentScan(basePackageClasses = AppConfiguration.class)
public class AppConfiguration {
    @Bean
    Queue queue() {
        return new Queue("app.queue", false);
    }

    @Bean
    TopicExchange exchange() {
        return new TopicExchange("app.exchange");
    }

    @Bean
    Binding binding(Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(queue.getName());
    }

    @Bean
    StringRedisTemplate template(RedisConnectionFactory connectionFactory) {
        return new StringRedisTemplate(connectionFactory);
    }
}
At the very end comes application.yml. Essentially it contains the default connection parameters for RabbitMQ and Redis, plus a bit of logging level tuning.
logging:
  level:
    root: INFO

spring:
  rabbitmq:
    host: localhost
    username: guest
    password: guest
    port: 5672
  redis:
    host: localhost
    port: 6379
With that, our application is ready to be run. For convenience, the project repository contains a docker-compose.yml with the official RabbitMQ and Redis images from Docker Hub.
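The compose file itself is not reproduced in this post; a minimal sketch of what it could look like is shown below. The service names, port mappings and image tags here are assumptions, chosen to match the defaults in application.yml and the image versions used later in overcast.conf:

version: "2"

services:
  rabbitmq:
    image: rabbitmq:3.6.6
    ports:
      - "5672:5672"
  redis:
    image: redis:3.2.6
    ports:
      - "6379:6379"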
Being TDD believers and practitioners, we make sure no application leaves the gate without a thorough set of test suites and test cases. Keeping unit tests and integration tests out of the scope of our discussion, let us jump right into component testing with this simple scenario.
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(
    classes = { AppConfiguration.class },
    webEnvironment = WebEnvironment.NONE
)
public class AppComponentTest {
    @Autowired private AppQueueMessageSender sender;
    @Autowired private AppMessageRepository repository;

    @Test
    public void testThatMessageHasBeenPersisted() {
        sender.send("Test Message!");
        await().atMost(1, SECONDS).until(() -> repository.size() > 0);
        assertThat(repository.readAll()).containsExactly("Test Message!");
    }
}
It is a really basic test case, exercising the main flow, but it makes an important point: no mocks / stubs / ... allowed, we expect the real thing. The line is somewhat blurry, but this is what makes component tests different from, let's say, integration or e2e tests: we test a single full-fledged application (component) with real dependencies (when it makes sense).
This is the right time for the excellent Overcast project to appear on the stage and help us out. Overcast brings the power of Docker to enrich the test harness of JVM applications. Among many other things, it allows us to define and manage the lifecycle of Docker containers from within Java code (or, more precisely, any JVM-based programming language).
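To give a feel for that API before we wire it into Spring, here is a minimal sketch of the container lifecycle as we will use it below (the name redis refers to an entry in the overcast.conf file introduced in a moment):

// Minimal sketch: start a container declared in overcast.conf,
// ask for its runtime binding, and tear it down afterwards.
final CloudHost redis = CloudHostFactory.getCloudHost("redis");
redis.setup();
try {
    final String host = redis.getHostName();  // where the container is reachable from the tests
    final int port = redis.getPort(6379);     // host port mapped to the container port 6379
    // ... exercise the code under test against host:port ...
} finally {
    redis.teardown();
}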
Unfortunately, the last released version of Overcast, 2.5.1, is pretty old and does not include a lot of features and enhancements. However, building it from source is a no-brainer (hopefully a new release is going to be available soon).
git clone https://github.com/xebialabs/overcast
cd overcast
./gradlew install
Essentially, the only prerequisite is to provide a configuration file, overcast.conf, with the list of named containers to run. In our case, we need RabbitMQ and Redis.
rabbitmq {
    dockerImage="rabbitmq:3.6.6"
    exposeAllPorts=true
    remove=true
    removeVolume=true
}

redis {
    dockerImage="redis:3.2.6"
    exposeAllPorts=true
    remove=true
    removeVolume=true
}
Great! The syntax is not as powerful as the one Docker Compose supports, but it is simple, straightforward and, to be fair, quite sufficient. Once the configuration file is placed into the src/test/resources folder, we can move on and use the Overcast Java API to manage these containers programmatically. Since we are using the Spring Framework, it is natural to introduce a dedicated configuration class for that.
@Configuration
public class OvercastConfiguration {
    @Autowired private ConfigurableEnvironment env;

    @Bean(initMethod = "setup", destroyMethod = "teardown")
    @Qualifier("rabbitmq")
    public CloudHost rabbitmq() {
        return CloudHostFactory.getCloudHost("rabbitmq");
    }

    @Bean(initMethod = "setup", destroyMethod = "teardown")
    @Qualifier("redis")
    public CloudHost redis() {
        return CloudHostFactory.getCloudHost("redis");
    }

    @PostConstruct
    public void init() throws TimeoutException {
        final CloudHost redis = redis();
        final CloudHost rabbitmq = rabbitmq();

        final Map<String, Object> properties = new HashMap<>();
        properties.put("spring.rabbitmq.host", rabbitmq.getHostName());
        properties.put("spring.rabbitmq.port", rabbitmq.getPort(5672));
        properties.put("spring.redis.host", redis.getHostName());
        properties.put("spring.redis.port", redis.getPort(6379));

        final PropertySource<?> source = new MapPropertySource("overcast", properties);
        env.getPropertySources().addFirst(source);
    }
}
And that is literally all we need! Just a couple of important notes here. Docker is going to expose random ports for each container, so we can run many test cases in parallel on the same box without any port conflicts. On most operating systems it is safe to use localhost to access the running containers, but for the ones without native Docker support there are workarounds with Docker Machine or boot2docker. That is why we override the connection settings, both host and port, for RabbitMQ and Redis respectively, asking for the actual bindings at runtime:
properties.put("spring.rabbitmq.host", rabbitmq.getHostName());
properties.put("spring.rabbitmq.port", rabbitmq.getPort(5672));
properties.put("spring.redis.host", redis.getHostName());
properties.put("spring.redis.port", redis.getPort(6379));
Lastly, more advanced Docker users may wonder how Overcast is able to figure out where the Docker daemon is running, which port it is bound to, and whether it uses TLS or not. Under the hood, Overcast uses the terrific Spotify Docker Client, which is able to retrieve all the relevant details from environment variables; this works in the majority of use cases (though you can always provide your own settings).
To finish up, let us include this configuration in the test case:
@SpringBootTest(
    classes = { OvercastConfiguration.class, AppConfiguration.class },
    webEnvironment = WebEnvironment.NONE
)
Easy, isn't it? If we go ahead and run mvn test for our project, all test cases should pass (please note that the first run may take some time, as Docker will have to pull the container images from the remote repository).
> mvn test
...
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
...
No doubt, Docker raises testing techniques to a new level. With the help of such awesome libraries as Overcast, seasoned JVM developers have even more options to come up with realistic test scenarios and run them against components in a "mostly" production environment (on the wave of hype, it fits perfectly into microservices testing strategies). There are many areas where Overcast could and will improve, but it brings a lot of value even now and is definitely worth checking out.
Probably the most annoying issue you may encounter when working with Docker containers is waiting for the moment when a container is fully started and ready to accept requests (which heavily depends on what kind of underlying service the container is running). Although work on that has started, Overcast does not help with this particular problem yet, though simple, old-style sleeps may be good enough (versus slightly more involved port polling, for example).
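For illustration, a plain TCP port poll built on the same Awaitility calls the test above already uses is often good enough as a rough readiness check. A minimal sketch, assuming we consider a service ready once its port accepts connections (the awaitPort helper is hypothetical, not part of Overcast):

// Rough readiness check: block until the mapped port accepts TCP connections.
// Note: this only proves the socket is open, not that the service inside the
// container has finished initializing.
// Socket comes from java.net; await()/SECONDS are the same static imports the test uses.
static void awaitPort(final String host, final int port) {
    await().atMost(30, SECONDS).until(() -> {
        try (Socket socket = new Socket(host, port)) {
            return socket.isConnected();
        } catch (final IOException ignored) {
            return false;
        }
    });
}

A call like awaitPort(redis.getHostName(), redis.getPort(6379)) could then be placed in the init() method of OvercastConfiguration, right after the containers have been started.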
But, but, but ... always remember the testing pyramid and strive for the right balance. Create as many test cases as you need to cover the most critical and important flows, but no more. Unit and integration tests should remain your main weapon.
The complete project is available on GitHub.