Saturday, November 30, 2019

Spring has you covered, again: consumer-driven contract testing for messaging continued

In the previous post we started talking about consumer-driven contract testing in the context of message-based communication. In today's post, we are going to add yet another tool to our testing toolbox, but before that, let me do a quick refresher on the system under the microscope. It has two services, Order Service and Shipment Service. The Order Service publishes messages / events to the message queue and the Shipment Service consumes them from there.

The search for suitable test scaffolding led us to the discovery of the Pact framework (to be precise, Pact JVM). Pact offers simple and straightforward ways to write consumer and producer tests, leaving no excuses for not doing consumer-driven contract testing. But there is another player on the field, Spring Cloud Contract, and this is what we are going to discuss today.

To start with, Spring Cloud Contract fits best JVM-based projects built on top of the terrific Spring portfolio (although you could make it work in polyglot scenarios as well). In addition, the collaboration flow that Spring Cloud Contract adopts is slightly different from the one Pact taught us, which is not necessarily a bad thing. Let us get straight to the point.

Since we are scoping ourselves to messaging only, the first thing Spring Cloud Contract asks us to do is to define the messaging contract specification, written using a convenient Groovy DSL.

package contracts

org.springframework.cloud.contract.spec.Contract.make {
    name "OrderConfirmed Event"
    label 'order'
    
    input {
        triggeredBy('createOrder()')
    }
    
    outputMessage {
        sentTo 'orders'
        
        body([
            orderId: $(anyUuid()),
            paymentId: $(anyUuid()),
            amount: $(anyDouble()),
            street: $(anyNonBlankString()),
            city: $(anyNonBlankString()),
            state: $(regex('[A-Z]{2}')),
            zip: $(regex('[0-9]{5}')),
            country: $(anyOf('USA','Mexico'))
        ])
        
        headers {
            header('Content-Type', 'application/json')
        }
    }
}

It resembles a lot the Pact specifications we are already familiar with (and if you are not a big fan of Groovy, there is no real need to learn it in order to use Spring Cloud Contract). The interesting parts here are the triggeredBy and sentTo blocks: basically, those outline how the message is being produced (or triggered) and where it should land (the channel or queue name) respectively. In this case, createOrder() is just a method name which we have to provide the implementation for.

package com.example.order;

import java.math.BigDecimal;
import java.util.UUID;

import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.verifier.messaging.boot.AutoConfigureMessageVerifier;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;
import org.springframework.test.context.junit4.SpringRunner;

import com.example.order.event.OrderConfirmed;

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMessageVerifier
public class OrderBase {
    @Autowired private MessageChannel orders;
    
    public void createOrder() {
        final OrderConfirmed order = new OrderConfirmed();
        order.setOrderId(UUID.randomUUID());
        order.setPaymentId(UUID.randomUUID());
        order.setAmount(new BigDecimal("102.32"));
        order.setStreet("1203 Westmisnter Blvrd");
        order.setCity("Westminster");
        order.setCountry("USA");
        order.setState("MI");
        order.setZip("92239");

        orders.send(
            MessageBuilder
                .withPayload(order)
                .setHeader("Content-Type", "application/json")
                .build());
    }
}

There is one small detail left out though: these contracts are managed by providers (or better to say, producers), not consumers. Not only that, the producers are responsible for publishing the stubs for consumers so they would be able to write the tests against them. Certainly a different path than the one Pact takes but, on the bright side, the test suite for producers is 100% generated by the Apache Maven / Gradle plugins.

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>2.1.4.RELEASE</version>
    <extensions>true</extensions>
    <configuration>
        <packageWithBaseClasses>com.example.order</packageWithBaseClasses>
    </configuration>
</plugin>

As you may have noticed, the plugin assumes that the base test classes (the ones which have to provide the createOrder() method implementation) are located in the com.example.order package, exactly where we have placed the OrderBase class. To complete the setup, we need to add a few dependencies to our pom.xml file.


<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Greenwich.SR4</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>2.1.10.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-contract-verifier</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

And we are done with the producer side! If we run mvn clean install right now, two things are going to happen. First, you will notice that some tests were run and passed, although we wrote none: they were generated on our behalf.

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.example.order.OrderTest

....

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

And secondly, the stubs for consumers are going to be generated (and published) as well (in this case, bundled into order-service-messaging-contract-tests-0.0.1-SNAPSHOT-stubs.jar).

...
[INFO]
[INFO] --- spring-cloud-contract-maven-plugin:2.1.4.RELEASE:generateStubs (default-generateStubs) @ order-service-messaging-contract-tests ---
[INFO] Files matching this pattern will be excluded from stubs generation []
[INFO] Building jar: order-service-messaging-contract-tests-0.0.1-SNAPSHOT-stubs.jar
[INFO]
....

Awesome, so we have the messaging contract specification and stubs published; the ball is in the consumer's court now, the Shipment Service. Probably the trickiest part for the consumer is to configure the messaging integration library of choice. In our case, it is going to be Spring Cloud Stream, however other integrations are also available.

The fastest way to understand how Spring Cloud Contract works on the consumer side is to start from the end and look at the complete sample test suite first.

import static com.jayway.jsonpath.matchers.JsonPathMatchers.isJson;
import static com.jayway.jsonpath.matchers.JsonPathMatchers.withJsonPath;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.allOf;
import static org.hamcrest.Matchers.notNullValue;

import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.stubrunner.StubFinder;
import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;
import org.springframework.cloud.contract.verifier.messaging.MessageVerifier;
import org.springframework.cloud.contract.verifier.messaging.boot.AutoConfigureMessageVerifier;
import org.springframework.messaging.Message;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMessageVerifier
@AutoConfigureStubRunner(
    ids = "com.example:order-service-messaging-contract-tests:+:stubs", 
    stubsMode = StubRunnerProperties.StubsMode.LOCAL
)
public class OrderMessagingContractTest {
    @Autowired private MessageVerifier<Message<?>> verifier;
    @Autowired private StubFinder stubFinder;

    @Test
    public void testOrderConfirmed() throws Exception {
        stubFinder.trigger("order");
        
        final Message<?> message = verifier.receive("orders");
        assertThat(message, notNullValue());
        assertThat(message.getPayload(), isJson(
            allOf(List.of(
                withJsonPath("$.orderId"),
                withJsonPath("$.paymentId"),
                withJsonPath("$.amount"),
                withJsonPath("$.street"),
                withJsonPath("$.city"),
                withJsonPath("$.state"),
                withJsonPath("$.zip"),
                withJsonPath("$.country")
            ))));
    }
}

At the top, the @AutoConfigureStubRunner annotation references the stubs published by the producer, effectively the ones from the order-service-messaging-contract-tests-0.0.1-SNAPSHOT-stubs.jar archive. The StubFinder helps us to pick the right stub for the test case and to trigger a particular messaging contract verification flow by means of calling stubFinder.trigger("order"). The value "order" is not arbitrary: it should match the label assigned to the contract specification, which in our case is defined as:

package contracts

org.springframework.cloud.contract.spec.Contract.make {
    ...
    label 'order'
    ...
}

With that, the test should look simple and straightforward: trigger the flow, then verify that the message has been placed into the messaging channel and satisfies the consumer expectations. From the configuration standpoint, we only need to provide this messaging channel to run the tests against.

@SpringBootConfiguration
public class OrderMessagingConfiguration {
    @Bean
    PollableChannel orders() {
        return MessageChannels.queue().get();
    }
}

And again, the name of the bean, orders, is not a random pick: it has to match the destination from the contract specification:

package contracts

org.springframework.cloud.contract.spec.Contract.make {
    ...
    outputMessage {
        sentTo 'orders'
        ...
    }
    ...
}

Last but not least, let us enumerate the dependencies which are required on the consumer side (luckily, there is no need to use any additional Apache Maven or Gradle plugins).

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Greenwich.SR4</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-contract-verifier</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream</artifactId>
        <version>2.2.1.RELEASE</version>
        <type>test-jar</type>
        <scope>test</scope>
        <classifier>test-binder</classifier>
    </dependency>
</dependencies>

A quick note here: the last dependency is quite an important piece of the puzzle, as it brings in the integration of Spring Cloud Stream with Spring Cloud Contract. With that, the consumers are all set.

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.example.order.OrderMessagingContractTest

...

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

To close the loop, we should look back at one of the core promises of consumer-driven contract testing: to allow the producers to evolve the contracts without breaking the consumers. What that means practically is that consumers may contribute their tests back to the producers, although the importance of doing that is less of a concern with Spring Cloud Contract. The reason is simple: the producers are the ones who write the message contract specifications first, and the tests generated out of these specifications are expected to fail on any breaking change. Nonetheless, there are a number of benefits for producers in knowing how the consumers use their messages, so please give it some thought.

Hopefully, it was an interesting subject to discuss. Spring Cloud Contract brings a somewhat different perspective on applying consumer-driven contract testing to messaging. It is an appealing alternative to Pact JVM, especially if your applications and services already rely on the Spring projects.

As always, the complete project sources are available on Github.

Thursday, October 31, 2019

Tell us what you want and we will make it so: consumer-driven contract testing for messaging

Quite some time ago we talked about consumer-driven contract testing from the perspective of REST(ful) web APIs in general and their projection into Java (the JAX-RS 2.0 specification) in particular. It would be fair to say that REST still dominates the web API landscape, at least with respect to public APIs, however the shift towards microservices and/or service-based architecture is changing the alignment of forces very fast. One such disruptive trend is messaging.

Modern REST(ful) APIs are implemented mostly over the HTTP 1.1 protocol and are constrained by its request/response communication style. HTTP/2 is here to help out but still, not every use case fits into this communication model. Often the job could be performed asynchronously and the fact of its completion broadcast to the interested parties later on. This is how most things work in real life, and using messaging is a perfect answer to that.

The messaging space is really crowded, with an astonishing number of message brokers and brokerless options available. We are not going to talk about that, focusing instead on another tricky subject: message contracts. Once the producer emits a message or event, it lands in the queue/topic/channel, ready to be consumed, and it is here to stay for some time. Obviously, the producer knows what it publishes, but what about the consumers? How would they know what to expect?

At this moment, many of us would scream: use schema-based serialization! And indeed, Apache Avro, Apache Thrift, Protocol Buffers, MessagePack, ... are here to address that. At the end of the day, such messages and events become part of the provider contract, along with the REST(ful) web APIs if any, and have to be communicated and evolved over time without breaking the consumers. But ... you would be surprised to know how many organizations have found their nirvana in JSON and use it to pass messages and events around, throwing such blobs at consumers with no schema whatsoever! In this post we are going to look at how the consumer-driven contract testing technique could help us in such situations.

Let us consider a simple system with two services, Order Service and Shipment Service. The Order Service publishes messages / events to the message queue and the Shipment Service consumes them from there.

Since the Order Service is implemented in Java, the events are just POJO classes, serialized into JSON using one of the numerous libraries out there before arriving at the message broker. OrderConfirmed is one such event.

public class OrderConfirmed {
    private UUID orderId;
    private UUID paymentId;
    private BigDecimal amount;
    private String street;
    private String city;
    private String state;
    private String zip;
    private String country;
}
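
Serialized, such an event is nothing more than a JSON document. A hypothetical sample (all values below are made up purely for illustration) might look like this:

{
  "orderId": "e2490de5-5bd3-43d5-b7c4-526e33f71304",
  "paymentId": "4371f3b8-6f9b-4a2c-9c3b-8a2e0f1d5a77",
  "amount": 102.33,
  "street": "1203 Westminster Blvd",
  "city": "Westminster",
  "state": "MI",
  "zip": "92239",
  "country": "USA"
}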

As it often happens, the Shipment Service team was handed over a sample JSON snippet like that, or pointed at some piece of documentation, or a reference Java class, and that is basically it. How could the Shipment Service team kick off the integration while being sure their interpretation is correct and the message data they need will not suddenly disappear? Consumer-driven contract testing to the rescue!

The Shipment Service team could (and should) start off by writing the test cases against the OrderConfirmed message, embedding the knowledge they have, and our old friend the Pact framework (to be precise, Pact JVM) is the right tool for that. So how may the test case look?

public class OrderConfirmedConsumerTest {
    private static final String PROVIDER_ID = "Order Service";
    private static final String CONSUMER_ID = "Shipment Service";
    
    @Rule
    public MessagePactProviderRule provider = new MessagePactProviderRule(this);
    private byte[] message;

    @Pact(provider = PROVIDER_ID, consumer = CONSUMER_ID)
    public MessagePact pact(MessagePactBuilder builder) {
        return builder
            .given("default")
            .expectsToReceive("an Order confirmation message")
            .withMetadata(Map.of("Content-Type", "application/json"))
            .withContent(new PactDslJsonBody()
                .uuid("orderId")
                .uuid("paymentId")
                .decimalType("amount")
                .stringType("street")
                .stringType("city")
                .stringType("state")
                .stringType("zip")
                .stringType("country"))
            .toPact();
    }

    @Test
    @PactVerification(PROVIDER_ID)
    public void test() throws Exception {
        Assert.assertNotNull(message);
    }

    public void setMessage(byte[] messageContents) {
        message = messageContents;
    }
}

It is exceptionally simple and straightforward, no boilerplate added. The test case is designed right from the JSON representation of the OrderConfirmed message. But we are only half-way through: the Shipment Service team should somehow contribute their expectations back to the Order Service so the producer would keep track of who consumes the OrderConfirmed message and how. The Pact test harness takes care of that by generating the pact files (sets of agreements, or pacts) out of each JUnit test case into the target/pacts folder. Below is an example of the generated 'Shipment Service-Order Service.json' pact file after running the OrderConfirmedConsumerTest test suite.

{
  "consumer": {
    "name": "Shipment Service"
  },
  "provider": {
    "name": "Order Service"
  },
  "messages": [
    {
      "description": "an Order confirmation message",
      "metaData": {
        "contentType": "application/json"
      },
      "contents": {
        "zip": "string",
        "country": "string",
        "amount": 100,
        "orderId": "e2490de5-5bd3-43d5-b7c4-526e33f71304",
        "city": "string",
        "paymentId": "e2490de5-5bd3-43d5-b7c4-526e33f71304",
        "street": "string",
        "state": "string"
      },
      "providerStates": [
        {
          "name": "default"
        }
      ],
      "matchingRules": {
        "body": {
          "$.orderId": {
            "matchers": [
              {
                "match": "regex",
                "regex": "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
              }
            ],
            "combine": "AND"
          },
          "$.paymentId": {
            "matchers": [
              {
                "match": "regex",
                "regex": "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
              }
            ],
            "combine": "AND"
          },
          "$.amount": {
            "matchers": [
              {
                "match": "decimal"
              }
            ],
            "combine": "AND"
          },
          "$.street": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          },
          "$.city": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          },
          "$.state": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          },
          "$.zip": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          },
          "$.country": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          }
        }
      }
    }
  ],
  "metadata": {
    "pactSpecification": {
      "version": "3.0.0"
    },
    "pact-jvm": {
      "version": "4.0.2"
    }
  }
}

The next step for the Shipment Service team is to share this pact file with the Order Service team, so they could run the provider-side Pact verifications as part of their test suites.

@RunWith(PactRunner.class)
@Provider(OrderServicePactsTest.PROVIDER_ID)
@PactFolder("pacts") 
public class OrderServicePactsTest {
    public static final String PROVIDER_ID = "Order Service";

    @TestTarget
    public final Target target = new AmqpTarget();
    private ObjectMapper objectMapper;
    
    @Before
    public void setUp() {
        objectMapper = new ObjectMapper();
    }

    @State("default")
    public void toDefaultState() {
    }
    
    @PactVerifyProvider("an Order confirmation message")
    public String verifyOrderConfirmed() throws JsonProcessingException {
        final OrderConfirmed order = new OrderConfirmed();
        
        order.setOrderId(UUID.randomUUID());
        order.setPaymentId(UUID.randomUUID());
        order.setAmount(new BigDecimal("102.33"));
        order.setStreet("1203 Westmisnter Blvrd");
        order.setCity("Westminster");
        order.setCountry("USA");
        order.setState("MI");
        order.setZip("92239");

        return objectMapper.writeValueAsString(order);
    }
}

The test harness picks up all the pact files from the @PactFolder and runs the tests against the @TestTarget; in this case we are wiring in the AmqpTarget, provided out of the box, but you could easily plug in your own specific target.
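
From the dependencies perspective, presumably only the Pact JVM JUnit modules are needed on each side. A sketch of the relevant pom.xml fragments, assuming the Pact JVM 4.0.x line used in this post (please double-check the coordinates against the version you actually pick, since the artifact naming has changed across releases):

<!-- consumer side (Shipment Service) -->
<dependency>
    <groupId>au.com.dius</groupId>
    <artifactId>pact-jvm-consumer-junit</artifactId>
    <version>4.0.2</version>
    <scope>test</scope>
</dependency>

<!-- provider side (Order Service) -->
<dependency>
    <groupId>au.com.dius</groupId>
    <artifactId>pact-jvm-provider-junit</artifactId>
    <version>4.0.2</version>
    <scope>test</scope>
</dependency>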

And this is basically it! The consumer (Shipment Service) has its expectations expressed in the test cases and shared with the producer (Order Service) in the shape of pact files. The producer has its own set of tests to make sure its model matches the consumers' view. Both sides could continue to evolve independently, and trust each other, as long as the pacts are not denounced (hopefully, never).

To be fair, Pact is not the only choice for doing consumer-driven contract testing; in the upcoming post (already in the works) we are going to talk about yet another excellent option, Spring Cloud Contract.

As for today, the complete project sources are available on Github.

Thursday, July 18, 2019

Testing Spring Boot conditionals the sane way

If you are a more or less experienced Spring Boot user, it is very likely that at some point you will run into a situation when particular beans or configurations have to be injected conditionally. The mechanics of it are well understood, but sometimes testing such conditions (and their combinations) could get messy. In this post we are going to talk about some possible (arguably, sane) ways to approach that.

Since Spring Boot 1.5.x is still widely used (nonetheless it is racing towards its EOL this August), we will include it along with Spring Boot 2.1.x, both with JUnit 4.x and JUnit 5.x. The techniques we are about to cover are equally applicable to regular configuration classes as well as auto-configuration classes.

The example we will be playing with is related to our home-made logging. Let us assume our Spring Boot application requires a bean for a dedicated logger with the name "sample". In certain circumstances, however, this logger has to be disabled (or become effectively a noop), so the property logging.enabled serves as a kill switch here. We use Slf4j and Logback in this example, but it is not really important. The LoggingConfiguration snippet below reflects this idea.

@Configuration
public class LoggingConfiguration {
    @Configuration
    @ConditionalOnProperty(name = "logging.enabled", matchIfMissing = true)
    public static class Slf4jConfiguration {
        @Bean
        Logger logger() {
            return LoggerFactory.getLogger("sample");
        }
    }
    
    @Bean
    @ConditionalOnMissingBean
    Logger logger() {
        return new NOPLoggerFactory().getLogger("sample"); 
    }
}

So how would we test that? Spring Boot (and Spring Framework in general) has always offered outstanding test scaffolding support. The @SpringBootTest and @TestPropertySource annotations allow us to quickly bootstrap the application context with customized properties. There is one issue though: they are applied at the test class level, not per test method. It certainly makes sense, but it basically requires you to create a test class per combination of conditionals.

If you are still with JUnit 4.x, there is one trick you may find useful which exploits the Enclosed runner, a hidden gem of the framework.

@RunWith(Enclosed.class)
public class LoggingConfigurationTest {
    @RunWith(SpringRunner.class)
    @SpringBootTest
    public static class LoggerEnabledTest {
        @Autowired private Logger logger;
        
        @Test
        public void loggerShouldBeSlf4j() {
            assertThat(logger).isInstanceOf(ch.qos.logback.classic.Logger.class);
        }
    }
    
    @RunWith(SpringRunner.class)
    @SpringBootTest
    @TestPropertySource(properties = "logging.enabled=false")
    public static class LoggerDisabledTest {
        @Autowired private Logger logger;
        
        @Test
        public void loggerShouldBeNoop() {
            assertThat(logger).isSameAs(NOPLogger.NOP_LOGGER);
        }
    }
}

You still have a class per condition, but at least they are all in the same nest. With JUnit 5.x, some things got easier, but not to the level one might expect. Unfortunately, Spring Boot 1.5.x does not support JUnit 5.x natively, so we have to rely on the extension provided by the spring-test-junit5 community module. Here are the relevant changes in pom.xml; please notice that junit is explicitly excluded from the spring-boot-starter-test dependency graph.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>com.github.sbrannen</groupId>
    <artifactId>spring-test-junit5</artifactId>
    <version>1.5.0</version>
    <scope>test</scope>
</dependency>

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <version>5.5.0</version>
    <scope>test</scope>
</dependency>

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.5.0</version>
    <scope>test</scope>
</dependency>

The test case itself is not very different besides the usage of the @Nested annotation, which comes from JUnit 5.x to support tests as inner classes.

public class LoggingConfigurationTest {
    @Nested
    @ExtendWith(SpringExtension.class)
    @SpringBootTest
    @DisplayName("Logging is enabled, expecting Slf4j logger")
    public static class LoggerEnabledTest {
        @Autowired private Logger logger;
        
        @Test
        public void loggerShouldBeSlf4j() {
            assertThat(logger).isInstanceOf(ch.qos.logback.classic.Logger.class);
        }
    }
    
    @Nested
    @ExtendWith(SpringExtension.class)
    @SpringBootTest
    @TestPropertySource(properties = "logging.enabled=false")
    @DisplayName("Logging is disabled, expecting NOOP logger")
    public static class LoggerDisabledTest {
        @Autowired private Logger logger;
        
        @Test
        public void loggerShouldBeNoop() {
            assertThat(logger).isSameAs(NOPLogger.NOP_LOGGER);
        }
    }
}

If you try to run the tests from the command line using Apache Maven and the Maven Surefire plugin, you might be surprised to see that none of them were executed during the build. The issue is that ... all nested classes are excluded ... so we need to put another workaround in place.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.2</version>
    <configuration>
        <excludes>
            <exclude />
        </excludes>
    </configuration>
</plugin>

With that, things should be rolling smoothly. But enough about legacy: Spring Boot 2.1.x comes as a complete game changer. The family of context runners, ApplicationContextRunner, ReactiveWebApplicationContextRunner and WebApplicationContextRunner, provides an easy and straightforward way to tailor the context at the per-test-method level, keeping the test executions incredibly fast.

public class LoggingConfigurationTest {
    private final ApplicationContextRunner runner = new ApplicationContextRunner()
        .withConfiguration(UserConfigurations.of(LoggingConfiguration.class));
    
    @Test
    public void loggerShouldBeSlf4j() {
        runner
            .run(ctx -> 
                assertThat(ctx.getBean(Logger.class)).isInstanceOf(ch.qos.logback.classic.Logger.class)
            );
    }
    
    @Test
    public void loggerShouldBeNoop() {
        runner
            .withPropertyValues("logging.enabled=false")
            .run(ctx -> 
                assertThat(ctx.getBean(Logger.class)).isSameAs(NOPLogger.NOP_LOGGER)
            );
    }
}
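
Nothing stops us from combining several knobs in a single test method either. Here is a quick sketch of that (ExtraConfiguration is a made-up class standing in for any additional user configuration you may want to mix in):

@Test
public void loggerShouldBeSlf4jWithExtraConfiguration() {
    runner
        // flip any number of properties for this test method only
        .withPropertyValues("logging.enabled=true")
        // and mix in additional configuration classes as needed
        .withUserConfiguration(ExtraConfiguration.class)
        .run(ctx -> 
            assertThat(ctx.getBean(Logger.class)).isInstanceOf(ch.qos.logback.classic.Logger.class)
        );
}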

It looks really great. The JUnit 5.x support in Spring Boot 2.1.x is much better, and with the upcoming 2.2 release, JUnit 5.x will be the default engine (not to worry, the old JUnit 4.x will still be supported). As of now, the switch to JUnit 5.x needs a bit of work on the dependencies side.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <scope>test</scope>
</dependency>

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <scope>test</scope>
</dependency>

As an additional step, you may need to use a recent Maven Surefire plugin, 2.22.0 or above, with out-of-the-box JUnit 5.x support. The snippet below illustrates that.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.2</version>
</plugin>

The sample configuration we have worked with is pretty naive; many real-world applications would end up with quite complex contexts built out of many conditionals. The flexibility and enormous opportunities that come with the context runners, an invaluable addition to the Spring Boot 2.x test scaffolding, are just life savers, so please keep them in mind.

The complete project sources are available on Github.

Saturday, May 11, 2019

When HTTP status code is not enough: tackling web APIs error reporting

One area of RESTful web API design, quite frequently overlooked, is how to report errors and problems, either business- or application-related. The proper usage of HTTP status codes comes to mind first, and although quite handy, often it is not informative enough. Let us take 400 Bad Request, for example. Yes, it clearly states that the request is problematic, but what exactly is wrong?

The RESTful architectural style does not dictate what should be done in this case, and so everyone is inventing their own styles, conventions and specifications. It could be as simple as including an error message into the response or as shortsighted as copy/pasting long stack traces (in the case of Java or .NET, to name a few culprits). There is no shortage of ideas, but luckily we have at least some guidance available in the form of RFC 7807: Problem Details for HTTP APIs. Despite the fact that it is (still) a draft rather than an official specification, it outlines good common principles for the problem at hand, and this is what we are going to talk about in this post.

In a nutshell, RFC 7807: Problem Details for HTTP APIs just proposes an error or problem representation (in JSON or XML format) which may include at least the following details:

  • type - A URI reference that identifies the problem type
  • title - A short, human-readable summary of the problem type
  • status - The HTTP status code
  • detail - A human-readable explanation specific to this occurrence of the problem
  • instance - A URI reference that identifies the specific occurrence of the problem
More importantly, the problem type definitions may extend the problem details object with additional members, contributing to the ones above. As you see, it looks dead simple from the implementation perspective. Even better, thanks to Zalando, we already have an RFC 7807: Problem Details for HTTP APIs implementation for Java (and Spring Web in particular). So ... let us give it a try!
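
The library is available from Maven Central; a minimal setup could look like the snippet below (the version here is an assumption, pick the latest one available). The jackson-datatype-problem module, registered with your ObjectMapper via the ProblemModule, takes care of the JSON serialization.

<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>problem</artifactId>
    <version>0.23.0</version>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>jackson-datatype-problem</artifactId>
    <version>0.23.0</version>
</dependency>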

Our imaginary People Management web API is going to be built using a state-of-the-art technology stack: Spring Boot and Apache CXF, the popular web services framework and JAX-RS 2.1 implementation. To keep it somewhat simple, there are only two endpoints which are exposed: registration and lookup by person identifier.
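
Before diving into the error reporting, here is a skeleton of the resource class in question, reconstructed from the snippets that follow (the PeopleService type is an assumption, while the class name and the injected fields come straight from the examples below):

@Path("/people")
public class PeopleResource {
    // injected by the JAX-RS runtime, used to build the problem's instance URI
    @Context private UriInfo uriInfo;
    // hypothetical service encapsulating the people storage
    @Inject private PeopleService service;

    // the two endpoints, findById(...) and register(...), are discussed below
}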

Sweeping aside the tons of issues and business constraints you may run into while developing real-world services, even with this simple API a few things may go wrong. The first problem we are going to tackle is: what if the person you are looking for is not registered yet? Looks like a fit for 404 Not Found, right? Indeed, let us start with our first problem, PersonNotFoundProblem!

public class PersonNotFoundProblem extends AbstractThrowableProblem {
    private static final long serialVersionUID = 7662154827584418806L;
    private static final URI TYPE = URI.create("http://localhost:21020/problems/person-not-found");
    
    public PersonNotFoundProblem(final String id, final URI instance) {
        super(TYPE, "Person is not found", Status.NOT_FOUND, 
            "Person with identifier '" + id + "' is not found", instance, 
                null, Map.of("id", id));
    }
}

It resembles a lot the typical Java exception, and it really is one, since AbstractThrowableProblem is a subclass of RuntimeException. As such, we could throw it from our JAX-RS API.

@Produces({ MediaType.APPLICATION_JSON, "application/problem+json" })
@GET
@Path("{id}")
public Person findById(@PathParam("id") String id) {
    return service
        .findById(id)
        .orElseThrow(() -> new PersonNotFoundProblem(id, uriInfo.getRequestUri()));
}

If we run the server and just try to fetch a person providing any identifier, the problem detail response is going to be returned (since the dataset is not pre-populated), for example:

$ curl "http://localhost:21020/api/people/1" -H  "Accept: */*" 

HTTP/1.1 404
Content-Type: application/problem+json

{
    "type" : "http://localhost:21020/problems/person-not-found",
    "title" : "Person is not found",
    "status" : 404,
    "detail" : "Person with identifier '1' is not found",
    "instance" : "http://localhost:21020/api/people/1",
    "id" : "1"
}

Please notice the usage of the application/problem+json media type along with the additional property id being included into the response. Although there are many things which could be improved, it is arguably better than just a naked 404 (or a 500 caused by an EntityNotFoundException). Plus, the documentation section behind this type of problem (in our case, http://localhost:21020/problems/person-not-found) could be consulted in case further clarification is needed.

So designing the problems after exceptions is just one option. You may often (and for very valid reasons) refrain from coupling your business logic with such unrelated details. In that case, it is perfectly valid to return the problem details as the response payload from the JAX-RS resource. As an example, the registration process may raise a NonUniqueEmailException, so our web API layer could transform it into the appropriate problem detail.

@Consumes(MediaType.APPLICATION_JSON)
@Produces({ MediaType.APPLICATION_JSON, "application/problem+json" })
@POST
public Response register(@Valid final CreatePerson payload) {
    try {
        final Person person = service.register(payload.getEmail(), 
            payload.getFirstName(), payload.getLastName());
            
        return Response
            .created(uriInfo.getRequestUriBuilder().path(person.getId()).build())
            .entity(person)
            .build();

    } catch (final NonUniqueEmailException ex) {
        return Response
            .status(Response.Status.BAD_REQUEST)
            .type("application/problem+json")
            .entity(Problem
                .builder()
                .withType(URI.create("http://localhost:21020/problems/non-unique-email"))
                .withInstance(uriInfo.getRequestUri())
                .withStatus(Status.BAD_REQUEST)
                .withTitle("The email address is not unique")
                .withDetail(ex.getMessage())
                .with("email", payload.getEmail())
                .build())
            .build();
    }
}

To trigger this issue, it is enough to run the server instance and try to register the same person twice, like we have done below.

$ curl -X POST "http://localhost:21020/api/people" \ 
     -H  "Accept: */*" -H "Content-Type: application/json" \
     -d '{"email":"john@smith.com", "firstName":"John", "lastName": "Smith"}'

HTTP/1.1 400                                                                              
Content-Type: application/problem+json                                                           
                                                                                                                                                                                   
{                                                                                         
    "type" : "http://localhost:21020/problems/non-unique-email",                            
    "title" : "The email address is not unique",                                            
    "status" : 400,                                                                         
    "detail" : "The email 'john@smith.com' is not unique and is already registered",        
    "instance" : "http://localhost:21020/api/people",                                       
    "email" : "john@smith.com"                                                              
}                                                                                         

Great, so our last example is a bit more complicated but, probably, at the same time the most realistic one. Our web API heavily relies on Bean Validation in order to make sure the input provided by the consumers of the API is valid. How would we represent the validation errors as problem details? The most straightforward way is to supply a dedicated ExceptionMapper provider, which is part of the JAX-RS specification. Let us introduce one.

@Provider
public class ValidationExceptionMapper implements ExceptionMapper<ValidationException> {
    @Context private UriInfo uriInfo;
    
    @Override
    public Response toResponse(final ValidationException ex) {
        if (ex instanceof ConstraintViolationException) {
            final ConstraintViolationException constraint = (ConstraintViolationException) ex;
            
            final ThrowableProblem problem = Problem
                    .builder()
                    .withType(URI.create("http://localhost:21020/problems/invalid-parameters"))
                    .withTitle("One or more request parameters are not valid")
                    .withStatus(Status.BAD_REQUEST)
                    .withInstance(uriInfo.getRequestUri())
                    .with("invalid-parameters", constraint
                        .getConstraintViolations()
                        .stream()
                        .map(this::buildViolation)
                        .collect(Collectors.toList()))
                    .build();

            return Response
                .status(Response.Status.BAD_REQUEST)
                .type("application/problem+json")
                .entity(problem)
                .build();
        }
        
        return Response
            .status(Response.Status.INTERNAL_SERVER_ERROR)
            .type("application/problem+json")
            .entity(Problem
                .builder()
                .withTitle("The server is not able to process the request")
                .withType(URI.create("http://localhost:21020/problems/server-error"))
                .withInstance(uriInfo.getRequestUri())
                .withStatus(Status.INTERNAL_SERVER_ERROR)
                .withDetail(ex.getMessage())
                .build())
            .build();
    }

    protected Map<?, ?> buildViolation(ConstraintViolation<?> violation) {
        return Map.of(
                "bean", violation.getRootBeanClass().getName(),
                "property", violation.getPropertyPath().toString(),
                "reason", violation.getMessage(),
                "value", Objects.requireNonNullElse(violation.getInvalidValue(), "null")
            );
    }
}

The snippet above distinguishes two kinds of issues: ConstraintViolationExceptions indicate invalid input and are mapped to 400 Bad Request, whereas generic ValidationExceptions indicate a problem on the server side and are mapped to 500 Internal Server Error. We only extract the basic details about the violations, however even that improves the error reporting a lot.

$ curl -X POST "http://localhost:21020/api/people" \
    -H  "Accept: */*" -H "Content-Type: application/json" \
    -d '{"email":"john.smith", "firstName":"John"}' -i    

HTTP/1.1 400                                                                    
Content-Type: application/problem+json                                              
                                                                                
{                                                                               
    "type" : "http://localhost:21020/problems/invalid-parameters",                
    "title" : "One or more request parameters are not valid",                     
    "status" : 400,                                                               
    "instance" : "http://localhost:21020/api/people",                             
    "invalid-parameters" : [ 
        {
            "reason" : "must not be blank",                                             
            "value" : "null",                                                           
            "bean" : "com.example.problem.resource.PeopleResource",                     
            "property" : "register.payload.lastName"                                    
        }, 
        {                                                                          
            "reason" : "must be a well-formed email address",                           
            "value" : "john.smith",                                                     
            "bean" : "com.example.problem.resource.PeopleResource",                     
            "property" : "register.payload.email"                                       
        } 
    ]                                                                           
}                                                                               

This time the additional information bundled into the invalid-parameters member is quite verbose: we know the class (PeopleResource), the method (register), the method's argument (payload) and the properties (lastName and email) respectively (all of that extracted from the property path).

Meaningful error reporting is one of the cornerstones of modern RESTful web APIs. Often it is not easy, but it is definitely worth the effort. The consumers (which often are just other developers) should have a clear understanding of what went wrong and what to do about it. RFC 7807: Problem Details for HTTP APIs is a step in the right direction, and libraries like problem and problem-spring-web are here to back you up, so please make use of them.

The complete source code is available on Github.

Sunday, February 24, 2019

The Hypermedia APIs support in JAX-RS and OpenAPI: a long way to go

Sooner or later, most of the developers who actively work on REST(ful) web services and APIs stumble upon this truly extraterrestrial thing called HATEOAS: Hypertext As The Engine Of Application State. The curiosity about what HATEOAS is and how it relates to REST would eventually lead to the discovery of the Richardson Maturity Model, which demystifies the industry definitions of REST and RESTful. The latter comes as an enlightenment, raising the question however: have we been doing REST wrong all these years?

Let us try to answer this question from different perspectives. HATEOAS is one of the core REST architectural constraints. From this perspective, the answer is "yes": in order to claim REST compliance, the web service or API should support it. Nonetheless, if you look around (or even consult your past or present experience), you may find out that the majority of web services and APIs are just CRUD wrappers around the domain models, with no HATEOAS support whatsoever. Why is that? Probably there is more than one reason, but from the developer's toolbox perspective, the backing of HATEOAS is not that great.

In today's post we are going to talk about what JAX-RS 2.x has to offer with respect to HATEOAS, how to use that from the server and client perspectives, and how to augment the OpenAPI v3.0.x specification to expose hypermedia as part of the contract. If you are excited, let us get started.

So our JAX-RS web APIs are going to be built around managing companies and their staff. The foundation is Spring Boot and Apache CXF, with Swagger as the OpenAPI specification implementation. The AppConfig class is the only piece of configuration we need to define in order to get the application up and running (thanks to Spring Boot auto-configuration capabilities).

@SpringBootConfiguration
public class AppConfig {
    @Bean
    OpenApiFeature createOpenApiFeature() {
        final OpenApiFeature openApiFeature = new OpenApiFeature();
        openApiFeature.setSwaggerUiConfig(new SwaggerUiConfig().url("/api/openapi.json"));
        return openApiFeature;
    }
    
    @Bean
    JacksonJsonProvider jacksonJsonProvider() {
        return new JacksonJsonProvider();
    }
}

The model is very simple, Company and Person (please notice that there are no direct relationships between these two classes, purposely).

public class Company {
    private String id;
    private String name;
}

public class Person {
    private String id;
    private String email;
    private String firstName;
    private String lastName;
}

This model is exposed through CompanyResource, a typical JAX-RS resource class annotated with @Path and, additionally, with OpenAPI's @Tag annotation.

@Component
@Path( "/companies" ) 
@Tag(name = "companies")
public class CompanyResource {
    @Autowired private CompanyService service;
}

Great, the resource class has no endpoints defined yet, so let us beef it up. Our first endpoint would look up the company by identifier and return its representation in JSON format. But since we do not incorporate any staff-related details, it would be awesome to hint the consumer (web UI or any other client) where to look them up. There are multiple ways to do that, but since we stick to JAX-RS, we could use Web Linking (RFC 5988), which is supported out of the box. The code snippet is worth a thousand words.

@Produces(MediaType.APPLICATION_JSON)
@GET
@Path("{id}")
public Response getCompanyById(@Context UriInfo uriInfo, @PathParam("id") String id) {
    return service
        .findCompanyById(id)
        .map(company -> Response
            .ok(company)
            .links(
                Link.fromUriBuilder(uriInfo
                        .getRequestUriBuilder())
                    .rel("self")
                    .build(),
                Link.fromUriBuilder(uriInfo
                        .getBaseUriBuilder()
                        .path(CompanyResource.class))
                    .rel("collection")
                    .build(),
                Link.fromUriBuilder(uriInfo
                       .getBaseUriBuilder()
                       .path(CompanyResource.class)
                       .path(CompanyResource.class, "getStaff"))
                    .rel("staff")
                    .build(id)
             )
            .build())
        .orElseThrow(() -> new NotFoundException("The company with id '" + id + "' does not exists"));
}

There are a few things happening here. The one we care about is the usage of the ResponseBuilder::links method, where we supply three links. The first is self, which is essentially the link context (defined as part of RFC 5988). The second one, collection, points to the CompanyResource endpoint which returns the list of companies (it is also included in the standard relations registry). And lastly, the third one is our own staff relation, which we assemble from another CompanyResource endpoint implemented by the method with the name getStaff (we are going to see it shortly). These links are going to be delivered in the Link response header and guide the client where to go next. Let us see it in action by running the application.

$ mvn clean package 
$ java -jar target/jax-rs-2.1-hateaos-0.0.1-SNAPSHOT.jar

And inspect the response from this resource endpoint using curl (the unnecessary details have been filtered out).

$ curl -v http://localhost:8080/api/companies/1
> GET /api/companies/1 HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 200
< Link: <http://localhost:8080/api/companies/1>;rel="self"
< Link: <http://localhost:8080/api/companies/1/staff>;rel="staff"
< Link: <http://localhost:8080/api/companies>;rel="collection"
< Content-Type: application/json
< Transfer-Encoding: chunked
<
{
   "id":"1",
   "name":"HATEOAS, Inc."
}

The Link header is there, referring to the other endpoints of interest. From the client perspective, things are looking pretty straightforward as well. The Response class provides a dedicated getLinks method to wrap around the access to the Link response header, for example:

final Client client = ClientBuilder.newClient();

try (final Response response = client
        .target("http://localhost:8080/api/companies/{id}")
        .resolveTemplate("id", "1")
        .request()
        .accept(MediaType.APPLICATION_JSON)
        .get()) {
            
    final Optional<Link> staff = response
        .getLinks()
        .stream()
        .filter(link -> Objects.equals(link.getRel(), "staff"))
        .findFirst();
            
    staff.ifPresent(link -> {
        // follow the link here 
    });           
} finally {
    client.close();
}
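
By the way, following the discovered link does not require assembling URIs by hand: the JAX-RS client API accepts the Link instance directly via Client::invocation. A quick sketch of what could go in place of that comment:

staff.ifPresent(link -> {
    // Client::invocation consumes the Link as-is, no manual URI handling
    try (final Response staffResponse = client.invocation(link).get()) {
        // the staff representation could be read off the response here
        final String payload = staffResponse.readEntity(String.class);
    }
});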

So far so good. Moving forward, since HATEOAS is essentially a part of the web API contract, let us find out what the OpenAPI specification has on the table for it. Unfortunately, HATEOAS is not supported as of now, but on the bright side, there is a notion of links (although they should not be confused with Web Linking; they are somewhat similar but not the same). To illustrate the usage of the links as part of the OpenAPI specification, let us decorate the endpoint with Swagger annotations.

@Operation(
    description = "Find Company by Id",
    responses = {
        @ApiResponse(
            content = @Content(schema = @Schema(implementation = Company.class)),
            links = {
                @io.swagger.v3.oas.annotations.links.Link(
                   name = "self", 
                   operationRef = "#/paths/~1companies~1{id}/get",
                   description = "Find Company",
                   parameters = @LinkParameter(name = "id", expression = "$response.body#/id")
                ),
                @io.swagger.v3.oas.annotations.links.Link(
                    name = "staff", 
                    operationRef = "#/paths/~1companies~1{id}~1staff/get",
                    description = "Get Company Staff",
                    parameters = @LinkParameter(name = "id", expression = "$response.body#/id")
                ),
                @io.swagger.v3.oas.annotations.links.Link(
                    name = "collection", 
                    operationRef = "#/paths/~1companies/get",
                    description = "List Companies"
                )
            },
            description = "Company details",
            responseCode = "200"
        ),
        @ApiResponse(
            description = "Company does not exist",
            responseCode = "404"
        )
    }
)
@Produces(MediaType.APPLICATION_JSON)
@GET
@Path("{id}")
public Response getCompanyById(@Context UriInfo uriInfo, @PathParam("id") String id) {
  // ...
}

If we run the application and navigate to http://localhost:8080/api/api-docs in the browser (this is where the Swagger UI is hosted), we would be able to see the links section along each response.

But besides that ... there is not much you could do with the links there (please watch this issue if you are interested in the subject). The resource endpoint to get the company's staff looks quite similar.

@Operation(
    description = "Get Company Staff",
    responses = {
        @ApiResponse(
            content = @Content(array = @ArraySchema(schema = @Schema(implementation = Person.class))),
            links = {
                @io.swagger.v3.oas.annotations.links.Link(
                    name = "self", 
                    operationRef = "#/paths/~1companies~1{id}~1staff/get",
                    description = "Staff",
                    parameters = @LinkParameter(name = "id", expression = "$response.body#/id")
                ),
                @io.swagger.v3.oas.annotations.links.Link(
                    name = "company", 
                    operationRef = "#/paths/~1companies~1{id}/get",
                    description = "Company",
                    parameters = @LinkParameter(name = "id", expression = "$response.body#/id")
                )
            },
            description = "The Staff of the Company",
            responseCode = "200"
        ),
        @ApiResponse(
            description = "Company does not exist",
            responseCode = "404"
        )
    }
)
@Produces(MediaType.APPLICATION_JSON)
@GET
@Path("{id}/staff")
public Response getStaff(@Context UriInfo uriInfo, @PathParam("id") String id) {
    return service
        .findCompanyById(id)
        .map(c -> service.getStaff(c))
        .map(staff -> Response
            .ok(staff)
            .links(
                Link.fromUriBuilder(uriInfo
                        .getRequestUriBuilder())
                    .rel("self")
                    .build(),
                Link.fromUriBuilder(uriInfo
                        .getBaseUriBuilder()
                        .path(CompanyResource.class)
                        .path(id))
                    .rel("company")
                    .build()
             )
            .build())
        .orElseThrow(() -> new NotFoundException("The company with id '" + id + "' does not exists"));
}

As you might expect, besides the link to self, it also includes the link to the company. When we try it out using curl, the expected response headers are returned.

$ curl -v http://localhost:8080/api/companies/1/staff
> GET /api/companies/1/staff HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 200
< Link: <http://localhost:8080/api/companies/1/staff>;rel="self"
< Link: <http://localhost:8080/api/companies/1>;rel="company"
< Content-Type: application/json
< Transfer-Encoding: chunked
<
[
    {
        "id":"1",
        "email":"john@smith.com",
        "firstName":"John",
        "lastName":"Smith"
    },
    {
        "id":"2",
        "email":"bob@smith.com",
        "firstName":"Bob",
        "lastName":"Smith"
    }
]

So what kind of conclusions can we draw from that? HATEOAS indeed unifies the interaction model between web API providers and consumers by dynamically driving the conversations. This is very powerful, but most of the frameworks and tools out there either have pretty basic support for HATEOAS (for example, Web Linking) or none at all.

There are many use cases where the usage of Web Linking is sufficient (the examples we have seen so far, paging, navigation, ...), but what about, let us say, creating, editing or patching existing resources? What about enriching with hypermedia the individual elements which are returned in a collection (described in RFC 6573)? Is HATEOAS worth all these efforts?

As always, the answer is "it depends"; maybe we should look beyond JAX-RS? In the next post(s) we are going to continue figuring things out.

The complete source code is available on Github.

Tuesday, November 6, 2018

Building Enterprise Java applications, the Spring way

I think it is fair to say that Java EE has gained a pretty bad reputation among Java developers. Despite the fact that it has certainly improved on all fronts over the years, and even changed home to the Eclipse Foundation to become Jakarta EE, its bitter taste is still quite strong. On the other side we have the Spring Framework (or, to reflect reality better, a full-fledged Spring Platform): a brilliant, lightweight, fast, innovative and hyper-productive Java EE replacement. So why bother with Java EE?

We are going to answer this question by showing how easy it is to build modern Java applications using most of the Java EE specs. And the key ingredient to success here is Eclipse Microprofile: enterprise Java in the age of microservices.

The application we are going to build is a RESTful web API to manage people, as simple as that. The standard way to build RESTful web services in Java is JAX-RS 2.1 (JSR-370). Consequently, CDI 2.0 (JSR-365) is going to take care of dependency injection, whereas JPA 2.0 (JSR-317) is going to cover the data access layer. And certainly, Bean Validation 2.0 (JSR-380) helps us deal with input validation.

The only non-Java EE specification we will rely on is OpenAPI v3.0, which helps provide a usable description of our RESTful web APIs. With that, let us get started with the PersonEntity domain model (omitting getters and setters as not very relevant details):

@Entity
@Table(name = "people")
public class PersonEntity {
    @Id @Column(length = 256) 
    private String email;

    @Column(nullable = false, length = 256, name = "first_name")
    private String firstName;

    @Column(nullable = false, length = 256, name = "last_name")
    private String lastName;

    @Version
    private Long version;
}
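
One detail worth mentioning: the repository below instantiates the entity through a three-argument constructor, and JPA itself requires a no-argument one, so the entity is assumed to carry both, along these lines:

// No-argument constructor required by JPA
protected PersonEntity() {
}

// Convenience constructor used by the repository below
public PersonEntity(final String email, final String firstName, final String lastName) {
    this.email = email;
    this.firstName = firstName;
    this.lastName = lastName;
}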

Other than the constructors, the entity has just the absolute minimum set of properties. The JPA repository is pretty straightforward and implements a typical set of CRUD methods.

@ApplicationScoped
@EntityManagerConfig(qualifier = PeopleDb.class)
public class PeopleJpaRepository implements PeopleRepository {
    @Inject @PeopleDb private EntityManager em;

    @Override
    @Transactional(readOnly = true)
    public Optional<PersonEntity> findByEmail(String email) {
        final CriteriaBuilder cb = em.getCriteriaBuilder();
    
        final CriteriaQuery<PersonEntity> query = cb.createQuery(PersonEntity.class);
        final Root<PersonEntity> root = query.from(PersonEntity.class);
        query.where(cb.equal(root.get(PersonEntity_.email), email));
        
        try {
            final PersonEntity entity = em.createQuery(query).getSingleResult();
            return Optional.of(entity);
        } catch (final NoResultException ex) {
            return Optional.empty();
        }
    }

    @Override
    @Transactional
    public PersonEntity saveOrUpdate(String email, String firstName, String lastName) {
        final PersonEntity entity = new PersonEntity(email, firstName, lastName);
        em.persist(entity);
        return entity;
    }

    @Override
    @Transactional(readOnly = true)
    public Collection<PersonEntity> findAll() {
        final CriteriaBuilder cb = em.getCriteriaBuilder();
        final CriteriaQuery<PersonEntity> query = cb.createQuery(PersonEntity.class);
        // Explicitly select the root so the query is portable across JPA providers
        query.select(query.from(PersonEntity.class));
        return em.createQuery(query).getResultList();
    }

    @Override
    @Transactional
    public Optional<PersonEntity> deleteByEmail(String email) {
        return findByEmail(email)
            .map(entity -> {
                em.remove(entity);
                return entity;
            });
    }
}

The transaction management (namely, the @Transactional annotation) needs some explanation. In a typical Java EE application, the container runtime is responsible for managing transactions. Since we don't want to onboard an application container but stay lean, we could have used the EntityManager to start / commit / rollback transactions ourselves. It would certainly work, but it would pollute the code with boilerplate. Arguably, the better option is to use the Apache DeltaSpike CDI extensions for declarative transaction management (this is where the @Transactional and @EntityManagerConfig annotations come from). The snippet below illustrates how it is integrated.

@ApplicationScoped
public class PersistenceConfig {
    @PersistenceUnit(unitName = "peopledb")
    private EntityManagerFactory entityManagerFactory;

    @Produces @PeopleDb @TransactionScoped
    public EntityManager create() {
        return this.entityManagerFactory.createEntityManager();
    }

    public void dispose(@Disposes @PeopleDb EntityManager entityManager) {
        if (entityManager.isOpen()) {
            entityManager.close();
        }
    }
}
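
Just for contrast, here is a sketch of the boilerplate that @Transactional saves us from, using the resource-local EntityTransaction API (illustrative only):

@Override
public PersonEntity saveOrUpdate(String email, String firstName, String lastName) {
    final EntityTransaction tx = em.getTransaction();
    tx.begin();
    try {
        final PersonEntity entity = new PersonEntity(email, firstName, lastName);
        em.persist(entity);
        tx.commit();
        return entity;
    } catch (final RuntimeException ex) {
        // Roll back on any failure and rethrow to the caller
        if (tx.isActive()) {
            tx.rollback();
        }
        throw ex;
    }
}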

Awesome, the hardest part is already behind us! The Person data transfer object and the service layer are coming next.

public class Person {
    @NotNull private String email;
    @NotNull private String firstName;
    @NotNull private String lastName;
}

Honestly, for the sake of keeping the example application as small as possible, we could skip the service layer altogether and go to the repository directly. But since this is, in general, not a very good practice, let us introduce PeopleServiceImpl anyway.

@ApplicationScoped
public class PeopleServiceImpl implements PeopleService {
    @Inject private PeopleRepository repository;

    @Override
    public Optional<Person> findByEmail(String email) {
        return repository
            .findByEmail(email)
            .map(this::toPerson);
    }

    @Override
    public Person add(Person person) {
        return toPerson(repository.saveOrUpdate(person.getEmail(), person.getFirstName(), person.getLastName()));
    }

    @Override
    public Collection<Person> getAll() {
        return repository
            .findAll()
            .stream()
            .map(this::toPerson)
            .collect(Collectors.toList());
    }

    @Override
    public Optional<Person> remove(String email) {
        return repository
            .deleteByEmail(email)
            .map(this::toPerson);
    }
    
    private Person toPerson(PersonEntity entity) {
        return new Person(entity.getEmail(), entity.getFirstName(), entity.getLastName());
    }
}

The only part left is the definition of the JAX-RS application and resources.

@Dependent
@ApplicationPath("api")
@OpenAPIDefinition(
    info = @Info(
        title = "People Management Web APIs", 
        version = "1.0.0", 
        license = @License(
            name = "Apache License", 
            url = "https://www.apache.org/licenses/LICENSE-2.0"
        )
    )
)
public class PeopleApplication extends Application {
}

Not much to say, as simple as it could possibly be. The JAX-RS resource implementation is a bit more interesting, though (the OpenAPI annotations take up most of the space).

@ApplicationScoped
@Path( "/people" ) 
@Tag(name = "people")
public class PeopleResource {
    @Inject private PeopleService service;
    
    @Produces(MediaType.APPLICATION_JSON)
    @GET
    @Operation(
        description = "List all people", 
        responses = {
            @ApiResponse(
                content = @Content(array = @ArraySchema(schema = @Schema(implementation = Person.class))),
                responseCode = "200"
            )
        }
    )
    public Collection<Person> getPeople() {
        return service.getAll();
    }

    @Produces(MediaType.APPLICATION_JSON)
    @Path("/{email}")
    @GET
    @Operation(
        description = "Find person by e-mail", 
        responses = {
            @ApiResponse(
                content = @Content(schema = @Schema(implementation = Person.class)), 
                responseCode = "200"
            ),
            @ApiResponse(
                responseCode = "404", 
                description = "Person with such e-mail doesn't exists"
            )
        }
    )
    public Person findPerson(@Parameter(description = "E-Mail address to look up", required = true) @PathParam("email") final String email) {
        return service
            .findByEmail(email)
            .orElseThrow(() -> new NotFoundException("Person with such e-mail doesn't exist"));
    }

    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    @POST
    @Operation(
        description = "Create new person",
        requestBody = @RequestBody(
            content = @Content(schema = @Schema(implementation = Person.class))
        ), 
        responses = {
            @ApiResponse(
                 content = @Content(schema = @Schema(implementation = Person.class)),
                 headers = @Header(name = "Location"),
                 responseCode = "201"
            ),
            @ApiResponse(
                responseCode = "409", 
                description = "Person with such e-mail already exists"
            )
        }
    )
    public Response addPerson(@Context final UriInfo uriInfo,
            @Parameter(description = "Person", required = true) @Valid Person payload) {

        final Person person = service.add(payload);
        return Response
             .created(uriInfo.getRequestUriBuilder().path(person.getEmail()).build())
             .entity(person)
             .build();
    }
    
    @Path("/{email}")
    @DELETE
    @Operation(
        description = "Delete existing person",
        responses = {
            @ApiResponse(
                responseCode = "204",
                description = "Person has been deleted"
            ),
            @ApiResponse(
                responseCode = "404", 
                description = "Person with such e-mail doesn't exists"
            )
        }
    )
    public Response deletePerson(@Parameter(description = "E-Mail address to look up", required = true) @PathParam("email") final String email) {
        return service
            .remove(email)
            .map(r -> Response.noContent().build())
            .orElseThrow(() -> new NotFoundException("Person with such e-mail doesn't exist"));
    }
}

And with that, we are done! But how could we assemble and wire all these pieces together? Here is where Microprofile enters the stage. There are many implementations to choose from; the one we are going to use in this post is Project Hammock. The only thing we have to do is specify the CDI 2.0, JAX-RS 2.1 and JPA 2.0 implementations we would like to use, which translates to Weld, Apache CXF, and OpenJPA respectively (expressed through the Project Hammock dependencies). Let us take a look at the Apache Maven pom.xml file.

<properties>
    <deltaspike.version>1.8.1</deltaspike.version>
    <hammock.version>2.1</hammock.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.deltaspike.modules</groupId>
        <artifactId>deltaspike-jpa-module-api</artifactId>
        <version>${deltaspike.version}</version>
        <scope>compile</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.deltaspike.modules</groupId>
        <artifactId>deltaspike-jpa-module-impl</artifactId>
        <version>${deltaspike.version}</version>
        <scope>runtime</scope>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>dist-microprofile</artifactId>
        <version>${hammock.version}</version>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>jpa-openjpa</artifactId>
        <version>${hammock.version}</version>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>util-beanvalidation</artifactId>
        <version>${hammock.version}</version>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>util-flyway</artifactId>
        <version>${hammock.version}</version>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>swagger</artifactId>
        <version>${hammock.version}</version>
    </dependency>
</dependencies>

Without further ado, let us build and run the application right away (if you are curious which relational datastore the application uses: it is H2, with the database configured in-memory).

> mvn clean package
> java -jar target/eclipse-microprofile-hammock-0.0.1-SNAPSHOT-capsule.jar 
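
One piece of configuration worth showing is the peopledb persistence unit which PersistenceConfig refers to. A minimal META-INF/persistence.xml sketch could look like the one below; the entity's package and the H2 connection settings are assumptions here (Project Hammock has its own conventions for configuring datasources), so treat it as illustrative only.

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="peopledb" transaction-type="RESOURCE_LOCAL">
        <!-- The package is hypothetical, adjust it to the actual project layout -->
        <class>com.example.people.PersonEntity</class>
        <properties>
            <!-- Illustrative in-memory H2 connection settings -->
            <property name="javax.persistence.jdbc.driver" value="org.h2.Driver"/>
            <property name="javax.persistence.jdbc.url" value="jdbc:h2:mem:peopledb"/>
        </properties>
    </persistence-unit>
</persistence>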

The best way to ensure that our people management RESTful web APIs are fully functional is to send a couple of requests:

>  curl -X POST http://localhost:10900/api/people -H "Content-Type: application/json" \
     -d '{"email": "a@b.com", "firstName": "John", "lastName": "Smith"}'

HTTP/1.1 201 Created
Location: http://localhost:10900/api/people/a@b.com
Content-Type: application/json

{
    "firstName":"John","
    "lastName":"Smith",
    "email":"a@b.com"
}

What about making sure Bean Validation is working fine? To trigger it, let us send a partially filled request.

>  curl -X POST http://localhost:10900/api/people -H "Content-Type: application/json" \
     -d '{"firstName": "John", "lastName": "Smith"}'

HTTP/1.1 400 Bad Request
Content-Length: 0

The OpenAPI specification and pre-bundled Swagger UI distribution are also available at http://localhost:10900/index.html?url=http://localhost:10900/api/openapi.json.

So far so good, but frankly speaking, we have not talked about testing our application at all. How hard would it be to come up with an integration test for, let's say, the scenario of adding a person? It turns out that the frameworks around testing Java EE applications have improved a lot. In particular, it is exceptionally easy to accomplish with the Arquillian test framework (along with the beloved JUnit and REST Assured). One real example is worth a thousand words.

@RunWith(Arquillian.class)
@EnableRandomWebServerPort
public class PeopleApiTest {
    @ArquillianResource private URI uri;
    
    @Deployment
    public static JavaArchive createArchive() {
        return ShrinkWrap
            .create(JavaArchive.class)
            .addClasses(PeopleResource.class, PeopleApplication.class)
            .addClasses(PeopleServiceImpl.class, PeopleJpaRepository.class, PersistenceConfig.class)
            .addPackages(true, "org.apache.deltaspike");
    }
            
    @Test
    public void shouldAddNewPerson() throws Exception {
        final Person person = new Person("a@b.com", "John", "Smith");
        
        given()
            .contentType(ContentType.JSON)
            .body(person)
            .post(uri + "/api/people")
            .then()
            .assertThat()
            .statusCode(201)
            .body("email", equalTo("a@b.com"))
            .body("firstName", equalTo("John"))
            .body("lastName", equalTo("Smith"));
    }
}

Amazing, isn't it? It is actually a lot of fun to develop modern Java EE applications, someone may say, the Spring way! And in fact, the parallels with Spring are not coincidental, since it was inspiring, is inspiring, and undoubtedly will continue to inspire a lot of innovation in the Java EE ecosystem.

How does the future look? Bright by all means, I think, both for Jakarta EE and Eclipse Microprofile. The latter has just reached version 2.0, with tons of new specifications included, oriented towards addressing the needs of microservice architectures. It is awesome to witness these transformations happening.

The complete source of the project is available on Github.

Sunday, August 26, 2018

Embracing modular Java platform: Apache CXF on Java 10

It's been almost a year since the Java 9 release finally delivered Project Jigsaw to the masses. It was a long, long journey, but it is here, so what has changed? This is a very good question, and the answer to it is neither obvious nor straightforward.

By and large, Project Jigsaw is a disruptive change, and there are many reasons why. Although almost all of our existing applications will run on Java 10 (to be replaced by JDK 11 very soon) with minimal or no changes, Project Jigsaw brings deep and profound implications for Java developers: embracing modular applications the Java platform way.

With the myriad of awesome frameworks and libraries out there, it will surely take time, a lot of time, to convert them to Java modules (many will never make it). This path is thorny, but certain things are already possible even today. In this rather short post we are going to learn how to use the terrific Apache CXF project to build JAX-RS 2.1 web APIs in a truly modular fashion using the latest JDK 10.

Since the 3.2.5 release, all Apache CXF artifacts have their manifests enriched with an Automatic-Module-Name directive. This does not make them full-fledged modules, but it is a first step in the right direction. So let us get started ...
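
For example, the manifest of the cxf-rt-frontend-jaxrs artifact carries an entry along these lines (the module name matches what we will require in module-info.java below):

Automatic-Module-Name: org.apache.cxf.frontend.jaxrs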

If you use Apache Maven as the build tool of choice, not much has changed here: the dependencies are declared the same way as before.

<dependencies>
    <dependency>
        <groupId>org.apache.cxf</groupId>
        <artifactId>cxf-rt-frontend-jaxrs</artifactId>
        <version>3.2.5</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.jaxrs</groupId>
        <artifactId>jackson-jaxrs-json-provider</artifactId>
        <version>2.9.6</version>
    </dependency>

    <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-server</artifactId>
        <version>9.4.11.v20180605</version>
    </dependency>

    <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-webapp</artifactId>
        <version>9.4.11.v20180605</version>
    </dependency>
</dependencies>

Uber-jar or fat-jar packaging is not really applicable to modular Java applications, so we have to collect the modules ourselves, for example in the target/modules folder.

<plugin>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.1.0</version>
    <configuration>
        <outputDirectory>${project.build.directory}/modules</outputDirectory>
    </configuration>
</plugin>

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>3.1.1</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <outputDirectory>${project.build.directory}/modules</outputDirectory>
                <includeScope>runtime</includeScope>
            </configuration>
        </execution>
    </executions>
</plugin>

All good; the next step is to create module-info.java and list there the name of our module (com.example.cxf in this case) and, among other things, all the modules it requires in order to be functional.

module com.example.cxf {
    exports com.example.rest;
    
    requires org.apache.cxf.frontend.jaxrs;
    requires org.apache.cxf.transport.http;
    requires com.fasterxml.jackson.jaxrs.json;
    
    requires transitive java.ws.rs;
    
    requires javax.servlet.api;
    requires jetty.server;
    requires jetty.servlet;
    requires jetty.util;
    
    requires java.xml.bind;
}

As you may spot right away, org.apache.cxf.frontend.jaxrs and org.apache.cxf.transport.http come from the Apache CXF distribution (the complete list is available in the documentation), whereas java.ws.rs is the JAX-RS 2.1 API module. After that we can proceed with implementing our JAX-RS resources the same way we did before.

@Path("/api/people")
public class PeopleRestService {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Collection<Person> getAll() {
        return List.of(new Person("John", "Smith", "john.smith@somewhere.com"));
    }
}
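
One thing the post does not show is the com.example.Starter entry point we are going to run later. A minimal sketch of it, based on the standard JAXRSServerFactoryBean API, might look like this (the exact wiring is an assumption, not the project's actual bootstrap code):

package com.example;

import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;

import com.example.rest.PeopleRestService;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

public class Starter {
    public static void main(String[] args) throws Exception {
        // Publish the JAX-RS resource over the embedded Jetty HTTP transport
        final JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
        factory.setServiceBean(new PeopleRestService());
        factory.setProvider(new JacksonJsonProvider());
        factory.setAddress("http://localhost:8686/");
        final Server server = factory.create();

        // Keep the JVM alive while the server is accepting requests
        Thread.currentThread().join();
    }
}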

This looks easy, but how about adding some spicy sauce, like server-sent events (SSE) and RxJava, for example? Let us see how exceptionally easy it is, starting with the dependencies.

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-sse</artifactId>
    <version>3.2.5</version>
</dependency>

<dependency>
    <groupId>io.reactivex.rxjava2</groupId>
    <artifactId>rxjava</artifactId>
    <version>2.1.14</version>
</dependency>

Also, we should not forget to update our module-info.java by adding requires directives for these new modules.

module com.example.cxf {
    ...
    requires org.apache.cxf.rs.sse;
    requires io.reactivex.rxjava2;
    requires transitive org.reactivestreams;
    ...
}

In order to keep things simple, our SSE endpoint will just broadcast every new person added through the API. Here is the implementation snippet which does it.

private SseBroadcaster broadcaster;
private Builder builder;
private PublishSubject<Person> publisher;
    
public PeopleRestService() {
    publisher = PublishSubject.create();
}

@Context 
public void setSse(Sse sse) {
    this.broadcaster = sse.newBroadcaster();
    this.builder = sse.newEventBuilder();
        
    publisher
        .subscribeOn(Schedulers.single())
        .map(person -> createEvent(builder, person))
        .subscribe(broadcaster::broadcast);
}

@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response add(@Context UriInfo uriInfo, Person payload) {
    publisher.onNext(payload);
        
    return Response
        .created(
            uriInfo
                .getRequestUriBuilder()
                .path(payload.getEmail())
                .build())
        .entity(payload)
        .build();
}
    
@GET
@Path("/sse")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void people(@Context SseEventSink sink) {
    broadcaster.register(sink);
}
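
The createEvent helper used in the subscription above is not shown in the snippet; a possible implementation, using the standard JAX-RS 2.1 OutboundSseEvent builder (a sketch, the event name is an assumption), is:

private static OutboundSseEvent createEvent(final Builder builder, final Person person) {
    return builder
        .name("person")                             // the SSE event name is illustrative
        .mediaType(MediaType.APPLICATION_JSON_TYPE) // serialize the payload as JSON
        .data(Person.class, person)
        .build();
}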

Now when we build it:

      mvn clean package

And run it using module path:

      java --add-modules java.xml.bind \
           --module-path target/modules \
           --module com.example.cxf/com.example.Starter

We should be able to give our JAX-RS API a test drive. The simplest way to make sure things work as expected is to navigate in Google Chrome to the SSE endpoint http://localhost:8686/api/people/sse and add some random people through POST requests, using our old buddy curl from the command line:

      curl -X POST http://localhost:8686/api/people \
           -d '{"email": "john@smith.com", "firstName": "John", "lastName": "Smith"}' \
           -H "Content-Type: application/json"
      curl -X POST http://localhost:8686/api/people \
           -d '{"email": "tom@tommyknocker.com", "firstName": "Tom", "lastName": "Tommyknocker"}' \
           -H "Content-Type: application/json"

In Google Chrome we should be able to see the raw SSE events pushed by the server (they do not look pretty, but they are good enough to illustrate the flow).
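
For reference, each event arrives over the wire as plain text, roughly along these lines (assuming the event is named and serialized as in the createEvent sketch above):

      event: person
      data: {"email":"john@smith.com","firstName":"John","lastName":"Smith"}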

So, what about application packaging? Docker and containers are certainly a viable option, but with Java 9 and above we have another player: jlink. It assembles and optimizes a set of modules and their dependencies into a custom, self-sufficient runtime image. Let us try it out.

      jlink --add-modules java.xml.bind,java.management \
            --module-path target/modules \
            --verbose \
            --strip-debug \
            --compress 2 \
            --no-header-files \
            --no-man-pages \
            --output target/cxf-java-10-app

Here we hit the first wall. Unfortunately, since almost all the dependencies of our application are automatic modules, jlink cannot include them in the runtime image, and we still have to pass the module path explicitly when running from it:

      target/cxf-java-10-app/bin/java  \
           --add-modules java.xml.bind \
           --module-path target/modules \
           --module com.example.cxf/com.example.Starter

At the end of the day, it turned out to be not that scary. We are surely at a very early stage of JPMS adoption; this is just the beginning. When every library and every framework we use adds module-info.java to their artifacts (JARs), making them true modules despite all the quirks, then we can declare victory. But the small wins are already happening, so make one yours!

The complete source of the project is available on Github.