Sunday, August 26, 2018

Embracing modular Java platform: Apache CXF on Java 10

It's been almost a year since the Java 9 release finally delivered Project Jigsaw to the masses. It was a long, long journey, but it is here, so what has changed? This is a very good question, and the answer to it is neither obvious nor straightforward.

By and large, Project Jigsaw is a disruptive change, and there are many reasons why. Although almost all of our existing applications are going to run on Java 10 (to be replaced by JDK 11 very soon) with minimal or no changes, Project Jigsaw brings deep and profound implications for Java developers: to embrace modular applications the Java platform way.

With the myriad of awesome frameworks and libraries out there, it will surely take time, a lot of time, to convert them to Java modules (many will never make it). This path is thorny, but certain things are already possible even today. In this rather short post we are going to learn how to use the terrific Apache CXF project to build JAX-RS 2.1 web APIs in a truly modular fashion, using the latest JDK 10.

Since the 3.2.5 release, all Apache CXF artifacts have their manifests enriched with the Automatic-Module-Name directive. It does not make them full-fledged modules, but it is a first step in the right direction. So let us get started ...
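For instance, the cxf-rt-frontend-jaxrs artifact now carries a manifest entry along these lines, declaring the very module name we will require in our module-info.java in a moment:

Automatic-Module-Name: org.apache.cxf.frontend.jaxrs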

If you use Apache Maven as the build tool of choice, not much changes here: the dependencies are declared the same way as before.

<dependencies>
    <dependency>
        <groupId>org.apache.cxf</groupId>
        <artifactId>cxf-rt-frontend-jaxrs</artifactId>
        <version>3.2.5</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.jaxrs</groupId>
        <artifactId>jackson-jaxrs-json-provider</artifactId>
        <version>2.9.6</version>
    </dependency>

    <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-server</artifactId>
        <version>9.4.11.v20180605</version>
    </dependency>

    <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-webapp</artifactId>
        <version>9.4.11.v20180605</version>
    </dependency>
</dependencies>

Uber-jar or fat-jar packaging is not really applicable to modular Java applications, so we have to collect the modules ourselves, for example in the target/modules folder.

<plugin>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.1.0</version>
    <configuration>
        <outputDirectory>${project.build.directory}/modules</outputDirectory>
    </configuration>
</plugin>

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>3.1.1</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <outputDirectory>${project.build.directory}/modules</outputDirectory>
                <includeScope>runtime</includeScope>
            </configuration>
        </execution>
    </executions>
</plugin>

All good; the next step is to create module-info.java, declaring the name of our module (com.example.cxf in this case) and, among other things, all the modules it requires in order to be functional.

module com.example.cxf {
    exports com.example.rest;
    
    requires org.apache.cxf.frontend.jaxrs;
    requires org.apache.cxf.transport.http;
    requires com.fasterxml.jackson.jaxrs.json;
    
    requires transitive java.ws.rs;
    
    requires javax.servlet.api;
    requires jetty.server;
    requires jetty.servlet;
    requires jetty.util;
    
    requires java.xml.bind;
}

As you may spot right away, org.apache.cxf.frontend.jaxrs and org.apache.cxf.transport.http come from the Apache CXF distribution (the complete list is available in the documentation), whereas java.ws.rs is the JAX-RS 2.1 API module. After that we could proceed with implementing our JAX-RS resources the same way we did before.

@Path("/api/people")
public class PeopleRestService {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Collection<Person> getAll() {
        return List.of(new Person("John", "Smith", "john.smith@somewhere.com"));
    }
}
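
The Person class here is just a plain data holder; a minimal sketch (the real project may differ slightly):

public class Person {
    private String firstName;
    private String lastName;
    private String email;

    public Person() {
    }

    public Person(String firstName, String lastName, String email) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
    }

    public String getEmail() {
        return email;
    }

    // Remaining getters and setters are omitted
}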

This looks easy; how about adding some spicy sauce, like server-sent events (SSE) and RxJava, for example? Let us see how exceptionally easy it is, starting with the dependencies.

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-sse</artifactId>
    <version>3.2.5</version>
</dependency>

<dependency>
    <groupId>io.reactivex.rxjava2</groupId>
    <artifactId>rxjava</artifactId>
    <version>2.1.14</version>
</dependency>

Also, we should not forget to update our module-info.java by adding requires directives for these new modules.

module com.example.cxf {
    ...
    requires org.apache.cxf.rs.sse;
    requires io.reactivex.rxjava2;
    requires transitive org.reactivestreams;
    ...
}

In order to keep things simple, our SSE endpoint would just broadcast every new person added through the API. Here is the implementation snippet which does it.

private SseBroadcaster broadcaster;
private Builder builder;
private PublishSubject<Person> publisher;
    
public PeopleRestService() {
    publisher = PublishSubject.create();
}

@Context 
public void setSse(Sse sse) {
    this.broadcaster = sse.newBroadcaster();
    this.builder = sse.newEventBuilder();
        
    publisher
        .subscribeOn(Schedulers.single())
        .map(person -> createEvent(builder, person))
        .subscribe(broadcaster::broadcast);
}

@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response add(@Context UriInfo uriInfo, Person payload) {
    publisher.onNext(payload);
        
    return Response
        .created(
            uriInfo
                .getRequestUriBuilder()
                .path(payload.getEmail())
                .build())
        .entity(payload)
        .build();
}
    
@GET
@Path("/sse")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void people(@Context SseEventSink sink) {
    broadcaster.register(sink);
}
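
The createEvent helper referenced in the snippet is not shown above; a minimal sketch of what it could look like (our assumption, with a hypothetical event name) is:

private static OutboundSseEvent createEvent(OutboundSseEvent.Builder builder, Person person) {
    return builder
        .name("person")                              // hypothetical event name
        .data(Person.class, person)                  // serialized by the Jackson provider
        .mediaType(MediaType.APPLICATION_JSON_TYPE)
        .build();
}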

Now when we build it:

      mvn clean package

And run it using the module path:

      java --add-modules java.xml.bind \
           --module-path target/modules \
           --module com.example.cxf/com.example.Starter
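
The com.example.Starter referenced in the command is our entry point. Here is a minimal sketch of what it could look like (an assumption on our side, the real project may bootstrap differently), embedding Jetty and registering the CXF servlet:

package com.example;

import org.apache.cxf.jaxrs.servlet.CXFNonSpringJaxrsServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

import com.example.rest.PeopleRestService;

public class Starter {
    public static void main(String[] args) throws Exception {
        // The port matches the curl examples below
        final Server server = new Server(8686);

        // CXFNonSpringJaxrsServlet bootstraps the JAX-RS endpoint from init parameters
        final ServletHolder holder = new ServletHolder(new CXFNonSpringJaxrsServlet());
        holder.setInitParameter("jaxrs.serviceClasses", PeopleRestService.class.getName());
        holder.setInitParameter("jaxrs.providers",
            "com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider");

        final ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        context.addServlet(holder, "/*");

        server.setHandler(context);
        server.start();
        server.join();
    }
}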

We should be able to give our JAX-RS API a test drive. The simplest way to make sure things work as expected is to navigate in Google Chrome to the SSE endpoint http://localhost:8686/api/people/sse and add some random people through POST requests, using our old buddy curl from the command line:

      curl -X POST http://localhost:8686/api/people \
           -d '{"email": "john@smith.com", "firstName": "John", "lastName": "Smith"}' \
           -H "Content-Type: application/json"
      curl -X POST http://localhost:8686/api/people \
           -d '{"email": "tom@tommyknocker.com", "firstName": "Tom", "lastName": "Tommyknocker"}' \
           -H "Content-Type: application/json"

In Google Chrome we should be able to see the raw SSE events pushed by the server (they do not look pretty, but they are good enough to illustrate the flow).
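
The raw stream would look roughly like this (illustrative only; the exact fields depend on how createEvent populates the events):

data: {"firstName":"John","lastName":"Smith","email":"john@smith.com"}

data: {"firstName":"Tom","lastName":"Tommyknocker","email":"tom@tommyknocker.com"}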

So, what about application packaging? Docker and containers are certainly a viable option, but with Java 9 and above we have another player: jlink. It assembles and optimizes a set of modules and their dependencies into a custom, self-sufficient runtime image. Let us try it out.

      jlink --add-modules java.xml.bind,java.management \
            --module-path target/modules \
            --verbose \
            --strip-debug \
            --compress 2 \
            --no-header-files \
            --no-man-pages \
            --output target/cxf-java-10-app

Here we hit the first wall. Unfortunately, since almost all the dependencies of our application are automatic modules, jlink cannot include them in the image, and we still have to specify the module path explicitly when running from the runtime image:

      target/cxf-java-10-app/bin/java  \
           --add-modules java.xml.bind \
           --module-path target/modules \
           --module com.example.cxf/com.example.Starter

At the end of the day, it turned out to be not that scary. We are surely at a very early stage of JPMS adoption; this is just the beginning. When every library and every framework we use adds module-info.java to its artifacts (JARs), making them true modules despite all the quirks, then we could declare victory. But the small wins are already happening, so make one yours!

The complete source of the project is available on GitHub.

Sunday, June 17, 2018

In the shoes of the consumer: do you really need to provide the client libraries for your APIs?

The beauty of RESTful web services and APIs is that any consumer which speaks HTTP will be able to understand and use them. Nonetheless, the same dilemma pops up over and over again: should you accompany your web APIs with client libraries or not? If yes, what languages and/or frameworks should you support?

Quite often this is not an easy question to answer. So let us take a step back and think about the overall idea for a moment: what value could client libraries bring to the consumers?

Some may say they lower the barrier to adoption. Indeed, specifically in the case of strongly typed languages, exploring the API contracts from your favorite IDE (syntax highlighting and auto-completion, please!) is quite handy. But by and large, RESTful web APIs are simple enough to start with, and good documentation would certainly be more valuable here.

Others may say it is good to shield the consumers from dealing with multiple API versions or rough edges. That also kind of makes sense, but I would argue it just hides the flaws in the way the web APIs in question are designed and evolved over time.

All in all, no matter how many clients you decide to bundle, the APIs are still going to be accessible by any generic HTTP consumer (curl, HttpClient, RestTemplate, you name it). Giving a choice is great, but the price to pay for maintenance could be really high. Could we do better? As you may have guessed already, we certainly have quite a few options, hence this post.

The key ingredient of success here is to maintain an accurate specification of your RESTful web APIs, using OpenAPI v3.0 or even its predecessor, Swagger/OpenAPI 2.0 (or RAML, or API Blueprint, it does not really matter much). In the case of OpenAPI/Swagger, the tooling is king: one could use Swagger Codegen, a template-driven engine, to generate API clients (and even server stubs) in many different languages, and this is what we are going to talk about in this post.

To simplify things, we are going to implement a consumer of the people management web API which we built in the previous post. To begin with, we need to get its OpenAPI v3.0 specification in YAML (or JSON) format.

java -jar server-openapi/target/server-openapi-0.0.1-SNAPSHOT.jar

And then:

wget http://localhost:8080/api/openapi.yaml

Awesome, half of the job is done, literally. Now, let us allow Swagger Codegen to take the lead. In order not to complicate the matter, let's assume the consumer is also a Java application, so we can understand the mechanics without any difficulties (but Java is only one of the options; the list of supported languages and frameworks is astonishing).

Throughout this post we are going to use OpenFeign, one of the most advanced Java HTTP client binders. Not only is it exceptionally simple to use, it also offers quite a few integrations we are going to benefit from soon.

<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-core</artifactId>
    <version>9.7.0</version>
</dependency>

<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-jackson</artifactId>
    <version>9.7.0</version>
</dependency>

Swagger Codegen could be run as a stand-alone application from the command line, or as an Apache Maven plugin (the latter is what we are going to use).
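
For reference, the stand-alone flavor boils down to a single command, along these lines (a sketch; the exact options vary between versions):

java -jar swagger-codegen-cli.jar generate \
     -i contract/openapi.yaml \
     -l java \
     --library feign \
     -o target/generated-sources

The Maven plugin configuration we are going to rely on looks like this: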

<plugin>
    <groupId>io.swagger</groupId>
    <artifactId>swagger-codegen-maven-plugin</artifactId>
    <version>3.0.0-rc1</version>
    <executions>
        <execution>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <inputSpec>/contract/openapi.yaml</inputSpec>
                <apiPackage>com.example.people.api</apiPackage>
                <language>java</language>
                <library>feign</library>
                <modelPackage>com.example.people.model</modelPackage>
                <generateApiDocumentation>false</generateApiDocumentation>
                <generateSupportingFiles>false</generateSupportingFiles>
                <generateApiTests>false</generateApiTests>
                <generateApiDocs>false</generateApiDocs>
                <addCompileSourceRoot>true</addCompileSourceRoot>
                <configOptions>
                    <sourceFolder>/</sourceFolder>
                    <java8>true</java8>
                    <dateLibrary>java8</dateLibrary>
                    <useTags>true</useTags>
                </configOptions>
            </configuration>
        </execution>
    </executions>
</plugin>

If some of the options are not very clear, Swagger Codegen has pretty good documentation to consult for clarification. The important ones to pay attention to are language and library, which are set to java and feign respectively. One thing to note, though: the support for the OpenAPI v3.0 specification is mostly complete, but you may encounter some issues nonetheless (as you noticed, the version is 3.0.0-rc1).

What you get when the build finishes is a plain old Java interface, PeopleApi, annotated with OpenFeign annotations, which is a direct projection of the people management web API specification (which comes from /contract/openapi.yaml). Please notice that all the model classes are generated as well.

@javax.annotation.Generated(
    value = "io.swagger.codegen.languages.java.JavaClientCodegen",
    date = "2018-06-17T14:04:23.031Z[Etc/UTC]"
)
public interface PeopleApi extends ApiClient.Api {
    @RequestLine("POST /api/people")
    @Headers({"Content-Type: application/json", "Accept: application/json"})
    Person addPerson(Person body);

    @RequestLine("DELETE /api/people/{email}")
    @Headers({"Content-Type: application/json"})
    void deletePerson(@Param("email") String email);

    @RequestLine("GET /api/people/{email}")
    @Headers({"Accept: application/json"})
    Person findPerson(@Param("email") String email);

    @RequestLine("GET /api/people")
    @Headers({"Accept: application/json"})
    List<Person> getPeople();
}

Let us compare it with the Swagger UI interpretation of the same specification, available at http://localhost:8080/api/api-docs?url=/api/openapi.json:

It looks right at first glance, but we had better ensure things work as expected. Once we have the OpenFeign-annotated interface, it could be made functional (in this case, implemented through proxies) using the family of Feign builders, for example:

final PeopleApi api = Feign
    .builder()
    .client(new OkHttpClient())
    .encoder(new JacksonEncoder())
    .decoder(new JacksonDecoder())
    .logLevel(Logger.Level.HEADERS)
    .options(new Request.Options(1000, 2000))
    .target(PeopleApi.class, "http://localhost:8080/");

Great, the fluent builder style rocks. Assuming our people management web API server is up and running (by default, it is going to be available at http://localhost:8080/):

java -jar server-openapi/target/server-openapi-0.0.1-SNAPSHOT.jar

We could communicate with it by calling the methods of the freshly built PeopleApi instance, as in the code snippet below:

final Person person = api.addPerson(
        new Person()
            .email("a@b.com")
            .firstName("John")
            .lastName("Smith"));

It is really cool: if we rewind a bit, we actually did nothing. Everything is given to us for free, with only the web API specification available! But let us not stop here and remind ourselves that using Java interfaces does not eliminate the reality that we are dealing with remote systems. And things are going to fail here, sooner or later, no doubt.

Not so long ago we learned about circuit breakers and how useful they are when properly applied in the context of distributed systems. It would be really awesome to somehow introduce this feature into our OpenFeign-based client. Please welcome another member of the family, the HystrixFeign builder, a seamless integration with the Hystrix library:

final PeopleApi api = HystrixFeign
    .builder()
    .client(new OkHttpClient())
    .encoder(new JacksonEncoder())
    .decoder(new JacksonDecoder())
    .logLevel(Logger.Level.HEADERS)
    .options(new Request.Options(1000, 2000))
    .target(PeopleApi.class, "http://localhost:8080/");

The only thing we need to do is add these two dependencies (strictly speaking, hystrix-core is not really needed if you do not mind staying on an older version) to the consumer's pom.xml file.

<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-hystrix</artifactId>
    <version>9.7.0</version>
</dependency>

<dependency>
    <groupId>com.netflix.hystrix</groupId>
    <artifactId>hystrix-core</artifactId>
    <version>1.5.12</version>
</dependency>
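
What we gain with this builder is the ability to supply a fallback, which kicks in whenever the circuit breaker is open. A hedged sketch (the fallback behavior below is purely illustrative):

final PeopleApi api = HystrixFeign
    .builder()
    .client(new OkHttpClient())
    .encoder(new JacksonEncoder())
    .decoder(new JacksonDecoder())
    .target(PeopleApi.class, "http://localhost:8080/",
        // Illustrative fallback: degrade gracefully when the circuit is open
        new PeopleApi() {
            @Override
            public Person addPerson(Person body) {
                throw new IllegalStateException("The people management API is not available");
            }

            @Override
            public void deletePerson(String email) {
                // no-op: nothing to delete while the API is not available
            }

            @Override
            public Person findPerson(String email) {
                return null;
            }

            @Override
            public List<Person> getPeople() {
                return Collections.emptyList();
            }
        });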

Arguably, this is one of the best examples of how easy and straightforward an integration could be. But even that is not the end of the story. Observability in distributed systems is as important as ever, and as we learned a while ago, distributed tracing is tremendously useful in helping us out here. And again, OpenFeign supports it right out of the box, so let us take a look.

OpenFeign fully integrates with any OpenTracing-compatible tracer. The Jaeger tracer is one of those, which among other things has a really nice web UI front-end to explore traces and dependencies. Let us run it first; luckily, it is fully Docker-ized.

docker run -d -e \
    COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
    -p 5775:5775/udp \
    -p 6831:6831/udp \
    -p 6832:6832/udp \
    -p 5778:5778 \
    -p 16686:16686 \
    -p 14268:14268 \
    -p 9411:9411 \
    jaegertracing/all-in-one:latest

A couple of additional dependencies have to be introduced in order for the OpenFeign client to be aware of the OpenTracing capabilities.

<dependency>
    <groupId>io.github.openfeign.opentracing</groupId>
    <artifactId>feign-opentracing</artifactId>
    <version>0.1.0</version>
</dependency>

<dependency>
    <groupId>io.jaegertracing</groupId>
    <artifactId>jaeger-core</artifactId>
    <version>0.29.0</version>
</dependency>

On the Feign builder side, the only change (besides the introduction of the tracer instance) is to wrap the client into a TracingClient, as the snippet below demonstrates:

final Tracer tracer = new Configuration("consumer-openapi")
    .withSampler(
        new SamplerConfiguration()
            .withType(ConstSampler.TYPE)
            .withParam(new Float(1.0f)))
    .withReporter(
        new ReporterConfiguration()
            .withSender(
                new SenderConfiguration()
                    .withEndpoint("http://localhost:14268/api/traces")))
    .getTracer();
            
final PeopleApi api = Feign
    .builder()
    .client(new TracingClient(new OkHttpClient(), tracer))
    .encoder(new JacksonEncoder())
    .decoder(new JacksonDecoder())
    .logLevel(Logger.Level.HEADERS)
    .options(new Request.Options(1000, 2000))
    .target(PeopleApi.class, "http://localhost:8080/");

On the server side we need to integrate with OpenTracing as well. Apache CXF has first-class support for it, bundled into the cxf-integration-tracing-opentracing module. Let us include it as a dependency, this time in the server's pom.xml.

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-integration-tracing-opentracing</artifactId>
    <version>3.2.4</version>
</dependency>

Depending on the way you configure your applications, there should be an instance of the tracer available, which is later passed on to the OpenTracingFeature, for example:

// Create tracer
final Tracer tracer = new Configuration(
        "server-openapi", 
        new SamplerConfiguration(ConstSampler.TYPE, 1),
        new ReporterConfiguration(new HttpSender("http://localhost:14268/api/traces"))
    ).getTracer();

// Include OpenTracingFeature feature
final JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
factory.setProvider(new OpenTracingFeature(tracer));
...
factory.create();

From now on, the invocation of any people management API endpoint through the generated OpenFeign client will be fully traceable in the Jaeger web UI, available at http://localhost:16686/search (assuming your Docker host is localhost).

Our scenario is pretty simple, but imagine real applications where dozens of external service calls could happen while a single request travels through the system. Without distributed tracing in place, every issue has a chance of turning into a mystery.

As a side note, if you look closer at the trace in the picture, you may notice that the server and the consumer use different versions of the Jaeger API. This is not a mistake, since the latest released version of Apache CXF is using an older OpenTracing API version (and as such, an older Jaeger client API), but it does not prevent things from working as expected.

With that, it is time to wrap up. Hopefully, the benefits of contract-based (or even better, contract-first) development in the world of RESTful web services and APIs become more and more apparent: generation of smart clients, consumer-driven contract tests, discoverability and rich documentation are just a few to mention. Please make use of it!

The complete project sources are available on GitHub.

Monday, April 30, 2018

Moving With The Times: Towards OpenAPI v3.0.0 adoption in JAX-RS APIs

It is terrifying to see how fast time passes! The OpenAPI specification 3.0.0, a major revamp of the Swagger specification we got so used to, was released almost a year ago, but it took a while for the tooling to catch up. However, with the recent official release of Swagger Core 2.0.0, things are going to accelerate for sure.

To prove the point, Apache CXF, the well-known JAX-RS 2.1 implementation, is one of the first adopters of OpenAPI 3.0.0, and in today's post we are going to take a look at how easily your JAX-RS 2.1 APIs could benefit from it.

As always, to keep things simple we are going to design a people management web API with just a handful of resources to support it, nothing too exciting here.

POST   /api/people
GET    /api/people/{email}
GET    /api/people
DELETE /api/people/{email}

Our model would consist of a single Person class.

public class Person {
    private String email;
    private String firstName;
    private String lastName;
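    // Getters and setters are omitted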
}

To add a bit of magic, we will be using Spring Boot to get up and running as fast as possible. With that, let us start filling in the dependencies (assuming we are using Apache Maven for build management).

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-spring-boot-starter-jaxrs</artifactId>
    <version>3.2.4</version>
</dependency>

In the recent 3.2.x releases, Apache CXF introduces a new module, cxf-rt-rs-service-description-openapi-v3, dedicated to OpenAPI 3.0.0 and based on Swagger Core 2.0.0.

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-service-description-openapi-v3</artifactId>
    <version>3.2.4</version>
</dependency>

<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>swagger-ui</artifactId>
    <version>3.13.6</version>
</dependency>

The presence of Swagger UI is not strictly necessary, but it is an exceptionally useful and beautiful tool to explore your APIs (and if it is available on the classpath, Apache CXF seamlessly integrates it into your applications, as we are going to see in a bit).

The prerequisites are in place, so let us do some coding! Before we begin, it is worth noting that Swagger Core 2.0.0 offers many ways to populate the OpenAPI 3.0.0 definitions for your services, including property files, annotations, or doing it programmatically. Throughout this post we are going to use annotations only.

@OpenAPIDefinition(
    info = @Info(
        title = "People Management API",
        version = "0.0.1-SNAPSHOT",
        license = @License(
            name = "Apache 2.0 License",
            url = "http://www.apache.org/licenses/LICENSE-2.0.html"
        )
    )
)
@ApplicationPath("api")
public class JaxRsApiApplication extends Application {
}

Looks pretty simple: the @OpenAPIDefinition sets the top-level definition for all our web APIs. Moving on to the PeopleRestService, we just add the @Tag annotation to, well, tag our API.

@Path( "/people" ) 
@Tag(name = "people")
public class PeopleRestService {
    // ...
}

Awesome, nothing complicated so far. The meaty part starts with the web API operation definitions, so let us take a look at the first example, the operation to fetch everyone.

@Produces(MediaType.APPLICATION_JSON)
@GET
@Operation(
    description = "List all people", 
    responses = {
        @ApiResponse(
            content = @Content(
                array = @ArraySchema(schema = @Schema(implementation = Person.class))
            ),
            responseCode = "200"
        )
    }
)
public Collection<Person> getPeople() {
    // ...
}

Quite a few annotations, but by and large it looks pretty clean and straightforward. Let us take a look at another one, the endpoint to find a person by e-mail address.

@Produces(MediaType.APPLICATION_JSON)
@Path("/{email}")
@GET
@Operation(
    description = "Find person by e-mail", 
    responses = {
        @ApiResponse(
            content = @Content(schema = @Schema(implementation = Person.class)), 
            responseCode = "200"
        ),
        @ApiResponse(
            responseCode = "404", 
            description = "Person with such e-mail doesn't exists"
        )
    }
)
public Person findPerson(
        @Parameter(description = "E-Mail address to lookup for", required = true) 
        @PathParam("email") final String email) {
    // ...
}

In the same vein, the operation to remove the person by e-mail looks mostly identical.

@Path("/{email}")
@DELETE
@Operation(
    description = "Delete existing person",
    responses = {
        @ApiResponse(
            responseCode = "204",
            description = "Person has been deleted"
        ),
        @ApiResponse(
            responseCode = "404", 
            description = "Person with such e-mail doesn't exists"
        )
     }
)
public Response deletePerson(
        @Parameter(description = "E-Mail address to lookup for", required = true ) 
        @PathParam("email") final String email) {
    // ...
}

Great, let us wrap up by looking at the last, arguably the most interesting endpoint, which adds a new person (the choice to use @FormParam is purely to illustrate a different flavor of the APIs).

@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Produces(MediaType.APPLICATION_JSON)
@POST
@Operation(
    description = "Create new person",
    responses = {
        @ApiResponse(
            content = @Content(
                schema = @Schema(implementation = Person.class), 
                mediaType = MediaType.APPLICATION_JSON
            ),
            headers = @Header(name = "Location"),
            responseCode = "201"
        ),
        @ApiResponse(
            responseCode = "409", 
            description = "Person with such e-mail already exists"
        )
    }
)
public Response addPerson(@Context final UriInfo uriInfo,
        @Parameter(description = "E-Mail", required = true) 
        @FormParam("email") final String email, 
        @Parameter(description = "First Name", required = true) 
        @FormParam("firstName") final String firstName, 
        @Parameter(description = "Last Name", required = true) 
        @FormParam("lastName") final String lastName) {
    // ...
}

If you have experience documenting web APIs using the older Swagger specifications, you might find the approach quite familiar, but more verbose (or, better said, formalized). It is the result of the tremendous work which the specification leads and the community have done to make it as complete and extensible as possible.

The APIs are defined and documented; it is time to try them out! The missing piece, though, is the Spring configuration where we initialize and expose our JAX-RS web services.

@Configuration
@EnableAutoConfiguration
@ComponentScan(basePackageClasses = PeopleRestService.class)
public class AppConfig {
    @Autowired private PeopleRestService peopleRestService;
 
    @Bean(destroyMethod = "destroy")
    public Server jaxRsServer(Bus bus) {
        final JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();

        factory.setApplication(new JaxRsApiApplication());
        factory.setServiceBean(peopleRestService);
        factory.setProvider(new JacksonJsonProvider());
        factory.setFeatures(Arrays.asList(new OpenApiFeature()));
        factory.setBus(bus);
        factory.setAddress("/");

        return factory.create();
    }

    @Bean
    public ServletRegistrationBean cxfServlet() {
        final ServletRegistrationBean servletRegistrationBean = new ServletRegistrationBean(new CXFServlet(), "/api/*");
        servletRegistrationBean.setLoadOnStartup(1);
        return servletRegistrationBean;
    }
}

The OpenApiFeature is the key ingredient here; it takes care of all the integration and introspection. The Spring Boot application is the last touch to finish the picture.

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(AppConfig.class, args);
    }
}

Let us build and run it right away:

mvn clean package 
java -jar target/jax-rs-2.1-openapi-0.0.1-SNAPSHOT.jar

Once the application is started, the OpenAPI 3.0.0 specification of our web APIs should be live and available for consumption in JSON format at:

http://localhost:8080/api/openapi.json

Or in YAML format at:

http://localhost:8080/api/openapi.yaml

Wouldn't it be great to explore our web APIs and play with them? Since we included the Swagger UI dependency, this is a no-brainer: just navigate to http://localhost:8080/api/api-docs?url=/api/openapi.json:

Pay special attention to the small icon Swagger UI places alongside your API version, hinting at its conformance to the OpenAPI 3.0.0 specification.

Just to note here, there is nothing special about using Spring Boot. In case you are using Apache CXF inside an OSGi container (like Apache Karaf, for example), the integration with OpenAPI 3.0.0 is also available (please check out the official documentation and samples if you are interested in the subject).

It all looks easy and simple, but what about migrating to OpenAPI 3.0.0 from older versions of the Swagger specifications? Apache CXF has a powerful feature to convert the older specifications on the fly, but in general the OpenApi.Tools portal is the right place to evaluate your options.

Should you migrate to OpenAPI 3.0.0? I honestly believe you should, or at least you should try experimenting with it, but please be aware that the tooling is still not mature enough; expect a few roadblocks along the way (which you would be able to overcome by contributing patches, by the way). But undoubtedly, the future is bright!

The complete project sources are available on GitHub.

Tuesday, February 20, 2018

Run away from 'null' checks feast: doing PATCH properly with JSON Patch (RFC-6902)

Today we are going to have a conversation about REST(ful) services and APIs, more precisely, around one peculiar subject many experienced developers are struggling with. To put things into perspective, we are going to talk about web APIs, where the REST(ful) principles adhere to HTTP protocol and heavily exploit the semantics of HTTP methods and (usually but not necessarily) use JSON to represent the state.

One particular HTTP method stands out, and although its meaning sounds pretty straightforward, the implementation is far from that. Yes, we are looking at you, PATCH. So what is the problem, really? It is just an update, right? Yes, in essence the semantics of the PATCH method in the context of HTTP-based REST(ful) web services is a partial update of the resource. Now, how would you do that, Java developer? Here is where the fun begins.

Let us go over a very simple example of a book management API, modeled using the latest JSR 370: Java API for RESTful Web Services (JAX-RS 2.1) specification (which finally includes the @PATCH annotation!) and the terrific Apache CXF framework. Our resource is just a very simplistic Book class.

public class Book {
    private String title;
    private Collection<String> authors;
    private String isbn;
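    // Getters and setters are omitted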
}

How would you implement the partial update using the PATCH method? Sadly, the brute force solution, the null feast, is the clear winner here.

@PATCH
@Path("/{isbn}")
@Consumes(MediaType.APPLICATION_JSON)
public void update(@PathParam("isbn") String isbn, Book book) {
    final Book existing = bookService.find(isbn).orElseThrow(NotFoundException::new);
        
    if (book.getTitle() != null) {
        existing.setTitle(book.getTitle());
    }

    if (book.getAuthors() != null) {
        existing.setAuthors(book.getAuthors());
    }
        
    // And here it goes on and on ...
    // ...
}

In a nutshell, this is a null-guarded PUT clone. Probably, someone could claim that it kind of works and declare victory here. But hopefully, for the majority of us, this approach clearly has a lot of flaws and should never be taken. Alternatives? Yes, absolutely: RFC-6902: JSON Patch, not an official standard just yet, but it is getting there.

RFC-6902: JSON Patch drastically changes the game by expressing a sequence of operations to apply to a JSON document. To illustrate the idea in action, let us start with a simple example of changing a book's title, described in terms of the desired outcome.

{ "op": "replace", "path": "/title", "value": "..." }

Looks clean, what about adding the authors? Easy ...

{ "op": "add", "path": "/authors", "value": ["...", "..."] }

Awesome, sold, but ... implementation-wise it seems to require quite a lot of work, doesn't it? Not really, if we rely on the latest and greatest JSR 374: Java API for JSON Processing 1.1, which fully supports RFC-6902: JSON Patch. Armed with the right tools, this time let us do it right.


<dependency>
    <groupId>org.glassfish</groupId>
    <artifactId>javax.json</artifactId>
    <version>1.1.2</version>
</dependency>

Interestingly, not many are aware that Apache CXF, and in general any JAX-RS-compliant framework, closely integrates with JSON-P and supports its basic data types. In the case of Apache CXF, it is just a matter of adding the cxf-rt-rs-extension-providers module dependency:


<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-extension-providers</artifactId>
    <version>3.2.2</version>
</dependency>

And registering JsrJsonpProvider with your server factory bean, for example:

@Configuration
public class AppConfig {
    @Bean
    public Server rsServer(Bus bus, BookRestService service) {
        JAXRSServerFactoryBean endpoint = new JAXRSServerFactoryBean();
        endpoint.setBus(bus);
        endpoint.setAddress("/");
        endpoint.setServiceBean(service);
        endpoint.setProvider(new JsrJsonpProvider());
        return endpoint.create();
    }
}

With all the pieces wired together, our PATCH operation could be implemented using JSR 374: Java API for JSON Processing 1.1 alone, in just a few lines:

@Service
@Path("/catalog")
public class BookRestService {
    @Inject private BookService bookService;
    @Inject private BookConverter converter;

    @PATCH
    @Path("/{isbn}")
    @Consumes(MediaType.APPLICATION_JSON)
    public void apply(@PathParam("isbn") String isbn, JsonArray operations) {
        final Book book = bookService.find(isbn).orElseThrow(NotFoundException::new);
        final JsonPatch patch = Json.createPatch(operations);
        final JsonObject result = patch.apply(converter.toJson(book));
        bookService.update(isbn, converter.fromJson(result));
    }
}

The BookConverter performs the conversion between the Book class and its JSON representation (and vice versa), which we are doing by hand to illustrate other capabilities which JSR 374: Java API for JSON Processing 1.1 provides.

@Component
public class BookConverter {
    public Book fromJson(JsonObject json) {
        final Book book = new Book();
        book.setTitle(json.getString("title"));
        book.setIsbn(json.getString("isbn"));
        book.setAuthors(
            json
                .getJsonArray("authors")
                .stream()
                .map(value -> (JsonString)value)
                .map(JsonString::getString)
                .collect(Collectors.toList()));
        return book;
    }

    public JsonObject toJson(Book book) {
        return Json
            .createObjectBuilder()
            .add("title", book.getTitle())
            .add("isbn", book.getIsbn())
            .add("authors", Json.createArrayBuilder(book.getAuthors()))
            .build();
    }
}

To finish up, let us wrap this simple JAX-RS 2.1 web API into a beautiful Spring Boot envelope.

@SpringBootApplication
public class BookServerStarter {    
    public static void main(String[] args) {
        SpringApplication.run(BookServerStarter.class, args);
    }
}

And run it.

mvn spring-boot:run

To conclude the discussion, let us play a bit with more realistic examples by deliberately adding an incomplete book into our catalog.

$ curl -i -X POST http://localhost:19091/services/catalog -H "Content-Type: application/json" -d '{
       "title": "Microservice Architecture",
       "isbn": "978-1491956250",
       "authors": [
           "Ronnie Mitra",
           "Matt McLarty"
       ]
   }'

HTTP/1.1 201 Created
Date: Tue, 20 Feb 2018 02:30:18 GMT
Location: http://localhost:19091/services/catalog/978-1491956250
Content-Length: 0

There are a couple of inaccuracies we would like to fix in this book description, namely setting the complete title, "Microservice Architecture: Aligning Principles, Practices, and Culture", and including the missing co-authors, Irakli Nadareishvili and Mike Amundsen. With the API we developed a moment ago, it is a no-brainer.

$ curl -i -X PATCH http://localhost:19091/services/catalog/978-1491956250 -H "Content-Type: application/json" -d '[
       { "op": "add", "path": "/authors/0", "value": "Irakli Nadareishvili" },
       { "op": "add", "path": "/authors/-", "value": "Mike Amundsen" },
       { "op": "replace", "path": "/title", "value": "Microservice Architecture: Aligning Principles, Practices, and Culture" }
   ]'

HTTP/1.1 204 No Content
Date: Tue, 20 Feb 2018 02:38:48 GMT

The path reference of the first two operations may look a bit confusing, but fear not, let us clarify that. Because authors is a collection (or, in terms of JSON data types, an array), we could use the RFC-6902: JSON Patch array index notation to specify exactly where we would like the new element to be inserted. The first operation uses index '0' to denote the head position, while the second one uses the '-' placeholder to simply say "add to the end of the collection". If we retrieve the book right after the update, we should see our modifications applied exactly as we asked.

$ curl http://localhost:19091/services/catalog/978-1491956250

{
    "title": "Microservice Architecture: Aligning Principles, Practices, and Culture",
    "isbn": "978-1491956250",
    "authors": [
        "Irakli Nadareishvili",
        "Ronnie Mitra",
        "Matt McLarty",
        "Mike Amundsen"
    ]
}

Clean, simple and powerful. To be fair, there is a price to pay in the form of additional JSON manipulations (in order to apply the patch), but is it worth the effort? I believe it is ...

Next time you are going to design shiny new REST(ful) web APIs, please seriously consider RFC-6902: JSON Patch to back the PATCH implementation of your resources. I believe closer integration with JAX-RS is also coming (if not there yet) to directly support the JsonPatch class and its family.

And last but not least, in this post we have touched on the server-side implementation only, but JSR 374: Java API for JSON Processing 1.1 includes convenient client-side scaffolding as well, giving full-fledged programmatic control over the patches.

final JsonPatch patch = Json.createPatchBuilder()
    .add("/authors/0", "Irakli Nadareishvili")
    .add("/authors/-", "Mike Amundsen")
    .replace("/title", "Microservice Architecture: Aligning Principles, Practices, and Culture")
    .build();
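
Such a patch could then be shipped to the server using the standard JAX-RS client API, for example (a sketch; depending on the underlying HTTP connector, the PATCH verb may need to be explicitly enabled):

final Client client = ClientBuilder
    .newClient()
    .register(new JsrJsonpProvider());    // to serialize JSON-P types over the wire

final Response response = client
    .target("http://localhost:19091/services/catalog/{isbn}")
    .resolveTemplate("isbn", "978-1491956250")
    .request()
    .method("PATCH", Entity.json(patch.toJsonArray()));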

The complete project sources are available on GitHub.

Monday, October 30, 2017

Better late than never: SSE, or Server-Sent Events, are now in JAX-RS

Server-Sent Events (or just SSE) is a quite useful protocol which allows servers to push data to clients over HTTP. It is something our web browsers have supported for ages but, surprisingly, it was neglected by the JAX-RS specification for quite a long time. Although Jersey had an extension available for the SSE media type, the API was never formalized and, as such, was not portable to other JAX-RS implementations.

Luckily, JAX-RS 2.1, also known as JSR-370, has changed that by making SSE support, both client-side and server-side, a part of the official specification. In today's post we are going to look at how to integrate SSE support into existing Java REST(ful) web services, using the recently released version 3.2.0 of the terrific Apache CXF framework. In fact, besides the bootstrapping, there is nothing really CXF-specific: all the examples should work in any other framework which implements the JAX-RS 2.1 specification.

Without further ado, let us get started. As a significant number of Java projects these days are built on top of the awesome Spring Framework, our sample application will use Spring Boot and the Apache CXF Spring Boot integration to get us off the ground quickly. The good old buddy Apache Maven will help us as well by managing our project dependencies.


<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
    <version>1.5.8.RELEASE</version>
</dependency>

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-frontend-jaxrs</artifactId>
    <version>3.2.0</version>
</dependency>

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-spring-boot-starter-jaxrs</artifactId>
    <version>3.2.0</version>
</dependency>

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-client</artifactId>
    <version>3.2.0</version>
</dependency>

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-sse</artifactId>
    <version>3.2.0</version>
</dependency>

Under the hood, Apache CXF is using the Atmosphere framework to implement the SSE transport, so this is another dependency we have to include.


<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>atmosphere-runtime</artifactId>
    <version>2.4.14</version>
</dependency>

Relying on the Atmosphere framework introduces the need to provide an additional configuration setting, namely transportId, to ensure that the SSE-capable transport will be picked up at runtime. The relevant details could be added to the application.yml file:

cxf:
  servlet:
    init:
      transportId: http://cxf.apache.org/transports/http/sse

Great, so the foundation is there; moving on. The REST(ful) web service we are going to build will expose imaginary CPU load averages (for simplicity, randomly generated) as SSE streams. The Stats class will constitute our data model.

public class Stats {
    private long timestamp;
    private int load;

    public Stats() {
    }

    public Stats(long timestamp, int load) {
        this.timestamp = timestamp;
        this.load = load;
    }

    // Getters and setters are omitted
    ...
}

Speaking of streams, the Reactive Streams specification made its way into Java 9, and hopefully we are going to see accelerated adoption of reactive programming models by the Java community. Moreover, developing SSE-enabled REST(ful) web services becomes so much easier and more straightforward when backed by Reactive Streams. To make the case, let us onboard RxJava 2 into our sample application.


<dependency>
    <groupId>io.reactivex.rxjava2</groupId>
    <artifactId>rxjava</artifactId>
    <version>2.1.6</version>
</dependency>

This is a good moment to start with our StatsRestService class, a typical JAX-RS resource implementation. The key SSE capabilities in JAX-RS 2.1 are centered around the Sse contextual object, which could be injected like this:

@Service
@Path("/api/stats")
public class StatsRestService {
    @Context 
    public void setSse(Sse sse) {
        // Access Sse context here
    }

Out of the Sse context we could get access to two very useful abstractions: SseBroadcaster and OutboundSseEvent.Builder, for example:

private SseBroadcaster broadcaster;
private Builder builder;
    
@Context 
public void setSse(Sse sse) {
    this.broadcaster = sse.newBroadcaster();
    this.builder = sse.newEventBuilder();
}

As you may have already guessed, the OutboundSseEvent.Builder constructs instances of the OutboundSseEvent class which could be sent over the wire, while the SseBroadcaster broadcasts the same SSE stream to all the connected clients. With that being said, we could generate a stream of OutboundSseEvents and distribute it to everyone who is interested:

private static void subscribe(final SseBroadcaster broadcaster, final Builder builder) {
    Flowable
        .interval(1, TimeUnit.SECONDS)
        .zipWith(eventsStream(builder), (id, bldr) -> createSseEvent(bldr, id))
        .subscribeOn(Schedulers.single())
        .subscribe(broadcaster::broadcast);
}

private static Flowable<OutboundSseEvent.Builder> eventsStream(final Builder builder) {
    return Flowable.generate(emitter -> emitter.onNext(builder.name("stats")));
}

If you are not familiar with RxJava 2, no worries, this is what is happening here. The eventsStream method returns an effectively infinite stream of OutboundSseEvent.Builder instances for SSE events of type stats. The subscribe method is a little bit more complicated. We start off by creating a stream which emits a sequential number every second, e.g. 0, 1, 2, 3, 4, 5, 6, ... and so on. Later, we combine this stream with the one returned by the eventsStream method, essentially zipping both streams into a single one which emits a tuple of (number, OutboundSseEvent.Builder) every second. Frankly speaking, this tuple is not very useful to us, so we transform it into an instance of the OutboundSseEvent class, treating the number as the SSE event identifier:

private static final Random RANDOM = new Random();

private static OutboundSseEvent createSseEvent(OutboundSseEvent.Builder builder, long id) {
    return builder
        .id(Long.toString(id))
        .data(Stats.class, new Stats(new Date().getTime(), RANDOM.nextInt(100)))
        .mediaType(MediaType.APPLICATION_JSON_TYPE)
        .build();
}

The OutboundSseEvent may carry any payload in the data property, which will be serialized with respect to the mediaType specified, using the usual MessageBodyWriter resolution strategy. Once we get our OutboundSseEvent instance, we send it off using the SseBroadcaster::broadcast method. Please notice that we handed the control flow off to another thread using the subscribeOn operator; this is what you would usually want to do.

Good, hopefully the stream part is clear now, but how could we actually subscribe to the SSE events emitted by the SseBroadcaster? That is easier than you might think:

@GET
@Path("broadcast")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void broadcast(@Context SseEventSink sink) {
    broadcaster.register(sink);
}

And we are all set. The most important piece here is the content type being produced, which should be set to MediaType.SERVER_SENT_EVENTS. In this case, the contextual instance of the SseEventSink becomes available and could be registered with the SseBroadcaster instance.

To see our JAX-RS resource in action, we need to bootstrap the server instance using, for example, JAXRSServerFactoryBean, configuring all the necessary providers along the way. Please note that we are also explicitly specifying the transport to be used, in this case SseHttpTransportFactory.TRANSPORT_ID.

@Configuration
@EnableWebMvc
public class AppConfig extends WebMvcConfigurerAdapter {
    @Bean
    public Server rsServer(Bus bus, StatsRestService service) {
        JAXRSServerFactoryBean endpoint = new JAXRSServerFactoryBean();
        endpoint.setBus(bus);
        endpoint.setAddress("/");
        endpoint.setServiceBean(service);
        endpoint.setTransportId(SseHttpTransportFactory.TRANSPORT_ID);
        endpoint.setProvider(new JacksonJsonProvider());
        return endpoint.create();
    }
    
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry
          .addResourceHandler("/static/**")
          .addResourceLocations("classpath:/web-ui/"); 
    }
}

To close the loop, we just need to supply the runner for our Spring Boot application:

@SpringBootApplication
public class SseServerStarter {    
    public static void main(String[] args) {
        SpringApplication.run(SseServerStarter.class, args);
    }
}

Now, if we run the application and navigate to http://localhost:8080/static/broadcast.html using multiple web browsers or different tabs within the same browser, we would observe the identical stream of events charted inside all of them:

Nice, broadcasting is certainly a valid use case, but what about returning an independent SSE stream on each endpoint invocation? Easy, just use SseEventSink methods, like send and close, to manipulate the SSE stream directly.

@GET
@Path("sse")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void stats(@Context SseEventSink sink) {
    Flowable
        .interval(1, TimeUnit.SECONDS)
        .zipWith(eventsStream(builder), (id, bldr) -> createSseEvent(bldr, id))
        .subscribeOn(Schedulers.single())
        .subscribe(sink::send, ex -> {}, sink::close);
}

This time, if we run the application and navigate to http://localhost:8080/static/index.html using multiple web browsers or different tabs within the same browser, we would observe absolutely different charts:

Excellent, the server-side APIs are indeed very concise and easy to use. But what about the client side, could we consume SSE streams from Java applications? The answer is yes, absolutely. JAX-RS 2.1 outlines the client-side API as well, with SseEventSource at the heart of it.

final WebTarget target = ClientBuilder
    .newClient()
    .register(JacksonJsonProvider.class)
    .target("http://localhost:8080/services/api/stats/sse");
        
try (final SseEventSource eventSource =
            SseEventSource
                .target(target)
                .reconnectingEvery(5, TimeUnit.SECONDS)
                .build()) {

    eventSource.register(event -> {
        final Stats stats = event.readData(Stats.class, MediaType.APPLICATION_JSON_TYPE);
        System.out.println("name: " + event.getName());
        System.out.println("id: " + event.getId());
        System.out.println("comment: " + event.getComment());
        System.out.println("data: " + stats.getLoad() + ", " + stats.getTimestamp());
        System.out.println("---------------");
    });
    eventSource.open();

    // Just consume SSE events for 10 seconds
    Thread.sleep(10000); 
}     

If we run this code snippet (assuming the server is up and running as well), we would see something like this in the console (as you may recall, the data is generated randomly):

name: stats
id: 0
comment: null
data: 82, 1509376080027
---------------
name: stats
id: 1
comment: null
data: 68, 1509376081033
---------------
name: stats
id: 2
comment: null
data: 12, 1509376082028
---------------
name: stats
id: 3
comment: null
data: 5, 1509376083028
---------------

...

As we can see, the OutboundSseEvent from the server side becomes an InboundSseEvent on the client side. The client may consume any payload from the data property, which could be deserialized by specifying the expected media type, using the usual MessageBodyReader resolution strategy.

There is a lot of material squeezed into this single post. And still, there are a few more things regarding SSE and JAX-RS 2.1 which we have not covered here, for example using HttpHeaders.LAST_EVENT_ID_HEADER or configuring reconnect delays. Those could be a great topic for an upcoming post if there is interest in learning about them.

To conclude, SSE support in JAX-RS is what many of us have been awaiting for so long. Finally, it is there, so please give it a try!

The complete project sources are available on GitHub.

Wednesday, July 12, 2017

The Patterns of the Antipatterns: Architecture

We have always been looking for the best ways to architect our applications and platforms so they are maintainable, extensible, observable, adaptable and easy to evolve (this is by no means a complete list of the properties we would like to have, but it covers the foundational ones). Layered architecture, component-based architecture, hexagonal architecture, microservice architecture (and many others) advocate great design principles and guidelines to achieve these goals.

For example, layered architecture suggests a clear separation between different layers (persistence, services, caching, presentation, ...) with strict rules imposed on how the layers may talk to each other. Component-based architecture, quite widespread in the world of UI development but not limited to it, introduces components (which internally may be built on top of layered architecture or another architectural style) as the key building blocks of the application or system.

In turn, hexagonal architecture goes even further by introducing the concepts of ports and adapters around the application core. Last but not least, microservice architecture, the hottest architectural style these days, advocates to structure the application or system as a set of loosely coupled, ideally small, self-sufficient services which collaborate with each other.

Each of the aforementioned architectural styles (as well as others we haven't mentioned) has its own strengths and weaknesses and a context where it shines the most. But there are two kinds of architectures which beat them all, and those are the ones we are going to discuss right away.

Proximity Architecture

There is only one principle this architectural style offers: use anything you want anywhere you need it. Do you need access to a service from a data transfer object? No problem, just do it! Maybe your RESTful resource needs access to persistence services? Not an issue, go for it, because by and large, why bother? It is just a pile of classes (usually within the same application): use static fields or methods, dependency injection or any other technique your language / frameworks provide. In the end, if the compiler is happy, what is the problem? Right?

No, it is not right ... For sure, you have heard funny terms such as "big ball of mud" or "spaghetti code". Those are the typical outcomes when proximity architecture drives the show.

There are many reasons why proximity architecture emerges. Those reasons range from the constant urge to push features out, to a lack of knowledge and/or experience in software craftsmanship, or, in the worst case, an unethical and careless approach to the software engineering discipline. If you happen to have startup experience, or have worked for one of those large companies that still claim to be a startup to justify their chaos, I'm sure you have seen this form of architecture or one of its variations. Are they all doomed?

Well, I would argue, no, not really, but fleeing the claws of proximity architecture solely depends on the trade-offs you are going to make. For developers, it is hard or even impossible to push back on the timelines set for delivering new features (unless you are exceptionally lucky and your manager understands what technical debt is and how to keep it under control). You have to, and you will, take shortcuts. However, it is enormously important to strive for the right balance between pouring in hacky code (on top of a bunch of other hacks) and going with a quick, if not the best, option, while keeping in mind how much harm it could do to the overall application architecture (and how hard it would be to refactor / improve / reverse later on). Undoubtedly, seasoned software developers are able to cope with that.

I know, it sounds too vague to be actionable advice, but every company is unique; there is no magic universal rule or recipe to stop proximity architecture from poisoning your applications. One thing is clear though: if you let it in and do nothing, you are going to end up with an unmanageable codebase, and the only route to take would be to rewrite everything from scratch.

Micromonolith Architecture

Microservices, microservices, microservices ... everyone talks about them ... You hear from left and right that if you don't do microservices, then you are stuck in the stone age. And many actually truly believe that all their problems and issues could be solved with microservices.

In many regards, microservice architecture could put life into stagnating projects, assuming you are ready to embrace it across the whole organization, culturally and technically. This is a serious long-term commitment, and as such the changes won't happen overnight. But so many organizations fail to see the new beast rising here: micromonolith architecture in microservice clothing.

Here is what you could be sold as microservice architecture. Many organizations have ended up with a monolith and have no choice but to deal with it. Quality is terrible, productivity is going down, releasing new features takes months (or even years) ... and throwing more people at it does not help at all. So everyone tries to escape the monolith, whatever it takes, building standalone applications or services on the side; luckily, the terrific Spring Boot makes it as easy as falling off a log.

At first glance, it doesn't look bad; after all, they are calling it a microservice, and a bunch of things are calling each other, right? Except that this is not right at all. Applications and services that thrive in this kind of unhealthy ecosystem often do not own any data. They do not have sufficient knowledge to function independently, and they do not belong to a concrete domain. These kinds of services need to talk to the monolith to perform any useful function, and in the worst-case scenario, they share a database with the monolith and become mere chatty proxies.

Why is that? Well, because in most cases it is very hard to migrate a large monolithic application to microservice architecture (the true one). And along with the migration, the organization should transform itself to become a fertile ground for cultivating microservices, naturally. Not everyone is brave enough to pull it off, but the good news is that it is feasible. And here is why.

While building the monolith, the organization (hopefully) becomes an expert in its domain. There are tons of knowledge developed around the product, and there are well-defined, well-understood use cases and flows; domain-driven design might be of great help to formalize and capture all that. Once there is a solid understanding of what should be built, it is time to start cutting pieces off the monolith. It is going to be a bumpy ride for sure, and probably you won't be able to get from point A to point B directly (refactorings? splitting the monolith into modules?), not to forget about evolving your testing practices (consumer contract tests? component tests? e2e tests?) along the way. But the rewards are worth the effort.

Microservice architecture is a great one; it opens up countless opportunities and, when done right, could bring back the joy of development and serve as a solid foundation for the organization's software stacks, current and future. But neglecting its core principles will pave the road to frustration and chaos.

Monday, June 19, 2017

The Patterns of the Antipatterns: Design / Testing

It's been a while since I started my professional career as a software developer. Over the years I have worked for many companies, on a whole bunch of different projects, trying my best to deliver them and be proud of the job. However, drifting from company to company, from project to project, you see the same bad design decisions made over and over again, no matter how old the codebase is or how many people have touched it. They are so widely present that they truly become patterns themselves, hence the title of this post: the patterns of the antipatterns.

I am definitely not the first one to identify them and certainly not the last one either (so let all the credit go to the original sources, if any exist). They all come from real projects, primarily Java ones, though other details should stay undisclosed. With that ...

Code first, no time to think

This hardly sounds like the name of an antipattern, but I believe it is. We are all developers and, unsurprisingly, love to code. Whenever we get some feature or idea to implement, we often just start coding it, without thinking much about the design or even looking around to see whether we could borrow / reuse some functionality which is already there. As such, we reinvent the wheel all the time, hopefully enjoying at least the coding part ...

I am absolutely convinced that spending some time just thinking about the high-level design, incarnating it in terms of interfaces, traits or protocols (whatever your language of choice is offering), is a must-do exercise. It won't take more time, really, but the results are rewarding: a beautiful, well-thought-out solution at the end.

TDD is a terrific set of practices to help and guide you through. While you are developing the test cases, you iterate over and over, making changes and with each step getting closer to the right implementation. Please, adopt it in each and every project you are part of.

Occasional Doer

Good, enough philosophy, now we are moving on to real antipatterns. Undoubtedly, many of you have witnessed such pieces of code:

public void setSomething(Object something) {
    // Silently ignores the new value when heavensAreFalling is true.
    if (!heavensAreFalling) {
        this.something = something;
    }
}

Essentially, you call the method to set some value, and surely you expect the value to be set once the invocation completes. However, due to the presence of some logic inside the method implementation, it may actually do nothing, silently ...

This is an example of really bad design. Instead, such interactions could be modeled using, for example, the State pattern, or at least by throwing an exception saying that the operation cannot be completed due to internal constraints. Yes, it may require a bit more code to be written, but at least the expectations will be met.
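
As a minimal sketch of the latter option (reusing the heavensAreFalling flag from the example above), the setter could fail loudly instead of silently doing nothing:

public void setSomething(Object something) {
    if (heavensAreFalling) {
        // Surface the violated constraint to the caller instead of hiding it.
        throw new IllegalStateException("Cannot set the value: heavens are falling");
    }
    this.something = something;
}

Now the caller either observes the new value or gets a clear signal why it could not be set.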

The Universe

Often there is a class within the application which references (or is referenced by) mostly every other class of the application. For example, all classes in the project must inherit from some base class:

public class MyService extends MyBase {
  ...
}
public class MyDao extends MyBase {
  ...
}

In most cases, this is not a good idea. Not only does it create a hard dependency on this single class, but also on every dependency this guy is using internally. You may argue that in Java every single class is implicitly inherited from Object. Indeed, but this is part of the language specification and has nothing to do with application logic, no need to mimic that.

Another extreme is to have a class which serves as a central brain of the application. It knows about every service / dao / ... and provides accessors (most of the time, static) so anyone can just turn to it and ask for anything, for example:

public class Services {
    public static MyService1 getMyService1() {...}
    public static MyService2 getMyService2() {...}
    public static MyService3 getMyService3() {...}
}

Those are universes: eventually they leak into every class of the application (it is easy to call a static method, right?) and couple everything together. In a more or less large codebase, getting rid of such universes is exceptionally hard.

To prevent such things from poisoning your projects, use the dependency injection pattern (implemented by Spring Framework, Dagger, Guice, CDI, HK2, ...), or pass the necessary dependencies through constructors or method arguments, as the sketch below shows. It will actually help you to see the smells early on and address them even before they become a problem.
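
Here is a minimal sketch of the constructor-based approach (OrderService and doSomething are made up for the illustration, MyService1 is the class from the example above): the dependency becomes explicit, visible in the signature and easy to replace in tests.

public class OrderService {
    private final MyService1 myService1;

    // The dependency is passed in explicitly instead of being pulled
    // from a static universe like Services.getMyService1().
    public OrderService(MyService1 myService1) {
        this.myService1 = myService1;
    }

    public void placeOrder() {
        myService1.doSomething();
    }
}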

The Onionated Inheritance

This is a really fun and scary one, very often present in projects which expose and implement REST(ful) web services. Let us suppose you have a Customer class, with quite a few different properties, for example:

public class Customer {
    private String id;
    private String firstName;
    private String lastName;
    private Address address;
    private Company company;
    ...
}

Now, you have been asked to create an API (or expose it through other means) which finds the customer but returns only the id, firstName and lastName. Sounds straightforward, doesn't it? Let us do it:

public class CustomerWithIdAndNames {
    private String id;
    private String firstName;
    private String lastName;
}

Looks pretty neat, right? Everyone is happy, but the next day you get to work on another feature, where you need to design an API which returns id, firstName, lastName and also company. Sounds pretty easy: we already have CustomerWithIdAndNames, we only need to enrich it with a company. A perfect job for inheritance, right?

public class CustomerWithIdAndNamesAndCompany extends CustomerWithIdAndNames {
    private Company company;
}

It works! But after a week another request comes in, where a new API also needs to expose the address property ... anyway, you get the idea. At the end you end up with a dozen classes which extend each other, adding properties here and there (like onions, hence the antipattern name) to fulfill the API contracts.

Yet another road to hell ... There are quite a few options here; the simplest one is JSON Views (here are some examples using Jackson), where the underlying class stays the same but different views of it could be returned. In case you really care about not fetching the data you don't need, another option is GraphQL, which we have covered last time. Essentially, the message here is: don't create such onionated hierarchies, use a single representation, and use different techniques to assemble it and fetch the necessary pieces efficiently.
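
As a minimal sketch of the JSON Views approach with Jackson (the view names IdAndNames and WithCompany are made up for this example), a single Customer representation is annotated with views, and each API simply picks the view it needs:

import com.fasterxml.jackson.annotation.JsonView;
import com.fasterxml.jackson.databind.ObjectMapper;

public class Customer {
    // Marker interfaces which define the views.
    public interface IdAndNames {}
    public interface WithCompany extends IdAndNames {}

    @JsonView(IdAndNames.class) private String id;
    @JsonView(IdAndNames.class) private String firstName;
    @JsonView(IdAndNames.class) private String lastName;
    @JsonView(WithCompany.class) private Company company;

    // getters and setters omitted
}

// Serializing only id, firstName and lastName:
String json = new ObjectMapper()
    .writerWithView(Customer.IdAndNames.class)
    .writeValueAsString(customer);

The inheritance moves into the lightweight view markers, while the data class stays single and flat.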

IDD: if-driven development

Most real-world projects are internally built using a pretty complex set of application and business rules, cemented by applying different layers of extensibility and customization (if needed). In most cases, the implementation choice is pretty unpretentious and the logic is driven by conditional statements, for example:

int price = ...;
if (calculateTaxes) {
    price += taxes;
}

With time the business rules evolve, and so do the conditional expressions they are embodied in, becoming real monsters at the end:

int price = ...;
if (calculateTaxes && ("US".equals(country) || "GB".equals(country)) &&
    discount == null || (discount != null && discount.getType() == TAXES)) {
    price += taxes;
}

You see where I am going with that: the project is built on top of IDD practices, if-driven development. Not only does the code become fragile, prone to condition errors and very hard to follow, it is very scary to change as well! To be fair, you also need to test all possible combinations of such if branches to ensure they make sense and the right branch is picked (I bet no one is actually doing that, because this is a gigantic amount of tests and effort)!

In many cases feature toggles could be an answer. But if you are used to writing such code, please take some time and read books about design patterns, test-driven development and coding practices.

There are so many great books on these topics; they are a very good starting point and a highly rewarding investment. Much better alternatives to if statements are going to fill your mind.
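
Coming back to the if statements themselves, here is a minimal sketch (all names are made up for the illustration) of taming such conditions with a small strategy-like abstraction: every pricing rule decides for itself whether it applies, so adding a new rule does not grow the monster condition.

import java.util.List;
import java.util.Set;

interface Order {
    boolean calculateTaxes();
    String country();
    int taxes();
    int basePrice();
}

// Each rule knows when it applies and how it adjusts the price.
interface PricingRule {
    boolean applies(Order order);
    int adjust(int price, Order order);
}

class TaxRule implements PricingRule {
    private static final Set<String> TAXED_COUNTRIES = Set.of("US", "GB");

    @Override
    public boolean applies(Order order) {
        return order.calculateTaxes() && TAXED_COUNTRIES.contains(order.country());
    }

    @Override
    public int adjust(int price, Order order) {
        return price + order.taxes();
    }
}

class PriceCalculator {
    private final List<PricingRule> rules;

    PriceCalculator(List<PricingRule> rules) {
        this.rules = rules;
    }

    int price(Order order) {
        int price = order.basePrice();
        for (PricingRule rule : rules) {
            if (rule.applies(order)) {
                price = rule.adjust(price, order);
            }
        }
        return price;
    }
}

New rules (discounts, country-specific taxes, ...) become new PricingRule implementations, each testable in isolation, instead of yet another clause in one ever-growing if.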

Test Lottery

When you hear something like "All projects in our organization are agile and use TDD practices", an idealistic picture comes to mind, and you hope everything is done right, by the book. In reality, in most projects things are very far from that. Hopefully, at least there are some test suites you could trust. Or could you?

Please welcome the test lottery: the kingdom of flaky tests. Have you ever seen builds with random test failures? Like on every consecutive build (without any changes introduced) some tests suddenly pass but others start to fail? And maybe one build out of ten turns green (jackpot!) as the stars finally get aligned properly? If no one cares, it becomes the new normal, and every new member who joins the team is told to ignore those failures: "they have been failing for ages, screw them".

Tests are as important as the mainline code you push into production. They need to be maintained, refactored and kept clean. Keep all your builds green all the time; if you see some flakiness or random failures, address them right away. It is not easy, but in many cases you may actually run into real bugs! You have to trust your test suites, otherwise why do you need them at all?

Test Framework Factory

This guy is an outstanding one. It happens so often, it feels like every organization is just obligated to create its own test framework, usually on top of existing ones. At first it sounds like a good idea and, arguably, it really is. Unfortunately, in 99% of cases the outcome is yet another monstrous framework no one wants to use but is forced to, because "it took 15 man-years to develop and we cannot throw it away after that". Everyone struggles, productivity goes down at the speed of light, and quality does not improve at all.

Think twice before taking this route. The goal of simplifying the testing of the complex applications the organization is working on is certainly a good one. But don't fall into the trap of throwing people at it, who are going to spend a few months or even years working on the "perfect & mighty test framework" in isolation, maybe doing a great job overall, but not solving any real problems, instead just creating new ones. If you decide to embark on this train anyway, try hard to stay focused, address the pain points at their core, prototype, always ask for feedback, listen and iterate ... And keep it stable, easy to use, helpful, trustworthy and as lightweight as possible.

Conclusions

You cannot build everything right from the beginning. Sometimes our decisions are impacted by pressure and deadlines, or by factors outside of our control. But it does not mean that we should let things stay like that forever, getting worse and worse every single day. Please, don't ...

This post is a mash-up of ideas from many people, terrific former teammates and awesome friends, and the credits go to all of them. If you happen to like it, you may be interested in checking out the upcoming post, where we are going to talk about architectural antipatterns.