Tuesday, November 6, 2018

Building Enterprise Java applications, the Spring way

I think it is fair to say that Java EE has gained a pretty bad reputation among Java developers. Despite the fact that it has certainly improved on all fronts over the years, and even changed its home to the Eclipse Foundation to become Jakarta EE, its bitter taste is still quite strong. On the other side we have the Spring Framework (or, to reflect reality better, a full-fledged Spring Platform): a brilliant, lightweight, fast, innovative and hyper-productive Java EE replacement. So why bother with Java EE at all?

We are going to answer this question by showing how easy it is to build modern Java applications using mostly Java EE specifications. And the key ingredient to success here is Eclipse Microprofile: enterprise Java in the age of microservices.

The application we are going to build is a RESTful web API to manage people, as simple as that. The standard way to build RESTful web services in Java is JAX-RS 2.1 (JSR-370). Consequently, CDI 2.0 (JSR-365) is going to take care of dependency injection, whereas JPA 2.0 (JSR-317) is going to cover the data access layer. And certainly, Bean Validation 2.0 (JSR-380) is going to help us deal with input verification.

The only non-Java EE specification we will be relying on is OpenAPI v3.0, which helps to provide a usable description of our RESTful web APIs. With that, let us get started with the PersonEntity domain model (omitting getters and setters as not very relevant details):

@Entity
@Table(name = "people")
public class PersonEntity {
    @Id @Column(length = 256) 
    private String email;

    @Column(nullable = false, length = 256, name = "first_name")
    private String firstName;

    @Column(nullable = false, length = 256, name = "last_name")
    private String lastName;

    @Version
    private Long version;
}
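
The constructors are omitted as well; since the repository shown below instantiates the entity through an all-arguments constructor, something along these lines is assumed (plus the no-argument constructor JPA requires):

public PersonEntity() {
    // required by the JPA specification
}

public PersonEntity(String email, String firstName, String lastName) {
    this.email = email;
    this.firstName = firstName;
    this.lastName = lastName;
}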

It just has the absolute minimum set of properties. The JPA repository is pretty straightforward and implements a typical set of CRUD methods.

@ApplicationScoped
@EntityManagerConfig(qualifier = PeopleDb.class)
public class PeopleJpaRepository implements PeopleRepository {
    @Inject @PeopleDb private EntityManager em;

    @Override
    @Transactional(readOnly = true)
    public Optional<PersonEntity> findByEmail(String email) {
        final CriteriaBuilder cb = em.getCriteriaBuilder();
    
        final CriteriaQuery<PersonEntity> query = cb.createQuery(PersonEntity.class);
        final Root<PersonEntity> root = query.from(PersonEntity.class);
        query.where(cb.equal(root.get(PersonEntity_.email), email));
        
        try {
            final PersonEntity entity = em.createQuery(query).getSingleResult();
            return Optional.of(entity);
        } catch (final NoResultException ex) {
            return Optional.empty();
        }
    }

    @Override
    @Transactional
    public PersonEntity saveOrUpdate(String email, String firstName, String lastName) {
        final PersonEntity entity = new PersonEntity(email, firstName, lastName);
        em.persist(entity);
        return entity;
    }

    @Override
    @Transactional(readOnly = true)
    public Collection<PersonEntity> findAll() {
        final CriteriaBuilder cb = em.getCriteriaBuilder();
        final CriteriaQuery<PersonEntity> query = cb.createQuery(PersonEntity.class);
        query.from(PersonEntity.class);
        return em.createQuery(query).getResultList();
    }

    @Override
    @Transactional
    public Optional<PersonEntity> deleteByEmail(String email) {
        return findByEmail(email)
            .map(entity -> {
                em.remove(entity);
                return entity;
            });
    }
}

The transaction management (namely, the @Transactional annotation) needs some explanation. In a typical Java EE application, the container runtime is responsible for managing the transactions. Since we don't want to onboard an application container but would rather stay lean, we could have used the EntityManager to start / commit / rollback transactions. It would certainly work, but it would pollute the code with boilerplate. Arguably, the better option is to use the Apache DeltaSpike CDI extensions for declarative transaction management (this is where the @Transactional and @EntityManagerConfig annotations come from). The snippet below illustrates how it is integrated.

@ApplicationScoped
public class PersistenceConfig {
    @PersistenceUnit(unitName = "peopledb")
    private EntityManagerFactory entityManagerFactory;

    @Produces @PeopleDb @TransactionScoped
    public EntityManager create() {
        return this.entityManagerFactory.createEntityManager();
    }

    public void dispose(@Disposes @PeopleDb EntityManager entityManager) {
        if (entityManager.isOpen()) {
            entityManager.close();
        }
    }
}
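
The @PeopleDb qualifier itself is not shown in the post: it is nothing more than a plain CDI qualifier annotation, along these lines:

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
public @interface PeopleDb {
}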

Awesome, the hardest part is already behind us! The Person data transfer object and the service layer are coming next.

public class Person {
    @NotNull private String email;
    @NotNull private String firstName;
    @NotNull private String lastName;
}

Honestly, for the sake of keeping the example application as small as possible, we could skip the service layer altogether and go to the repository directly. But this is, in general, not a very good practice, so let us introduce PeopleServiceImpl anyway.

@ApplicationScoped
public class PeopleServiceImpl implements PeopleService {
    @Inject private PeopleRepository repository;

    @Override
    public Optional<Person> findByEmail(String email) {
        return repository
            .findByEmail(email)
            .map(this::toPerson);
    }

    @Override
    public Person add(Person person) {
        return toPerson(repository.saveOrUpdate(person.getEmail(), person.getFirstName(), person.getLastName()));
    }

    @Override
    public Collection<Person> getAll() {
        return repository
            .findAll()
            .stream()
            .map(this::toPerson)
            .collect(Collectors.toList());
    }

    @Override
    public Optional<Person> remove(String email) {
        return repository
            .deleteByEmail(email)
            .map(this::toPerson);
    }
    
    private Person toPerson(PersonEntity entity) {
        return new Person(entity.getEmail(), entity.getFirstName(), entity.getLastName());
    }
}

The only part left is the definition of the JAX-RS application and resources.

@Dependent
@ApplicationPath("api")
@OpenAPIDefinition(
    info = @Info(
        title = "People Management Web APIs", 
        version = "1.0.0", 
        license = @License(
            name = "Apache License", 
            url = "https://www.apache.org/licenses/LICENSE-2.0"
        )
    )
)
public class PeopleApplication extends Application {
}

Not much to say here, as simple as it could possibly be. The JAX-RS resource implementation is a bit more interesting though (the OpenAPI annotations take most of the space).

@ApplicationScoped
@Path( "/people" ) 
@Tag(name = "people")
public class PeopleResource {
    @Inject private PeopleService service;
    
    @Produces(MediaType.APPLICATION_JSON)
    @GET
    @Operation(
        description = "List all people", 
        responses = {
            @ApiResponse(
                content = @Content(array = @ArraySchema(schema = @Schema(implementation = Person.class))),
                responseCode = "200"
            )
        }
    )
    public Collection<Person> getPeople() {
        return service.getAll();
    }

    @Produces(MediaType.APPLICATION_JSON)
    @Path("/{email}")
    @GET
    @Operation(
        description = "Find person by e-mail", 
        responses = {
            @ApiResponse(
                content = @Content(schema = @Schema(implementation = Person.class)), 
                responseCode = "200"
            ),
            @ApiResponse(
                responseCode = "404", 
                description = "Person with such e-mail doesn't exists"
            )
        }
    )
    public Person findPerson(@Parameter(description = "E-Mail address to lookup for", required = true) @PathParam("email") final String email) {
        return service
            .findByEmail(email)
            .orElseThrow(() -> new NotFoundException("Person with such e-mail doesn't exist"));
    }

    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    @POST
    @Operation(
        description = "Create new person",
        requestBody = @RequestBody(
            content = @Content(schema = @Schema(implementation = Person.class))
        ), 
        responses = {
            @ApiResponse(
                 content = @Content(schema = @Schema(implementation = Person.class)),
                 headers = @Header(name = "Location"),
                 responseCode = "201"
            ),
            @ApiResponse(
                responseCode = "409", 
                description = "Person with such e-mail already exists"
            )
        }
    )
    public Response addPerson(@Context final UriInfo uriInfo,
            @Parameter(description = "Person", required = true) @Valid Person payload) {

        final Person person = service.add(payload);
        return Response
             .created(uriInfo.getRequestUriBuilder().path(person.getEmail()).build())
             .entity(person)
             .build();
    }
    
    @Path("/{email}")
    @DELETE
    @Operation(
        description = "Delete existing person",
        responses = {
            @ApiResponse(
                responseCode = "204",
                description = "Person has been deleted"
            ),
            @ApiResponse(
                responseCode = "404", 
                description = "Person with such e-mail doesn't exists"
            )
        }
    )
    public Response deletePerson(@Parameter(description = "E-Mail address to lookup for", required = true ) @PathParam("email") final String email) {
        return service
            .remove(email)
            .map(r -> Response.noContent().build())
            .orElseThrow(() -> new NotFoundException("Person with such e-mail doesn't exist"));
    }
}

And with that, we are done! But how could we assemble and wire all these pieces together? Here is the time for Microprofile to enter the stage. There are many implementations to choose from; the one we are going to use in this post is Project Hammock. The only thing we have to do is to specify the CDI 2.0, JAX-RS 2.1 and JPA 2.0 implementations we would like to use, which translates to Weld, Apache CXF and OpenJPA respectively (expressed through the Project Hammock dependencies). Let us take a look at the Apache Maven pom.xml file.

<properties>
    <deltaspike.version>1.8.1</deltaspike.version>
    <hammock.version>2.1</hammock.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.deltaspike.modules</groupId>
        <artifactId>deltaspike-jpa-module-api</artifactId>
        <version>${deltaspike.version}</version>
        <scope>compile</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.deltaspike.modules</groupId>
        <artifactId>deltaspike-jpa-module-impl</artifactId>
        <version>${deltaspike.version}</version>
        <scope>runtime</scope>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>dist-microprofile</artifactId>
        <version>${hammock.version}</version>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>jpa-openjpa</artifactId>
        <version>${hammock.version}</version>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>util-beanvalidation</artifactId>
        <version>${hammock.version}</version>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>util-flyway</artifactId>
        <version>${hammock.version}</version>
    </dependency>

    <dependency>
        <groupId>ws.ament.hammock</groupId>
        <artifactId>swagger</artifactId>
        <version>${hammock.version}</version>
    </dependency>
</dependencies>

Without further ado, let us build and run the application right away (if you are curious which relational datastore the application is using, it is H2, with the database configured in-memory; a minimal persistence.xml sketch for it follows).
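
Something along these lines, where the persistence unit name matches the @PersistenceUnit reference above (the entity class package and the exact OpenJPA / H2 properties are assumptions; the real project may wire the datasource differently):

<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
    <persistence-unit name="peopledb" transaction-type="RESOURCE_LOCAL">
        <class>com.example.people.db.PersonEntity</class>
        <properties>
            <property name="openjpa.ConnectionURL" value="jdbc:h2:mem:peopledb"/>
            <property name="openjpa.ConnectionDriverName" value="org.h2.Driver"/>
        </properties>
    </persistence-unit>
</persistence>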

> mvn clean package
> java -jar target/eclipse-microprofile-hammock-0.0.1-SNAPSHOT-capsule.jar 

The best way to ensure that our people management RESTful web APIs are fully functional is to send a couple of requests to them:

>  curl -X POST http://localhost:10900/api/people -H "Content-Type: application/json" \
     -d '{"email": "a@b.com", "firstName": "John", "lastName": "Smith"}'

HTTP/1.1 201 Created
Location: http://localhost:10900/api/people/a@b.com
Content-Type: application/json

{
    "firstName":"John","
    "lastName":"Smith",
    "email":"a@b.com"
}

What about making sure the Bean Validation is working fine? To trigger it, let us send a partially prepared request.

>  curl -X POST http://localhost:10900/api/people -H "Content-Type: application/json" \
     -d '{"firstName": "John", "lastName": "Smith"}'

HTTP/1.1 400 Bad Request
Content-Length: 0

The OpenAPI specification and pre-bundled Swagger UI distribution are also available at http://localhost:10900/index.html?url=http://localhost:10900/api/openapi.json.

So far so good, but frankly speaking, we have not talked about testing our application at all. How hard would it be to come up with an integration test for, let's say, the scenario of adding a person? It turns out that the frameworks around testing Java EE applications have improved a lot. In particular, it is exceptionally easy to accomplish with the Arquillian test framework (along with beloved JUnit and REST Assured). One real example is worth a thousand words.

@RunWith(Arquillian.class)
@EnableRandomWebServerPort
public class PeopleApiTest {
    @ArquillianResource private URI uri;
    
    @Deployment
    public static JavaArchive createArchive() {
        return ShrinkWrap
            .create(JavaArchive.class)
            .addClasses(PeopleResource.class, PeopleApplication.class)
            .addClasses(PeopleServiceImpl.class, PeopleJpaRepository.class, PersistenceConfig.class)
            .addPackages(true, "org.apache.deltaspike");
    }
            
    @Test
    public void shouldAddNewPerson() throws Exception {
        final Person person = new Person("a@b.com", "John", "Smith");
        
        given()
            .contentType(ContentType.JSON)
            .body(person)
            .post(uri + "/api/people")
            .then()
            .assertThat()
            .statusCode(201)
            .body("email", equalTo("a@b.com"))
            .body("firstName", equalTo("John"))
            .body("lastName", equalTo("Smith"));
    }
}

Amazing, isn't it? It is actually a lot of fun to develop modern Java EE applications and, someone may say, the Spring way! And in fact, the parallels with Spring are not coincidental, since it was inspiring, is inspiring and undoubtedly is going to continue to inspire a lot of innovations in the Java EE ecosystem.

What does the future look like? I think, by all means, bright, both for Jakarta EE and Eclipse Microprofile. The latter has just reached version 2.0, with tons of new specifications included, oriented to address the needs of microservice architectures. It is awesome to witness these transformations happening.

The complete source of the project is available on Github.

Sunday, August 26, 2018

Embracing modular Java platform: Apache CXF on Java 10

It's been almost a year since the Java 9 release finally delivered Project Jigsaw to the masses. It was a long, long journey, but we are there, so what has changed? This is a very good question, and the answer to it is not obvious or straightforward.

By and large, Project Jigsaw is a disruptive change, and there are many reasons why. Although almost all of our existing applications are going to run on Java 10 (to be replaced by JDK 11 very soon) with minimal or no changes, there are deep and profound implications Project Jigsaw brings to Java developers: embrace modular applications the Java platform way.

With the myriads of awesome frameworks and libraries out there, it will surely take time, a lot of time, to convert them to Java modules (many will not ever make it). This path is thorny, but certain things are already possible even today. In this rather short post we are going to learn how to use the terrific Apache CXF project to build JAX-RS 2.1 web APIs in a truly modular fashion, using the latest JDK 10.

Since the 3.2.5 release, all Apache CXF artifacts have their manifests enriched with the Automatic-Module-Name directive. This does not make them full-fledged modules, but it is a first step in the right direction.
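
For example, the manifest of the cxf-rt-frontend-jaxrs artifact carries the following entry, matching the module name we will require in module-info.java below:

Automatic-Module-Name: org.apache.cxf.frontend.jaxrs

So let us get started ...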

If you use Apache Maven as the build tool of choice, not much has changed here; the dependencies are declared the same way as before.

<dependencies>
    <dependency>
        <groupId>org.apache.cxf</groupId>
        <artifactId>cxf-rt-frontend-jaxrs</artifactId>
        <version>3.2.5</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.jaxrs</groupId>
        <artifactId>jackson-jaxrs-json-provider</artifactId>
        <version>2.9.6</version>
    </dependency>

    <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-server</artifactId>
        <version>9.4.11.v20180605</version>
    </dependency>

    <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-webapp</artifactId>
        <version>9.4.11.v20180605</version>
    </dependency>
</dependencies>

The uber-jar or fat-jar packaging is not really applicable to modular Java applications, so we have to collect the modules ourselves, for example in the target/modules folder.

<plugin>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.1.0</version>
    <configuration>
        <outputDirectory>${project.build.directory}/modules</outputDirectory>
    </configuration>
</plugin>

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>3.1.1</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <outputDirectory>${project.build.directory}/modules</outputDirectory>
                <includeScope>runtime</includeScope>
            </configuration>
        </execution>
    </executions>
</plugin>

All good; the next step is to create the module-info.java and list there the name of our module (com.example.cxf in this case) and, among other things, all the required modules it needs in order to be functional.

module com.example.cxf {
    exports com.example.rest;
    
    requires org.apache.cxf.frontend.jaxrs;
    requires org.apache.cxf.transport.http;
    requires com.fasterxml.jackson.jaxrs.json;
    
    requires transitive java.ws.rs;
    
    requires javax.servlet.api;
    requires jetty.server;
    requires jetty.servlet;
    requires jetty.util;
    
    requires java.xml.bind;
}

As you may spot right away, org.apache.cxf.frontend.jaxrs and org.apache.cxf.transport.http come from the Apache CXF distribution (the complete list is available in the documentation), whereas java.ws.rs is the JAX-RS 2.1 API module. After that, we can proceed with implementing our JAX-RS resources the same way we did before.

@Path("/api/people")
public class PeopleRestService {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Collection<Person> getAll() {
        return List.of(new Person("John", "Smith", "john.smith@somewhere.com"));
    }
}

This looks easy; how about adding some spicy sauce, like server-sent events (SSE) and RxJava, for example? Let us see how exceptionally easy it is, starting from the dependencies.

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-sse</artifactId>
    <version>3.2.5</version>
</dependency>

<dependency>
    <groupId>io.reactivex.rxjava2</groupId>
    <artifactId>rxjava</artifactId>
    <version>2.1.14</version>
</dependency>

Also, we should not forget to update our module-info.java by adding the requires directives for these new modules.

module com.example.cxf {
    ...
    requires org.apache.cxf.rs.sse;
    requires io.reactivex.rxjava2;
    requires transitive org.reactivestreams;
    ...

}

In order to keep things simple, our SSE endpoint will just broadcast every new person added through the API. Here is the implementation snippet which does that.

private SseBroadcaster broadcaster;
private Builder builder;
private PublishSubject<Person> publisher;
    
public PeopleRestService() {
    publisher = PublishSubject.create();
}

@Context 
public void setSse(Sse sse) {
    this.broadcaster = sse.newBroadcaster();
    this.builder = sse.newEventBuilder();
        
    publisher
        .subscribeOn(Schedulers.single())
        .map(person -> createEvent(builder, person))
        .subscribe(broadcaster::broadcast);
}

@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response add(@Context UriInfo uriInfo, Person payload) {
    publisher.onNext(payload);
        
    return Response
        .created(
            uriInfo
                .getRequestUriBuilder()
                .path(payload.getEmail())
                .build())
        .entity(payload)
        .build();
}
    
@GET
@Path("/sse")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void people(@Context SseEventSink sink) {
    broadcaster.register(sink);
}
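
The createEvent helper used in the subscription above is not shown in the snippet; a minimal sketch of it might look like this (the event name and the JSON media type are assumptions):

private static OutboundSseEvent createEvent(final Builder builder, final Person person) {
    return builder
        .name("person")
        .data(Person.class, person)
        .mediaType(MediaType.APPLICATION_JSON_TYPE)
        .build();
}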

Now when we build it:

      mvn clean package

And run it using module path:

      java --add-modules java.xml.bind \
           --module-path target/modules \
           --module com.example.cxf/com.example.Starter
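
The com.example.Starter entry point is not shown in the post either; a minimal sketch of it, assuming an embedded Jetty server fronting CXF's non-Spring JAX-RS servlet (the exact wiring in the real project may differ), could look like this:

package com.example;

import org.apache.cxf.jaxrs.servlet.CXFNonSpringJaxrsServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

public class Starter {
    public static void main(String[] args) throws Exception {
        // the port matches the requests below
        final Server server = new Server(8686);

        // register the CXF servlet, pointing it at our resource and JSON provider
        final ServletHolder holder = new ServletHolder(new CXFNonSpringJaxrsServlet());
        holder.setInitParameter("jaxrs.serviceClasses", com.example.rest.PeopleRestService.class.getName());
        holder.setInitParameter("jaxrs.providers", JacksonJsonProvider.class.getName());

        final ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        context.addServlet(holder, "/*");

        server.setHandler(context);
        server.start();
        server.join();
    }
}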

We should be able to give our JAX-RS API a test drive. The simplest way to make sure things work as expected is to navigate in Google Chrome to the SSE endpoint at http://localhost:8686/api/people/sse and add some random people through POST requests, using the old buddy curl from the command line:

      curl -X POST http://localhost:8686/api/people \
           -d '{"email": "john@smith.com", "firstName": "John", "lastName": "Smith"}' \
           -H "Content-Type: application/json"
      curl -X POST http://localhost:8686/api/people \
           -d '{"email": "tom@tommyknocker.com", "firstName": "Tom", "lastName": "Tommyknocker"}' \
           -H "Content-Type: application/json"

In Google Chrome we should be able to see the raw SSE events pushed by the server (they do not look pretty, but they are good enough to illustrate the flow).

So, what about the application packaging? Docker and containers are certainly a viable option, but with Java 9 and above we have another player: jlink. It assembles and optimizes a set of modules and their dependencies into a custom, self-sufficient runtime image. Let us try it out.

      jlink --add-modules java.xml.bind,java.management \
            --module-path target/modules \
            --verbose \
            --strip-debug \
            --compress 2 \
            --no-header-files \
            --no-man-pages \
            --output target/cxf-java-10-app

Here we hit the first wall. Unfortunately, since almost all the dependencies of our application are automatic modules, this is a problem for jlink, and we still have to include the module path explicitly when running from the runtime image:

      target/cxf-java-10-app/bin/java  \
           --add-modules java.xml.bind \
           --module-path target/modules \
           --module com.example.cxf/com.example.Starter

At the end of the day, it turned out to be not that scary. We are surely in the very early stage of JPMS adoption; this is just a beginning. When every library and every framework we are using adds the module-info.java to their artifacts (JARs), making them true modules despite all the quirks, then we could declare a victory. But the small wins are already happening, so make one yours!

The complete source of the project is available on Github.

Sunday, June 17, 2018

In the shoes of the consumer: do you really need to provide the client libraries for your APIs?

The beauty of RESTful web services and APIs is that any consumer which speaks the HTTP protocol will be able to understand and use them. Nonetheless, the same dilemma pops up over and over again: should you accompany your web APIs with client libraries or not? If yes, what languages or/and frameworks should you support?

Quite often this is not really an easy question to answer. So let us take a step back and think about the overall idea for a moment: what value can client libraries bring to the consumers?

Someone may say they lower the barrier for adoption. Indeed, specifically in the case of strongly typed languages, exploring the API contracts from your favorite IDE (syntax highlighting and auto-completion, please!) is quite handy. But by and large, RESTful web APIs are simple enough to start with, and good documentation would certainly be more valuable here.

Others may say it is good to shield the consumers from dealing with multiple API versions or rough edges. This also kind of makes sense, but I would argue it just hides the flaws in the way the web APIs in question are designed and evolve over time.

All in all, no matter how many clients you decide to bundle, the APIs are still going to be accessible by any generic HTTP consumer (curl, HttpClient, RestTemplate, you name it). Giving a choice is great, but the price to pay for maintenance could be really high. Could we do better? And as you may have guessed already, we certainly have quite a few options, hence this post.

The key ingredient of success here is to maintain an accurate specification of your RESTful web APIs, using OpenAPI v3.0 or even its predecessor, Swagger/OpenAPI 2.0 (or RAML, API Blueprint, it does not really matter much). In the case of OpenAPI/Swagger, the tooling is king: one could use Swagger Codegen, a template-driven engine, to generate API clients (and even server stubs) in many different languages, and this is what we are going to talk about in this post.
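
As a taste of the tooling, generating a Java client from the command line is a one-liner (the generate command and its flags come from the Swagger Codegen 3.x CLI; the specification path and output folder here are purely illustrative):

java -jar swagger-codegen-cli-3.0.0-rc1.jar generate \
     -i openapi.yaml \
     -l java \
     --library feign \
     -o people-api-client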

To simplify things, we are going to implement a consumer of the people management web API which we built in the previous post. To begin with, we need to get its OpenAPI v3.0 specification in the YAML (or JSON) format.

java -jar server-openapi/target/server-openapi-0.0.1-SNAPSHOT.jar

And then:

wget http://localhost:8080/api/openapi.yaml

Awesome, half of the job is done, literally. Now, let us allow Swagger Codegen to take the lead. In order not to complicate the matter, let's assume that the consumer is also a Java application, so we can understand the mechanics without any difficulties (but Java is only one of the options; the list of supported languages and frameworks is astonishing).

Along this post we are going to use OpenFeign, one of the most advanced Java HTTP client binders. Not only is it exceptionally simple to use, it also offers quite a few integrations we are going to benefit from soon.

<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-core</artifactId>
    <version>9.7.0</version>
</dependency>

<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-jackson</artifactId>
    <version>9.7.0</version>
</dependency>

The Swagger Codegen can be run as a stand-alone application from the command line, or as an Apache Maven plugin (the latter is what we are going to use).

<plugin>
    <groupId>io.swagger</groupId>
    <artifactId>swagger-codegen-maven-plugin</artifactId>
    <version>3.0.0-rc1</version>
    <executions>
        <execution>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <inputSpec>/contract/openapi.yaml</inputSpec>
                <apiPackage>com.example.people.api</apiPackage>
                <language>java</language>
                <library>feign</library>
                <modelPackage>com.example.people.model</modelPackage>
                <generateApiDocumentation>false</generateApiDocumentation>
                <generateSupportingFiles>false</generateSupportingFiles>
                <generateApiTests>false</generateApiTests>
                <generateApiDocs>false</generateApiDocs>
                <addCompileSourceRoot>true</addCompileSourceRoot>
                <configOptions>
                    <sourceFolder>/</sourceFolder>
                    <java8>true</java8>
                    <dateLibrary>java8</dateLibrary>
                    <useTags>true</useTags>
                </configOptions>
            </configuration>
        </execution>
    </executions>
</plugin>

If some of the options are not very clear, the Swagger Codegen has pretty good documentation to look to for clarifications. The important ones to pay attention to are language and library, which are set to java and feign respectively. One thing to note though: the support of the OpenAPI v3.0 specification is mostly complete, but you may encounter some issues nonetheless (as you noticed, the version is 3.0.0-rc1).

What you will get when your build finishes is a plain old Java interface, PeopleApi, annotated with OpenFeign annotations, which is a direct projection of the people management web API specification (which comes from /contract/openapi.yaml). Please notice that all model classes are generated as well.

@javax.annotation.Generated(
    value = "io.swagger.codegen.languages.java.JavaClientCodegen",
    date = "2018-06-17T14:04:23.031Z[Etc/UTC]"
)
public interface PeopleApi extends ApiClient.Api {
    @RequestLine("POST /api/people")
    @Headers({"Content-Type: application/json", "Accept: application/json"})
    Person addPerson(Person body);

    @RequestLine("DELETE /api/people/{email}")
    @Headers({"Content-Type: application/json"})
    void deletePerson(@Param("email") String email);

    @RequestLine("GET /api/people/{email}")
    @Headers({"Accept: application/json"})
    Person findPerson(@Param("email") String email);

    @RequestLine("GET /api/people")
    @Headers({"Accept: application/json"})
    List<Person> getPeople();
}

Let us compare it with the Swagger UI interpretation of the same specification, available at http://localhost:8080/api/api-docs?url=/api/openapi.json:

It looks right at first glance, but we had better ensure things work out as expected. Once we have the OpenFeign-annotated interface, it can be made functional (in this case, implemented through proxies) using the family of Feign builders, for example:

final PeopleApi api = Feign
    .builder()
    .client(new OkHttpClient())
    .encoder(new JacksonEncoder())
    .decoder(new JacksonDecoder())
    .logLevel(Logger.Level.HEADERS)
    .options(new Request.Options(1000, 2000))
    .target(PeopleApi.class, "http://localhost:8080/");

Great, the fluent builder style rocks. Assuming our people management web API server is up and running (by default, it is going to be available at http://localhost:8080/):

java -jar server-openapi/target/server-openapi-0.0.1-SNAPSHOT.jar

We can communicate with it by calling the freshly built PeopleApi instance methods, as in the code snippet below:

final Person person = api.addPerson(
        new Person()
            .email("a@b.com")
            .firstName("John")
            .lastName("Smith"));

It is really cool; if we rewind a bit, we actually did nothing. Everything is given to us for free, with only the web API specification available! But let us not stop here and remind ourselves that using Java interfaces does not eliminate the reality that we are dealing with remote systems. And things are going to fail here, sooner or later, no doubt.

Not so long ago we learned about circuit breakers and how useful they are when properly applied in the context of distributed systems. It would be really awesome to somehow introduce this feature into our OpenFeign-based client. Please welcome another member of the family, the HystrixFeign builder, a seamless integration with the Hystrix library:

final PeopleApi api = HystrixFeign
    .builder()
    .client(new OkHttpClient())
    .encoder(new JacksonEncoder())
    .decoder(new JacksonDecoder())
    .logLevel(Logger.Level.HEADERS)
    .options(new Request.Options(1000, 2000))
    .target(PeopleApi.class, "http://localhost:8080/");

The only thing we need to do is just to add these two dependencies (strictly speaking, hystrix-core is not really needed if you do not mind staying on an older version) to the consumer's pom.xml file.

<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-hystrix</artifactId>
    <version>9.7.0</version>
</dependency>

<dependency>
    <groupId>com.netflix.hystrix</groupId>
    <artifactId>hystrix-core</artifactId>
    <version>1.5.12</version>
</dependency>
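
As a bonus, the HystrixFeign builder also accepts a fallback instance whose methods are invoked whenever the call fails or the circuit is open. A hedged sketch (the fallback behavior below is purely illustrative):

final PeopleApi api = HystrixFeign
    .builder()
    .encoder(new JacksonEncoder())
    .decoder(new JacksonDecoder())
    .target(PeopleApi.class, "http://localhost:8080/", new PeopleApi() {
        // invoked instead of the remote calls when Hystrix trips or the call fails
        @Override public Person addPerson(Person body) { throw new IllegalStateException("People API is unavailable"); }
        @Override public void deletePerson(String email) { /* no-op fallback */ }
        @Override public Person findPerson(String email) { return null; }
        @Override public List<Person> getPeople() { return Collections.emptyList(); }
    });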

Arguably, this is one of the best examples of how easy and straightforward an integration could be. But even that is not the end of the story. Observability in distributed systems is as important as ever, and as we learned a while ago, distributed tracing is tremendously useful in helping us out here. And again, OpenFeign has support for it right out of the box; let us take a look.

OpenFeign fully integrates with any OpenTracing-compatible tracer. The Jaeger tracer is one of those, which among other things has a really nice web UI front-end to explore traces and dependencies. Let us run it first; luckily, it is fully Docker-ized.

docker run -d -e \
    COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
    -p 5775:5775/udp \
    -p 6831:6831/udp \
    -p 6832:6832/udp \
    -p 5778:5778 \
    -p 16686:16686 \
    -p 14268:14268 \
    -p 9411:9411 \
    jaegertracing/all-in-one:latest

A couple of additional dependencies have to be introduced in order for the OpenFeign client to be aware of the OpenTracing capabilities.

<dependency>
    <groupId>io.github.openfeign.opentracing</groupId>
    <artifactId>feign-opentracing</artifactId>
    <version>0.1.0</version>
</dependency>

<dependency>
    <groupId>io.jaegertracing</groupId>
    <artifactId>jaeger-core</artifactId>
    <version>0.29.0</version>
</dependency>

From the Feign builder side, the only change (besides the introduction of the tracer instance) is to wrap the client into a TracingClient, as the snippet below demonstrates:

final Tracer tracer = new Configuration("consumer-openapi")
    .withSampler(
        new SamplerConfiguration()
            .withType(ConstSampler.TYPE)
            .withParam(new Float(1.0f)))
    .withReporter(
        new ReporterConfiguration()
            .withSender(
                new SenderConfiguration()
                    .withEndpoint("http://localhost:14268/api/traces")))
    .getTracer();
            
final PeopleApi api = Feign
    .builder()
    .client(new TracingClient(new OkHttpClient(), tracer))
    .encoder(new JacksonEncoder())
    .decoder(new JacksonDecoder())
    .logLevel(Logger.Level.HEADERS)
    .options(new Request.Options(1000, 2000))
    .target(PeopleApi.class, "http://localhost:8080/");

On the server side we need to integrate with OpenTracing as well. Apache CXF has first-class support for it, bundled into the cxf-integration-tracing-opentracing module. Let us include it as a dependency, this time in the server's pom.xml.

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-integration-tracing-opentracing</artifactId>
    <version>3.2.4</version>
</dependency>

Depending on the way you configure your applications, there should be an instance of the tracer available, which should be passed later on to the OpenTracingFeature, for example.

// Create tracer
final Tracer tracer = new Configuration(
        "server-openapi", 
        new SamplerConfiguration(ConstSampler.TYPE, 1),
        new ReporterConfiguration(new HttpSender("http://localhost:14268/api/traces"))
    ).getTracer();

// Include OpenTracingFeature feature
final JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
factory.setProvider(new OpenTracingFeature(tracer));
...
factory.create();

From now on, the invocation of any people management API endpoint through the generated OpenFeign client will be fully traceable in the Jaeger web UI, available at http://localhost:16686/search (assuming your Docker host is localhost).

Our scenario is pretty simple, but imagine real applications where dozens of external service calls could happen while a single request travels through the system. Without distributed tracing in place, every issue has a chance to turn into a mystery.

As a side note, if you look closer at the trace from the picture, you may notice that the server and the consumer use different versions of the Jaeger API. This is not a mistake, since the latest released version of Apache CXF is using an older OpenTracing API version (and, as such, an older Jaeger client API), but it does not prevent things from working as expected.

With that, it is time to wrap up. Hopefully, the benefits of contract-based (or even better, contract-first) development in the world of RESTful web services and APIs become more and more apparent: generation of smart clients, consumer-driven contract tests, discoverability and rich documentation are just a few to mention. Please make use of it!

The complete project sources are available on Github.

Monday, April 30, 2018

Moving With The Times: Towards OpenAPI v3.0.0 adoption in JAX-RS APIs

It is terrifying to see how fast time passes! The OpenAPI specification 3.0.0, a major revamp of the Swagger specification we all got used to, was released almost a year ago, but it took a while for the tooling to catch up. However, with the recent official release of Swagger Core 2.0.0, things are going to accelerate for sure.

To prove the point, Apache CXF, the well-known JAX-RS 2.1 implementation, is one of the first adopters of OpenAPI 3.0.0, and in today's post we are going to take a look at how easily your JAX-RS 2.1 APIs can benefit from it.

As always, to keep things simple, we are going to design a people management web API with just a handful of resources to support it, nothing too exciting here.

POST   /api/people
GET    /api/people/{email}
GET    /api/people
DELETE /api/people/{email}

Our model would consist of a single Person class.

public class Person {
    private String email;
    private String firstName;
    private String lastName;
}

To add a bit of magic, we will be using Spring Boot to get us up and running as fast as possible. With that, let us start filling in the dependencies (assuming we are using Apache Maven for build management).

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-spring-boot-starter-jaxrs</artifactId>
    <version>3.2.4</version>
</dependency>

In the recent 3.2.x releases Apache CXF introduces a new module cxf-rt-rs-service-description-openapi-v3 dedicated to OpenAPI 3.0.0, based on Swagger Core 2.0.0.

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-service-description-openapi-v3</artifactId>
    <version>3.2.4</version>
</dependency>

<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>swagger-ui</artifactId>
    <version>3.13.6</version>
</dependency>

The presence of Swagger UI is not strictly necessary, but it is an exceptionally useful and beautiful tool to explore your APIs (and if it is available on the classpath, Apache CXF seamlessly integrates it into your applications, as we are going to see in a bit).

The prerequisites are in place, let us do some coding! Before we begin, it is worth noting that Swagger Core 2.0.0 has many ways to populate the OpenAPI 3.0.0 definitions for your services, including property files, annotations or doing it programmatically. Along this post we are going to use annotations only.

@OpenAPIDefinition(
    info = @Info(
        title = "People Management API",
        version = "0.0.1-SNAPSHOT",
        license = @License(
            name = "Apache 2.0 License",
            url = "http://www.apache.org/licenses/LICENSE-2.0.html"
        )
    )
)
@ApplicationPath("api")
public class JaxRsApiApplication extends Application {
}

Looks pretty simple; the @OpenAPIDefinition sets the top-level definition for all our web APIs. Moving on to the PeopleRestService, we just add the @Tag annotation to, well, tag our API.

@Path( "/people" ) 
@Tag(name = "people")
public class PeopleRestService {
    // ...
}

Awesome, nothing complicated so far. The meaty part starts with the web API operation definitions, so let us take a look at the first example, the operation to fetch everyone.

@Produces(MediaType.APPLICATION_JSON)
@GET
@Operation(
    description = "List all people", 
    responses = {
        @ApiResponse(
            content = @Content(
                array = @ArraySchema(schema = @Schema(implementation = Person.class))
            ),
            responseCode = "200"
        )
    }
)
public Collection<Person> getPeople() {
    // ...
}

Quite a few annotations, but by and large it looks pretty clean and straightforward. Let us take a look at another one, the endpoint to find a person by e-mail address.

@Produces(MediaType.APPLICATION_JSON)
@Path("/{email}")
@GET
@Operation(
    description = "Find person by e-mail", 
    responses = {
        @ApiResponse(
            content = @Content(schema = @Schema(implementation = Person.class)), 
            responseCode = "200"
        ),
        @ApiResponse(
            responseCode = "404", 
            description = "Person with such e-mail doesn't exists"
        )
    }
)
public Person findPerson(
        @Parameter(description = "E-Mail address to lookup for", required = true) 
        @PathParam("email") final String email) {
    // ...
}

In the same vein, the operation to remove a person by e-mail looks mostly identical.

@Path("/{email}")
@DELETE
@Operation(
    description = "Delete existing person",
    responses = {
        @ApiResponse(
            responseCode = "204",
            description = "Person has been deleted"
        ),
        @ApiResponse(
            responseCode = "404", 
            description = "Person with such e-mail doesn't exists"
        )
     }
)
public Response deletePerson(
        @Parameter(description = "E-Mail address to lookup for", required = true ) 
        @PathParam("email") final String email) {
    // ...
}

Great, let us wrap up by looking into the last and arguably most interesting endpoint, which adds a new person (the choice to use @FormParam is purely to illustrate a different flavor of the APIs).

@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Produces(MediaType.APPLICATION_JSON)
@POST
@Operation(
    description = "Create new person",
    responses = {
        @ApiResponse(
            content = @Content(
                schema = @Schema(implementation = Person.class), 
                mediaType = MediaType.APPLICATION_JSON
            ),
            headers = @Header(name = "Location"),
            responseCode = "201"
        ),
        @ApiResponse(
            responseCode = "409", 
            description = "Person with such e-mail already exists"
        )
    }
)
public Response addPerson(@Context final UriInfo uriInfo,
        @Parameter(description = "E-Mail", required = true) 
        @FormParam("email") final String email, 
        @Parameter(description = "First Name", required = true) 
        @FormParam("firstName") final String firstName, 
        @Parameter(description = "Last Name", required = true) 
        @FormParam("lastName") final String lastName) {
    // ...
}
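
For illustration, a request against this endpoint would be submitted as form data rather than JSON; a hedged example (assuming the default Spring Boot port):

curl -i -X POST http://localhost:8080/api/people \
     -H "Content-Type: application/x-www-form-urlencoded" \
     -d "email=a@b.com&firstName=John&lastName=Smith"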

If you have experience documenting web APIs using the older Swagger specifications, you might find the approach quite familiar but more verbose (or, better to say, formalized). It is the result of the tremendous work the specification leads and the community have done to make it as complete and extensible as possible.

The APIs are defined and documented, so it is time to try them out! The missing piece though is the Spring configuration, where we initialize and expose our JAX-RS web services.

@Configuration
@EnableAutoConfiguration
@ComponentScan(basePackageClasses = PeopleRestService.class)
public class AppConfig {
    @Autowired private PeopleRestService peopleRestService;
 
    @Bean(destroyMethod = "destroy")
    public Server jaxRsServer(Bus bus) {
        final JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();

        factory.setApplication(new JaxRsApiApplication());
        factory.setServiceBean(peopleRestService);
        factory.setProvider(new JacksonJsonProvider());
        factory.setFeatures(Arrays.asList(new OpenApiFeature()));
        factory.setBus(bus);
        factory.setAddress("/");

        return factory.create();
    }

    @Bean
    public ServletRegistrationBean cxfServlet() {
        final ServletRegistrationBean servletRegistrationBean = new ServletRegistrationBean(new CXFServlet(), "/api/*");
        servletRegistrationBean.setLoadOnStartup(1);
        return servletRegistrationBean;
    }
}

The OpenApiFeature is a key ingredient here which takes care of all the integration and introspection. The Spring Boot application is the last touch to finish the picture.

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(AppConfig.class, args);
    }
}

Let us build and run it right away:

mvn clean package 
java -jar target/jax-rs-2.1-openapi-0.0.1-SNAPSHOT.jar

With the application started, the OpenAPI 3.0.0 specification of our web APIs should be live and available for consumption in the JSON format at:

http://localhost:8080/api/openapi.json

Or in YAML format at:

http://localhost:8080/api/openapi.yaml

Wouldn't it be great to explore our web APIs and play with them? Since we included the Swagger UI dependency, this is a no-brainer; just navigate to http://localhost:8080/api/api-docs?url=/api/openapi.json:

Pay special attention to the small icon Swagger UI places alongside your API version, hinting about its conformance to the OpenAPI 3.0.0 specification.

Just to note here, there is nothing special about using Spring Boot. In case you are using Apache CXF inside an OSGi container (like Apache Karaf, for example), the integration with OpenAPI 3.0.0 is also available (please check out the official documentation and samples if you are interested in the subject).

It all looks easy and simple, but what about migrating to OpenAPI 3.0.0 from older versions of the Swagger specifications? Apache CXF has a powerful feature to convert the older specifications on the fly, but in general the OpenAPI.Tools portal is the right place to evaluate your options.

Should you migrate to OpenAPI 3.0.0? I honestly believe you should, or at least you should try experimenting with it, but please be aware that the tooling is still not mature enough; expect a few roadblocks along the way (which you should be able to overcome by contributing patches, by the way). But undoubtedly, the future is bright!

The complete project sources are available on Github.

Tuesday, February 20, 2018

Run away from 'null' checks feast: doing PATCH properly with JSON Patch (RFC-6902)

Today we are going to have a conversation about REST(ful) services and APIs, more precisely, around one peculiar subject many experienced developers struggle with. To put things into perspective, we are going to talk about web APIs, where the REST(ful) principles adhere to the HTTP protocol, heavily exploit the semantics of HTTP methods and (usually, but not necessarily) use JSON to represent the state.

One particular HTTP method stands out, and although its meaning sounds pretty straightforward, the implementation is far from that. Yes, we are looking at you, PATCH. So what is the problem, really? It is just an update, right? Yes, in essence the semantics of the PATCH method in the context of HTTP-based REST(ful) web services is a partial update of the resource. Now, how would you do that, Java developer? Here is where the fun begins.

Let us go over a very simple example of a book management API, modeled using the latest JSR 370: Java API for RESTful Web Services (JAX-RS 2.1) specification (which finally includes the @PATCH annotation!) and the terrific Apache CXF framework. Our resource is just a very simplistic Book class.

public class Book {
    private String title;
    private Collection<String> authors;
    private String isbn;
}

How would you implement the partial update using the PATCH method? Sadly, the brute-force solution, the null feast, is the clear winner here.

@PATCH
@Path("/{isbn}")
@Consumes(MediaType.APPLICATION_JSON)
public void update(@PathParam("isbn") String isbn, Book book) {
    final Book existing = bookService.find(isbn).orElseThrow(NotFoundException::new);
        
    if (book.getTitle() != null) {
        existing.setTitle(book.getTitle());
    }

    if (book.getAuthors() != null) {
        existing.setAuthors(book.getAuthors());
    }
        
    // And here it goes on and on ...
    // ...
}

In a nutshell, this is a null-guarded PUT clone. Probably someone could claim that it kind of works and declare victory here. But hopefully, for the majority of us, this approach clearly has a lot of flaws and should never be taken. Alternatives? Yes, absolutely: RFC-6902: JSON Patch, not an official standard just yet, but it is getting there.

RFC-6902: JSON Patch drastically changes the game by expressing a sequence of operations to apply to a JSON document. To illustrate the idea in action, let us start with a simple example of changing a book's title, described in terms of the desired outcome.

{ "op": "replace", "path": "/title", "value": "..." }

Looks clean; what about adding the authors? Easy ...

{ "op": "add", "path": "/authors", "value": ["...", "..."] }

Awesome, sold; but ... implementation-wise it seems to require quite a lot of work, doesn't it? Not really, if we rely on the latest and greatest JSR 374: Java API for JSON Processing 1.1, which fully supports RFC-6902: JSON Patch. Armed with the right tools, this time let us do it right.


<dependency>
    <groupId>org.glassfish</groupId>
    <artifactId>javax.json</artifactId>
    <version>1.1.2</version>
</dependency>

Interestingly, not many are aware that Apache CXF, and in general any JAX-RS-compliant framework, closely integrates with JSON-P and supports its basic data types. In the case of Apache CXF, it is just a matter of adding the cxf-rt-rs-extension-providers module dependency:


<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-extension-providers</artifactId>
    <version>3.2.2</version>
</dependency>

And registering JsrJsonpProvider with your server factory bean, for example:

@Configuration
public class AppConfig {
    @Bean
    public Server rsServer(Bus bus, BookRestService service) {
        JAXRSServerFactoryBean endpoint = new JAXRSServerFactoryBean();
        endpoint.setBus(bus);
        endpoint.setAddress("/");
        endpoint.setServiceBean(service);
        endpoint.setProvider(new JsrJsonpProvider());
        return endpoint.create();
    }
}

With all the pieces wired together, our PATCH operation can be implemented using JSR 374: Java API for JSON Processing 1.1 alone, in just a few lines:

@Service
@Path("/catalog")
public class BookRestService {
    @Inject private BookService bookService;
    @Inject private BookConverter converter;

    @PATCH
    @Path("/{isbn}")
    @Consumes(MediaType.APPLICATION_JSON)
    public void apply(@PathParam("isbn") String isbn, JsonArray operations) {
        final Book book = bookService.find(isbn).orElseThrow(NotFoundException::new);
        final JsonPatch patch = Json.createPatch(operations);
        final JsonObject result = patch.apply(converter.toJson(book));
        bookService.update(isbn, converter.fromJson(result));
    }
}

The BookConverter performs the conversion between the Book class and its JSON representation (and vice versa), which we are doing by hand to illustrate other capabilities which JSR 374: Java API for JSON Processing 1.1 provides.

@Component
public class BookConverter {
    public Book fromJson(JsonObject json) {
        final Book book = new Book();
        book.setTitle(json.getString("title"));
        book.setIsbn(json.getString("isbn"));
        book.setAuthors(
            json
                .getJsonArray("authors")
                .stream()
                .map(value -> (JsonString)value)
                .map(JsonString::getString)
                .collect(Collectors.toList()));
        return book;
    }

    public JsonObject toJson(Book book) {
        return Json
            .createObjectBuilder()
            .add("title", book.getTitle())
            .add("isbn", book.getIsbn())
            .add("authors", Json.createArrayBuilder(book.getAuthors()))
            .build();
    }
}

To finish up, let us wrap this simple JAX-RS 2.1 web API into the beautiful Spring Boot envelope.

@SpringBootApplication
public class BookServerStarter {    
    public static void main(String[] args) {
        SpringApplication.run(BookServerStarter.class, args);
    }
}

And run it.

mvn spring-boot:run

To conclude the discussion, let us play a bit with a more realistic example by deliberately adding an incomplete book into our catalog.

$ curl -i -X POST http://localhost:19091/services/catalog -H "Content-Type: application/json" -d '{
       "title": "Microservice Architecture",
       "isbn": "978-1491956250",
       "authors": [
           "Ronnie Mitra",
           "Matt McLarty"
       ]
   }'

HTTP/1.1 201 Created
Date: Tue, 20 Feb 2018 02:30:18 GMT
Location: http://localhost:19091/services/catalog/978-1491956250
Content-Length: 0

There are a couple of inaccuracies we would like to fix in this book description, namely to set the title to the complete one, "Microservice Architecture: Aligning Principles, Practices, and Culture", and to include the missing co-authors, Irakli Nadareishvili and Mike Amundsen. With the API we developed a moment ago, it is a no-brainer.

$ curl -i -X PATCH http://localhost:19091/services/catalog/978-1491956250 -H "Content-Type: application/json" -d '[
       { "op": "add", "path": "/authors/0", "value": "Irakli Nadareishvili" },
       { "op": "add", "path": "/authors/-", "value": "Mike Amundsen" },
       { "op": "replace", "path": "/title", "value": "Microservice Architecture: Aligning Principles, Practices, and Culture" }
   ]'

HTTP/1.1 204 No Content
Date: Tue, 20 Feb 2018 02:38:48 GMT

The path reference of the first two operations may look a bit confusing, but fear not, let us clarify that. Because authors is a collection (or, in terms of JSON data types, an array), we can use the RFC-6902: JSON Patch array index notation to specify exactly where we would like the new element to be inserted. The first operation uses index '0' to denote the head position, while the second one uses the '-' placeholder to simply say "add to the end of the collection". If we retrieve the book right after the update, we should see our modifications applied exactly as we asked.

$ curl http://localhost:19091/services/catalog/978-1491956250

{
    "title": "Microservice Architecture: Aligning Principles, Practices, and Culture",
    "isbn": "978-1491956250",
    "authors": [
        "Irakli Nadareishvili",
        "Ronnie Mitra",
        "Matt McLarty",
        "Mike Amundsen"
    ]
}

Clean, simple and powerful. To be fair, there is a price to pay in the form of additional JSON manipulations (in order to apply the patch), but is it worth the effort? I believe it is ...

Next time you are going to design new shiny REST(ful) web APIs, please seriously consider RFC-6902: JSON Patch to back the PATCH implementation of your resources. I believe closer integration with JAX-RS is also coming (if not there yet) to directly support the JsonPatch class and its family.

And last, but not least, in this post we have touched on the server-side implementation only, but JSR 374: Java API for JSON Processing 1.1 includes convenient client-side scaffolding as well, giving full-fledged programmatic control over the patches.

final JsonPatch patch = Json.createPatchBuilder()
    .add("/authors/0", "Irakli Nadareishvili")
    .add("/authors/-", "Mike Amundsen")
    .replace("/title", "Microservice Architecture: Aligning Principles, Practices, and Culture")
    .build();
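
For completeness, here is a hedged sketch of submitting such a patch through the standard JAX-RS client API (the endpoint URL matches the examples above; note that some HTTP client connectors need extra configuration to allow the PATCH verb):

final Client client = ClientBuilder.newClient().register(new JsrJsonpProvider());
final Response response = client
    .target("http://localhost:19091/services/catalog/{isbn}")
    .resolveTemplate("isbn", "978-1491956250")
    .request()
    .method("PATCH", Entity.entity(patch.toJsonArray(), MediaType.APPLICATION_JSON));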

The complete project sources are available on Github.

Monday, October 30, 2017

Better late than never: SSE, or Server-Sent Events, are now in JAX-RS

Server-Sent Events (or just SSE) is a quite useful protocol which allows servers to push data to clients over HTTP. This is something our web browsers have supported for ages but, surprisingly, it was neglected by the JAX-RS specification for quite a long time. Although Jersey had an extension available for the SSE media type, the API was never formalized and, as such, was not portable to other JAX-RS implementations.

Luckily, JAX-RS 2.1, also known as JSR-370, has changed that by making SSE support, both client-side and server-side, part of the official specification. In today's post we are going to look at how to integrate SSE support into existing Java REST(ful) web services, using the recently released version 3.2.0 of the terrific Apache CXF framework. In fact, besides the bootstrapping, there is nothing really CXF-specific here; all the examples should work in any other framework which implements the JAX-RS 2.1 specification.

Without further ado, let us get started. As a significant share of Java projects these days are built on top of the awesome Spring Framework, our sample application would use Spring Boot and the Apache CXF Spring Boot Integration to get us off the ground quickly. Our good old buddy Apache Maven would help us as well by managing the project dependencies.


<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
    <version>1.5.8.RELEASE</version>
</dependency>

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-frontend-jaxrs</artifactId>
    <version>3.2.0</version>
</dependency>

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-spring-boot-starter-jaxrs</artifactId>
    <version>3.2.0</version>
</dependency>

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-client</artifactId>
    <version>3.2.0</version>
</dependency>

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-sse</artifactId>
    <version>3.2.0</version>
</dependency>

Under the hood, Apache CXF is using the Atmosphere framework to implement the SSE transport, so this is another dependency we have to include.


<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>atmosphere-runtime</artifactId>
    <version>2.4.14</version>
</dependency>

Relying on the Atmosphere framework introduces the need for an additional configuration setting, namely transportId, so as to ensure that the SSE-capable transport will be picked up at runtime. The relevant details could be added into the application.yml file:

cxf:
  servlet:
    init:
      transportId: http://cxf.apache.org/transports/http/sse

Great, so the foundation is there, moving on. The REST(ful) web service we are going to build would expose imaginary CPU load averages (for simplicity, randomly generated) as SSE streams. The Stats class would constitute our data model.

public class Stats {
    private long timestamp;
    private int load;

    public Stats() {
    }

    public Stats(long timestamp, int load) {
        this.timestamp = timestamp;
        this.load = load;
    }

    // Getters and setters are omitted
    ...
}

Speaking of streams, the Reactive Streams specification made its way into Java 9, and hopefully we are going to see accelerated adoption of reactive programming models by the Java community. Moreover, developing SSE-enabled REST(ful) web services is so much easier and more straightforward when backed by Reactive Streams. To make the case, let us onboard RxJava 2 into our sample application.


<dependency>
    <groupId>io.reactivex.rxjava2</groupId>
    <artifactId>rxjava</artifactId>
    <version>2.1.6</version>
</dependency>

This is a good moment to start with our StatsRestService class, a typical JAX-RS resource implementation. The key SSE capabilities in JAX-RS 2.1 are centered around the Sse contextual object, which could be injected like this:

@Service
@Path("/api/stats")
public class StatsRestService {
    @Context 
    public void setSse(Sse sse) {
        // Access Sse context here
    }

Out of the Sse context we could get access to two very useful abstractions: SseBroadcaster and OutboundSseEvent.Builder, for example:

private SseBroadcaster broadcaster;
private Builder builder;
    
@Context 
public void setSse(Sse sse) {
    this.broadcaster = sse.newBroadcaster();
    this.builder = sse.newEventBuilder();
}

As you may have already guessed, the OutboundSseEvent.Builder constructs instances of the OutboundSseEvent class which could be sent over the wire, while the SseBroadcaster broadcasts the same SSE stream to all the connected clients. With that being said, we could generate a stream of OutboundSseEvents and distribute it to everyone who is interested:

private static void subscribe(final SseBroadcaster broadcaster, final Builder builder) {
    Flowable
        .interval(1, TimeUnit.SECONDS)
        .zipWith(eventsStream(builder), (id, bldr) -> createSseEvent(bldr, id))
        .subscribeOn(Schedulers.single())
        .subscribe(broadcaster::broadcast);
}

private static Flowable<OutboundSseEvent.Builder> eventsStream(final Builder builder) {
    return Flowable.generate(emitter -> emitter.onNext(builder.name("stats")));
}

If you are not familiar with RxJava 2, no worries, this is what is happening here. The eventsStream method returns an effectively infinite stream of OutboundSseEvent.Builder instances for SSE events of type stats. The subscribe method is a little bit more involved. We start off by creating a stream which emits a sequential number every second, e.g. 0, 1, 2, 3, 4, 5, 6, ... and so on. Later on, we combine this stream with the one returned by the eventsStream method, essentially merging both streams into a single one which emits a tuple of (number, OutboundSseEvent.Builder) every second. Frankly speaking, this tuple is not very useful to us, so we transform it into an instance of the OutboundSseEvent class, treating the number as the SSE event identifier:

private static final Random RANDOM = new Random();

private static OutboundSseEvent createSseEvent(OutboundSseEvent.Builder builder, long id) {
    return builder
        .id(Long.toString(id))
        .data(Stats.class, new Stats(new Date().getTime(), RANDOM.nextInt(100)))
        .mediaType(MediaType.APPLICATION_JSON_TYPE)
        .build();
}

The OutboundSseEvent may carry any payload in the data property, which will be serialized with respect to the mediaType specified, using the usual MessageBodyWriter resolution strategy. Once we get our OutboundSseEvent instance, we send it off using the SseBroadcaster::broadcast method. Please notice that we handed off the control flow to another thread using the subscribeOn operator; this is usually what you would want to do.

Good, hopefully the stream part is clear now, but how could we actually subscribe to the SSE events emitted by the SseBroadcaster? That is easier than you might think:

@GET
@Path("broadcast")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void broadcast(@Context SseEventSink sink) {
    broadcaster.register(sink);
}

And we are all set. The most important piece here is the content type being produced, which should be set to MediaType.SERVER_SENT_EVENTS. In this case, the contextual instance of the SseEventSink becomes available and could be registered with the SseBroadcaster instance.

To see our JAX-RS resource in action, we need to bootstrap the server instance using, for example, JAXRSServerFactoryBean, configuring all the necessary providers along the way. Please take note that we are also explicitly specifying the transport to be used, in this case SseHttpTransportFactory.TRANSPORT_ID.

@Configuration
@EnableWebMvc
public class AppConfig extends WebMvcConfigurerAdapter {
    @Bean
    public Server rsServer(Bus bus, StatsRestService service) {
        JAXRSServerFactoryBean endpoint = new JAXRSServerFactoryBean();
        endpoint.setBus(bus);
        endpoint.setAddress("/");
        endpoint.setServiceBean(service);
        endpoint.setTransportId(SseHttpTransportFactory.TRANSPORT_ID);
        endpoint.setProvider(new JacksonJsonProvider());
        return endpoint.create();
    }
    
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry
          .addResourceHandler("/static/**")
          .addResourceLocations("classpath:/web-ui/"); 
    }
}

To close the loop, we just need to supply the runner for our Spring Boot application:

@SpringBootApplication
public class SseServerStarter {    
    public static void main(String[] args) {
        SpringApplication.run(SseServerStarter.class, args);
    }
}

Now, if we run the application and navigate to http://localhost:8080/static/broadcast.html using multiple web browsers or different tabs within the same browser, we would observe the identical stream of events charted inside all of them.

Nice, broadcasting is certainly a valid use case, but what about returning an independent SSE stream on each endpoint invocation? Easy, just use SseEventSink methods, like send and close, to manipulate the SSE stream directly.

@GET
@Path("sse")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void stats(@Context SseEventSink sink) {
    Flowable
        .interval(1, TimeUnit.SECONDS)
        .zipWith(eventsStream(builder), (id, bldr) -> createSseEvent(bldr, id))
        .subscribeOn(Schedulers.single())
        .subscribe(sink::send, ex -> {}, sink::close);
}

This time, if we run the application and navigate to http://localhost:8080/static/index.html using multiple web browsers or different tabs within the same browser, we would observe absolutely different charts.

Excellent, the server-side APIs are indeed very concise and easy to use. But what about the client side, could we consume SSE streams from Java applications? The answer is yes, absolutely. JAX-RS 2.1 outlines the client-side API as well, with SseEventSource at the heart of it.


final WebTarget target = ClientBuilder
    .newClient()
    .register(JacksonJsonProvider.class)
    .target("http://localhost:8080/services/api/stats/sse");
        
try (final SseEventSource eventSource =
            SseEventSource
                .target(target)
                .reconnectingEvery(5, TimeUnit.SECONDS)
                .build()) {

    eventSource.register(event -> {
        final Stats stats = event.readData(Stats.class, MediaType.APPLICATION_JSON_TYPE);
        System.out.println("name: " + event.getName());
        System.out.println("id: " + event.getId());
        System.out.println("comment: " + event.getComment());
        System.out.println("data: " + stats.getLoad() + ", " + stats.getTimestamp());
        System.out.println("---------------");
    });
    eventSource.open();

    // Just consume SSE events for 10 seconds
    Thread.sleep(10000); 
}     

If we run this code snippet (assuming the server is up and running as well), we would see something like this in the console (as you may recall, the data is generated randomly):

name: stats
id: 0
comment: null
data: 82, 1509376080027
---------------
name: stats
id: 1
comment: null
data: 68, 1509376081033
---------------
name: stats
id: 2
comment: null
data: 12, 1509376082028
---------------
name: stats
id: 3
comment: null
data: 5, 1509376083028
---------------

...

As we can see, the OutboundSseEvent on the server side becomes an InboundSseEvent on the client side. The client may consume any payload from the data property, which could be deserialized by specifying the expected media type, using the usual MessageBodyReader resolution strategy.

There is a lot of material squeezed into this single post. And still, there are a few more things regarding SSE and JAX-RS 2.1 which we have not covered here, like, for example, using HttpHeaders.LAST_EVENT_ID_HEADER or configuring the reconnect delays. Those could be a great topic for an upcoming post if there is an interest to learn about them.
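Just to give a taste of it, the server could, hypothetically, pick up the identifier of the last event seen by the client from the dedicated request header and resume the stream from that point, something along these lines (a sketch only, with the actual replay logic left out):

@GET
@Path("resume")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void resume(@HeaderParam(HttpHeaders.LAST_EVENT_ID_HEADER) String lastEventId,
        @Context SseEventSink sink) {
    // The 'Last-Event-ID' header is sent by the client upon reconnection, so
    // we could (hypothetically) restart the event sequence from that point on
    final long offset = (lastEventId != null) ? Long.parseLong(lastEventId) + 1 : 0;
    // ... emit the events into the sink starting from 'offset' ...
}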

To conclude, the SSE support in JAX-RS is what many of us have been awaiting for so long. It is finally there, so please give it a try!

The complete project sources are available on Github.

Wednesday, July 12, 2017

The Patterns of the Antipatterns: Architecture

We have always been looking for the best ways to architect our applications and platforms so they are maintainable, extensible, observable, adaptable and easy to evolve (this is by no means a complete list of desirable properties, but these are the foundational ones). Layered architecture, component-based architecture, hexagonal architecture, microservice architecture (and many others) advocate great design principles and guidelines to achieve these goals.

For example, layered architecture suggests having a clear separation between different layers (persistence, services, caching, presentation, ...) with strict rules imposed on how the layers could talk to each other. Component-based architecture, quite widespread in the world of UI development but not limited to it, introduces components (which internally may be built on top of layered architecture or another architectural style) as the key building blocks of applications or systems.

In turn, hexagonal architecture goes even further by introducing the concepts of ports and adapters around the application core. Last but not least, microservice architecture, the hottest architectural style these days, advocates structuring the application or system as a set of loosely coupled, ideally small, self-sufficient services which collaborate with each other.

Each one of the aforementioned architectural styles (as well as others we haven't mentioned) has its own strengths and weaknesses, and a context where it shines the most. But there are two kinds of architectures which beat them all, and those we are going to discuss right away.

Proximity Architecture

There is only one principle this architectural style offers: use anything you want anywhere you need. Do you need access to a service from a data transfer object? No problem, just do it! Maybe your RESTful resource needs access to persistence services? Not an issue, go for it, because, by and large, why bother? It is just a pile of classes (usually within the same application); use static fields or methods, dependency injection or any other technique your language / frameworks provide. In the end, if the compiler is happy, what is the problem? Right?

No, it is not right ... For sure, you have heard funny terms such as "big ball of mud" or "spaghetti code". Those are the typical outcomes when proximity architecture drives the show.

There are many reasons why proximity architecture emerges. Those reasons range from the constant urge to push features out, to a lack of knowledge and/or experience in software craftsmanship, or, in the worst case, an unethical and careless approach to the software engineering discipline. If you happen to have startup experience, or have worked for one of those large companies that still claim to be a startup to justify their chaos, I'm sure you have seen this form of architecture or one of its variations. Are they all doomed?

Well, I would argue, no, not really, but fleeing the claws of proximity architecture solely depends on the trade-offs you are going to make. For developers, it is hard or even impossible to push back on the timelines set for delivering new features (unless you are exceptionally lucky and your manager understands what technical debt is and how to keep it under control). You have to, and you will, take shortcuts. However, it is enormously important to strive for the right balance between pouring in hacky code (on top of a bunch of other hacks) and going with a quick, not-the-best option, while keeping in mind how much harm it could do to the overall application architecture (and how hard it would be to refactor / improve / reverse that later on). Undoubtedly, seasoned software developers are able to cope with that.

I know, it sounds too vague to be actionable advice, but every company is unique; there is no magic universal rule or recipe to stop proximity architecture from poisoning your applications. One thing is clear though: if you let it in and do nothing, you are going to end up with an unmanageable codebase, and the only route to take would be to rewrite everything from scratch.

Micromonolith Architecture

Microservices, microservices, microservices ... everyone talks about them ... You hear from left and right that if you don't do microservices, you are stuck in the stone age. And many actually truly believe that all their problems and issues could be solved with microservices.

In many regards, microservice architecture could put life into stagnating projects, assuming you are ready to embrace it across the whole organization, culturally and technically. This is a serious long-term commitment and, as such, the changes won't happen overnight. But so many organizations fail to see the new beast rising here: micromonolith architecture in microservice clothing.

Here is what you could be sold as microservice architecture. Many organizations ended up with a monolith and have no choice but to deal with it. Quality is terrible, productivity is going down, releasing new features takes months (or even years) ... and throwing more people at it does not help at all. So everyone tries to escape the monolith, whatever it takes, building standalone applications or services on the side; luckily, the terrific Spring Boot makes it as easy as falling off a log.

At first glance, it doesn't look bad; after all, they are calling it a microservice, and a bunch of things are calling each other, right? Except that this is not right at all. Applications and services that thrive in this kind of unhealthy ecosystem often do not own any data. They do not have sufficient knowledge to function independently, and they do not belong to a concrete domain. These kinds of services need to talk to the monolith to perform any useful function, and in the worst-case scenario, they share a database with the monolith and become mere chatty proxies.

Why is that? Well, because in most cases it is very hard to migrate a large monolithic application to microservice architecture (the true one). And along with the migration, the organization should transform itself to become a fertile ground for the cultivation of microservices, naturally. Not everyone is brave enough to pull it off, but the good news is that it is feasible. And here is why.

While building the monolith, the organization (hopefully) becomes an expert in its domain. There is a ton of knowledge developed around the product; there are well-defined, well-understood use cases and flows, and domain-driven design might be of great help to formalize and capture all that. Once there is a solid understanding of what should be built, it is time to start cutting the pieces off the monolith. It is going to be a bumpy ride for sure, and probably you won't be able to get from point A to point B directly (refactorings? splitting the monolith into modules?), not to forget evolving your testing practices (consumer contract tests? component tests? e2e tests?) along the way. But the rewards are worth the effort.

Microservice architecture is a great one; it opens an unlimited number of opportunities and, when done right, could bring back the joy of development and serve as a solid foundation for the organization's software stacks, current and future. But neglecting its core principles would pave the road to frustration and chaos.