Thursday, March 27, 2014

Apache CXF 3.0: JAX-RS 2.0 and Bean Validation 1.1 finally together

The upcoming release 3.0 (currently in the milestone 2 phase) of the great Apache CXF framework brings a lot of interesting and useful features, getting closer to delivering full-fledged JAX-RS 2.0 support. One of those features, long awaited by many of us, is support for Bean Validation 1.1: an easy and concise model for adding validation capabilities to your REST services layer.

In this blog post we are going to look at how to configure Bean Validation 1.1 in your Apache CXF projects and discuss some interesting use cases. To keep the post reasonably short and focused, we will not discuss Bean Validation 1.1 itself but concentrate on its integration with JAX-RS 2.0 resources (some of the bean validation basics have already been covered in older posts).

At the moment, Hibernate Validator is the de-facto reference implementation of the Bean Validation 1.1 specification, with the latest version being 5.1.0.Final, and as such it will be the validation provider of our choice (the Apache BVal project currently supports only Bean Validation 1.0). It is worth mentioning that Apache CXF is implementation-agnostic and will work equally well with Hibernate Validator or with Apache BVal once a 1.1-compliant release is available.

We are going to build a very simple application to manage people. Our model consists of one single class named Person.

package com.example.model;

import javax.validation.constraints.NotNull;

import org.hibernate.validator.constraints.Email;

public class Person {
    @NotNull @Email private String email;
    @NotNull private String firstName;
    @NotNull private String lastName;
        
    public Person() {
    }

    public Person( final String email ) {
        this.email = email;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail( final String email ) {
        this.email = email;
    }
    
    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setFirstName( final String firstName ) {
        this.firstName = firstName;
    }
    
    public void setLastName( final String lastName ) {
        this.lastName = lastName;
    }        
}

From the snippet above we can see that the Person class imposes a couple of restrictions on its properties: none of them may be null. Additionally, the email property must contain a valid e-mail address (validated by the Hibernate Validator-specific @Email constraint). Pretty simple.
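
A quick way to see these constraints in action, before any CXF wiring, is to run a Person instance through a plain Validator. The snippet below is only an illustrative sketch (the class is made up for this post), not part of the project itself:

package com.example;

import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

import com.example.model.Person;

public class PersonValidationExample {
    public static void main( final String[] args ) {
        final Validator validator = Validation.buildDefaultValidatorFactory().getValidator();

        // firstName and lastName are left null and the email is not a valid address,
        // so three constraint violations are expected
        final Set< ConstraintViolation< Person > > violations = 
            validator.validate( new Person( "not-an-email" ) );

        for( final ConstraintViolation< Person > violation: violations ) {
            System.out.println( violation.getPropertyPath() + ": " + violation.getMessage() );
        }
    }
}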

Now, let us take a look at JAX-RS 2.0 resources with validation constraints. The skeleton of the PeopleRestService class is bound to the /people URL path and is shown below.

package com.example.rs;

import java.util.Collection;

import javax.inject.Inject;
import javax.validation.Valid;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import org.hibernate.validator.constraints.Length;

import com.example.model.Person;
import com.example.services.PeopleService;

@Path( "/people" ) 
public class PeopleRestService {
    @Inject private PeopleService peopleService;

    // REST methods here
}
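
The PeopleService injected here is a plain service bean which I will not describe in detail. Just to keep things clear, here is a minimal in-memory sketch of it (hypothetical - the real implementation may differ):

package com.example.services;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import com.example.model.Person;

public class PeopleService {
    private final List< Person > people = new ArrayList< Person >();

    public Person getByEmail( final String email ) {
        for( final Person person: people ) {
            if( person.getEmail().equals( email ) ) {
                return person;
            }
        }
        return null;
    }

    public Collection< Person > getPeople( final int count ) {
        // Return at most 'count' people
        return people.subList( 0, Math.min( count, people.size() ) );
    }

    public Person addPerson( final String email, final String firstName, final String lastName ) {
        final Person person = new Person( email );
        person.setFirstName( firstName );
        person.setLastName( lastName );
        people.add( person );
        return person;
    }
}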

The resource skeleton should look very familiar, nothing new. The first method we are going to add and decorate with validation constraints is getPerson, which looks up a person by e-mail address.

@Produces( { MediaType.APPLICATION_JSON } )
@Path( "/{email}" )
@GET
public @Valid Person getPerson( 
        @Length( min = 5, max = 255 ) @PathParam( "email" ) final String email ) {
    return peopleService.getByEmail( email );
}

There are a couple of differences from a traditional JAX-RS 2.0 method declaration. Firstly, we would like the e-mail address (the email path parameter) to be at least 5 characters long (but no more than 255 characters), which is imposed by the @Length( min = 5, max = 255 ) annotation. Secondly, we would like to ensure that only a valid person is returned by this method, so we annotated the method's return value with the @Valid annotation. The effect of @Valid is very interesting: the person instance in question will be checked against all validation constraints declared by its class (Person).

At the moment, Bean Validation 1.1 is not active by default in your Apache CXF projects, so if you run your application and call this REST endpoint, all validation constraints will simply be ignored. The good news is that it is very easy to activate Bean Validation 1.1, as it requires only three components to be added to your usual configuration (please check out the feature documentation for more details and advanced configuration):

  • JAXRSBeanValidationInInterceptor in-interceptor: performs validation of the input parameters of JAX-RS 2.0 resource methods
  • JAXRSBeanValidationOutInterceptor out-interceptor: performs validation of the return values of JAX-RS 2.0 resource methods
  • ValidationExceptionMapper exception mapper: maps validation violations to HTTP status codes. As per the specification, all input parameter violations result in a 400 Bad Request error, while all return value violations result in a 500 Internal Server Error. At the moment, the ValidationExceptionMapper does not include additional information in the response (as it may violate the application protocol), but it could easily be extended to provide more details about the validation errors (see the sketch after this list).
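
As an illustration of such an extension, the hypothetical mapper below (my own sketch, not the one shipped with Apache CXF) returns the violation messages in the response body instead of an empty response:

package com.example.config;

import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class DetailedValidationExceptionMapper 
        implements ExceptionMapper< ConstraintViolationException > {
    @Override
    public Response toResponse( final ConstraintViolationException exception ) {
        final StringBuilder details = new StringBuilder();

        for( final ConstraintViolation< ? > violation: exception.getConstraintViolations() ) {
            details
                .append( violation.getPropertyPath() )
                .append( ": " )
                .append( violation.getMessage() )
                .append( "\n" );
        }

        // This simple version always reports 400 Bad Request; a complete one would
        // distinguish return value violations and report 500 Internal Server Error instead.
        return Response
            .status( Response.Status.BAD_REQUEST )
            .entity( details.toString() )
            .type( MediaType.TEXT_PLAIN )
            .build();
    }
}
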
The AppConfig class shows one of the ways to wire all the required components together using RuntimeDelegate and JAXRSServerFactoryBean (XML-based configuration is also supported).

package com.example.config;

import java.util.Arrays;

import javax.ws.rs.ext.RuntimeDelegate;

import org.apache.cxf.bus.spring.SpringBus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.interceptor.Interceptor;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.apache.cxf.jaxrs.validation.JAXRSBeanValidationInInterceptor;
import org.apache.cxf.jaxrs.validation.JAXRSBeanValidationOutInterceptor;
import org.apache.cxf.jaxrs.validation.ValidationExceptionMapper;
import org.apache.cxf.message.Message;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

import com.example.rs.JaxRsApiApplication;
import com.example.rs.PeopleRestService;
import com.example.services.PeopleService;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

@Configuration
public class AppConfig {    
    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }

    @Bean @DependsOn( "cxf" )
    public Server jaxRsServer() {
        final JAXRSServerFactoryBean factory = 
            RuntimeDelegate.getInstance().createEndpoint( 
                jaxRsApiApplication(), 
                JAXRSServerFactoryBean.class 
            );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) );
        factory.setAddress( factory.getAddress() );
        factory.setInInterceptors( 
            Arrays.< Interceptor< ? extends Message > >asList( 
                new JAXRSBeanValidationInInterceptor()
            ) 
        );
        factory.setOutInterceptors( 
            Arrays.< Interceptor< ? extends Message > >asList( 
                new JAXRSBeanValidationOutInterceptor() 
            ) 
        );
        factory.setProviders( 
            Arrays.asList( 
                new ValidationExceptionMapper(), 
                new JacksonJsonProvider() 
            ) 
        );

        return factory.create();
    }
    
    @Bean 
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }
    
    @Bean 
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }

    @Bean 
    public PeopleService peopleService() {
        return new PeopleService();
    }
}
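
The JaxRsApiApplication referenced above is a regular JAX-RS Application subclass. A minimal version could be as simple as the sketch below (the "api" path is an assumption on my side, based on the /rest/api/* URLs used later):

package com.example.rs;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath( "api" )
public class JaxRsApiApplication extends Application {
}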

All the in/out interceptors and the exception mapper are injected. Great, let us build the project and run the server to validate that Bean Validation 1.1 is active and works as expected.

mvn clean package
java -jar target/jaxrs-2.0-validation-0.0.1-SNAPSHOT.jar 

Now, if we issue a REST request with a short (and invalid) e-mail address, a@b, the server should return 400 Bad Request. Let us validate that.

> curl http://localhost:8080/rest/api/people/a@b -i

HTTP/1.1 400 Bad Request
Date: Wed, 26 Mar 2014 00:11:59 GMT
Content-Length: 0
Server: Jetty(9.1.z-SNAPSHOT)

Excellent! To be completely sure, we can check the server console output and find there the validation exception of type ConstraintViolationException and its stack trace. Plus, the last line provides the details of what went wrong: PeopleRestService.getPerson.arg0: length must be between 5 and 255 (please notice that, because argument names are not available in the bytecode after compilation, they are replaced by placeholders like arg0, arg1, ...).

WARNING: Interceptor for {http://rs.example.com/}PeopleRestService has thrown exception, unwinding now
javax.validation.ConstraintViolationException
        at org.apache.cxf.validation.BeanValidationProvider.validateParameters(BeanValidationProvider.java:119)
        at org.apache.cxf.validation.BeanValidationInInterceptor.handleValidation(BeanValidationInInterceptor.java:59)
        at org.apache.cxf.validation.AbstractValidationInterceptor.handleMessage(AbstractValidationInterceptor.java:73)
        at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
        at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
        at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:240)
        at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:223)
        at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:197)
        at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:149)
        at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:167)
        at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
        at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:211)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
        at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:711)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1112)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:479)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1046)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at org.eclipse.jetty.server.Server.handle(Server.java:462)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:281)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:232)
        at org.eclipse.jetty.io.AbstractConnection$1.run(AbstractConnection.java:505)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
        at java.lang.Thread.run(Unknown Source)

Mar 25, 2014 8:11:59 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.getPerson.arg0: length must be between 5 and 255

Moving on, we are going to add two more REST methods to demonstrate collection and Response validation in action.

@Produces( { MediaType.APPLICATION_JSON } )
@GET
public @Valid Collection< Person > getPeople( 
        @Min( 1 ) @QueryParam( "count" ) @DefaultValue( "1" ) final int count ) {
    return peopleService.getPeople( count );
}

The @Valid annotation on a collection of objects ensures that every single object in the collection is valid. The count parameter is also constrained to a minimum value of 1 by the @Min( 1 ) annotation (the @DefaultValue is taken into account if the query parameter is not specified). Let us purposely add a person without first and last names set, so the resulting collection will contain at least one person instance which should not pass validation.

> curl http://localhost:8080/rest/api/people -X POST -id "email=a@b3.com"

With that, the call to the getPeople REST method should return 500 Internal Server Error. Let us check that this is the case.

> curl -i http://localhost:8080/rest/api/people?count=10

HTTP/1.1 500 Server Error
Date: Wed, 26 Mar 2014 01:28:58 GMT
Content-Length: 0
Server: Jetty(9.1.z-SNAPSHOT)

Looking into the server console output, the hint about what is wrong is right there.

Mar 25, 2014 9:28:58 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.getPeople.[0].firstName: may not be null
Mar 25, 2014 9:28:58 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.getPeople.[0].lastName: may not be null

And finally, yet another example, this time with the generic Response object.

@Valid
@Produces( { MediaType.APPLICATION_JSON  } )
@POST
public Response addPerson( @Context final UriInfo uriInfo,
        @NotNull @Length( min = 5, max = 255 ) @FormParam( "email" ) final String email, 
        @FormParam( "firstName" ) final String firstName, 
        @FormParam( "lastName" ) final String lastName ) {        
    final Person person = peopleService.addPerson( email, firstName, lastName );
    return Response.created( uriInfo.getRequestUriBuilder().path( email ).build() )
        .entity( person ).build();
}

The last example is a bit tricky: the Response class is part of the JAX-RS 2.0 API and has no validation constraints defined. As such, imposing validation rules on an instance of this class would not trigger any violations. But Apache CXF tries its best and performs a simple but useful trick: instead of the Response instance, the response's entity is validated. We can easily verify that by trying to create a person without first and last names set: the expected result should be 500 Internal Server Error.

> curl http://localhost:8080/rest/api/people -X POST -id "email=a@b3.com"

HTTP/1.1 500 Server Error
Date: Wed, 26 Mar 2014 01:13:06 GMT
Content-Length: 0
Server: Jetty(9.1.z-SNAPSHOT)

And the server console output is more verbose:

Mar 25, 2014 9:13:06 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.addPerson.<return value>.firstName: may not be null
Mar 25, 2014 9:13:06 PM org.apache.cxf.jaxrs.validation.ValidationExceptionMapper toResponse
WARNING: PeopleRestService.addPerson.<return value>.lastName: may not be null

Nice! In this blog post we have just touched on the topic of how Bean Validation 1.1 may make your Apache CXF projects better by providing rich and extensible declarative validation support. Definitely give it a try!

A complete project is available on GitHub.

Sunday, February 23, 2014

Knowing how all your components work together: distributed tracing with Zipkin

In today's post we will try to cover a very interesting and important topic: distributed system tracing. What it practically means is that we will try to trace a request from the point it is issued by the client to the point the response to this request is received. At first it looks quite straightforward, but in reality it may involve many calls to several other systems, databases, NoSQL stores, caches, you name it ...

In 2010 Google published a paper about Dapper, a large-scale distributed systems tracing infrastructure (very interesting reading, by the way). Later on, Twitter built its own implementation based on the Dapper paper, called Zipkin, and that's the one we are going to look at.

We will build a simple JAX-RS 2.0 server using the great Apache CXF library. For the client side, we will use the JAX-RS 2.0 client API and, by utilizing Zipkin, trace all the interactions between the client and the server (as well as everything happening on the server side). To make the example a bit more illustrative, we will pretend that the server uses some kind of database to retrieve the data. Our code will be a mix of pure Java and a bit of Scala (the choice of Scala will become clear soon).

One additional dependency in order for Zipkin to work is Apache Zookeeper. It is required for coordination and should be started in advance. Luckily, it is very easy to do:

  • download the release from http://zookeeper.apache.org/releases.html (the current stable version at the moment of writing is 3.4.5)
  • unpack it into zookeeper-3.4.5
  • copy zookeeper-3.4.5/conf/zoo_sample.cfg to zookeeper-3.4.5/conf/zoo.cfg
  • and just start Apache Zookeeper server
    Windows: zookeeper-3.4.5/bin/zkServer.cmd
    Linux: zookeeper-3.4.5/bin/zkServer.sh start

Now back to Zipkin. Zipkin is written in Scala. It is still in active development, and the best way to start off with it is to clone its GitHub repository and build it from source:

git clone https://github.com/twitter/zipkin.git

From an architectural perspective, Zipkin consists of three main components:

  • collector: collects traces across the system
  • query: queries collected traces
  • web: provides web-based UI to show the traces

To run them, the Zipkin guys provide useful scripts in the bin folder, with the only requirement being that JDK 1.7 is installed:

  • bin/collector
  • bin/query
  • bin/web
Let's execute these scripts and ensure that every component has started successfully, with no stack traces on the console (for curious readers, I was not able to make Zipkin work on Windows, so I assume we are running it on a Linux box). By default, the Zipkin web UI is available on port 8080. The default storage for traces is an embedded SQLite engine. Though it works, better storage options (like the awesome Redis) are available.

The preparation is over, let's write some code. We will start with the JAX-RS 2.0 client part, as it's very straightforward (ClientStarter.java):

package com.example.client;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import com.example.zipkin.Zipkin;
import com.example.zipkin.client.ZipkinRequestFilter;
import com.example.zipkin.client.ZipkinResponseFilter;

public class ClientStarter {
  public static void main( final String[] args ) throws Exception { 
    final Client client = ClientBuilder
      .newClient()
      .register( new ZipkinRequestFilter( "People", Zipkin.tracer() ), 1 )
      .register( new ZipkinResponseFilter( "People", Zipkin.tracer() ), 1 );        
                        
    final Response response = client
      .target( "http://localhost:8080/rest/api/people" )
      .request( MediaType.APPLICATION_JSON )
      .get();

    if( response.getStatus() == 200 ) {
      System.out.println( response.readEntity( String.class ) );
    }
        
    response.close();
    client.close();
        
    // Small delay to allow tracer to send the trace over the wire
    Thread.sleep( 1000 );
  }
}

Except for a couple of imports and classes with Zipkin in their names, everything should look simple. So what are those ZipkinRequestFilter and ZipkinResponseFilter for? Zipkin is awesome, but it's not a magical tool. In order to trace any request in a distributed system, some context should be passed along with it. In the REST/HTTP world, that usually means request/response headers. Let's take a look at ZipkinRequestFilter first (ZipkinRequestFilter.scala):

package com.example.zipkin.client

import javax.ws.rs.client.ClientRequestFilter
import javax.ws.rs.ext.Provider
import javax.ws.rs.client.ClientRequestContext
import com.twitter.finagle.http.HttpTracing
import com.twitter.finagle.tracing.Trace
import com.twitter.finagle.tracing.Annotation
import com.twitter.finagle.tracing.TraceId
import com.twitter.finagle.tracing.Tracer

@Provider
class ZipkinRequestFilter( val name: String, val tracer: Tracer ) extends ClientRequestFilter {
  def filter( requestContext: ClientRequestContext ): Unit = {      
    Trace.pushTracerAndSetNextId( tracer, true )
 
    requestContext.getHeaders().add( HttpTracing.Header.TraceId, Trace.id.traceId.toString )
    requestContext.getHeaders().add( HttpTracing.Header.SpanId, Trace.id.spanId.toString )
         
    Trace.id._parentId foreach { id => 
      requestContext.getHeaders().add( HttpTracing.Header.ParentSpanId, id.toString ) 
    }    

    Trace.id.sampled foreach { sampled => 
      requestContext.getHeaders().add( HttpTracing.Header.Sampled, sampled.toString ) 
    }

    requestContext.getHeaders().add( HttpTracing.Header.Flags, Trace.id.flags.toLong.toString )
             
    if( Trace.isActivelyTracing ) {
      Trace.recordRpcname( name,  requestContext.getMethod() )
      Trace.recordBinary( "http.uri", requestContext.getUri().toString()  )
      Trace.record( Annotation.ClientSend() )    
    }
  }
}

A bit of Zipkin internals will make this code super clear. The central part of the Zipkin API is the Trace class. Every time we would like to initiate tracing, we need a Trace Id and a tracer to actually record it. This single line generates a new Trace Id and registers the tracer (internally this data is held in thread-local state).

Trace.pushTracerAndSetNextId( tracer, true )

Traces are hierarchical by nature, and so are Trace Ids: every Trace Id could be a root or part of another trace. In our example, we know for sure that we are the first and as such the root of the trace. Later on, the Trace Id is wrapped into HTTP headers and passed along with the request (we will see on the server side how it is used). The last three lines associate useful information with the trace: the name of our API (People), the HTTP method, the URI and, most importantly, the fact that it's the client sending the request to the server.

Trace.recordRpcname( name,  requestContext.getMethod() )
Trace.recordBinary( "http.uri", requestContext.getUri().toString()  )
Trace.record( Annotation.ClientSend() ) 
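
For reference, the HttpTracing.Header constants used above correspond to the well-known B3 tracing headers, so the outgoing HTTP request carries something along these lines (the values are purely illustrative; the ParentSpanId and Sampled headers are only added when those values are defined):

GET /rest/api/people HTTP/1.1
Host: localhost:8080
Accept: application/json
X-B3-TraceId: 6b221d5bc9e6496c
X-B3-SpanId: 6b221d5bc9e6496c
X-B3-Flags: 0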

The ZipkinResponseFilter does the reverse of ZipkinRequestFilter and extracts the Trace Id from the request headers (ZipkinResponseFilter.scala):

package com.example.zipkin.client

import javax.ws.rs.client.ClientResponseFilter
import javax.ws.rs.client.ClientRequestContext
import javax.ws.rs.client.ClientResponseContext
import javax.ws.rs.ext.Provider
import com.twitter.finagle.tracing.Trace
import com.twitter.finagle.tracing.Annotation
import com.twitter.finagle.tracing.SpanId
import com.twitter.finagle.http.HttpTracing
import com.twitter.finagle.tracing.TraceId
import com.twitter.finagle.tracing.Flags
import com.twitter.finagle.tracing.Tracer

@Provider
class ZipkinResponseFilter( val name: String, val tracer: Tracer ) extends ClientResponseFilter {  
  def filter( requestContext: ClientRequestContext, responseContext: ClientResponseContext ): Unit = {
    val spanId = SpanId.fromString( requestContext.getHeaders().getFirst( HttpTracing.Header.SpanId ).toString() )

    spanId foreach { sid =>
      val traceId = SpanId.fromString( requestContext.getHeaders().getFirst( HttpTracing.Header.TraceId ).toString() )
        
      val parentSpanId = requestContext.getHeaders().getFirst( HttpTracing.Header.ParentSpanId ) match {
        case s: String => SpanId.fromString( s.toString() )
        case _ => None
      }

      val sampled = requestContext.getHeaders().getFirst( HttpTracing.Header.Sampled ) match { 
        case s: String =>  s.toString.toBoolean
        case _ => true
      }
        
      val flags = Flags( requestContext.getHeaders().getFirst( HttpTracing.Header.Flags ).toString.toLong )        
      Trace.setId( TraceId( traceId, parentSpanId, sid, Option( sampled ), flags ) )
    }
      
    if( Trace.isActivelyTracing ) {
      Trace.record( Annotation.ClientRecv() )
    }
  }
}

Strictly speaking, in our example it's not necessary to extract the Trace Id from the request because both filters should be executed by the same thread. But the last line is very important: it marks the end of our trace by recording that the client has received the response.

Trace.record( Annotation.ClientRecv() )

What's left is actually the tracer itself (Zipkin.scala):

package com.example.zipkin

import com.twitter.finagle.stats.DefaultStatsReceiver
import com.twitter.finagle.zipkin.thrift.ZipkinTracer
import com.twitter.finagle.tracing.Trace
import javax.ws.rs.ext.Provider

object Zipkin {
  lazy val tracer = ZipkinTracer.mk( host = "localhost", port = 9410, DefaultStatsReceiver, 1 )
}

If at this point you are confused about what all those traces and spans mean, please look through this documentation page; it will give you a basic understanding of those concepts.

At this point, there is nothing left on the client side and we are good to move to the server side. Our JAX-RS 2.0 server will expose a single endpoint (PeopleRestService.java):

package com.example.server.rs;

import java.util.Arrays;
import java.util.Collection;
import java.util.concurrent.Callable;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

import com.example.model.Person;
import com.example.zipkin.Zipkin;

@Path( "/people" ) 
public class PeopleRestService {
  @Produces( { "application/json" } )
  @GET
  public Collection< Person > getPeople() {
    return Zipkin.invoke( "DB", "FIND ALL", new Callable< Collection< Person > >() {
      @Override
      public Collection< Person > call() throws Exception {
        return Arrays.asList( new Person( "Tom", "Bombdil" ) );
      }   
    } );   
  }
}

As mentioned before, we will simulate access to the database and generate a child trace by using the Zipkin.invoke wrapper (which is very simple, Zipkin.scala):

package com.example.zipkin

import java.util.concurrent.Callable
import com.twitter.finagle.stats.DefaultStatsReceiver
import com.twitter.finagle.tracing.Trace
import com.twitter.finagle.zipkin.thrift.ZipkinTracer
import com.twitter.finagle.tracing.Annotation

object Zipkin {
  lazy val tracer = ZipkinTracer.mk( host = "localhost", port = 9410, DefaultStatsReceiver, 1 )
    
  def invoke[ R ]( service: String, method: String, callable: Callable[ R ] ): R = Trace.unwind {
    Trace.pushTracerAndSetNextId( tracer, false )      
      
    Trace.recordRpcname( service, method );
    Trace.record( new Annotation.ClientSend() );
          
    try {
      callable.call()
    } finally {
      Trace.record( new Annotation.ClientRecv() );
    }
  }   
}

As we can see, in this case the server itself becomes a client for some other service (database).

The last and most important part of the server is intercepting all HTTP requests and extracting the Trace Id from them, so it will be possible to associate more data with the trace (annotate the trace). In Apache CXF it's very easy to do by providing your own invoker (ZipkinTracingInvoker.scala):

package com.example.zipkin.server

import org.apache.cxf.jaxrs.JAXRSInvoker
import com.twitter.finagle.tracing.TraceId
import org.apache.cxf.message.Exchange
import com.twitter.finagle.tracing.Trace
import com.twitter.finagle.tracing.Annotation
import org.apache.cxf.jaxrs.model.OperationResourceInfo
import org.apache.cxf.jaxrs.ext.MessageContextImpl
import com.twitter.finagle.tracing.SpanId
import com.twitter.finagle.http.HttpTracing
import com.twitter.finagle.tracing.Flags
import scala.collection.JavaConversions._
import com.twitter.finagle.tracing.Tracer
import javax.inject.Inject

class ZipkinTracingInvoker extends JAXRSInvoker {
  @Inject val tracer: Tracer = null
  
  def trace[ R ]( exchange: Exchange )( block: => R ): R = {
    val context = new MessageContextImpl( exchange.getInMessage() )
    Trace.pushTracer( tracer )
        
    val id = Option( exchange.get( classOf[ OperationResourceInfo ] ) ) map { ori =>
      context.getHttpHeaders().getRequestHeader( HttpTracing.Header.SpanId ).toList match {
        case x :: xs => SpanId.fromString( x ) map { sid => 
          val traceId = context.getHttpHeaders().getRequestHeader( HttpTracing.Header.TraceId ).toList match {
            case x :: xs => SpanId.fromString( x )
            case _ => None
          }
          
          val parentSpanId = context.getHttpHeaders().getRequestHeader( HttpTracing.Header.ParentSpanId ).toList match {
            case x :: xs => SpanId.fromString( x )
            case _ => None
          }
  
          val sampled = context.getHttpHeaders().getRequestHeader( HttpTracing.Header.Sampled ).toList match { 
            case x :: xs =>  x.toBoolean
            case _ => true
          }
                    
          val flags = context.getHttpHeaders().getRequestHeader( HttpTracing.Header.Flags ).toList match {
            case x :: xs =>  Flags( x.toLong )
            case _ => Flags()
          }
         
          val id = TraceId( traceId, parentSpanId, sid, Option( sampled ), flags )                     
          Trace.setId( id )
        
          if( Trace.isActivelyTracing ) {
            Trace.recordRpcname( context.getHttpServletRequest().getProtocol(), ori.getHttpMethod() )
            Trace.record( Annotation.ServerRecv() )
          }
        
          id
        }           
          
        case _ => None
      }
    }
    
    val result = block
    
    if( Trace.isActivelyTracing ) {
      id map { id => Trace.record( new Annotation.ServerSend() ) }
    }
    
    result
  }
  
  @Override
  override def invoke( exchange: Exchange, parametersList: AnyRef ): AnyRef = {
    trace( exchange )( super.invoke( exchange, parametersList ) )     
  }
}

Basically, the only thing this code does is extract the Trace Id from the request and associate it with the current thread. Also, please notice that we attach additional data to the trace marking the server's participation.

  Trace.recordRpcname( context.getHttpServletRequest().getProtocol(), ori.getHttpMethod() )
  Trace.record( Annotation.ServerRecv() )
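
To actually use this invoker, it has to be registered with the JAX-RS server. A minimal sketch of one possible way to do that with JAXRSServerFactoryBean is shown below (an assumption on my side; in the real project the invoker is presumably a Spring-managed bean so that the @Inject-ed tracer gets populated):

import java.util.Arrays;

import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;

import com.example.server.rs.PeopleRestService;
import com.example.zipkin.server.ZipkinTracingInvoker;

public class TracingServerConfig {
    public Server jaxRsServer() {
        final JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
        factory.setServiceBeans( Arrays.< Object >asList( new PeopleRestService() ) );
        // Route every resource invocation through the tracing invoker
        factory.setInvoker( new ZipkinTracingInvoker() );
        factory.setAddress( "/api" );
        return factory.create();
    }
}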

To see the tracing live, let's start our server (please notice that sbt should be installed), assuming all Zipkin components and Apache Zookeeper are already up and running:

sbt 'project server' 'run-main com.example.server.ServerStarter'
then the client:
sbt 'project client' 'run-main com.example.client.ClientStarter'
and finally open the Zipkin web UI at http://localhost:8080. We should see our traces there (how many depends on how many times you have run the client).

Alternatively, we can build and run fat JARs using the sbt-assembly plugin:

sbt assembly
java -jar server/target/zipkin-jaxrs-2.0-server-assembly-0.0.1-SNAPSHOT.jar 
java -jar client/target/zipkin-jaxrs-2.0-client-assembly-0.0.1-SNAPSHOT.jar 

If we click on any particular trace, more detailed information is shown, closely resembling the client <-> server <-> database chain.

Even more details are shown when we click on a particular element in the tree.

Lastly, the bonus part is the components/services dependency graph.

As we can see, all the data associated with the trace is here and follows a hierarchical structure. The root and child traces are detected and shown, as well as timelines for the client send/receive and server receive/send chains. Our example is quite naive and simple, but even so it demonstrates how powerful and useful distributed system tracing is. Thanks to the Zipkin guys.

The complete source code is available on GitHub.

Monday, January 13, 2014

Your build tool is your good friend: what sbt can do for Java developer

I think that for developers, picking the right build tool is a very important choice. For years I have been sticking to Apache Maven and, honestly, it does the job well enough; even nowadays it's a good tool to use. But I always felt it could be done much better ... and then Gradle came along ...

Despite the many hours I spent getting accustomed to the Gradle way of doing things, I finally gave up and switched back to Apache Maven. The reason: I didn't feel comfortable with it, mostly because of the Groovy DSL. Anyway, I think Gradle is a great, powerful and extensible build tool which is able to perform any task your build process needs.

But engaging more and more with Scala, I quickly discovered sbt. Though sbt is an acronym for "simple build tool", my first impression was quite the contrary: I found it complicated and hard to understand. For some reason I liked it nonetheless, and after spending more time reading the documentation (which is getting better and better) and running many experiments, I would finally say the choice is made. In this post I would like to show a couple of great things sbt can do to make a Java developer's life easier (some knowledge of Scala would be very handy, but it's not required).

Before moving on to a real example, a couple of facts about sbt. It uses Scala as the language for the build definition and requires a launcher, which can be downloaded from here (the version we'll be using is 0.13.1). There are several ways to describe a build in sbt; the one this post demonstrates is using Build.scala with a single project.

Our example is a simple Spring console application with a couple of JUnit test cases: just enough to see how a build with external dependencies is structured and how tests are run. The application contains only two classes:

package com.example;

import org.springframework.stereotype.Service;

@Service
public class SimpleService {
    public String getResult() {
        return "Result";
    }
}
and
package com.example;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.GenericApplicationContext;

public class Starter {
    @Configuration
    @ComponentScan( basePackageClasses = SimpleService.class )
    public static class AppConfig {  
    }
 
    public static void main( String[] args ) {
        try( GenericApplicationContext context = new AnnotationConfigApplicationContext( AppConfig.class ) ) {
            final SimpleService service = context.getBean( SimpleService.class );
            System.out.println( service.getResult() );
        }
    }
}

Now, let's see what the sbt build looks like. By convention, Build.scala should be located in the project subfolder. Additionally, a build.properties file with the desired sbt version and a plugins.sbt file with external plugins should be present (we will use the sbteclipse plugin to generate Eclipse project files). We will start with build.properties, which contains only one line:

sbt.version=0.13.1

and continue with plugins.sbt, which in our case is also just one line:

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.4.0")

Lastly, let's move to the heart of our build: Build.scala. There are two parts in it. The first is the common settings for all projects in our build (useful for multi-project builds, though we have only one project now), and here is the snippet for this part:

import sbt._
import Keys._
import com.typesafe.sbteclipse.core.EclipsePlugin._

object ProjectBuild extends Build {
  override val settings = super.settings ++ Seq(
    organization := "com.example",    
    name := "sbt-java",    
    version := "0.0.1-SNAPSHOT",    
    
    scalaVersion := "2.10.3",
    scalacOptions ++= Seq( "-encoding", "UTF-8", "-target:jvm-1.7" ),    
    javacOptions ++= Seq( "-encoding", "UTF-8", "-source", "1.7", "-target", "1.7" ),        
    outputStrategy := Some( StdoutOutput ),
    compileOrder := CompileOrder.JavaThenScala,
    
    resolvers ++= Seq( 
      Resolver.mavenLocal, 
      Resolver.sonatypeRepo( "releases" ), 
      Resolver.typesafeRepo( "releases" )
    ),        

    crossPaths := false,            
    fork in run := true,
    connectInput in run := true,
    
    EclipseKeys.executionEnvironment := Some(EclipseExecutionEnvironment.JavaSE17)
  )
}

The build above looks quite clean and understandable: resolvers is a direct analogue of Apache Maven repositories, and EclipseKeys.executionEnvironment customizes the execution environment (Java SE 7) for the generated Eclipse project. All these keys are very well documented.

The second part is much smaller and defines our main project in terms of dependencies and the main class:

lazy val main = Project( 
  id = "sbt-java",  
  base = file("."), 
  settings = Project.defaultSettings ++ Seq(              
    mainClass := Some( "com.example.Starter" ),
     
    initialCommands in console += """
      import com.example._
      import com.example.Starter._
      import org.springframework.context.annotation._
    """,
       
    libraryDependencies ++= Seq(
      "org.springframework" % "spring-context" % "4.0.0.RELEASE",
      "org.springframework" % "spring-beans" % "4.0.0.RELEASE",
      "org.springframework" % "spring-test" % "4.0.0.RELEASE" % "test",
      "com.novocode" % "junit-interface" % "0.10" % "test",
      "junit" % "junit" % "4.11" % "test"
    )
  ) 
)

The initialCommands setting requires a bit of explanation: sbt is able to run a Scala console (REPL), and this setting adds default import statements so we can use our classes immediately. The dependency on junit-interface allows sbt to run JUnit test cases, and that's the first thing we'll do: add some tests. Before creating the actual tests, we will start sbt and ask it to run the test cases on every code change, just like that:

sbt ~test

While sbt is running, we will add a test case:

package com.example;

import static org.hamcrest.core.IsEqual.equalTo;
import static org.junit.Assert.assertThat;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.support.GenericApplicationContext;

import com.example.Starter.AppConfig;

public class SimpleServiceTestCase {
    private GenericApplicationContext context;
    private SimpleService service;
 
    @Before
    public void setUp() {
        context = new AnnotationConfigApplicationContext( AppConfig.class );
        service = context.getBean( SimpleService.class );
    }
 
    @After
    public void tearDown() {
        context.close();
    }

    @Test
    public void testSampleTest() { 
        assertThat( service.getResult(), equalTo( "Result" ) );
    } 
}

In the console we should see that sbt picked up the change automatically and ran all the test cases. Sadly, because of this issue, which is already fixed and should be available in the next release of junit-interface, we cannot use the @RunWith and @ContextConfiguration annotations to run Spring test cases yet.
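
Just for reference, once that fix is released, a Spring-driven test case could look along these lines (a sketch only - it will not run with the junit-interface version used here):

package com.example;

import static org.hamcrest.core.IsEqual.equalTo;
import static org.junit.Assert.assertThat;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import com.example.Starter.AppConfig;

@RunWith( SpringJUnit4ClassRunner.class )
@ContextConfiguration( classes = AppConfig.class )
public class SimpleServiceSpringTestCase {
    @Autowired private SimpleService service;

    @Test
    public void testSampleTest() {
        assertThat( service.getResult(), equalTo( "Result" ) );
    }
}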

For TDD practitioners it's an awesome feature to have. The next terrific feature we are going to look at is the Scala console (REPL), which gives us the ability to play with the application without actually running it. It can be invoked by typing:

sbt console

and observing the Scala prompt in the terminal (the imports from initialCommands are automatically included).

At this moment the playground is established and we can do a lot of very interesting things, for example create a context, get beans and call any methods on them.

sbt takes care of the classpath, so all your classes and external dependencies are available for use. I found this way of exploring things much faster than using a debugger or other techniques.

At the moment there is no good support for sbt in Eclipse, but it's very easy to generate Eclipse project files by using the sbteclipse plugin we touched on before:

sbt eclipse

Awesome! Not to mention the other great plugins which are kindly listed here and the ability to import Apache Maven POM files using externalPom(), which really simplifies migration. As a conclusion from my side, if you are looking for a better, modern, extensible build tool for your project, please take a look at sbt. It's a great piece of software built on top of an awesome, concise language.

The complete project is available on GitHub.

Saturday, November 30, 2013

Java WebSockets (JSR-356) on Jetty 9.1

Jetty 9.1 has finally been released, bringing Java WebSockets (JSR-356) to non-EE environments. It's awesome news, and today's post will be about using this great new API along with the Spring Framework.

JSR-356 defines a concise, annotation-based model that allows modern Java web applications to easily create bidirectional communication channels using the WebSockets API. It covers not only the server side but the client side as well, making this API really simple to use everywhere.

Let's get started! Our goal is to build a WebSockets server which accepts messages from clients and broadcasts them to all other currently connected clients. To begin with, let's define the message format that the server and clients will exchange, as this simple Message class. We could limit ourselves to something like a String, but I would like to introduce you to the power of another new API - the Java API for JSON Processing (JSR-353).

package com.example.services;

public class Message {
    private String username;
    private String message;
 
    public Message() {
    }
 
    public Message( final String username, final String message ) {
        this.username = username;
        this.message = message;
    }
 
    public String getMessage() {
        return message;
    }
 
    public String getUsername() {
        return username;
    }

    public void setMessage( final String message ) {
        this.message = message;
    }
 
    public void setUsername( final String username ) {
        this.username = username;
    }
}

To separate the declarations related to the server and the client, JSR-356 defines two basic annotations: @ServerEndpoint and @ClientEndpoint, respectively. Our client endpoint, let's call it BroadcastClientEndpoint, will simply listen for messages the server sends:

package com.example.services;

import java.io.IOException;
import java.util.logging.Logger;

import javax.websocket.ClientEndpoint;
import javax.websocket.EncodeException;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;

@ClientEndpoint
public class BroadcastClientEndpoint {
    private static final Logger log = Logger.getLogger( 
        BroadcastClientEndpoint.class.getName() );
 
    @OnOpen
    public void onOpen( final Session session ) throws IOException, EncodeException  {
        session.getBasicRemote().sendObject( new Message( "Client", "Hello!" ) );
    }

    @OnMessage
    public void onMessage( final Message message ) {
        log.info( String.format( "Received message '%s' from '%s'",
            message.getMessage(), message.getUsername() ) );
    }
}

That's literally it! A very clean, self-explanatory piece of code: @OnOpen is called when the client connects to the server and @OnMessage is called every time the server sends a message to the client. Yes, it's very simple, but there is a caveat: a JSR-356 implementation can handle simple objects out of the box but not complex ones like Message. To manage that, JSR-356 introduces the concept of encoders and decoders.

We all love JSON, so why don't we define our own JSON encoder and decoder? It's an easy task which the Java API for JSON Processing (JSR-353) can handle for us. To create an encoder, you only need to implement Encoder.Text< Message > and basically serialize your object to a string, in our case a JSON string, using JsonObjectBuilder.

package com.example.services;

import javax.json.Json;
import javax.websocket.EncodeException;
import javax.websocket.Encoder;
import javax.websocket.EndpointConfig;

public class Message {
    public static class MessageEncoder implements Encoder.Text< Message > {
        @Override
        public void init( final EndpointConfig config ) {
        }
  
        @Override
        public String encode( final Message message ) throws EncodeException {
            return Json.createObjectBuilder()
                .add( "username", message.getUsername() )
                .add( "message", message.getMessage() )
                .build()
                .toString();
        }
  
        @Override
        public void destroy() {
        }
    }
}

For the decoder part everything looks very similar: we have to implement Decoder.Text< Message > and deserialize our object from a string, this time using JsonReader.

package com.example.services;

import java.io.StringReader;
import java.util.Collections;

import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;
import javax.json.JsonReaderFactory;
import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EndpointConfig;

public class Message {
    public static class MessageDecoder implements Decoder.Text< Message > {
        private JsonReaderFactory factory = Json.createReaderFactory( Collections.< String, Object >emptyMap() );
  
        @Override
        public void init( final EndpointConfig config ) {
        }
  
        @Override
        public Message decode( final String str ) throws DecodeException {
            final Message message = new Message();
   
            try( final JsonReader reader = factory.createReader( new StringReader( str ) ) ) {
                final JsonObject json = reader.readObject();
                message.setUsername( json.getString( "username" ) );
                message.setMessage( json.getString( "message" ) );
            }
   
            return message;
        }
  
        @Override
        public boolean willDecode( final String str ) {
            return true;
        }
  
        @Override
        public void destroy() {
        }
    }
}
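
Before wiring them into the endpoints, we can sanity-check the encoder and decoder with a tiny standalone round trip (an illustrative snippet only, not part of the project):

package com.example.ws;

import com.example.services.Message;
import com.example.services.Message.MessageDecoder;
import com.example.services.Message.MessageEncoder;

public class MessageCodecExample {
    public static void main( final String[] args ) throws Exception {
        // Encode a Message into its JSON representation
        final String json = new MessageEncoder().encode( new Message( "client", "Hello!" ) );
        System.out.println( json ); // {"username":"client","message":"Hello!"}

        // ... and read it back
        final Message message = new MessageDecoder().decode( json );
        System.out.println( message.getUsername() + ": " + message.getMessage() );
    }
}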

And as a final step, we need to tell the client (and the server - they share the same decoders and encoders) that we have an encoder and a decoder for our messages. The easiest way to do that is to declare them as part of the @ServerEndpoint and @ClientEndpoint annotations.


import com.example.services.Message.MessageDecoder;
import com.example.services.Message.MessageEncoder;

@ClientEndpoint( encoders = { MessageEncoder.class }, decoders = { MessageDecoder.class } )
public class BroadcastClientEndpoint {
}

To make the client example complete, we need some way to connect to the server using BroadcastClientEndpoint and exchange messages. The ClientStarter class completes the picture:

package com.example.ws;

import java.net.URI;
import java.util.UUID;

import javax.websocket.ContainerProvider;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

import org.eclipse.jetty.websocket.jsr356.ClientContainer;

import com.example.services.BroadcastClientEndpoint;
import com.example.services.Message;

public class ClientStarter {
    public static void main( final String[] args ) throws Exception {
        final String client = UUID.randomUUID().toString().substring( 0, 8 );
  
        final WebSocketContainer container = ContainerProvider.getWebSocketContainer();    
        final String uri = "ws://localhost:8080/broadcast";  
  
        try( Session session = container.connectToServer( BroadcastClientEndpoint.class, URI.create( uri ) ) ) {
            for( int i = 1; i <= 10; ++i ) {
                session.getBasicRemote().sendObject( new Message( client, "Message #" + i ) );
                Thread.sleep( 1000 );
            }
        }
  
        // Application doesn't exit if container's threads are still running
        ( ( ClientContainer )container ).stop();
    }
}

Just a couple of comments on what this code does: we are connecting to the WebSockets endpoint at ws://localhost:8080/broadcast, randomly picking a client name (from a UUID) and generating 10 messages, each with a 1 second delay (just to be sure we have time to receive them all back).

The server part doesn't look very different and at this point can be understood without any additional comments (except maybe the fact that the server just broadcasts every message it receives to all connected clients). Important to mention here: a new instance of the server endpoint is created every time a new client connects (that's why the sessions collection is static); it's the default behavior and can be easily changed.

package com.example.services;

import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import javax.websocket.EncodeException;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

import com.example.services.Message.MessageDecoder;
import com.example.services.Message.MessageEncoder;

@ServerEndpoint( 
    value = "/broadcast", 
    encoders = { MessageEncoder.class }, 
    decoders = { MessageDecoder.class } 
) 
public class BroadcastServerEndpoint {
    private static final Set< Session > sessions = 
        Collections.synchronizedSet( new HashSet< Session >() ); 
   
    @OnOpen
    public void onOpen( final Session session ) {
        sessions.add( session );
    }

    @OnClose
    public void onClose( final Session session ) {
        sessions.remove( session );
    }

    @OnMessage
    public void onMessage( final Message message, final Session client ) 
            throws IOException, EncodeException {
        for( final Session session: sessions ) {
            session.getBasicRemote().sendObject( message );
        }
    }
}

In order for this endpoint to be available for connections, we should start the WebSockets container and register this endpoint in it. As always, Jetty 9.1 runs in embedded mode effortlessly:

package com.example.ws;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jetty.websocket.jsr356.server.deploy.WebSocketServerContainerInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class ServerStarter  {
    public static void main( String[] args ) throws Exception {
        Server server = new Server( 8080 );
        
        // Create the 'root' Spring application context
        final ServletHolder servletHolder = new ServletHolder( new DefaultServlet() );
        final ServletContextHandler context = new ServletContextHandler();

        context.setContextPath( "/" );
        context.addServlet( servletHolder, "/*" );
        context.addEventListener( new ContextLoaderListener() );   
        context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );
   
        server.setHandler( context );
        WebSocketServerContainerInitializer.configureContext( context );        
        
        server.start();
        server.join(); 
    }
}

The most important part of the snippet above is WebSocketServerContainerInitializer.configureContext: it actually creates the instance of the WebSockets container. Because we haven't added any endpoints yet, the container basically sits there and does nothing. The Spring Framework and the AppConfig configuration class will do this last bit of wiring for us.

package com.example.config;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import javax.websocket.DeploymentException;
import javax.websocket.server.ServerContainer;
import javax.websocket.server.ServerEndpoint;
import javax.websocket.server.ServerEndpointConfig;

import org.eclipse.jetty.websocket.jsr356.server.AnnotatedServerEndpointConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.context.WebApplicationContext;

import com.example.services.BroadcastServerEndpoint;

@Configuration
public class AppConfig  {
    @Inject private WebApplicationContext context;
    private ServerContainer container;
 
    public class SpringServerEndpointConfigurator extends ServerEndpointConfig.Configurator {
        @Override
        public < T > T getEndpointInstance( Class< T > endpointClass ) 
                throws InstantiationException {
            return context.getAutowireCapableBeanFactory().createBean( endpointClass );   
        }
    }
 
    @Bean
    public ServerEndpointConfig.Configurator configurator() {
        return new SpringServerEndpointConfigurator();
    }
 
    @PostConstruct
    public void init() throws DeploymentException {
        container = ( ServerContainer )context.getServletContext().
            getAttribute( javax.websocket.server.ServerContainer.class.getName() );
  
        container.addEndpoint( 
            new AnnotatedServerEndpointConfig( 
                BroadcastServerEndpoint.class, 
                BroadcastServerEndpoint.class.getAnnotation( ServerEndpoint.class )  
            ) {
                @Override
                public Configurator getConfigurator() {
                    return configurator();
                }
            }
        );
    }  
}

As we mentioned earlier, by default the container will create a new instance of the server endpoint every time a new client connects, and it does so by calling the constructor, in our case BroadcastServerEndpoint.class.newInstance(). That might be the desired behavior, but because we are using the Spring Framework and dependency injection, such new objects are basically unmanaged beans. Thanks to the very well thought out (in my opinion) design of JSR-356, it's actually quite easy to provide your own way of creating endpoint instances by implementing ServerEndpointConfig.Configurator. The SpringServerEndpointConfigurator is an example of such an implementation: it creates a new managed bean every time a new endpoint instance is requested (if you want a single instance, you can create a singleton of the endpoint as a bean in AppConfig and return it every time, as sketched below).
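
For completeness, the singleton flavor could look something like the sketch below, assuming it is placed inside AppConfig (an illustration of the idea, not code from the project):

    @Bean
    public BroadcastServerEndpoint broadcastServerEndpoint() {
        return new BroadcastServerEndpoint();
    }

    public class SingletonServerEndpointConfigurator extends ServerEndpointConfig.Configurator {
        @Override
        @SuppressWarnings( "unchecked" )
        public < T > T getEndpointInstance( final Class< T > endpointClass ) 
                throws InstantiationException {
            // Always hand out the same Spring-managed endpoint bean
            return ( T )broadcastServerEndpoint();
        }
    }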

The way we retrieve the WebSockets container is Jetty-specific: from the context attribute named "javax.websocket.server.ServerContainer" (this might change in the future). Once the container is there, we just add a new (managed!) endpoint by providing our own ServerEndpointConfig (based on the AnnotatedServerEndpointConfig which Jetty kindly provides already).

To build and run our server and clients, we just need to do this:

mvn clean package
java -jar target/jetty-web-sockets-jsr356-0.0.1-SNAPSHOT-server.jar // run server
java -jar target/jetty-web-sockets-jsr356-0.0.1-SNAPSHOT-client.jar // run yet another client

As an example, by running the server and a couple of clients (I ran 4 of them: '392f68ef', '8e3a869d', 'ca3a06d0', '6cb82119'), you can see from the console output that each client receives all the messages from all the other clients (including its own messages):

Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Hello!' from 'Client'
Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #1' from '392f68ef'
Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #2' from '8e3a869d'
Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from 'ca3a06d0'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #4' from '6cb82119'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #2' from '392f68ef'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #3' from '8e3a869d'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from 'ca3a06d0'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #5' from '6cb82119'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #3' from '392f68ef'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #4' from '8e3a869d'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from 'ca3a06d0'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #6' from '6cb82119'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #4' from '392f68ef'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #5' from '8e3a869d'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from 'ca3a06d0'
Nov 29, 2013 9:21:33 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from '6cb82119'
Nov 29, 2013 9:21:33 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #5' from '392f68ef'
Nov 29, 2013 9:21:33 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #6' from '8e3a869d'
Nov 29, 2013 9:21:34 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from '6cb82119'
Nov 29, 2013 9:21:34 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #6' from '392f68ef'
Nov 29, 2013 9:21:34 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from '8e3a869d'
Nov 29, 2013 9:21:35 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from '6cb82119'
Nov 29, 2013 9:21:35 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from '392f68ef'
Nov 29, 2013 9:21:35 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from '8e3a869d'
Nov 29, 2013 9:21:36 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from '6cb82119'
Nov 29, 2013 9:21:36 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from '392f68ef'
Nov 29, 2013 9:21:36 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from '8e3a869d'
Nov 29, 2013 9:21:37 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from '392f68ef'
Nov 29, 2013 9:21:37 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from '8e3a869d'
Nov 29, 2013 9:21:38 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from '392f68ef'
2013-11-29 21:21:39.260:INFO:oejwc.WebSocketClient:main: Stopped org.eclipse.jetty.websocket.client.WebSocketClient@3af5f6dc

Awesome! I hope this introductory blog post shows how easy it has become to use modern web communication protocols in Java, thanks to Java WebSockets (JSR-356), Java API for JSON Processing (JSR-353) and great projects such as Jetty 9.1!

As always, the complete project is available on GitHub.

Tuesday, October 29, 2013

Book review: "Instant Effective Caching with Ehcache" by Daniel Wind

Recently, I have had a chance to review the book "Instant Effective Caching with Ehcache" by Daniel Wind. Honestly, I do think the book justifies its title very well: constructed as a set of various recipes, it guides you step-by-step through typical scenarios, providing brief explanations along with small code snippets, clear enough to serve as a starting point (most recipes also have references to relevant sections of the EhCache documentation).

If you have ever worked with EhCache, many recipes will look very familiar to you. But for a newbie or even an intermediate developer, it might be very interesting to see:

More advanced examples include:

As a final note: a short but useful book, not a comprehensive guide to the EhCache world but rather a quick reference. Thanks to Daniel Wind for gathering all these recipes together.

Monday, October 28, 2013

Coordination and service discovery with Apache Zookeeper

Service-oriented design has proven to be a successful solution for a huge variety of different distributed systems. When used properly, it has a lot of benefits. But as the number of services grows, it becomes more difficult to understand what is deployed and where. And because we are building reliable and highly available systems, there is yet another question to ask: how many instances of each service are currently available?

In today's post I would like to introduce you to the world of Apache ZooKeeper - a highly reliable distributed coordination service. The number of features ZooKeeper provides is just astonishing, so let us start with a very simple problem to solve: we have a stateless JAX-RS service which we deploy across as many JVMs/hosts as we want. The clients of this service should be able to auto-discover all available instances and just pick one of them (or all) to perform a REST call.

Sounds like a very interesting challenge. There could be many ways to solve it, but let me choose Apache ZooKeeper for that. The first step is to download Apache ZooKeeper (the current stable version at the moment of writing is 3.4.5) and unpack it. Next, we need to create a configuration file. The simplest way to do that is by copying conf/zoo_sample.cfg to conf/zoo.cfg. To run it, just execute:

Windows: bin/zkServer.cmd
Linux: bin/zkServer.sh
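
For reference, the copied zoo.cfg is just the standalone sample configuration; it contains roughly the following defaults (taken from zoo_sample.cfg, so the exact contents may differ slightly between releases):

# basic time unit in milliseconds
tickTime=2000
# ticks allowed for followers to connect and sync to the leader
initLimit=10
# ticks allowed between a request and an acknowledgement
syncLimit=5
# directory where snapshots (and, by default, transaction logs) are stored
dataDir=/tmp/zookeeper
# port on which clients connect
clientPort=2181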

Excellent, now Apache ZooKeeper is up and running, listening on port 2181 (the default). Apache ZooKeeper itself is worth a book to explain its capabilities, but a brief overview gives a high-level picture, enough to get us started.

Apache ZooKeeper has a powerful Java API, but it's quite low-level and not an easy one to use. That's why Netflix developed and open-sourced a great library called Curator to wrap the native Apache ZooKeeper API into a more convenient and easy-to-integrate framework (it's now an Apache incubator project).

Now, let's write some code! We are developing a simple JAX-RS 2.0 service which returns a list of people. As it will be stateless, we are able to run many instances within a single host or across multiple hosts, depending on system load for example. The awesome Apache CXF and Spring Framework will back our implementation. Below is the code snippet for PeopleRestService:

package com.example.rs;

import java.util.Arrays;
import java.util.Collection;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

import com.example.model.Person;

@Path( PeopleRestService.PEOPLE_PATH ) 
public class PeopleRestService {
    public static final String PEOPLE_PATH = "/people";
 
    @PostConstruct
    public void init() throws Exception {
    }
 
    @Produces( { MediaType.APPLICATION_JSON } )
    @GET
    public Collection< Person > getPeople( @QueryParam( "page") @DefaultValue( "1" ) final int page ) {
        return Arrays.asList(
            new Person( "Tom", "Bombadil" ),
            new Person( "Jim", "Tommyknockers" )
        );
    }
}

A very basic and naive implementation. The init method is intentionally empty; it will become very useful quite soon. Also, let us assume that every JAX-RS 2.0 service we're developing supports some notion of versioning; the class RestServiceDetails serves this purpose:

package com.example.config;

import org.codehaus.jackson.map.annotate.JsonRootName;

@JsonRootName( "serviceDetails" )
public class RestServiceDetails {
    private String version;
 
    public RestServiceDetails() {
    }
    
    public RestServiceDetails( final String version ) {
        this.version = version;
    }
    
    public void setVersion( final String version ) {
        this.version = version;
    }
    
    public String getVersion() {
        return version;
    }    
}

Our Spring configuration class AppConfig creates an instance of the JAX-RS 2.0 server with the people REST service, which will be hosted by the Jetty container:

package com.example.config;

import java.util.Arrays;

import javax.ws.rs.ext.RuntimeDelegate;

import org.apache.cxf.bus.spring.SpringBus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

import com.example.rs.JaxRsApiApplication;
import com.example.rs.PeopleRestService;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

@Configuration
public class AppConfig {
    public static final String SERVER_PORT = "server.port";
    public static final String SERVER_HOST = "server.host";
    public static final String CONTEXT_PATH = "rest";
 
    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }
 
    @Bean @DependsOn( "cxf" )
    public Server jaxRsServer() {
        JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance().createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) );
        factory.setAddress( factory.getAddress() );
        factory.setProviders( Arrays.< Object >asList( jsonProvider() ) );
        return factory.create();
    } 

    @Bean 
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }
 
    @Bean 
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }
 
    @Bean
    public JacksonJsonProvider jsonProvider() {
        return new JacksonJsonProvider();
    } 
}

And here is the class ServerStarter which runs the embedded Jetty server. As we would like to host many such servers per host, the port shouldn't be hard-coded but rather provided as an argument:

package com.example;

import org.apache.cxf.transport.servlet.CXFServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class ServerStarter {
    public static void main( final String[] args ) throws Exception {
        if( args.length != 1 ) {
            System.out.println( "Please provide port number" );
            return;
        }
  
        final int port = Integer.valueOf( args[ 0 ] );
        final Server server = new Server( port );
   
        System.setProperty( AppConfig.SERVER_PORT, Integer.toString( port ) );
        System.setProperty( AppConfig.SERVER_HOST, "localhost" );
  
        // Register and map the dispatcher servlet
        final ServletHolder servletHolder = new ServletHolder( new CXFServlet() );
        final ServletContextHandler context = new ServletContextHandler();   
        context.setContextPath( "/" );
        context.addServlet( servletHolder, "/" + AppConfig.CONTEXT_PATH + "/*" );  
        context.addEventListener( new ContextLoaderListener() );
   
        context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );
      
        server.setHandler( context );
        server.start();
        server.join(); 
    }
}

Nice, at this moment the boring part is over. But where do Apache ZooKeeper and service discovery fit into this picture? Here is the answer: whenever a new PeopleRestService instance is deployed, it publishes (or registers) itself in the Apache ZooKeeper registry, including the URL it's accessible at and the service version it hosts. The clients can query Apache ZooKeeper in order to get the list of all available services and call them. The only thing services and their clients need to know is where Apache ZooKeeper is running. As I am deploying everything on my local machine, my instance is on localhost. Let's add this constant to the AppConfig class:

private static final String ZK_HOST = "localhost";

Every client maintains a persistent connection to the Apache ZooKeeper server. Whenever a client dies, the connection goes down as well and Apache ZooKeeper can make a decision about the availability of this particular client. To connect to Apache ZooKeeper, we have to create an instance of the CuratorFramework class:

@Bean( initMethod = "start", destroyMethod = "close" )
public CuratorFramework curator() {
    return CuratorFrameworkFactory.newClient( ZK_HOST, new ExponentialBackoffRetry( 1000, 3 ) );
}

The next step is to create an instance of the ServiceDiscovery class, which will allow us to publish service information for discovery into Apache ZooKeeper using the just-created CuratorFramework instance (we also would like to submit RestServiceDetails as additional metadata along with every service registration):

@Bean( initMethod = "start", destroyMethod = "close" )
public ServiceDiscovery< RestServiceDetails > discovery() {
    JsonInstanceSerializer< RestServiceDetails > serializer = 
        new JsonInstanceSerializer< RestServiceDetails >( RestServiceDetails.class );

    return ServiceDiscoveryBuilder.builder( RestServiceDetails.class )
        .client( curator() )
        .basePath( "services" )
        .serializer( serializer )
        .build();        
}

Internally, Apache ZooKeeper stores all its data as a hierarchical namespace, much like a standard file system does. The services path will be the base (root) path for all our services. Every service also needs to figure out which host and port it's running on. We can do that by building a URI specification, which is included in the JaxRsApiApplication class (the {port} and {scheme} placeholders will be resolved by the Curator framework at the moment of service registration):

package com.example.rs;

import javax.inject.Inject;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

import org.springframework.core.env.Environment;

import com.example.config.AppConfig;
import com.netflix.curator.x.discovery.UriSpec;

@ApplicationPath( JaxRsApiApplication.APPLICATION_PATH )
public class JaxRsApiApplication extends Application {
    public static final String APPLICATION_PATH = "api";
 
    @Inject Environment environment;

    public UriSpec getUriSpec( final String servicePath ) {
        return new UriSpec( 
            String.format( "{scheme}://%s:{port}/%s/%s%s",
                environment.getProperty( AppConfig.SERVER_HOST ),
                AppConfig.CONTEXT_PATH,
                APPLICATION_PATH, 
                servicePath
            ) );   
    }
}

The last piece of the puzzle is the registration of PeopleRestService within service discovery, and this is where the init method comes into play:

@Inject private JaxRsApiApplication application;
@Inject private ServiceDiscovery< RestServiceDetails > discovery; 
@Inject private Environment environment;

@PostConstruct
public void init() throws Exception {
    final ServiceInstance< RestServiceDetails > instance = 
        ServiceInstance.< RestServiceDetails >builder()
            .name( "people" )
            .payload( new RestServiceDetails( "1.0" ) )
            .port( environment.getProperty( AppConfig.SERVER_PORT, Integer.class ) )
            .uriSpec( application.getUriSpec( PEOPLE_PATH ) )
            .build();
  
    discovery.registerService( instance );
}

Here is what we have done:

  • created a service instance with the name people (the complete path would be /services/people)
  • set the port to the actual value this instance is running on
  • set the URI specification for this specific REST service endpoint
  • additionally, attached a payload (RestServiceDetails) with the service version (though it's not used here, it demonstrates the ability to pass additional details)

Every new service instance we run will publish itself under the /services/people path in Apache ZooKeeper. To see everything in action, let us build and run a couple of people service instances.

mvn clean package
java -jar jax-rs-2.0-service\target\jax-rs-2.0-service-0.0.1-SNAPSHOT.one-jar.jar 8080
java -jar jax-rs-2.0-service\target\jax-rs-2.0-service-0.0.1-SNAPSHOT.one-jar.jar 8081

From the Apache ZooKeeper side it might look like this (please note that the instance UUIDs will be different):
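
Schematically, the znode hierarchy created by Curator's service discovery looks roughly like this (an illustration, not actual tool output; the instance ids are UUIDs generated by Curator, and each instance znode holds the JSON-serialized ServiceInstance data):

/services
    /people
        /<instance-uuid-1>
        /<instance-uuid-2>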

Having two service instances up and running, let's try to consume them. From the service client's perspective, the first step is exactly the same: instances of CuratorFramework and ServiceDiscovery should be created (the configuration class ClientConfig declares those beans, as sketched right after the listing below) in the way we have done it above, no changes required. But instead of registering a service, we will query the available ones:

package com.example.client;

import java.util.Collection;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import com.example.config.RestServiceDetails;
import com.netflix.curator.x.discovery.ServiceDiscovery;
import com.netflix.curator.x.discovery.ServiceInstance;

public class ClientStarter {
    public static void main( final String[] args ) throws Exception {
        try( final AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext( ClientConfig.class ) ) { 
            @SuppressWarnings("unchecked")
            final ServiceDiscovery< RestServiceDetails > discovery = 
                context.getBean( ServiceDiscovery.class );
            final Client client = ClientBuilder.newClient();
      
            final Collection< ServiceInstance< RestServiceDetails > > services = 
                discovery.queryForInstances( "people" );
            for( final ServiceInstance< RestServiceDetails > service: services ) {
                final String uri = service.buildUriSpec();
       
                final Response response = client
                    .target( uri )
                    .request( MediaType.APPLICATION_JSON )
                    .get();
       
                System.out.println( uri + ": " + response.readEntity( String.class ) );
                System.out.println( "API version: " + service.getPayload().getVersion() );
       
                response.close();
            }
        }
    }
}
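
For completeness, the ClientConfig mentioned above could look roughly like this: it simply re-declares the CuratorFramework and ServiceDiscovery beans exactly as we did on the server side (a sketch under that assumption; the package name and exact shape here are guesses, see the complete project for the real thing):

package com.example.client;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.example.config.RestServiceDetails;
import com.netflix.curator.framework.CuratorFramework;
import com.netflix.curator.framework.CuratorFrameworkFactory;
import com.netflix.curator.retry.ExponentialBackoffRetry;
import com.netflix.curator.x.discovery.ServiceDiscovery;
import com.netflix.curator.x.discovery.ServiceDiscoveryBuilder;
import com.netflix.curator.x.discovery.details.JsonInstanceSerializer;

@Configuration
public class ClientConfig {
    private static final String ZK_HOST = "localhost";

    @Bean( initMethod = "start", destroyMethod = "close" )
    public CuratorFramework curator() {
        // Same connection settings as on the server side
        return CuratorFrameworkFactory.newClient( ZK_HOST, new ExponentialBackoffRetry( 1000, 3 ) );
    }

    @Bean( initMethod = "start", destroyMethod = "close" )
    public ServiceDiscovery< RestServiceDetails > discovery() {
        // Same base path and serializer as used for registration
        return ServiceDiscoveryBuilder.builder( RestServiceDetails.class )
            .client( curator() )
            .basePath( "services" )
            .serializer( new JsonInstanceSerializer< RestServiceDetails >( RestServiceDetails.class ) )
            .build();
    }
}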

Once the service instances are retrieved, a REST call is made to each of them (using the awesome JAX-RS 2.0 client API) and additionally the service version is requested (as the payload contains an instance of the RestServiceDetails class). Let's build and run our client against the two instances we have deployed previously:

mvn clean package
java -jar jax-rs-2.0-client\target\jax-rs-2.0-client-0.0.1-SNAPSHOT.one-jar.jar

The console output should show two calls to two different endpoints:

http://localhost:8081/rest/api/people: [{"email":null,"firstName":"Tom","lastName":"Bombadil"},{"email":null,"firstName":"Jim","lastName":"Tommyknockers"}]
API version: 1.0

http://localhost:8080/rest/api/people: [{"email":null,"firstName":"Tom","lastName":"Bombadil"},{"email":null,"firstName":"Jim","lastName":"Tommyknockers"}]
API version: 1.0

If we stop one or all instances, they will disappear from the Apache ZooKeeper registry (the registrations are backed by ephemeral znodes tied to each instance's session). The same applies if any instance crashes or becomes unresponsive.
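
If a client needs to react to such changes instead of re-querying the registry every time, Curator also offers a service cache recipe that keeps a local, automatically refreshed view of the registered instances. A rough sketch (not part of the original project; the classes used below live in com.netflix.curator.x.discovery, its details sub-package and com.netflix.curator.framework.state):

// Keep a locally cached, self-updating view of all 'people' instances
final ServiceCache< RestServiceDetails > cache = discovery
    .serviceCacheBuilder()
    .name( "people" )
    .build();
cache.start();

cache.addListener( new ServiceCacheListener() {
    @Override
    public void cacheChanged() {
        // Invoked whenever an instance registers or disappears
        System.out.println( "Known 'people' instances: " + cache.getInstances().size() );
    }

    @Override
    public void stateChanged( final CuratorFramework client, final ConnectionState newState ) {
        // React to ZooKeeper connection state changes if needed
    }
} );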

Excellent! I guess we achieved our goal using such a great and powerful tool as Apache ZooKeeper. Thanks to its developers, as well as to the Curator guys, for making it so easy to use Apache ZooKeeper in your applications. We have just scratched the surface of what is possible to accomplish with Apache ZooKeeper; I strongly encourage everyone to explore its capabilities (distributed locks, caches, counters, queues, ...).

It is worth mentioning another great project built on top of Apache ZooKeeper by LinkedIn, called Norbert. For Eclipse developers, an Eclipse plugin is also available.

All sources are available on GitHub.

Monday, September 30, 2013

Swagger: make developers love working with your REST API

As the JAX-RS API evolves, with version 2.0 released earlier this year under the JSR-339 umbrella, it's becoming even easier to create REST services on the excellent Java platform.

But with great simplicity comes great responsibility: documenting all these APIs so other developers can quickly understand how to use them. Unfortunately, in this area developers are on their own: JSR-339 doesn't help much. For sure, it would be just awesome to generate verbose and easy-to-follow documentation from source code, instead of asking someone to write it during development. Sounds unreal, right? To a certain extent it is, but help comes in the form of Swagger.

Essentially, Swagger does a simple but very powerful thing: with a few additional annotations it generates REST API descriptions (HTTP methods, path / query / form parameters, responses, HTTP error codes, ...) and even provides a simple web UI to play with REST calls to your APIs (not to mention that all this metadata is available over REST as well).

Before digging into implementation details, let's take a quick look at what Swagger is from the API consumer's perspective. Assume you have developed a great REST service to manage people. As a good citizen, this REST service is feature-complete and provides the following functionality:

  • lists all people (GET)
  • looks up person by e-mail (GET)
  • adds new person (POST)
  • updates existing person (PUT)
  • and finally removes person (DELETE)
Here is the same API from Swagger's perspective:

It looks quite pretty. Let's go further and call our REST service from the Swagger UI; here is where this awesome framework really shines. The most complicated use case is adding a new person (POST), so let's look at this one closely.

As you can see in the screenshot above, every piece of the REST service call is there:

  • description of the service
  • relative context path
  • parameters (form / path / query), required or optional
  • HTTP status codes: 201 CREATED and 409 CONFLICT
  • a ready-to-go Try it out! button to call the REST service immediately (with out-of-the-box parameter validation)

To complete the demo part, let me show yet another example, where a REST resource is involved (in our case, it's the simple class Person). Swagger is able to show its properties and a meaningful description, together with the expected response content type(s).

Looks nice! Moving on to the next part, it's all about implementation details. Swagger supports seamless integration with JAX-RS services, with just a couple of additional annotations required on top of the existing ones. Firstly, every single JAX-RS service which is supposed to be documented should be annotated with the @Api annotation, in our case:

@Path( "/people" ) 
@Api( value = "/people", description = "Manage people" )
public class PeopleRestService {
    // ...
}

Next, the same approach applies to REST service operations: every method which is supposed to be documented should be annotated with the @ApiOperation annotation, and optionally with @ApiResponses/@ApiResponse. If it accepts parameters, those should be annotated with the @ApiParam annotation. A couple of examples:

@Produces( { MediaType.APPLICATION_JSON } )
@GET
@ApiOperation( 
    value = "List all people", 
    notes = "List all people using paging", 
    response = Person.class, 
    responseContainer = "List"
)
public Collection< Person > getPeople(  
        @ApiParam( value = "Page to fetch", required = true ) 
        @QueryParam( "page") @DefaultValue( "1" ) final int page ) {
    // ...
}

And another one:

@Produces( { MediaType.APPLICATION_JSON } )
@Path( "/{email}" )
@GET
@ApiOperation( 
    value = "Find person by e-mail", 
    notes = "Find person by e-mail", 
    response = Person.class 
)
@ApiResponses( {
    @ApiResponse( code = 404, message = "Person with such e-mail doesn't exists" )    
} )
public Person getPeople( 
        @ApiParam( value = "E-Mail address to lookup for", required = true ) 
        @PathParam( "email" ) final String email ) {
    // ...
}

REST resource classes (or model classes) require special annotations: @ApiModel and @ApiModelProperty. Here is how our Person class looks:

@ApiModel( value = "Person", description = "Person resource representation" )
public class Person {
    @ApiModelProperty( value = "Person's first name", required = true ) 
    private String email;
    @ApiModelProperty( value = "Person's e-mail address", required = true ) 
    private String firstName;
    @ApiModelProperty( value = "Person's last name", required = true ) 
    private String lastName;

    // ...
}

The last step is to plug Swagger into the JAX-RS application. The example I have developed uses Spring Framework, Apache CXF, Swagger UI and embedded Jetty (the complete project is available on GitHub). Integrating Swagger is a matter of adding a configuration bean (swaggerConfig), one additional JAX-RS service (apiListingResourceJson) and two JAX-RS providers (resourceListingProvider and apiDeclarationProvider).

package com.example.config;

import java.util.Arrays;

import javax.ws.rs.ext.RuntimeDelegate;

import org.apache.cxf.bus.spring.SpringBus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.core.env.Environment;

import com.example.resource.Person;
import com.example.rs.JaxRsApiApplication;
import com.example.rs.PeopleRestService;
import com.example.services.PeopleService;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;
import com.wordnik.swagger.jaxrs.config.BeanConfig;
import com.wordnik.swagger.jaxrs.listing.ApiDeclarationProvider;
import com.wordnik.swagger.jaxrs.listing.ApiListingResourceJSON;
import com.wordnik.swagger.jaxrs.listing.ResourceListingProvider;

@Configuration
public class AppConfig {
    public static final String SERVER_PORT = "server.port";
    public static final String SERVER_HOST = "server.host";
    public static final String CONTEXT_PATH = "context.path";  
 
    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }
 
    @Bean @DependsOn( "cxf" )
    public Server jaxRsServer() {
        JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance().createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService(), apiListingResourceJson() ) );
        factory.setAddress( factory.getAddress() );
        factory.setProviders( Arrays.< Object >asList( jsonProvider(), resourceListingProvider(), apiDeclarationProvider() ) );
        return factory.create();
    }
 
    @Bean @Autowired
    public BeanConfig swaggerConfig( Environment environment ) {
        final BeanConfig config = new BeanConfig();

        config.setVersion( "1.0.0" );
        config.setScan( true );
        config.setResourcePackage( Person.class.getPackage().getName() );
        config.setBasePath( 
            String.format( "http://%s:%s/%s%s",
                environment.getProperty( SERVER_HOST ),
                environment.getProperty( SERVER_PORT ),
                environment.getProperty( CONTEXT_PATH ),
                jaxRsServer().getEndpoint().getEndpointInfo().getAddress() 
            ) 
        );
  
        return config;
    }

    @Bean
    public ApiDeclarationProvider apiDeclarationProvider() {
        return new ApiDeclarationProvider();
    }
 
    @Bean
    public ApiListingResourceJSON apiListingResourceJson() {
        return new ApiListingResourceJSON();
    }
 
    @Bean
    public ResourceListingProvider resourceListingProvider() {
        return new ResourceListingProvider();
    }
 
    @Bean 
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }
 
    @Bean 
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }
   
    // ... 
}

In order to get rid of any hard-coded configuration, all parameters are passed through named properties (SERVER_PORT, SERVER_HOST and CONTEXT_PATH). Swagger exposes an additional REST endpoint to provide the API documentation; in our case it is accessible at http://localhost:8080/rest/api/api-docs. It is used by the Swagger UI, which itself is embedded into the final JAR archive and served by Jetty as a static web resource.
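
Just to illustrate that the generated documentation really is plain REST, the listing can also be fetched programmatically, for example with the JAX-RS 2.0 client API (a throwaway snippet, not part of the project; Client and ClientBuilder come from javax.ws.rs.client, MediaType from javax.ws.rs.core):

final Client client = ClientBuilder.newClient();
final String listing = client
    .target( "http://localhost:8080/rest/api/api-docs" )
    .request( MediaType.APPLICATION_JSON )
    .get( String.class );
System.out.println( listing );
client.close();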

The final piece of the puzzle is to start the embedded Jetty container, which glues all those parts together and is encapsulated in the Starter class:

package com.example;

import org.apache.cxf.transport.servlet.CXFServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.HandlerList;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jetty.util.resource.Resource;
import org.springframework.core.io.ClassPathResource;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class Starter {
    private static final int SERVER_PORT = 8080;
    private static final String CONTEXT_PATH = "rest";
 
    public static void main( final String[] args ) throws Exception {
        Resource.setDefaultUseCaches( false );
  
        final Server server = new Server( SERVER_PORT );  
        System.setProperty( AppConfig.SERVER_PORT, Integer.toString( SERVER_PORT ) );
        System.setProperty( AppConfig.SERVER_HOST, "localhost" );
        System.setProperty( AppConfig.CONTEXT_PATH, CONTEXT_PATH );    

        // Configuring Apache CXF servlet and Spring listener  
        final ServletHolder servletHolder = new ServletHolder( new CXFServlet() );      
        final ServletContextHandler context = new ServletContextHandler();   
        context.setContextPath( "/" );
        context.addServlet( servletHolder, "/" + CONTEXT_PATH + "/*" );     
        context.addEventListener( new ContextLoaderListener() ); 
     
        context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );
   
        // Configuring Swagger as static web resource
        final ServletHolder swaggerHolder = new ServletHolder( new DefaultServlet() );
        final ServletContextHandler swagger = new ServletContextHandler();
        swagger.setContextPath( "/swagger" );
        swagger.addServlet( swaggerHolder, "/*" );
        swagger.setResourceBase( new ClassPathResource( "/webapp" ).getURI().toString() );

        final HandlerList handlers = new HandlerList();
        handlers.addHandler( context );
        handlers.addHandler( swagger );
   
        server.setHandler( handlers );
        server.start();
        server.join(); 
    }
}

A couple of comments to make things a bit clearer: our JAX-RS services will be available under the /rest/* context path, while the Swagger UI is available under the /swagger context path. One important note concerning Resource.setDefaultUseCaches( false ): because we are serving static web content from a JAR file, we have to set this property to false as a workaround for this bug.

Now, let's build and run our JAX-RS application by typing:

mvn clean package
java -jar target/jax-rs-2.0-swagger-0.0.1-SNAPSHOT.jar

In a second, Swagger UI should be available in your browser at: http://localhost:8080/swagger/

As a final note, there is a lot more to say about Swagger, but I hope this simple example shows how to make our REST services self-documented and easily consumable with minimal effort. Many thanks to the Wordnik team for that.

Source code is available on GitHub.