Monday, September 19, 2016

Stop being clueless, just instrument and measure: using metrics to gain insights about your JAX-RS APIs

How often do we, developers, build these shiny REST(ful) APIs (or microservices, joining the hype here) hoping they are just going to work in production? There is an enormous amount of frameworks and toolkits out there which make us very productive during development; however, once things are deployed in production, most of them keep us clueless about what is going on.

Spring Boot is certainly an exception to this rule, and in today's post we are going to talk about using Spring Boot Actuator along with the terrific Dropwizard Metrics library to collect and expose metrics about Apache CXF-based JAX-RS APIs. To keep things even more interesting, we are going to feed the metrics into the amazing Prometheus collector and visualize them using beautiful Grafana dashboards.

With that, let us get started by defining a simple JAX-RS service to manage people, PeopleRestService. We are not going to plug it into any external storage or whatnot, but instead just cheat a bit by relying on the Project Reactor library and introducing random delays while returning a predefined response.

@Path("/people")
public class PeopleRestService {
    private final Random random = new Random();

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Collection<Person> getPeople() {
        // Delay each element by a random amount to simulate real processing
        return Flux
            .just(new Person("", "John", "Smith"), new Person("", "Bob", "Bobinec"))
            .delayElements(Duration.ofMillis(random.nextInt(1000)))
            .collectList()
            .block();
    }
}
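The Person class referenced above is not shown in the snippet; a minimal sketch could look like the following (the field names and constructor argument order are assumptions derived from the usage above):

```java
public class Person {
    private String email;
    private String firstName;
    private String lastName;

    // Jackson needs a default constructor for deserialization
    public Person() {
    }

    public Person(final String email, final String firstName, final String lastName) {
        this.email = email;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public String getEmail() { return email; }
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
}
```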

Because we are going to use Spring Boot and its automagic discovery capabilities, the configuration is going to look rather trivial. We have already talked about using Spring Boot along with Apache CXF, but now even this part has been improved thanks to the available Apache CXF Spring Boot Starter (it becomes just a matter of adding one more dependency to your project).
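Assuming a Maven build, that one dependency is the starter itself (the version shown here is only illustrative; pick the latest available release):

```xml
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-spring-boot-starter-jaxrs</artifactId>
    <version>3.1.7</version>
</dependency>
```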

@Configuration
public class AppConfig {
    @Bean(destroyMethod = "destroy")
    public Server jaxRsServer(final Bus bus) {
        final JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
        factory.setServiceBean(peopleRestService());
        factory.setProvider(new JacksonJsonProvider());
        factory.setBus(bus);
        return factory.create();
    }

    @Bean
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }
}
At this point we did everything needed to have a bare-bones Spring Boot application hosting an Apache CXF-based JAX-RS service. Please note that, by default, with the Apache CXF Spring Boot Starter all APIs are served under the /services/* mapping.
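As a quick sanity check, the service can be called directly (the /people path here assumes the @Path annotation on PeopleRestService, and 19090 is the server port used throughout this post):

```shell
> curl http://localhost:19090/services/people
```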

Until now we have said nothing about metrics, and that is what we are going to talk about next. Dropwizard Metrics is a de-facto standard for JVM applications and has a rich set of different kinds of metrics (meters, gauges, counters, histograms, ...) and reporters (console, JMX, HTTP, ...). The MetricRegistry is the central place to manage all the metrics. And surely, the typical way to expose metrics for a JVM-based application is JMX, so let us include the respective beans in the configuration.

@Bean(initMethod = "start", destroyMethod = "stop")
public JmxReporter jmxReporter() {
    return JmxReporter.forRegistry(metricRegistry()).build();
}

@Bean
public MetricRegistry metricRegistry() {
    return new MetricRegistry();
}

You are free to create as many metrics as you need, and we could have added a few for our PeopleRestService as well. But luckily, Apache CXF has a dedicated MetricsFeature to integrate with Dropwizard Metrics and collect all the relevant ones with zero effort. A minor update of the JAXRSServerFactoryBean initialization is enough.

@Bean(destroyMethod = "destroy")
public Server jaxRsServer(final Bus bus) {
    final JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
    factory.setServiceBean(peopleRestService());
    factory.setProvider(new JacksonJsonProvider());
    factory.setFeatures(Arrays.asList(
        new MetricsFeature(new CodahaleMetricsProvider(bus))
    ));
    factory.setBus(bus);
    return factory.create();
}

Just a quick note here: by default, Apache CXF names metrics quite verbosely, including the unique bus identifier as part of the name. This is not very readable, so we just override the default behaviour with a static 'cxf-services.' prefix. This is how those metrics are going to look in the JMX console.
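The exact way to override the naming depends on the CXF version; one possible sketch, assuming the prefix is derived from the bus identifier, is to redefine the default bus bean with a fixed id (the 'cxf-services' value is our own choice, not a CXF default):

```java
@Bean(name = Bus.DEFAULT_BUS_ID)
public SpringBus cxf() {
    final SpringBus bus = new SpringBus();
    // Assumption: the metric names are prefixed with the bus identifier
    bus.setId("cxf-services");
    return bus;
}
```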

It looks terrific, but JMX is not a very pleasant technology to deal with. Could we do better? Here is where Spring Boot Actuator comes into play. Along with many other endpoints, it is able to expose all the metrics over HTTP by adding a couple of properties to the application.yml file:

endpoints:
  jmx:
    enabled: true
    unique-names: true
  metrics:
    enabled: true

It is important to mention here that metrics, along with other Spring Boot Actuator endpoints, may expose sensitive details about your application, so it is always a good idea to protect them, for example using Spring Security and HTTP Basic Authentication. Again, a few configuration properties in application.yml will do all the work:

security:
  ignored:
    - /services/**
  user:
    name: guest
    password: guest

Brilliant! If we run our application and access the /metrics endpoint (providing guest/guest as credentials), we should see quite an extensive list of metrics, like these ones:

> curl -u guest:guest http://localhost:19090/metrics

{
    "classes": 8673,
    "classes.loaded": 8673,
    "classes.unloaded": 0,
    "counter.status.200.metrics": 5,
    "counter.status.401.error": 2,
    "cxf-services.Attribute=Checked Application Faults.count": 0,
    "cxf-services.Attribute=Checked Application Faults.fifteenMinuteRate": 0.0,
    "cxf-services.Attribute=Checked Application Faults.fiveMinuteRate": 0.0,
    "cxf-services.Attribute=Checked Application Faults.meanRate": 0.0,
    ...
}

It would be great to have a dedicated monitoring solution which could understand these metrics, store them somewhere and give us useful insights and aggregations in real time. Prometheus is exactly the tool we are looking for, but there is bad news and good news. On the not-so-good side, Prometheus does not understand the format which Spring Boot Actuator uses to expose metrics. But on the bright side, Prometheus has a dedicated Spring Boot integration, so the same metrics can be exposed in a Prometheus-compatible format; we are just a few beans away from that.

@Configuration
public class PrometheusConfig {
    @Bean
    public CollectorRegistry collectorRegistry() {
        return new CollectorRegistry();
    }

    @Bean
    public SpringBootMetricsCollector metricsCollector(
            final Collection<PublicMetrics> metrics, final CollectorRegistry registry) {
        return new SpringBootMetricsCollector(metrics).register(registry);
    }

    @Bean
    public ServletRegistrationBean exporterServlet(final CollectorRegistry registry) {
        return new ServletRegistrationBean(new MetricsServlet(registry), "/prometheus");
    }
}

With this configuration in place, the metrics are additionally going to be exposed under the /prometheus endpoint. Let us check this out.

> curl -u guest:guest http://localhost:19090/prometheus

# HELP cxf_services_Attribute_Data_Read_fifteenMinuteRate cxf_services_Attribute_Data_Read_fifteenMinuteRate
# TYPE cxf_services_Attribute_Data_Read_fifteenMinuteRate gauge
cxf_services_Attribute_Data_Read_fifteenMinuteRate 0.0
# HELP cxf_services_Attribute_Runtime_Faults_count cxf_services_Attribute_Runtime_Faults_count
# TYPE cxf_services_Attribute_Runtime_Faults_count gauge
cxf_services_Attribute_Runtime_Faults_count 0.0
# HELP cxf_services_Attribute_Totals_snapshot_stdDev cxf_services_Attribute_Totals_snapshot_stdDev
# TYPE cxf_services_Attribute_Totals_snapshot_stdDev gauge
cxf_services_Attribute_Totals_snapshot_stdDev 0.0

All the necessary pieces are covered and the fun time is about to begin. Prometheus has very simple and straightforward installation steps, but Docker is certainly the easiest one. The project repository includes a docker-compose.yml file in the docker folder to get you started quickly. But before that, let us build the Docker image of our Spring Boot application using Apache Maven:

> mvn clean install

Upon a successful build, we are ready to use the Docker Compose tool to start all the containers and wire them together, for example:

> cd docker
> docker-compose up

Recreating docker_cxf_1
Recreating docker_prometheus_1
Recreating docker_grafana_1
Attaching to docker_cxf_1, docker_prometheus_1, docker_grafana_1
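For reference, the docker-compose.yml wiring the three containers might look roughly like this (a sketch only: the image name of our application and the exact volume layout are assumptions, while the service names match the container names above):

```yaml
version: '2'
services:
  cxf:
    image: cxf-spring-boot   # assumption: image produced by our Maven build
    ports:
      - "19090:19090"
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    links:
      - cxf
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    links:
      - prometheus
```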

If you are using native Docker packages, just open your browser at http://localhost:9090/targets, where you can see that Prometheus has successfully connected to our application and is consuming its metrics (for older Docker installations, please use the address of your Docker Machine instead).

The cxf target comes preconfigured from the Prometheus configuration file, located at docker/prometheus.yml and used to build the respective container in docker-compose.yml (please notice the presence of the credentials to access the /prometheus endpoint):

# my global config
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, scrape targets every 15 seconds.

scrape_configs:
  - job_name: 'cxf'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    basic_auth:
      username: guest
      password: guest

    metrics_path: '/prometheus'

    # Default scheme is http
    static_configs:
      - targets: ['cxf:19090']

Prometheus supports graph visualizations, but Grafana is the unquestionable leader in mastering beautiful dashboards. It needs a bit of configuration though, which could be done over the web UI or, even better, through the API. The data source is the most important piece and, in our case, should point to the running Prometheus instance.

> curl 'http://admin:admin@localhost:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"name": "prometheus", "type": "prometheus", "url":"http://prometheus:9090", "access":"proxy", "isDefault":true}'

Done! Adding a sample dashboard is the next thing to do and, again, the API is the best way to accomplish that (assuming you are still in the docker folder):

> curl 'http://admin:admin@localhost:3000/api/dashboards/db' -X POST -H 'Content-Type: application/json;charset=UTF-8' --data-binary @cxf-dashboard.json

The same rule applies here: if you are still using Docker Machine, please replace localhost with the appropriate virtual machine address. Also, please notice that you have to do this only once, when the containers are created for the first time. The configuration will be kept for the existing containers.

To finish up, let us open our custom Grafana dashboard by navigating to http://localhost:3000/dashboard/db/cxf-services, using admin/admin as the default credentials. Surely, you are going to see no data at first, but by generating some load (e.g. using siege) we can get interesting graphs to analyze, for example:
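To generate such a load, siege could be invoked along these lines (the /people path is an assumption based on our PeopleRestService; -c sets concurrency, -r the number of repetitions):

```shell
> siege -c 5 -r 100 http://localhost:19090/services/people
```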

Those graphs were made simple (and, to be honest, not that useful) on purpose, just to demonstrate how easy it is to collect and visualize metrics from your Apache CXF-based JAX-RS APIs in real time. There are so many useful metrics our applications could expose that no shortage of ideas is expected here. Plus, Grafana allows you to define quite sophisticated graphs and queries, worthy of another article, but the official documentation is a good place to start.

I hope this post will encourage everyone to think seriously about monitoring their JAX-RS APIs by exposing, collecting and visualizing important metrics. This is just a beginning ...

The complete project sources are available on GitHub.