Wednesday, March 22, 2023

JDK-20: the most boring JDK release yet?

Still hot off the press, JDK-20 is out! This is terrific news, but what about exciting new features? Although a number of JEPs made it into the release, all of them are either preview or incubation features:

  • JEP-429: Scoped Values (Incubator): introduces scoped values, which enable the sharing of immutable data within and across threads. They are preferred to thread-local variables, especially when using large numbers of virtual threads.

  • JEP-432: Record Patterns (Second Preview): enhances the Java programming language with record patterns to deconstruct record values. Record patterns and type patterns can be nested to enable a powerful, declarative, and composable form of data navigation and processing.

  • JEP-433: Pattern Matching for switch (Fourth Preview): enhances the Java programming language with pattern matching for switch expressions and statements. Extending pattern matching to switch allows an expression to be tested against a number of patterns, each with a specific action, so that complex data-oriented queries can be expressed concisely and safely.

    If you are keen to learn more, please check out the Using Pattern Matching publication, a pretty comprehensive overview of this feature.

  • JEP-434: Foreign Function & Memory API (Second Preview): introduces an API by which Java programs can interoperate with code and data outside of the Java runtime. By efficiently invoking foreign functions (i.e., code outside the JVM), and by safely accessing foreign memory (i.e., memory not managed by the JVM), the API enables Java programs to call native libraries and process native data without the brittleness and danger of JNI.

  • JEP-436: Virtual Threads (Second Preview): introduces virtual threads to the Java Platform. Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications.

  • JEP-437: Structured Concurrency (Second Incubator): simplifies multithreaded programming by introducing an API for structured concurrency. Structured concurrency treats multiple tasks running in different threads as a single unit of work, thereby streamlining error handling and cancellation, improving reliability, and enhancing observability.

  • JEP-438: Vector API (Fifth Incubator): introduces an API to express vector computations that reliably compile at runtime to optimal vector instructions on supported CPU architectures, thus achieving performance superior to equivalent scalar computations.
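
To give a taste of the language previews above, here is a minimal sketch combining record patterns (JEP-432) with pattern matching for switch (JEP-433). The types and names are made up for illustration; on JDK-20 this requires --enable-preview (both features were later finalized in JDK-21, which the snippet assumes):

```java
public class Shapes {
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double width, double height) implements Shape {}

    // A record pattern deconstructs the record right in the switch case;
    // the switch is exhaustive thanks to the sealed interface, so no default is needed
    static double area(Shape shape) {
        return switch (shape) {
            case Circle(double r) -> Math.PI * r * r;
            case Rect(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4)));
    }
}
```

Note how the nested pattern binds the record components (r, w, h) directly, without explicit accessor calls.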

The pessimists may say this is the most boring JDK release yet, but the optimists would argue that this is the calm before the storm (yes, I am talking about the next LTS release later this year, JDK-21). Nonetheless, there are quite a few notable changes to look at.

The standard library benefited the most from the JDK-20 release; let us take a closer look at what has changed:

The garbage collectors have received a considerable chunk of improvements (especially G1), covered in great detail by JDK 20 G1/Parallel/Serial GC changes. To highlight just a few:

From the security perspective, it is worth mentioning these changes:

  • JDK-8256660: Disable DTLS 1.0 by default: disables DTLS 1.0 by default by adding "DTLSv1.0" to the jdk.tls.disabledAlgorithms security property in the configuration file.

  • JDK-8290368: Introduce LDAP and RMI protocol-specific object factory filters to JNDI implementation: introduces LDAP-specific factories filter (defined by jdk.jndi.ldap.object.factoriesFilter property) and RMI-specific factories filter (defined by jdk.jndi.rmi.object.factoriesFilter property). The new factory filters are consulted in tandem with the jdk.jndi.object.factoriesFilter global factories filter to determine if a specific object factory is permitted to instantiate objects for the given protocol.
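
For illustration, these filters are regular security properties, so they can also be inspected or tightened at runtime through java.security.Security. A minimal, hypothetical sketch (the class name, helper method, and filter pattern below are only example values):

```java
import java.security.Security;

public class FactoriesFilter {
    // Tightens the LDAP-specific factories filter and returns the effective value
    static String restrictLdapFactories(String pattern) {
        Security.setProperty("jdk.jndi.ldap.object.factoriesFilter", pattern);
        return Security.getProperty("jdk.jndi.ldap.object.factoriesFilter");
    }

    public static void main(String[] args) {
        // Example pattern: allow one trusted package, reject everything else
        System.out.println(restrictLdapFactories("com.example.trusted.*;!*"));
    }
}
```

In production you would normally set these properties in the java.security configuration file rather than programmatically.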

This release proves one more time that boring is not always bad. If you want to learn more about these (and other) features in JDK-20, I would highly recommend going over the Java 20 πŸ₯± guide.

I πŸ‡ΊπŸ‡¦ stand πŸ‡ΊπŸ‡¦ with πŸ‡ΊπŸ‡¦ Ukraine.

Tuesday, January 3, 2023

Project Loom in JDK-19: the benefits but with quirks

A few months have already passed since the JDK-19 release, which we talked about previously in detail. More and more developers are switching to JDK-19, turning their heads towards Project Loom and starting to play with virtual threads and structured concurrency (despite the incubation / preview status of these features). And it certainly makes sense: sooner or later, the JVM and API changes will be finalized, marking the era of Project Loom production readiness.

In today's post, we are going to cover some not-so-obvious quirks (by-products of the Project Loom implementation) you should be aware of (or may run into) while switching to JDK-19 in green-field or, more importantly, brown-field projects. Those may manifest even if you are not planning to use Project Loom just yet.

Let us kick it off with the API changes. The code snippet below uses the old-fashioned java.lang.Thread class to spawn some work aside. The computation uses an instance of the Builder inner class; the implementation of the Builder::build() method is left out since it is not really important.

public class Starter {
    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread() {
            public void run() {
                final Builder builder = new Builder();
                builder.build();
            }
        };
        thread.start();
        thread.join();
    }

    private static class Builder {
        public void build() {
            // implementation details
        }
    }
}

The code compiles and runs just fine on any modern JDK predating JDK-19. On JDK-19, however, it fails to compile with a somewhat cryptic error.

Unresolved compilation problems: 
	Cannot instantiate the type Thread.Builder
	The method build() is undefined for the type Thread.Builder

This is a rare example of how existing code may clash with API changes: as part of Project Loom, java.lang.Thread got a new public sealed interface Builder, which is (rightly) picked up by the compiler instead of our Builder class inside the Thread subclass. The fix is easy (but may not look pretty): just use the qualified class name.

public class Starter {
    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread() {
            public void run() {
                final Starter.Builder builder = new Starter.Builder();
                builder.build();
            }
        };
        thread.start();
        thread.join();
    }

    private static class Builder {
        public void build() {
            // implementation details
        }
    }
}

Please notice that even though the JDK's preview features were not enabled, the preview APIs are still visible and taken into consideration by the compiler. The issue has been reported (JDK-8287968) and the possible incompatibilities have been documented (JDK-8288416).

The next quirk we are going to look at is also related to java.lang.Thread, but this time we will be using thread pools (executors) from the standard library. Let us assume we need an executor instance that tracks the moment when a new thread is started. One of the options to accomplish that is to use a custom java.util.concurrent.ThreadFactory and override the Thread::start() method.

import java.util.concurrent.*;

public class Starter {
    public static void main(String[] args) throws Exception {
        final ExecutorService executor = Executors.newSingleThreadExecutor(new ThreadFactory() {
            public Thread newThread(Runnable r) {
                return new Thread(r) {
                    public void start() {
                        System.out.println("Thread has started!");
                        super.start();
                    }
                };
            }
        });
        executor.submit(() -> {}).get(1, TimeUnit.SECONDS);
        executor.shutdown();
    }
}

On JDKs prior to JDK-19, the expected message will be printed out to the console.

Thread has started!

But not on JDK-19: as part of the Project Loom implementation, the thread pools and executors (notably ForkJoinPool and ThreadPoolExecutor) do not call the Thread::start() method anymore. It does not matter whether the preview features are enabled or not, and sadly, there is no workaround to fully simulate the desired behavior (the alternative Thread::start(ThreadContainer) replacement is not accessible). The issue has been reported and is still open as of today (JDK-8292027).
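
While Thread::start() itself can no longer be intercepted through executors, a partial alternative, if observing when the task begins running (rather than when the thread starts) is good enough, is to decorate the submitted Runnable instead. A sketch; the class and the runTracked helper are made-up names:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class TrackingStarter {
    // Wraps the task so we observe the moment it starts running
    static boolean runTracked(Runnable task) throws Exception {
        final AtomicBoolean started = new AtomicBoolean();
        final ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            executor.submit(() -> {
                started.set(true);
                System.out.println("Task has started!");
                task.run();
            }).get(1, TimeUnit.SECONDS);
        } finally {
            executor.shutdown();
        }
        return started.get();
    }

    public static void main(String[] args) throws Exception {
        runTracked(() -> {});
    }
}
```

This is not equivalent to intercepting Thread::start() (thread creation and task start are distinct events), but it keeps working regardless of which executor implementation is underneath.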

Great, so far we have seen some quirks caused by Project Loom irrespective of whether it is used or not. Moving on, let us quickly summarize the constraints you may run into when using Project Loom (by enabling the JDK's preview features) and virtual threads.

  • be aware of the limitations of using synchronized blocks or methods from virtual threads
  • be aware of the limitations of using native methods or foreign functions from virtual threads
  • be aware of the limitations some APIs (like the file system ones) in the JDK have when called from virtual threads
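
To put the constraints above in context, here is what typical virtual thread usage looks like through the executors API. On JDK-19 / JDK-20 this requires --enable-preview; the API was finalized later in JDK-21, which the snippet below assumes (the class and method names are made up for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    // Spawns one cheap virtual thread per task and waits for all of them
    static int runTasks(int tasks) {
        final AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                executor.submit(completed::incrementAndGet);
            }
        } // ExecutorService is AutoCloseable: close() waits for submitted tasks
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000));
    }
}
```

Spawning ten thousand platform threads this way would be prohibitively expensive; with virtual threads it is business as usual.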

Two JEPs, JEP-425: Virtual Threads (Preview) and JEP-436: Virtual Threads (Second Preview), offer quite a comprehensive overview of the virtual thread implementation and its limitations in JDK-19 and the upcoming JDK-20; both are worth your time reading. Another good source I would recommend is Coming to Java 19: Virtual threads and platform threads, published by Java Magazine last May.

It is fair to say that Project Loom is evolving really fast, and this is the kind of feature the JVM has really been screaming for. Yes, it has some limitations now, but chances are high that in the future most of them will be lifted or worked around.

I πŸ‡ΊπŸ‡¦ stand πŸ‡ΊπŸ‡¦ with πŸ‡ΊπŸ‡¦ Ukraine.

Thursday, October 27, 2022

JDK 19: revolution is looming!

JDK-19 landed last month and may look like yet another routine release, except it is not: Project Loom has finally delivered its first bits! Although still in half-incubation / half-preview, it is a really big deal for the future of the OpenJDK platform. Interestingly enough, the vast majority of the JDK-19 JEPs are either preview or incubating features. So, what does JDK-19 have to offer?

From the security perspective, a few notable changes to mention (but feel free to go over much more detailed overview in the JDK 19 Security Enhancements post by Sean Mullan):

If you are one of the early adopters, please note that JDK-19 introduced a native memory usage regression very late in the release process, so the fix could not be integrated any more (JDK-8292654). The regression has already been addressed in JDK 19.0.1 and later.

With that, I hope you are as excited as I am about the JDK-19 release; the peaceful revolution is looming! And the JDK-20 early access builds are already out there, looking very promising!

I πŸ‡ΊπŸ‡¦ stand πŸ‡ΊπŸ‡¦ with πŸ‡ΊπŸ‡¦ Ukraine.

Tuesday, August 30, 2022

Quick, somewhat naïve but still useful: benchmarking HTTP services from the command line

Quite often, while developing HTTP services, you may find yourself looking for a quick and easy way to throw some load at them. Tools like Gatling and JMeter are the gold standard in the open-source world, but developing meaningful load test suites in either of them may take some time. It is very likely you are going to end up with one of these eventually, but during development, more approachable tooling gives you invaluable feedback much faster.

Command-line HTTP load testing tools are what we are going to talk about. There are a lot of them, but we will focus on just a few: ab, vegeta, wrk, wrk2, and rewrk. And since HTTP/2 is getting more and more traction, the tools that support it will be highlighted and awarded bonus points. The sample HTTP service we are going to run tests against exposes only a single GET /services/catalog endpoint over HTTP/1.1, HTTP/2, and HTTP/2 over cleartext.

Let us kick off with ab, the Apache HTTP server benchmarking tool, one of the oldest HTTP load testing tools out there. It is available on most Linux distributions and supports only HTTP/1.x (in fact, it does not even implement HTTP/1.x fully). The standard set of parameters, like the desired number of requests, concurrency, and timeout, is supported.

$> ab -n 1000 -c 10 -s 1 -k  http://localhost:19091/services/catalog

This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,

Benchmarking localhost (be patient)
Completed 100 requests
Completed 1000 requests
Finished 1000 requests

Server Software:
Server Hostname:        localhost
Server Port:            19091

Document Path:          /services/catalog
Document Length:        41 bytes

Concurrency Level:      10
Time taken for tests:   51.031 seconds
Complete requests:      1000
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      146000 bytes
HTML transferred:       41000 bytes
Requests per second:    19.60 [#/sec] (mean)
Time per request:       510.310 [ms] (mean)
Time per request:       51.031 [ms] (mean, across all concurrent requests)
Transfer rate:          2.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       5
Processing:     3  498 289.3    497    1004
Waiting:        2  498 289.2    496    1003
Total:          3  499 289.3    497    1004

Percentage of the requests served within a certain time (ms)
  50%    497
  66%    645
  75%    744
  80%    803
  90%    914
  95%    955
  98%    979
  99%    994
 100%   1004 (longest request)

The report is pretty comprehensive, but if your service talks HTTP/2 or uses HTTPS with self-signed certificates (not unusual in development), you are out of luck.

Let us move on to a bit more advanced tools, or to be precise, a family of tools inspired by wrk: a modern HTTP benchmarking tool. There are no official binary releases of wrk available, so you are better off building the bits from the sources yourself.

$> wrk -c 50 -d 10 -t 5 --latency --timeout 5s https://localhost:19093/services/catalog

Running 10s test @ https://localhost:19093/services/catalog
  5 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   490.83ms  295.99ms   1.04s    57.06%
    Req/Sec    20.79     11.78    60.00     81.29%
  Latency Distribution
     50%  474.64ms
     75%  747.03ms
     90%  903.44ms
     99%  999.52ms
  978 requests in 10.02s, 165.23KB read
Requests/sec:     97.62
Transfer/sec:     16.49KB

Capability-wise, it is very close to ab, with slightly better HTTPS support. The report is as minimal as it could get; however, the distinguishing feature of wrk is the ability to use LuaJIT scripting to perform HTTP request generation. No HTTP/2 support though.

wrk2 is an improved version of wrk (and is based mostly on its codebase), modified to produce a constant-throughput load and accurate latency details. Unsurprisingly, you have to build this one from the sources as well (and the binary name is kept as wrk).

$> wrk -c 50 -d 10 -t 5 -L -R 100 --timeout 5s https://localhost:19093/services/catalog

Running 10s test @ https://localhost:19093/services/catalog
  5 threads and 50 connections
  Thread calibration: mean lat.: 821.804ms, rate sampling interval: 2693ms
  Thread calibration: mean lat.: 1077.276ms, rate sampling interval: 3698ms
  Thread calibration: mean lat.: 993.376ms, rate sampling interval: 3282ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   933.61ms  565.76ms   3.35s    70.42%
    Req/Sec       -nan      -nan   0.00      0.00%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%  865.79ms
 75.000%    1.22s
 90.000%    1.69s
 99.000%    2.61s
 99.900%    3.35s
 99.990%    3.35s
 99.999%    3.35s
100.000%    3.35s

  Detailed Percentile spectrum:
       Value   Percentile   TotalCount 1/(1-Percentile)

      28.143     0.000000            1         1.00
     278.015     0.100000           36         1.11
     426.751     0.200000           71         1.25
    2969.599     0.996875          354       320.00
    3348.479     0.997266          355       365.71
    3348.479     1.000000          355          inf
#[Mean    =      933.606, StdDeviation   =      565.759]
#[Max     =     3346.432, Total count    =          355]
#[Buckets =           27, SubBuckets     =         2048]
  893 requests in 10.05s, 150.87KB read
Requests/sec:     88.85
Transfer/sec:     15.01KB

Besides these noticeable enhancements, the feature set of wrk2 largely matches wrk's, so HTTP/2 is out of the picture. But do not give up just yet, we are not done.

The most recent addition to the wrk family is rewrk, a more modern HTTP framework benchmark utility, which could be thought of as wrk rewritten in the beloved Rust, with HTTP/2 support baked in.

$> rewrk -c 50 -d 10s -t 5 --http2 --pct --host https://localhost:19093/services/catalog

Beginning round 1...
Benchmarking 50 connections @ https://localhost:19093/services/catalog for 10 second(s)
    Avg      Stdev    Min      Max
    494.62ms  281.78ms  5.56ms   1038.28ms
    Total:   972   Req/Sec:  97.26
    Total: 192.20 KB Transfer Rate: 19.23 KB/Sec
+ --------------- + --------------- +
|   Percentile    |   Avg Latency   |
+ --------------- + --------------- +
|      99.9%      |    1038.28ms    |
|       99%       |    1006.26ms    |
|       95%       |    975.33ms     |
|       90%       |    942.94ms     |
|       75%       |    859.73ms     |
|       50%       |    737.45ms     |
+ --------------- + --------------- +

As you may have noticed, the report is very similar to the one produced by wrk. In my opinion, this is a tool with a perfect balance of features, simplicity, and insights, at least while you are in the middle of development.

The last one we are going to look at is vegeta, an HTTP load testing tool and library written in Go. It supports not only HTTP/2 but also HTTP/2 over cleartext, and it has powerful reporting built in. It heavily relies on pipes for composing different steps together, for example:

$> echo "GET https://localhost:19093/services/catalog" | 
   vegeta attack -http2 -timeout 5s -workers 10 -insecure -duration 10s | 
   vegeta report

Requests      [total, rate, throughput]         500, 50.10, 45.96
Duration      [total, attack, wait]             10.88s, 9.98s, 899.836ms
Latencies     [min, mean, 50, 90, 95, 99, max]  14.244ms, 529.886ms, 537.689ms, 918.294ms, 962.448ms, 1.007s, 1.068s
Bytes In      [total, mean]                     20500, 41.00
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:500
Error Set:

You could also ask for histograms (with customizable reporting buckets), not to mention beautiful plots with the plot command:

$> echo "GET http://localhost:19092/services/catalog" | 
   vegeta attack -h2c -timeout 5s -workers 10 -insecure -duration 10s | 
   vegeta report  -type="hist[0,200ms,400ms,600ms,800ms]"

Bucket           #    %       Histogram
[0s,     200ms]  86   17.20%  ############
[200ms,  400ms]  102  20.40%  ###############
[400ms,  600ms]  98   19.60%  ##############
[600ms,  800ms]  113  22.60%  ################
[800ms,  +Inf]   101  20.20%  ###############

Out of all the tools above, vegeta is clearly the most powerful in terms of capabilities and reporting. It has every chance to become the one-stop HTTP benchmarking harness, even for production.

So we have walked through a good number of tools; which one is yours? My advice would be: for HTTP/1.x, use ab; for basic HTTP/2, look at rewrk; and if none of these fit, turn to vegeta. Once your demands and the sophistication of your load tests grow, consider Gatling or JMeter.

With that, Happy HTTP Load Testing!

Saturday, May 28, 2022

The lost art of learning ... (a new programming language)

I remember the time, 20 years ago, when as a junior software developer I was driven by a constant, unstoppable desire to learn new things: new features, techniques, frameworks, architectures, styles, practices, ... and surely, new programming languages. This curiosity, this hunger to learn, drove me to consume as many books as possible and to hunt around the clock for the scanty online publications available at that time.

Year after year, the accumulated knowledge, backed by invaluable experience, led me to settle on proven and battle-tested (in production) choices, which have worked, work these days, and, I am sure, will work in the future. For me, the foundation was set in Java (later Scala as well) and the JVM platform in general. This is where my expertise lies, but don't get me wrong, I still try to read a lot, balancing on the edge between books and the enormous amount of blog posts, articles, talks, and podcasts. How many programming languages have I been able to master over the last, let us say, 10 years? If we count the JVM ecosystem only, Scala and Groovy would make the list, but otherwise: none, zero, 0. The pace of learning on this front has slowed for me, significantly ...

Why is that? Did I get burned out? Nope (thank God). Fed up with development? Nope, hopefully never. Opinionated? Yes, certainly. Older? Definitely. Less curious? Not really. To be fair, there are so many things going on in the JVM ecosystem, so many projects being developed by the community, that it has become increasingly difficult to find the time for anything else! And frankly, why bother, if you could do Java for the next 20 years without any fear of it disappearing or becoming irrelevant?

The JVM is an amazing platform, but its fundamental principles heavily impact certain design aspects of any programming language built on top of it. That is both good and bad (everything is a trade-off), but with time, you as a developer get used to it, questioning whether anything else makes sense. Like many others, I ended up in a bubble which had to be burst. How? I decided to learn Rust, and to go way beyond the basics.

But no worries, I do not intend to sell you Rust, but rather to share an alarming discovery: the lost art of learning a new programming language. It all started as usual: crunching through the blogs, people's and companies' experiences, and finally getting the book, Programming Rust: Fast, Safe Systems Development (a really good one, but the official The Rust Programming Language is equally awesome). A few months (!) later, I was done reading, feeling ready to deliver cool, production-ready Rust applications, or so I thought ... The truth: I was far, very far from it ... and these are the things I did wrong.

  • Rust is concise (at least, to my taste), modern, and powerful, but it is not a simple language. If some language aspects are not clear, do not skip over them but try to understand them. If the book is not doing a sufficiently good job at it, look around till you get them; otherwise you risk missing even more. I did not do that (overly confident that nothing could be harder than learning functional Scala), and at some point I was somewhat lost.

  • In the same vein, Rust has a unique perspective on many things, including concurrency and memory management, and investing the time to understand those is a must (the Java/JVM background is not very helpful here). Essentially, you have to "unlearn" in order to grasp certain new concepts; this is not easy and confuses the brain.

  • Stay very focused and methodical; try to read at least a few pages **every day**. Yes, we are all super busy, always distracted, have families, and have started to travel again ... yada yada yada. Also, Programming Rust: Fast, Safe Systems Development is not a slim book: ~800 pages! Irregular, long breaks in reading were killing my progress.

  • Practice, practice, and practice. If you have some pet projects in mind, this is great; if not, just come up with anything ad-hoc, but try to use the things you have just read about. Learning the "theory" is great, but mastering a programming language is all about using it every day to solve problems. I screwed up there epically by following the "bedtime story" style ...

  • Learn the coding practices and patterns by looking into the language's ecosystem; in this regard, a curated list of Rust code and resources is super useful.

  • Subscribe to newsletters to stay up to date and be aware of what is going on in the core development and in the community. In the case of Rust, This Week in Rust turned out to be invaluable.

  • Watch talks and presentations; luckily, most conferences do publish all their videos online these days. I stumbled upon the official Rust YouTube channel and have followed it closely since then.

Learning a new programming language is a rewarding experience, in particular when it happens to be Rust, the most beloved programming language of the last six years. But not only because of that: Rust is a perfect fit for systems programming, something Java and the JVM are inherently not very well suited for (yes, it is better now with JEP 330: Launch Single-File Source-Code Programs, jbang, and GraalVM native images, but still far from perfect for the task). On the other side, it is quite challenging when the programming language you are learning is not part of your day-to-day job.

As of this moment, I am not sure if Rust will ever become my primary programming language, but one thing I do know: I think every passionate software developer should strive to learn at least one new programming language every 1-2 years (we have so many good ones). It is an amazing mental exercise, and I used to follow this advice back in the day but lost sight of it along the journey. I really should not have.

Happy learning and peace to everyone!

Friday, March 25, 2022

The show must go on: JDK-18 is here!

It feels like the JDK-17 release landed not so long ago, but there is a new one already out: please welcome JDK-18! Although it is not an LTS release, there are quite a few interesting features to talk about (the preview and incubator ones are not included).

JDK-18 comes with a large number of security enhancements, including further work on phasing out the SecurityManager and its APIs:

And while we are enjoying the perks of JDK-18, the early access builds of JDK-19 are already available. There are not many details of what is coming just yet, but maybe Project Loom will finally make its appearance in some form or another? Who knows ...

Peace to everyone!