Continuous deep dive with JFR events

November 13, 2023
Posted in Performance

Introduction

This tutorial provides a practical walkthrough on using JDK Flight Recorder (JFR) for performance testing. JFR is a diagnostic tool integrated into the Java Virtual Machine (JVM) that collects detailed runtime information with minimal performance overhead. It’s particularly useful for identifying issues, understanding JVM behavior, and improving application performance.
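
If you want to get a feel for JFR on its own first, independent of the setup in this post, you can start a time-boxed recording and inspect it with the jfr command-line tool that ships with the JDK (myapp.jar is a placeholder for any application jar):

java -XX:StartFlightRecording=duration=60s,filename=recording.jfr -jar myapp.jar
jfr summary recording.jfr
jfr print --events jdk.ObjectAllocationSample recording.jfr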

We will set up a controlled testing environment using Docker and Afterburner, introduce the JFR Exporter to collect JFR data and stream it to InfluxDB, and show the results in a Grafana dashboard. Our focus is on equipping you with the knowledge to seamlessly integrate JFR into your testing workflow, enabling you to efficiently pinpoint and analyze performance regressions.

Let’s begin the journey towards a more performant and reliable application with JDK Flight Recorder. We released the JFR Exporter as an open source project so you can try it out now.

What you will get: dashboards with detailed (and live!) information about your application.

For instance, they will show the memory allocation rate, garbage collection events and large memory allocations, next to the usual suspects such as CPU and heap usage.

The memory allocation rate shows how much memory is allocated over time, in MB per second.

The garbage collection events for the old and young collectors show when collections happen and how long they take. These events are more detailed than the data from Spring Boot Actuator.

Safepoint events show the time it takes to reach a safepoint: simply put, the time needed to stop all application threads before, for instance, a garbage collector can do its stop-the-world work. This information is not available in most metrics libraries or profilers, but it is via JFR.
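
As a cross-check outside of JFR, the JVM can also log safepoint pauses directly via unified logging; this is optional and only meant to illustrate what a safepoint pause is (myapp.jar is again a placeholder):

java -Xlog:safepoint -jar myapp.jar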

Big allocations over time show when large allocations happen, what type of object is allocated and where in the code it happened, with a stack trace. The full stack trace can be shown from the table view.

Follow the steps below to see and experiment yourself.

Prerequisites

Before diving into the specifics of utilizing Java Flight Recorder for performance testing, ensure you have the following prerequisites in place:

1. Java 17 SDK or higher: the examples and instructions in this post are compatible with Java 17 SDK or newer versions. Verify your Java installation by running java -version in your command line interface. If you need to install or update your Java SDK, the easiest way is to use SDKMAN. Alternatively, visit the official Oracle website or use a package manager like Scoop on Windows, apt on Ubuntu or Homebrew on macOS.

2. Docker: Docker must be installed and running on your machine. It is used to containerize your application and environment, ensuring a consistent testing platform. Check your Docker installation by executing docker --version. If you need to install Docker, follow the instructions on the Docker website.

3. Afterburner: we are going to demonstrate using JFR in a performance test scenario with Afterburner, which can be found in Perfana’s GitHub repository. Afterburner is a Java Spring Boot service that can act as a System Under Test and simulate performance bottlenecks. Alternatively, you can set up one of your own applications as a System Under Test, but then you will have to adapt the instructions accordingly.

4. Optional: the watch command, used for a poor man’s load test: brew install watch

Make sure Java and Docker are properly installed and configured before moving on to the next section where we will set up Afterburner together with Java Flight Recorder for capturing insightful performance data.

Setup and Configuration

In this section, we will set up InfluxDB for storing the JFR metrics data, and Grafana for visualizing these metrics. 

1. Setup InfluxDB

docker run --rm -d --name jfr-influx -p 8086:8086 \
-v influxdb:/var/lib/influxdb \
influxdb:1.8

Create an InfluxDB database, and check that it has been created. When done, exit.

$ docker exec -it jfr-influx influx
> create database jfr
> show databases
name: databases
name
----
_internal
jfr
> exit

2. Setup Grafana

docker run --rm -d -p 3000:3000 --name=my-grafana grafana/grafana

Go to http://localhost:3000 in a browser to access Grafana. Use admin/admin to log in (and update the password).

3. Add InfluxDB data source

First, find the internal IP address of the InfluxDB container:

docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' jfr-influx

Then, in Grafana, go to Connections → Data sources and add a new InfluxDB data source using the following properties:

URL: http://<ip address>:8086

Database: jfr

Replace <ip address> with the IP address of the jfr-influx container as returned by the command above, for example 172.17.0.2.

When you click “Save & test”, it should turn green and you are good to go!

4. JFR Dashboard

Final step: add the JFR dashboard

Download the dashboard definition using curl (or your browser):

curl -O -L https://raw.githubusercontent.com/perfana/jfr-exporter/main/dashboards/jfr-dashboard-export-share.json

And add the dashboard to Grafana by importing this definition file via “Import dashboard”.

Make sure to select the Influx datasource and press “Import”:

After the import there is an empty dashboard.

Looking good! We can proceed to generate data for it.

JFR Exporter and Afterburner

Download the JFR Exporter and Afterburner:

curl -O -L https://github.com/perfana/jfr-exporter/releases/download/0.2.0/jfr-exporter-0.2.0.jar
curl -O -L https://github.com/perfana/afterburner/releases/download/2.3.0/afterburner-java-2.3.0-exec.jar

Start Afterburner with the JFR Exporter agent:

java -Xmx1024m -javaagent:jfr-exporter-0.2.0.jar=influxUrl=http://localhost:8086,application=afterburner -jar afterburner-java-2.3.0-exec.jar

Now go to your new JFR dashboard and you should see some data. 

Step-by-Step Instructions

To make things more interesting, start some load from another terminal/command line (keep Afterburner running):

watch "curl -s localhost:8080/memory/grow\?objects=1000\&size=1000"

The watch command calls the memory leak endpoint every 2 seconds until you press Ctrl-C. You can also open the URL in a browser and press refresh multiple times.

Set dashboard to “Last 15 minutes” and enable auto refresh to see live data coming in:

Check the garbage collection metrics:

And also the large memory allocations during the startup phase:

If you look at the “Allocation Stacktraces” panel in the Grafana dashboard (hover over the panel and press ‘v’ to get a bigger view), you see some big allocations during startup. Press the “eye” icon to see the full stack trace.

One of the stack traces looks as follows:

java.lang.AbstractStringBuilder.<init> (line: 101)
java.lang.StringBuilder.<init> (line: 106)
org.springframework.boot.loader.jar.JarFileEntries.getEntry (line: 250)
org.springframework.boot.loader.jar.JarURLConnection.get (line: 263)
org.springframework.boot.loader.jar.Handler.openConnection (line: 89)
java.net.URL.openConnection (line: 1258)
jdk.internal.loader.URLClassPath$Loader.getResource (line: 650)
java.net.URLClassLoader.findClass (line: 420)
java.lang.ClassLoader.loadClass (line: 593)
org.springframework.boot.loader.LaunchedURLClassLoader.loadClass (line: 151)
java.lang.ClassLoader.loadClass (line: 526)
kotlin.reflect.jvm.ReflectJvmMapping.getKotlinFunction (line: 146)
org.springframework.http.converter.json.Jackson2ObjectMapperBuilder.registerWellKnownModulesIfAvailable (line: 884)
org.springframework.http.converter.json.Jackson2ObjectMapperBuilder.configure (line: 718)
org.springframework.http.converter.json.Jackson2ObjectMapperBuilder.build (line: 701)
org.springframework.http.converter.json.MappingJackson2HttpMessageConverter.<init> (line: 59)
java.lang.Thread.runWith (line: 1596)

The stack trace hints at a combination of Spring Boot, Jackson JSON mapping and class loading via URLs. There is a byte array (byte[]) allocation of 135,512,400 bytes (~135 MB!) for loading jar file entries and creating a StringBuilder while doing so, from the background-preinit thread.

This is very detailed and useful information if you want to reduce the memory requirements of the Afterburner application: you get to see the place in the code where the allocation happened, what the size in bytes is and what data structure is allocated. This helps developers to investigate and solve issues straight away.

The fact that a 135 MB allocation shows up seems a bit over the top: that is a very large allocation for loading jar files. In fact, after further investigation, what you see here is a sample of allocations that is given a “weight” in bytes. The JFR specs describe these “jdk.ObjectAllocationSample” events as: “The relative weight of the sample. Aggregating the weights for a large number of samples, for a particular class, thread or stack trace, gives a statistically accurate representation of the allocation pressure.”

What you see here is the startup process of a Spring Boot application, with lots of class loading and jar inspections. This accumulates into a large allocation weight per sample. To speed up startup and to reduce the allocation pressure, you can apply techniques such as limiting the scanning of classes via the @ComponentScan annotation, as sketched below.
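
A minimal sketch of limiting the scan, assuming a standard Spring Boot application class; the class and package names are illustrative, not the actual Afterburner code, and scanBasePackages is the @SpringBootApplication alias for @ComponentScan’s basePackages:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Limit classpath scanning to the packages that actually contain components,
// instead of letting Spring scan everything below the default base package.
@SpringBootApplication(scanBasePackages = {
        "io.perfana.afterburner.controller",   // hypothetical sub-packages
        "io.perfana.afterburner.service"
})
public class AfterburnerApplication {

    public static void main(String[] args) {
        SpringApplication.run(AfterburnerApplication.class, args);
    }
}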

Be sure to stop the watch process, or the memory usage will continue to grow until you get errors like the following:

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "http-nio-8080-exec-8"

Note that we have set the maximum heap size of the Java process to 1 GB (-Xmx1024m) to limit what it can use. The heap usage graph shows the typical pattern of a memory leak:

If pressing Ctrl-C does not kill the Java process, try kill -9 on it.

Let’s look at another stack trace:

java.util.Calendar$Builder.build (line: 1497)
sun.util.locale.provider.CalendarProviderImpl.getInstance (line: 87)
java.util.Calendar.createCalendar (line: 1700)
java.util.Calendar.getInstance (line: 1630)
io.perfana.afterburner.domain.MusicScore.<init> (line: 12)
io.perfana.afterburner.domain.MusicMachineMemory.<init> (line: 15)
io.perfana.afterburner.controller.MemoryLeak.memoryGrow (line: 44)

A large allocation of 9,174,920 bytes (~9 MB), this time in the Afterburner code. The stack trace indicates it is happening in the Calendar$Builder.build method, which looks a bit suspicious. If we look up the specific line, there is indeed an object allocation there: new GregorianCalendar(zone, locale, true). Why would a Calendar (a date!) need a 9 MB allocation? Again, this is an aggregation of allocations, so expect this method to be called multiple times per sample.

So if you see a stack trace like this, you can solve it by not using the Calendar class. The code is in an application-specific class, so the source code is under our control: io.perfana.afterburner.domain.MusicScore. What can you change in this code to avoid the use of the Calendar class?

Our advice is to minimize the use of Calendar/GregorianCalendar and preferably switch to leaner alternatives, such as the java.time classes.
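
A minimal sketch of that switch, assuming the Calendar was only used to record a timestamp (we did not verify the actual Afterburner source, so treat this as an illustration of the java.time approach rather than a patch):

import java.time.Instant;

public class MusicScore {

    // Before (allocation-heavy): Calendar.getInstance() builds a full GregorianCalendar
    // private final Calendar created = Calendar.getInstance();

    // After (leaner): Instant is a small, immutable value object
    private final Instant created = Instant.now();

    public Instant getCreated() {
        return created;
    }
}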

While writing this blog we thought of a more specific allocation metric to find large allocations, one where the event is just the one big allocation. And we found one: the jdk.ObjectAllocationOutsideTLAB event. Most regular-sized objects are allocated within a Thread Local Allocation Buffer (TLAB). This is faster and leaner than allocation outside a TLAB, because each thread has its own TLAB and no coordination between threads is needed. But big allocations that do not fit take place outside the TLAB. And we would like to know about these, to see if we can avoid such big allocations, for example by making objects smaller.

The jfr-exporter now reports on these, and creates an event for objects larger than 32 kB. You can change this limit via the jfr-exporter arguments.
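
To get an intuition for which allocations end up outside the TLAB, here is a hypothetical example: a single multi-megabyte array does not fit in the current per-thread buffer, so it is allocated directly in the heap and produces a jdk.ObjectAllocationOutsideTLAB event:

public class OutsideTlabDemo {

    public static void main(String[] args) {
        // One large allocation: too big for a TLAB, so it is allocated outside it
        // and shows up in the large allocations panel when run with the JFR Exporter.
        byte[] big = new byte[8 * 1024 * 1024]; // 8 MB in a single array
        System.out.println("Allocated " + big.length + " bytes in one go");
    }
}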

Use this as an example to trigger a one-time large allocation:

curl localhost:8080/memory/churn\?objects=1818000

Look in the “large allocation” graph, and zoom into the time of the previous request:

You see many allocations happening at about the same time, roughly doubling in size. This is a pattern you will see more often, caused by the dynamic growth behavior of many classes in the JDK: when the backing storage (such as an array) runs out of free slots to store more items, a larger array is allocated and the contents are copied over. You often see these array allocations and array copies in the stack traces. In cases where it matters, this can be avoided by presizing the structures, for example by using the StringBuilder(int capacity) constructor, as shown below.
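
A small sketch of the difference; the sizes are made up for illustration, and ArrayList is used here as an example of a dynamically growing JDK class:

import java.util.ArrayList;
import java.util.List;

public class PresizeDemo {

    public static void main(String[] args) {
        int expectedItems = 100_000; // number of items known up front

        // Without presizing: the backing array is grown and copied repeatedly,
        // causing the staircase of ever larger array allocations seen in the graph.
        List<Integer> grown = new ArrayList<>();
        for (int i = 0; i < expectedItems; i++) {
            grown.add(i);
        }

        // With presizing: one backing array allocation, no intermediate copies.
        List<Integer> presized = new ArrayList<>(expectedItems);
        for (int i = 0; i < expectedItems; i++) {
            presized.add(i);
        }

        // Same idea for StringBuilder: give it the expected capacity up front.
        StringBuilder sb = new StringBuilder(64);
        sb.append("presized builders avoid intermediate char[] copies");

        System.out.println(grown.size() + " " + presized.size() + " " + sb.length());
    }
}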

Summary

We showed you how to use the Perfana JFR Exporter in its most basic setup. JDK Flight Recorder is built into the JVM at a low level that cannot be reached by other agents, and it provides many interesting metrics with extra information that is rarely found in other APM tools.

Having these metrics available in dashboards makes it straightforward to incorporate them in continuous performance validation with Perfana. For instance, set a requirement that large allocations should not exceed 1 MB.

Try it on your own applications and let us know what you find!

We are especially curious about the value and accuracy of the stack traces. Let us know when you find interesting and impactful cases; we are more than happy to hear your stories.

 

Further Resources

The jfr-exporter Github repository

JDK Flight Recorder

Java 17

Java Safepoints

TLAB

Peter Paul Bakker

Performance Consultant