Troubleshooting High CPU Usage in Production Java Applications
A systematic approach to diagnosing and resolving high CPU usage issues in production Java systems, based on real-world experience.
Overview
One of the projects I was responsible for used iReport + JasperReports to implement a printing system. Recently, the production system frequently became unresponsive. Restarting the service would temporarily resolve the issue, but the problem kept recurring.
After investigation, the root cause turned out to be JasperReports consuming excessive memory, which triggered continuous garbage collection (GC) and caused CPU usage to spike dramatically.
This article documents the diagnostic process and methodology used to identify the issue.
Environment
- Tomcat 7
- JDK 7
- Linux
Investigation Process
1. Check Application Logs
First, I reviewed the application logs. All requests appeared normal, and no exceptions were reported. This suggested that the problem was likely related to system resource exhaustion rather than functional errors.
2. Inspect System Resource Usage
Use the top command to observe CPU and memory usage:
```
top
```
Example output:
```
Cpu(s): 57.6%us, 6.3%sy, 9.2%id, 26.2%wa
Mem: 3922928k total, 3794232k used, 128696k free
```
The Java process was consuming nearly 190% CPU and over 28% memory.
3. Identify High-CPU Threads
```
ps -mp <pid> -o THREAD,tid,time
```
This revealed two threads consistently consuming around 45% CPU each.
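Per-thread CPU time can also be read from inside the JVM through the standard ThreadMXBean API. It covers Java threads only, which is itself useful information: if the process is busy but no Java thread accounts for the load, the work is happening in native JVM threads. A minimal sketch (the class name is illustrative):
```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BusyJavaThreads {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isThreadCpuTimeSupported()) {
            System.out.println("Thread CPU time is not supported on this JVM");
            return;
        }
        // CPU time per live Java thread. Native workers such as the ParallelGC
        // threads never appear in this list, so high process CPU combined with
        // quiet Java threads points at GC or other native JVM activity.
        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            long cpuNanos = threads.getThreadCpuTime(id);
            if (info == null || cpuNanos < 0) {
                continue; // thread already terminated, or CPU time unavailable
            }
            System.out.printf("%-40s %8d ms%n", info.getThreadName(), cpuNanos / 1000000);
        }
    }
}
```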
4. Convert Thread ID to Hexadecimal
```
printf "%x\n" <tid>
```
5. Capture Thread Stack Traces
```
jstack <pid> | grep <hex_tid>
```
Sample output:
```
"GC task thread#0 (ParallelGC)" runnable
"GC task thread#1 (ParallelGC)" runnable
```
These were clearly GC threads, indicating excessive garbage collection.
6. Analyze JVM Memory Usage
```
jstat -gcutil <pid> 2000 10
```
Output showed:
- Young generation usage: 100%
- Old generation usage: 100%
- Full GC count increasing rapidly
This confirmed severe memory pressure.
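The counters that jstat samples externally are also exposed in-process by java.lang.management, which makes it easy to hook the same numbers into an existing health check. A minimal sketch (the class name is illustrative):
```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class GcPressureProbe {
    public static void main(String[] args) {
        // Heap usage as the JVM itself sees it (roughly what jstat -gcutil reports).
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("heap used: %d MB of %d MB%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));

        // Cumulative collection counts and times per collector (young and old generations).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```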
Heap Dump Analysis
To further analyze memory usage, a heap dump was generated:
```
jmap -dump:format=b,file=dump.bin <pid>
```
The dump was analyzed locally using VisualVM.
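When running jmap against the process is not possible (for example in restricted environments), a HotSpot JVM such as the JDK 7 runtime used here can write the same binary dump through its diagnostic MBean. A hedged sketch; the output path is only an example:
```java
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Proxy to the HotSpot diagnostic MBean exposed by the running JVM.
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // Write an HPROF binary dump; "true" restricts it to live objects,
        // comparable to jmap -dump:live.
        diagnostic.dumpHeap("/tmp/dump.bin", true);
    }
}
```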
Findings:
- net.sf.jasperreports.engine.fill.JRTemplatePrintText objects dominated memory usage
- These objects accounted for over 58% of heap memory
Root Cause
JasperReports was creating an excessive number of objects during report rendering. When memory became insufficient, the JVM continuously triggered Full GC, causing CPU usage to skyrocket.
Resolution
There is no perfect fix without modifying JasperReports internals, but the issue can be mitigated by:
- Disabling Print When Detail Overflows
- Enabling JasperReports Virtualizer to offload memory to disk
Example Virtualizer usage:
```java
JRVirtualizer virtualizer = new JRFileVirtualizer(100, "/tmp");
```
These changes significantly reduced memory pressure and stabilized CPU usage.
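The virtualizer only takes effect once it is registered as a fill parameter. The sketch below shows one way to wire it in, assuming the report is filled through JasperFillManager; the report path and the empty data source are placeholders for the real inputs:
```java
import java.util.HashMap;
import java.util.Map;

import net.sf.jasperreports.engine.JREmptyDataSource;
import net.sf.jasperreports.engine.JRParameter;
import net.sf.jasperreports.engine.JRVirtualizer;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.fill.JRFileVirtualizer;

public class VirtualizedReportFill {
    public static void main(String[] args) throws Exception {
        // Keep at most 100 report pages in memory; swap the rest out to files under /tmp.
        JRVirtualizer virtualizer = new JRFileVirtualizer(100, "/tmp");
        try {
            Map<String, Object> parameters = new HashMap<String, Object>();
            // Registering the virtualizer as a fill parameter is what activates it.
            parameters.put(JRParameter.REPORT_VIRTUALIZER, virtualizer);

            // "report.jasper" and the empty data source are placeholders for the real inputs.
            JasperPrint print = JasperFillManager.fillReport(
                    "report.jasper", parameters, new JREmptyDataSource());

            // ... export or print the JasperPrint here, while the virtualizer is still alive ...
        } finally {
            // Delete the temporary swap files once filling and exporting are done.
            virtualizer.cleanup();
        }
    }
}
```
The first constructor argument caps how many filled pages are kept in memory at once; lowering it reduces heap usage at the cost of more disk I/O.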
Lessons Learned
- High CPU usage often originates from memory issues
- GC threads can dominate CPU under memory pressure
- JVM diagnostic tools (top, jstack, jstat, jmap) are essential
- Heap dump analysis provides decisive evidence
Conclusion
This case reinforced the importance of understanding JVM internals and having a structured troubleshooting methodology.
Performance issues rarely resolve themselves—systematic analysis is the only reliable way to identify and fix them in production environments.