Emerging Trends in Java Garbage Collection

Efficient garbage collection is essential to Java application performance. What worked well in 1995, when Java was first released, won’t cope with the high demands of modern computing. To stay ahead of the game, you need to make sure you’re using the best GC algorithm for your application.
In this article, we’ll look at evolving trends in Java garbage collection, and take a quick look at what’s planned for the future.
Computing Trends
Any software that stands the test of time must evolve to keep up with technology. To understand how and why GC has changed over the years, we must look at overall trends in IT.
Hardware has progressively trended towards becoming smaller, cheaper, faster and more powerful. Multi-core CPUs have become the norm, and GPUs have taken over the load of graphics processing. In 1995, a typical PC had around 8MB of memory. Most laptops now have at least 8GB. Faster networks have enabled larger numbers of users to access IT systems directly.
On the one hand, cloud computing, big data and AI require huge memory and CPU capacities. On the other, smart devices and the IoT require small, ultra-efficient systems. Robotics, virtual reality and other real-time processes need fast and consistent response times.
Evolution of Java Garbage Collection
If you’re not yet familiar with how GC works, I recommend this article: Java Garbage Collection.
Initially, Java had a single, simple GC algorithm, suitable for the small memory sizes and single-core processors of that era. With today’s wide divergence in IT requirements, from the very large to the very small, it’s no longer one-size-fits-all. Java now has six different GC algorithms (or seven, if you count Epsilon, used only for benchmarking).
Each was written with a particular purpose in mind, and most of them still have a place today. Let’s have a brief look at them.
Serial: The original algorithm. It’s single-threaded, and all GC is stop-the-world, resulting in high latency. You may still use it where the heap size is very small, or where you’re running in a small container or on a small device where multiple cores are not available.
Parallel: Designed to take advantage of multi-core CPUs, it became the default collector in Java 5. It has much better throughput than Serial, but latency is still high. It suits processes where latency is not important; large batch jobs may perform well with this algorithm.
CMS (Concurrent Mark Sweep): Released in Java 1.4, it reduced latency by doing much of the work concurrently. It didn’t deal well with compaction, however, and was deprecated in Java 9 and removed in Java 14.
G1: Released in Java 7 to provide a good balance between latency and throughput. The heap is divided into equal-sized regions, each collected separately. Most of the work is done concurrently. You can configure pause-time goals, making it simple to tune. It has been the default since Java 9, and is usually the best choice if the heap size is less than 32GB.
Z: Introduced as an experimental collector in Java 11 and production-ready since Java 15, it aims to provide low latency and good scalability. Almost all the work is done concurrently, and it can scale to heap sizes in the terabytes. It’s ideal for cloud computing and big data. Originally non-generational, it has offered generational GC since Java 21. This brought a large improvement in performance, at the expense of a bigger memory footprint.
Shenandoah: Developed by Red Hat, it had much the same objectives as Z. It’s also good for very large heap sizes and applications requiring low latency.
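The collector is chosen with a JVM flag at startup: `-XX:+UseSerialGC`, `-XX:+UseParallelGC`, `-XX:+UseG1GC`, `-XX:+UseZGC` or `-XX:+UseShenandoahGC` (and `-XX:MaxGCPauseMillis` to set a pause-time goal under G1). As a quick sanity check that the flag took effect, the standard `GarbageCollectorMXBean` API reports the active collectors by name. A minimal sketch (the class name `WhichGc` is illustrative):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the garbage collectors active in the current JVM.
// Run with, say, `java -XX:+UseZGC WhichGc` and the reported
// names change with the chosen algorithm.
public class WhichGc {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // Under G1 this typically lists "G1 Young Generation"
            // and "G1 Old Generation"; other collectors use their own names.
            System.out.println(gc.getName());
        }
    }
}
```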
Future Planned Developments
Developers are currently working on:
More features for Z GC;
Throughput improvements for G1 GC;
Footprint reduction in all algorithms;
A new algorithm, N2GC, which is an extension of G1. Suitable for BigData and other large caches, it aims to reduce latency and fragmentation.
Monitoring and Tuning
It’s a good idea to evaluate and compare at least two algorithms for high-performance applications. A useful tool for evaluating GC efficiency is GCeasy, which analyses GC log files and reports metrics such as pause-time distribution, throughput and memory trends.
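Log analysers such as GCeasy work from GC logs, which since Java 9 are enabled with the unified-logging flag `-Xlog:gc*:file=gc.log`. For a rough first comparison before deeper log analysis, you can also read the JVM’s own counters around a representative workload. A sketch under illustrative assumptions (the class name `GcCost` and the allocation loop are made up for the example; the counter API is the standard `java.lang.management` one):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

// Measures GC count and accumulated GC time around an
// allocation-heavy workload. Running it under different
// collectors (-XX:+UseG1GC, -XX:+UseZGC, ...) gives a rough
// first comparison.
public class GcCost {
    public static void main(String[] args) {
        long countBefore = totalCollections();
        long timeBefore = totalTimeMillis();

        // Illustrative workload: churn through short-lived objects,
        // keeping ~1% alive to exercise promotion to older generations.
        List<byte[]> survivors = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            byte[] chunk = new byte[1024];
            if (i % 100 == 0) {
                survivors.add(chunk);
            }
        }

        System.out.printf("collections: %d, gc time: %d ms (kept %d)%n",
                totalCollections() - countBefore,
                totalTimeMillis() - timeBefore,
                survivors.size());
    }

    static long totalCollections() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            total += Math.max(0, gc.getCollectionCount()); // -1 means unsupported
        }
        return total;
    }

    static long totalTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            total += Math.max(0, gc.getCollectionTime()); // -1 means unsupported
        }
        return total;
    }
}
```

These counters only capture coarse totals; for pause-time distributions and latency percentiles you still need full GC logs.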
Conclusion
Java GC must follow general trends in computing to remain competitive. Coping with large heap sizes and delivering low latency are the key requirements today.
The OpenJDK developers work continually to improve performance and cater for new trends. GC efficiency has improved enormously since Java 8, so upgrading to the latest version of Java can bring significant performance gains.