Thursday, August 28, 2014

JDK 8: Revisiting ReservedCodeCacheSize and CompileThreshold

In [1], someone commented:
Did you specify any extra JVM parameters to reach the state of full CodeCache? Some comments on the internet indicate this happens if you specify too low "-XX:CompileThreshold" and too much bytecode gets compiled by HotSpot very early.

when the following warning was seen:
VM warning: CodeCache is full. Compiler has been disabled.

In this article, we will look at tuning CompileThreshold and ReservedCodeCacheSize in JDK 8.


CompileThreshold



By default, CompileThreshold is set to be 10,000:

     intx CompileThreshold     = 10000       {pd product}

As described in [2], we know {pd product} means "platform-dependent product option".  Our platform is linux-x64, and that is what will be used in this discussion.

Very often, you see people setting the threshold lower.  For example:
-XX:CompileThreshold=8000
Why?  Because the JIT compiler does not have time to compile every single method in an application, all code starts out running in the interpreter; once a method becomes hot enough, it gets scheduled for compilation. To help determine when to convert bytecodes to compiled code, every method has two counters:
  • Invocation counter
    • Which is incremented every time a method is entered
  • Backedge counter
    •  Which is incremented every time control flow moves from a higher bytecode index to a lower one
Whenever the interpreter increments either counter, it checks the value against a threshold; if the threshold is crossed, the interpreter requests a compilation of that method.

The threshold used for the invocation counter is called CompileThreshold; the backedge counter uses a more complex formula derived from CompileThreshold and OnStackReplacePercentage.  So, if you set the threshold lower, HotSpot compiles methods earlier, and in some cases that can help the performance of server code.
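The counter-and-threshold mechanism described above can be sketched in a few lines of Java.  This is purely an illustration: the class and method names are made up, and the backedge formula shown is a simplified stand-in for the more involved one HotSpot actually derives from its flags.

```java
// Hypothetical sketch of HotSpot's per-method hotness counters.
// Names and the simplified backedge formula are illustrative only.
class HotnessCounters {
    static final int COMPILE_THRESHOLD = 10_000;         // -XX:CompileThreshold
    static final int ON_STACK_REPLACE_PERCENTAGE = 140;  // -XX:OnStackReplacePercentage

    // Simplified: the real OSR threshold formula involves additional flags.
    static final int BACKEDGE_THRESHOLD =
            COMPILE_THRESHOLD * ON_STACK_REPLACE_PERCENTAGE / 100;

    int invocationCounter;  // bumped on every method entry
    int backedgeCounter;    // bumped on every backward branch

    /** "Interpreter" hook on method entry; true means a compile is requested. */
    boolean onMethodEntry() {
        return ++invocationCounter >= COMPILE_THRESHOLD;
    }

    /** "Interpreter" hook on a backward branch; true means an OSR compile is requested. */
    boolean onBackedge() {
        return ++backedgeCounter >= BACKEDGE_THRESHOLD;
    }
}
```

Lowering CompileThreshold simply makes `onMethodEntry` return true after fewer entries, which is why methods get compiled earlier.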

ReservedCodeCacheSize


The code cache is the area the JVM uses to store the native code generated for compiled methods.  As described in [3], to improve an application's performance, you can set the "reserved" code cache size:
  • -XX:ReservedCodeCacheSize=256m
when tiered compilation is enabled for HotSpot.  Basically, it sets the maximum size of the compiler's code cache.  In [4], we have shown that an application can run faster when tiered compilation is enabled in a server environment; however, the code cache size then also needs to be set larger.

What's New in JDK 8?


We have seen people setting the following JVM options:
  • -XX:ReservedCodeCacheSize=256m -XX:+TieredCompilation
or
  • -XX:CompileThreshold=8000 
in JDK 7.  In JDK 8, do we still need to set them?  The answer is that it depends on the platform.  On linux-x64 platforms, those settings are no longer necessary.  Here we will describe why.

JDK 8 chooses the following default values on linux-x64 platforms:

    bool TieredCompilation        = true       {pd product}     
    intx CompileThreshold         = 10000      {pd product}
    uintx ReservedCodeCacheSize   = 251658240  {pd product}


When tiered compilation is enabled, two things happen:
  1. CompileThreshold is ignored
  2. A bigger code cache is needed; internally, HotSpot sets it to 240 MB (i.e., 48 MB * 5)
That's why we say that people no longer need to set the following options in JDK 8:
  • -XX:ReservedCodeCacheSize=256m -XX:+TieredCompilation 
or
  • -XX:CompileThreshold=8000

Note that the "reserved" code cache is just an address-space reservation; it does not consume any additional physical memory unless it is actually used.  On 64-bit platforms, it doesn't hurt at all to set a higher value.  However, if you set the code cache size too small, you will definitely see a negative impact on your application's performance.

Acknowledgement


Some of the material here is based on feedback from Igor Veresov and Vladimir Kozlov; however, the author assumes full responsibility for the content.

References

  1. VM warning: CodeCache is full. Compiler has been disabled.
  2. HotSpot: What Does {pd product} Mean?  (Xml and More)
  3. Performance Tuning with Hotspot VM Option: -XX:+TieredCompilation (Xml and More)
  4. A Case Study of Using Tiered Compilation in HotSpot  (Xml and More)
  5. Useful JVM Flags – Part 4 (Heap Tuning)

Wednesday, August 27, 2014

JDK 8: UseCompressedClassPointers vs. UseCompressedOops

A new JVM option was introduced into JDK 8 after PermGen removal:
UseCompressedClassPointers
In this article, we will discuss the difference between UseCompressedOops and UseCompressedClassPointers.

Default Values


As described in [2], you can find out the default values of UseCompressedClassPointers and UseCompressedOops:

     bool UseCompressedClassPointers           := true   {lp64_product}
     bool UseCompressedOops                    := true   {lp64_product}

Our platform is linux-x64, and both options are set to true based on ergonomics.
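These values can also be read from inside a running JVM through the HotSpot diagnostic MXBean.  The `com.sun.management` API below is real, but HotSpot-specific, so this sketch assumes you are on a HotSpot-based JDK:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Query the current values of the two compression flags at run time.
class CompressedFlagCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        for (String flag : new String[] {"UseCompressedOops", "UseCompressedClassPointers"}) {
            // VMOption.getValue() returns the flag's current value as a String.
            System.out.println(flag + " = " + bean.getVMOption(flag).getValue());
        }
    }
}
```

On a default 64-bit HotSpot with a heap under about 32 GB, both lines should print true.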

UseCompressedOops vs. UseCompressedClassPointers


CompressedOops compresses pointers to objects in the Java heap.  Class data is no longer in the Java heap, and the compression of pointers to class data is controlled by the flag UseCompressedClassPointers.  In the next sections, we will discuss them in more detail.

Oops and Compressed Oops


Oops are "ordinary" object pointers: specifically, pointers into the GC-managed heap, implemented as native machine addresses rather than handles. Oops may be directly manipulated by compiled or interpreted Java code, because the GC knows about the liveness and location of oops within such code.  Oops can also be directly manipulated by short spans of C/C++ code, but such code must keep them in handles across every safepoint.

Compressed Oops represent managed pointers (in many but not all places in the JVM) as 32-bit values which must be scaled by a factor of 8 and added to a 64-bit base address to find the object they refer to in Java Heap.
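The decoding step described above is just a shift and an add; the base address and narrow pointer below are made-up numbers for illustration:

```java
// Decoding a compressed oop: the 32-bit value is scaled by 8 (shifted
// left by 3, the default object alignment) and added to a 64-bit base.
class CompressedOopDecode {
    static long decode(long heapBase, int narrowOop) {
        // Treat the 32-bit narrow oop as unsigned before scaling.
        return heapBase + ((narrowOop & 0xFFFF_FFFFL) << 3);
    }

    public static void main(String[] args) {
        long base = 0x0000_0008_0000_0000L; // hypothetical heap base
        int narrow = 0x1000;                // hypothetical compressed reference
        System.out.printf("0x%x%n", decode(base, narrow)); // base + 0x1000 * 8
    }
}
```

Since a 32-bit value scaled by 8 can span at most 32 GB, this scheme also explains why compressed oops only work for heaps up to roughly that size.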

Compressed Class Pointers


Each object holds (in its second word) a pointer to its VM metadata class, and that pointer can be compressed.  If compressed, it is decoded using a base that points to the Compressed Class Pointer Space.

Before we continue, you need to know what Metaspace and the Compressed Class Pointer Space are. The Compressed Class Pointer Space (which is logically part of Metaspace) was introduced for 64-bit platforms. Whereas the Compressed Class Pointer Space contains only class metadata, Metaspace contains all other (potentially large) class metadata, including methods, bytecode, etc.

For 64-bit platforms, the default behavior is to use compressed (32-bit) object pointers (-XX:+UseCompressedOops) and compressed (32-bit) class pointers (-XX:+UseCompressedClassPointers).  You can modify these defaults if you like; when you do, be warned that there is a dependency between the two options: UseCompressedOops must be on for UseCompressedClassPointers to be on.

To summarize, the differences between Metaspace and the Compressed Class Pointer Space are: [3]
  • Compressed Class Pointer Space contains only class metadata
    • InstanceKlass, ArrayKlass
      • Only when UseCompressedClassPointers is true
      • These include Java virtual tables for performance reasons
  • Metaspace contains all other class metadata that can be large.
    • Methods, Bytecodes, ConstantPool ...

 

References

  1. HotSpot: Monitoring and Tuning Metaspace in JDK 8 (Xml and More)
  2. What Are the Default HotSpot JVM Values?
  3. Metaspace in Java 8 (good)  
  4. HotSpot Glossary of Terms