Sunday, January 29, 2012

How to Avoid Deadlocks

Deadlock can be illustrated by the classic dining philosophers problem[1].

In an operating system, a deadlock[2] is a situation that occurs when a process enters a waiting state because a resource it has requested is being held by another waiting process, which in turn is waiting for yet another resource. If a process is unable to change its state indefinitely because the resources it has requested are being used by other waiting processes, the system is said to be in a deadlock.

In Java programming, deadlocks may manifest themselves where multiple threads wait forever due to a cyclic locking dependency.  For example, when thread A holds lock L and tries to acquire lock M, but at the same time thread B holds M and tries to acquire L, both threads will wait forever.  Just as threads can deadlock when they are each waiting for a lock that the other holds and will not release, they can also deadlock when waiting for resources. Compared to deadlock, starvation and livelock are much less common problems, but they are still problems that every designer of concurrent software is likely to encounter. Java applications do not recover from deadlock, so it is worthwhile to ensure that your design precludes the conditions that could cause it[6].

In this article, we will discuss some threading best practices to avoid deadlocks.

Necessary Conditions

A deadlock situation can arise only if all of the following conditions hold simultaneously in a system:[3]
  1. Mutual Exclusion: At least one resource must be non-shareable.  Only one process can use the resource at any given instant of time.
  2. Hold and Wait: A process is currently holding at least one resource while requesting additional resources that are held by other processes.
  3. No Preemption: The operating system cannot forcibly de-allocate resources once they have been allocated.  They must be released voluntarily by the holding process.
  4. Circular Wait: A process is waiting for a resource held by another process, which in turn is waiting for yet another process to release a resource, forming a cycle.
Breaking any one of these conditions is enough to preclude a deadlock from occurring.

How to Avoid Deadlocks

Never cede control to the client within a synchronized method or block[4]

Invoking an alien method with a lock held is asking for liveness trouble. The alien method might acquire other locks (risking deadlock) or block for an unexpectedly long time, stalling other threads that need the lock you hold.

In other words, inside a synchronized region, do not invoke a method that is designed to be overridden, or one provided by a client in the form of a function object (i.e., object references used to implement the Strategy pattern).  The class has no knowledge of what the alien method does and has no control over it. If you do call an alien method from within a synchronized region, you open the door to deadlock by allowing the following conditions to hold:
  • Hold and Wait
  • Circular Wait
Because you call the alien method from a synchronized region, it runs while holding a lock. If this alien method is overridden and engages the services of another thread to do its work, the Hold and Wait condition is established.  Given another such hold-and-wait counterpart, a Circular Wait condition can form.

Calling a method with no locks held is called an open call[6], and classes that rely on open calls are better-behaved and more composable than classes that make calls with locks held.
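As a sketch of this idea, the hypothetical listener class below makes an open call: it snapshots the listener list while holding its lock, then invokes the alien onEvent method with no locks held (the names are illustrative, not taken from the sources):

```java
import java.util.ArrayList;
import java.util.List;

class EventSource {
    interface Listener { void onEvent(String event); }

    private final List<Listener> listeners = new ArrayList<>();

    synchronized void register(Listener l) { listeners.add(l); }

    // Open call: copy the shared state under the lock, then call the
    // alien method outside the synchronized block
    void fireEvent(String event) {
        List<Listener> snapshot;
        synchronized (this) {
            snapshot = new ArrayList<>(listeners);
        }
        for (Listener l : snapshot) {
            l.onEvent(event);  // invoked with no locks held
        }
    }
}
```

Even if a listener blocks or tries to acquire other locks, it can no longer stall threads waiting on the EventSource lock.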

More generally, try to limit the amount of work that you do from within synchronized regions. When you are designing a mutable class, think about whether it should do its own synchronization. Synchronize your class internally only if there is a good reason to do so, and document your decision clearly.

Ensure that resources are always acquired in some well-defined order[7]

Deadlocks can be avoided by assigning a partial order to the resources, and establishing the convention that all resources will be requested in order, and released in reverse order, and that no two resources unrelated by order will ever be used by a single unit of work at the same time.  This solution to the dining philosophers problem was originally proposed by Dijkstra.

There are different approaches to enforce a partial order to the resources:
  1. If resources are static and known in advance
    • Just stick to a programming policy (i.e., all programmers apply the policy of acquiring the locks in some well-defined order)
    • Provide a method that combines acquisition and release of the multiple locks in the correct way (see here for an example)
  2. If resources are not known in advance or are acquired dynamically
    • We have to explicitly define some way of ordering them. For example,
      • If locking will only exist for the lifetime of the application
        • We can use the identity hash code (System.identityHashCode(r)) of the two objects.
        • In the rare case that two objects have the same hash code, we must use an arbitrary means of ordering the lock acquisitions, and this reintroduces the possibility of deadlock. To prevent inconsistent lock ordering in this case, a third tie breaking lock can be used[6].
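A sketch of the identity-hash-code technique with a tie-breaking lock, loosely adapted from the approach described in [6] (the Account class here is hypothetical, used only for illustration):

```java
// Dynamic lock ordering via System.identityHashCode with a tie-breaking lock
class Account {
    private long balance;
    Account(long balance) { this.balance = balance; }
    long getBalance() { return balance; }
    void debit(long amount)  { balance -= amount; }
    void credit(long amount) { balance += amount; }
}

class Transfers {
    // Used only in the rare case that the two hash codes collide
    private static final Object tieLock = new Object();

    static void transfer(Account from, Account to, long amount) {
        int fromHash = System.identityHashCode(from);
        int toHash   = System.identityHashCode(to);

        if (fromHash < toHash) {
            synchronized (from) {
                synchronized (to) { from.debit(amount); to.credit(amount); }
            }
        } else if (fromHash > toHash) {
            synchronized (to) {
                synchronized (from) { from.debit(amount); to.credit(amount); }
            }
        } else {
            // Hash collision: serialize through the global tie-breaking lock
            synchronized (tieLock) {
                synchronized (from) {
                    synchronized (to) { from.debit(amount); to.credit(amount); }
                }
            }
        }
    }
}
```

Because every thread acquires the two account locks in the same global order, the Circular Wait condition cannot arise.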
Removing the mutual exclusion condition

Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled. But even with spooled resources, deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.

For example, optimistic concurrency control[9] uses a pair of consistency markers in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again.
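A minimal Java sketch of this pair-of-markers scheme, assuming a single writer; the fields are volatile to keep the sketch honest about visibility, though production code would need further memory-model care:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Pair of consistency markers: the writer bumps marker1 before writing and
// marker2 after; a reader sees equal markers only when the data is stable.
class MarkedBox {
    private final AtomicInteger marker1 = new AtomicInteger(0);
    private final AtomicInteger marker2 = new AtomicInteger(0);
    private volatile int x, y;

    void write(int newX, int newY) {   // single writer assumed
        marker1.incrementAndGet();     // markers now differ: write in progress
        x = newX;
        y = newY;
        marker2.incrementAndGet();     // markers equal again: stable
    }

    int[] read() {
        while (true) {
            int m2 = marker2.get();            // read one marker
            int rx = x, ry = y;                // copy data into a local buffer
            int m1 = marker1.get();            // read the other marker
            if (m1 == m2)                      // identical markers: consistent
                return new int[] { rx, ry };
            // Markers differ: an update intervened; discard and retry
        }
    }
}
```

No reader ever blocks a writer, so the mutual exclusion condition never holds for readers.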

Finally, immutability is great for multi-threading. Instances of an immutable class are effectively constant, so no external synchronization is necessary. Examples include String, Long, and BigInteger.

Using timed lock acquisition to acquire multiple locks[6]

Another technique for detecting and recovering from deadlocks is to use the timed tryLock feature of the explicit Lock classes instead of intrinsic locking.

If a lock acquisition times out, you can release the locks, back off and wait for a while, and try again, possibly clearing the deadlock condition and allowing the program to recover. However, this technique works only when the two locks are acquired together; if multiple locks are acquired due to the nesting of method calls, you cannot just release the outer lock, even if you know you hold it.
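The pattern might be sketched as follows with java.util.concurrent.locks; the helper name, timeouts, and backoff interval are made up for illustration:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;

// Timed, polled acquisition of two locks with backoff between attempts
class TimedLocking {
    static boolean withBothLocks(Lock first, Lock second, Runnable action,
                                 long timeoutMillis) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        while (System.nanoTime() < deadline) {
            if (first.tryLock(10, TimeUnit.MILLISECONDS)) {
                try {
                    if (second.tryLock(10, TimeUnit.MILLISECONDS)) {
                        try {
                            action.run();
                            return true;
                        } finally { second.unlock(); }
                    }
                } finally { first.unlock(); }
            }
            // Could not get both locks: release, back off, and retry,
            // possibly clearing a deadlock with another thread
            Thread.sleep(5);
        }
        return false;  // gave up within the overall deadline
    }
}
```

Note that both tryLock calls sit in the same stack frame, so both locks can be released before backing off, which is exactly the situation where this technique applies.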


We conclude this article by quoting Goetz et al.[6]:
Like many other concurrency hazards, deadlocks rarely manifest themselves immediately. The fact that a class has a potential deadlock doesn’t mean that it ever will deadlock, just that it can. When deadlocks do manifest themselves, it is often at the worst possible time—under heavy production load.
  1. Dining philosophers problem
  2. Deadlock
  3. Advanced Synchronization in Java Threads by Scott Oaks and Henry Wong
  4. Effective Java by Joshua Bloch
  5. Threading Best Practices
  6. Java Concurrency in Practice by Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes and Doug Lea
  7. Deadlock Tutorial
  8. The Java Tutorial (Concurrency)
  9. Optimistic concurrency control
  10. Concurrency: State Models & Java Programs (2nd Edition), by Jeff Magee and Jeff Kramer.
  11. Java Concurrent Animated
  12. Analyze Hanging Programs Using Java Thread Traces (Xml and More)

Sunday, January 22, 2012

Volatile Keyword in Java

Multithreading is one of the most important software technologies for boosting the performance and scalability of all types of software.  However, it comes at a price: complexity.  The pain of concurrent programming can be alleviated by a framework like Hadoop.  However, as a Java programmer using threads, you need to deal with three intertwined issues:
  1. Atomicity
  2. Visibility
  3. Ordering
Working Memory

The Java Memory Model defines an abstract relation between threads and main memory. Every thread is defined to have a working memory (an abstraction of caches and registers) in which to store values. The model guarantees a few properties surrounding the interactions of instruction sequences corresponding to methods and memory cells corresponding to fields. Most rules are phrased in terms of when values must be transferred between the main memory and per-thread working memory.

Common variables such as instance fields, static fields and array elements in heap memory can be shared between threads.  At any time, these variables can be kept in one of the following locations:
  • Register
  • L1-L3 caches
  • Main memory
  • Hard disk (passivation and activation of Java Beans; paging or swapping)
When multiple threads are all running unsynchronized code that reads and writes common variables, the following execution patterns may result:
  • Arbitrary interleavings
  • Atomicity failures
  • Race conditions
  • Visibility failures

The language specification guarantees that reading or writing a single variable is atomic unless the variable is of type long or double.  This includes fields serving as references to other objects.  In other words, this implies that every thread accessing a field of any type except long or double will read its current value before continuing, instead of (potentially) using a cached value. This atomicity guarantee can be extended to longs or doubles, if you declare them volatile.  We'll discuss this more later.
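For example, a 64-bit counter read by many threads can be declared volatile so that readers never observe a half-written value (a hypothetical sketch; note that the increment itself remains a non-atomic read-modify-write):

```java
// Without volatile, a write to a long may be performed as two separate
// 32-bit writes on some JVMs, so a concurrent reader could observe a
// "torn" value. Declaring the field volatile restores single-variable
// atomicity (and adds visibility guarantees).
class HitCounter {
    private volatile long hits;            // atomic reads and writes guaranteed

    void record() { hits = hits + 1; }     // NOTE: still not an atomic increment
    long current() { return hits; }
}
```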


Visibility discusses under what conditions the effects of one thread are visible to another. The effects of interest here are writes to fields, as seen via reads of those fields.  While the atomicity guarantee ensures that a thread will not see a random value when reading atomic data, it does not guarantee that a value written by one thread will be visible to another:
  • Synchronization is required for reliable communication between threads as well as for mutual exclusion[8]
If a variable is declared volatile, this signals that the variable will be accessed by multiple threads, and also gives visibility guarantees.


Ordering describes under what conditions the effects of operations can appear out of order to any given thread. The main ordering issues surround reads and writes associated with sequences of assignment statements.

If a program has no data races, then all executions of the program will appear to be sequentially consistent.  If the JLS were to use sequential consistency as its memory model, many compiler and processor optimizations would be illegal.  For example, the JLS allows independent statements to be reordered.
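A minimal hypothetical illustration of such a reordering opportunity: the two writes below touch independent fields, so the compiler or CPU is free to perform them in either order, since single-threaded semantics are unaffected:

```java
class Reorder {
    int a, b;

    void writer() {
        a = 1;   // these two writes have no data dependency, so the
        b = 2;   // JIT or CPU may execute them in either order
    }
}
```

A thread calling writer() always observes a == 1 and b == 2 afterward; only another, unsynchronized thread could observe the reordered intermediate states.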

This provides essential flexibility for compilers and machines. Exploitation of such opportunities (via pipelined superscalar CPUs, multilevel caches, load/store balancing, interprocedural register allocation, and so on) is responsible for a significant amount of the massive improvements in execution speed seen in computing over the past decade.

In other words, not only may concurrent executions be interleaved, but they may also be reordered and otherwise manipulated in an optimized form that bears little resemblance to their source code. As compiler and run-time technology matures and multiprocessors become more prevalent, such phenomena become more common. They can lead to surprising results for programmers with backgrounds in sequential programming who have never been exposed to the underlying execution properties of allegedly sequential code. This can be the source of subtle concurrent programming errors. In almost all cases, there is an obvious, simple way to avoid contemplation of all the complexities arising in concurrent programs due to optimized execution mechanics: Use synchronization. There are multiple ways to achieve synchronization.  Below we'll discuss using the volatile keyword in Java for limited cases.


In terms of atomicity, visibility, and ordering, declaring a field as volatile is nearly identical in effect to using a little fully synchronized class protecting only that field via get/set methods, as in:
final class VFloat { 
  private float value; 

  final synchronized void set(float f) { value = f; } 
  final synchronized float get() { return value; } 
}
This may involve low-level memory barrier machine instructions to keep value representations in sync across threads.  However, it involves no locking.  In the following sections, we will discuss the good occasions for using volatile and the dangers of misusing it.

When to Use Volatile

Declaring fields as volatile can be useful when you do not need locking for any other reason, yet values must be accurately accessible across multiple threads. This may occur when[5]:
  • The field need not obey any invariants with respect to others.
  • Writes to the field do not depend on its current value.
  • No thread ever writes an illegal value with respect to intended semantics.
  • The actions of readers do not depend on values of other non-volatile fields.
Below we provide some examples for such usages:

Use volatile in DCL

Double-checked locking (DCL) is OK as of Java 5 provided that you make the instance reference volatile.

// Works with acquire/release semantics for volatile in Java 5
class Foo {
  private volatile Helper helper = null;
  public Helper getHelper() {
    if (helper == null) {
      synchronized (this) {
        if (helper == null)
          helper = new Helper();
      }
    }
    return helper;
  }
}

In Java 5, the JLS ensures that the unsynchronized volatile read must happen after the write has taken place, and the reading thread will see the correct values of all fields of Helper.

Use volatile in Control Flag

// Cooperative thread termination with a volatile field
public class StopThread {
    private static volatile boolean stopRequested;

    public static void main(String[] args)
            throws InterruptedException {
        Thread backgroundThread = new Thread(new Runnable() {
            public void run() {
                int i = 0;
                while (!stopRequested)
                    i++;
            }
        });
        backgroundThread.start();

        Thread.sleep(1000);
        stopRequested = true;
    }
}

Volatile declarations on control flags are needed to ensure that the flag's value is visible across threads.

Dangers of Using volatile Keyword

Composite operations such as the "++" operation on volatile variables both read and write the variable.  So, they are not atomic.

// Need to add the synchronized modifier to the method accessing the variable.
// Once you’ve done that, you can and should remove the volatile modifier
// from nextSequenceNumber[8].
private static int nextSequenceNumber = 0;
public static synchronized int nextSequenceNumber() {
  return nextSequenceNumber++;
}

Ordering and visibility effects surround only the single access or update to the volatile field itself. Declaring a reference field as volatile does not ensure visibility of non-volatile fields that are accessed via this reference. Similarly, declaring an array field as volatile does not ensure visibility of its elements.   In other words, it is unsafe to call arr[x] = y on an array (even if declared volatile) in one thread and then expect arr[x] to return y from another thread.  See [1] for possible ways of fixing this issue.
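One possible fix, sketched below, is to use java.util.concurrent.atomic.AtomicIntegerArray, whose get and set provide volatile read/write semantics for each individual element:

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// Unlike a volatile reference to an ordinary int[], an AtomicIntegerArray
// makes each element access behave like a volatile read or write.
class SharedSlots {
    private final AtomicIntegerArray slots = new AtomicIntegerArray(16);

    void put(int i, int v) { slots.set(i, v); }   // volatile write of element i
    int take(int i)        { return slots.get(i); } // volatile read of element i
}
```

With this, a value written by one thread via put is guaranteed visible to another thread's subsequent take of the same index.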

Because no locking is involved, declaring fields as volatile is likely to be cheaper than using synchronization, or at least no more expensive. However, if volatile fields are accessed frequently inside methods, their use is likely to lead to slower performance than would locking the entire methods.

  1. Volatile Arrays in Java
  2. Dangers of Volatile Keyword
  3. The Volatile Keyword in Java 5
  4. The Volatile Keyword in Java
  5. Synchronization and Thread Safety in Java
  6. Double-Checked Locking and How to Fix it
  7. Synchronization and Java Memory Model
  8. Effective Java by Joshua Bloch
  9. The Java Language Specification

Saturday, January 21, 2012

Singleton Is Not as Simple as You Think

As Gamma et al.[1] put it, a singleton ensures a class only has one instance, and provides a global point of access to it.

Typically, when modelling a system, business objects come out of the nouns defined within a use case statement[2]. These business objects are acted upon by external actors such as the user of the system.

If you have only one file system or one window manager in the modelled system, these business objects are good candidates for singletons.

Creating and Destroying Objects

As a Java developer, one design consideration relates to creating and destroying objects.  Multiple design decisions need to be made on:
  • When and how to create objects
  • When and how to avoid creating them
  • How to ensure that objects are destroyed in a timely manner
  • How to manage any cleanup actions that must precede object destruction.
Using a singleton can simplify the above design decisions.  With it, a class can ensure that no other instance is created (by intercepting requests to create new objects), and it provides a way to access the sole instance.

However, as pointed out in this article, using singleton design pattern in a cluster environment is challenging.


A computer cluster consists of a set of loosely connected computers that work together.  If all nodes in the system use the same hardware, the system is called a cluster; if the nodes use different hardware, it is called a grid.  Technologies like clusters and grids aim at allowing access to large amounts of computing power in a fully virtualized manner, by aggregating resources and offering a single system view.  For example, Oracle RAC (Real Application Clusters) allows multiple computers to run the Oracle RDBMS software simultaneously while accessing a single database, thus providing a clustered database.

When a Fusion web application runs in a clustered environment, the node that the application is running on must be able to fail while the user is switched to a separate node without any impact.  The application server makes this possible by serializing the state that the application requires.  If a node goes down, the user is moved to a different node, and the state is automatically recovered.

Similarly, managed servers in an Oracle WebLogic domain may be grouped together in a cluster, running simultaneously and working together for increased scalability and reliability[9].  In such a cluster, most resources and services are deployed identically to each managed server, enabling fail-over and load-balancing.  Both active-active and active-passive failover patterns can be supported.  An active-active configuration requires an external load balancer to detect failure and automatically redirect requests.  An active-passive configuration provides a fully redundant instance of each node, which is only brought online when its associated primary node fails.

Multiple Singleton Implementations

Implementation 1

// Singleton with public final field
public class Singleton {
    public static final Singleton INSTANCE = new Singleton();
    private Singleton() { ... }
    public void doSomething() { ... }
}

As noted in [5], a privileged client can invoke the private constructor reflectively with the aid of the AccessibleObject.setAccessible method. If you need to defend against this attack, modify the constructor to make it throw an exception if it’s asked to create a second instance.

Implementation 2

// Singleton with static factory
public class Singleton {
    private static final Singleton INSTANCE = new Singleton();
    private Singleton() { ... }
    public static Singleton getInstance() { return INSTANCE; }
    public void doSomething() { ... }
}

One advantage of the factory-method approach is that it gives you the flexibility to change your mind about whether the class should be a singleton without changing its API. The factory method returns the sole instance but could easily be modified to return, say, a unique instance for each thread that invokes it.

Implementation 3

// Enum singleton
public enum Singleton {
  INSTANCE;
  public void doSomething() { ... }
}

In [5], Joshua Bloch states that this approach is functionally equivalent to the public field approach (i.e., Implementation 1), except that it is more concise, provides the serialization machinery for free, and provides an ironclad guarantee against multiple instantiation, even in the face of sophisticated serialization or reflection attacks. In his opinion, a single-element enum type is the best way to implement a singleton.

Implementation 4

// Singleton implemented with lazy initialization
public class Singleton {
  private static Singleton INSTANCE = null;
  protected Singleton() {
    // Exists only to defeat instantiation.
  }
  public static synchronized Singleton getInstance() {
    if (INSTANCE == null) {
      INSTANCE = new Singleton();
    }
    return INSTANCE;
  }
  public void doSomething() { ... }
}

Because synchronization is very expensive performance-wise (synchronized methods can run up to 100 times slower than unsynchronized methods), we can introduce a performance enhancement that only synchronizes the singleton assignment in getInstance()[4].

Implementation 5

// initialization-on-demand holder
public class Singleton {
  private Singleton() { }
  private static class LazyHolder {
    public static final Singleton INSTANCE = new Singleton();
  }
  public static Singleton getInstance() {
    return LazyHolder.INSTANCE;
  }
  public void doSomething() { ... }
}

The static nested class LazyHolder is only initialized when the static method getInstance is invoked on the class Singleton, and the first time this happens the JVM will load and initialize the LazyHolder class. The initialization of the LazyHolder class results in the static variable INSTANCE being initialized by executing the (private) constructor of the outer class Singleton.

Since the class initialization phase is guaranteed by the JLS to be serial, i.e., non-concurrent, no further synchronization is required in the static getInstance method during loading and initialization. And since the initialization phase writes the static variable INSTANCE in a serial operation, all subsequent concurrent invocations of the getInstance will return the same correctly initialized INSTANCE without incurring any additional synchronization overhead.

When is a Singleton not a Singleton?

In [3], Joshua Fox has shown that you can inadvertently allow more than one instance to be created under the following circumstances:
  • Multiple singletons in two or more virtual machines
  • Multiple singletons simultaneously loaded by different class loaders
  • Singleton classes destroyed by garbage collection, then reloaded
  • Purposely reloaded singleton classes
  • Multiple instances resulting from incorrect synchronization
  • Multiple singletons arising when someone has subclassed your singletons
  • Multiple singletons created by a factory specially asked to create multiple objects
  • Copies of a singleton object that has undergone serialization and deserialization
  • Multiple singletons caused by problems in a factory
As shown above, implementing singletons in Java is anything but simple. In [4], David Geary tells you how to test singleton classes by using JUnit and log4j.

Singleton in Cluster Environment

To support singletons in a cluster, you need to ask yourselves one question:
  • Is being "single" in the context of the running VM good enough?
If your application requires an absolutely single instance in a cluster, you need to read [3] to see how to avoid the pitfalls of creating multiple singletons.

In a cluster environment, application state needs to be serialized and preserved across multiple nodes.  To make a singleton class serializable, it is not sufficient merely to add implements Serializable to its declaration.  To maintain the singleton guarantee, you must also provide a readResolve method.  Otherwise, each deserialization of a serialized instance will result in the creation of a new instance.  To prevent this, add the following readResolve method to the Singleton class:

// readResolve method to preserve singleton property
private Object readResolve() throws ObjectStreamException {
  // Return the one true INSTANCE and let the garbage collector take care of the cloned one
  return INSTANCE;
}

In [7], a case is discussed that uses a singleton to cache some custom information from a database.  This information is mostly read-only but gets refreshed when some particular event occurs.  A question is asked:
  • When this singleton is deployed in a clustered environment, what is the best way to keep the cache in sync?
I'll just refer you to [7] for possible approaches.  However, in general, the approach depends on the object being cached.
  • If the object being cached is read-only or thread-safe, you simply make the field volatile.
  • If the cache object is mutable and not thread-safe, you have to make sure that any access to mutable fields in the object are synchronized.
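For the read-only case, the refresh can be sketched as publishing an immutable replacement snapshot through a volatile field (a hypothetical cache, not code from [7]):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Read-mostly cache refreshed by swapping in an immutable snapshot.
// The single volatile write publishes the new map safely to all readers.
class ConfigCache {
    private volatile Map<String, String> cache = Collections.emptyMap();

    // Called when the refresh event occurs
    void refresh(Map<String, String> fresh) {
        cache = Collections.unmodifiableMap(new HashMap<>(fresh));
    }

    String lookup(String key) {
        return cache.get(key);  // readers always see one consistent snapshot
    }
}
```

Because each snapshot is never mutated after publication, readers need no locking at all.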

  1. Design Patterns: Elements of Reusable Object-Oriented Software by Gamma et al.
  2. The Mysteries of Business Object
  3. When is a Singleton not a Singleton?
  4. Simply Singleton
  5. Effective Java by Joshua Bloch
  6. High-availability cluster
  7. Singleton in Cluster environment
  8. Initialization-on-demand holder idiom
  9. Oracle Service Bus Clustering for Developers by Jeff Davies

Sunday, January 15, 2012

Understanding Garbage Collection

Every Java programmer will encounter
  • java.lang.OutOfMemoryError
sooner or later.  The most often offered advice includes:
  • Try increasing MaxPermSize first
  • Increase the maximum Java heap size to 512m
In this article, we will show what the Java heap space and MaxPermSize are.  Using HotSpot as our JVM, we will walk through the basics of garbage collection and the JVM options that control heap sizing.
Garbage Collector

A garbage collector is responsible for
  • Allocating memory
  • Ensuring that any referenced objects remain in memory 
  • Recovering memory used by objects that are no longer reachable from references in executing code. 

The process of locating and removing dead objects can stall your Java application while consuming as much as 25 percent of throughput.

The GC provided by the HotSpot VM is a generational GC[9,16,18]: memory is divided into generations, that is, separate pools holding objects of different ages.  The design is based on the Weak Generational Hypothesis:
  1. Most newly-allocated objects die young
  2. There are few references from old to young objects

This separation into generations has proven effective at reducing garbage collection pause times and overall costs in a wide range of applications.  Hotspot VM divides its heap space into three generational spaces:
  1. Young Generation
    • When a Java application allocates Java objects, those objects are allocated in the young generation space
    • Is typically small and collected frequently
    • The garbage collection algorithm chosen for a young generation typically puts a premium on speed, since young generation collections are frequent. 
  2. Old Generation
    • Objects that are longer-lived are eventually promoted, or tenured, to the old generation
    • Is typically larger than the young generation, and its occupancy grows more slowly
    • The old generation is typically managed by an algorithm that is more space efficient, because the old generation takes up most of the heap and old generation algorithms have to work well with low garbage densities.
  3. Permanent Generation
    • Holds VM and Java class metadata as well as interned Strings and class static variables
    • Note that PermGen will be removed from Java heap in JDK 8.
Minor vs Full GC

A garbage collection occurs when any one of those three generational spaces is considered full and there is some request for additional space that is not available.  There are two types of garbage collection activities: minor and full.   When the young generation fills up, it triggers a minor collection in which the surviving objects are moved to the old generation. When the old generation fills up, it triggers a full collection which involves the entire object heap.

Minor GC
  • Occurs when the young generation space does not have enough room
  • Tends to be short in duration relative to full garbage collections
Full GC
  • When the old or permanent generation fills up, a full collection is typically done.
  • A Full GC can also be started explicitly by the application using System.gc()
  • Takes a longer time depending upon the heap size.  However, if it takes longer than 3 to 5 seconds, then it's too long[1].
Full garbage collections typically have the longest duration and as a result are the number one reason for applications not meeting their latency or throughput requirements.   The goal of GC tuning is to reduce the number and frequency of full garbage collections experienced by the application.  To achieve it, we can approach from two sides:
  • From the system side
    • You should use as large a heap size as possible without causing your system to "swap" pages to disk.  Typically, you should use 80 percent of the available RAM (not taken by the operating system or other processes) for your JVM[1].
    • The larger the Java heap space, the better the garbage collector and application perform when it comes to throughput and latency.
  • From the application side
    • A reduction in object allocations, more importantly, object retention helps reduce the live data size, which in turn helps the GC and application to perform.
    • Read this article -- Java Performance Tips [19].

The dreaded OutOfMemoryError is something that no Java programmer ever wishes to see. Yet it can happen, particularly if your application involves a large amount of data processing, or is long lived.
The total memory size of an application includes:
  • Java heap size
  • Thread stacks
  • I/O buffers
  • Memory allocated by native libraries
If an application runs out of memory and JVM GC fails to reclaim more object spaces, an OutOfMemoryError exception will be thrown.  An OutOfMemoryError does not necessarily imply a memory leak. The issue might simply be a configuration issue, for example if the specified heap size (or the default size if not specified) is insufficient for the application.

JVM Command Options

Whether running a client or server application, if your system is running low on heap and spending a lot of time with garbage collection you'll want to investigate adjusting your heap size. You also don't want to set your heap size too large and impact other applications running on the system.   

GC tuning is non-trivial.  Finding the optimal generation space sizes involves an iterative process[3,10,12].  Here we assume you have successfully identified the optimal heap space sizes for your application.  Then you can use the following JVM command options to set them:

GC Command Line Options
  • -Xms: Sets the initial and minimum size of the Java heap.  For example, -Xms512m (note that there is no "=").
  • -Xmx: Sets the maximum size of the Java heap.
  • -Xmn: Sets the initial, minimum, and maximum size of the young generation space.  Note that the size of the old generation space is implicitly set based on the size of the young generation space.
  • -XX:PermSize=<n>[g|m|k]: Sets the initial and minimum size of the permanent generation space.
  • -XX:MaxPermSize=<n>[g|m|k]: Sets the maximum size of the permanent generation space.
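Putting the flags together, a typical (hypothetical) server start-up line might look like the following; the sizes and the application jar name are made up for illustration:

```shell
# 512 MB initial / 2 GB max heap, 512 MB young generation,
# and an enlarged permanent generation
java -Xms512m -Xmx2g -Xmn512m \
     -XX:PermSize=128m -XX:MaxPermSize=256m \
     -jar myapp.jar
```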

As a final note, ergonomics for servers was first introduced in Java SE 5.0[13]. It has greatly reduced application tuning time for server applications, particularly with heap sizing and advanced GC tuning. In many cases, running on a server with no tuning options is the best tuning you can do.

  1. Tuning Java Virtual Machines (JVMs)
  2. Diagnosing Java.lang.OutOfMemoryError
  3. Java Performance by Charlie Hunt and Binu John
  4. Java HotSpot VM Options
  5. GCViewer (a free open source tool)
  6. Comparison Between Sun JDK And Oracle JRockit 
  7. Java SE 6 Performance White Paper 
  8. FAQ about Garbage Collection
  9. Hotspot Glossary of Terms 
  10. Memory Management in the Java HotSpot Virtual Machine White Paper
  11. Java Hotspot Garbage Collection 
  12. FAQ about GC in the Hotspot JVM  (with good details) 
  13. Java Heap Sizing: How do I size my Java heap correctly? 
  14. Java Tuning White Paper 
  15. JRockit JVM Heap Size Options
  16. Pick up performance with generational garbage collection
  17. Which JVM?
  18. Memory Management in the Java HotSpot™ Virtual Machine
  19. Java Performance Tips
  20. A Case Study of java.lang.OutOfMemoryError: GC overhead limit exceeded
  21. HotSpot—java.lang.OutOfMemoryError: PermGen space

Saturday, January 7, 2012

hashCode() and equals() in Java

A Java object's hash code is important to the performance of hash tables and other data structures that store objects in groups ("buckets") based on their computed hash values. A good implementation of hashCode() must:
  • Follow its general contract[1]
  • Generate values that are evenly distributed for varied inputs to avoid clustering
Overriding equals() and hashCode() in your class needs to be considered together. In this article, we'll examine why.

Object Identity vs. Logical Identity

  • Object identity
    • Two object references x and y are equal if and only if they refer to the same object. This is the default behavior of java.lang.Object.equals().
  • Logical identity
    • Two value objects referred to by x and y are equal if their values are identical, even though x and y may refer to different objects.

When to Override equals()

Your class needs to override the default object identity behavior if logical identity is its desired behavior and a superclass has not already overridden Object's default equals() implementation.

Override hashCode() If equals() Is Overridden

An object's hashCode() method digests the data stored in the object into a single hash value (a 32-bit signed integer). A hash value cannot be used as a unique identifier for an object, because the contract of hashCode() does not require it to produce distinct integer results for unequal objects. However, a good hash function tends to produce unequal hash values for unequal objects.

Always override hashCode() when you override equals(). Failure to do so will prevent your class from functioning properly in conjunction with all hash-based collections, including
  • HashMap
  • HashSet
  • Hashtable
    • Note that you should use HashMap instead of Hashtable for better performance.
The reason is that the general contract of hashCode() dictates that:
  • If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
If you override equals() by changing its behavior from object identity to logical identity, then your hashCode() must produce the same hash value for two logically identical objects. However, the default hashCode() is typically implemented by converting the internal address of the object into an integer. Therefore, the default hashCode() will return different hash values for two logically identical objects, which violates the general contract.
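
This failure mode is easy to reproduce. In the sketch below (the class Key is a made-up example, not from any library), equals() is overridden for logical identity but hashCode() is left at its identity-based default, so a HashSet treats two logically equal keys as distinct:

```java
import java.util.HashSet;
import java.util.Set;

class Key {
    final int id;
    Key(int id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        return o instanceof Key && ((Key) o).id == id;  // logical identity
    }
    // hashCode() NOT overridden: two equal Keys almost certainly hash
    // to different values, violating the general contract.
}

public class BrokenKeyDemo {
    public static void main(String[] args) {
        Set<Key> set = new HashSet<Key>();
        set.add(new Key(7));
        set.add(new Key(7));            // equal by equals(), but lands in another bucket
        System.out.println(set.size()); // almost always 2, not the expected 1
    }
}
```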

Queries on hash-based collections are expected to run in constant time. If a bad hash function is used, they can degrade to linear time: for example, if hashCode() always returns the same fixed value, a hash table essentially degenerates into a linked list.

As with any general hashing function, collisions are possible. When unequal objects produce the same hash value, they end up in the same bucket and must be distinguished by equals().


Best Practices

  • If a class overrides equals(), it must override hashCode()
  • equals() and hashCode() must use the same set of significant fields of your object
    • You can exclude redundant fields which can be computed from other significant fields that are included in the computation
    • If you exclude redundant fields, treat them the same way in both methods
  • If two objects are equal, then their hashCode values must be equal as well
    • However, unequal objects need not produce distinct hash values
  • If the object is immutable, then hashCode is a candidate for caching and lazy initialization
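
The checklist above can be sketched as a small value class. Point is a hypothetical example following the recipe popularized by Effective Java[3]; equals() and hashCode() are computed from the same significant fields, x and y:

```java
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;   // the same significant fields...
    }

    @Override
    public int hashCode() {
        int result = 17;
        result = 31 * result + x;      // ...are used here as well
        result = 31 * result + y;
        return result;
    }
}
```

Two Points with the same coordinates now produce the same hash value, so they collapse to a single entry in any hash-based collection.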

Read More

  1. java.lang.Object
  2. Equals and Hash Code in Java
  3. Effective Java by Joshua Bloch
  4. Performance tuning articles
  5. Java Performance Tips

IT Job Interview Questions


  1. What's the next number in this sequence: 10, 9, 60, 90, 70, 66 … ?
  2. Using only a four-minute hourglass and a seven-minute hourglass, measure exactly nine minutes—without the process taking longer than nine minutes.
  3.     +---+
        | 1 |
    | 2 | 3 | 4 |
    | 5 | 6 | 7 |
        | 8 |
    Rearrange the above numbers in the squares such that any adjacent numbers (for example, 1 and 2 or 6 and 7) are not next to each other horizontally, vertically, or diagonally. For example, in the above figure, 1 and 2 are next to each other diagonally, which violates the rule.
  4. Three people work together to figure out their average salary without telling anybody his/her own salary.


Answers

  1. Spell the numbers out (Ten, Nine, Sixty, Ninety, Seventy, Sixty-six). The next number, spelled out, must have nine letters: 96 (Ninety-six).
    • Lesson learned here: If you tried to solve the series by looking for patterns and failed, look at the question from different angles.
  2. Start both hourglasses at 0 minutes. Flip over the four-minute glass when it runs out (at 4:00); ditto for the seven-minute glass (at 7:00). When the four-minute glass runs out the second time (at 8:00), the seven-minute glass will then have one minute of sand in its lower bulb. Flip the seven-minute glass over again and let the minute of sand run back. When the last grain falls, that will be nine minutes.
    • Lesson learned here: 4 + 3 + 1 + 1. The important events are when a glass runs out of sand, so the decisions to make are when to flip a glass and which glass to flip.
  3.     +---+
        | 7 |
    | 3 | 1 | 4 |
    | 5 | 8 | 6 |
        | 2 |
    • Lesson learned here: Place 1 and 8 in the two center squares, since each has only one adjacent number (2 and 7, respectively). Then place 7 and 2 at the opposite ends, as far from 8 and 1 as possible. Finally, arrange 3 through 6 so that no two consecutive numbers touch.
  4. Do it this way:
    1. Each person chooses a random number and adds it to his/her salary.
    2. In the first run, each person passes the running total to the next person, who adds his/her own (salary + random number).
    3. Now we have the sum of the three salaries plus the three random numbers.
    4. In the second run, each person subtracts his/her random number from the total and passes it on.
    5. Now we have the sum of the three salaries. Just divide it by 3.
    Lesson learned here: Adding a random number masks each person's salary; the two runs then recover the true sum without revealing any individual value.
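
The two runs can be simulated in a few lines. The salaries and the offset range below are made-up values for illustration only:

```java
import java.util.Random;

public class AverageSalary {
    public static void main(String[] args) {
        double[] salaries = {50000, 70000, 90000};  // each known only to its owner
        double[] offsets = new double[salaries.length];
        Random rnd = new Random();
        for (int i = 0; i < offsets.length; i++) {
            offsets[i] = rnd.nextInt(1000000);      // each person's private mask
        }

        // First run: each person adds (salary + private random number) to the total.
        double total = 0;
        for (int i = 0; i < salaries.length; i++) {
            total += salaries[i] + offsets[i];
        }

        // Second run: each person removes his/her private random number.
        for (int i = 0; i < salaries.length; i++) {
            total -= offsets[i];
        }

        System.out.println("average = " + total / salaries.length);  // prints 70000.0
    }
}
```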


Read More

  1. Answers to Google Interview Questions
  2. What Recruiters Look At During The 6 Seconds They Spend On Your Resume
  3. The 10 Best Interview Questions to Ask
  4. Java Performance Tips
  5. Top 10 Collection Interview Questions Answers in Java
  6. HotSpot VM Performance Tuning Tips
  7. 30 Questions You Should And Shouldn't Ask In A Job Interview
  8. How to Promote Yourself Without Looking Like a Jerk

Tuesday, January 3, 2012

Starting the CPU Profiler in JDeveloper

JDeveloper offers two kinds of profilers: The CPU Profiler and the Memory Profiler, for local as well as remote profiling.
  • CPU Profiling enables you to identify the most expensive methods and threads in your program.
  • Memory profiling helps you to find out how your program is using the Java heap.
In this article, we will show you how to:
  • Set Options for the CPU Profiler
  • Start the CPU Profiler
CPU Profiler

The CPU Profiler tabulates and displays statistical data on the performance of your application. It enables you to profile your code in one of two modes:
  • Sample CPU Time
  • Method Call Count
Figure 1 shows the result of a method call count operation.

Figure 1. Hotspots view of the method call count operation

Figure 2. Call Stacks view of the method call count operation

The CPU Profiler displays data in two views:
  • Hotspots
  • Call Stacks
The information they display depends on the mode of operation.

In the method call count operation, these views show:
  • Hotspots
    • all the methods and the number of times they were called (see Figure 1)
  • Call Stacks
    • the Java platform methods called, sorted by thread group (see Figure 2)
In the time sampling mode of operation, these views show:
  • Hotspots
    • all Java platform methods and all methods they call, sorted by time usage
    • the cumulative amount of CPU time spent in each method
  • Call Stacks
    • the Java platform methods called in their call hierarchy

Setting Options for the CPU Profiler

You can specify if you want the Profiler to sample CPU time usage by your application, or to count method calls. Note that you can choose one mode at a time, not both.

To set CPU Profiler options:
  • In the navigator, double-click the project you want to profile to open the Project Properties dialog.
  • Click Run/Debug/Profiler to open the Project Properties - Run/Debug/Profile page.
  • Click Edit.
  • In the Edit Run Configuration dialog, set the options as desired on the Tool Settings - Profiler - CPU page. You can specify if you want the profiler to sample CPU time or count method calls. In Figure 3, we want the profiler to count method calls.
  • When finished, click OK to close the Edit Run Configuration dialog.

Figure 3. Edit Run Configuration Dialog

Starting the CPU Profiler

Starting a CPU profiling session will automatically run your program. Once the CPU profiler window is open (see Figure 1), you can begin a use case to profile your application.

To start the CPU Profiler:
  1. In the navigator, select the project you want to profile. For example, OsmPublicUi.
  2. From the main menu, choose Run > CPU Profile OsmPublicUi.jpr.
  3. The CPU Profiler opens and runs your application.
  4. Click the Begin Use Case icon to begin profiling a use case.

Figure 4. Starting the CPU Profiler

More Hints

If no default run target is specified in the Launch Settings page of the Edit Run Configuration dialog (Application menu > Project Properties > Run/Debug/Profile), the Choose Default Run Target dialog opens. Use this dialog to specify the default run target.

If you want to profile your application immediately when the profiler is launched, select the Begin Use Case on Application Startup checkbox on the Profiler page of the Edit Run Configuration dialog (see Figure 5).

If you want to analyze a specific method or class, enter its name in the Method Filter field (see Figure 5) when setting the profiler options.

  1. Oracle® Fusion Middleware User's Guide for Oracle JDeveloper 11g Release 2: Profiling a Project