Tuesday, December 25, 2012

How to Download WebLogic Upgrade Installer?

In our old Oracle Fusion Middleware installation, we have a WebLogic Server. See [1] for how to find the version of WebLogic Server.  Because we are testing a new feature in the HotSpot VM which requires WebLogic Server to work with, we need to upgrade our WebLogic Server installation to a newer patch release (10.3.6).

After some research, we have found [2] which provides the upgrade instructions and decided to give it a try.  In this article, we report the steps that we have used to download the WebLogic Upgrade Installer.

Types of Installers

In total, there are four types of WebLogic Server installers available:
  • OS-specific Package installer
  • Generic Package installer
  • Upgrade installer
  • Development-only and supplemental installers
You can read [4] for the detailed descriptions of these various types of installers.  For our purpose, what we need is an Upgrade installer.

Upgrade installers, as their names suggest, allow you to upgrade an existing WebLogic Server installation to a later patch release.  Note that if you have an existing WebLogic Server 10.3.0, 10.3.1, 10.3.2, or 10.3.3 installation that includes Workshop for WebLogic, and you want to use an Upgrade installer to upgrade that installation to WebLogic Server 10.3.6, you must uninstall Workshop for WebLogic before running the Upgrade installer.  See [5] for more information.

How to Download?

Our steps follow the instructions in [3], with minor modifications.  To download an Upgrade installer:
  1. Enter the My Oracle Support URL (https://support.oracle.com/) in a Web browser.
  2. Click Sign In and enter your My Oracle Support username and password.
  3. Select the Patches and Updates tab.
  4. In the Patch Search pane, click Product or Family (Advanced Search).
  5. From the Product Is: drop-down list, select Oracle WebLogic Server.
  6. From the Release Is: drop-down list, click the arrow next to the Oracle WebLogic Server folder, select the release you want to download, and click Close. (To upgrade to WebLogic Server 10.3.6, select WLS 10.3.6 from the list.)
  7. From the Platform drop-down list, select your platform and click Close. You can select multiple platforms. Selected platforms are indicated by a check mark.
  8. Click + next to Platform drop-down list. A new filter row appears and + changes to -.
  9. From the left operand drop-down list, select Description.  From the operator drop-down list, select contains.  In the right operand field, specify INSTALLER.
  10. Click Search. The search results list displays all available Upgrade installers for the selected release for all of the selected platforms.
  11. Click the Download button on the right to begin the download.
  12. Click Save.
  13. Browse to the directory where you want to save the installer, and click Save again to start the file download. A compressed file downloads for the selected platform.
  14. After the download completes, extract the compressed file, which contains only the appropriate installer executable for the selected platform.
Note that the typical description for an Upgrade installer patch includes the text INSTALLER, which is why we specified it as the Description filter above.

Finally, if you run into a download issue (for example, the download sits at 99% and never completes), try using a different browser (for example, Chrome instead of IE).


  1. How to find Oracle WebLogic Server Version?
  2. Oracle® Fusion Middleware Installation Guide for Oracle WebLogic Server 11g Release 1 (10.3.6)
  3. Downloading an Upgrade Installer From My Oracle Support
  4. Types of Installers
  5. Uninstalling the Software
  6. Professional Oracle WebLogic Server by Robert Patrick, Gregory Nyberg, and Philip Aston
  7. Oracle® Fusion Middleware Patching Guide 11g Release 1
    • Patching involves copying a small collection of files over an existing installation.
    • Upgrade involves moving from a previous major version to a new major version. For example, an upgrade would be required to move from Oracle Application Server 10g to Oracle Fusion Middleware 11g.
  8. Oracle Fusion Middleware 12c —Install, Patch, and Upgrade
  9. Oracle Products: What Patching, Migration, and Upgrade Mean? (Xml and More)

Thursday, December 20, 2012

umount: /dev/sdd: device is busy

This article is a follow-up on a previous article [1].  When we tried to unmount one of the file systems, it failed with the following messages:

# umount /dev/sdd
umount: /dev/sdd: device is busy
umount: /dev/sdd: device is busy

In the following sections, we will summarize what we have found.

What Was Keeping the Device Busy?

On Linux platforms, you can use the fuser command to find which processes have been keeping your device busy. The fuser command lists the process numbers of local processes that use the local or remote files specified by the File parameter. For block special devices, the command lists the processes that use any file on that device.  For example, here is what we have found with the fuser command:
# fuser -m /dev/sdd
/dev/sdd:            24381c 24393ce 24449ce

In the listing, each process number is followed by a letter indicating how the process uses the file [2]:
  • c — uses the file as the current directory
  • e — uses the file as a program's executable object
  • r — uses the file as the root directory
  • s — uses the file as a shared library (or other loadable object)

Which Application Is It, Given a Process Number?

To find which application a process belongs to, given its process number, you can use:

#  ps auxw|grep  24381
oracle   24381  0.0  0.0  63868  1160 ?        S    Dec19   0:00 /bin/sh -f /home/oracle/atg/IDM_11gR1.PS4.RC4/Oracle_IDM1/bin/emctl asstart agent

# ps auxw|grep  24393
oracle   24393  0.0  0.0  88552 10904 ?        S    Dec19   0:01 /home/oracle/atg/IDM_11gR1.PS4.RC4/Oracle_IDM1/perl/bin/perl /home/oracle/atg/IDM_11gR1.PS4.RC4/Oracle_IDM1/bin/emwd.pl agent /home/oracle/atg/IDM_11gR1.PS4.RC4/Instances/asinst_1/EMAGENT/EMAGENT/sysman/log/emagent.nohup

# ps auxw|grep  24449
oracle   24449  0.0  0.3 318624 38424 ?        Sl   Dec19   0:09 /home/oracle/atg/IDM_11gR1.PS4.RC4/Oracle_IDM1/bin/emagent

In our case, the above orphaned processes were still hanging around after the following command:

  • $MW_HOME/asinst_1/bin/opmnctl  stopall

Normally, "opmnctl stopall" [5] should be able to stop opmn and all managed processes.  However, there were some mis-configured host names in our Management Agent property file:
  • $MW_HOME/asinst_1/EMAGENT/EMAGENT/sysman/config/emd.properties
which caused the orphaned processes to become unresponsive.

After stopping these orphaned processes by force, we were able to unmount the /dev/sdd file system and have it checked by the e2fsck command.
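For reference, the force-stop itself can be sketched in shell as follows. This is a hedged example that uses a throwaway sleep process in place of a real orphaned process; in practice you would use the PIDs reported by fuser above.

```shell
# Stand-in for an orphaned process (in practice, use a PID reported
# by "fuser -m /dev/sdd")
sleep 300 &
pid=$!

# Try a graceful termination first
kill -TERM "$pid" 2>/dev/null
sleep 1

# Force-kill if it is still around, then reap it
kill -0 "$pid" 2>/dev/null && kill -KILL "$pid" 2>/dev/null
wait "$pid" 2>/dev/null

# Verify the process is gone before retrying umount
if kill -0 "$pid" 2>/dev/null; then echo "still running"; else echo "stopped"; fi
```

Alternatively, "fuser -km /dev/sdd" kills every process using the filesystem in one shot; use it with care.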

Tuesday, December 18, 2012

init: Id "co" respawning too fast: disabled for 5 minutes

From Linux syslog (i.e., /var/log/messages), we have found the following repeated messages:
  • Dec 17 04:21:41 myserver init: Id "co" respawning too fast: disabled for 5 minutes
As usual, we have taken action to investigate it.  Here is the report on what we have found.

The Culprit

The serial ports [2] in Linux are named ttyS0, ttyS1, etc. The /dev directory has a special file for each port. Type "ls /dev/ttyS*" to see them. The existence of (for example) a ttyS1 file does not necessarily mean that a physical serial port exists there.

For our issue, it turns out that:
  • the ttyS1 physical serial port is not present on our host myserver
To find out if a physical serial port is present or not, you can test it in two ways:
  1. Using setserial command
  2. Using dmesg command

Output from setserial Command

As shown below, we can tell that ttyS0 is present, but not ttyS1:
[aroot@myserver oracle]# setserial /dev/ttyS0
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
[aroot@myserver oracle]# setserial /dev/ttyS1
/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3

Output from dmesg Command

As shown below, only ttyS0 is printed by dmesg:
[root@myserver ~]# dmesg | grep tty
serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A

The Solution

Because ttyS1 is absent, the following line in /etc/inittab fails and causes the re-spawning:
  • co:2345:respawn:/sbin/agetty 9600 ttyS1
When a process re-spawns too many times, init disables it for some time to keep the host stable. The following message is the result:
  • init: Id "co" respawning too fast: disabled for 5 minutes
As there is no ttyS1, the fix for our issue is:
  1. $su aroot
  2. #vi /etc/inittab
    • Comment out or remove the line for co
  3. #init q
You can verify that inittab has been re-read by init from the following line in the Linux syslog:
  • Dec 17 23:49:40 myserver init: Re-reading inittab

Monday, December 17, 2012

EXT3-fs warning: checktime reached, running e2fsck is recommended

From Linux syslog[1] (i.e., /var/log/messages), we have found the following message:
  • EXT3-fs warning: checktime reached, running e2fsck is recommended 

And, from the file system disk space report (i.e., "df -k"), we find:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda3             61189192  31629588  26401228  55% /
/dev/sda1               101086     21650     74217  23% /boot
tmpfs                 16472072         0  16472072   0% /dev/shm
/dev/sdb              70440240  35203348  31658524  53% /data

So, our Linux platform has three different file systems:
  • /dev/sda1
  • /dev/sda3
  • /dev/sdb

What is /dev/sda1[2,5]?

For Linux device naming:

  • hd
    • It designates IDE devices
  • sd
    • It designates SCSI devices including kernel-level emulation of SCSI devices, like USB devices or, in some cases, CD-RW drives.

The letters a, b, etc. (i.e., sda, sdb) are the equivalent of the hda, hdb naming.  When different devices are plugged in, they are mapped to sda, sdb, etc. Detailed information about these plugged-in devices can be found in the boot messages, which can be generated by:
  • $dmesg > boot.messages
From the boot message, we have found the following entries:

scsi0 : aacraid
  Vendor: Sun       Model: ssssss-xdddd-dd   Rev: V1.0
  Type:   Direct-Access                      ANSI SCSI revision: 02
SCSI device sda: 143134720 512-byte hdwr sectors (73285 MB)
sda: Write Protect is off
sda: Mode Sense: 06 00 10 00
SCSI device sda: drive cache: write through w/ FUA
SCSI device sda: 143134720 512-byte hdwr sectors (73285 MB)
sda: Write Protect is off
sda: Mode Sense: 06 00 10 00
SCSI device sda: drive cache: write through w/ FUA
 sda: sda1 sda2 sda3
sd 0:0:0:0: Attached scsi removable disk sda
  Vendor: Sun       Model: solaris root      Rev: V1.0
  Type:   Direct-Access                      ANSI SCSI revision: 02

EXT3 FS on sda3, internal journal
kjournald starting.  Commit interval 5 seconds
EXT3 FS on sda1, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting.  Commit interval 5 seconds
EXT3-fs warning: checktime reached, running e2fsck is recommended
EXT3 FS on sdb, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
Adding 8289532k swap on /dev/sda2.  Priority:-1 extents:1 across:8289532k

So, we know our devices use EXT3 FS.

What Is EXT3 FS[3]?

EXT3 FS, or third extended filesystem, is a journaled file system that is commonly used by the Linux kernel.  It is the default file system for many popular Linux distributions.

EXT3 adds the following features to EXT2:
  • A journal
  • Online file system growth
  • Htree indexing for larger directories
Without these features, any EXT3 file system is also a valid EXT2 file system. This situation has allowed well-tested and mature file system maintenance utilities for maintaining and repairing EXT2 file systems to also be used with EXT3 without major changes. The EXT2 and EXT3 file systems share the same standard set of utilities, e2fsprogs, which includes an fsck tool. The close relationship also makes conversion between the two file systems (both forward to EXT3 and backward to EXT2) straightforward.

With all this background information, we are now ready to run e2fsck as recommended.

Running e2fsck

In general, it is not safe to run e2fsck on mounted filesystems.  So, before running the e2fsck command, we unmounted our file system.

[root@myserver bench]# umount -v /dev/sdb
/dev/sdb umounted
[root@myserver bench]# /sbin/e2fsck -v  /dev/sdb
e2fsck 1.39 (29-May-2006)
data has gone 652 days without being checked, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

  349958 inodes used (3.90%)
    2278 non-contiguous inodes (0.7%)
         # of inodes with ind/dind/tind blocks: 16016/1145/0
 9082105 blocks used (50.76%)
       0 bad blocks
       2 large files

  311040 regular files
   38016 directories
       0 character device files
       0 block device files
       0 fifos
       0 links
     893 symbolic links (893 fast symbolic links)
       0 sockets
  349949 files


The sixth column of fstab is an fsck option: fsck looks at the number in this column to determine the order in which the filesystems should be checked. If it is zero, fsck won't check the filesystem.

In our system, we find that both "/" and "/boot" filesystems are automatically checked whenever the system is rebooted:
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
Therefore, we don't need to run fsck manually on either /dev/sda3 or /dev/sda1.
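The fstab check described above can be scripted; here is a small sketch that prints the mount point and fsck pass for every entry fsck will check at boot:

```shell
# Print mount point and fsck pass for each fstab entry whose sixth
# (fsck pass) field is non-zero; comment lines are skipped.
# Pass 1 is checked first (the root filesystem), pass 2 afterwards,
# and 0 means fsck skips the entry.
awk '$1 !~ /^#/ && NF >= 6 && $6 != 0 { print $2, "pass", $6 }' /etc/fstab
```

Against the two LABEL lines shown above, this prints "/ pass 1" and "/boot pass 2".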


  1. Troubleshooting Linux with syslog
  2. What is /dev/sda1?
  3. ext3 (Wikipedia)
  4. How to edit and understand /etc/fstab - 1.1
  5. linux sda vs hda ?

Friday, December 14, 2012

How to Configure Logging Using Weblogic Scripting Tool

WebLogic Scripting Tool (WLST) is a command-line tool that runs on the same machine as the WebLogic server and allows the user to browse the configuration and state of the server through a tree of MBeans (management beans).  It is based on the Java scripting interpreter, Jython.

Using WLST, you can configure a server instance’s logging and message output.  To determine which log attributes can be configured, see LogMBean and LogFileMBean in the WebLogic Server MBean Reference[1].

In this article, we first show you how to configure log attributes from WebLogic Server Administration Console and then show you how to set attributes of LogMBean using WLST.

Modifying Attribute from WLS Console

To bring the WLS Administration Console up, you type the following address into your browser's address field:
  • http://<myserver>:7001/console
and log in with your credentials (say, "weblogic/weblogic1").  To modify the "Rotation file size" log attribute, you do:
  • Click "Lock & Edit"
  • Select and Click:
    • Environment > Servers > CRMDemo_server1 > Logging
You modify "Rotation file size" to a different value and then activate your change.

Next to the "Rotation file size" field, you can click on "More Info..." to see its detailed description.

As you can see, the "Rotation file size" field is linked to the following MBean attribute:
  • LogMBean.FileMinSize

Modifying Attribute from WLST

Instead of modifying log attributes from the console, you can also do it using WLST.

$ cd $MW_HOME/wlserver_10.3/common/bin
$ ./wlst.sh
wls:/offline> connect("weblogic", "weblogic1", "t3://localhost:7001")
wls:/atgdomain/serverConfig> cd("Servers/CRMDemo_server1/Log/CRMDemo_server1")
wls:/atgdomain/serverConfig/Servers/CRMDemo_server1/Log/CRMDemo_server1> ls()
dr--   DomainLogBroadcastFilter
dr--   LogFileFilter
-r--   FileMinSize                                  5000
-r--   FileName                                     logs/CRMDemo_server1.log
-r-x   unSet                                        Void : String(propertyName)

As shown above, this is what happened:
  1. We connected to the server
  2. We set Current Management Object (CMO) to the server log config
  3. We listed all LogMBean attributes
  4. Our log attribute FileMinSize was shown on the list
To modify the "FileMinSize" log attribute, you can create a script (e.g., setFileMinSize.py) such as:

# Connect to the server
connect("weblogic", "weblogic1", "t3://localhost:7001")
# Set CMO to the server log config (in the edit tree)
edit()
startEdit()
cd("Servers/CRMDemo_server1/Log/CRMDemo_server1")
# Change LogMBean attributes
set("FileMinSize", 400)
# List the current directory to confirm the new attribute values
ls()
# Save and activate the changes
save()
activate()
# All done...

Thursday, December 13, 2012

How to find Oracle WebLogic Server Version?

[1] describes three ways of finding the Oracle WebLogic Server version.  In this article, we will focus on using registry.xml [2], which contains a record of all Oracle products (including WebLogic) installed.

MiddleWare Home and Oracle Home

A Middleware home is a top-level directory created during the WebLogic installation. It contains all the Oracle homes (optional Oracle product homes), the WebLogic server, the Coherence server, and optionally WebLogic domains (under user_projects, which is created when you create the first domain).  A Middleware home can reside on a local file system or on a remote shared disk that is accessible through NFS. The Oracle Fusion Middleware home is represented in path names as MW_HOME.  An Oracle home contains the installed files necessary to host a specific product.  The Oracle home is represented in path names as ORACLE_HOME.

Before you install any Fusion web application, you need to install an application server such as WebLogic Server, along with the other applications that this application depends on.  At the "Installation Location" step, you will be asked for:
  • Oracle Middleware Home
  • Oracle Home Directory
In our case, we have specified:
  • Oracle Middleware Home
    • /export/home/bench/ATG/PS6ST7
  • Oracle Home Directory
    • Oracle_WC1
When the installation completes, it displays the Middleware Home Location and the Oracle Home Location (see the diagram).


This registry file contains a record of all WebLogic products installed, along with product-related information such as version number, patch set level, patch level, and location of the product installation directories.

To find Oracle WebLogic Server Version, do:
  1. Go to Middleware Home under which WebLogic is installed 
  2. Look for file registry.xml
    • For example, it's under:
      • $MW_HOME/registry.xml
  3. Open registry.xml and search for component name="WebLogic Server"
    • The "version" attribute next to "component name" will tell you the WebLogic version. For example, this is what I've found from my FA installation:
              <component name="WebLogic Server" version="" ...>
                <component name="Core Application Server"/>
                <component name="Administration Console"/>
                <component name="Evaluation Database"/>
                <component name="Workshop Code Completion Support"/>
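The lookup in step 3 can also be scripted. Here is a minimal sketch, assuming the version attribute appears on the same line as the component name, as in the snippet above:

```shell
# Extract the version attribute of the "WebLogic Server" component
# from registry.xml (default location: $MW_HOME/registry.xml)
grep -o '<component name="WebLogic Server" version="[^"]*"' "$MW_HOME/registry.xml" |
  sed 's/.*version="//; s/"$//'
```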
To learn more about the following topics, read [5]:
  • WebLogic Server Version Numbers
  • WebLogic Version Compatibility
  • etc.


  1. Using the BEA Registry API
  2. Oracle® Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g Release 1 (11.1.1)
  3. How to find Oracle WebLogic Server Version ?
  4. Using Oracle Fusion Middleware
  5. WebLogic Server Compatibility (WLS 12c)
    • Within a WebLogic domain, the Administration Server, Managed Server instances, and the domain itself each have a WebLogic Server version number. The version number contains five decimal places, for example WebLogic Server

Tuesday, December 11, 2012

Understanding WebLogic Incident and the Diagnostic Framework behind It

From the WebLogic Server log, I have found the following incident:

[INCIDENT_ERROR] [_createIncident] An incident was created. The details are: 
Incident details are: IncidentID=359
. (FND-10102). (FND-10000)
<Dec 8, 2012 9:22:57 PM PST> <Alert> <Diagnostics> 
<BEA-320016> <Creating diagnostic image in /data/ATG/MLRRC4/user_projects/domains/atgdomain/servers/CRMDemo_server1/adr/diag/ofm/fusionapps/ApplicationCoreCRMDemoUI/incident/incdir_359 with a lockout minute period of 1.>
[INCIDENT_ERROR] [JUFormBinding] java.lang.Exception: An application error occurred.  
See the incident log for more information.
  at oracle.apps.fnd.applcore.messages.ExceptionHandlerUtil...

In this article, we will examine what WebLogic Incident and Diagnostic Framework are.

Diagnostic Framework[1]

Oracle Fusion Middleware includes a Diagnostic Framework (DFW), which is available with all FMW 11g installations that run on WebLogic Server.  It aids in detecting, diagnosing, and resolving problems. The problems targeted in particular are critical errors such as those caused by:
  • Code bugs
  • Metadata corruption
  • Customer data corruption
  • Deadlocked threads
  • Inconsistent state
When a critical error occurs, it is assigned an incident number, and diagnostic data for the error (such as log files) are immediately captured and tagged with this number. The data is then stored in the Automatic Diagnostic Repository (ADR).

The Automatic Diagnostic Repository (ADR) is a file-based repository for storing diagnostics data associated with incidents. It consists of metadata that describes each Problem and Incident, along with the set of diagnostic dump output generated for each incident.

ADR Directory Structure

The ADR root directory is known as ADR base. By default, the ADR base is located in the following directory:
  • DOMAIN_HOME/servers/server_name/adr
Within ADR base, there can be multiple ADR homes, where each ADR home is the root directory for all incident data for a particular instance of Oracle WebLogic Server or a Java application.

For example, the following paths show the locations of the ADR home for:
  • Oracle WebLogic Server instance
    • ADR_BASE/diag/ofm/domain_name/server_name
  • Fusion Application
    • ADR_BASE/diag/ofm/fusionapps/app_name


The ADR Command Interpreter (ADRCI) is a utility that enables you to investigate problems, and package and upload first-failure diagnostic data to Oracle Support, all within a command-line environment. ADRCI also enables you to view the names of the dump files in the ADR, and to view the alert log with XML tags stripped, with and without content filtering.

ADRCI is installed in the following directory:

(UNIX) MW_HOME/wlserver_10.3/server/adr
(Windows) MW_HOME\wlserver_10.3\server\adr

$ find -name wlserver_10.3
$ cd ./MLRRC4/wlserver_10.3
$ cd server/adr
$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/export/home/bench/ATG/MLRRC4/wlserver_10.3/server/adr
$ export PATH=${PATH}:/export/home/bench/ATG/MLRRC4/wlserver_10.3/server/adr
$ adrci
ADRCI: Release - Production on Tue Dec 11 19:47:41 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

No ADR base is set
adrci> set base MW_HOME/user_projects/domains/atgdomain/servers/CRMDemo_server1/adr
adrci> show homes
ADR Homes:

adrci> set homepath diag/ofm/fusionapps/ApplicationCoreCRMDemoUI
adrci> show incident
adrci> show incident -mode DETAIL -p "incident_id=359"

After setting the LD_LIBRARY_PATH and PATH environment variables, we invoked the adrci utility directly from the command line.  The first thing we did after entering ADRCI was to set the ADR base and ADR home.  Within our specified ADR home, we listed all the incidents under it.  Then we displayed the specific incident (i.e., 359) that we were interested in.  For further information, read [2,6].

Diagram Credit

  • Figure 13-1 ADR Directory Structure for Oracle Fusion Middleware


  1. Diagnosing Problems
    • Describes how to use the Oracle Fusion Middleware Diagnostic Framework to collect and manage information about a problem so that you can resolve it or send it to Oracle Support for resolution.
  2. ADRCI: ADR Command Interpreter
    • ADRCI is a command line utility that originated as a Oracle database software utility. Recent versions of Oracle Weblogic also include this utility.
  3. Introduction to FMW Diagnostic Framework
  4. How to View Java and PL/SQL Incidents
  5. Troubleshooting Oracle Fusion Applications Using Incidents, Logs, QuickTrace, and Diagnostic Tests
  6. Using the ADRCI utility with Oracle Weblogic
  7. Professional Oracle WebLogic Server by Robert Patrick, Gregory Nyberg, and Philip Aston
  8. ADRCI-WebLogic

Friday, November 30, 2012

JPS-02592: Failed to push ldap config data to libOvd for service instance "idstore.ldap" in JPS context "default"

Today I've run into JPS-02592 and was not able to bring up my server instance.  Here is the message:

####<Nov 29, 2012 7:49:25 PM PST> <Error> <Security> <myserver.xxx.com> <SalesServer_1> <[ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1354247365330> <BEA-090892> <The loading of OPSS java security policy provider failed due to exception, see the exception stack trace or the server log file for root cause. If still see no obvious cause, enable the debug flag -Djava.security.debug=jpspolicy to get more information. Error message: JPS-02592: Failed to push ldap config data to libOvd for service instance "idstore.ldap" in JPS context "default", cause: oracle.xml.parser.v2.XMLParseException: Element 'root' not expected.>

How Did I Debug It?

First, I've located the jps-config.xml in my environment.  At the time of launching the server instance, it refers to the following security configuration file:

  • -Doracle.security.jps.config=/u01/rup1/instance/domains/myserver.xxx.com/CRMDomain/config/fmwconfig/jps-config.xml

I have looked inside the file.  Nothing was obvious.  The line below:

  • oracle.xml.parser.v2.XMLParseException: Element 'root' not expected.>

seems to suggest that the document may have failed schema validation.  However, that is not the main cause.  I have experimented with several things.  For example, I renamed the jps-config.xml file and restarted the instance; the error then showed that the file was missing.  This suggests that the system did reference that file for security policy providers.  Another thing I tried was to comment out the following element in that file:
<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
  <property name="idstore.config.provider" value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>
  <property name="CONNECTION_POOL_CLASS" value="oracle.security.idm.providers.stdldap.JNDIPool"/>
  <property name="username.attr" value="uid"/>
  <property name="PROPERTY_ATTRIBUTE_MAPPING" value="PREFERRED_LANGUAGE=orclfalanguage"/>
</serviceInstance>

Now the system complained that the "idstore.ldap" instance could not be found.  This confirms that "idstore.ldap" is indeed used and required.

Final Solution

Puzzled by what happened, I then found this forum thread [2].  I decided to follow its instructions and gave it a try.  Fortunately, that resolved my issue.

Here are my steps:
  1. Rename $DOMAIN_HOME/config/fmwconfig/ovd/default/adapters.os_xml to be adapters.os_xml.backup
  2. Copy adapters.os_xml from $MW_HOME/oracle_common/modules/oracle.ovd_11.1.1/templates/ to $DOMAIN_HOME/config/fmwconfig/ovd/default/
  3. Restart my server instance
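The three steps above can be sketched in shell as follows. This is a hedged sketch: DOMAIN_HOME and MW_HOME are assumed to point at your own installation, and the restart step is environment-specific.

```shell
# Step 1: back up the existing (possibly corrupted) adapters.os_xml
cd "$DOMAIN_HOME/config/fmwconfig/ovd/default"
mv adapters.os_xml adapters.os_xml.backup

# Step 2: copy the clean template shipped under oracle_common
cp "$MW_HOME/oracle_common/modules/oracle.ovd_11.1.1/templates/adapters.os_xml" .

# Step 3: restart the server instance (environment-specific)
```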
At the beginning, adapters.os_xml is just an empty template:
  <?xml version="1.0" encoding="UTF-8"?>
  <adapters schvers="303" version="0"

After my server instance started, it got filled with new information.  When I diff the backup file against the newly generated file, the differences are:

$ diff adapters.os_xml.backup adapters.os_xml
<       <default/>
>          <default>
>             <plugin name="UserManagement"/>
>          </default>
<       <root>dc=us,dc=oracle,dc=com</root>


As I run my Fusion Applications for benchmarks only, I'm happy as long as the server instance can start.  For your case, however, you may want to contact Oracle's support team about any security issues.


  1. Configuring the Identity Store Service
  2. Problem getting started weblogic server (for BI Publisher)

Tuesday, November 27, 2012

Using rsync to Clone Local and Remote Systems

This article is a follow-up to a previous article [1].  As pointed out in that article, there are limitations and issues with cloning (either an application or a database). This article describes one of those issues (see also [4]).


In [1], we have outlined the cloning tasks step by step.  In this article, we will discuss:
  • How to use rsync utility to transfer and synchronize local and remote systems
  • How to deal with symbolic links
To do cloning, we need to duplicate a software installation from a source to a destination by preserving its path structure.

Symbolic Links

One of the challenges in cloning is that not everything is self-contained in a source tree.  Very often, symbolic links are also involved.  There are two types of symbolic links:
  1. Symbolic links inside the source tree that point outwards
  2. External symbolic links (outside the source tree) that point into it
For the cloning, we use the rsync utility to do the job. Here are the options that we have used:
  • rsync -az
This command copies the first type of link appropriately, but it cannot handle the second type.  That means you need to create the extra symbolic links at the destination after the cloning.  So, one of the pre-cloning tasks is to list all symbolic links and their locations in the source [2].
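One way to perform that pre-cloning symbolic-link inventory is with find; a minimal sketch, where /src/tree and /dest/tree are placeholders for the actual source and destination roots:

```shell
# List every symbolic link under the source tree with its target
find /src/tree -type l -exec ls -l {} + 2>/dev/null

# After cloning, find links on the destination whose targets are
# broken (i.e., external links that were not recreated)
find /dest/tree -type l ! -exec test -e {} \; -print 2>/dev/null
```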

Why Did It Happen?

Why does the second type of symbolic link exist in the first place?  It depends on the application and the way cloning was done previously.  In our benchmark cloning, we usually clone one environment to multiple destinations in a chain.  For example, someone sets up a good benchmark on machine A.  Then we clone that to machine B, followed by cloning from machine B to C, etc.

On machine B, we often find there is a need to distribute resources on different file systems for load balancing. Because of that, new symbolic links were introduced. Then, when we clone the installation from machine B to C, we will find both types of symbolic links existing in the source.

Rsync Command[3]

One way of copying a directory is using rsync.  The rsync utility has an archive switch -a that allows it to perform a copy of a directory that includes dot files while maintaining all permissions, ownership, and modification times. However, the destination soft links have the modification time of when the copy was performed, but that shouldn't matter much.

When using the following commands, there is a very subtle syntax difference between the two (i.e., the trailing slash), which ends up with quite different results:
  • rsync -az /src/dir/ /dest/dir
    • The contents of /src/dir will be copied to /dest/dir
  • rsync -az /src/dir /dest/dir
    • The directory itself will be copied into /dest/dir. In other words, you’ll end up with /dest/dir/dir

In the command, we have also included the compression switch -z, which reduces network traffic during remote transfers.

To enable remote transfer, you prepend "<userLogin>@<serverName>:" to either the src or the dest path. For example,

  • rsync -az oracle@otherserver:/data/home/oracle/atg/OracleDB_11.2.0.2 /data/home/oracle/atg

will copy the directory named OracleDB_11.2.0.2 from a remote server into /data/home/oracle/atg.


  1. Simplify Cloning by Using Hosts File
  2. List symbolic links and location, pointing to a particular directory
  3. Expert Shell Scripting
  4. ORA-00313: open failed for members of log group 1 of thread 1
  5. Migrating Oracle B2B from Test to Production (T2P) (Chap 10 of the Book "Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial")
    • This section provides a real-world scenario to replicate (clone) the test environment to production for Oracle SOA.
    • Oracle Fusion Middleware provides a series of scripts for this task.
  6. To check if the symbolic links are broken in the target system, do:
    • find . -type l ! -exec test -e {} \; -print
  7. Oracle Products: What Patching, Migration, and Upgrade Mean? (Xml and More)
    • For your Oracle production systems, follow official recommendations as shown in this article.

Saturday, November 24, 2012

Book Review: Oracle 11g Anti-hacker's Cookbook

The number of security threats related to operating systems and databases is increasing every day, and this trend is expected to continue. Therefore, effective countermeasures to reduce or eliminate these threats must be found and applied.

"Oracle 11g Anti-hacker's Cookbook" covers the important security measures that can be deployed to protect your Oracle database from hackers.  It provides many useful tips and tricks.  As such, you should add this book to your arsenal of Oracle security resources.

Connecting to the Database

There are different ways of connecting to an Oracle database (i.e., creating an Oracle session):
  • Programmers 
    • Use ODBC, JDBC and OCI 
  • Database Administrators
    • Use SQL*Plus and Oracle Enterprise Manager (OEM)
Although connection concepts apply to all utilities, we use SQL Command Line (SQL*Plus), the principal DBA interface into Oracle, for illustration.

The Weakest Link

As the figure above shows, there are multiple systems involved in a database connection.  Any system involved can have one or more vulnerabilities that can be exploited by hackers in a threat action.

Security practitioners[2] often point out that security is a chain; and just as a chain is only as strong as the weakest link, a database security system is only as secure as its weakest component.

Therefore, there are no shortcuts for Oracle protection.  This book describes many tips and tricks that can be deployed to fortify components along this connection chain.

Types of attacks

There can exist many types of attacks on an Oracle session.  Here are some of them as covered in this book:
  • Man-in-the-middle-type (MITM) attack
    • Attack in which an interposed attacker hijacks a client connection
  • TCP and UDP protocol-level attack
    • Targeted towards the network traffic and the data in flight
  • TNS poison attack[3]
    • TNS poison attack is classified as a man-in-the-middle-type attack
  • Replay attack
    • An attack in which a valid data transmission is maliciously or fraudulently repeated or delayed
  • DoS attack
    • To fill up the file systems on the disk with useless log messages
    • To send a succession of SYN requests
    • To send large numbers of IP packets with the source address faked to appear to be the address of the victim
  • IP Spoofing
    • To create Internet Protocol (IP) packets with a forged source IP address, with the purpose of concealing the identity of the sender or impersonating another computing system
  • Dictionary and pattern matching type attack
  • Password cracking
  • Other attacks 
    • Target the database, listener, and configuration files
Lastly, and most importantly, the weakest part of your Oracle system will be the administrators, users, or tech-support people who fall prey to social engineering.

General vs. Oracle Specific Recipes

In this book, many recipes are provided to show how those security risks could be mitigated or reduced.  To sum up, recipes can be classified into general or Oracle specific security measures.  For example, to confront different interception-type attacks, you can use either Oracle Advanced Security encryption and integrity, or alternatives such as IPSEC, stunnel, and SSH tunneling.

For general measures, topics such as OS security and securing the network and data in transit are covered in Chapters 1 and 2.  Starting from Chapter 3, security measures based on Oracle products are introduced, including the following:
  • Oracle RMAN
  • Oracle Enterprise Manager
  • Oracle Virtual Private Database
  • Oracle Label Security
  • Oracle Database Vault
  • Oracle Audit
  • Oracle Cryptographic API
  • Oracle Wallets

Other Recommendations

The book also makes suggestions such as:
  • You should implement data audits to detect the origin of the attack or the source of the inappropriate data access or modification
  • You should develop and implement appropriate alerting systems to proactively detect and prevent attacks on systems and data
  • You should test these security measures first before their final deployment
  • You should perform security assessments regularly on your system

Picture Credit

  • Figure 3-2 Remote Connection in Oracle® Database Express Edition 2 Day DBA 10g Release 2 (10.2)


    1. Oracle 11g Anti-hacker's Cookbook
    2. Viega, John & McGraw, Gary.  Building Secure Software: How to Avoid Security Problems the Right Way. Boston, MA: Addison-Wesley, 2002.
    3. Oracle Database TNS Listener Poison Attack
    4. Replay Attack (wikipedia)
    5. Oracle® Database Installation Guide 11g Release 2 (11.2) for Linux
    6. Securing the Weakest Link
    7. The Onion Model

    Wednesday, November 14, 2012

    ORA-00313: open failed for members of log group 1 of thread 1

    This article is a follow-up to a previous article[1].  As pointed out in that article, there are limitations and issues with cloning (either an application or a database).  This article describes one of those issues.


    After following the cloning steps described in [1], we ran into this Oracle database exception when trying to bring up our cloned Oracle database.  Obviously, this was our fault: we had not done thorough planning before the cloning.

    What this exception tells us is:
    • The online log cannot be opened.

    What Is the Redo Log?

    The most crucial structure for Oracle recovery operations is the redo log, which consists of two or more preallocated files that store all changes made to the database as they occur. Every instance of an Oracle Database has an associated redo log to protect the database in case of an instance failure.

    Where Did We Find This ORA-00313?

    From the initialization parameter file (i.e., dbs/init<sid>.ora), we traced down the location of the diagnostic destination[3]:
    • diagnostic_dest=/slot/fiz7865/log
    From there, we went down to a folder named:
    • <diagnostic_dest>/diag/rdbms/<dbname>/<instname>/trace
    In there, there is a file named:
    • alert_fiz7865.log
    From that file, we have found the following entries:
      Lost write protection disabled
      Completed: ALTER DATABASE   MOUNT
      Wed Nov 14 09:42:55 2012
      Errors in file <diagnostic_dest>/diag/rdbms/<dbname>/<instname>/trace/fiz7865_lgwr_25410.trc:
      ORA-00313: open failed for members of log group 1 of thread 1
      ORA-00312: online log 1 thread 1: '/data1/rup3.redolog/log3.dbf'

    Note that Oracle writes the alert_<instname>.log file to the directory specified by the BACKGROUND_DUMP_DEST parameter[4]. So, you can also find its location by:
    SQL> show parameter BACKGROUND_DUMP_DEST
    NAME                   TYPE        VALUE
    ---------------------- ----------- ------------------------------
    background_dump_dest   string     /slot/fiz7865/log/diag/rdbms/fiz7865/fiz7865/trace
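Once you know where the alert log lives, scanning it for errors can be scripted. The sketch below greps a tiny stand-in file (with entries taken from this article); on a real system you would point ALERT_LOG at <diagnostic_dest>/diag/rdbms/<dbname>/<instname>/trace/alert_<instname>.log instead:

```shell
# Create a tiny stand-in alert log (entries copied from this article)
ALERT_LOG=/tmp/alert_demo.log
cat > "$ALERT_LOG" <<'EOF'
Completed: ALTER DATABASE   MOUNT
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: '/data1/rup3.redolog/log3.dbf'
EOF

# Scan it for ORA- errors, printing line numbers
grep -n '^ORA-' "$ALERT_LOG"
```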

    What Happened?

    When we did the cloning, not everything was contained in a single source directory.  For example, the redo log files had been relocated to another file system (i.e., /data1) outside the source directory.  For Oracle to be fully functional, the original redo logs need to be reopened.  If they are not found, an ORA-00313 will be thrown.

    How to Find the Redo Log Location

    Before you do the cloning, keep the source database up and running.  Then query the log file locations by:
    • select * from V$LOGFILE;


    1. Simplify Cloning by Using Hosts File
    2. Managing the Redo Log
    4. Alert Log
    5. Migrating Oracle B2B from Test to Production (T2P) (Chap 10 of the Book "Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial")
      • This section provides a real-world scenario to replicate (clone) the test environment to production for Oracle SOA.
      • Oracle Fusion Middleware provides a series of scripts for this task.
    6. Oracle Products: What Patching, Migration, and Upgrade Mean?

    Wednesday, November 7, 2012

    Simplify Cloning by Using Hosts File

    Oftentimes, you will find there is a need to install the same application on different systems.  In our case, we have a multi-tier setup for benchmarking:
    • Oracle Application Testing Suite (OATS)
    • Application Server
    • Database Server
    This means that individual servers need to communicate with other servers using their domain names.

    In this article, we will discuss the simplest way of cloning an application from one environment to another.

    Hosts File

    The hosts file allows you to define which domain names (websites) are linked to which IP addresses. On some platforms, it takes precedence over your DNS servers.  However, unlike DNS, the hosts file is under the direct control of the local computer's administrator. So your DNS servers may say oracle.com is linked to a specific IP address, but you can have oracle.com go anywhere you want by using the hosts file.

    In Microsoft Windows, the location of the hosts file depends on your OS version.  For example, for NT, 2000, XP (x86 & x64), 2003, Vista, 7, and 8, it is located at:
    • %SystemRoot%\system32\drivers\etc\hosts
      • Need to have write permission on this file for the editing user
    In Linux, the hosts file is located at:
    • /etc/hosts
    The hosts file is a plain text file, and you can use any text editor to modify it as long as you have the permission. Modifications take effect immediately, without rebooting. So, you can restart your application to see the new changes right away.
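The format is simple: one IP address per line, followed by one or more host names. The sketch below writes a demo file with a made-up name and address (on a real system you would edit /etc/hosts itself, as root):

```shell
# Write a demo hosts file; 10.0.0.42 and appserver.example.com are
# hypothetical names used only for this illustration
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1    localhost
# map the source system's host name to the cloned server's IP
10.0.0.42    appserver.example.com appserver
EOF

grep appserver /tmp/hosts.demo
```

After editing the real /etc/hosts, a quick ping of the name confirms which address it now resolves to.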


    In its function of resolving host names, the hosts file may be used to define any host name or domain name for use in the local system. This may be used either beneficially or maliciously for various effects.  In this article, we will discuss using the mapping to redirect a website (i.e., source of cloning) to another website (i.e., destination of cloning) during the cloning of a multi-tier environment.  Because our multi-tier environment exists in a private network, there is no security concern for us.  However, it is possible for you to face serious security attacks if your hosts file is compromised[3].

    As we all know, deploying and configuring any web application is a non-trivial task.  As performance engineers, we often need to create similar systems on different sets of servers.  Instead of deploying and configuring web applications from scratch, it is easier to just do the cloning.

    After cloning the application from one system to another, you then need to fix the platform-specific parts of the cloned image.  For example, you need to change the domain names referenced in the URLs from the old server's to the new server's.  Domain names can also be embedded in configuration files, scripts, etc.  You can either do a global search-and-replace on them, or modify the hosts file to map the same host names to the new IP addresses.  The latter is easier.
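If you do choose the global search-and-replace route, it can be sketched like this on Linux (the directory, file, and host names below are hypothetical):

```shell
# Create a demo config file containing the old host name
mkdir -p /tmp/clonecfg
echo 'dbUrl=jdbc:oracle:thin:@oldhost:1521/orcl' > /tmp/clonecfg/app.properties

# Replace the old host name with the new one in every file that mentions it
grep -rl 'oldhost' /tmp/clonecfg | xargs sed -i 's/oldhost/newhost/g'

grep newhost /tmp/clonecfg/app.properties
```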


    Cloning can be done in three stages, and the most important stage is the preparation, or pre-cloning, stage.  In the following, we cover the tasks involved in these stages for Linux platforms.  For Windows, the steps are similar.
    • Pre-cloning Stage
      • nohup
        • Cloning can take hours to finish.  
          • If you use PuTTY to access the Linux box, remember to use the "nohup" command
            • nohup is used to run a command that is immune to hangups
            • For example, you can prefix your cloning command with nohup and redirect the stderr and stdout to cloning.out file:
              • nohup {cloning command} &> cloning.out &
      • Be the right user that has the privilege to do the cloning.  
        • Sometimes, you may need to be the "root" user to do the cloning.  After the cloning, you can then reduce the accessibility to the correct level.
      • Create the same path structure on the destination as source's
        • Create symbolic links if needed
          • You may need to be root user to create the path.  But, reduce the accessibility to the correct level later.
      • Find the file system (or disk) that is big enough to hold the cloned image
        • Free space left should allow application data to grow after it starts running
      • Hosts file
        • Save the original hosts file
      • Shutdown server instances before cloning
    • Cloning Stage
      • Copy everything needed from the source machine to the destination machine.  This can include
        • Server installation
        • Scripts
        • hosts file
          • Copy the new hosts file from source to destination machine and make appropriate changes
          • Validate the changes.  For example, you can use ping command to test 
      • Use rsync command to clone
        • Syntax:
          • rsync -az aroot@sourceServer:/export/home/bench/ATG/RUP3 /export/home/bench/ATG/
        • Don't forget to use nohup for the rsync
        • Try the command out with a small copy first
        • Be patient—the cloning could take hours
    • Post-cloning Stage
      • Verify that your cloned environment works as expected
        • You can test this by stages.  For example, you can
          1. Run your front end (or OATS) against the original Application Server and Database Server first.  After verifying that your front-end system is working correctly, move to the next step.
          2. Run your application server against the original Database Server. After verifying that your middle tier is working correctly, move to the next step.
          3. Verify that your database server is working correctly.
          4. Run your application server against your new database server.
          5. And so on.
      • Document what you have done
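The nohup pattern from the pre-cloning stage can be tried out safely before the real run. In the sketch below, a harmless sh -c command stands in for the actual rsync invocation:

```shell
# Run a stand-in "cloning command" immune to hangups, capturing its output.
# Replace the sh -c '...' part with your real rsync command.
nohup sh -c 'echo cloning started; sleep 1; echo cloning done' \
    > /tmp/cloning.out 2>&1 &
wait $!          # here we wait only so the demo can show the log right away
cat /tmp/cloning.out
```

In real use you would not wait; you could log out and later check the log with tail -f.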


    Cloning applications seems straightforward, but there are limitations and caveats. You can read [5,6,9,10] for such details.  If you are cloning Oracle Fusion Middleware, read [7].  If you are moving from a test to a production environment, read [8].  Finally, you must pay attention to license violations and compliance when you plan a cloning.


    1. Oracle Application Testing Suite
    2. 6 Surprising Uses For The Windows Hosts File
    3. Hosts (Wikipedia)
    4. Cloning Application Server Middle-Tier Instances
    5. General Considerations and Limitations for Cloning
    6. ORA-00313: open failed for members of log group 1 of thread 1
    7. Cloning Oracle Fusion Middleware (Chapter 20)
    8. Moving from a Test to a Production Environment (Chapter 21)
    9. Cloning Issue—What If Host Name(s) Are Stored in the Database
    10. ORA-01031: insufficient privileges
    11. Migrating Oracle B2B from Test to Production (T2P) (Chap 10 of the Book "Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial")
      • This section provides a real-world scenario to replicate (clone) the test environment to production for Oracle SOA.
      • Oracle Fusion Middleware provides a series of scripts for this task.
    12. Oracle Products: What Patching, Migration, and Upgrade Mean? (Xml and More)
      • For your Oracle production systems, follow official recommendations as shown in this article.

    Friday, October 26, 2012

    HotSpot VM Performance Tuning Tips

    In some cases, it may be obvious from benchmarking that parts of an application need to be rewritten using more efficient algorithms[1]. Sometimes it may just be enough to provide a more optimal runtime environment by tuning the JVM parameters.

    In this article, we will show you some of the HotSpot VM performance tuning tips.

    What to tune?

    You can tune HotSpot performance on multiple fronts:
    • Code generation[8,10]
    • Memory management
    In this article, we will focus more on memory management (i.e., the garbage collector).  The goals of tuning the garbage collector include:
    • To make a garbage collector operate efficiently, by
      • Reducing pause time or
      • Increasing throughput
    • To avoid heap fragmentation
      • Different garbage collectors use different compaction schemes to eliminate fragmentation
    • To make it scalable for multithreaded applications on multiprocessor systems
    In this article, we will cover the following tuning options:
    1. Client VM or Server VM
    2. 32-bit VM or 64-bit VM
    3. GC strategy
    4. Heap sizing
    5. Further tuning

    Client vs. Server VM

    The HotSpot Client JVM has been specially tuned to reduce application startup time and memory footprint, making it particularly well suited for client environments. On all platforms, the HotSpot Client JVM is the default.

    The Java HotSpot Server VM is similar to the HotSpot Client JVM except that it has been specially tuned to maximize peak operating speed. It is intended for long-running server applications, for which the fastest possible operating speed is generally more important than having the fastest startup time. To invoke the HotSpot Server JVM instead of the default HotSpot Client JVM, use the -server parameter; for example,
    • java -server MyApp
    In [7], the authors mention a third HotSpot VM runtime named tiered.  If you are using Java 6 Update 25, Java 7, or later, you may consider using the tiered server runtime as a replacement for the client runtime.  For more details, read [8,10].

    32-Bit or 64-Bit VM

    The 32-bit JVM is the default for the HotSpot VM. The choice between a 32-bit and a 64-bit JVM is dictated by the memory footprint required by the application, by whether any third-party software used in the application supports 64-bit JVMs, and by whether there are any native components in the Java application. All native components using the Java Native Interface (JNI) in a 64-bit JVM must be compiled in 64-bit mode.

    Running a 64-bit VM has the following advantages[7]:
    • Larger address space
    • Better performance on two fronts
      • 64-bit JVMs can make use of additional CPU registers
      • They help avoid register spilling
        • Register spilling occurs when there is more live state (i.e., variables) in the application than the CPU has registers.
    and one disadvantage:
    • Increased width for oops
      • This results in fewer oops fitting on a CPU cache line, which decreases CPU cache efficiency.
      • This negative performance impact can be mitigated by setting:
        • -XX:+UseCompressedOops VM command line option
    Note that client runtimes are not available in 64-bit HotSpot VMs.   See [11] for more details.

    GC Strategy

    JVM performance is usually measured by its GC's effectiveness.  Garbage collection (GC) reclaims the heap space previously allocated to objects no longer needed. The process of locating and removing those dead objects can stall your Java application while consuming as much as 25 percent of throughput.

    The Java HotSpot virtual machine includes five garbage collectors.[27] All the collectors are generational.
    • Serial Collector
      • Both young and old collections are done serially (using a single CPU), in a stop-the-world fashion.
      • The old and permanent generations are collected via a mark-sweep-compact collection algorithm. 
        • The sweep phase “sweeps” over the generations, identifying garbage. The collector then performs sliding compaction, sliding the live objects towards the beginning of the old generation space (and similarly for the permanent generation), leaving any free space in a single contiguous chunk at the opposite end.
      • When to use
        • For most applications that are run on client-style machines and that do not have a requirement for low pause times
      • How to select
        • In the J2SE 5.0 and above, the serial collector is automatically chosen as the default garbage collector on machines that are not server-class machines. On other machines, the serial collector can be explicitly requested by using the -XX:+UseSerialGC command line option.
    • Parallel Collector (or throughput collector)
      • Young generation collection
        • Uses a parallel version of the young generation collection algorithm utilized by the serial collector
        • It is still a stop-the-world and copying collector, but performing the young generation collection in parallel, using many CPUs, decreases garbage collection overhead and hence increases application throughput.
      • Old generation collection
        • Uses the same serial mark-sweep-compact collection algorithm as the serial collector
      • When to use
        • For applications run on machines with more than one CPU and do not have pause time constraints, since infrequent, but potentially long, old generation collections will still occur. 
        • Examples of applications for which the parallel collector is often appropriate include those that do batch processing, billing, payroll, scientific computing, and so on.
      • How to select
        • In the J2SE 5.0 and above, the parallel collector is automatically chosen as the default garbage collector on server-class machines. On other machines, the parallel collector can be explicitly requested by using the -XX:+UseParallelGC command line option.
    • Parallel Compacting Collector
        • Young generation collection
          • Use the same algorithm as that for young generation collection using the parallel collector.
        • Old generation collection
          • The old and permanent generations are collected in a stop-the-world, mostly parallel fashion with sliding compaction
          • The collector utilizes three phases (see [5] for more details):
            • Marking phase
            • Summary phase
            • Compaction phase
        • When to use
          • For applications that are run on machines with more than one CPU and applications that have pause time constraints.
        • How to select
          • If you want the parallel compacting collector to be used, you must select it by specifying the command line option -XX:+UseParallelOldGC.
      • Concurrent Mark-Sweep (CMS) Collector[5,6]
          • Young generation collection
            • The CMS collector collects the young generation in the same manner as the parallel collector. 
          • Old generation collection
            • Most of the collection of the old generation using the CMS collector is done concurrently with the execution of the application. 
            • The CMS collector is the only collector that is non-compacting. That is, after it frees the space that was occupied by dead objects, it does not move the live objects to one end of the old generation. 
              • To minimize the risk of fragmentation, CMS performs statistical analysis of object sizes and keeps separate free lists for objects of different sizes.
          • When to use
            • For applications that need shorter garbage collection pauses and can afford to share processor resources with the garbage collector while the application is running. (Due to its concurrency, the CMS collector takes CPU cycles away from the application during a collection cycle.)
            • Typically, applications that have a relatively large set of long-lived data (a large old generation), and that run on machines with two or more processors, tend to benefit from the use of this collector. 
            • Compared to the parallel collector, the CMS collector decreases old generation pauses—sometimes dramatically—at the expense of slightly longer young generation pauses, some reduction in throughput, and extra heap size requirements.
          • How to select
            • If you want the CMS collector to be used, you must explicitly select it by specifying the command line option -XX:+UseConcMarkSweepGC.  
            • If you want it to be run in incremental mode, also enable that mode via the -XX:+CMSIncrementalMode option.
              • This feature is useful when applications that need the low pause times provided by the concurrent collector are run on machines with small numbers of processors (e.g., 1 or 2).
            • ParNewGC is the parallel young generation collector for use with CMS.  To choose it, you can specify the command line option -XX:+UseParNewGC.
        • Garbage-First (G1) Garbage Collector[31]
          • Summary
            • Differs from CMS in the following ways
              • Compacting
                • Reduce fragmentation and is good for long-running applications 
              • Heap is split into regions
                • Easy to allocate and resize
            • Evacuation pauses 
              • For both young and old regions
          • Young generation collection
            • During a young GC, survivors from the young regions are evacuated to either survivor regions or old regions
              • Done with stop-the-world evacuation pauses
              • Performed in parallel
          • Old generation collection
            • Some garbage objects in regions with very high live ratio may be left in the heap and be collected later
            • Concurrent marking phase
              • Calculates liveness information per region
                • Empty regions can be reclaimed immediately
                • Identifies best regions for subsequent evacuation pauses
              • Remark is done with one stop-the-world pause, while the initial mark is piggybacked on an evacuation pause
              • No corresponding sweeping phase
              • Different marking algorithm than CMS
            • Old regions are reclaimed by
              • Evacuation pauses
                • Using compaction
                • Where most reclamation is done
              • Remark (when totally empty)
          • When to use
            • The G1 collector is a server-style garbage collector, targeted for multi-processor machines with large memories. It meets garbage collection (GC) pause time goals with high probability, while achieving high throughput. 
          • How to select
            • If you want the G1 garbage collector to be used, you must explicitly select it by specifying the command line option -XX:+UseG1GC.  
        Note that the difference between:
        • -XX:+UseParallelOldGC
        • -XX:+UseParallelGC
        is that -XX:+UseParallelOldGC enables both a multithreaded young generation garbage collector and a multithreaded old generation garbage collector, that is, both minor garbage collections and full garbage collections are multithreaded. -XX:+UseParallelGC enables only a multithreaded young generation garbage collector. The old generation garbage collector used with -XX:+UseParallelGC is single threaded. 

        Using -XX:+UseParallelOldGC also automatically enables -XX:+UseParallelGC. Hence, if you want to use both a multithreaded young generation garbage collector and a multithreaded old generation garbage collector, you need only specify -XX:+UseParallelOldGC.

        Note that the above distinction between -XX:+UseParallelOldGC and -XX:+UseParallelGC no longer holds in JDK 7.  In JDK 7, the following three settings are equivalent:

        • Default
        • -XX:+UseParallelGC
        • -XX:+UseParallelOldGC
        They all use multithreaded collectors for both young generation and old generation.

        Heap Sizing

        If a heap size is small, collection will be fast but the heap will fill up more quickly, thus requiring more frequent collections. Conversely, a large heap will take longer to fill up and thus collections will be less frequent, but they may take longer.

        Command line parameters that divide the heap between the new and old generations usually have the greatest performance impact.  If you increase the new generation's size, you often improve overall throughput; however, you also increase the footprint, which may slow down servers with limited memory.

        For more details, you can read [12-15].

        Further Tuning

        HotSpot's default parameters are effective for most small applications that require faster startup and a smaller footprint.  However, more often than not, you will find the default settings are not good enough and you need to tune your Java applications further.  As shown in [16], there are many VM options exposed that can be further tuned by brave souls.  I'm not going to discuss such tunings in this article, but I'll keep posting articles with VM tunings on this blog.  Stay tuned!


        1. Java Performance Tips
        2. Pick up performance with generational garbage collection
        3. Java HotSpot™ Virtual Machine Performance Enhancements
        4. Oracle JRockit
        5. Memory Management in the Java HotSpot™ Virtual Machine
        6. Understanding GC pauses in JVM, HotSpot's CMS collector
        7. Java Performance by Charlie Hunt and Binu John
        8. Performance Tuning with Hotspot VM Option: -XX:+TieredCompilation
        9. Java Tuning White Paper
        10. A Case Study of Using Tiered Compilation in HotSpot
        11. HotSpot VM Binaries: 32-Bit vs. 64-Bit
        12. HotSpot Performance Option — SurvivorRatio
        13. A Case Study of java.lang.OutOfMemoryError: GC overhead limit exceeded
        14. Understanding Garbage Collection
        15. Diagnosing Java.lang.OutOfMemoryError
        16. What Are the Default HotSpot JVM Values?
        17. Understanding Garbage Collector Output of Hotspot VM
        18. On Stack Replacement in HotSpot JVM
        19. Professional Oracle WebLogic Server by Robert Patrick, Gregory Nyberg, and Philip Aston
        20. Sun Performance and Tuning: Java and the Internet by Adrian Cockroft and Richard Pettit
        21. Concurrent Programming in Java: Design Principles and Patterns by Doug Lea
        22. Capacity Planning for Web Performance: Metrics, Models, and Methods by Daniel A. Menascé and Virgilio A.F. Almeida
        23. Java Performance Tuning (Michael Finocchiaro)
        24. Diagnosing Heap Stress in HotSpot
        25. Introduction to HotSpot JVM Performance and Tuning
        26. Tuning the JVM (video)
          • Frequency of Minor GC dictated by:
            • Application object allocation rate
            • Size of Eden
          • Frequency of object promotion dictated by:
            • Frequency of minor GCs (tenuring)
            • Size of survivor spaces
          • Full GC Frequency dictated by
            • Promotion rate
            • Size of old generation
        27. JEP 173: Retire Some Rarely-Used GC Combinations
        28. G1 GC Glossary of Terms
        29. Learn More About Performance Improvements in JDK 8 
        30. Java SE HotSpot at a Glance
        31. Garbage-First Garbage Collector (JDK 8 HotSpot Virtual Machine Garbage Collection Tuning Guide)
        32. Tuning that was great in old JRockit versions might not be so good anymore
          • Trying to bring over each and every tuning option from a JR configuration to an HS one is probably a bad idea.
          • Even when moving between major versions of the same JVM, we usually recommend going back to the default (just pick a collector and heap size) and then redoing any tuning work from scratch (if even necessary).