Friday, September 27, 2013

Configuring Diagnostic Framework (DFW) Settings

A while ago, I posted an article titled
Understanding WebLogic Incident and the Diagnostic Framework behind It[1]

This article is a follow-up to that one.  Here, we will discuss how to configure Diagnostic Framework settings.

Diagnostic Framework (DFW)


A quick recap of what the Diagnostic Framework (DFW) is.  Oracle Fusion Middleware includes a Diagnostic Framework (DFW), which is available with all FMW 11g installations that run on WebLogic Server. It aids in detecting, diagnosing, and resolving problems, targeting critical errors in particular.

There are two ways that you can modify DFW settings:
  1. Modifying the configuration file named dfw_config.xml
  2. Making updates via Fusion Middleware Control (FMW Console)[2]
In this article, we will show how to modify the following setting:
  • maxTotalIncidentSize 
which configures the maximum total disk space allocated to incidents.

Configuration File


DFW's configuration file is named dfw_config.xml.  There is one dfw_config.xml file for each server.  For example, in the CRMDomain there is one for CRMCommonServer_1:
  • config/fmwconfig/servers/CRMCommonServer_1/dfw_config.xml

Here is a sample of the contents of dfw_config.xml:
<?xml version="1.0" encoding="UTF-8"?>
<diagnosticsConfiguration xmlns="<snipped>" 
  xmlns:xs="http://www.w3.org/2001/XMLSchema-instance">
  <!-- maxTotalIncidentSize configures the maximum total disk space
       allocated to incidents, in megabytes. -->
  <incidentCleanup maxTotalIncidentSize="500"/>
  <incidentCreation
    incidentCreationEnabled="true"
    logDetectionEnabled="true"
    uncaughtExceptionDetectionEnabled="true"
    floodControlEnabled="true"
    floodControlIncidentCount="5"
    floodControlIncidentTimePeriod="60"
    reservedMemoryKB="512"/>
  <threadDump useExternalCommands="true"/>
  <dumpSampling enabled="true">
    <dumpSample
      sampleName="JVMThreadDump"
      diagnosticDumpName="jvm.threads"
      samplingInterval="60"
      rotationCount="10"
      dumpedImplicitly="true"
      toAppend="true">
      <dumpArgument name="timing" value="true"/>
      <dumpArgument name="context" value="true"/>
    </dumpSample>
    <dumpSample
      sampleName="JavaClassHistogram"
      diagnosticDumpName="jvm.classhistogram"
      samplingInterval="1800"
      rotationCount="5"
      dumpedImplicitly="false"
      toAppend="true">
    </dumpSample>
  </dumpSampling>
</diagnosticsConfiguration> 
In our case, we would like to reduce
  • maxTotalIncidentSize
from 500 MB to 150 MB.
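If you prefer to edit the configuration file directly, a minimal sketch using sed could look like the following. The helper function is our own (not an Oracle tool), and it assumes the maxTotalIncidentSize attribute appears exactly once in the file, as in the sample above; a .bak copy is kept before editing.

```shell
#!/bin/sh
# Minimal sketch (our own helper): rewrite the maxTotalIncidentSize
# attribute in a dfw_config.xml file. Assumes the attribute occurs
# exactly once, as in the sample shown above.
update_max_incident_size() {
  file=$1
  new_size=$2
  # keep a .bak copy before editing in place
  sed -i.bak "s/maxTotalIncidentSize=\"[0-9][0-9]*\"/maxTotalIncidentSize=\"${new_size}\"/" "$file"
}
```

For example: update_max_incident_size config/fmwconfig/servers/CRMCommonServer_1/dfw_config.xml 150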

FMW Console


To start Oracle Enterprise Manager 11g, I used the following URL:
  • http://myserver.oracle.com:9001/em

The following steps show how to configure
  • maxTotalIncidentSize 
using the Fusion Middleware Control System MBean Browser:
  1. From the target navigation pane, expand the farm, then WebLogic Domain.
  2. Select the domain.
  3. From the WebLogic Domain menu, choose System MBean Browser.
  4. The System MBean Browser page is displayed.
  5. Expand Application Defined Beans, then oracle.dfw, then domain.domain_name, then dfw.jmx.DiagnosticsConfigMBean.
  6. Select one of the DiagnosticConfig entries. There is one DiagnosticConfig entry for each server.
  7. In the Application Defined MBean pane, expand Show MBean Information to see the server name.





Book Review: Developing Web Applications with Oracle ADF Essentials

At the end of the book, the author claims that:
If you have followed the exercises in this book, you are ready to build real-world ADF Essentials applications and can consider yourself an ADF Essentials journeyman.
I cannot agree more. If you are new to ADF (Oracle Application Development Framework) programming, you probably need to add this cookbook to your toolbox. Unfortunately, some hyperlinks embedded in the book are broken. At the end of this article, you can find the correct links to some of the important topics covered in the book.

Bottom-up Approach


The ADF Essentials toolkit is used in this book to help you learn ADF programming. The recently released ADF Essentials gives developers a free version of the core components of the ADF framework, which they can use to build an end-to-end ADF-based solution including advanced UI components, task flows,[4] the binding layer, and business components or EJBs.

In this book, the author uses a bottom-up approach to introduce you to ADF programming. A full-blown DVD rental application (in Chapter 6) is built, tested, and deployed using the following technology stack:
  • The free MySQL database[5]
  • The free GlassFish application server[7]
    • Note that you also need Java Development Kit[6]
  • The free ADF Essentials toolkit[3]
  • The free JDeveloper development tool[9]
    • Oracle also supports ADF Essentials as part of their Oracle Enterprise Pack for Eclipse (OEPE) product.[8]
Although GlassFish was used in the exercises, this book also discusses features that are available only on WebLogic Server (WLS) and its associated management system—Oracle Enterprise Manager Grid Control. For example, the ADF logger is the preferred logging component (vs. log4j and Logback) for ADF applications, and there are differences in the logging features supported on GlassFish and WLS:
  • Logging configuration
    • ADF logging is controlled by the logging.xml file.
    • JDeveloper offers a nice interface for managing this file for the built-in WLS.
  • Log monitoring
    • WLS—logging can be read and analyzed using Oracle Enterprise Manager Grid Control.
    • GlassFish—logging can only be read directly from server.log file.

ADF Framework


All ADF applications consist of the following parts:


  • View layer
    • The View layer consists of the pages that are displayed to end users (JSF pages or JSF page fragments).
    • ADF Faces is based on JSF and built on top of Trinidad, an open source JSF framework.
  • Controller layer
    • The Controller layer consists of ADF Task Flows[4,17] that control the flow between the elements of the view layer.
  • Model layer
    • ADF Model is a binding layer that binds the UI (ADF Faces, based on JSF) to the back-end data model without tightly coupling the UI components to it.
  • Business Service layer
    • The Business Service layer provides services to query and manipulate data.
    • There are many ways to build business services—this book uses ADF Business Components, but you can also use, for example, JPA entities and EJB 3.0 session beans, POJOs, web services, and so on.
  • Database layer
    • The Database layer is where your data is stored persistently.

The Book


This book shows you:
  • How to set up the entire infrastructure for building ADF applications
  • How to install the necessary interconnections and wire everything together
  • How to add Java code to your application to implement customized business logic
  • How to build and deploy ADF applications to application servers
  • How to debug ADF applications
  • How to build a scalable structure using foundation workspaces and ADF libraries
  • How to secure ADF applications (Apache Shiro[15] is used in this book)
Without a doubt, you will be able to write real-world ADF applications after reading this book. But before you roll up your sleeves and jump into programming, read the following guidelines first:
  • ADF Naming and Project Layout Guidelines[11]
  • ADF Naming and Project Layout Guidelines[11]

References

  1. Developing Web Applications with Oracle ADF Essentials (reviewed book in this article)
  2. journeyman (wikipedia)
  3. ADF Essentials downloads
    • Version 11.1.2.4 was used in the book.
    • After navigating to the home page, click on "Oracle ADF Essentials - FREE"
  4. Oracle ADF Task Flow in a Nutshell (Xml and More)
  5. MySQL downloads
    • Free Community Server edition Version 5.6.12 was used in the book.
  6. JDK 7 downloads
    • In order to be able to install and run GlassFish, your system first needs to have JDK 7 installed. Jdk1.7.0_25 was installed and used in the book.
  7. GlassFish downloads
    • GlassFish Server Open Source Edition 3.1.2.2 (for Windows platform) was used in the book.
  8. Oracle Enterprise Pack for Eclipse 12c (12.1.2.1.1)
    • Oracle Application Development Framework - Oracle ADF
    • Oracle JDeveloper downloads
      • Studio Edition 11.1.2.4.0 was used in the book.
    • ADF Naming and Project Layout Guidelines v1.00 (16/Jan/2013)
    • Oracle ADF Essentials
    • ADF Naming and Project Layout Guidelines (By Chris Muir)
    • Adventures in ADF Logging - Part 1 (Duncan Mills)
    • Apache Shiro
    • Using Bind Variable to Implement Range Selection Declaratively (Xml and More)
    • Understanding Task Flow Transaction and Savepoint Support in Oracle ADF (Xml and More)

    Thursday, September 26, 2013

    WebLogic Startup Slowness Caused by Kernel's Random Number Generator

    A Java application (e.g., WebLogic Server) can be slow at startup, and the cause can be the slowness of the random number generator used by the application.  You can read [2] for a case that discusses the slowness of WebLogic startup.

    In this article, we will examine the following issues on Linux systems:
    • /dev/random vs. /dev/urandom
      • How to test the performance of a random number generator
      • How to configure it
    • Security considerations
    Note that this can also happen with WLS running on AIX.

    /dev/random vs. /dev/urandom


    Without further ado, here is the man page output for "urandom":
    The character special files /dev/random and /dev/urandom (present since Linux 1.3.30) provide an interface to the kernel's random number generator.  File /dev/random has major device number 1 and minor device number 8.  File /dev/urandom has major device number 1 and minor device number 9.
    The random number generator gathers environmental noise from device drivers and other sources into an entropy pool.  The generator also keeps an estimate of the number of bits of noise in the entropy pool.  From this entropy pool random numbers are created. 
    When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool.  /dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation.  When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered. 
    A read from the /dev/urandom device will not block waiting for more entropy.  As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver.  Knowledge of how to do this is not available in the current non-classified literature, but it is theoretically possible that such an attack may exist.  If this is a concern in your application, use /dev/random instead.

    How to Test?


    You can use the "time" command to measure the performance of each random number generator.  For example, here is the output from a Linux system:

    $ time head -1 /dev/random
    real    0m9.718s
    user    0m0.000s
    sys     0m0.001s


    $ time head -1 /dev/./urandom
    real    0m0.002s
    user    0m0.000s
    sys     0m0.002s

    As you can see, "/dev/urandom" is much faster because it is non-blocking, while /dev/random blocks until additional environmental noise is gathered and therefore takes longer to return.
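    On Linux, you can also check how much entropy the kernel currently estimates it has; if this number stays low, reads from /dev/random are likely to block. Below is a small sketch; the optional path argument is our own addition to make the helper testable, and the default /proc path is standard on Linux.

```shell
#!/bin/sh
# Linux-only sketch: report the kernel's current entropy estimate in bits.
# A persistently low value suggests /dev/random reads may block.
# $1 (optional) overrides the proc path, which makes the function testable.
entropy_avail() {
  cat "${1:-/proc/sys/kernel/random/entropy_avail}"
}
```

For example, running entropy_avail on an idle virtual machine often shows a low value, which matches the slow "time head -1 /dev/random" result above.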

    How to Configure?


    You can configure which source of seed data SecureRandom uses at the JVM level or on WebLogic Server's command line.

    At the JVM level, you can change the value of the securerandom.source property in the file:
    • $JAVA_HOME/jre/lib/security/java.security
    Here is the description of the securerandom.source property:

    # Select the source of seed data for SecureRandom. By default an
    # attempt is made to use the entropy gathering device specified by
    # the securerandom.source property. If an exception occurs when
    # accessing the URL then the traditional system/thread activity
    # algorithm is used.
    #
    # On Solaris and Linux systems, if file:/dev/urandom is specified and it
    # exists, a special SecureRandom implementation is activated by default.
    # This "NativePRNG" reads random bytes directly from /dev/urandom.
    #
    # On Windows systems, the URLs file:/dev/random and file:/dev/urandom
    # enables use of the Microsoft CryptoAPI seed functionality.
    #
    securerandom.source=file:/dev/urandom
    

    Or, you can specify which source of seed data to use by adding
    • -Djava.security.egd=file:/dev/./urandom
    to the java command-line that starts WebLogic Server.
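    As a hedged sketch, the flag might be added to WebLogic Server by extending the options variable used by the start scripts; the exact file and variable name (e.g. setDomainEnv.sh and JAVA_OPTIONS, as assumed here) depend on your domain setup.

```shell
# Hedged sketch: where exactly this belongs depends on your domain's
# start scripts (setDomainEnv.sh and JAVA_OPTIONS are assumptions).
JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.security.egd=file:/dev/./urandom"
export JAVA_OPTIONS
```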

    Security Considerations


    In [1], it is warned that if you choose /dev/urandom over /dev/random for better performance, you should be aware that:
    This workaround should not be used in production environments because it uses pseudo-random numbers instead of genuine random numbers.

    
    

    References

    1. Random Number Generator May Be Slow on Machines With Inadequate Entropy
    2. Weblogic starts slow
    3. Fusion Middleware Performance and Tuning for Oracle WebLogic Server
    4. Oracle® Fusion Middleware Tuning Performance of Oracle WebLogic Server 12c (12.2.1)
    5. Fusion Middleware Tuning Performance of Oracle WebLogic Server (12.2.1.3.0)

    Wednesday, September 11, 2013

    New Diagnostic, Monitoring, Security and Deployment Capabilities for Java SE 7

    Earlier today Oracle released Java SE 7u40.  

    The most important features are:
    • Java Mission Control and Java Flight Recorder (commercial features) are now available for HotSpot as well as for JRockit
    • Deployment Rule Sets for system administrators to control which applets can use which JRE version
    • ARM v7 support for hard float (hardfloat)
    • Scene Builder 1.1 (separate download) for creating JavaFX GUIs
    The press release contains more information on this update.

    Downloads are now live on OTN and java.com.

    This release does not change the security baseline, so users who had the previous version of the JRE (7u25) installed will not be auto-updated to 7u40.

    Wednesday, September 4, 2013

    How to Troubleshoot High CPU Usage of Java Applications?

    A Java application that is constantly maxing out the CPU load can sometimes be a good thing[3]. For instance, for a batch application that is computationally bound, it would normally be a best-case scenario for it to complete as soon as possible. Idle CPU, on the other hand, can be a waste and should be avoided. The CPU idles when the system:
    • Needs to wait for locks or external resources
      • The application might be blocked on a synchronization primitive and unable to execute until that lock is released
      • The application might be waiting for something, such as a response to come back from a call to the database
    • Has no threads available to handle the work in a multithreaded, multi-CPU case
    • Has nothing to do
    Looking at the % CPU utilization is a first step in understanding your application performance, but it is only that—use it to see whether you are using all the CPU you expect, or whether it points to some synchronization or resource issue.

    Normally, some over-provisioning is needed to keep an application responsive. If the CPU usage is very high (e.g., consistently over 95%), you may want to invest in better hardware, or look over the data structures and algorithms employed by the application.

    In this article, we will show you how to investigate where all those CPU cycles are being spent in your Java applications.

    How to Troubleshoot High CPU Usage?


    The easiest approach is to generate a sequence of thread dumps to see what's keeping the processor busy. Note that you can't tell much from a single thread dump, so you need a sequence of them.

    Thread dumps generated at times of high CPU usage are the most useful. To monitor CPU usage, you can use tools like top[2] (Linux) or prstat[4] (Solaris) to see which threads are consuming the most CPU, and take thread dumps at the same time. Then you can map the thread ids. It may turn out to be GC that is taking the CPU if your memory pressure is high; in that case, you also need to gather GC logs for further analysis.
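    The capture routine described above can be sketched as a small script. The helper name is our own, and it assumes jstack (shipped with the JDK) and GNU top's -b/-H/-n/-p options are available; adjust for your platform.

```shell
#!/bin/sh
# Sketch (our own helper): capture N thread dumps, each paired with a
# top -H snapshot, so per-thread CPU usage can later be correlated with
# stack traces. Assumes jstack (JDK) and GNU top; adjust as needed.
capture_dumps() {
  pid=$1; count=$2; interval=$3
  i=1
  while [ "$i" -le "$count" ]; do
    top -b -H -n 1 -p "$pid" > "top_${pid}_${i}.txt" 2>/dev/null
    jstack "$pid" > "td_${pid}_${i}.txt" 2>/dev/null
    i=$((i + 1))
    [ "$i" -le "$count" ] && sleep "$interval"
  done
  return 0
}
```

For example, capture_dumps 9872 5 10 takes five samples, ten seconds apart, for process 9872 (a hypothetical pid).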

    Using the top Linux command, Java threads (Linux LWPs) are sorted by %CPU by default. Pressing Shift+F shows a screen specifying the current sort field; you can then select a different sort field by pressing the corresponding field letter. For example, select "n" to sort by memory usage (RES).
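    To match a thread from top -H against a thread dump, note that HotSpot thread dumps print the native thread id as a hexadecimal nid, while top shows a decimal LWP id. A one-line helper (our own sketch, using standard printf) converts between them:

```shell
#!/bin/sh
# Convert a decimal LWP id (as shown by top -H) to the hexadecimal
# nid=0x... form printed in HotSpot thread dumps.
lwp_to_nid() {
  printf 'nid=0x%x\n' "$1"
}
```

For example, LWP 9872 from top corresponds to nid=0x2690 in the thread dump.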

    User Time vs. System Time


    Some Linux commands (e.g., vmstat)[1] can report CPU time spent in either system or user space. User time (including nice time for vmstat) is the percentage of time the CPU is executing application code (including GC code), while system time is the percentage of time the CPU is executing kernel code.

    System time could be related to your application too. For example, if your application performs I/O, the kernel will execute the code to read the file from disk, or write the network buffer, and so on. High levels of system time often mean something is wrong, or the application is making many system calls. Investigating the cause of high system time is always worthwhile.

    CPU Tuning


    The goal in performance tuning is always to drive CPU usage as high as possible (for as short a time as possible). The CPU number indicates how effectively the program is using the expensive CPU, so the higher the number the better. As previously mentioned, in some CPU-bound applications (i.e., where the CPU is the limiting factor), for example batch jobs, it is normally a good thing for the system to be completely saturated during the run. However, for a standard server-side application, it is probably more beneficial if the system is able to handle some extra load in addition to the expected one.

    Based on [8], Oracle has provided the following tuning guidelines (including CPU tuning) for its Fusion Applications:


    Metric Category             Metric Name                                   Warning   Critical  Comments
    Disk Activity               Disk Device Busy                              >80%      >95%
    Filesystems                 Filesystem Space Available                    <20       <5
    Load                        CPU in I/O wait                               >60%      >80%
                                CPU Utilization                               >80%      >95%
                                Run Queue (5 min average)                     >2        >4        The run queue is normalized by the number of CPU cores.
                                Swap Utilization                              >75%      >90%
                                Total Processes                               >15000    >25000
                                Logical Free Memory %                         <20       <10
                                CPU in System Mode                            >20%      >40%
    Network Interfaces Summary  All Network Interfaces Combined Utilization   >80%      >95%
    Switch/Swap Activity        Total System Swaps                            >3        >5        Value is per second.
    Paging Activity             Pages Paged-in (per second)                                       The combined value of Pages Paged-in and
                                Pages Paged-out (per second)                                      Pages Paged-out should be <=1000.


    Oracle Performance Tools


    To analyze high CPU usage in Java applications, the best approach is to use an enterprise profiler. For example, Oracle Solaris Studio[5] can offer more performance details and better measurements.  It now runs on Oracle Solaris, Oracle Linux, and Red Hat Enterprise Linux operating systems.

    The Oracle Solaris Studio Performance Analyzer can be extremely useful for identifying bottlenecks and providing advanced profiling for your applications. Its key features include[6]:
    • Low overhead for fast and accurate results
    • Advanced profiling of single-threaded and multithreaded applications
    • Support for multiprocess and system-wide profiling
    • Ability to analyze MPI applications
    • Support for C, C++, Fortran, and Java code
    • Optimized for the latest Oracle systems
    If your applications run on JRockit, another good way to profile CPU usage is to capture JRockit Flight Recordings[7]. JFR can provide an extremely detailed level of profiling with little impact on your application's performance.  If you use HotSpot, Java Mission Control and Java Flight Recorder (commercial features) are now available for Java SE 7u40 as well as for JRockit[9].
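    As a sketch, on HotSpot 7u40 the commercial features must be unlocked before Flight Recorder can be enabled; where you add these flags (e.g. to the JAVA_OPTIONS variable used by your start scripts, as assumed here) depends on how you start your JVM.

```shell
# Hedged sketch: unlock and enable Java Flight Recorder on HotSpot 7u40+.
# Commercial features require an appropriate Oracle license in production.
# JAVA_OPTIONS as the carrier variable is an assumption about your scripts.
JAVA_OPTIONS="${JAVA_OPTIONS} -XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
export JAVA_OPTIONS
```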

    References

    Monday, September 2, 2013

    HotSpot: Using jstat to Explore the Performance Data Memory

    HotSpot provides jvmstat instrumentation for performance testing and problem isolation purposes, and it is enabled by default (see -XX:+UsePerfData).

    If you run Java application benchmarks, it is also useful to save the PerfData memory to an hsperfdata_<vmid> file on exit by setting:
    • -XX:+PerfDataSaveToFile
    A file named hsperfdata_<vmid> will be saved in the WebLogic domain's top-level folder.

    How to Read hsperfdata File?


    To display statistics collected in PerfData memory, you can use:
    • jstat[3]
      • Experimental JVM Statistics Monitoring Tool - it can attach to an instrumented HotSpot Java virtual machine and collect and log performance statistics as specified by the command-line options (formerly jvmstat)
    There are two ways of showing the statistics collected in PerfData memory:
    • Online
      • You can attach to an instrumented HotSpot JVM and collect and log performance statistics at runtime.
    • Offline
      • You can set the -XX:+PerfDataSaveToFile flag and read the contents of the hsperfdata_<vmid> file on JVM exit.
    In the following, we show an offline example of reading the hsperfdata_<vmid> file (a binary file; you need jstat[3] to display its contents):
    $ /scratch/perfgrp/JVMs/jdk-hs/bin/jstat -class file:///<Path to Domain>/MyDomain/hsperfdata_9872

    Loaded    Bytes  Unloaded   Bytes       Time
    30600   64816.3         2     3.2      19.74

    You can check all available command options supported by jstat using:

    $jdk-hs/bin/jstat -options
    -class
    -compiler
    -gc
    -gccapacity
    -gccause
    -gcmetacapacity
    -gcnew
    -gcnewcapacity
    -gcold
    -gcoldcapacity
    -gcutil
    -printcompilation

    HotSpot Just-In-Time Compiler Statistics


    One of the command options supported by jstat is "-compiler", which provides high-level JIT compiler statistics.

    Column        Description
    Compiled      Number of compilation tasks performed.
    Failed        Number of compilation tasks that failed.
    Invalid       Number of compilation tasks that were invalidated.
    Time          Time spent performing compilation tasks.
    FailedType    Compile type of the last failed compilation.
    FailedMethod  Class name and method of the last failed compilation.

    In the following, we show the compiler statistics of three managed servers in one WLS domain, using two different JVM builds:

    $/scratch/perfgrp/JVMs/jdk-hs/bin/jstat -compiler file:///<Path to Domain>/MyDomain/hsperfdata_9872


    JVM1

    Compiled Failed Invalid   Time   FailedType FailedMethod
       33210     13       0   232.97          1 oracle/ias/cache/Bucket objInvalidate
       74054     20       0   973.03          1 oracle/security/o5logon/b b
       74600     18       0  1094.21          1 oracle/security/o5logon/b b

    JVM2

    Compiled Failed Invalid   Time   FailedType FailedMethod
       33287     10       0   246.26          1 oracle/ias/cache/Bucket objInvalidate
       68237     18       0  1022.46          1 oracle/security/o5logon/b b
       67346     18       0   943.79          1 oracle/security/o5logon/b b

    Given the above statistics, a next step could be to analyze why JVM2 generated fewer compiled methods than JVM1 did. At least, this is one of the use cases for PerfData and its associated tool, jstat.

    PerfData-Related JVM Options


    Name                      Description                                                    Default    Type
    UsePerfData               Flag to disable jvmstat instrumentation for performance        true       bool
                              testing and problem isolation purposes.
    PerfDataSaveToFile        Save PerfData memory to an hsperfdata_<vmid> file on exit.     false      bool
    PerfDataSamplingInterval  Data sampling interval in milliseconds.                        50 /*ms*/  intx
    PerfDisableSharedMem      Store performance data in standard memory.                     false      bool
    PerfDataMemorySize        Size of the performance data memory region; rounded up to     32*K       intx
                              a multiple of the native OS page size.

    Note that the default size of the PerfData memory is 32K. Therefore, the file (i.e., the hsperfdata_<vmid> file) dumped on exit is also 32K in size.

    References

    1. New Home of Jvmstat Technology
    2. The most complete list of -XX options for Java JVM
    3. jstat - Java Virtual Machine Statistics Monitoring Tool