
Showing posts with label Benchmark. Show all posts

Thursday, December 21, 2017

JMeter—Use Extractors (Post-Processor Elements) for Correlation



What Is Correlation


Correlation is the process of capturing the dynamic response (e.g., the "Session ID" in the above diagram) from the server, storing it, and passing it on to subsequent requests. A response is considered dynamic when it can return different data for each iteration of a request, sometimes affecting successive requests. Correlation is a critical part of performance load test scripting, because if it isn't handled correctly, your script will be useless.

Correlation is a two-step process:
  1. Parse and extract the dynamic value from the response of a step using a Post-Processor element such as the Regular Expression Extractor, the CSS/JQuery Extractor, or the JSON Extractor
  2. Reference the extracted value in the request of a subsequent step
    • http://.../.../...?sessionID=${sessionId}#....
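The same extract-then-substitute flow can be sketched outside of JMeter. The shell snippet below uses a made-up response body and URL purely for illustration; in a real test, the response would come from the server under test:

```shell
#!/bin/sh
# Hypothetical response body captured in step 1 (illustration only)
response='<input type="hidden" name="sessionID" value="A1B2C3D4"/>'

# Step 1: parse and extract the dynamic value (the Post-Processor's job)
sessionId=$(printf '%s' "$response" \
  | sed -n 's/.*name="sessionID" value="\([^"]*\)".*/\1/p')

# Step 2: reference the extracted value in the next request's URL
nextUrl="http://example.com/app/page?sessionID=${sessionId}"
echo "$nextUrl"
```

If the extraction fails, the substituted URL carries an empty (or default) value, which is exactly how an uncorrelated JMeter script ends up sending broken requests.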


How to Use Regular Expression Extractor


Watching the above video, you can learn how to create a Regular Expression Extractor in JMeter using the following steps:[2]
  1. Create a Test Plan where you want to do dynamic referencing in JMeter
  2. Add a Regular Expression Extractor to the step from which the response value(s) need to be extracted
    • You can use RegExr—an online tool—to learn, build, and test Regular Expressions
  3. Reference the extracted value (via its Reference Name) in subsequent step(s)
  4. Run and validate it
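Before wiring the extractor into the test plan, it helps to try the pattern against a sample body. The snippet below mimics hypothetical extractor settings (Regular Expression `name="csrfToken" value="([^"]+)"`, Template `$1$`, Match No. `1`) with sed playing the role of the capture group:

```shell
#!/bin/sh
# Hypothetical response fragment to test the pattern against
body='<input type="hidden" name="csrfToken" value="tok-42xyz">'

# Group 1 of the pattern plays the role of JMeter's $1$ template
token=$(printf '%s' "$body" \
  | sed -n 's/.*name="csrfToken" value="\([^"]*\)".*/\1/p')
echo "$token"
```

Once the pattern returns the expected value against a saved response, it can be copied into the extractor's Regular Expression field as-is.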


How to Use CSS/JQuery Extractor





Watching the above video, you can learn how to create a CSS/JQuery Extractor in JMeter using similar steps:[8]
  1. Create a Test Plan where you want to do dynamic referencing in JMeter
  2. Add a CSS/JQuery Extractor[9] to the step from which the response value(s) need to be extracted
    • You can find a detailed explanation of CSS syntax here. jQuery's selector engine uses mostly the same syntax as CSS, with some exceptions. To select an arbitrary locator, set the Match No. field to '0', which returns a random value from all found results.
    • It is also worth mentioning that there are several convenient browser plugins for testing CSS locators right in your browser. For Firefox, you can use the Firebug plugin, while for Chrome, XPath Helper is the most convenient tool.
  3. Reference the extracted value (via its Reference Name) in subsequent step(s)
  4. Run and validate it
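The Match No. semantics mentioned above can be illustrated with a toy example (the markup and selector are made up): a positive N returns the Nth match, 0 returns a random match, and -1 returns all matches.

```shell
#!/bin/sh
# Hypothetical body containing three elements that a CSS selector
# like "a.item" would match
body='<a class="item">red</a><a class="item">green</a><a class="item">blue</a>'

# Collect every match, one per line, then strip the tags
matches=$(printf '%s\n' "$body" \
  | grep -o '<a class="item">[^<]*</a>' \
  | sed 's/<[^>]*>//g')

# Match No. = 2 -> the second result
echo "$matches" | sed -n '2p'
```

Here Match No. 2 yields "green"; Match No. 0 would pick one of the three lines at random.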


How to Use JSON Extractor


Read the companion articles on this subject in the references below.

References

  1. Advanced Load Testing Scenarios with JMeter: Part 1 - Correlations
  2. JMeter Beginner Tutorial 19 - Correlation (with Regular Expression Extractor)
  3. Using RegEx (Regular Expression Extractor) with JMeter
  4. RegExr—an online tool (good)
  5. JMeter Listeners - Part 1: Listeners with Basic Displays
  6. Understand and Analyze Summary Report in Jmeter
  7. How to Automate Auth Token using JMETER
  8. How to Use the CSS/JQuery Extractor in JMeter (BlazeMeter)
  9. How to Use the CSS/jQuery Extractor in JMeter  (DZone)
  10. JMeter: How to Turn Off Captive Portal from the Recording Using Firefox (Xml and More)
  11. JMeter―Select on Multiple Criteria with JSONPath  (Xml and More)
  12. JMeter: How to Verify JSON Response?  (Xml and More)

Tuesday, June 13, 2017

Linux sar Command: Using -o and -f in Pairs

System Activity Reporter (sar) is one of the most important tools for monitoring Linux servers. Using this command, you can analyze the history of different kinds of resource usage.

In this article, we will examine how to monitor the resource usage of servers (e.g., in a cluster) during the entire run of an application (e.g., a benchmark) using the following pair of sar commands:
  • Data Collection
    • nohup sar -A -o /tmp/sar.data 10 > /dev/null &
  • Record Extraction
    • sar -f /tmp/sar.data [-u | -d | -n DEV]

Sar Command Options


In the data collection phase, we use the -o option to save data to a file in binary format; we then use the -f option combined with other options (e.g., [-u | -d | -n DEV]) to extract records for different statistics (e.g., CPU, I/O, network):

Main options

       -o [ filename ]
              Save the readings in the file in binary form. Each reading is in
              a separate record. The default value of the  filename  parameter
              is  the  current daily data file, the /var/log/sa/sadd file. The
              -o option is exclusive of the -f option.  All the data available
              from  the  kernel  are saved in the file (in fact, sar calls its
              data collector sadc with the option "-S ALL". See sadc(8) manual
              page).


       -f [ filename ]
              Extract records from filename (created by the -o filename flag).
              The default value of the filename parameter is the current daily
              data file, the /var/log/sa/sadd file. The -f option is exclusive
              of the -o option.

Others

       -u [ ALL ]
              Report CPU utilization. The ALL keyword indicates that  all  the
              CPU fields should be displayed.

       -d    Report activity for each block device  (kernels  2.4  and  newer
              only).

       -n { keyword [,...] | ALL }
              Report network statistics.


Monitoring the Entire Run of a Benchmark


In the illustration, we will use three benchmarks (i.e., scan, aggregation, and join) from the HiBench suite as examples (see [2] for details). At the beginning of each benchmark run, we start up sar commands on the servers of a cluster; we then run the Spark application of a specific workload; finally, we kill the sar processes at the end of the run.

run.sh
#!/bin/bash

if [ $# -ne 2 ]; then
  echo "usage: run.sh <workload> <target>"
  echo "  where <workload> could be:"
  echo "    scan"
  echo "    aggregation"
  echo "    join"
  echo "  where <target> could be:"
  echo "    mapreduce"
  echo "    spark/java"
  echo "    spark/scala"
  echo "    spark/python"
  exit 1
fi

workload=$1
target=$2
workloadsRoot=/data/hive/BDCSCE-HiBench/workloads

mkdir ~/$workload/$target

echo "start all sar commands ..."

./stats.sh start

while read -r vmIp
do
  echo "start stats on $vmIp"
  ./myssh opc@$vmIp "~/stats.sh start" &
done < vm.lst

# run a test in different workloads using different lang interfaces
$workloadsRoot/$workload/$target/bin/run.sh


echo "stop all sar commands ..."
./stats.sh stop

while read -r vmIp
do
  echo "stop stats on $vmIp"
  ./myssh opc@$vmIp "~/stats.sh stop" &
done < vm.lst


stats.sh

#!/bin/sh

case $1 in
  'start')
        pkill sar
        rm -f /tmp/sar.data
        nohup sar -A -o /tmp/sar.data 10 > /dev/null &
        ;;
  'stop')
        pkill sar
        scp /tmp/sar.data ~
        ;;
  *)
        echo "usage: $0 start|stop"
        ;;
esac

CPU Statistics


To view the overall CPU statistics, you can use option -u as follows:

$ sar -f sar.data -u

03:39:28 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle

03:39:38 PM     all      0.03      0.00      0.01      0.02      0.00     99.94

03:39:48 PM     all      0.05      0.00      0.05      0.02      0.01     99.88

<snipped> 

Average:        all      0.09      0.00      0.02      0.02      0.00     99.86
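As with the device statistics later in this article, a single figure can be pulled out of this report with grep and awk. The snippet below runs the pipeline against a canned two-line excerpt of the output above, since sar itself requires the sysstat package:

```shell
#!/bin/sh
# Canned excerpt of `sar -f sar.data -u` output (from the report above)
report='03:39:38 PM     all      0.03      0.00      0.01      0.02      0.00     99.94
Average:        all      0.09      0.00      0.02      0.02      0.00     99.86'

# The average %idle is the 8th field of the Average line
idle=$(printf '%s\n' "$report" | grep '^Average' | awk '{print $8}')
echo "$idle"
```

Against the real binary file, the equivalent would be `sar -f sar.data -u | grep Average | awk '{print $8}'`.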

           

I/O Statistics of Block Devices


To view the activity for each block device, you can use option -d as follows:

$ sar -f sar.data -d


03:39:28 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
03:39:38 PM dev202-16      1.20      0.00     16.06     13.33      0.02     14.67      6.50      0.78
03:39:38 PM dev202-32      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:39:38 PM dev202-48      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:39:38 PM dev202-64      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:39:38 PM dev202-80      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:39:38 PM  dev251-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:39:38 PM  dev251-1      1.20      0.00     16.06     13.33      0.02     14.67      6.50      0.78
03:39:38 PM  dev251-2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:39:38 PM  dev251-3      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:39:38 PM  dev251-4      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
<snipped>

Average:          DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
Average:    dev202-16      1.22      0.00     15.79     12.99      0.01     11.85      6.57      0.80
Average:    dev202-32      0.85      0.00      8.92     10.46      0.01     10.27      4.18      0.36
Average:    dev202-48      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    dev202-64      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    dev202-80      0.21      0.00      1.74      8.43      0.00      0.30      0.08      0.00
Average:     dev251-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:     dev251-1      1.25      0.00     15.97     12.73      0.01     11.78      6.37      0.80
Average:     dev251-2      0.90      0.00      8.92      9.88      0.01     10.44      3.95      0.36
Average:     dev251-3      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:     dev251-4      0.22      0.00      1.74      8.00      0.00      0.28      0.08      0.00

If you are interested in the average tps of dev251-1:
              tps 
                     Indicate  the  number  of  transfers per second that were
                     issued to the device.  Multiple logical requests  can  be
                     combined  into  a  single  I/O  request  to the device. A
                     transfer is of indeterminate size.
you can specify the following command:
$ sar -f "$destDir/sar.data" -d | grep Average  | grep dev251-1 | awk '{print $3}'

Network Statistics


To view the overall statistics of network devices like eth0, bond, etc., you can use the -n option as follows:

Syntax: 
sar -n [VALUE]
The VALUE can be:
  • DEV: For network devices like eth0, bond, etc. 
  • EDEV: For network device failure details 
  • NFS: For NFS client info 
  • NFSD: For NFS server info 
  • SOCK: For sockets in use for IPv4 
  • IP: For IPv4 network traffic 
  • EIP: For IPv4 network errors 
  • ICMP: For ICMPv4 network traffic 
  • EICMP: For ICMPv4 network errors 
  • TCP: For TCPv4 network traffic 
  • ETCP: For TCPv4 network errors 
  • UDP: For UDPv4 network traffic 
  • SOCK6, IP6, EIP6, ICMP6, UDP6 : For IPv6 
  • ALL: For all of the above information.
$ sar -f sar.data -n DEV

03:39:28 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s

03:39:38 PM      eth0     12.35     16.47      1.34      4.04      0.00      0.00      0.00
03:39:38 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00
03:39:48 PM      eth0      9.63     14.64      1.17      4.03      0.00      0.00      0.00
03:39:48 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00
<snipped> 

Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
Average:         eth0     11.26     16.14      3.95      6.46      0.00      0.00      0.00
Average:           lo      1.23      1.23      0.33      0.33      0.00      0.00      0.00

If you are interested in the average rxkB/s or txkB/s of eth0:
              rxkB/s
                     Total number of kilobytes received per second.

              txkB/s
                     Total number of kilobytes transmitted per second.

you can specify the following command:
sar -f "$destDir/sar.data" -n DEV|grep Average|grep eth0 |awk '{print $5}'
sar -f "$destDir/sar.data" -n DEV|grep Average|grep eth0 |awk '{print $6}'

References

  1. sar command for Linux system performance monitoring
  2. Three Benchmarks for SQL Coverage in HiBench Suite ― a Bigdata Micro Benchmark Suite

Sunday, April 3, 2016

Expect Scripts: How to Automate Your Tasks

Task to be Automated

The following shows an interactive session in which psm, the Oracle Application Container Cloud Service command-line tool, prompts a user for the authentication and authorization information needed to sign in to an Oracle Cloud service and work in a specific identity domain.

$ psm setup
Username: weblogic1
Password:
Retype Password:
Identity domain: myIdDomain
Region [us]: http://anycloudserver.example.com:7103
Output format [json]:
In this article, we will demonstrate how to use Expect to automate the above task; we will not focus on the correctness of the information supplied to psm.

Expect


Expect is an extension to the Tcl scripting language that "talks" to other interactive programs according to a script.[1] Following the script, Expect knows what can be expected from a program and what the correct response should be.

It can be used to automate control of interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, ssh, and others, including psm. Expect uses pseudo terminals (Unix) or emulates a console (Windows), starts the target program, and then communicates with it, just as a human would, via the terminal or console interface. Finally, Tk, another Tcl extension, can be used to provide a GUI.


Expect Script


To automate the said task, we need to write an Expect script (i.e., psmSetup.exp) as shown below:

psmSetup.exp
#!/usr/bin/expect -f
#exp_internal 1

set argDomain [lindex $argv 0]

spawn psm setup
expect "Username: "
send "weblogic\r"
expect "Password: "
send "welcome1\r"
expect "Retype Password: "
send "welcome1\r"
expect "Identity domain: "
send "$argDomain\r"
expect "Region \\\[us\\\]: "
send "http://anycloudserver.example.com:7103\r"
expect "Output format \\\[json\\\]: "
send "\r"
expect "\r"

spawn psm accs apps
expect "\r"

Expect scripts can have any file name suffix you like, though they generally have an .exp extension. Read [5] for more details.

How to Debug Expect Scripts?


When writing an Expect script for the first time, it is easy to get completely lost and not get the result you expect. In this case, un-comment the following line in psmSetup.exp:

#exp_internal 1

Setting "exp_internal 1" at the beginning of an Expect script is similar to using the -d flag (when using Expectk, this option is specified as -diag), which enables some diagnostic output. This primarily reports internal activity of commands such as expect and interact. In addition, the strace command is useful for tracing statements, and the trace command is useful for tracing variable assignments.

How to Pass Variables from Shell Script to Expect Script?


Say you write a Korn shell script as below:

#!/bin/ksh
...

./psmSetup.exp $domain

and would like to pass $domain from shell script to psmSetup.exp script, you can add the following line to the Expect script:

set argDomain [lindex $argv 0]

To reference the Expect variable (i.e., argDomain), you use the "$" prefix as below:

expect "Identity domain: "
send "$argDomain\r"

The above psm dialog depicts the interaction between a sender (i.e., the end user) and a receiver (i.e., psm):
"Identity domain: " is the prompt you "expect" from psm; you then supply the expected response (i.e., "$argDomain\r") using "send".
Read [7] for more explanation.

How to Escape Special Characters?


In the dialog below,
Region [us]: http://anycloudserver.example.com:7103
psm prompts the user for a response:

Region [us]: 

which includes the special characters "[" and "]". To escape special characters in Expect, you can use a backslash. However, to protect the backslash itself from substitution, you actually need "\\\" in front of both "[" and "]":[9-12]
expect "Region \\\[us\\\]: "
send "http://anycloudserver.example.com:7103\r"

References

  1. Expect User Command
  2. Tcl
  3. How to pass variables from shell script to expect script?
  4. How to write a script that accepts input from a file or from stdin?
  5. Using Expect Scripts
  6. Debugging Expect Programs
  7. Using Expect Scripts to Automate Tasks
  8. How to escape unusual/uniq characters from expect scripts?
  9. Passing '\' In Username To Expect
  10. Problem in expect script with password involving trailing backslash
  11. How to send escape characters through Expect
  12. How to escape unusual/uniq characters from expect scripts?
  13. Oracle Application Container Cloud Service
  14. Introduction to the Oracle VM Command Line Interface (CLI)
  15. All Cloud-related articles on Xml and More
  16. Tcl Commands (Tcl 8.4)
  17. New Regular Expression Features in Tcl 8.1
  18. Understanding Login Authentication

Saturday, November 2, 2013

How to Create Load Testing Scripts Using OpenScript

This article is part of the Oracle Application Testing Suite (OATS)[1] series published on Xml and More.

In this article, we will show:
  • How to create load testing scripts using OpenScript[2]

Introduction to OpenScript


Oracle Application Testing Suite (OATS) comprises several tightly integrated products.[1] The script designer, OpenScript, runs only on Windows, but all the runtime components are available for both Linux and Windows. OpenScript is a scripting platform for creating automated test scripts in Java.

You can use OpenScript to create scripts for different types of testing. For example, OATS supports
  • Functional Testing
  • Load Testing
In this article, we will show you how to create load testing scripts in OpenScript.

The Platform


The scripting platform is based upon the Eclipse open-source development environment. The initial OpenScript product provides access to a limited subset of the Eclipse development environment.

The workbench is the base layer of software and code that provides the foundation on which the OpenScript modules and Application Programming Interfaces (APIs) operate. Each workbench window contains one or more perspectives. The OpenScript Workbench provides the following perspectives:
  • Tester perspective
  • Developer perspective
  • Reset perspective
Workspaces are created in Oracle OpenScript. Workspaces store project-related script files and Results Log files. You can use them to organize your various testing projects. Three levels of management are provided:
  • Scripts (lowest)
  • Folders
  • Repositories (highest)
You can download OATS from [3]. The version used in this demonstration is
Version: 12.3.0.1 Build 376

Cheat Sheet


As with any recording task, you need to rehearse and make sure all glitches are resolved before the final recording. Once you have decided on the click path, prepare a cheat sheet like the one below:

[1] Bring_up_FUSE_URL
[2] Login_SALESREPUSER00001_Welcome1
[3] Click_the_Opportunities_Card
[4] Select_Quarter_2_2013
[5] Drilldown_on_Pinnacle_Server
[6] Click_on_Sales_Account_Picker
[7] Search_for_CUSTOMER_101328336
[8] Click_Cancel
[9] Click_Add_Revenue_Item
[10] Select_Type_Item
[11] Click_Product_LOV_and_Search
[12] Search_for_Elite_Pro_DG_452
[13] Select_Product_and_click_Ok
[14] Click_Cancel
[15] Logout

The numbering of the steps is for human readers and helps with the recording. Each row in the list corresponds to a click in your click path and will become the title of a step group in the OpenScript recording.

OpenScript Preferences


Before recording, there are some preferences you will want to set. To set them, click View and then OpenScript Preferences. For example, we would like to control the grouping, naming, and numbering of step groups ourselves (see "Cheat Sheet"). So, set your "ADF Load" preferences as below:


Creating a New Project


In this demonstration, we will create a script for load testing (File > New...). Our web application is CRM FUSE, so we have selected the "Oracle Fusion/ADF" wizard from the New Project dialog (see above).
  • Oracle Fusion/ADF
    • This option lets you create a new script for load testing of Oracle Application Development Framework (ADF)-based applications and other applications that utilize HTTP and ADF protocols at the protocol level.

At the next step, you are asked to provide a Script Name. We set the options as follows:
  • Create script as a Function Library (unchecked)
  • Script Name: FUSE_Saleopty_oct07_wrk
Finally, click Finish to create the new script. The resulting script will contain the Initialize, Run, and Finish nodes. The Run node will contain the recorded HTTP-protocol navigations, based upon the defined Step Group preferences, along with the ADF-protocol actions performed during recording. You can edit the script tree or the Java code to customize the script.[4]

In the following sections, we will demonstrate how to create:
  • First step group
  • Remaining step groups

Creating First Step Group


Creating the first step group is a bit different from the rest, so we describe it separately. Note that we have NOT clicked the record button yet.

Before clicking the record button (i.e., the red circle), create the first step group as shown below:
Open your notepad and copy the first row into the title field as shown below. Then click OK.

Note that we have chosen "No delay" for the first step; for the other steps, we will specify "Delay 44 secs."

Start Recording


Now click the record button. Your chosen browser (in our demonstration, Firefox) will be brought up. Copy your URL:
http://www.mycompany.com:9006/customer/faces/CrmFusionHome
into the address field and hit Enter. This finishes the recording of the first step group.

Next, repeat the following subtasks for the remaining groups until you are finished:
  1. Create a new step group in OpenScript
    • Right-click the previous Step Group to bring up the context menu and select New > Step Group
  2. Copy the next row of the click path from Notepad
  3. Click the next step in your browser


Finally, don't forget to stop the recorder.

Exporting Script


If your runtime environment is Linux, you need to export the script created in OpenScript as follows:
File > Export Script...

For example, a new zip file was created in our default repository:
D:\OracleATS\OFT\FUSE_Saleopty_oct07_wrk.zip
You can then copy it to your Linux box:
scp FUSE_Saleopty_Server1_wrk.zip aime1@mylinuxserver:/scratch/aime1/work

References

  1. Oracle Application Testing Suite
  2. OpenScript for Load Testing Script Troubleshooting (Tutorial)
    • Version: 12.3.0.1 Build 376 was used in this article.
    • This version requires Firefox 10.0 ESR (Windows download).
  3. Oracle Application Testing Suite Downloads
  4. Oracle Application Testing Suite 12.x: Oracle Load Testing Overview
  5. OATS: Tie All Processes Together — from OpenScript to Scenario (Xml and More)

Sunday, September 9, 2012

When to use -Xbootclasspath on HotSpot?

As Ted Neward described in his article,[1] you can use -Xbootclasspath to tweak the Java Runtime API. For example, suppose we are evaluating a new ArrayList implementation and would like to benchmark its performance. So, we specify
  • -Xbootclasspath/p:/data/patches/NewArrayList.jar
to load the new ArrayList class from someplace other than the rt.jar file in the jre/lib directory.

-Xbootclasspath


At start-up, the JVM loads its internal classes and the java.* packages from the default boot class path. However, the Java Runtime Environment is very configurable. For example, you can use -Xbootclasspath to append, substitute, or prepend a list of directories to the default boot class path using the following options:

  • -Xbootclasspath:bootclasspath 
    • Specify a semicolon-separated list of directories, JAR archives, and ZIP archives to search for boot class files. These are used in place of the boot class files included in the Java 2 SDK.
    • Note: Applications that use this option for the purpose of overriding a class in rt.jar should not be deployed as doing so would contravene the Java 2 Runtime Environment binary code license. 
  • -Xbootclasspath/a:path 
    • Specify a semicolon-separated path of directories, JAR archives, and ZIP archives to append to the default bootstrap class path. 
  • -Xbootclasspath/p:path 
    • Specify a semicolon-separated path of directories, JAR archives, and ZIP archives to prepend in front of the default bootstrap class path.
    • Note: Applications that use this option for the purpose of overriding a class in rt.jar should not be deployed as doing so would contravene the Java 2 Runtime Environment binary code license.

How to Verify


To verify the effect of -Xbootclasspath, you can use the following option:
  • -verbose:class
Using the above example, you can find the following output in WebLogic Server's log file [see Note 1]:

[Opened /data/patches/NewArrayList.jar]
[Opened /data/JVMs/nmt_test/jre/lib/alt-rt.jar]
[Opened /data/JVMs/nmt_test/jre/lib/rt.jar]
[Loaded java.lang.Object from /data/JVMs/nmt_test/jre/lib/rt.jar]
[Loaded java.io.Serializable from /data/JVMs/nmt_test/jre/lib/rt.jar] ...
[Loaded java.lang.NoSuchMethodError from /data/JVMs/nmt_test/jre/lib/rt.jar]
[Loaded java.util.ArrayList from /data/patches/NewArrayList.jar]
[Loaded java.util.Collections from /data/JVMs/nmt_test/jre/lib/rt.jar]

From the above highlighted lines, you can see that java.util.ArrayList was indeed loaded from the new jar file (i.e., NewArrayList.jar).
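That check can also be scripted. The snippet below greps a canned copy of two of the log lines above (in practice you would grep the server log itself) and prints the jar that ArrayList came from:

```shell
#!/bin/sh
# Two -verbose:class lines from the log excerpt above, canned for illustration
log='[Loaded java.util.ArrayList from /data/patches/NewArrayList.jar]
[Loaded java.util.Collections from /data/JVMs/nmt_test/jre/lib/rt.jar]'

# Field 4 of a "[Loaded ... from ...]" line is the source path;
# trim the trailing "]"
src=$(printf '%s\n' "$log" | grep 'java.util.ArrayList' \
  | awk '{print $4}' | tr -d ']')
echo "$src"
```

If the printed path is still rt.jar, the prepended jar was not picked up (e.g., a typo in the -Xbootclasspath/p: path).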

In summary, to diagnose any class-loading issue, you can use -verbose:class. There are other useful options that enable verbose output:
  • -verbose[:class|gc|jni]

Note


  1. We have started WebLogic Server with the following line:
    • bin/startManagedWebLogic.sh CRMDemo_server1 http://myserver:7001 > logs/CRMDemo_server1.log 2>&1 < /dev/null &

    • In other words, we have redirected stdout and stderr from the WLS window to CRMDemo_server1.log.

References

  1. Using the BootClasspath--Tweaking the Java Runtime API
  2. WebLogic's Classloading Framework
  3. Oracle® JRockit Command-Line Reference Release R28
    • -Xbootclasspath directories and zips/jars separated by ; (Windows) or : (Linux and Solaris)
  4. java - the Java application launcher
