Thursday, December 21, 2017

JMeter—Use Extractors (Post-Processor Elements) for Correlation



What Is Correlation


Correlation is the process of capturing and storing a dynamic response value (e.g., a session ID) from the server and passing it on to subsequent requests. A response is considered dynamic when it can return different data for each iterating request, occasionally affecting successive requests. Correlation is a critical part of performance load test scripting, because if it isn't handled correctly, your script will become useless.

Correlation is a 2-step process:
  1. Parse and extract the dynamic value from the response of a step using a Post-Processor element such as the Regular Expression Extractor, CSS/JQuery Extractor, or JSON Extractor
  2. Reference the extracted value in the request of a subsequent step
    • http://.../.../...?sessionID=${sessionId}#....
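As an illustration outside JMeter, the two steps can be prototyped in Python (the response body, regular expression, and URL below are hypothetical):

```python
import re

# hypothetical first response containing a dynamic session ID
response = '<input type="hidden" name="sessionID" value="A1B2C3D4"/>'

# Step 1: parse and extract the dynamic value (what a Post-Processor does)
session_id = re.search(r'name="sessionID" value="(\w+)"', response).group(1)

# Step 2: reference the extracted value in the next request,
# like ${sessionId} in a JMeter request path
next_url = "http://example.com/app/page?sessionID=" + session_id
print(next_url)
```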


How to Use Regular Expression Extractor


Watching the video referenced in [2], you can learn how to create Regular Expression Extractors in JMeter in the following steps:
  1. Create a Test Plan in which you want to do dynamic referencing in JMeter
  2. Add a Regular Expression Extractor to the step whose response value(s) need to be extracted
    • You can use RegExr—an online tool—to learn, build, and test Regular Expressions
  3. Reference the extracted value (by its Reference Name) in subsequent step(s)
  4. Run and validate it
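To get a feel for how the extractor's regular expression and Match No. field behave, here is a small Python sketch (the response body and regex are made up):

```python
import re

# hypothetical response body with two occurrences of a dynamic token
response = 'token=abc123; retry with token=def456'

matches = re.findall(r'token=(\w+)', response)

# JMeter's "Match No." field: 1 = first match, 2 = second match,
# 0 = a random match, -1 = all matches (stored as token_1, token_2, ...)
print(matches[0])  # first match, like Match No. 1
print(matches)     # all matches, like Match No. -1
```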


How to Use CSS/JQuery Extractor





Watching the video referenced in [8], you can learn how to create CSS/JQuery Extractors in JMeter in similar steps:
  1. Create a Test Plan in which you want to do dynamic referencing in JMeter
  2. Add a CSS/JQuery Extractor[9] to the step whose response value(s) need to be extracted
    • You can find a detailed explanation of CSS syntax here. jQuery's selector engine uses most of the same syntax as CSS, with some exceptions. To select an arbitrary locator, you can set the Match No. field to '0', which returns a random value from all found results.
    • It is also worth mentioning that there are several convenient browser plugins for testing CSS locators right in your browser. For Firefox, you can use the 'Firebug' plugin, while for Chrome 'XPath Helper' is the most convenient tool.
  3. Reference the extracted value (by its Reference Name) in subsequent step(s)
  4. Run and validate it
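As a rough stand-in for what the CSS/JQuery Extractor does, the following Python sketch (standard library only; the HTML snippet is hypothetical) collects the href attribute of every <a> element, roughly what selector "a" with attribute "href" would return:

```python
from html.parser import HTMLParser

class HrefExtractor(HTMLParser):
    """Collect the href attribute of every <a> element."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            if "href" in attrs:
                self.hrefs.append(attrs["href"])

parser = HrefExtractor()
parser.feed('<html><body><a href="/login">Login</a> <a href="/logout">Logout</a></body></html>')
print(parser.hrefs)  # ['/login', '/logout']

# Match No. 1 would return '/login'; Match No. 0 a random one of the two
```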


How to Use JSON Extractor


Read the companion articles on this subject in references [11] and [12].

References

  1. Advanced Load Testing Scenarios with JMeter: Part 1 - Correlations
  2. JMeter Beginner Tutorial 19 - Correlation (with Regular Expression Extractor)
  3. Using RegEx (Regular Expression Extractor) with JMeter
  4. RegExr—an online tool (good)
  5. JMeter Listeners - Part 1: Listeners with Basic Displays
  6. Understand and Analyze Summary Report in Jmeter
  7. How to Automate Auth Token using JMETER
  8. How to Use the CSS/JQuery Extractor in JMeter (BlazeMeter)
  9. How to Use the CSS/jQuery Extractor in JMeter  (DZone)
  10. JMeter: How to Turn Off Captive Portal from the Recording Using Firefox (Xml and More)
  11. JMeter―Select on Multiple Criteria with JSONPath  (Xml and More)
  12. JMeter: How to Verify JSON Response?  (Xml and More)

Sunday, October 15, 2017

Nginx—Knowing the Basics

Nginx is a lightweight, high-performance web server designed to deliver large amounts of static content quickly with efficient use of system resources. Nginx's strong point is its ability to efficiently serve static content, like plain HTML and media files. Some consider it a less-than-ideal server for dynamic content.[1]


Concepts of Nginx

  • Nginx Process
    • Nginx has one master process and several worker processes.
      • Master process (1)
        • To read and evaluate configuration, and maintain worker processes
      • worker processes (N)
        • To do actual processing of requests
        • Each worker can handle thousands of concurrent connections. It does this asynchronously with one thread, rather than using multi-threaded programming.
  • Content Handling
    • Static Content
      • Nginx’s strong point is its ability to efficiently serve static content, like plain HTML and media files. 
    • Dynamic Content
      • Rather than using the embedded interpreter approach, nginx hands off dynamic content to CGI, FastCGI, or even other web servers like Apache, which is then passed back to nginx for delivery to the client.
  • Request distribution
    • Unlike Apache, which uses a threaded or process-oriented approach to handle requests, nginx employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes, which provides more predictable performance under load.

Configuration 


The way nginx and its modules work is specified in the configuration file.
  • nginx.conf
    • By default, the main configuration file is named nginx.conf and placed in the directory:
      • /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx
    • Can be also configured by a compile-time option:
      • --conf-path=/u01/data/config/nginx/nginx.conf
  • Need to reload configuration after changes
    • Changes made in the configuration file will not be applied until the command to reload configuration is sent to nginx or it is restarted. 
    • To reload configuration, execute:
      • nginx -s reload
To learn how to start, stop, and reload configuration in more details, read here.


Configuration—Syntax and Semantics 


The syntax and semantics of nginx's configuration files are described below:
  • Comments
    • All lines beginning with a pound sign or hash (#) are comments
  • Directives 
    • nginx consists of modules which are controlled by directives specified in the configuration file.
      • Directives describe the basic behavior of the web server
    • Directives are divided into 
      • Simple directives 
        • All statements end with a semi-colon (;)
      • Block directives
        • A block directive has the same structure as a simple directive, but ends with a set of additional instructions enclosed in braces ({ })
        • Examples
          • http{} block — Universal Configuration
          • server{} block — Virtual Domains Configuration
            • Configures multiple servers virtually on different ports or with different server names
              • The server_name directive, which is located in the server block, lets the administrator provide name-based virtual hosting. 
          • upstream{} block
            • Defines a cluster that you can proxy requests to
              • Commonly used for defining either a web server cluster for load balancing, or an app server cluster for routing / load balancing
  • Include statement
    • Can be used to include directives from a separate file.
    • Anything written in the file is interpreted as if it was written inside the enclosing block. 
  • Context
    • If a block directive can have other directives inside braces, it is called a context (examples: events, http, server, and location).
    • Examples
      • Directives placed in the configuration file outside of any contexts are considered to be in the main context. 
        • The events and http directives reside in the main context, server in http, and location in server.
      • access_log directive sets the location of the nginx access log
        • which can be set in either
          • http block, or 
            • which can be used to log all access to a single file, or as a catch-all for access to virtual hosts that don’t define their own log files
          • server block
            • which sorts the output specific to each virtual domain into its own file
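As an illustrative sketch (server names, ports, and paths below are hypothetical), the contexts described above nest as follows in nginx.conf:

```nginx
# main context
events {
    worker_connections 1024;
}

http {
    # catch-all access log for virtual hosts that don't define their own
    access_log /var/log/nginx/access.log;

    # a cluster that requests can be proxied to (load balancing)
    upstream backend {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    # name-based virtual host
    server {
        listen 80;
        server_name www.example.com;

        # per-virtual-host access log
        access_log /var/log/nginx/example.access.log;

        location / {
            proxy_pass http://backend;
        }
    }
}
```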

Http Request Handling


When clients send http requests to nginx, it distributes requests based on the following information:
  • Request URL
  • Request Headers (e.g., the Host header)
and processes them in the following sequence:
  1. Receives data from clients
  2. Parses the http request
  3. Finds the virtual server
    • Configured by server {} blocks
  4. Finds the location
    • Specified by the location directive 
      • Directs requests to specific files and folders
  5. Runs phase handlers
  6. Generates the http response
  7. Filters response headers
  8. Filters the response body
  9. Sends out the output to the client
Note that nginx always fulfills a request using the most specific match.
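To illustrate the most-specific-match rule, here is a hypothetical server block (paths and URIs are made up):

```nginx
server {
    listen 80;

    # prefix match: the catch-all, used only when nothing longer matches
    location / {
        root /data/www;
    }

    # longer prefix: wins over "/" for any URI starting with /images/
    location /images/ {
        root /data;
    }

    # exact match: beats every prefix match for exactly /status
    location = /status {
        return 200 "OK";
    }
}
```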

Sunday, September 17, 2017

JMeter: How to Verify JSON Response?


JSON (JavaScript Object Notation) is a serialization format (in key-value pairs) for data structures. For REST APIs, it is widely used for data transfer from server to client. For example, a client sends an HTTP request with the header below:

  • Accept: application/json

The server can respond with the sample JSON data below:

  {
    "result": [],
    "ccapiInfo": {
      "createdOn": "2017-09-07T15:25:29.000Z",
      "cachedOn": "2017-09-07T15:21:49.513Z",
      "origin": "cache",
      "canonicalLink": "http://www.myServer.com:9885/computeConsoleApi/infra1626compute1/api/v1/instance/Compute-infra1626compute1/"
    }
  }

with a response header of:
Content-Type: application/json

In this article, we will discuss how to achieve two tasks in Apache JMeter: extracting values from a JSON response (using JSONPath) and verifying the response (using a JSR223 Assertion).


JSON Extractor / JSONPath


One of the advantages of XML is the availability of numerous tools to analyse, transform and selectively extract data out of XML documents. XPath is one of these powerful tools.  For JSON, we have a similar tool called JSONPath.

JSONPath is the XPath for JSON.  Since a JSON structure is normally anonymous, JSONPath assigns the symbol $ to the root object.

Below is a side-by-side comparison of the JSONPath syntax elements with its XPath counterparts.[9]

XPath  JSONPath          Description
/      $                 the root object/element
.      @                 the current object/element
/      . or []           child operator
..     n/a               parent operator
//     ..                recursive descent. JSONPath borrows this syntax from E4X.
*      *                 wildcard. All objects/elements regardless of their names.
@      n/a               attribute access. JSON structures don't have attributes.
[]     []                subscript operator. XPath uses it to iterate over element collections and for predicates. In JavaScript and JSON it is the native array operator.
|      [,]               union operator. In XPath it results in a combination of node sets. JSONPath allows alternate names or array indices as a set.
n/a    [start:end:step]  array slice operator borrowed from ES4.
[]     ?()               applies a filter (script) expression.
n/a    ()                script expression, using the underlying script engine.
()     n/a               grouping in XPath

JSONPath expressions can use the dot–notation

$.ccapiInfo.canonicalLink

or the bracket–notation

$['ccapiInfo']['canonicalLink']

for input paths. Internal or output paths will always be converted to the more general bracket–notation.  The evaluation result can be checked with a JSONPath Online Evaluator, using the input and JSONPath expression given in this article.
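Using the sample response from this article, the dot-notation path above can be prototyped in Python as plain nested-dictionary access (this mimics the JSONPath result without a JSONPath library):

```python
import json

doc = json.loads("""
{
  "result": [],
  "ccapiInfo": {
    "createdOn": "2017-09-07T15:25:29.000Z",
    "origin": "cache",
    "canonicalLink": "http://www.myServer.com:9885/computeConsoleApi/infra1626compute1/api/v1/instance/Compute-infra1626compute1/"
  }
}
""")

# $.ccapiInfo.canonicalLink (dot-notation) and
# $['ccapiInfo']['canonicalLink'] (bracket-notation) both resolve to:
link = doc["ccapiInfo"]["canonicalLink"]
print(link)
```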


JSR223 Assertion


Assertions in JMeter help verify that your server under test returns the expected results. JMeter includes quite a few assertion elements for validating the sampler's response, yet sometimes your validation decision might follow complex logic and can't be configured using the out-of-the-box JMeter assertions; scripting is then required.

If you need to write assertion code to extend baseline JMeter functionality, the JSR223 Assertion, in combination with the Groovy language, is a good choice performance-wise—especially when its compilation caching is enabled.



Groovy Script

// read values stored earlier (e.g., by a JSON Extractor and a CSV Data Set Config)
String jsonString = vars.get("myCanonicalLink");
String userNameString = vars.get("user_name");

log.info("The canonicalLink is " + jsonString);

// note: in Groovy, != compares values (it maps to !equals()), not references
if (jsonString != "http://myserver.com:9885/computeConsoleApi/" +
      userNameString + "/api/v1/instance/Compute-" + userNameString + "/")
{
    AssertionResult.setFailureMessage("The canonicalLink is wrong");
    AssertionResult.setFailure(true);
}


However, every test element, including assertions, added to the test plan will increase the total CPU and memory requirements.  So use assertions sparingly.

Friday, September 1, 2017

Linux: How to Set Up and Get Started with cron

When you need to run maintenance jobs routinely in Linux, cron comes in handy. cron is a job scheduler which will automatically perform tasks according to a set schedule. The schedule is called the crontab, which is also the name of the program used to edit that schedule.

cron — Daemon to execute scheduled commands
crontab — Schedule a command to run at a later time

In this article, we will show you how to set up and get started with cron on Oracle Linux Server 6.7.

Commands


The cron service (daemon) runs in the background and constantly checks the following file/directories:
  • /etc/crontab file
  • /etc/cron.*/ directories
  • /var/spool/cron/ directory
    • Each user can have their own crontab, and though these are files in /var/spool/ , they are not intended to be edited directly.
crontab is the program used to install, deinstall, or list the tables used to drive the cron daemon. For example, to display the current crontab, you can do:

# crontab -l

# HEADER: This file was autogenerated at Wed Jan 13 22:49:06 +0000 2016 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: cron.puppet.apply
48 * * * * /usr/local/pdit/bin/puppet-apply > /dev/null 2>&1
00 0 * * * /etc/cron.daily.random/at_daily_random.sh

Configuration Files:

You can control access to the crontab command by using two files in the /etc directory:[2]
  • cron.deny
  • cron.allow
These files permit only specified users to perform crontab command tasks such as creating, editing, displaying, or removing their own crontab files. Read [2] for more details.


Who can access the crontab command?

                            cron.allow exists       cron.allow does not exist
cron.deny exists            Only users listed in    All users except those
                            cron.allow              listed in cron.deny
cron.deny does not exist    Only users listed in    Only users with superuser
                            cron.allow              privilege
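The table above can be sketched as a small Python function (a simplification; real cron implementations differ in details such as the exact superuser fallback):

```python
def can_use_crontab(user, allow=None, deny=None):
    """Sketch of crontab access rules: if cron.allow exists, it wins;
    otherwise cron.deny lists the excluded users; if neither file
    exists, only the superuser may use crontab."""
    if allow is not None:          # cron.allow exists
        return user in allow
    if deny is not None:           # only cron.deny exists
        return user not in deny
    return user == "root"          # neither file exists

print(can_use_crontab("alice", allow={"alice"}))  # True
print(can_use_crontab("bob", deny={"bob"}))       # False
print(can_use_crontab("carol"))                   # False
print(can_use_crontab("root"))                    # True
```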


How to Edit Crontab Entries?


To edit crontab entries, use
crontab -e
By default this will edit the current logged-in user's crontab.

After changing the crontab file, you don't need to restart cron. Cron examines the modification time on all crontabs and reloads those which have changed, so it need not be restarted whenever a crontab file is modified.

[ramesh@user1 ~] $ crontab -e
# clean up Monitoring Tables weekly
0 0 * * 5 /scratch/user1/scripts/db/cleanMonitor.sh > /dev/null 2>&1 
~
"/tmp/crontab.XXXXSERJLH" 2L, 112C

[Note: This will open the crontab file in the Vim editor for editing.
Please note that crontab edits a temporary file, /tmp/crontab.XX... ]
When you save the above temporary file with :wq, it will save the crontab and display the following message indicating the crontab is successfully modified.

~
"crontab.XXXXSERJLH" 2L, 112C written
crontab: installing new crontab
To edit the crontab entries of other Linux users, log in as root and use:
crontab -u {username} -e

Syntax of crontab (Field Description)


The syntax is:

1 2 3 4 5 /path/to/command arg1 arg2
OR

1 2 3 4 5 /root/backup.sh

Where,
1: Minute (0-59)
2: Hour (0-23)
3: Day of the month (1-31)
4: Month (1-12 [12 == December])
5: Day of the week (0-7 [7 or 0 == Sunday])
/path/to/command – Script or command name to schedule
cron also provides a number of operators that allow you to specify more complex repetition intervals. You can read [9] for more details.
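For illustration (all script paths below are hypothetical), here are a few crontab entries using those operators:

```crontab
*/15 * * * *  /path/to/every-15-min.sh     # step operator: minutes 0,15,30,45
0 2 * * 1-5   /path/to/nightly.sh          # range: 02:00 on Monday through Friday
30 6 1,15 * * /path/to/twice-monthly.sh    # list: 06:30 on the 1st and 15th
```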


Triggering JFR from Cron job

The crontab entry below triggers a 900-second JFR recording. Note that */45 in the minute field matches minutes 0 and 45 of each hour, so consecutive runs are actually 45 and then 15 minutes apart.
*/45 * * * * jfr.sh

jfr.sh:

#!/bin/bash
# jfr.sh - start a 900-second Java Flight Recording of the target WebLogic server

BACKUP_DIR="/opt/app/oracle/backup"
SERVER="OSB"
NODE="MS1"
LOG_DIR="${BACKUP_DIR}/${SERVER}/${NODE}/JFRs"
LOG_FILE="${LOG_DIR}/PRODOSB_${NODE}_`date '+%Y%m%d%H%M%S'`.jfr"
JDK_HOME="/opt/app/oracle/jdk"

# find the PID of the managed server (the java process with -Dweblogic options)
PID=`ps -ef | grep "${SERVER}_${NODE}" | grep 'Dweblogic' | grep -v grep | awk '{print $2}'`

if [ ! -z "${PID}" ]; then
  ${JDK_HOME}/bin/jcmd ${PID} JFR.start duration=900s filename=${LOG_FILE}
fi

Auditing


Auditing collects data at the kernel level that you can analyze to identify unauthorized activity. The entries in the audit rules file, /etc/audit/audit.rules, determine which events are audited. In the below example, we have set up a rule to audit crontab activities.
# cat /etc/audit/audit.rules
# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.

-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged

Each rule is a command-line option that is passed to the auditctl command. You should typically configure this file to match your site's security policy.

Logging


Rsyslogd is a system utility providing support for message logging. It is configured via the rsyslog.conf file, typically found in /etc. For example, the statement below directs all cron messages to the file /var/log/cron.

rsyslog.conf

# Log cron stuff
cron.* /var/log/cron

How to Debug?


If you suspect that your cron job was not executed correctly, here are the steps that you could take to debug:
  • Check the local user's email, which will contain the output of cron jobs
    • Read [10] to find out where the email is and how to open and read it
  • Add the following at the top of your bash script:
    • #!/bin/bash -x 
    • Next time when your script runs, it will show all the commands it executes
  • Check if there are mail messages in /var/spool/mail/root that indicate that mail to your user isn't getting delivered
    • Consider restarting sendmail after fixing your issues by doing:[14]
      • /etc/init.d/sendmail restart

References

  1. HowTo: Add Jobs To cron Under Linux or UNIX?
  2. Controlling Access to the crontab Command
  3. Configuring and Using Auditing
  4. Linux Crontab: 15 Awesome Cron Job Examples
  5. /usr/local : Local hierarchy
  6. How to schedule a biweekly cronjob?
  7. Configuring and auditing Linux systems with Audit daemon
  8. auditctl - Unix, Linux Command
  9. Schedule Tasks with Cron
  10. What is the “You have new mail” message in Linux/UNIX?
  11. How to check if a cron job ran
  12. 25 simple examples of Linux find command
  13. Stop Cron Daemon from Sending Email for Each Job
  14. How to stop and restart sendmail daemon

Monday, August 28, 2017

AWR—"log file sync" Wait Event Analysis

Some DB wait events in Oracle can be caused by poor storage performance. In this article, I will discuss the log file sync wait event in Oracle AWR reports, which in many cases is caused by poor storage performance.

Top 10 Foreground Events by Total Wait Time


Event                          Waits       Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
DB CPU                                     2590.9                               96.6
SQL*Net break/reset to client  5,864,510   546.4                  0             20.4       Application
log file sync                  19,575      25.5                   1             1.0        Commit
SQL*Net message to client      15,493,939  10                     0             .4         Network
library cache: mutex X         8,042       .6                     0             .0         Concurrency
db file sequential read        93          .5                     5             .0         User I/O
direct path read               158         .3                     2             .0         User I/O
Disk file operations I/O       89          .1                     1             .0         User I/O
SQL*Net more data to client    945         .1                     0             .0         Network
cursor: pin S                  39          .1                     1             .0         Concurrency

Log File Sync


An Oracle user session issuing a commit command must wait until the LGWR (Log Writer) process writes the log entries associated with the user transaction to the log file on the disk. Oracle must commit the transaction’s entries to disk (because it is a persistent layer) before acknowledging the transaction commit. The log file sync wait event represents the time the session is waiting for the log buffers to be written to disk.

Sometimes you can find the "log file sync" wait event in the "Top 10 Foreground Events by Total Wait Time" list of an AWR report.

What does it mean if log file sync is shown on the list?[1]
  • Is this noticeably slowing down all commits?
    • Disk throughput is only one aspect that affects LGWR. It consumes CPU while executing too. 
      •  If you've maxed out your CPU capacity processing "business transactions", then it will be starved for resource. This can lead to you seeing a lot of "log file sync" waits. 
      • If your datafiles are on the same disks as the redo logs, then DBWR will also be contending for the same disk. 
  • Is it just the top wait event on your system? 
    • Remember that there's always something that has to be "top". 

How to Reduce "log file sync" Wait Time


When a user commits or rolls back data, the LGWR flushes the session's redo from the log buffer to the redo logs. The log file sync process must wait for this to successfully complete.

If log file sync is a problem (e.g., avg wait > 2 ms), try the following solutions based on its causes:[1-4]
  • Slow disk I/O
    • Reduce contention on existing disks
    • Put log files on faster disks and/or increase the log_buffer size above 10 megabytes
    • Put alternate redo logs on different disks to minimize the effect of archive processes (log files switches)
  • LGWR is not getting enough CPU
    • If the vmstat runqueue column is greater than cpu_count, then the instance is CPU-bound and this can manifest itself in high log file sync waits. The solution is to 
      • Tune SQL (to reduce CPU overhead)
      • Add processors
      • 'Nice' the dispatching priority of the LGWR process
  • High COMMIT activity
    • Review application design, use NOLOGGING operations where appropriate, and reduce the frequency of COMMIT statements in the application
  • LGWR is paged out
    • Check the server for RAM swapping, and add RAM if the instance processes are getting paged-out.

References

  1. Log file sync wait (Ask TOM)
  2. Log file sync wait
  3. Oracle Log File Sync Wait Event
  4. Expert Consolidation in Oracle Database 12c
  5. AWR Wait Events: Free Buffer Waits vs. Buffer Busy Waits (Xml and More)

Saturday, August 19, 2017

JMeter—How to Reorder and Regroup JMeter Elements

The JMeter test tree contains elements that are both hierarchical and ordered. Some elements in the test trees are strictly hierarchical (Listeners, Config Elements, Post-Processors, Pre-Processors, Assertions, Timers), and some are primarily ordered (controllers, samplers).[1,7]

When you add a new JMeter element to a parent element (e.g., Test Plan, Thread Group, etc.), it is added to the end of the child list.  So, sometimes you need to reorder elements on the list or regroup elements under different parent elements.  For this purpose, the following JMeter GUI supports come in handy:
  • Drag and drop
  • Cut, copy, and paste
 In this article, we will demonstrate these editing capabilities of Apache JMeter.

Cut, Copy and Paste (Case #1)


In the following Test Plan, we have three different Thread Groups under Test Plan.  At the beginning, all child elements were listed under jp@gc - Stepping Thread Group.


To move these child elements into Thread Group, I can click on CSV Data Set Config, hold the Shift key, select all child elements using the Down Arrow, and press Ctrl-X to cut them.


To paste them into Thread Group, I click on Thread Group, and Ctrl-V to paste them.


In this scenario, it will be easy for you to experiment with three different Thread Group plugins and learn their different capabilities.

Cut, Copy, and Paste (Case #2)


Sometimes you want to copy JMeter elements from one .jmx to another .jmx.  In this case, you can launch two JMeter GUIs following the instructions here.  For example, you can launch jmeter.bat twice to start two different JMeter sessions in Windows.


After you open two Test Plans in two different GUI's, you can then copy-and-paste element from one JMeter to another similar to the previous example.



Drag and Drop


For elements stored in the test tree, you can also drag-and-drop them from one position to another or change their levels in the tree hierarchy.  Note that the level of an element in the test tree determines the scope of its effect.  Read [1,6,7] for more information.

To drag a child element from one position to another position, for example,  I can click on HTTP Cache Manager,


drag it to a new position (i.e., before HTTP Cookie Manager), and drop it.


Note that the ordering of Cookie and Cache Managers in this example doesn't matter.  Read [6] and [7] for the details of execution order and scoping rules in JMeter.

References

Monday, August 14, 2017

JMeter―Using the Transaction Controller

Apache JMeter is a performance testing tool written in Java that supports many operating systems. Controllers are a main part of JMeter; they are used to control the execution of JMeter scripts for load testing.  Logic Controllers (e.g., the Transaction Controller) also provide runtime scopes for JMeter test elements (read [9] for details).

For example, you can use Transaction Controller to get the total execution time of a transaction (i.e., an end-to-end scenario) which might include the following transaction steps:
Login → Compute Details → Billing Metrics → Back to Dashboard → Logout

Watch this video on YouTube for more details of Transaction Controller.

Controllers


JMeter has two types of Controllers:[3]

  • Samplers
    • Can be used to specify which types of requests are sent to a server
    • You may add Configuration Elements to these Samplers to customize your server requests.
    • Examples of Samplers include, but are not limited to:
      • HTTP Request
      • FTP Request
      • JDBC Request
      • Java Request
      • SOAP/XML-RPC Request
      • WebService (SOAP) Request
      • LDAP Request
      • LDAP Extended Request
      • Access Log Sampler
      • BeanShell Sampler
  • Logic Controllers
    • Can be used to customize the logic that JMeter uses to decide when to send requests
      • For these requests, JMeter may randomly select (using Random Controller), repeat (using Loop Controller), interleave (using Interleave Controller), etc.
      • The child elements of a Logic Controller may comprise Samplers and/or other Logic Controllers
    • Examples of Logic Controllers include, but are not limited to:
      • Transaction Controller
      • Simple Controller
      • Loop Controller
      • Interleave Controller
      • Random Controller
      • Random Order Controller
      • Throughput Controller
      • Recording Controller
In this article, we will focus mainly on the Transaction Controller, which may be used to
  • Generate a “virtual” sample to measure the aggregate time of all nested samples




Option: "Generate Parent Sample"


When "Generate parent sample" in the Transaction Controller is
  • Checked 
    • Only the Transaction Controller's virtual sample will be generated; the Transaction Controller's nested samples will not be displayed in the report
  • Unchecked 
    • An additional parent sample (i.e., the Transaction Controller's virtual sample) will be displayed in the report after the nested samples

Option: "Include Duration of Timer and Pre-Post Processors in Generated Sample"


Each Sampler can be preceded by one or more Pre-Processor elements and followed by Post-Processor elements. The Transaction Controller also has an option to include or exclude the execution time of timers and pre/post processors in the generated virtual sample.

When the check box "Include duration of timer and pre-post processors in generated sample" is
  • Checked
    • The aggregate time includes all processing within the controller scope, not just the nested samples
  • Unchecked
    • The aggregate time includes just the nested samples and excludes all pre/post processing within the controller scope
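As a toy illustration (all durations below are made up), the effect of this check box on the generated parent sample can be sketched as:

```python
# hypothetical elapsed times (ms) inside one Transaction Controller scope
nested_samplers = [120, 250, 90]       # nested sampler elapsed times
timers_and_processors = [500, 50]      # think-time timer + post-processor overhead

unchecked = sum(nested_samplers)                   # samples only
checked = unchecked + sum(timers_and_processors)   # everything in scope

print(unchecked)  # 460
print(checked)    # 1010
```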