Sunday, October 15, 2017

Nginx—Knowing the Basics

Nginx is a lightweight, high-performance web server designed to deliver large amounts of static content quickly with efficient use of system resources. Its strong point is its ability to efficiently serve static content, like plain HTML and media files; some consider it a less-than-ideal server for dynamic content.[1]

Concepts of Nginx

  • Nginx Process
    • Nginx has one master process and several worker processes.
      • Master process (1)
        • To read and evaluate configuration, and maintain worker processes
      • worker processes (N)
        • To do actual processing of requests
        • Each worker can handle thousands of concurrent connections. It does this asynchronously with one thread, rather than using multi-threaded programming.
  • Content Handling
    • Static Content
      • Nginx’s strong point is its ability to efficiently serve static content, like plain HTML and media files. 
    • Dynamic Content
      • Rather than using the embedded-interpreter approach, nginx hands off dynamic content to CGI, FastCGI, or even other web servers like Apache; the result is then passed back to nginx for delivery to the client.
  • Request distribution
    • Unlike Apache, which uses a threaded or process-oriented approach to handle requests, nginx employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes, which provides more predictable performance under load.
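The process model above is controlled from the configuration file. A minimal sketch, with illustrative values rather than recommendations:

```nginx
# nginx.conf (fragment) -- values below are illustrative
worker_processes  4;           # number of worker processes (often set to auto)

events {
    worker_connections  1024;  # max concurrent connections per worker
}
```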


The way nginx and its modules work is specified in the configuration file.
  • By default, the main configuration file is named nginx.conf and is placed in one of the following directories:
    • /usr/local/nginx/conf
    • /etc/nginx, or
    • /usr/local/etc/nginx
  • Need to reload configuration after changes
    • Changes made in the configuration file will not be applied until the command to reload configuration is sent to nginx or it is restarted. 
    • To reload configuration, execute:
      • nginx -s reload
To learn how to start, stop, and reload configuration in more detail, read here.
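Besides reload, nginx accepts a few other signals via the -s option (these are the standard control signals from the nginx documentation):

```shell
nginx -s reload   # reload the configuration file
nginx -s reopen   # reopen log files
nginx -s quit     # graceful shutdown
nginx -s stop     # fast shutdown
```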

Syntax and Semantics 

The syntax and semantics of nginx's configuration files are described below:
  • Comments
    • A pound sign or hash (#) starts a comment that runs to the end of the line
  • Directives 
    • nginx consists of modules which are controlled by directives specified in the configuration file.
      • Directives describe the basic behavior of the web server
    • Directives are divided into 
      • Simple directives 
        • All statements end with a semi-colon (;)
      • Block directives
        • A block directive has the same structure as a simple directive, but instead of ending with a semicolon, it ends with a set of additional instructions surrounded by braces ({ })
        • Examples
          • http{} block — Universal Configuration
          • server{} block — Virtual Domains Configuration
            • Configures multiple servers virtually on different ports or with different server names
              • The server_name directive, which is located in the server block, lets the administrator provide name-based virtual hosting. 
          • upstream{} block
            • Defines a cluster that you can proxy requests to
              • Commonly used for defining either a web server cluster for load balancing, or an app server cluster for routing / load balancing
  • Include statement
    • Can be used to include directives from a separate file.
    • Anything written in the file is interpreted as if it was written inside the enclosing block. 
  • Context
    • If a block directive can have other directives inside braces, it is called a context (examples: events, http, server, and location).
    • Examples
      • Directives placed in the configuration file outside of any contexts are considered to be in the main context. 
        • The events and http directives reside in the main context, server in http, and location in server.
      • access_log directive sets the location of the nginx access log
        • which can be set in either
          • http block, or 
            • which can be used to log all access to a single file, or as a catch-all for access to virtual hosts that don’t define their own log files
          • server block
            • which sort the output specific to each virtual domain into its own file
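Putting these pieces together, below is a sketch of how the blocks and directives above nest (the server names, paths, and backend addresses are hypothetical):

```nginx
http {
    access_log  /var/log/nginx/access.log;       # catch-all log in the http context

    include     /etc/nginx/conf.d/*.conf;        # pull in directives from separate files

    upstream backend {                           # a cluster to proxy requests to
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {                                     # a virtual domain
        listen      80;
        server_name www.example.com;             # name-based virtual hosting
        access_log  /var/log/nginx/example.log;  # per-domain log in the server context

        location / {
            proxy_pass http://backend;           # route requests to the upstream cluster
        }
    }
}
```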

Http Request Handling

When clients send http requests to nginx, it distributes requests based on the following information:
  • Request URL
  • Request Headers (e.g., the Host header)
and processes them in the following sequence:
  1. Receives data from clients
  2. Parses the http request
  3. Finds the virtual server
    • Configured by server {} blocks
  4. Finds the location
    • Specified by the location directive 
      • Directs requests to specific files and folders
  5. Runs phase handlers
  6. Generates the http response
  7. Filters response headers
  8. Filters the response body
  9. Sends out the output to the client
Note that nginx always fulfills a request using the most specific match.
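As an illustration of step 4 and the "most specific match" rule, consider two prefix locations (the paths are hypothetical):

```nginx
server {
    location / {
        root /data/www;  # matches any request not matched by a more specific location
    }
    location /images/ {
        root /data;      # a request for /images/logo.png is served from /data/images/
    }
}
```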

Sunday, September 17, 2017

JMeter: How to Verify JSON Response?

JSON (JavaScript Object Notation) is a serialization format (in key-value pairs) for data structures.  For REST APIs, it is widely used for data transfer from server to client.  For example, a client sends an HTTP request with the following header:

  • Accept: application/json

The server can respond with JSON data like the following sample:

    {
      "result": [],
      "ccapiInfo": {
        "createdOn": "2017-09-07T15:25:29.000Z",
        "cachedOn": "2017-09-07T15:21:49.513Z",
        "origin": "cache",
        "canonicalLink": ""
      }
    }

with a response header of:
Content-Type: application/json

In this article, we will discuss how to achieve two tasks in Apache JMeter:
  1. Extracting data from a JSON response (using JSONPath)
  2. Verifying the extracted values (using a JSR223 Assertion)

JSON Extractor / JSONPath

One of the advantages of XML is the availability of numerous tools to analyse, transform and selectively extract data out of XML documents. XPath is one of these powerful tools.  For JSON, we have a similar tool called JSONPath.

JSONPath is the XPath for JSON.  Since a JSON structure is normally anonymous, JSONPath uses the symbol $ to denote the root object.

Below is a side-by-side comparison of the JSONPath syntax elements with its XPath counterparts.[9]

XPath   JSONPath            Description
/       $                   the root object/element
.       @                   the current object/element
/       . or []             child operator
..      n/a                 parent operator
//      ..                  recursive descent; JSONPath borrows this syntax from E4X
*       *                   wildcard: all objects/elements regardless of their names
@       n/a                 attribute access; JSON structures don't have attributes
[]      []                  subscript operator; XPath uses it to iterate over element collections and for predicates, while in JavaScript and JSON it is the native array operator
|       [,]                 union operator; in XPath it results in a combination of node sets, while JSONPath allows alternate names or array indices as a set
n/a     [start:end:step]    array slice operator borrowed from ES4
[]      ?()                 applies a filter (script) expression
n/a     ()                  script expression, using the underlying script engine
()      n/a                 grouping in XPath

JSONPath expressions can use the dot-notation, e.g.,

  $.ccapiInfo.canonicalLink

or the bracket-notation, e.g.,

  $['ccapiInfo']['canonicalLink']

for input paths. For the internal or output paths, they will always be converted to the more general bracket-notation.  The diagram below shows the evaluation result using a JSONPath Online Evaluator with the input and JSONPath expression as given in this article.
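As a sanity check of the two notations, the sketch below evaluates the equivalent lookup on this article's sample response with plain Python (no JSONPath library involved; the dict indexing simply mirrors what $.ccapiInfo.origin and $['ccapiInfo']['origin'] would select):

```python
import json

# Sample JSON response body from this article (canonicalLink is elided in the source).
body = '''
{
  "result": [],
  "ccapiInfo": {
    "createdOn": "2017-09-07T15:25:29.000Z",
    "cachedOn": "2017-09-07T15:21:49.513Z",
    "origin": "cache",
    "canonicalLink": ""
  }
}
'''

data = json.loads(body)

# JSONPath dot-notation:      $.ccapiInfo.origin
# JSONPath bracket-notation:  $['ccapiInfo']['origin']
# Both select the same node; in plain Python that lookup is:
origin = data["ccapiInfo"]["origin"]
print(origin)  # -> cache
```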

JSR223 Assertion

Assertions in JMeter help verify that your server under test returns the expected results. JMeter includes quite a few assertion elements for validating the sampler's response, yet sometimes your validation decision might follow complex logic that can't be configured using the out-of-the-box JMeter assertions; scripting is then required.

If you need to write scripted assertion code to extend baseline JMeter functionality, JSR223 in combination with the Groovy language is a good choice performance-wise, especially when its compilation caching is enabled.

Groovy Script

// vars, log, and AssertionResult are variables bound by JMeter's JSR223 Assertion.
String jsonString = vars.get("myCanonicalLink");
String userNameString = vars.get("user_name");
log.info("The canonicalLink is " + jsonString);

String expected = "" + userNameString + "/api/v1/instance/Compute-" + userNameString + "/";
// In Groovy, != compares string values (it maps to !equals()).
if (jsonString != expected) {
    AssertionResult.setFailure(true);  // mark the assertion as failed
    AssertionResult.setFailureMessage("The canonicalLink is wrong");
}

However, every test element added to the test plan, including assertions, will increase the total CPU and memory requirements.  So, use assertions sparingly.

Friday, September 1, 2017

Linux: How to Setup and Get Started with cron

When you need to run maintenance jobs routinely in Linux, cron comes in handy. cron is a job scheduler which will automatically perform tasks according to a set schedule. The schedule is called the crontab, which is also the name of the program used to edit that schedule.

  • cron: daemon to execute scheduled commands
  • crontab: schedules a command to run at a later time

In this article, we will show you how to setup and get started with cron in Oracle Linux Server 6.7.


The cron service (daemon) runs in the background and constantly checks the following file/directories:
  • /etc/crontab file
  • /etc/cron.*/ directories
  • /var/spool/cron/ directory
    • Each user can have their own crontab, and though these are files in /var/spool/ , they are not intended to be edited directly.
crontab is the program used to install, deinstall, or list the tables used to drive the cron daemon. For example, to display the current crontab, you can do:

# crontab -l

# HEADER: This file was autogenerated at Wed Jan 13 22:49:06 +0000 2016 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: cron.puppet.apply
48 * * * * /usr/local/pdit/bin/puppet-apply > /dev/null 2>&1
00 0 * * * /etc/cron.daily.random/

Configuration Files:

You can control access to the crontab command by using two files in the /etc directory:[2]
  • cron.deny
  • cron.allow
These files permit only specified users to perform crontab command tasks such as creating, editing, displaying, or removing their own crontab files. Read [2] for more details.

cron.allow          cron.deny           Who can access the crontab command?
Exists              (ignored)           Only users listed in cron.allow
Does not exist      Exists              All users except those listed in cron.deny
Does not exist      Does not exist      Only users with superuser privileges

How to Edit Crontab Entries?

To edit crontab entries, use
crontab -e
By default this will edit the current logged-in user's crontab.

After changing the crontab file, you don't need to restart cron: cron examines the modification time on all crontabs and reloads those which have changed.

[ramesh@user1 ~] $ crontab -e
# clean up Monitoring Tables weekly
0 0 * * 5 /scratch/user1/scripts/db/ > /dev/null 2>&1 
"/tmp/crontab.XXXXSERJLH" 2L, 112C

[Note: This will open the crontab file in the Vim editor for editing.
Note that cron created a temporary file /tmp/crontab.XX...]
When you save the above temporary file with :wq, it will save the crontab and display the following message indicating the crontab is successfully modified.

"crontab.XXXXSERJLH" 2L, 112C written
crontab: installing new crontab
To edit crontab entries of other Linux users, login to root and use:
crontab -u {username} -e

Syntax of crontab (Field Description)

The syntax is:

1 2 3 4 5 /path/to/command arg1 arg2

1 2 3 4 5 /root/

1: Minute (0-59)
2: Hours (0-23)
3: Day of the month (1-31)
4: Month (1-12 [12 == December])
5: Day of the week (0-7 [7 or 0 == Sunday])
/path/to/command – Script or command name to schedule
cron also provides a number of operators that allow you to specify more complex repetition intervals. You can read [9] for more details.
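A few illustrative entries using those operators (the script paths are hypothetical):

```text
*/15 * * * *  /usr/local/bin/check.sh    # every 15 minutes (step operator)
0 2 * * 1-5   /usr/local/bin/backup.sh   # 2:00 AM, Monday through Friday (range)
0 0,12 1 * *  /usr/local/bin/report.sh   # midnight and noon on the 1st of every month (list)
```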

Triggering JFR from Cron job

Below crontab entry will trigger JFR (Java Flight Recorder) every 45 minutes to record for a 900-second interval.
*/45 * * * * :

The script invoked by the entry runs along the following lines (LOG_DIR, NODE, SERVER, and JDK_HOME are assumed to be set earlier in the script):

LOG_FILE="${LOG_DIR}/PRODOSB_${NODE}_`date '+%Y%m%d%H%M%S'`.jfr"

PID=`ps -ef | grep ${SERVER}_${NODE} | grep 'Dweblogic' | grep -v grep | awk '{print $2}'`

if [ ! -z "${PID}" ]; then
  ${JDK_HOME}/bin/jcmd ${PID} JFR.start duration=900s filename=${LOG_FILE}
fi

Auditing collects data at the kernel level that you can analyze to identify unauthorized activity. The entries in the audit rules file, /etc/audit/audit.rules, determine which events are audited. In the example below, we have set up a rule to audit crontab activities.
# cat /etc/audit/audit.rules
# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.

-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged

Each rule is a command-line option that is passed to the auditctl command. You should typically configure this file to match your site's security policy.
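Once the rule is loaded, events recorded under the -k privileged key can be retrieved with ausearch (assuming the audit daemon is running):

```shell
# ausearch -k privileged --start today
```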


Rsyslogd is a system utility providing support for message logging. It is configured via the rsyslog.conf file, typically found in /etc. For example, the statement below directs all cron messages to the file /var/log/cron.


# Log cron stuff
cron.* /var/log/cron

Monday, August 28, 2017

AWR—"log file sync" Wait Event Analysis

Some DB wait events in Oracle can be caused by poor storage performance.  In this article, I will discuss one such event: the log file sync wait event in Oracle AWR reports, which in many cases is caused by poor storage performance.

Top 10 Foreground Events by Total Wait Time

Event                          Waits       Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
DB CPU                                     2590.9                               96.6
SQL*Net break/reset to client  5,864,510   546.4                  0             20.4       Application
log file sync                  19,575      25.5                   1              1.0       Commit
SQL*Net message to client      15,493,939  10                     0               .4       Network
library cache: mutex X         8,042       .6                     0               .0       Concurrency
db file sequential read        93          .5                     5               .0       User I/O
direct path read               158         .3                     2               .0       User I/O
Disk file operations I/O       89          .1                     1               .0       User I/O
SQL*Net more data to client    945         .1                     0               .0       Network
cursor: pin S                  39          .1                     1               .0       Concurrency

Log File Sync

An Oracle user session issuing a commit command must wait until the LGWR (Log Writer) process writes the log entries associated with the user transaction to the log file on the disk. Oracle must commit the transaction’s entries to disk (because it is a persistent layer) before acknowledging the transaction commit. The log file sync wait event represents the time the session is waiting for the log buffers to be written to disk.

Sometimes you can find the "log file sync" wait event appearing in the top list of an AWR report's "Top 10 Foreground Events by Total Wait Time" section.

What does it mean if log file sync is shown on the list?[1]
  • Is this noticeably slowing down all commits?
    • Disk throughput is only one aspect that affects LGWR. It consumes CPU while executing too. 
      •  If you've maxed out your CPU capacity processing "business transactions", then it will be starved for resource. This can lead to you seeing a lot of "log file sync" waits. 
      • If your datafiles are on the same disks as the redo logs, then DBWR will also be contending for the same disk. 
  • Is it just the top wait event on your system? 
    • Remember that there's always something that has to be "top". 

How to Reduce "log file sync" Wait Time

When a user commits or rolls back data, the LGWR flushes the session's redo from the log buffer to the redo logs. The log file sync process must wait for this to successfully complete.

If log file sync is a problem (e.g., avg wait > 2 ms), try the following solutions based on its causes:[1-4]
  • Slow disk I/O
    • Reduce contention on existing disks
    • Put log files on faster disks and/or increase the log_buffer size above 10 megabytes
    • Put alternate redo logs on different disks to minimize the effect of archive processes (log files switches)
  • LGWR is not getting enough CPU
    • If the vmstat runqueue column is greater than cpu_count, then the instance is CPU-bound and this can manifest itself in high log file sync waits. The solution is to 
      • Tune SQL (to reduce CPU overhead)
      • Add processors
      • 'Nice' the dispatching priority of the LGWR process
  • High COMMIT activity
    • Review application design, use NOLOGGING operations where appropriate, and reduce the frequency of COMMIT statements in the application
  • LGWR is paged out
    • Check the server for RAM swapping, and add RAM if the instance processes are getting paged-out.


  1. Log file sync wait (Ask TOM)
  2. Log file sync wait
  3. Oracle Log File Sync Wait Event
  4. Expert Consolidation in Oracle Database 12c
  5. AWR Wait Events: Free Buffer Waits vs. Buffer Busy Waits (Xml and More)

Saturday, August 19, 2017

JMeter—How to Reorder and Regroup JMeter Elements

The JMeter test tree contains elements that are both hierarchical and ordered. Some elements in the test trees are strictly hierarchical (Listeners, Config Elements, Post-Processors, Pre-Processors, Assertions, Timers), and some are primarily ordered (controllers, samplers).[1,7]

When you add a new JMeter element to a parent element (e.g., Test Plan, Thread Group, etc.), it adds the new element to the end of the child list.  So, sometimes you need to reorder elements in the list or regroup elements under different parent elements.  For this purpose, the following JMeter GUI supports come in handy:
  • Drag and drop
  • Cut, copy, and paste
 In this article, we will demonstrate these editing capabilities of Apache JMeter.

Cut, Copy and Paste (Case #1)

In the following Test Plan, we have three different Thread Groups under Test Plan.  At the beginning, all child elements were listed under jp@gc - Stepping Thread Group.

To move these child elements into Thread Group, I can click on CSV Data Set Config, hold the Shift key, select all child elements using the Down Arrow, and press Ctrl-X to cut them.

To paste them into Thread Group, I click on Thread Group and press Ctrl-V.

In this scenario, it will be easy for you to experiment with three different Thread Group plugins and learn their different capabilities.

Cut, Copy, and Paste (Case #2)

Sometimes you want to copy JMeter elements from one .jmx to another .jmx.  In this case, you can launch two JMeter GUIs following the instructions here.  For example, you can click on jmeter.bat twice to start two different JMeter sessions in Windows.

After you open two Test Plans in two different GUIs, you can then copy-and-paste elements from one JMeter to another, similar to the previous example.

Drag and Drop

For elements stored in the test tree, you can also drag-and-drop them from one position to another or change their levels in the tree hierarchy.  Note that the level of an element in the test tree determines the scope of its effect.  Read [1,6,7] for more information.

To drag a child element from one position to another position, for example,  I can click on HTTP Cache Manager,

drag it to a new position (i.e., before HTTP Cookie Manager), and drop it.

Note that the ordering of Cookie and Cache Managers in this example doesn't matter.  Read [6] and [7] for the details of execution order and scoping rules in JMeter.


Monday, August 14, 2017

JMeter―Using the Transaction Controller

Apache JMeter is a performance testing tool written in Java that supports many operating systems. Controllers are a main part of JMeter; they are useful for controlling the execution of JMeter scripts for load testing.

For example, you can use Transaction Controller to get the total execution time of a transaction (i.e., an end-to-end scenario) which might include the following transaction steps:
Login → Compute Details → Billing Metrics → Back to Dashboard → Logout

Watch the video below for more details on the Transaction Controller.


JMeter has two types of Controllers:[3]

  • Samplers
    • Can be used to specify which types of requests to be sent to a server
    • You may add Configuration Elements to these Samplers to customize your server requests.
    • Examples of Samplers include, but are not limited to:
      • HTTP Request
      • FTP Request
      • JDBC Request
      • Java Request
      • SOAP/XML-RPC Request
      • WebService (SOAP) Request
      • LDAP Request
      • LDAP Extended Request
      • Access Log Sampler
      • BeanShell Sampler
  • Logic Controllers
    • Can be used to customize the logic that JMeter uses to decide when to send requests
      • For these requests, JMeter may randomly select (using Random Controller), repeat (using Loop Controller), interchange (using Interleave Controller) etc.
      • The child elements of a Logic Controller may comprise Samplers and other Logic Controllers
    • Examples of Logic Controllers include, but are not limited to:
      • Transaction Controller
      • Simple Controller
      • Loop Controller
      • Interleave Controller
      • Random Controller
      • Random Order Controller
      • Throughput Controller
      • Recording Controller
In this article, we will focus mainly on the Transaction Controller, which may be used to
  • Generate a "virtual" sample to measure the aggregate time of all nested samples

Option: "Generate Parent Sample"

When "Generate parent sample" in the Transaction Controller is
  • Checked
    • Only the Transaction Controller's virtual sample will be generated; the Transaction Controller's nested samples will not be displayed in the report
  • Unchecked
    • An additional parent sample (i.e., the Transaction Controller's virtual sample) will be displayed in the report after the nested samples

Option: "Include Duration of Timer and Pre-Post Processors in Generated Sample"

Each Sampler can be preceded by one or more Pre-processor elements and followed by Post-processor elements. There is also an option in the Transaction Controller to include or exclude the execution time of timers and pre- and post-processors in the generated virtual samples.

When the check box "Include duration of timer and pre-post processors in generated sample" is
  • Checked
    • The aggregate time includes all processing within the controller scope, not just the nested samples
  • Unchecked
    • The aggregate time includes just the nested samples and excludes all pre-post processing within the controller scope

Sunday, August 13, 2017

JMeter: Using the HTTP Cookie Manager

In a stateless internet, many sites and applications use cookies to retain a handle between sessions or to keep some state on the client side. If you are planning to use JMeter to test such web applications, then cookie manager will be required.

To learn how to enable the HTTP Cookie Manager and run tests in JMeter, you can watch the video below.

In this article, we will cover two topics:
  1. Why cookie manager?
  2. Where to put cookie manager?

Why Cookie Manager

If you need to extract cookie data from a response, one option is to use a Regular Expression Extractor on the response headers.[4] Another, simpler option is adding an HTTP Cookie Manager, which automatically handles cookies in many configurable ways.

HTTP Cookie Manager has three functions:
  1. Stores and sends cookies just like a web browser
    • Each JMeter thread has its own "cookie storage area".
      • Note that such cookies do not appear on the Cookie Manager display, but they can be seen using the View Results Tree Listener.
  2. Received Cookies can be stored as JMeter thread variables
    • Versions of JMeter 2.3.2+ no longer do this by default
    • To save cookies as variables, define the property "CookieManager.save.cookies=true" by
      • Setting it in the jmeter.properties file, or
      • Passing a corresponding parameter to the JMeter startup script
        • jmeter -JCookieManager.save.cookies=true
    • The names of the cookies contain the prefix "COOKIE_", which can be configured by the property "CookieManager.name.prefix"
    • See [4] for an example
  3. Supports adding a cookie to the Cookie Manager manually
    • Note that if you do this, the cookie will be shared by all JMeter threads—such cookies are created with an expiration date far in the future.

Where to Put Cookie Manager

Nearly all web testing should use cookie support, unless your application specifically doesn't use cookies. To add cookie support, simply add an HTTP Cookie Manager to each Thread Group in your test plan. This will ensure that each thread gets its own cookies, shared across all HTTP Request samplers in that thread.

To add the HTTP Cookie Manager, simply select the Thread Group and choose Add → Config Element → HTTP Cookie Manager, either from the Edit Menu or from the right-click pop-up menu.


  1. Using the HTTP Cookie Manager in JMeter
  2. Understanding and Using JMeter Cookie Manager
  3. Adding Cookie Support
  4. Header Cookie “sid” value to a variable
  5. Using RegEx (Regular Expression Extractor) with JMeter
  6. JMeter: How to Turn Off Captive Portal from the Recording Using Firefox (Xml and More)

Friday, August 11, 2017

JMeter: How to Turn Off Captive Portal from the Recording Using Firefox

Apache JMeter is an Apache project that can be used as a load testing tool for analyzing and measuring the performance of a variety of services, with a focus on web applications.

Assume you have installed JMeter and are familiar with it; otherwise, you can watch a good series of videos here to get started.  In this article, we will discuss how to remove the extra HTTP request (/success.txt) from the recording.

HTTP(S) Test Script Recorder

You can follow the instructions here to record your web test with "HTTP(S) Test Script Recorder".  In this article, we have chosen Firefox as the browser for JMeter's proxy recorder.

When we recorded a test plan, some repetitive HTTP requests related to the Captive Portal feature in Firefox were captured.  This HTTP traffic is not related to the tested web application and should be excluded from the test plan.  So, how can we achieve that?

How to Turn Off "Captive Portal"

The Captive Portal feature in Firefox covers the detection and handling of captive portals inside the browser. Upon detecting a captive portal, Firefox is expected to handle its login page.

There is no UI checkbox for disabling Captive Portal.  But, you can turn off Captive Portal using the Configuration Editor of Firefox:[3]
  1. In a new tab, type or paste about:config in the address bar and press Enter/Return. Click the button promising to be careful.
  2. In the search box above the list, type or paste captiv and pause while the list is filtered
  3. Double-click the network.captive-portal-service.enabled preference to switch the value from true to false
If you are in a managed environment using an autoconfig file, for example, you could use this to switch the default:
user_pref("network.captive-portal-service.enabled", false)


  1. Apache JMeter
  2. JMeter Beginner Tutorial 21 - How to use Test Script Recorder
  3. HTTP(S) Test Script Recorder 
  4. Proxy Step by Step (Apache JMeter)
  5. Book: Apache JMeter (Publisher: Packt Publishing) 
  6. Turn off captive portal (Mozilla Support)

Saturday, June 24, 2017

How to Access OAuth Protected Resources Using Postman

To access an OAuth 2.0 protected resource, you need to provide an access token.  For example, in the new implementation of Oracle Event Hub Cloud Service, Kafka brokers are OAuth 2.0 protected resources.

In this article, we will demonstrate how to obtain an access token of "bearer" type using Postman.

OAuth 2.0

OAuth enables clients to access protected resources by obtaining an access token, which is defined in "The OAuth 2.0 Authorization Framework" as "a string representing an access authorization issued to the client", rather than using the resource owner's credentials directly.

There are different access token types; the "bearer" type is the one used in this article.
Each access token type specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter. It also defines the HTTP authentication method used to include the access token when making a protected resource request.

For example, in this article, you will learn how to retrieve a bearer token using Postman, in which the generated HTTP response will look like below:

    {
      "access_token": "eyJ4NXQjUzI1Ni <snipped> M8Ei_VoT0kjc",
      "token_type": "Bearer",
      "expires_in": 3600
    }

To prevent misuse, bearer tokens need to be protected from disclosure in storage and in transport.


Postman is a Google Chrome app for interacting with HTTP APIs. It presents you with a friendly GUI for constructing requests and reading responses. To download it, click on this link.

You can generate code snippets using Postman for sharing purposes (see the diagram above; a better alternative, however, is to export/import a collection).  For example, we will use the following snippets for illustration in this article.

POST /oauth2/v1/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Accept: application/json
Cache-Control: no-cache
Postman-Token: 55cfed4b-509c-5a6f-a415-8542d04fc7ad


Generating Bearer Token

To access OAuth protected resources, you need to retrieve an access token first.  In this example, we will demonstrate with the access token of bearer type.

Based on the shared code snippets above, we need to send an HTTP POST request to the token endpoint URL,

which is composed from the following information in the snippets:

POST /oauth2/v1/token HTTP/1.1

Note that we have used https instead of http in the URL.

For the Authorization, we have specified the "Basic Auth" type with a Username and a Password; in the snippets, it shows as below:


In the "Header" part, we have specified two headers in addition to the "Authorization" header using "Bulk Edit" mode:


In the "Body" part, we have copied the last line from the code snippets to it in raw mode:


Note that the above body part is specific to the Oracle Identity Cloud Service (IDCS) implementation.  Similarly, the "Authorization" part requires us to specify the "Client ID" and "Client Secret" as username and password, which are also IDCS-specific.

How to Use Bearer Token

To access OAuth protected resources, you specify the retrieved access token in the header of subsequent HTTP requests in the following format:

Authorization:Bearer eyJ4NXQjUzI1Ni <snipped> M8Ei_VoT0kjc
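For example, a complete request for a protected resource might look like this (the host and resource path are hypothetical; the token is the one retrieved above):

```http
GET /api/v1/instance HTTP/1.1
Host: myservice.example.com
Accept: application/json
Authorization: Bearer eyJ4NXQjUzI1Ni <snipped> M8Ei_VoT0kjc
```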

Note that this access token will expire in one hour as noted in the HTTP response:

"expires_in": 3600


In this article, we have demonstrated:
  • What a bearer token is
  • What an access token looks like
  • How to share a code snippet
    • We have shown that reverse-engineering the shared code snippets into the final Postman setup is not straightforward.  For example, the code snippets don't tell us:
      • What "Username" and "Password" to use.  In this case, we need to know that the application's "Client ID" and "Client Secret" are required.
    • Therefore, if you share code snippets with co-workers, you also need to add further annotations to allow them to reproduce the HTTP requests to be sent.