Linux - dd Utility (Dataset definition) - (Throughput test validation)

About

The dd utility is a common Unix program whose primary purpose is the low-level copying and conversion of raw data. The name dd is usually traced back to the DD (Data Definition) statement of IBM's Job Control Language (JCL), whose parameter syntax the command imitates.

Syntax

The most important dd options are:

  • bs=BYTES: read and write BYTES bytes at a time
  • count=BLOCKS: copy only BLOCKS input blocks
  • if=FILE: read from FILE instead of standard input; for instance, /dev/urandom
  • of=FILE: write to FILE instead of standard output; for instance, /dev/null to evaluate read performance
  • skip=BLOCKS: skip BLOCKS ibs-sized blocks at the start of input
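Combining these options, a minimal sanity check looks like the command below. Both endpoints are pseudo-devices, so no real disk is touched; it only exercises dd itself.

```shell
# Read 100 MiB of zeros and discard them; when it finishes, dd
# reports records copied, elapsed time and throughput on stderr.
dd if=/dev/zero of=/dev/null bs=1048576 count=100
```

Substitute a real device for if= to measure actual disk reads.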

Usage

Throughput test validation

A very basic way to validate the operating system throughput on UNIX or Linux systems is to use the dd utility. Because there is almost no overhead involved, the output from the dd utility provides a reliable calibration.

Oracle Database and other applications can reach a maximum throughput of approximately 90 percent of what the dd utility can achieve.

To estimate the maximum throughput, you can mimic the workload of a typical application, which consists of large sequential reads issued at random offsets across the disk.

In your test, you should include all the storage devices that you plan to include for your database storage. When you configure a clustered environment, you should run dd commands from every node.
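Dispatching the same command to every node can be scripted from one host; the loop below is a sketch where the node names (node1, node2) are placeholders and the echo makes it a dry run. Drop the echo to actually execute over ssh.

```shell
# Hypothetical node list - replace with your cluster's hostnames.
for node in node1 node2; do
  # Dry run: print the ssh command instead of executing it.
  echo ssh "$node" "dd bs=1048576 count=200 if=/raw/data_1 of=/dev/null"
done
```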

Management

Random sequential disk access

The following dd command performs random sequential disk access across two devices reading a total of 2 GB. The throughput is 2 GB divided by the time it takes to finish the following command:

dd bs=1048576 count=200 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=200 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=400 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=600 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=800 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=200 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=400 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=600 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=800 if=/raw/data_2 of=/dev/null &
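Because the ten dd processes run in the background, the elapsed time must be taken around a wait that blocks until they all finish. The sketch below shows the pattern with small temporary files standing in for the raw devices so that it runs anywhere; substitute /raw/data_1 and /raw/data_2 to measure real storage.

```shell
#!/bin/sh
# Create two small scratch files to stand in for the raw devices.
f1=$(mktemp); f2=$(mktemp)
dd if=/dev/zero of="$f1" bs=1048576 count=20 2>/dev/null
dd if=/dev/zero of="$f2" bs=1048576 count=20 2>/dev/null

start=$(date +%s)
# Launch the reads in parallel, each starting at a different offset.
dd bs=1048576 count=10 if="$f1" of=/dev/null 2>/dev/null &
dd bs=1048576 count=10 skip=10 if="$f1" of=/dev/null 2>/dev/null &
dd bs=1048576 count=10 if="$f2" of=/dev/null 2>/dev/null &
dd bs=1048576 count=10 skip=10 if="$f2" of=/dev/null 2>/dev/null &
wait                 # block until every background dd has finished
end=$(date +%s)
echo "elapsed: $((end - start)) s"

rm -f "$f1" "$f2"
```

The throughput is then the total bytes read divided by the printed elapsed time.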

Create a file with Random Content

  • Write random content to a file (testIOdd)
dd if=/dev/urandom of=testIOdd bs=1024 count=102400
102400+0 records in
102400+0 records out
104857600 bytes (105 MB) copied, 17.131 seconds, 6.1 MB/s

where:

  • if = input file and of = output file
  • bs=BYTES sets both ibs=BYTES and obs=BYTES
  • obs=BYTES means write obs bytes at a time
  • ibs=BYTES means read ibs bytes at a time
  • count=BLOCKS: copy only BLOCKS input blocks

Then 1024 bytes * 102400 blocks makes a file of 100 MiB (104,857,600 bytes, which dd reports as 105 MB in decimal units).
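The arithmetic can be checked directly in the shell:

```shell
# block size * block count = total bytes written
echo $((1024 * 102400))                 # total bytes: 104857600
echo $((1024 * 102400 / 1024 / 1024))  # in MiB: 100
```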

