List of Benchmarks

FIO

Installation:

sudo apt-get install fio

mukuls@MUKUL:~/thesis/fsbench/fio-2.1.2$ fio --filename=/tmp/smr/sfs/a
--direct=1 --rw=randrw --bs=4k --rwmixread=100 --iodepth=100 --numjobs=16
--name=4ktest --group_reporting --size=15m
4ktest: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=sync, iodepth=100
...
4ktest: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=sync, iodepth=100
2.0.8
Starting 16 processes
4ktest: Laying out IO file(s) (1 file(s) / 15MB)

4ktest: (groupid=0, jobs=16): err= 0: pid=16861
read : io=245760KB, bw=326809KB/s, iops=81702 , runt= 752msec
clat (usec): min=43 , max=2762 , avg=192.77, stdev=41.90
lat (usec): min=43 , max=2762 , avg=192.84, stdev=41.91
clat percentiles (usec):
| 1.00th=[ 173], 5.00th=[ 185], 10.00th=[ 185], 20.00th=[ 187],
| 30.00th=[ 187], 40.00th=[ 189], 50.00th=[ 189], 60.00th=[ 189],
| 70.00th=[ 191], 80.00th=[ 191], 90.00th=[ 201], 95.00th=[ 215],
| 99.00th=[ 270], 99.50th=[ 306], 99.90th=[ 668], 99.95th=[ 772],
| 99.99th=[ 2128]
bw (KB/s) : min=20112, max=20432, per=6.22%, avg=20315.00, stdev=86.35
lat (usec) : 50=0.02%, 100=0.09%, 250=98.61%, 500=1.04%, 750=0.20%
lat (usec) : 1000=0.02%
lat (msec) : 2=0.01%, 4=0.02%
cpu : usr=1.41%, sys=0.95%, ctx=61515, majf=0, minf=369
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=61440/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
READ: io=245760KB, aggrb=326808KB/s, minb=326808KB/s, maxb=326808KB/s,
mint=752msec, maxt=752msec
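
The same job is easier to keep as a job file than as a long command line; note that --rwmixread=100 sends all of the "mixed" I/O to reads, so this run is effectively a pure random-read test. A minimal job-file sketch, assuming a hypothetical file name 4ktest.fio:

; 4ktest.fio -- job-file equivalent of the command line above (file name is illustrative)
[global]
direct=1
rw=randrw
rwmixread=100   ; 100 => every I/O in the mix is a read
bs=4k
iodepth=100
numjobs=16
size=15m
group_reporting

[4ktest]
filename=/tmp/smr/sfs/a

Run with: fio 4ktest.fio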

Postmark

Installation:

sudo apt-get install postmark

mukuls@MUKUL:~/thesis/smr_main/src/test/local$ postmark

PostMark v1.51 : 8/14/01
pm>set location=/tmp/smr/sfs
pm>run
Creating files...Done
Performing transactions..........Done
Deleting files...Done
Time:
1 seconds total
1 seconds of transactions (500 per second)

Files:
764 created (764 per second)
Creation alone: 500 files (500 per second)
Mixed with transactions: 264 files (264 per second)
243 read (243 per second)
257 appended (257 per second)
764 deleted (764 per second)
Deletion alone: 528 files (528 per second)
Mixed with transactions: 236 files (236 per second)

Data:
1.36 megabytes read (1.36 megabytes per second)
4.45 megabytes written (4.45 megabytes per second)
pm>
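
PostMark also accepts a file of commands on the command line, which makes runs repeatable. A minimal sketch, assuming a hypothetical command file pm.cfg that replays the session above:

set location=/tmp/smr/sfs
run
quit

Run non-interactively with: postmark pm.cfg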

Filebench (webserver, webproxy, fileserver, createfiles, videoserver, varmail)

webproxy

filebench> load webproxy
22815: 17.312: Web proxy-server Version 3.0 personality successfully loaded
22815: 17.313: Usage: set $dir=<dir>
22815: 17.313: set $meanfilesize=<size> defaults to 16384
22815: 17.313: set $nfiles=<value> defaults to 10000
22815: 17.313: set $nthreads=<value> defaults to 100
22815: 17.313: set $meaniosize=<value> defaults to 16384
22815: 17.313: set $iosize=<size> defaults to 1048576
22815: 17.313: set $meandirwidth=<size> defaults to 1000000
22815: 17.313: run runtime (e.g. run 60)
filebench> set $dir=/tmp/smr/sfs
filebench> set $meandirwidth=10000
filebench> set $iosize=16k
filebench> run 10
22815: 260.246: Creating/pre-allocating files and filesets
22815: 260.266: Fileset bigfileset: 10000 files, 0 leafdirs, avg dir width = 10000, avg dir depth = 1.0, 154.045MB
22815: 260.269: Removed any existing fileset bigfileset in 1 seconds
22815: 260.269: making tree for filset /tmp/smr/sfs/bigfileset
22815: 260.270: Creating fileset bigfileset...
22815: 264.734: Preallocated 7979 of 10000 of fileset bigfileset in 5 seconds
22815: 264.734: waiting for fileset pre-allocation to finish
22883: 264.734: Starting 1 proxycache instances
22884: 264.738: Starting 100 proxycache threads
22815: 265.741: Running...
22815: 275.745: Run took 10 seconds...
22815: 275.797: Per-Operation Breakdown
limit 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
closefile6 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
readfile6 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
openfile6 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
closefile5 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
readfile5 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
openfile5 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
closefile4 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
readfile4 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
openfile4 1ops 0ops/s 0.0mb/s 1.9ms/op 0us/op-cpu [0ms - 1ms]
closefile3 1ops 0ops/s 0.0mb/s 1.1ms/op 0us/op-cpu [0ms - 1ms]
readfile3 2ops 0ops/s 0.0mb/s 2.8ms/op 0us/op-cpu [0ms - 3ms]
openfile3 2ops 0ops/s 0.0mb/s 1.6ms/op 0us/op-cpu [0ms - 2ms]
closefile2 4ops 0ops/s 0.0mb/s 1.0ms/op 0us/op-cpu [0ms - 1ms]
readfile2 4ops 0ops/s 0.0mb/s 1.2ms/op 2500us/op-cpu [0ms - 1ms]
openfile2 4ops 0ops/s 0.0mb/s 1.6ms/op 0us/op-cpu [0ms - 2ms]
closefile1 4ops 0ops/s 0.0mb/s 0.5ms/op 2500us/op-cpu [0ms - 0ms]
appendfilerand1 5ops 0ops/s 0.0mb/s 0.9ms/op 2000us/op-cpu [0ms - 1ms]
createfile1 6ops 1ops/s 0.0mb/s 3.3ms/op 3333us/op-cpu [0ms - 5ms]
deletefile1 6ops 1ops/s 0.0mb/s 1.6ms/op 3333us/op-cpu [0ms - 3ms]
22815: 275.797: IO Summary: 39 ops, 3.900 ops/s, (1/0 r/w), 0.0mb/s, 76364us cpu/op, 5.7ms latency
22815: 275.797: Shutting down processes
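
These interactive sessions can also be scripted: Filebench accepts a workload file via -f. A minimal sketch for the webproxy run above, assuming a hypothetical file webproxy.f (the same pattern works for the other personalities below):

# webproxy.f -- replays the interactive webproxy session above
load webproxy
set $dir=/tmp/smr/sfs
set $meandirwidth=10000
set $iosize=16k
run 10

Run with: filebench -f webproxy.f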

videoserver

filebench> load videoserver
22419: 17.445: Eventgen rate taken from variable
22419: 17.445: Video Server Version 3.0 personality successfully loaded
22419: 17.445: Usage: set $dir=<dir> defaults to /tmp
22419: 17.445: set $eventrate=<value> defaults to 96
22419: 17.445: set $filesize=<size> defaults to 10737418240
22419: 17.445: set $nthreads=<value> defaults to 48
22419: 17.445: set $writeiosize=<value> defaults to 1048576
22419: 17.445: set $readiosize=<value> defaults to 262144
22419: 17.445: set $numactivevids=<value> defaults to 32
22419: 17.445: set $numpassivevids=<value> defaults to 194
22419: 17.445: run runtime (e.g. run 60)
filebench> set $dir=/tmp/smr/sfs
filebench> set $filesize=16k
filebench> run 10
22419: 92.880: Creating/pre-allocating files and filesets
22419: 92.881: Fileset passivevids: 194 files, 0 leafdirs, avg dir width = 20, avg dir depth = 1.8, 2.859MB
22419: 92.884: Removed any existing fileset passivevids in 1 seconds
22419: 92.884: making tree for filset /tmp/smr/sfs/passivevids
22419: 92.888: Creating fileset passivevids...
22419: 92.923: Preallocated 104 of 194 of fileset passivevids in 1 seconds
22419: 92.924: Fileset activevids: 32 files, 0 leafdirs, avg dir width = 4, avg dir depth = 2.5, 0.545MB
22419: 92.935: Removed any existing fileset activevids in 1 seconds
22419: 92.935: making tree for filset /tmp/smr/sfs/activevids
22419: 92.937: Creating fileset activevids...
22419: 92.940: Preallocated 32 of 32 of fileset activevids in 1 seconds
22419: 92.940: waiting for fileset pre-allocation to finish
22585: 92.945: Starting 1 vidreaders instances
22585: 92.945: Starting 1 vidwriter instances
22587: 92.949: Starting 1 vidwriter threads
22586: 92.950: Starting 48 vidreaders threads
22419: 93.951: Running...
22419: 103.952: Run took 10 seconds...
22419: 103.953: Per-Operation Breakdown
serverlimit 79110ops 7910ops/s 0.0mb/s 2.7ms/op 2063042us/op-cpu [0ms - 3999ms]
vidreader 79227ops 7922ops/s 67.1mb/s 0.0ms/op 3749us/op-cpu [0ms - 11ms]
replaceinterval 0ops 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu [0ms - 0ms]
wrtclose 1ops 0ops/s 0.0mb/s 0.2ms/op 0us/op-cpu [0ms - 0ms]
newvid 1ops 0ops/s 0.0mb/s 0.2ms/op 0us/op-cpu [0ms - 0ms]
wrtopen 1ops 0ops/s 0.0mb/s 0.7ms/op 0us/op-cpu [0ms - 0ms]
vidremover 1ops 0ops/s 0.0mb/s 0.8ms/op 0us/op-cpu [0ms - 0ms]
22419: 103.953: IO Summary: 79231 ops, 7922.517 ops/s, (7922/0 r/w), 67.1mb/s, 0us cpu/op, 0.0ms latency
22419: 103.953: Shutting down processes

createfiles

filebench> load createfiles
22149: 13.084: Createfiles Version 3.0 personality successfully loaded
22149: 13.084: Usage: set $dir=<dir> defaults to /tmp
22149: 13.084: set $meanfilesize=<size> defaults to 16384
22149: 13.084: set $iosize=<size> defaults to 1048576
22149: 13.084: set $nfiles=<value> defaults to 50000
22149: 13.084: set $nthreads=<value> defaults to 16
22149: 13.084: set $meandirwidth=<size> defaults to 100
22149: 13.084: run
filebench> set $dir=/tmp/smr/sfs
filebench> run 10
22149: 28.985: Creating/pre-allocating files and filesets
22149: 29.047: Fileset bigfileset: 50000 files, 0 leafdirs, avg dir width = 100, avg dir depth = 2.3, 778.397MB
22149: 29.068: Removed any existing fileset bigfileset in 1 seconds
22149: 29.068: Creating fileset bigfileset...
22149: 29.146: Preallocated 0 of 50000 of fileset bigfileset in 1 seconds
22149: 29.146: waiting for fileset pre-allocation to finish
22154: 29.147: Starting 1 filecreate instances
22155: 29.151: Starting 16 filecreatethread threads
22149: 30.153: Running...
22149: 68.156: Run took 38 seconds...
22149: 68.156: Per-Operation Breakdown
closefile1 49985ops 1315ops/s 0.0mb/s 0.3ms/op 81us/op-cpu [0ms - 2310ms]
writefile1 49986ops 1315ops/s 20.5mb/s 0.7ms/op 213us/op-cpu [0ms - 4403ms]
createfile1 50000ops 1316ops/s 0.0mb/s 10.9ms/op 2627us/op-cpu [0ms - 4414ms]
22149: 68.156: IO Summary: 149971 ops, 3946.296 ops/s, (0/1315 r/w), 20.5mb/s, 1043us cpu/op, 11.9ms latency
22149: 68.156: Shutting down processes

varmail

filebench> load varmail
21752: 9.711: Varmail Version 3.0 personality successfully loaded
21752: 9.711: Usage: set $dir=<dir>
21752: 9.711: set $meanfilesize=<size> defaults to 16384
21752: 9.711: set $nfiles=<value> defaults to 1000
21752: 9.711: set $nthreads=<value> defaults to 16
21752: 9.711: set $meanappendsize=<value> defaults to 16384
21752: 9.711: set $iosize=<size> defaults to 1048576
21752: 9.711: set $meandirwidth=<size> defaults to 1000000
21752: 9.711: run runtime (e.g. run 60)
filebench> set $dir=/tmp/smr/sfs
filebench> run 10
21752: 26.814: Creating/pre-allocating files and filesets
21752: 26.816: Fileset bigfileset: 1000 files, 0 leafdirs, avg dir width = 1000000, avg dir depth = 0.5, 14.959MB
21752: 26.819: Removed any existing fileset bigfileset in 1 seconds
21752: 26.819: making tree for filset /tmp/smr/sfs/bigfileset
21752: 26.820: Creating fileset bigfileset...
21752: 27.345: Preallocated 805 of 1000 of fileset bigfileset in 1 seconds
21752: 27.345: waiting for fileset pre-allocation to finish
21758: 27.345: Starting 1 filereader instances
21759: 27.346: Starting 16 filereaderthread threads
21752: 28.382: Running...
21752: 38.383: Run took 10 seconds...
21752: 38.384: Per-Operation Breakdown
closefile4 3488ops 349ops/s 0.0mb/s 0.8ms/op 444us/op-cpu [0ms - 6ms]
readfile4 3488ops 349ops/s 5.3mb/s 1.9ms/op 923us/op-cpu [0ms - 22ms]
openfile4 3488ops 349ops/s 0.0mb/s 1.5ms/op 662us/op-cpu [0ms - 10ms]
closefile3 3491ops 349ops/s 0.0mb/s 0.8ms/op 455us/op-cpu [0ms - 8ms]
fsyncfile3 3491ops 349ops/s 0.0mb/s 1.1ms/op 418us/op-cpu [0ms - 18ms]
appendfilerand3 3491ops 349ops/s 2.7mb/s 1.5ms/op 745us/op-cpu [0ms - 22ms]
readfile3 3492ops 349ops/s 5.2mb/s 1.8ms/op 899us/op-cpu [0ms - 24ms]
openfile3 3493ops 349ops/s 0.0mb/s 1.4ms/op 624us/op-cpu [0ms - 22ms]
closefile2 3494ops 349ops/s 0.0mb/s 0.8ms/op 429us/op-cpu [0ms - 18ms]
fsyncfile2 3494ops 349ops/s 0.0mb/s 1.0ms/op 415us/op-cpu [0ms - 22ms]
appendfilerand2 3494ops 349ops/s 2.7mb/s 1.5ms/op 833us/op-cpu [0ms - 22ms]
createfile2 3495ops 349ops/s 0.0mb/s 14.5ms/op 7210us/op-cpu [0ms - 44ms]
deletefile1 3500ops 350ops/s 0.0mb/s 13.4ms/op 6606us/op-cpu [0ms - 43ms]
21752: 38.385: IO Summary: 45399 ops, 4539.454 ops/s, (698/698 r/w), 16.0mb/s, 1613us cpu/op, 10.5ms latency
21752: 38.385: Shutting down processes

fileserver

filebench> load fileserver
18464: 6.044: File-server Version 3.0 personality successfully loaded
18464: 6.044: Usage: set $dir=<dir>
18464: 6.044: set $meanfilesize=<size> defaults to 131072
18464: 6.044: set $nfiles=<value> defaults to 10000
18464: 6.044: set $nthreads=<value> defaults to 50
18464: 6.044: set $meanappendsize=<value> defaults to 16384
18464: 6.044: set $iosize=<size> defaults to 1048576
18464: 6.044: set $meandirwidth=<size> defaults to 20
18464: 6.044: run runtime (e.g. run 60)
filebench> set $dir=/tmp/smr/sfs
filebench> set $nfiles=1000
filebench> set $meandirwidth=20
filebench> set $meanfilesize=16k
filebench> set $nthreads=100
filebench> set $iosize=4k
filebench> set $meanappendsize=16k
filebench> run 10
18464: 69.942: Creating/pre-allocating files and filesets
18464: 69.945: Fileset bigfileset: 1000 files, 0 leafdirs, avg dir width = 20, avg dir depth = 2.3, 14.903MB
18464: 69.948: Removed any existing fileset bigfileset in 1 seconds
18464: 69.948: making tree for filset /tmp/smr/sfs/bigfileset
18464: 69.965: Creating fileset bigfileset...
18464: 70.386: Preallocated 805 of 1000 of fileset bigfileset in 1 seconds
18464: 70.386: waiting for fileset pre-allocation to finish
18473: 70.386: Starting 1 filereader instances
18474: 70.388: Starting 100 filereaderthread threads
18464: 71.390: Running...
18464: 81.392: Run took 10 seconds...
18464: 81.404: Per-Operation Breakdown
statfile1 1524ops 152ops/s 0.0mb/s 2.5ms/op 1982us/op-cpu [0ms - 12ms]
deletefile1 1522ops 152ops/s 0.0mb/s 14.7ms/op 11150us/op-cpu [0ms - 83ms]
closefile3 1541ops 154ops/s 0.0mb/s 3.3ms/op 2790us/op-cpu [0ms - 26ms]
readfile1 1541ops 154ops/s 2.9mb/s 22.5ms/op 16496us/op-cpu [0ms - 94ms]
openfile2 1562ops 156ops/s 0.0mb/s 4.7ms/op 3419us/op-cpu [0ms - 29ms]
closefile2 1564ops 156ops/s 0.0mb/s 3.4ms/op 2449us/op-cpu [0ms - 25ms]
appendfilerand1 1567ops 157ops/s 1.2mb/s 8.6ms/op 6331us/op-cpu [0ms - 33ms]
openfile1 1577ops 158ops/s 0.0mb/s 4.7ms/op 3310us/op-cpu [0ms - 29ms]
closefile1 1581ops 158ops/s 0.0mb/s 3.4ms/op 2751us/op-cpu [0ms - 25ms]
wrtfile1 1582ops 158ops/s 2.5mb/s 28.7ms/op 21018us/op-cpu [0ms - 139ms]
createfile1 1601ops 160ops/s 0.0mb/s 30.8ms/op 23167us/op-cpu [1ms - 110ms]
18464: 81.404: IO Summary: 17162 ops, 1716.054 ops/s, (154/315 r/w), 6.8mb/s, 1147us cpu/op, 42.6ms latency
18464: 81.404: Shutting down processes

webserver

mukuls@MUKUL:~/thesis/fsbench/filebench-1.4.9.1$ filebench
Filebench Version 1.4.9.1
IMPORTANT: Virtual address space randomization is enabled on this machine!
It is highly recommended to disable randomization to provide stable Filebench runs.
Echo 0 to /proc/sys/kernel/randomize_va_space file to disable the randomization.
WARNING: Could not open /proc/sys/kernel/shmmax file!
It means that you probably ran Filebench not as a root. Filebench will not increase shared
region limits in this case, which can lead to the failures on certain workloads.
13832: 0.000: Allocated 170MB of shared memory
filebench> load webserver
13832: 11.930: Web-server Version 3.0 personality successfully loaded
13832: 11.930: Usage: set $dir=<dir>
13832: 11.930: set $meanfilesize=<size> defaults to 16384
13832: 11.930: set $nfiles=<value> defaults to 1000
13832: 11.931: set $meandirwidth=<value> defaults to 20
13832: 11.931: set $nthreads=<value> defaults to 100
13832: 11.931: set $iosize=<size> defaults to 1048576
13832: 11.931: run runtime (e.g. run 60)
filebench> set $dir=/tmp/smr/sfs
filebench> run 10
13832: 24.453: Creating/pre-allocating files and filesets
13832: 24.453: Fileset logfiles: 1 files, 0 leafdirs, avg dir width = 20, avg dir depth = 0.0, 0.002MB
13832: 24.455: Removed any existing fileset logfiles in 1 seconds
13832: 24.456: making tree for filset /tmp/smr/sfs/logfiles
13832: 24.456: Creating fileset logfiles...
13832: 24.457: Preallocated 1 of 1 of fileset logfiles in 1 seconds
13832: 24.460: Fileset bigfileset: 1000 files, 0 leafdirs, avg dir width = 20, avg dir depth = 2.3, 14.760MB
13832: 24.462: Removed any existing fileset bigfileset in 1 seconds
13832: 24.462: making tree for filset /tmp/smr/sfs/bigfileset
13832: 24.479: Creating fileset bigfileset...
13832: 24.982: Preallocated 1000 of 1000 of fileset bigfileset in 1 seconds
13832: 24.982: waiting for fileset pre-allocation to finish
13839: 24.982: Starting 1 filereader instances
13840: 24.982: Starting 100 filereaderthread threads
13832: 26.295: Running...
13832: 36.303: Run took 10 seconds...
13832: 36.446: Per-Operation Breakdown

appendlog 2107ops 211ops/s 1.5mb/s 56.8ms/op 41580us/op-cpu [0ms - 349ms]
closefile10 2009ops 201ops/s 0.0mb/s 2.6ms/op 3783us/op-cpu [0ms - 20ms]
readfile10 2009ops 201ops/s 2.9mb/s 18.6ms/op 14017us/op-cpu [0ms - 123ms]
openfile10 2014ops 201ops/s 0.0mb/s 6.6ms/op 5636us/op-cpu [0ms - 44ms]
closefile9 2016ops 202ops/s 0.0mb/s 2.6ms/op 3819us/op-cpu [0ms - 33ms]
readfile9 2017ops 202ops/s 3.0mb/s 19.3ms/op 13922us/op-cpu [0ms - 110ms]
openfile9 2021ops 202ops/s 0.0mb/s 6.5ms/op 5552us/op-cpu [0ms - 44ms]
closefile8 2021ops 202ops/s 0.0mb/s 2.6ms/op 3835us/op-cpu [0ms - 27ms]
readfile8 2021ops 202ops/s 2.9mb/s 18.9ms/op 14334us/op-cpu [0ms - 104ms]
openfile8 2023ops 202ops/s 0.0mb/s 6.4ms/op 5709us/op-cpu [0ms - 42ms]
closefile7 2024ops 202ops/s 0.0mb/s 2.7ms/op 3918us/op-cpu [0ms - 33ms]
readfile7 2024ops 202ops/s 3.0mb/s 20.1ms/op 14699us/op-cpu [0ms - 108ms]
openfile7 2025ops 202ops/s 0.0mb/s 6.3ms/op 5269us/op-cpu [0ms - 40ms]
closefile6 2025ops 202ops/s 0.0mb/s 2.5ms/op 3620us/op-cpu [0ms - 20ms]
readfile6 2029ops 203ops/s 3.0mb/s 19.8ms/op 14672us/op-cpu [0ms - 152ms]
openfile6 2049ops 205ops/s 0.0mb/s 6.3ms/op 5476us/op-cpu [0ms - 39ms]
closefile5 2053ops 205ops/s 0.0mb/s 2.6ms/op 3707us/op-cpu [0ms - 37ms]
readfile5 2056ops 206ops/s 3.0mb/s 20.2ms/op 14966us/op-cpu [0ms - 132ms]
openfile5 2069ops 207ops/s 0.0mb/s 6.4ms/op 5510us/op-cpu [0ms - 39ms]
closefile4 2073ops 207ops/s 0.0mb/s 2.6ms/op 3816us/op-cpu [0ms - 33ms]
readfile4 2077ops 208ops/s 2.9mb/s 19.2ms/op 14035us/op-cpu [0ms - 155ms]
openfile4 2081ops 208ops/s 0.0mb/s 6.3ms/op 5593us/op-cpu [0ms - 41ms]
closefile3 2082ops 208ops/s 0.0mb/s 2.7ms/op 4121us/op-cpu [0ms - 33ms]
readfile3 2084ops 208ops/s 3.0mb/s 18.3ms/op 13455us/op-cpu [0ms - 100ms]
openfile3 2092ops 209ops/s 0.0mb/s 6.4ms/op 5507us/op-cpu [0ms - 36ms]
closefile2 2093ops 209ops/s 0.0mb/s 2.5ms/op 4009us/op-cpu [0ms - 21ms]
readfile2 2093ops 209ops/s 3.1mb/s 18.4ms/op 13774us/op-cpu [0ms - 134ms]
openfile2 2095ops 209ops/s 0.0mb/s 5.9ms/op 4964us/op-cpu [0ms - 39ms]
closefile1 2097ops 210ops/s 0.0mb/s 2.7ms/op 3534us/op-cpu [0ms - 33ms]
readfile1 2102ops 210ops/s 3.0mb/s 17.0ms/op 12721us/op-cpu [0ms - 103ms]
openfile1 2103ops 210ops/s 0.0mb/s 5.1ms/op 6372us/op-cpu [0ms - 39ms]
13832: 36.449: IO Summary: 63684 ops, 6365.331 ops/s, (2050/211 r/w), 31.8mb/s, 1364us cpu/op, 30.5ms latency
13832: 36.449: Shutting down processes
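
The startup banner at the beginning of this run recommends disabling virtual address space randomization for stable results. One way to do that, assuming root access (the setting reverts on reboot):

echo 0 | sudo tee /proc/sys/kernel/randomize_va_space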

IOR

mukuls@MUKUL:~/thesis/fsbench/IOR/src/C$ ./IOR -b 4k -t 4k -o /tmp/smr/sfs/aa
IOR-2.10.3: MPI Coordinated Test of Parallel I/O

Run began: Fri Oct 25 00:21:54 2013
Command line used: ./IOR -b 4k -t 4k -o /tmp/smr/sfs/aa
Machine: Linux MUKUL

Summary:
api = POSIX
test filename = /tmp/smr/sfs/aa
access = single-shared-file
ordering in a file = sequential offsets
ordering inter file= no tasks offsets
clients = 1 (1 per node)
repetitions = 1
xfersize = 4096 bytes
blocksize = 4096 bytes
aggregate filesize = 4096 bytes

Operation Max (MiB) Min (MiB) Mean (MiB) Std Dev Max (OPs) Min (OPs) Mean (OPs) Std Dev Mean (s)
--------- --------- --------- ---------- ------- --------- --------- ---------- ------- --------
write 3.04 3.04 3.04 0.00 777.73 777.73 777.73 0.00 0.00129 EXCEL
read 8.61 8.61 8.61 0.00 2204.05 2204.05 2204.05 0.00 0.00045 EXCEL

Max Write: 3.04 MiB /sec (3.19 MB/sec)
Max Read: 8.61 MiB /sec (9.03 MB/sec)

Run finished: Fri Oct 25 00:21:54 2013
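
Because IOR is MPI-coordinated, the same test can be scaled to several clients. A sketch assuming an MPI launcher is installed (task count, segment count, and repetition count are illustrative; -F switches from a single shared file to one file per task):

mpirun -np 4 ./IOR -b 4k -t 4k -s 16 -F -i 3 -o /tmp/smr/sfs/aa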

Bonnie++

mukuls@MUKUL:~/thesis/fsbench/IOR/src/C$ bonnie++ -d /tmp/smr/sfs/ -r 1 -s 2 -f -b -n 12
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
MUKUL 2M +++++ +++ +++++ +++ +++++ +++ 1467 8
Latency 1399us 745us 131us 2812ms
Version 1.97 ------Sequential Create------ --------Random Create--------
MUKUL -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
12 675 1 +++++ +++ 5652 5 1829 5 23211 19 5609 5
Latency 1776ms 2759us 12678us 347ms 1534us 60181us
1.97,1.97,MUKUL,1,1382668839,2M,,,,+++++,+++,+++++,+++,,,+++++,+++,1467,8,12,,,,,675,1,+++++,+++,5652,5,1829,5,23211,19,5609,5,,1399us,745us,,131us,2812ms,1776ms,2759us,12678us,347ms,1534us,60181us
mukuls@MUKUL:~/thesis/fsbench/IOR/src/C$
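
The final comma-separated line is the machine-readable form of the results, and bonnie++ ships converters for it. A sketch assuming that line has been saved to a hypothetical file out.csv:

bon_csv2html < out.csv > out.html   # results as an HTML table
bon_csv2txt < out.csv               # results as plain text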

tiobench

mukuls@MUKUL:~/thesis/fsbench/tiobench-0.3.3$ ./tiobench.pl --dir /tmp/smr/sfs --numruns 2 --block 4096 --size 12
Run #2: ./tiotest -t 8 -f 1 -r 500 -b 4096 -d /tmp/smr/sfs -TTT

Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load

Each result row below lists, in order: kernel identifier, file size, block size, thread count, rate, CPU%, average latency, maximum latency, Lat% (> 2 s), Lat% (> 10 s), and CPU efficiency. Fields printed as ###### are rates too wide for the report's field width.

Sequential Reads
3.11.0-12-generic 12 4096 1 685.32 77.58% 0.010 0.34 0.00000 0.00000 883
3.11.0-12-generic 12 4096 2 ###### 153.6% 0.009 0.48 0.00000 0.00000 1004
3.11.0-12-generic 12 4096 4 ###### 269.8% 0.012 0.74 0.00000 0.00000 644
3.11.0-12-generic 12 4096 8 ###### 246.8% 0.028 5.37 0.00000 0.00000 418

Random Reads
3.11.0-12-generic 12 4096 1 166.32 30.56% 0.045 1.00 0.00000 0.00000 544
3.11.0-12-generic 12 4096 2 388.68 74.04% 0.038 0.92 0.00000 0.00000 525
3.11.0-12-generic 12 4096 4 549.69 151.2% 0.050 3.41 0.00000 0.00000 363
3.11.0-12-generic 12 4096 8 716.25 313.9% 0.072 11.43 0.00000 0.00000 228

Sequential Writes
3.11.0-12-generic 12 4096 1 43.38 18.74% 0.166 0.94 0.00000 0.00000 231
3.11.0-12-generic 12 4096 2 28.91 12.46% 0.318 589.96 0.00000 0.00000 232
3.11.0-12-generic 12 4096 4 94.66 87.68% 0.289 14.01 0.00000 0.00000 108
3.11.0-12-generic 12 4096 8 79.36 173.9% 0.678 9.66 0.00000 0.00000 46

Random Writes
3.11.0-12-generic 12 4096 1 35.04 20.10% 0.212 2.13 0.00000 0.00000 174
3.11.0-12-generic 12 4096 2 116.00 39.73% 0.120 16.85 0.00000 0.00000 292
3.11.0-12-generic 12 4096 4 100.20 88.68% 0.271 13.75 0.00000 0.00000 113
3.11.0-12-generic 12 4096 8 79.94 188.6% 0.697 15.70 0.00000 0.00000 42
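
The thread counts swept above (1, 2, 4, 8) are the script's defaults; they can be chosen explicitly instead. A sketch, assuming the installed tiobench.pl accepts a repeated --threads option as the stock script does:

./tiobench.pl --dir /tmp/smr/sfs --numruns 2 --block 4096 --size 12 --threads 2 --threads 16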

FSMark

mukuls@MUKUL:/tmp/smr/sfs$ ~/thesis/fsbench/fs_mark-3.3/fs_mark -D 10 -d . -N 10 -L 1 -s 4096 -t 10

# /home/mukuls/thesis/fsbench/fs_mark-3.3/fs_mark -D 10 -d . -N 10 -L 1 -s 4096 -t 10
# Version 3.3, 10 thread(s) starting at Thu Oct 24 18:39:42 2013
# Sync method: INBAND FSYNC: fsync() per file in write loop.
# Directories: Round Robin between directories across 10 subdirectories with 10 files per subdirectory.
# File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
# Files info: size 4096 bytes, written with an IO size of 16384 bytes per write
# App overhead is time in microseconds spent in the test not doing file writing related system calls.

FSUse% Count Size Files/sec App Overhead
32 10000 4096 844.6 393174
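
fs_mark can also spread the load over several anchor directories by repeating -d. A sketch with illustrative paths and counts (the directories are assumed to exist):

~/thesis/fsbench/fs_mark-3.3/fs_mark -d /tmp/smr/sfs/d1 -d /tmp/smr/sfs/d2 -n 1000 -s 4096 -t 10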

Flexible File System Benchmark

This multi-threaded tool generates a variety of workloads from customizable profiles, and can be used for both small-file and large-file workloads.

http://sourceforge.net/projects/ffsb/

Usage:

ffsb examples/myprofile

where the profile can be defined as:

directio = 0
time = 10

[filesystem]
location = /home/mukuls/thesis/fsbench/test

num_dirs = 100

size_weight 4k 33
size_weight 8k 21
size_weight 16k 13
size_weight 32k 10
size_weight 64k 8
size_weight 128k 5
size_weight 256k 4
size_weight 512k 3
size_weight 8m 2
size_weight 32m 1
# size_weight 1g 1

# min_filesize = 4k
# max_filesize = 10m

# num_files = 0
init_size = 100m
# init_size = 6GB
# init_size = 1gb
# init_util = 0.002

agefs = 0
[threadgroup]
num_threads = 10
write_size = 400
write_blocksize = 1024
create_weight = 10
append_weight = 10
delete_weight = 1
[end]
desired_util = 0.005


[end]

#[filesystem]
# location = /mnt/test1
# clone = /mnt/test2
#[end]

[threadgroup]
num_threads = 4

FFSB version 6.0-RC2 started

benchmark time = 10
ThreadGroup 0
================
num_threads = 2

read_random = off
read_size = 40960 (40KB)
read_blocksize = 4096 (4KB)
read_skip = off
read_skipsize = 0 (0B)

write_random = off
write_size = 40960 (40KB)
fsync_file = 0
write_blocksize = 4096 (4KB)
wait time = 1

op weights
read = 0 (0.00%)
readall = 1 (10.00%)
write = 0 (0.00%)
create = 1 (10.00%)
append = 1 (10.00%)
delete = 1 (10.00%)
metaop = 0 (0.00%)
createdir = 0 (0.00%)
stat = 1 (10.00%)
writeall = 1 (10.00%)
writeall_fsync = 1 (10.00%)
open_close = 1 (10.00%)
write_fsync = 0 (0.00%)
create_fsync = 1 (10.00%)
append_fsync = 1 (10.00%)

FileSystem /tmp/smr/sfs
==========
num_dirs = 5
starting files = 0

Fileset weight:
4096 ( 4KB) -> 33 (100.00%)
directio = off
alignedio = off
bufferedio = off

aging is on
current utilization = 25.07%
desired utilization = 0.50%

Aging ThreadGroup for fs /tmp/smr/sfs
================
num_threads = 2

read_random = off
read_size = 0 (0B)
read_blocksize = 0 (0B)
read_skip = off
read_skipsize = 0 (0B)

write_random = off
write_size = 400 (400B)
fsync_file = 0
write_blocksize = 1024 (1KB)
wait time = 0

op weights
read = 0 (0.00%)
readall = 0 (0.00%)
write = 0 (0.00%)
create = 10 (47.62%)
append = 10 (47.62%)
delete = 1 (4.76%)
metaop = 0 (0.00%)
createdir = 0 (0.00%)
stat = 0 (0.00%)
writeall = 0 (0.00%)
writeall_fsync = 0 (0.00%)
open_close = 0 (0.00%)
write_fsync = 0 (0.00%)
create_fsync = 0 (0.00%)
append_fsync = 0 (0.00%)


creating new fileset /tmp/smr/sfs
aging fs /tmp/smr/sfs from 0.25 to 0.01
fs setup took 8 secs
Syncing()...2 sec
Starting Actual Benchmark At: Thu Oct 24 11:14:51 2013

Syncing()...29 sec
FFSB benchmark finished at: Thu Oct 24 11:15:31 2013

Results:
Benchmark took 40.33 sec

Total Results
===============
Op Name Transactions Trans/sec % Trans % Op Weight Throughput
=== ======== ===== === ======= ==========
readall : 2038 50.54 5.876% 10.030% 202KB/sec
create : 1187 29.44 3.423% 10.370% 118KB/sec
append : 11280 279.73 32.525% 9.855% 1.09MB/sec
delete : 1185 29.39 3.417% 10.353% NA
stat : 1143 28.34 3.296% 9.986% NA
writeall : 2006 49.75 5.784% 9.663% 199KB/sec
writeall_fsync : 2190 54.31 6.315% 9.872% 217KB/sec
open_close : 1143 28.34 3.296% 9.986% NA
create_fsync : 1139 28.25 3.284% 9.951% 113KB/sec
append_fsync : 11370 281.96 32.785% 9.934% 1.1MB/sec
-
860.03 Transactions per Second

Throughput Results
===================
Read Throughput: 202KB/sec
Write Throughput: 2.83MB/sec

System Call Latency statistics in millisecs
=====
Min Avg Max Total Calls
==== ==== ==== ============
[ open] 0.059000 0.493468 374.946991 9118
-
[ read] 0.000000 0.032523 0.438000 2038
-
[ write] 0.018000 0.088117 329.177002 29172
-
[ unlink] 0.087000 0.238140 0.923000 1185
-
[ close] 0.008000 0.039420 0.513000 9118
-
[ stat] 0.004000 0.062983 0.401000 1143
-

0.5% User Time
3.2% System Time
3.7% CPU Utilization

FFSB version 6.0-RC2 started

benchmark time = 10
ThreadGroup 0
================
num_threads = 2

read_random = off
read_size = 40960 (40KB)
read_blocksize = 4096 (4KB)
read_skip = off
read_skipsize = 0 (0B)

write_random = off
write_size = 40960 (40KB)
fsync_file = 0
write_blocksize = 4096 (4KB)
wait time = 1

op weights
read = 0 (0.00%)
readall = 1 (10.00%)
write = 0 (0.00%)
create = 1 (10.00%)
append = 1 (10.00%)
delete = 1 (10.00%)
metaop = 0 (0.00%)
createdir = 0 (0.00%)
stat = 1 (10.00%)
writeall = 1 (10.00%)
writeall_fsync = 1 (10.00%)
open_close = 1 (10.00%)
write_fsync = 0 (0.00%)
create_fsync = 1 (10.00%)
append_fsync = 1 (10.00%)

FileSystem /tmp/smr/sfs
==========
num_dirs = 5
starting files = 0

Fileset weight:
4096 ( 4KB) -> 33 (50.00%)
2048 ( 2KB) -> 33 (50.00%)
directio = off
alignedio = off
bufferedio = off

aging is on
current utilization = 25.07%
desired utilization = 0.50%

Aging ThreadGroup for fs /tmp/smr/sfs
================
num_threads = 2

read_random = off
read_size = 0 (0B)
read_blocksize = 0 (0B)
read_skip = off
read_skipsize = 0 (0B)

write_random = off
write_size = 4096 (4KB)
fsync_file = 0
write_blocksize = 1024 (1KB)
wait time = 0

op weights
read = 0 (0.00%)
readall = 0 (0.00%)
write = 0 (0.00%)
create = 10 (43.48%)
append = 10 (43.48%)
delete = 3 (13.04%)
metaop = 0 (0.00%)
createdir = 0 (0.00%)
stat = 0 (0.00%)
writeall = 0 (0.00%)
writeall_fsync = 0 (0.00%)
open_close = 0 (0.00%)
write_fsync = 0 (0.00%)
create_fsync = 0 (0.00%)
append_fsync = 0 (0.00%)


creating new fileset /tmp/smr/sfs
aging fs /tmp/smr/sfs from 0.25 to 0.01
fs setup took 8 secs
Syncing()...3 sec
Starting Actual Benchmark At: Thu Oct 24 11:20:42 2013

Syncing()...4 sec
FFSB benchmark finished at: Thu Oct 24 11:20:57 2013

Results:
Benchmark took 14.64 sec

Total Results
===============
Op Name Transactions Trans/sec % Trans % Op Weight Throughput
=== ======== ===== === ======= ==========
readall : 1505 102.82 4.566% 10.210% 484KB/sec
create : 1107 75.63 3.358% 9.888% 230KB/sec
append : 11370 776.75 34.492% 10.156% 3.03MB/sec
delete : 1180 80.61 3.580% 10.540% NA
stat : 1205 82.32 3.656% 10.764% NA
writeall : 1926 131.58 5.843% 10.147% 454KB/sec
writeall_fsync : 1969 134.51 5.973% 9.728% 465KB/sec
open_close : 1082 73.92 3.282% 9.665% NA
create_fsync : 1060 72.41 3.216% 9.469% 218KB/sec
append_fsync : 10560 721.41 32.035% 9.433% 2.82MB/sec
-
2251.96 Transactions per Second

Throughput Results
===================
Read Throughput: 484KB/sec
Write Throughput: 7.19MB/sec

System Call Latency statistics in millisecs
=====
Min Avg Max Total Calls
==== ==== ==== ============
[ open] 0.059000 0.479224 341.735992 8810
-
[ read] 0.000000 0.198503 340.315002 2041
-
[ write] 0.017000 0.077186 2.086000 27992
-
[ unlink] 0.145000 0.246849 6.195000 1180
-
[ close] 0.009000 0.039045 1.807000 8810
-
[ stat] 0.005000 0.067283 0.403000 1205
-

1.3% User Time
8.8% System Time
10.1% CPU Utilization

time = 10
directio = 0

[filesystem0]
location = /tmp/smr/sfs

num_dirs = 5

size_weight 4k 33
# size_weight 8k 21
# size_weight 16k 13
# size_weight 32k 10
# size_weight 64k 8
# size_weight 128k 5
# size_weight 256k 4
# size_weight 512k 3
# size_weight 8m 2
# size_weight 32m 1
# size_weight 1g 1

min_filesize = 1k
max_filesize = 4k

# num_files = 0
init_size = 100m
# init_size = 6GB
# init_size = 1gb
# init_util = 0.002

agefs = 1
[threadgroup]
num_threads = 2
write_size = 400
write_blocksize = 1024
create_weight = 10
append_weight = 10
delete_weight = 1
[end]
desired_util = 0.005


[end0]

#[filesystem]
# location = /mnt/test1
# clone = /mnt/test2
#[end]

[threadgroup0]
num_threads = 2

# bindfs = /mnt/test1

append_weight = 1
append_fsync_weight = 1
stat_weight = 1
# write_weight = 1
# write_fsync_weight = 1
# read_weight = 1
create_weight = 1
create_fsync_weight = 1
delete_weight = 1
readall_weight = 1
writeall_weight = 1
writeall_fsync_weight = 1
open_close_weight = 1

read_random = 0
write_random = 0

write_size = 40k
write_blocksize = 4k
read_size = 40k
read_blocksize = 4k

op_delay = 1

[stats]
enable_stats = 1
enable_range = 0

# ignore = close
# ignore = open
# ignore = lseek
# ignore = write
# ignore = read

msec_range 0.00 0.01
msec_range 0.01 0.02
msec_range 0.02 0.03
msec_range 0.03 0.04
msec_range 0.04 0.05
msec_range 0.05 0.1
msec_range 0.1 0.2
msec_range 0.2 0.5
msec_range 0.5 1.0
msec_range 1.0 2.0
msec_range 2.0 3.0
msec_range 3.0 4.0
msec_range 4.0 5.0
msec_range 5.0 10.0
msec_range 10.0 10000.0
[end]
[end0]

time = 10
directio = 0

[filesystem0]
location = /tmp/smr/sfs

num_dirs = 5

size_weight 2k 33
size_weight 4k 33

min_filesize = 1k
max_filesize = 4k

# num_files = 0
init_size = 100m
# init_size = 6GB
# init_size = 1gb
# init_util = 0.002

agefs = 1
[threadgroup]
num_threads = 2
write_size = 4096
write_blocksize = 1024
create_weight = 10
append_weight = 10
delete_weight = 3
[end]
desired_util = 0.005


[end0]

[threadgroup0]
num_threads = 2

append_weight = 1
append_fsync_weight = 1
stat_weight = 1
# write_weight = 1
# write_fsync_weight = 1
# read_weight = 1
create_weight = 1
create_fsync_weight = 1
delete_weight = 1
readall_weight = 1
writeall_weight = 1
writeall_fsync_weight = 1
open_close_weight = 1

read_random = 0
write_random = 0

write_size = 40k
write_blocksize = 4k
read_size = 40k
read_blocksize = 4k

op_delay = 1

[stats]
enable_stats = 1
enable_range = 0

# ignore = close
# ignore = open
# ignore = lseek
# ignore = write
# ignore = read

msec_range 0.00 0.01
msec_range 0.01 0.02
msec_range 0.02 0.03
msec_range 0.03 0.04
msec_range 0.04 0.05
msec_range 0.05 0.1
msec_range 0.1 0.2
msec_range 0.2 0.5
msec_range 0.5 1.0
msec_range 1.0 2.0
msec_range 2.0 3.0
msec_range 3.0 4.0
msec_range 4.0 5.0
msec_range 5.0 10.0
msec_range 10.0 10000.0
[end]
[end0]

vdbench

vdbench is a multi-threaded benchmark tool. It can generate both raw-device and file-system I/O, and it also provides a way to replay captured traffic through a plug-in.

vdbench is also available as an open-source project on SourceForge:
http://vdbench.sourceforge.net/

Usage:

./vdbench -f /tmp/parmfile

where the parmfile can be configured as:


fsd=fsd1,anchor=/home/mukuls/thesis/fsbench/test,depth=2,width=10,files=100,size=128k

fwd=default,xfersize=4k,fileio=random,fileselect=random,threads=8,stopafter=100
fwd=fwd1,fsd=fsd1,operation=read
fwd=fwd2,fsd=fsd1,operation=write

rd=rd1,fwd=fwd*,fwdrate=100,format=yes,elapsed=5,interval=1
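
For the raw-device mode mentioned above, the parmfile uses storage (sd), workload (wd), and run (rd) definitions instead of fsd/fwd entries. A minimal sketch, with all values illustrative:

# WARNING: assumes /dev/sdb is a scratch device whose contents may be destroyed
sd=sd1,lun=/dev/sdb
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=10,interval=1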

mukuls@MUKUL:~/thesis/fsbench/vdbench$ ./vdbench -f ../parmfile


Vdbench distribution: vdbench502
For documentation, see 'vdbench.pdf'.

15:21:11.438 input argument scanned: '-f../parmfile'
15:21:11.616 anchor=/tmp/smr/sfs: there will be 110 directories and a maximum of 10000 files under this anchor.
15:21:11.617 Estimated maximum size for this anchor: 39.062m
15:21:11.617
15:21:11.780 Starting slave: /home/mukuls/thesis/fsbench/vdbench/vdbench SlaveJvm -m localhost -n localhost-10-131024-15.21.11.374 -l localhost-0 -p 5570
15:21:12.418 All slaves are now connected
15:21:15.002 Starting RD=format_for_rd1
15:21:15.099 localhost-0: anchor=/tmp/smr/sfs mkdir complete.

Oct 24, 2013 .Interval. .ReqstdOps.. ...cpu%... ....read.... ...write.... ..mb/sec... mb/sec .xfer. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete... ..getattr.. ..setattr..
rate resp total sys rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp rate resp rate resp
15:21:16.280 1 0.0 0.00 51.7 2.95 0.0 0.00 0.0 0.00 0.00 0.00 0.00 0 110.0 1.58 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00
15:21:17.189 2 533.0 1.27 78.8 24.4 0.0 0.00 533.0 1.27 0.00 2.09 2.09 4103 0.0 0.00 0.0 0.00 533.0 13.93 535.0 10.19 533.0 0.86 0.0 0.00 0.0 0.00 0.0 0.00
15:21:18.239 3 576.0 1.16 74.9 29.1 0.0 0.00 576.0 1.16 0.00 2.25 2.25 4096 0.0 0.00 0.0 0.00 575.0 13.75 574.0 10.78 576.0 0.85 0.0 0.00 0.0 0.00 0.0 0.00
15:21:19.153 4 540.0 1.29 76.4 29.2 0.0 0.00 540.0 1.29 0.00 2.11 2.11 4096 0.0 0.00 0.0 0.00 542.0 14.82 540.0 11.48 541.0 0.92 0.0 0.00 0.0 0.00 0.0 0.00
15:21:20.170 5 566.0 1.24 69.2 31.5 0.0 0.00 566.0 1.24 0.00 2.21 2.21 4096 0.0 0.00 0.0 0.00 565.0 13.97 566.0 10.83 565.0 0.88 0.0 0.00 0.0 0.00 0.0 0.00
15:21:21.162 6 446.0 1.67 87.2 25.5 0.0 0.00 446.0 1.67 0.00 1.74 1.74 4096 0.0 0.00 0.0 0.00 445.0 17.88 446.0 13.72 446.0 0.97 0.0 0.00 0.0 0.00 0.0 0.00
15:21:22.257 7 565.0 1.24 78.4 29.7 0.0 0.00 565.0 1.24 0.00 2.21 2.21 4096 0.0 0.00 0.0 0.00 566.0 14.06 566.0 10.82 566.0 0.93 0.0 0.00 0.0 0.00 0.0 0.00
15:21:23.141 8 372.0 2.32 91.6 22.5 0.0 0.00 372.0 2.32 0.00 1.45 1.45 4096 0.0 0.00 0.0 0.00 371.0 21.32 373.0 15.87 372.0 1.07 0.0 0.00 0.0 0.00 0.0 0.00
15:21:24.099 9 585.0 1.24 75.3 30.0 0.0 0.00 585.0 1.24 0.00 2.29 2.29 4096 0.0 0.00 0.0 0.00 585.0 13.65 584.0 10.42 585.0 0.95 0.0 0.00 0.0 0.00 0.0 0.00
15:21:25.088 10 637.0 1.05 70.7 32.1 0.0 0.00 637.0 1.05 0.00 2.49 2.49 4096 0.0 0.00 0.0 0.00 638.0 12.42 637.0 9.66 638.0 0.81 0.0 0.00 0.0 0.00 0.0 0.00
15:21:26.087 11 519.0 1.40 81.1 27.4 0.0 0.00 519.0 1.40 0.00 2.03 2.03 4096 0.0 0.00 0.0 0.00 519.0 15.36 519.0 11.95 518.0 0.93 0.0 0.00 0.0 0.00 0.0 0.00
15:21:27.131 12 555.0 1.04 61.2 29.2 0.0 0.00 555.0 1.04 0.00 2.17 2.17 4096 0.0 0.00 0.0 0.00 555.0 14.32 555.0 11.21 555.0 1.02 0.0 0.00 0.0 0.00 0.0 0.00
15:21:28.107 13 627.0 1.08 71.4 31.4 0.0 0.00 627.0 1.08 0.00 2.45 2.45 4096 0.0 0.00 0.0 0.00 628.0 12.63 628.0 9.88 628.0 0.82 0.0 0.00 0.0 0.00 0.0 0.00
15:21:29.084 14 616.0 1.12 68.8 29.6 0.0 0.00 616.0 1.12 0.00 2.41 2.41 4096 0.0 0.00 0.0 0.00 615.0 12.93 616.0 10.09 616.0 0.86 0.0 0.00 0.0 0.00 0.0 0.00
15:21:30.094 15 644.0 1.00 69.6 32.6 0.0 0.00 644.0 1.00 0.00 2.52 2.52 4096 0.0 0.00 0.0 0.00 645.0 12.37 644.0 9.77 644.0 0.80 0.0 0.00 0.0 0.00 0.0 0.00
15:21:31.090 16 612.0 1.10 70.2 32.0 0.0 0.00 612.0 1.10 0.00 2.39 2.39 4096 0.0 0.00 0.0 0.00 612.0 12.93 613.0 10.19 612.0 0.79 0.0 0.00 0.0 0.00 0.0 0.00
15:21:32.086 17 545.0 1.34 61.9 29.5 0.0 0.00 545.0 1.34 0.00 2.13 2.13 4096 0.0 0.00 0.0 0.00 545.0 14.57 544.0 11.08 545.0 0.84 0.0 0.00 0.0 0.00 0.0 0.00
15:21:33.142 18 616.0 1.09 71.0 32.4 0.0 0.00 616.0 1.09 0.00 2.41 2.41 4096 0.0 0.00 0.0 0.00 614.0 12.92 616.0 10.14 616.0 0.85 0.0 0.00 0.0 0.00 0.0 0.00
15:21:33.940 localhost-0: anchor=/tmp/smr/sfs create complete.
15:21:34.091 19 444.0 1.44 68.8 25.3 0.0 0.00 444.0 1.44 0.00 1.73 1.73 4096 0.0 0.00 0.0 0.00 446.0 15.75 443.0 12.33 444.0 0.83 0.0 0.00 0.0 0.00 0.0 0.00
15:21:35.066 20 0.0 0.00 29.6 0.75 0.0 0.00 0.0 0.00 0.00 0.00 0.00 0 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00
15:21:35.089 avg_2-20 526.2 1.25 71.4 27.6 0.0 0.00 526.2 1.25 0.00 2.06 2.06 4096 0.0 0.00 0.0 0.00 526.3 14.16 526.3 10.95 526.3 0.88 0.0 0.00 0.0 0.00 0.0 0.00
15:21:35.347
15:21:35.348 Miscellaneous statistics:
15:21:35.348 (These statistics do not include activity between the last reported interval and shutdown.)
15:21:35.348 FILE_CREATES Files created: 10000 500/sec
15:21:35.349 DIRECTORY_CREATES Directories created: 110 5/sec
15:21:35.349 WRITE_OPENS Files opened for write activity: 10000 500/sec
15:21:35.350 DIR_BUSY_MKDIR Directory busy (mkdir): 4 0/sec
15:21:35.350 DIR_EXISTS Directory may not exist (yet): 57 2/sec
15:21:35.351 FILE_CLOSES Close requests: 10000 500/sec
15:21:35.351
15:21:38.001 Starting RD=rd1; elapsed=5; fwdrate=100; For loops: None

Oct 24, 2013 .Interval. .ReqstdOps.. ...cpu%... ....read.... ...write.... ..mb/sec... mb/sec .xfer. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete... ..getattr.. ..setattr..
rate resp total sys rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp rate resp rate resp
15:21:39.036 1 95.0 0.28 27.0 1.40 47.0 0.05 48.0 0.51 0.18 0.19 0.37 4096 0.0 0.00 0.0 0.00 0.0 0.00 8.0 2.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00
15:21:40.073 2 95.0 0.29 6.3 1.95 48.0 0.02 47.0 0.56 0.19 0.18 0.37 4096 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00
15:21:41.073 3 97.0 0.31 6.3 2.27 47.0 0.02 50.0 0.59 0.18 0.20 0.38 4096 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00
15:21:42.081 4 101.0 0.29 6.3 1.77 51.0 0.02 50.0 0.57 0.20 0.20 0.39 4096 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00
15:21:43.069 5 105.0 0.26 8.1 2.28 57.0 0.02 48.0 0.54 0.22 0.19 0.41 4096 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00
15:21:43.088 avg_2-5 99.5 0.29 6.8 2.07 50.8 0.02 48.8 0.57 0.20 0.19 0.39 4096 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00
15:21:43.509

15:21:43.509 Miscellaneous statistics:
15:21:43.510 (These statistics do not include activity between the last reported interval and shutdown.)
15:21:43.510 READ_OPENS Files opened for read activity: 4 0/sec
15:21:43.510 WRITE_OPENS Files opened for write activity: 4 0/sec
15:21:43.511
15:21:43.744 Vdbench execution completed successfully. Output directory: /home/mukuls/thesis/fsbench/vdbench/output
mukuls@MUKUL:~/thesis/fsbench/vdbench$

IOmeter

Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems, developed by Intel and released to the open-source community. It is distributed under the Intel Open Source License.

Tested IOmeter with basic workloads (512 B, 100% read, 0% random) and (4 KB, 100% read, 0% random), with modifications for the maximum number of files in the buffer cache. Further work includes presenting the results in a useful format and trying different workloads.

References
http://www.iometer.org/
http://www.linuxintro.org/wiki/Iometer
http://www.itechstorm.com/iometer-tutorial-introduction

Installation and Setup
http://greg.porter.name/wiki/HowTo:iometer
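
On Linux the measurement agent is dynamo, which connects back to an Iometer GUI running on another machine. A sketch with hypothetical host names (-i names the machine running Iometer, -m this manager's network name):

sudo ./dynamo -i iometer-gui-host -m linux-test-host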

LFStest

This test is derived from Rosenblum's LFS paper. It performs the following operations:
  1. Create a large file with sequential writes.
  2. Read the file back with sequential reads.
  3. Perform random writes to the existing file.
  4. Perform random reads from the existing file.
  5. Calculate the bandwidth of the four operations above.
The benchmark is easy to install and run (it is basically a small C program).

Source
http://fsbench.filesystems.org/bench/sprite-lfs.tar.gz

Usage
The test also provides an option to specify the application-level I/O size for reads and writes (e.g., 256 KB).

largefile [-f file_size] [-i IO_size] [-s seed] dirname (dirname would be /tmp/smr/sfs)

This creates a test file in the indicated directory; file_size gives the file size in MB, and IO_size gives the I/O transfer size in KB.
This test was useful for checking operations on files larger than the band size, and a bug has been filed to add multiple-extent support per file in SMRfs.
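
For example, a 100 MB test file with 256 KB application-level I/Os (sizes are illustrative):

largefile -f 100 -i 256 /tmp/smr/sfs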

PUMA: Purdue MapReduce Benchmarks Suite

There are 13 benchmarks in total, of which Tera-Sort, Word-Count, and Grep come from the Hadoop distribution. These three are slightly modified to take the number of reduce tasks as input from the user and to generate final job-completion time statistics.

PUMA jar file: http://web.ics.purdue.edu/~fahmad/benchmarks/hadoop-0.20.3-dev-examples.jar
  1. Word-Count
  2. Inverted-Index
  3. Term-Vector
  4. Self-Join
  5. Adjacency-List
  6. K-Means
  7. Classification
  8. Histogram-Movies
  9. Histogram-Ratings
  10. Sequence-Count
  11. Ranked-Inverted-Index
  12. Tera-Sort
  13. Grep
Ref: http://web.ics.purdue.edu/~fahmad/benchmarks.htm
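
Since Tera-Sort comes from the stock Hadoop examples, a run typically generates its input with teragen first. A sketch assuming the PUMA jar keeps the stock generator (record count and HDFS paths are illustrative):

hadoop jar /home/tejas/Downloads/hadoop-0.20.3-dev-examples.jar teragen 1000000 /user/hduser/tera-in
hadoop jar /home/tejas/Downloads/hadoop-0.20.3-dev-examples.jar terasort /user/hduser/tera-in /user/hduser/tera-out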

[hduser@iceberg: ~] $ hadoop jar /home/tejas/Downloads/hadoop-0.20.3-dev-examples.jar wordcount /user/hduser/input /user/hduser/output

Warning: $HADOOP_HOME is deprecated.

13/10/24 22:13:53 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/10/24 22:13:53 INFO input.FileInputFormat: Total input paths to process : 4
13/10/24 22:13:53 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/10/24 22:13:53 WARN snappy.LoadSnappy: Snappy native library not loaded
13/10/24 22:13:53 INFO mapred.JobClient: Running job: job_201310242209_0005
13/10/24 22:13:54 INFO mapred.JobClient: map 0% reduce 0%
13/10/24 22:14:10 INFO mapred.JobClient: map 25% reduce 0%
13/10/24 22:14:11 INFO mapred.JobClient: map 36% reduce 0%
13/10/24 22:14:14 INFO mapred.JobClient: map 38% reduce 0%
13/10/24 22:14:17 INFO mapred.JobClient: map 42% reduce 0%
13/10/24 22:14:19 INFO mapred.JobClient: map 67% reduce 0%
13/10/24 22:14:20 INFO mapred.JobClient: map 74% reduce 0%
13/10/24 22:14:23 INFO mapred.JobClient: map 75% reduce 0%
13/10/24 22:14:24 INFO mapred.JobClient: map 100% reduce 0%
13/10/24 22:14:28 INFO mapred.JobClient: map 100% reduce 33%
13/10/24 22:14:32 INFO mapred.JobClient: map 100% reduce 70%
13/10/24 22:14:35 INFO mapred.JobClient: map 100% reduce 100%
13/10/24 22:14:36 INFO mapred.JobClient: Job complete: job_201310242209_0005
13/10/24 22:14:36 INFO mapred.JobClient: Counters: 29
13/10/24 22:14:36 INFO mapred.JobClient: Job Counters
13/10/24 22:14:36 INFO mapred.JobClient: Launched reduce tasks=1
13/10/24 22:14:36 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=51806
13/10/24 22:14:36 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/10/24 22:14:36 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/10/24 22:14:36 INFO mapred.JobClient: Launched map tasks=4
13/10/24 22:14:36 INFO mapred.JobClient: Data-local map tasks=4
13/10/24 22:14:36 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=23575
13/10/24 22:14:36 INFO mapred.JobClient: File Output Format Counters
13/10/24 22:14:36 INFO mapred.JobClient: Bytes Written=75160700
13/10/24 22:14:36 INFO mapred.JobClient: FileSystemCounters
13/10/24 22:14:36 INFO mapred.JobClient: FILE_BYTES_READ=161139412
13/10/24 22:14:36 INFO mapred.JobClient: HDFS_BYTES_READ=78336286
13/10/24 22:14:36 INFO mapred.JobClient: FILE_BYTES_WRITTEN=241821207
13/10/24 22:14:36 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=75160700
13/10/24 22:14:36 INFO mapred.JobClient: File Input Format Counters
13/10/24 22:14:36 INFO mapred.JobClient: Bytes Read=78335822
13/10/24 22:14:36 INFO mapred.JobClient: Map-Reduce Framework
13/10/24 22:14:36 INFO mapred.JobClient: Map output materialized bytes=80373134
13/10/24 22:14:36 INFO mapred.JobClient: Map input records=1327552
13/10/24 22:14:36 INFO mapred.JobClient: Reduce shuffle bytes=80373134
13/10/24 22:14:36 INFO mapred.JobClient: Spilled Records=3827483
13/10/24 22:14:36 INFO mapred.JobClient: Map output bytes=85126210
13/10/24 22:14:36 INFO mapred.JobClient: Total committed heap usage (bytes)=988610560
13/10/24 22:14:36 INFO mapred.JobClient: CPU time spent (ms)=34790
13/10/24 22:14:36 INFO mapred.JobClient: Combine input records=3349872
13/10/24 22:14:36 INFO mapred.JobClient: SPLIT_RAW_BYTES=464
13/10/24 22:14:36 INFO mapred.JobClient: Reduce input records=1267843
13/10/24 22:14:36 INFO mapred.JobClient: Reduce input groups=1252264
13/10/24 22:14:36 INFO mapred.JobClient: Combine output records=2559640
13/10/24 22:14:36 INFO mapred.JobClient: Physical memory (bytes) snapshot=958275584
13/10/24 22:14:36 INFO mapred.JobClient: Reduce output records=1252264
13/10/24 22:14:36 INFO mapred.JobClient: Virtual memory (bytes) snapshot=4764823552
13/10/24 22:14:36 INFO mapred.JobClient: Map output records=2058075

The iteration took 43 seconds.

aio-stress

Test configuration:
Tests were run locally using a 4 GB image file formatted with Btrfs (the test crashes when run on a raw partition).
Tests were run for file sizes of 10 MB, 50 MB, 100 MB, 200 MB, and 400 MB.

Commandline: aio-stress -s <filesize> a1 a2 -u
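
For example, the 100 MB case with the defaults shown in the output made explicit (a sketch; -s is the file size in MB, -r the record size in KB, -d the I/O depth, -t the thread count, and -u unlinks the files afterwards):

aio-stress -s 100 -r 64 -d 64 -t 1 a1 a2 -u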

File size: 10MB

===========

file size 10MB, record size 64KB, depth 64, ios per iteration 8
max io_submit 16, buffer alignment set to 4KB
threads 1 files 2 contexts 1 context offset 2MB verification off
write on a1 (209.63 MB/s) 10.00 MB in 0.05s
write on a2 (209.38 MB/s) 10.00 MB in 0.05s
thread 0 write totals (260.73 MB/s) 20.00 MB in 0.08s
read on a1 (2409.64 MB/s) 10.00 MB in 0.00s
read on a2 (2396.93 MB/s) 10.00 MB in 0.00s
thread 0 read totals (4507.55 MB/s) 20.00 MB in 0.00s
random write on a1 (380.53 MB/s) 10.00 MB in 0.03s
random write on a2 (379.95 MB/s) 10.00 MB in 0.03s
thread 0 random write totals (412.84 MB/s) 20.00 MB in 0.05s
random read on a1 (1583.28 MB/s) 10.00 MB in 0.01s
random read on a2 (1578.78 MB/s) 10.00 MB in 0.01s
thread 0 random read totals (3095.02 MB/s) 20.00 MB in 0.01s
Running single thread version

File size: 50MB

===========

file size 50MB, record size 64KB, depth 64, ios per iteration 8
max io_submit 16, buffer alignment set to 4KB
threads 1 files 2 contexts 1 context offset 2MB verification off
write on a1 (223.43 MB/s) 50.00 MB in 0.22s
write on a2 (223.36 MB/s) 50.00 MB in 0.22s
thread 0 write totals (329.13 MB/s) 100.00 MB in 0.30s
read on a1 (1725.51 MB/s) 50.00 MB in 0.03s
read on a2 (1724.20 MB/s) 50.00 MB in 0.03s
thread 0 read totals (3430.06 MB/s) 100.00 MB in 0.03s
random write on a1 (293.10 MB/s) 50.00 MB in 0.17s
random write on a2 (293.03 MB/s) 50.00 MB in 0.17s
thread 0 random write totals (394.91 MB/s) 100.00 MB in 0.25s
random read on a1 (2776.54 MB/s) 50.00 MB in 0.02s
random read on a2 (2773.16 MB/s) 50.00 MB in 0.02s
thread 0 random read totals (5505.09 MB/s) 100.00 MB in 0.02s
Running single thread version

File size: 100MB

=============

file size 100MB, record size 64KB, depth 64, ios per iteration 8
max io_submit 16, buffer alignment set to 4KB
threads 1 files 2 contexts 1 context offset 2MB verification off
write on a1 (246.02 MB/s) 100.00 MB in 0.41s
write on a2 (245.98 MB/s) 100.00 MB in 0.41s
thread 0 write totals (363.83 MB/s) 200.00 MB in 0.55s
read on a1 (2221.83 MB/s) 100.00 MB in 0.05s
read on a2 (2220.64 MB/s) 100.00 MB in 0.05s
thread 0 read totals (4412.28 MB/s) 200.00 MB in 0.05s
random write on a1 (296.46 MB/s) 100.00 MB in 0.34s
random write on a2 (296.45 MB/s) 100.00 MB in 0.34s
thread 0 random write totals (420.21 MB/s) 200.00 MB in 0.48s
random read on a1 (2608.51 MB/s) 100.00 MB in 0.04s
random read on a2 (2607.15 MB/s) 100.00 MB in 0.04s
thread 0 random read totals (5202.64 MB/s) 200.00 MB in 0.04s
Running single thread version

File size: 200MB

=============

file size 200MB, record size 64KB, depth 64, ios per iteration 8
max io_submit 16, buffer alignment set to 4KB
threads 1 files 2 contexts 1 context offset 2MB verification off
write on a1 (272.05 MB/s) 200.00 MB in 0.74s
write on a2 (272.03 MB/s) 200.00 MB in 0.74s
thread 0 write totals (386.49 MB/s) 400.00 MB in 1.03s
read on a1 (2472.68 MB/s) 200.00 MB in 0.08s
read on a2 (2471.94 MB/s) 200.00 MB in 0.08s
thread 0 read totals (4923.99 MB/s) 400.00 MB in 0.08s
random write on a1 (294.27 MB/s) 200.00 MB in 0.68s
random write on a2 (294.25 MB/s) 200.00 MB in 0.68s
thread 0 random write totals (388.89 MB/s) 400.00 MB in 1.03s
random read on a1 (2686.62 MB/s) 200.00 MB in 0.07s
random read on a2 (2685.90 MB/s) 200.00 MB in 0.07s
thread 0 random read totals (5358.05 MB/s) 400.00 MB in 0.07s
Running single thread version

File size: 400MB

=============

file size 400MB, record size 64KB, depth 64, ios per iteration 8
max io_submit 16, buffer alignment set to 4KB
threads 1 files 2 contexts 1 context offset 2MB verification off
write on a1 (283.86 MB/s) 400.00 MB in 1.41s
write on a2 (283.85 MB/s) 400.00 MB in 1.41s
thread 0 write totals (86.62 MB/s) 800.00 MB in 9.24s
read on a1 (1600.06 MB/s) 400.00 MB in 0.25s
read on a2 (1599.90 MB/s) 400.00 MB in 0.25s
thread 0 read totals (3198.81 MB/s) 800.00 MB in 0.25s
random write on a1 (467.55 MB/s) 400.00 MB in 0.86s
random write on a2 (314.20 MB/s) 400.00 MB in 1.27s
thread 0 random write totals (58.13 MB/s) 800.00 MB in 13.76s
random read on a1 (2461.27 MB/s) 400.00 MB in 0.16s
random read on a2 (2460.95 MB/s) 400.00 MB in 0.16s
thread 0 random read totals (4920.75 MB/s) 800.00 MB in 0.16s
Running single thread version

-- MukulSingh - 25 Oct 2013