The following are three benchmarks I have worked on, along with their results.
[Andrew Benchmark]
[Bonnie Benchmark]
[Postmark Benchmark]
Andrew Benchmark
There are a number of phases to the benchmark, each exercising a different aspect of the file system:
- Phase I: recursively creates many subdirectories.
- Phase II: stresses the file system's ability to transfer large amounts of data by copying files.
- Phase III: recursively examines the status of every file in the test directory, without actually examining the data in the files (a sketch of this kind of walk follows the list).
- Phase IV: examines every byte of every file in the test directory.
- Phase V: is computationally intensive.
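Phase III, for instance, boils down to a recursive walk that touches only file metadata. A minimal sketch of such a walk (my own illustration, not the benchmark's actual source) looks like this:

    /* Recursively stat() every file under a directory without ever
     * reading file data, as Phase III of the benchmark does. */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>
    #include <sys/stat.h>

    static void walk(const char *dir)
    {
        DIR *d = opendir(dir);
        struct dirent *e;
        char path[4096];
        struct stat st;

        if (!d)
            return;
        while ((e = readdir(d)) != NULL) {
            if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
                continue;
            snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
            if (lstat(path, &st) != 0)   /* status only; data untouched */
                continue;
            if (S_ISDIR(st.st_mode))
                walk(path);              /* recurse into subdirectories */
        }
        closedir(d);
    }

    int main(int argc, char **argv)
    {
        walk(argc > 1 ? argv[1] : ".");
        return 0;
    }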
The output of the benchmark contains embedded timestamps after each phase, and these are used to calculate the time taken for the individual phases and the overall benchmark time. In order to accomplish this, I had to write a simple C program that replaced the "date" command used by the original makefile. I made this change to get a more precise measurement of the times, since the date command is only accurate to the second, which is too coarse for the speed at which this benchmark ran. Here is the gettime.c file I used. Here is also a copy of the modified makefile that I used.
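For reference, a sub-second replacement for date only needs a few lines of C. The sketch below gives the general idea, assuming gettimeofday() is available; the actual gettime.c linked above may differ in its details:

    /* gettime.c (sketch): print the current time with microsecond
     * resolution, so the makefile can bracket each phase with a
     * timestamp finer than the one-second resolution of "date". */
    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval tv;

        if (gettimeofday(&tv, NULL) != 0) {
            perror("gettimeofday");
            return 1;
        }
        printf("%ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
        return 0;
    }

In the modified makefile, each phase is then bracketed by calls to gettime instead of date, and the difference between consecutive timestamps gives the time for that phase.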
I also ran into problems with Phase V of the original benchmark: its files did not compile correctly. I therefore changed Phase V in the makefile so that, instead of compiling those files, it compiled an OpenSSH 2.5.2p2 client.
Bonnie Benchmark
Bonnie performs a series of tests on a file of known size. For each test, Bonnie reports the bytes processed per elapsed second, per CPU second, and the % CPU usage (user and system).
Another variable I ran this benchmark under was the rsize and wsize options used when the server directory was mounted. Each of these tests was run with a file size of 300 MB.
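To make those figures concrete, here is a sketch of how bytes per elapsed second, bytes per CPU second, and the %CPU figure can be derived from raw timings. The structure is my own illustration, not Bonnie's source, and it assumes gettimeofday() and getrusage() are available:

    /* Derive Bonnie-style throughput figures from wall-clock and
     * CPU-usage measurements taken around a test. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    static double tv_secs(struct timeval tv)
    {
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        struct timeval start, end;
        struct rusage ru;
        double bytes = 300.0 * 1024 * 1024;   /* the 300 MB file size used above */

        gettimeofday(&start, NULL);
        usleep(100000);                       /* stand-in for one Bonnie test */
        gettimeofday(&end, NULL);
        getrusage(RUSAGE_SELF, &ru);

        double elapsed = tv_secs(end) - tv_secs(start);
        double cpu = tv_secs(ru.ru_utime) + tv_secs(ru.ru_stime);

        /* bytes per elapsed second, bytes per CPU second, and %CPU */
        printf("%.0f bytes/sec, %.0f bytes/CPU-sec, %.1f%% CPU\n",
               elapsed > 0 ? bytes / elapsed : 0,
               cpu > 0 ? bytes / cpu : 0,
               elapsed > 0 ? 100.0 * cpu / elapsed : 0);
        return 0;
    }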
For more information about Bonnie and its source code, click here. More tests and analysis that were run with this test harness can be found here.
Postmark Benchmark
Postmark is used to simulate heavy small-file system loads with a minimal amount of software and configuration, which in turn provides complete reproducibility. The benchmark was created to simulate a mail server scenario with many small files that are constantly being created, read, written, or deleted. It measures the transaction rates on these files for a workload that resembles a large Internet electronic mail server.
When Postmark is run, a large pool of files of random sizes is created, bounded by a maximum file size that can be changed easily. Once created, the files are manipulated by reads, creates, deletes, or appends. The files and the transaction types are all chosen at random in order to minimize the influence of file system caching. The program is command-line driven and does not fork.
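As an illustration of that design, here is a minimal sketch of such a transaction loop. The pool size, file-size bounds, and file names are my own placeholders, not Postmark's defaults:

    /* Postmark-style workload sketch: build a pool of random-sized
     * files, then run transactions that read, append, delete, or
     * (re)create files chosen at random. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define POOL  100           /* files in the pool */
    #define LOW   500           /* smallest file size in bytes */
    #define HIGH  (10 * 1024)   /* largest file size (the easily changed bound) */
    #define TRANS 1000          /* transactions to perform */

    static char buf[HIGH];

    static void make_file(int i)
    {
        char name[32];
        int size = LOW + rand() % (HIGH - LOW + 1);   /* random size in bounds */

        snprintf(name, sizeof name, "pmfile.%d", i);
        FILE *f = fopen(name, "w");
        if (f) { fwrite(buf, 1, size, f); fclose(f); }
    }

    int main(void)
    {
        srand(1);                      /* fixed seed keeps runs reproducible */
        memset(buf, 'x', sizeof buf);

        for (int i = 0; i < POOL; i++)      /* create the initial pool */
            make_file(i);

        for (int t = 0; t < TRANS; t++) {   /* the transaction loop */
            char name[32];
            int i = rand() % POOL;          /* file chosen at random */
            snprintf(name, sizeof name, "pmfile.%d", i);

            switch (rand() % 4) {           /* transaction type, also random */
            case 0: {                       /* read the whole file */
                FILE *f = fopen(name, "r");
                if (f) { while (fread(buf, 1, sizeof buf, f) > 0) ; fclose(f); }
                break;
            }
            case 1: {                       /* append some bytes */
                FILE *f = fopen(name, "a");
                if (f) { fwrite(buf, 1, 512, f); fclose(f); }
                break;
            }
            case 2: remove(name); break;    /* delete */
            case 3: make_file(i); break;    /* create a fresh file */
            }
        }
        return 0;
    }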
For more information about Postmark and its source code, click here.