I've noticed a couple of threads using CrystalDiskMark to benchmark drives. I hadn't heard of this tool before and was curious about its implementation.
Interestingly, the threads I saw were posting digital photographs of the monitor where the application was running. It's easy enough to capture the window with Alt+PrintScreen, but even a screenshot isn't the clearest representation of the results. The program actually features a "Copy" command in its "Edit" menu that puts the results on the clipboard formatted as plain text.
I ran the program against a single 300 GB Hewlett-Packard 10K RPM SAS drive (HP part 492620-B21) attached to one of the servers I have at work. All of the tests I did were on Win64; in this case, the server is an HP ProLiant rig with a StorageArray 400 backplane running Windows Server 2003 R2 x64. It has 32 GB of memory.
Code:
--------------------------------------------------
CrystalDiskMark 2.2 (C) 2007-2008 hiyohiyo
Crystal Dew World : http://crystalmark.info/
--------------------------------------------------
Sequential Read : 536.489 MB/s
Sequential Write : 315.936 MB/s
Random Read 512KB : 522.720 MB/s
Random Write 512KB : 310.560 MB/s
Random Read 4KB : 91.374 MB/s
Random Write 4KB : 73.389 MB/s
Test Size : 100 MB
Date : 2009/07/09 7:46:08
I spent some time reviewing the source code and found some interesting anomalies. One of the most important is that the program uses FILE_FLAG_NO_BUFFERING when creating the file handle that it reads from and writes to, but does not provide a sector-aligned buffer to the ReadFile() and WriteFile() calls subsequently made against that handle. This means the driver must still do some buffering of its own, which enables caching, which will alter the results.
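For reference, here's a minimal sketch (not the tool's actual code; the file name and the 64 KB transfer size are placeholders I picked) of how the handle and buffer need to be set up for FILE_FLAG_NO_BUFFERING to actually bypass the cache:
Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    // FILE_FLAG_NO_BUFFERING only bypasses the cache when each transfer
    // uses a sector-aligned buffer, a sector-aligned file offset, and a
    // length that is a multiple of the sector size.
    HANDLE hFile = CreateFileW(L"testfile.bin",
                               GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING,
                               FILE_FLAG_NO_BUFFERING, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;

    // VirtualAlloc returns page-aligned memory, which satisfies the
    // sector-alignment requirement; a plain malloc() buffer might not.
    const DWORD cbChunk = 64 * 1024;
    void *buffer = VirtualAlloc(NULL, cbChunk, MEM_COMMIT | MEM_RESERVE,
                                PAGE_READWRITE);
    if (buffer == NULL) { CloseHandle(hFile); return 1; }

    DWORD cbRead = 0;
    if (ReadFile(hFile, buffer, cbChunk, &cbRead, NULL))
        printf("read %lu bytes directly from the device\n", cbRead);

    VirtualFree(buffer, 0, MEM_RELEASE);
    CloseHandle(hFile);
    return 0;
}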
I also ran the program against a 64 GB Intel X25-E drive on my desktop machine. That rig is a Core i7 945 box running Windows Vista x64 with an EVGA SLI motherboard and 12 GB of memory, with the drive attached to a SATA port on the motherboard. These are the results:
Code:
Intel X25-E SSD (64 gigs)
--------------------------------------------------
CrystalDiskMark 2.2 (C) 2007-2008 hiyohiyo
Crystal Dew World : http://crystalmark.info/
--------------------------------------------------
Sequential Read : 230.891 MB/s
Sequential Write : 77.822 MB/s
Random Read 512KB : 163.622 MB/s
Random Write 512KB : 78.434 MB/s
Random Read 4KB : 17.134 MB/s
Random Write 4KB : 42.133 MB/s
Test Size : 100 MB
Date : 2009/07/09 8:11:07
The SAS drive handily outperforms the SSD in this test. Right now the SSD costs about the same as the spinning HP drive, which makes the HP drive about five times cheaper per gigabyte (300 GB versus 64 GB at the same price works out to roughly 4.7 times the capacity per dollar). By these numbers it also costs about half as much per I/O operation per second.
The HP server has a Fibre Channel card connected to an external RAID chassis which hosts sixteen Seagate ST373455SS SAS drives: fourteen of them form a RAID 10 array, and the other two are hot spares. I was surprised at the poor performance of the array:
Code:
--------------------------------------------------
CrystalDiskMark 2.2 (C) 2007-2008 hiyohiyo
Crystal Dew World : http://crystalmark.info/
--------------------------------------------------
Sequential Read : 189.025 MB/s
Sequential Write : 100.712 MB/s
Random Read 512KB : 183.226 MB/s
Random Write 512KB : 101.221 MB/s
Random Read 4KB : 26.228 MB/s
Random Write 4KB : 10.310 MB/s
Test Size : 100 MB
Date : 2009/07/09 7:50:49
I was able to reproduce the poor performance of this setup using a different machine with a similarly configured array. While doing so, I was surprised to find that the access pattern of the test seems to be off: when the test runs, I see only five drive activity lights in the array flickering. I would have expected the accesses to be spread across all the drives in the array, making all the lights flicker. (It's easy to verify that this normally happens by copying a file to the array--all the drives are pretty equally active.)
The stripe size on the array is 256 KB, so my theory is that the tool doesn't generate random offsets that span enough of the test file to touch all the drives in the broadly striped array.
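I haven't traced exactly how the tool picks its offsets, but if it leans on something like the C runtime's rand() (RAND_MAX is only 32767 in MSVC), the reachable offsets can cluster badly. Here's a sketch of how I'd expect block-aligned random offsets to be generated so that the whole file, and therefore every stripe, is a candidate; the sizes are just the 100 MB test size and 512 KB access size from the runs above:
Code:
#include <cstdint>
#include <cstdio>
#include <random>

int main()
{
    const uint64_t fileSize  = 100ull * 1024 * 1024;  // 100 MB test file
    const uint64_t blockSize = 512ull * 1024;         // 512 KB accesses
    const uint64_t blocks    = fileSize / blockSize;  // 200 candidate slots

    // A 64-bit generator with a uniform distribution over every
    // block-aligned offset; with a 256 KB stripe unit, all slots (and so
    // all the spindles backing them) are equally likely to be hit.
    std::mt19937_64 rng(std::random_device{}());
    std::uniform_int_distribution<uint64_t> pick(0, blocks - 1);

    for (int i = 0; i < 8; ++i) {
        uint64_t offset = pick(rng) * blockSize;
        std::printf("access at offset %llu\n",
                    (unsigned long long)offset);
    }
    return 0;
}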
I'm also a bit concerned with the way the tool initializes its test file, since it seems to create it very quickly. Since most people (me, too) run the tests as administrator, it's possible that the file is being created and extended with no fill, so that the first writes to the file are what actually cause the fill. This leads to another concern with the tests: the order of the tests matters, because nothing is done to flush the cache or reset the file between them.
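To illustrate what I mean, here's a sketch of the two ways the test file could come into being (the file name is made up). SetEndOfFile() returns almost instantly, but NTFS then zero-fills up to the new valid data length the first time the region is written, so that cost lands inside the first timed write pass; and if the tool were to use SetFileValidData(), which needs a privilege administrators hold, even the lazy fill is skipped. Writing real data up front and flushing avoids the problem:
Code:
#include <windows.h>

// Fast but deceptive: the call returns immediately, and NTFS defers the
// zero-fill until something writes past the valid data length, so the
// benchmark's first write pass pays that cost.
static BOOL ExtendWithoutFill(HANDLE hFile, LONGLONG size)
{
    LARGE_INTEGER li;
    li.QuadPart = size;
    return SetFilePointerEx(hFile, li, NULL, FILE_BEGIN)
        && SetEndOfFile(hFile);
}

// Honest initialization: write the whole file and flush it, so the timed
// runs that follow measure only the device.
static BOOL FillWithData(HANDLE hFile, LONGLONG size)
{
    static char chunk[64 * 1024];  // zero-filled 64 KB write unit
    for (LONGLONG done = 0; done < size; done += sizeof(chunk)) {
        DWORD cb = 0;
        if (!WriteFile(hFile, chunk, sizeof(chunk), &cb, NULL))
            return FALSE;
    }
    return FlushFileBuffers(hFile);
}

int main(void)
{
    const LONGLONG testSize = 100ll * 1024 * 1024;  // 100 MB, like the tool
    HANDLE hFile = CreateFileW(L"benchfile.bin",
                               GENERIC_READ | GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;

    // Swap in ExtendWithoutFill(hFile, testSize) to see the instant version.
    BOOL ok = FillWithData(hFile, testSize);
    CloseHandle(hFile);
    return ok ? 0 : 1;
}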
Has anyone else investigated this benchmark?