Intel X25-M G2 under Linux - anyone else?

graysky

Just installed the 80 GB version of the X25-M (G2). Seems nice and snappy. Here are a few benchmarks on my machine. Anyone else have this SSD and care to post?

Code:
# hdparm -Tt /dev/sdb
/dev/sdb:
 Timing cached reads:   15644 MB in  1.99 seconds = 7845.48 MB/sec
 Timing buffered disk reads:  788 MB in  3.00 seconds = 262.52 MB/sec

Here is the read-only benchmark from "Disk Utility 2.30.1", which is installed under Applications > System Tools > Disk Utility:
 
Instead of measuring sequential speeds with hdparm, try dd:

Code:
$ dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 13.3236 s, 80.6 MB/s

# echo 3 > /proc/sys/vm/drop_caches

$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.96597 s, 216 MB/s

$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.196152 s, 5.5 GB/s

The last result just shows the speed of the buffer cache. It is important to clear the buffer cache (using the drop_caches command above) if you want to measure read speed directly from the device.
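One subtlety worth noting: drop_caches only discards clean cache pages, so it is worth running sync first to flush any dirty pages. A typical sequence (as root):

Code:
# sync                                # write out dirty pages first
# echo 3 > /proc/sys/vm/drop_caches   # 1 = page cache, 2 = dentries/inodes, 3 = both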
 
Nice, thanks! I added your suggestion to a wiki article I wrote.

Code:
$ dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 13.3236 s, 80.6 MB/s

# echo 3 > /proc/sys/vm/drop_caches
$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.00297 s, 268 MB/s

$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.208853 s, 5.1 GB/s

$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.200393 s, 5.4 GB/s

$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.169713 s, 6.3 GB/s
 
Interestingly, writing to my Seagate 7200.12 drive is ~33 % faster, and writing to my DDR2 @ 1,066 MHz (via tmpfs on /dev/shm) is about 24 times faster (1.9 GB/s vs. 80.6 MB/s) :p

HDD:
Code:
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.95836 s, 108 MB/s

/dev/shm:
Code:
$ dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.566861 s, 1.9 GB/s
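If you want to reproduce the tmpfs comparison and your system doesn't expose /dev/shm, you can mount a tmpfs anywhere. A minimal sketch (the mount point and size here are arbitrary examples):

Code:
# mkdir -p /mnt/ramdisk
# mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk    # RAM-backed filesystem
# dd if=/dev/zero of=/mnt/ramdisk/tempfile bs=1M count=1024 conv=fdatasync,notrunc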
 
Is the filesystem on the SSD properly aligned to 4k blocks?
On the other hand, X25-M SSDs are known not for high sequential speed but for fast random access.
 
Most people think MB/s equals performance and that SSDs are just twice as fast as HDDs, while in fact they are hundreds to thousands of times faster than HDDs under heavy random I/O workloads. Sequential write speeds can even be lower than HDDs, especially in Intel's case.

So if you want to benchmark your SSD, use random I/O benchmarks. raidtest and rawio are the ones I prefer.
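On Linux specifically, fio can generate comparable random-I/O workloads. A rough sketch with example parameters (the target device is a placeholder, and --readonly ensures nothing on it gets overwritten):

Code:
# fio --name=randread --filename=/dev/sdb --readonly --direct=1 \
      --ioengine=libaio --rw=randread --bs=4k --iodepth=32 \
      --runtime=30 --time_based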
 
I'm not sure where to get rawio and raidtest for Linux.

I like seeker_baryluk.c and iozone for Linux file tests. seeker_baryluk.c is nice for quick random-read access-time tests with multiple threads, but it can only do random reads. For random write tests I like iozone, though there is a complication: it either needs to use direct I/O, or you must write a file to your disk that is several times larger than your RAM, which takes a long time to run. Here's an iozone 4KB random I/O test with direct I/O:

Code:
# iozone -s16M -r4K -I -i0 -i2
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.308 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

	Run began: Thu Jul  1 10:52:42 2010

	File size set to 16384 KB
	Record Size 4 KB
	O_DIRECT feature enabled
	Command line used: iozone -s16M -r4K -I -i0 -i2
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                            random  random
              KB  reclen   write rewrite    read    reread    read   write
           16384       4   64194   70771                     20565   67917
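To put the two random columns in IOPS terms: 20565 KB/s ÷ 4 KB ≈ 5,100 random reads/sec, and 67917 KB/s ÷ 4 KB ≈ 17,000 random writes/sec, at queue depth 1.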
 
The X25-M really excels at random I/O with high queue depths. Here is an iozone 4KB random I/O run with QD=32. Note the 156 MB/s 4KB random read result. If you try that on an HDD, it will probably be about 100 times slower.

Code:
# iozone -s16M -r4K -I -t32 -T -i0 -i2
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.308 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

	Run began: Thu Jul  1 11:13:13 2010

	File size set to 16384 KB
	Record Size 4 KB
	O_DIRECT feature enabled
	Command line used: iozone -s16M -r4K -I -t32 -T -i0 -i2
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 32 threads
	Each thread writes a 16384 Kbyte file in 4 Kbyte records

	Children see throughput for 32 initial writers 	=   79705.48 KB/sec
	Parent sees throughput for 32 initial writers 	=   77252.05 KB/sec
	Min throughput per thread 			=    2471.61 KB/sec 
	Max throughput per thread 			=    2508.31 KB/sec
	Avg throughput per thread 			=    2490.80 KB/sec
	Min xfer 					=   16168.00 KB

	Children see throughput for 32 rewriters 	=   72401.53 KB/sec
	Parent sees throughput for 32 rewriters 	=   72367.07 KB/sec
	Min throughput per thread 			=    2242.25 KB/sec 
	Max throughput per thread 			=    2273.83 KB/sec
	Avg throughput per thread 			=    2262.55 KB/sec
	Min xfer 					=   16160.00 KB

	Children see throughput for 32 random readers 	=  156661.73 KB/sec
	Parent sees throughput for 32 random readers 	=  156528.60 KB/sec
	Min throughput per thread 			=    4853.77 KB/sec 
	Max throughput per thread 			=    4938.54 KB/sec
	Avg throughput per thread 			=    4895.68 KB/sec
	Min xfer 					=   16104.00 KB

	Children see throughput for 32 random writers 	=   86205.72 KB/sec
	Parent sees throughput for 32 random writers 	=   84513.98 KB/sec
	Min throughput per thread 			=    2560.86 KB/sec 
	Max throughput per thread 			=    2703.56 KB/sec
	Avg throughput per thread 			=    2693.93 KB/sec
	Min xfer 					=   15520.00 KB
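For scale: 156661 KB/s of 4 KB reads works out to roughly 39,000 random read IOPS, whereas a 7200 RPM disk typically manages on the order of 100-200.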
 
Here are results for seeker_baryluk.c for 10 threads and 1 thread:

Code:
# ./seeker_baryluk /dev/sdb1 10
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdb1 [149841920 blocks, 76719063040 bytes, 71 GB, 73165 MB, 76 GiB, 76719 MiB]
[512 logical sector size, 512 physical sector size]
[10 threads]
Wait 30 seconds..............................
Results: 27576 seeks/second, 0.036 ms random access time (50979 < offsets < 76718915995)

# echo 3 > /proc/sys/vm/drop_caches

# ./seeker_baryluk /dev/sdb1 1
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdb1 [149841920 blocks, 76719063040 bytes, 71 GB, 73165 MB, 76 GiB, 76719 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 9517 seeks/second, 0.105 ms random access time (38333 < offsets < 76718682210)
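In case anyone wants to run it themselves: seeker_baryluk.c is a small pthreads program, so it should build with something like this (assuming gcc):

Code:
$ gcc -O2 -o seeker_baryluk seeker_baryluk.c -lpthread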
 
It is aligned:
Code:
# fdisk -l /dev/sdb

Disk /dev/sdb: 80.0 GB, 80026361856 bytes
32 heads, 32 sectors/track, 152638 cylinders
Units = cylinders of 1024 * 512 = 524288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               2       24578    12583424   83  Linux
/dev/sdb2           24579      152638    65566720   83  Linux
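With 32 heads x 32 sectors/track, a cylinder is 1024 sectors, so a partition starting at cylinder 2 begins at sector 1024, i.e. a 512 KiB offset, which is a multiple of both 4 KiB and the commonly cited 512 KiB erase block. If you'd rather skip the cylinder math, you can read the start sector directly; the byte offset is just start x 512:

Code:
$ cat /sys/block/sdb/sdb1/start    # start sector; multiply by 512 for the byte offset
# fdisk -lu /dev/sdb               # or list partitions in sectors instead of cylinders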
 
What's the best SSD for < $250 for a database?

Is the Intel still awesome due to its access time?

The database will be for reports on existing data, so no inserts, updates, etc., just queries that select data.
 
Is the filesystem on the SSD properly aligned to 4k blocks?

4k? Shouldn't the X25-M gen2 be aligned to 512K blocks (the erase block size)?
I aligned both the partitions and the filesystem (ext4).
I also use a kernel >= 2.6.33 for TRIM support (you need the discard mount option in fstab), plus a modified version of the wiper.sh script that works with the X25-M gen2 on the latest firmware.
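For reference, the discard option just goes into the mount options. An example fstab line (the device, mount point, and options other than discard are placeholders):

Code:
# /etc/fstab
/dev/sdb2   /home   ext4   defaults,noatime,discard   0   2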
 