Ask an Intel Solid State Drive Engineer @ [H]

now if only decently sized SSDs didn't cost $500.

My first question to the kind engineer would've been "when can I buy an intel SSD for under $300?"

Of course, my second would've been "can you give me one?":p
 
Can't wait for these to come down in price :p

~AU$1200 for a 32GB X25-E in Australia :(

Has anyone seen some benchmarks with SSD's that focus on games?
 
Nice article, thanks.

My only concern right now is that the $160 I put into 2x WD 640s might not stay a good investment for long if the competition and quality of these SSDs continue to improve.
 
Has anyone seen some benchmarks with SSD's that focus on games?

The difference for games is faster load times for maps and such.
SSDs do nothing for FPS unless the game swaps textures to disk; in that case the difference might be less noticeable lag, for example in MMO games, since SSDs are fast at such tasks.
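If you want to measure it yourself, here's a rough Python sketch for timing a level-file read; the paths are placeholders for two copies of the same file (one on the HDD, one on the SSD), and the OS file cache will skew repeat runs, so reboot or use fresh files between tests.

[code]
# Time a start-to-finish read of a game data file and report
# throughput. Paths below are placeholders -- point them at a
# copy of the same file on each drive.
import time

def time_read(path, chunk=1024 * 1024):
    start = time.time()
    total = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.time() - start
    print("%s: %.0f MB in %.2fs (%.1f MB/s)"
          % (path, total / 1e6, elapsed, total / 1e6 / elapsed))

time_read("D:/game/level1.pak")  # HDD copy
time_read("E:/game/level1.pak")  # SSD copy
[/code]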
 
I would be very interested to know how the Intel SSDs stack up against these bad boys...


http://managedflash.com/index.htm

I recommend these for Enterprise DB's in my line of work.. since they come in under the cost of SAS/SCSI 15k drives.. :)

Desmack.
 
Why couldn't the SATA controller just reserve 64MB of your system RAM (probably fast enough to replace the on-board cache) and use that as a buffer to alleviate the stuttering issue?
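Conceptually that's just a write-back cache. A toy sketch of the idea in Python (purely illustrative; real controllers do this in firmware, and nothing here reflects how any actual SATA controller is implemented):

[code]
# Toy write-back buffer: absorb small random writes in RAM, then
# flush them to the device in sorted batches. Illustrative only.
class WriteBuffer:
    def __init__(self, device, capacity=64 * 1024 * 1024):
        self.device = device      # file-like handle for the drive
        self.capacity = capacity  # e.g. 64MB borrowed from RAM
        self.pending = {}         # offset -> data
        self.size = 0

    def write(self, offset, data):
        self.pending[offset] = data
        self.size += len(data)
        if self.size >= self.capacity:
            self.flush()

    def flush(self):
        # Flushing in offset order turns scattered small writes
        # into a more sequential pattern, which flash prefers.
        for offset in sorted(self.pending):
            self.device.seek(offset)
            self.device.write(self.pending[offset])
        self.pending.clear()
        self.size = 0
[/code]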
 
I'm a DRAM guy, not a flash engineer, but I do have a little peripheral (no pun intended) involvement with SSD development for a not-Intel company. That said, I think that way too much is being made out of the issue of wear leveling and long-term flash reliability.

I don't know how Intel does it, but where I work, we've essentially gone nuts creating tests to kill an SSD in as many ways as we can think of, and one of those ways, obviously, is to write to it until cells start to die. And, yes, eventually, they do die, even with wear leveling. But not quickly. Not really even mediumly. There's really nothing that keeps an SSD from lasting as long as a regular old hard drive, if not longer.
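To put a rough number on it, here's a toy wear-leveling model (the block count and endurance figures are invented for illustration and have nothing to do with our actual firmware):

[code]
# Toy wear-leveling model: always write to the least-worn block.
# With 1,000 blocks at an assumed 10,000 P/E cycles each, even
# hammering one logical address takes ~10 million block writes
# before the first cell wears out.
import heapq

CYCLES = 10000  # assumed P/E endurance per block
BLOCKS = 1000   # assumed number of physical blocks

heap = [(0, b) for b in range(BLOCKS)]  # (erase_count, block_id)
writes = 0
while True:
    wear, block = heapq.heappop(heap)
    if wear >= CYCLES:
        break  # least-worn block is dead: drive is worn out
    heapq.heappush(heap, (wear + 1, block))
    writes += 1
print("block writes before first dead block:", writes)
[/code]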

And that OCZ drive looks nice...really nice. Now I just need to stumble across a briefcase stuffed with cash.
 
Nice article, but it doesn't answer any of the questions in my head. It was more a gathering of already known facts.

1. Does the SATA-III protocol limit the potential performance of SSDs, since SATA was originally designed with HDDs in mind?

2. Would PCIe be a better fit for SSDs, seeing as SSDs could easily break the 600MB/s barrier that SATA-III proposes but which is still not available? PCIe 2.0 x2 should offer 1GB/s, and power delivered within the slot would eliminate the need for a power cable. However, Gen 1 PCIe reportedly had problems with small-file transfers. Is this still an issue?

3. As someone has pointed out already: is using system memory for SSD cache really that hard? I suppose the limitation would lie in SATA, but surely a PCIe interconnect could handle the response time required?

4. Is the parallelism of SSDs limited by the space of the SSD layout? I.e., is it possible to fit 16 or even 20 channels within a 2.5"/1.8" drive, thereby doubling the current Intel SSD transfer rate to 480MB/s? Where is the practical limit of an SSD drive? (Rough numbers sketched below.)
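And a back-of-the-envelope for question 4, with the per-channel figure just my assumption (roughly what would make a 10-channel drive do ~240MB/s):

[code]
# Interface ceilings vs. channel scaling, all rough numbers.
per_channel = 24  # MB/s per flash channel -- assumed

interfaces = {
    "SATA 3Gb/s": 300,    # MB/s after 8b/10b encoding
    "SATA 6Gb/s": 600,
    "PCIe 2.0 x2": 1000,  # ~500 MB/s per lane
}

for channels in (10, 16, 20):
    raw = channels * per_channel
    print("%2d channels -> %4d MB/s raw" % (channels, raw))
    for name, cap in interfaces.items():
        print("    %-11s delivers %4d MB/s" % (name, min(raw, cap)))
[/code]

On these assumed numbers, a 20-channel drive would already be throttled by SATA 3Gb/s, which is the point of the question.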
 
Say someone has a SQL database of 1200GB with 600 million entries.

It sounds like SSDs would be the way to go, but early on there were a lot of issues with lifespan. Reindexing will require a large number of reads and writes, plus additional space. Currently we have the temp space on a separate array.

What is the read/write limit on current SSDs?
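My own back-of-the-envelope, where every input is an assumption rather than a vendor spec:

[code]
# Hedged endurance estimate. Rule of thumb:
#   lifetime writes ~= capacity * P/E cycles / write amplification
capacity_gb = 160        # assumed drive size
pe_cycles = 10000        # assumed MLC endurance
write_amp = 1.5          # assumed write amplification
daily_writes_gb = 500    # assumed heavy reindexing load

total_writes_gb = capacity_gb * pe_cycles / write_amp
years = total_writes_gb / daily_writes_gb / 365
print("~%.0f TB of total writes, ~%.1f years at %d GB/day"
      % (total_writes_gb / 1024, years, daily_writes_gb))
[/code]

Even with those rough guesses it comes out to several years, but I'd love an official figure.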
 
why are people still asking questions?? haven't the questions already been answered?
 
why are people still asking questions?? haven't the questions already been answered?

Because we still have questions. Did everyone in the world that knows anything about these SSD's die, never to speak to another living human being again? I sure hope not.
 
Has anyone heard any more information about the reduction in transfer rates that PC Perspective triggered by using a combination of reads/writes and OS install(s)? The last I heard was that Intel could not duplicate the condition, but they were checking into it...
 
Has anyone seen some benchmarks with SSD's that focus on games?

[Chart: Crysis benchmark showing minimum framerates with SSD vs. HDD]


There appear to be real-world gains in minimum framerate in Crysis. That said, games are rarely disk I/O limited. However, games where texture streaming is necessary (big game worlds like Crysis, single-player RPGs, and exploration-type games) can show real-time performance gains.

What's the recommended partition alignment for Intel M and E series drives?

For SSDs with built-in cache there appear to be negligible differences between partition alignment sizes (according to ATTO or CDM 2.2 scores). Some have theorized there might be a performance difference if you align to the SSD's erase block size (256KB for Intel drives, 512KB for Samsung NAND), but it has yet to be proven.

The standard Windows Vista/7 alignment of 1024KB is fine for general usage.
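If you want to sanity-check an existing partition, the arithmetic is trivial (the offset below is an example; on Windows you can read yours with "wmic partition get Name, StartingOffset"):

[code]
# Check whether a partition's starting offset is a multiple of
# various candidate erase-block sizes. 1048576 bytes (1024KB) is
# the Vista/7 default offset.
def aligned(offset_bytes, block_kb):
    return offset_bytes % (block_kb * 1024) == 0

start = 1048576  # example offset in bytes
for block_kb in (4, 128, 256, 512):
    status = "aligned" if aligned(start, block_kb) else "MISALIGNED"
    print("%4d KB block: %s" % (block_kb, status))
[/code]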
 
I have some questions for the SSD experts.

1) How well would 2-4 smaller SSDs in RAID0 work for scratch space in a "simulated" production environment?
2) Do they multitask well?

I have 6-10 users. When a user connects it creates a work space directory, and if they start sorting data it creates another directory for sorting. When they exit the client, the directories are deleted automatically. The normal pattern reads a large file into temp work space, then processing begins by writing many sequential temp files to a RAID in scratch space that is created when they connect with the client. Files are up to 7GB in size, although there are usually many smaller intermediate files created during processing. Would SSDs be a good choice? Would an SSD have maintenance issues with this type of processing? I read that SSDs don't multitask well if there are multiple users hitting them at the same time; is this fact or fiction? My benchmarks will consist of running 2-3 processes simultaneously.

The current box has 5yr old 4x 1.28GHz Sparc III CPUs and shared LUN disk space, and I want to benchmark a single i7 920 or ~Q9550 at about 3GHz (similar speed to a 2.93GHz 5580 Xeon) to see how it would perform in comparison to the old Sun box, with dedicated drives. The current system OS is Solaris 9; the new OS will likely be Redhat or Centos, perhaps 64-bit Win2003 or Vista64 for initial testing. I would like to utilize the onboard ICH10R controller for this. My second choice is using some WD 640GB drives I already have in RAID0; they're fairly fast.

We currently don't have enough work space at 55GB; looking to test with about 160GB, either with slivers off the 640s or some smaller SSDs.
 
I have a question: does the Intel X25 SSD have onboard cache memory?
 
I have some questions for the SSD experts.

1) How well would 2-4 smaller SSDs in RAID0 work for scratch space in a "simulated" production environment?
2) Do they multitask well?


SSDs would work very well for this type of workload; they multitask orders of magnitude faster than the fastest hard drives.
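If you want to test before committing, here's a scaled-down simulation of concurrent scratch sessions (file counts and sizes are toy values, nowhere near your 7GB workload, so scale them up for a real test, and point TMPDIR/TEMP at the array under test):

[code]
# Simulate N users each creating a scratch dir, writing temp
# files, reading them back, and cleaning up -- all concurrently.
import os, time, tempfile, threading

def session(worker_id, files=5, size_mb=64):
    workdir = tempfile.mkdtemp(prefix="scratch%d_" % worker_id)
    block = os.urandom(1024 * 1024)  # 1MB of incompressible data
    for i in range(files):
        path = os.path.join(workdir, "tmp%d" % i)
        with open(path, "wb") as f:
            for _ in range(size_mb):
                f.write(block)
        with open(path, "rb") as f:
            while f.read(1024 * 1024):
                pass
        os.remove(path)
    os.rmdir(workdir)

start = time.time()
threads = [threading.Thread(target=session, args=(w,))
           for w in range(3)]  # 3 concurrent "users"
for t in threads:
    t.start()
for t in threads:
    t.join()
print("3 concurrent sessions finished in %.1fs" % (time.time() - start))
[/code]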
 
Thanks. I just bought and installed my X25-M today... how do I know if it is a G1 or G2?

[Photo: two X25-M drives stacked, G2 above G1]


G2 is Top, G1 is bottom.

You can also check by the model #s. G2s have G2 at the end of the number, and G1s have G1.
 
How would a pair of X25-M G1 80GB drives perform in a RAID 0 config on an EVGA 780i SLI motherboard? What is the best stripe size for that? I do realise that I would have to upgrade the firmware in another machine (a laptop in my case).

Thanks!
 
"with shipments expected to commence in the first quarter of 2009. The Nitro Series SSDs will be available in Q3 2009, pricing TBD. For more information about the "

where are these drives? :(
 
Question - what controller card would you recommend for a RAID 0 pairing of G2s?
 