RAM Drive

SeraphicalChaos

I've done a bit of searching on the topic and have come up empty-handed. So... I'm branching out to see if any [H] readers have heard of a PCI-E RAM disk solution. Preferably I'd like to see 8-16GB of capacity (but I certainly won't scoff at more) with a large enough battery to keep the RAM powered for at least 5-10 minutes of downtime.

It'd be wonderful if it could fully saturate a PCI-E x16 2.0 bus. So far my search has led me to Gigabyte's failure (the i-RAM) and the HyperDrive 5, both of which are limited by SATA (it's beyond me why you'd stick RAM on a bus that limited).

A few answers before questions are asked:
This will be going into a production machine. Major hardware swapping (i.e., a new board) is not really an option at the moment.
The maximum amount of memory the board supports is already in use; more RAM is not an option.
The runner-up is the PCIe SSD flavor from OCZ. We certainly won't see the kind of performance we'd see from a RAM disk, but it is definitely being considered. The lack of volatility will be a plus, but the amount of writes that will be done to the disk on a daily basis is a cause for concern.
We already have a ton of DDR2 sticks lying around, which would make a decent PCIe controller very attractive.

Thanks in advance for any input you can give.
 
The biggest issue with these is size. A 4GB or 8GB RAM disk today would not be that useful to install the OS on. The second issue is cost per GB.


The runner-up is the PCIe SSD flavor from OCZ. We certainly won't see the kind of performance we'd see from a RAM disk, but it is definitely being considered. The lack of volatility will be a plus, but the amount of writes that will be done to the disk on a daily basis is a cause for concern.

How many GB of writes per day? 100GB? 50GB? 25GB?

One way to solve this is to get a much bigger drive than you need, although this will be expensive.

A third option would be to RAID SSDs on your own. The OCZ PCIe SSDs are basically SandForce-based SSDs with a bundled RAID 0 controller.
 
The machine is already well in use. This controller/disk will not be booted from; no OS will be run from it. It would be used purely for high-I/O-performance storage.

If the controller used DDR2, I think the cost could be kept lowish. It's such a niche product, though... which makes me think that maybe I'm looking for something that doesn't exist. Price-wise, it's not too much prettier in the SSD camp. You can easily spend over a thousand dollars on a PCIe SSD and barely approach the speeds you'd see with a true RAM disk.
 
How many IOPS and/or concurrent connections does your server see?
How much disk bandwidth do you need or normally use?
What type of files do you serve? Lots of sequential reads? Lots of small random writes?
What OS are you using?
What is your budget?
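If you don't know the first two offhand, a quick way to get a ballpark is to sample the OS disk counters while the box is busy. A minimal sketch, assuming Python and the third-party psutil package are available on the host (it reports whole-system numbers, not per-VM):

Code:
# Sample whole-system disk I/O for 60 seconds and report average IOPS and MB/s.
import time
import psutil

INTERVAL = 60  # seconds to sample

start = psutil.disk_io_counters()
time.sleep(INTERVAL)
end = psutil.disk_io_counters()

read_iops = (end.read_count - start.read_count) / INTERVAL
write_iops = (end.write_count - start.write_count) / INTERVAL
read_mbps = (end.read_bytes - start.read_bytes) / INTERVAL / 1e6
write_mbps = (end.write_bytes - start.write_bytes) / INTERVAL / 1e6

print("reads:  %.0f IOPS, %.1f MB/s" % (read_iops, read_mbps))
print("writes: %.0f IOPS, %.1f MB/s" % (write_iops, write_mbps))

Run it a few times during the heaviest processing/backup windows; those peak numbers are what matter for sizing.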

edit-
Have you seen the ACARD RAM drives? They are kind of old tech, but they fit your description of using DDR2 modules. A high-end SSD will outperform one in IOPS, though. They do have a battery backup.
 
How many IOPS and/or concurrent connections does your server see?
How much disk bandwidth do you need or normally use?
What type of files do you serve? Lots of sequential reads? Lots of small random writes?
What OS are you using?
What is your budget?

edit-
Have you seen the ACARD RAM drives? They are kind of old tech, but they fit your description of using DDR2 modules. A high-end SSD will outperform one in IOPS, though. They do have a battery backup.

This is a tad embarrassing, but I can't give you the specifics you're asking for (because I do not know). The disk will not be so much for serving over the wire as it will be for internal (local to the box) processing. The server is running a few VMs, one being an NT 4.0 VM running (a very ancient) FileMaker Server 5.5. Suffice it to say, once this box is done processing files / performing backups, users accessing the system can get back to work. This will be a band-aid at best.

I'm going to stay away from anything SATA-based. I don't think we'll see too much of a boost in performance going that route. Thank you for the suggestion, though.
 
The big problem here is that these consumer RAM disks basically lost their market when SSDs became a product, so there have been no new products for several years.

I'm going to stay away from anything SATA-based.

SSDs (even the PCIe ones) and the three RAM disks mentioned were all SATA-based.
 
You could look into an OCZ Ibis. 100K IOPS and ~700 MB/s read/write speeds. It uses an HSDL (10GB) cable that connects to a PCIe RAID-like controller card, which is included. All for around $600, depending on the size.
 
I've been looking into this arena as well for a solid OS drive.

Sadly, it does not appear that what you are looking for exists. I thought it would exist somewhere in the server world, seeing as how they'd need the capabilities, but the volatility of these devices tends to drive server admins away. They are far more conservative in their actions and don't take unnecessary risks with their data.
 
You could look into an OCZ Ibis. 100K IOPS and ~700 MB/s read/write speeds. It uses an HSDL (10GB) cable that connects to a PCIe RAID-like controller card, which is included. All for around $600, depending on the size.

Looks like a dedicated SAS card that is using two of the 4 SAS links.
 
I had a feeling I was looking into something that just didn't exist.

SSDs (even the PCIe ones) and the three RAM disks mentioned were all SATA-based.

Perhaps I'm missing something here. What about the OCZ RevoDrive product line? That was our strong runner-up should this prove to be a wild goose chase. I was under the impression that the device was as close to a pure PCIe device as you could get in the SSD flavor (once you look past the fact that it uses a PCI-X to PCIe bridge chip). SATA would choke after 300 MB/s... let alone the 700-ish claimed (which I doubt we'll see, depending on the writes to the drive).

http://www.newegg.com/Product/Produ...227661&cm_re=revodrive-_-20-227-661-_-Product

Other than a moderate performance hit, my only other qualm would be the maximum number of writes before it starts to fail. The VM will be using a fixed-size VDI file. Will this device still play the shuffling game with data writes (to prolong life) that most SSDs do? My understanding was that it was basically a RAID hack localized to the card itself in order to get the performance/size they claim. I recall reading a while back that others were hesitant because they could not get that feature up and running.
 
Perhaps I'm missing something here. What about the OCZ RevoDrive product line?

They are using a SATA 2/3 RAID card on the board connected to two or more SandForce-based SATA 2/3 drives. You can even see the SandForce chips in a few of the pictures. I believe the SATA RAID 0 chip is the one with the blue R sticker on it.
 
my only other qualm would be the maximum number of writes before it starts to fail.

Depends on the size of the drive and the amount of data you are writing. Remember, larger drives will last longer since they have more cells to do wear leveling with.

I believe Intel specifies their drives at 20GB of writes a day, every single day of the year, for 5 years. I am not sure what OCZ specifies for these. However, I expect a 240GB drive to give you many years of usage if you only use 16GB of the drive and keep the writes down to 32GB or less per day.
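To put rough numbers on that, here is a back-of-the-envelope endurance estimate. The P/E cycle count and write amplification figures below are generic assumptions for illustration, not Intel's or OCZ's specs:

Code:
# Rough SSD endurance estimate: a bigger drive spreads the same daily writes
# across more NAND, so each cell is erased less often and the drive lasts longer.
def estimated_lifetime_years(capacity_gb, host_writes_gb_per_day,
                             pe_cycles=3000, write_amplification=1.5):
    # pe_cycles and write_amplification are assumed values, not vendor specs.
    total_nand_gb = capacity_gb * pe_cycles               # total GB the NAND can absorb
    nand_gb_per_day = host_writes_gb_per_day * write_amplification
    return total_nand_gb / nand_gb_per_day / 365.0

print(estimated_lifetime_years(240, 32))   # ~41 years with these assumptions
print(estimated_lifetime_years(120, 32))   # ~20 years with these assumptions

With those assumptions, flash wear is unlikely to be the limiting factor at 32GB of writes per day.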
 
I did not see a mention of budget, but if you can afford it, the ioDrive Octal is about as fast as you are going to get:

http://www.fusionio.com/products/iodriveoctal

One million IOPS, over 4GB/s speeds, 30 microsecond latency. But it will probably cost as much as a sports car....

For something less expensive (although still not cheap), maybe the 160GB SLC ioDrive: 100K+ IOPS, 700+ MB/s sequential speeds, 26 µs latency:

http://www.fusionio.com/products/iodrive

One other option, probably the least expensive, would be an LSI 9211-4i PCIe card with a RAID 0 of four 128GB Crucial C300 SSDs. That should get you close to the performance of the single ioDrive (except for latency), for less than one-quarter the price.
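The reasoning there is just striping arithmetic; a quick sketch, where the per-drive throughput figures and the controller's host-link ceiling are my assumptions for illustration rather than measured numbers:

Code:
# Back-of-the-envelope RAID 0 scaling: aggregate throughput is roughly the
# per-drive figure times the number of drives, capped by the controller's host link.
DRIVES = 4
SEQ_READ_MB_S = 350       # assumed per-drive sequential read
SEQ_WRITE_MB_S = 140      # assumed per-drive sequential write (128GB class)
HOST_LINK_MB_S = 2000     # assumed usable ceiling of a PCIe 2.0 x4 card

agg_read = min(DRIVES * SEQ_READ_MB_S, HOST_LINK_MB_S)
agg_write = min(DRIVES * SEQ_WRITE_MB_S, HOST_LINK_MB_S)
print("~%d MB/s read, ~%d MB/s write aggregate" % (agg_read, agg_write))

Reads should scale well past the single ioDrive's sequential figure; writes are the weaker side with smaller drives, and real-world scaling will land somewhat below the simple multiplication.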
 
What about the OCZ RevoDrive product line?

Since you said that you are going to write a lot of data to it, anything SandForce-based is going to give you problems.

For some reason (a bug, although some claim it is a feature), SandForce performance can drop to about 30% of specification after the drives have been heavily used, and it can take a week for them to recover. Neither "heavily used" nor the exact recovery time has ever been specified, as far as I know. But this bug has been documented by several review sites and numerous users.
 
It's not a bug; it's a design feature that was added on purpose. I don't know how you can call something like that a bug. When hammered, consumer-grade SF-1222-controller-based drives protect themselves. This is not true of all SandForce controllers, though I can't tell you which do this and which don't, and it's true that they don't publish much information about how it is implemented. It seems to kick in when writing a significant percentage (40-60% or greater) of the drive's capacity over and over in short periods of time (minutes/hours).

RevoDrives have mixed reviews, and too many user complaints and hardware conflicts for me to consider using one in a mission-critical application. They are, after all, just a PCIe SATA RAID controller and 2-4 SSDs in one package in RAID 0.
 
It's not a bug; it's a design feature that was added on purpose.

It is a bug, because there is no documentation specifying under what conditions the performance will degrade, how much it will degrade, and for how long.

It is also a bug because any such "protection" is not needed for many users.

Although I am interested in how you know that it was added by design. Are you associated with SandForce? What were the design specifications for this "feature"?
 
@john4200
Oh wow... well beyond our price range, but very interesting. We're working with more like $200-$500 for this band-aid.

Depends on the size of the drive and the amount of data you are writing. Remember, larger drives will last longer since they have more cells to do wear leveling with.

I believe Intel specifies their drives at 20GB of writes a day, every single day of the year, for 5 years. I am not sure what OCZ specifies for these. However, I expect a 240GB drive to give you many years of usage if you only use 16GB of the drive and keep the writes down to 32GB or less per day.

Very informative! Thank you for the replies, drescherjm. The million-dollar question is: will these RevoDrives perform wear leveling like this as well? I have read somewhere that this was an issue due to the nature of the card (probably because of how it sets up the RAID).

SandForce SF-1200 series SSD controllers have been designed with a focus on high-performance operational and data transfer speeds, and include encrypted data protection and improved NAND wear-leveling through their proprietary DuraWrite technology.

I think I answered my own question looking for the term that is used. :D

Edit: Gah! It was quite the opposite of what I was thinking. The issue with these drives was a lack of TRIM support.
 
Interesting suggestion. Unfortunately, I'm limited to a SATA II motherboard, so I'll need to factor in a SATA III card. Out of curiosity, why the Crucial part over the OCZ RevoDrive?

No, just use the C300 on a SATA 3 Gbps port.

The OCZ RevoDrive is SandForce-based, and so would be a horrible choice for your usage because of the bug I already mentioned in this thread.
 