Apps' Scratch Files on RamDisk? What say You?

riotubes

I'm leaning towards experimenting with using a RamDisk for scratch files (AutoCAD, Photoshop, Illustrator and SketchUp)...something like SuperSpeed's RamDisk Plus, which can balance RAM between the system and the RamDisk and save the disk's contents to storage on shutdown.

I posted a similar question about this use in a design/engineering forum and received nothing but pessimistic and negative feedback. In essence, many voiced concern over RAM's volatile nature....concern over reliability and concern about losing work in the event of a power outage.

For those of you with RamDisk experience, what say you? Good for scratch files or just browser cache and system temp files?
 
Now that I'm "back from the dead," so to speak, maybe I should get around to doing that RAMdisk guide I promised to do in the past, seems like a long time ago now. ;)

Anyway, the simple answer is HELL FUCKING YES. That's precisely the reason why RAMdisks are so awesome, actually.

If you have plenty of system RAM to spare, using a big chunk of it for a RAMdisk makes a ton of sense, especially if:

- you work with a lot of large data sets (serious number crunching)
- you run databases (they need fast access, the faster the better)
- you need a lot of temp space where speed is crucial when the clock is ticking (huge temp images, undo data, plain old "scratch" space)

If any of those, or combinations of them, are what you have to deal with, using a large RAMdisk is not only the smart thing to do but the most efficient way to handle large amounts of data at the fastest possible speed. SSDs are nice nowadays, but even in a RAID setup with several of them together you're still looking at what, 400-700MB/s or so for read speed, and typically less for writes?

Compare that with a RAMdisk where the reads and writes are in the gigabytes per second range, and yes that's plural. I built a machine a few months ago with 24GB of DDR3 triple channel in it (single i7 processor, tried to get the guy to let me build him a dual Nehalem Xeon but he wouldn't front the cost) and it's set up for doing some heavy duty 3D rendering work.

Doing a RAMdisk speed test using tools like HDTune (since the OS sees it as a storage drive), HDTach, CrystalDiskMark, and even SiSoft Sandra showed read speeds in excess of 12.5GB/s and write speeds at around 10GB/s, so yeah, he's one happy camper. :D
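For the record, you can sanity-check numbers like that yourself without the benchmark GUIs. Here's a minimal Python sketch of a sequential read/write test; the R:\ path is just an assumed drive letter for your RAMdisk, and the interpreter's per-call overhead means you may not see the full figures a native tool reports:

Code:
# Rough sequential throughput test for a RAMdisk (or any drive).
# R:\ is an assumed RAMdisk drive letter -- point TARGET wherever yours lives.
import os, time

TARGET  = "R:\\ramdisk_bench.tmp"
SIZE_MB = 1024                       # total amount of data to push through
CHUNK   = 8 * 1024 * 1024            # 8MB per write/read call
buf = os.urandom(CHUNK)

start = time.time()                  # sequential write
with open(TARGET, "wb", buffering=0) as f:
    for _ in range(SIZE_MB * 1024 * 1024 // CHUNK):
        f.write(buf)
write_secs = time.time() - start

start = time.time()                  # sequential read
with open(TARGET, "rb", buffering=0) as f:
    while f.read(CHUNK):
        pass
read_secs = time.time() - start

os.remove(TARGET)
print("write: %.0f MB/s" % (SIZE_MB / write_secs))
print("read:  %.0f MB/s" % (SIZE_MB / read_secs))

One caveat: on a normal drive the OS file cache will soak up a lot of this, so the numbers are only honest for data sets bigger than your free RAM.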

The basic gist is this: if you're serious about serious computing - and this doesn't include gaming, sorry; I mean serious computing for serious reasons: 3D work, computer science, coding, compiling, video encoding (the real killer), or anything that truly requires the absolute fastest box you can build - then using a shitload of RAM with a huge freakin' RAMdisk will simply blow your socks off when it's up and running and configured properly.

I know I'll get people asking for more info and I'll do what I can, but due to some other issues at present the actual "RAMdisk Manifesto" that I had originally planned to write is still a ways off; I'll get to it someday. For now, the best advice is: don't listen to anybody who says anything negative about using a RAMdisk in the ways I mentioned, or about what they're capable of in the long run.

The only real drawback they have is the one mentioned in the OP: the volatility of such a setup. But since modern RAMdisk software allows for mirroring the RAMdisk to a file on physical storage like hard drives or SSDs, it's typically a non-issue. The mirroring is done with lazy writes as required (not in real time), and while it's technically possible to lose some data if shit hits the fan, a properly configured box with a UPS attached and a stable host OS to work on will very rarely even hiccup at all.

To paraphrase a line from the kick-ass movie "Heat":

"The speed is worth the risk..."

OP:

Do a search for "ramdisk" here, posts made by me (or the other account nickname I had temporarily, Bahamut) and you'll find other threads with posts full of similar info.

Stick with it: using a lot of RAM for a RAMdisk can boost your overall system performance in ways most people just can't comprehend. SSDs are nice, but honestly they're still pretty damned slow. :p
 
LOL! So Joe Average and Bahamut are one and the same! I for one appreciate your direct no B.S. prose. I've only been reading this forum for a week but earlier I PM'd you both asking for help with my system setup (I'm a NOOB). If you have a chance give it a read...I'd appreciate your expert feedback. This is a Win 7 Pro, i7 870, 16GB dual channel RAM build I'm trying to do right the first time...

Thanks ~ Mike
 
Yah, I see the PM, just haven't had a chance to do a proper reply, I'll get to it. ;) Kinda busy with the holiday stuff...
 
video encoding (the real killer)

Could you please describe how a RAM disk will help video encoding? In my experience, disk speed isn't even remotely related to video encoding. You could easily encode to and from CF microdrives. Video encoding is all about CPU power.
 
By "video encoding" what I mean - for clarification - is that if you're dealing with such activities, and you've got the RAM to spare, you can use the RAMdisk either for holding the source material, the target material, or even both aspects and in doing so you free up the physical hard drive or SSD from being involved in the process so that device is free to do other things.

Modern computers are quick machines, certainly, but they are still crippled in the long run by the storage devices - and since all modern OSes are disk-based operating systems the trick here is removing hard drives and even solid state storage from this "equation" to the maximum extent possible.

Modern hard drives and SSDs are fairly quick, yes, but they're still ultimately limited by the one-thing-at-a-time syndrome - technically so is RAM, I suppose, but since it's thousands upon thousands of times faster, in the long run you're better off using the RAM as much as you possibly can.

A lot of people will say something like "Oh well, SuperFetch does all that shit and caches stuff for me, this is no different" and they'd be very wrong - SuperFetch doesn't cache everything, just the most commonly used executables, data libraries (.dll files), and a few other types; it doesn't cache every data request that crosses the hard drive or SSD 24/7.

Even if SuperFetch could or did do that, you don't have any specific control over what gets cached and what doesn't, whereas that control is exactly what you get when you deliberately put things like browser caches, scratch disks, etc. into RAM for a specific effect. Big difference there.
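To make the "on purpose" part concrete, here's a rough Python sketch of redirecting the per-user Windows temp folders to a RAMdisk. R:\ is an assumed drive letter, and this only covers TEMP/TMP; apps like Photoshop have their own scratch-disk setting in their preferences that you'd point at the RAMdisk separately:

Code:
# Point the per-user TEMP/TMP folders at a RAMdisk (Windows).
# R:\ is an assumed drive letter; adjust to wherever your RAMdisk mounts.
import os, subprocess

RAMDISK_TEMP = "R:\\Temp"

if not os.path.isdir("R:\\"):
    raise SystemExit("RAMdisk not mounted - leaving the environment alone.")

os.makedirs(RAMDISK_TEMP, exist_ok=True)

# setx stores the value in the per-user environment; it takes effect for newly started programs.
for var in ("TEMP", "TMP"):
    subprocess.run(["setx", var, RAMDISK_TEMP], check=True)

The catch is that the folder has to exist after every boot, so either let the RAMdisk software restore a saved image at startup or recreate the Temp folder with a scheduled task.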

Let's make this really simple with two true statements:

1) Having a fast computer with a ton of RAM and high end storage (either high rpm hard drives or SSD hardware) != your machine is as fast as it can be. Just because you have a fast boxen with tons of RAM doesn't mean your shit is fast. ;)

2) Having a fast computer with a ton of RAM + a properly configured RAMdisk designed to alleviate a lot of unnecessary disk activity where the RAMdisk is dramatically faster than any other type of storage available today = your machine is wicked evil fast and snappier than you ever dreamed possible. This is the optimal solution and something I've been designing for over a decade now.

If you think your machine is fast just because it's got a high clock rate CPU (or even overclocked over spec), high clock rate RAM (overclocked here too maybe), fast hard drives and even faster solid state drives... boy have you got a lot to learn. ;)

As far as benefits to actual video encoding: in simple testing I've done, placing a single DVD chapter in RAM (284MB VOB file, 5:46 length, MPEG2/AC3 soundtrack, straight rip from a retail DVD) and encoding it not only from RAM but back into RAM resulted in a 15% faster encode overall than leaving the VOB file on the hard drive and encoding it back to the hard drive. It ain't much, but... every little bit helps.
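If anyone wants to run that kind of comparison themselves, the workflow is just: stage the source on the RAMdisk, encode with input and output both in RAM, then move the result off. A rough Python sketch, assuming a RAMdisk at R:\ and ffmpeg on the PATH - the paths and x264 settings here are placeholders, not the exact encoder I used for the numbers above:

Code:
# Stage a source clip on the RAMdisk, encode entirely in RAM, then move the result to disk.
# R:\ and the file paths are assumptions -- substitute your own.
import os, shutil, subprocess

RAMDISK = "R:\\"
SOURCE  = "D:\\rips\\chapter01.vob"        # source on normal storage
RESULT  = "D:\\encodes\\chapter01.mkv"     # final resting place on normal storage

src_ram = os.path.join(RAMDISK, os.path.basename(SOURCE))
dst_ram = os.path.join(RAMDISK, "chapter01.mkv")

shutil.copy2(SOURCE, src_ram)              # 1) copy the source into RAM

subprocess.run([                           # 2) encode with input and output both in RAM
    "ffmpeg", "-i", src_ram,
    "-c:v", "libx264", "-crf", "20",
    "-c:a", "copy",
    dst_ram,
], check=True)

shutil.move(dst_ram, RESULT)               # 3) move the finished encode off the RAMdisk
os.remove(src_ram)                         #    and clean up the staged source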

The ultimate truth:

When you can take physical storage like hard drives and even solid state storage based on NAND flash (the current high end trend) and remove them from the equation by using RAM as extensively as possible instead, you'll see tangible benefits in the overall performance of your system, often quite dramatic ones.

How's that? ;)
 
The speed of a RamDisk is hugely noticeable on large DB sets (several GB and up). A gentleman I used to build systems with ran a 40GB DB set on a ramdisk for his company. Using a server with dual Xeons and 64GB of RAM, he allocated around 48GB to the ramdisk and used (at the time) 8 SAS drives in RAID 10 to store user files and the backup of the ramdisk. A small script he had would take the data from the ramdisk and write it to the SAS drives on shutdown calls; it was mainly meant for when the UPS said it was time to shut down. At the time he didn't have any lazy writes set up, but the script was also called every 10 minutes to check the system load and copy the DB if the load had been 0 for over an hour (mostly only helpful at lunchtime and late at night).

The reasoning behind it was mostly that random reads from the disk took a while. Indexed data was nice and all, but a lot of calls (hundreds to thousands a second at peak times) is still taxing on even a well-indexed DB. In the end, it probably made the thing 10x faster at peak for random-access reads on the DB, since it took less than 1/1000th of a second to access the DB data.

The only time he ever ran into a problem was when some genius decided to move the server and didn't run a shutdown command, since they didn't know how to log in to the Linux box. They wiped out around 20 minutes of data written to the ramdisk. Not enough to drop the idea, but enough that they figured out lazy writes a little better.
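For anyone curious, the "copy when idle" part of a script like that is only a few lines. Here's a rough Python sketch of the idea; the mount points, thresholds, and the assumption that it runs from cron every 10 minutes are all mine, not his actual script:

Code:
# Idle-time mirror: if the box has been quiet for over an hour, copy the DB
# from the ramdisk to persistent storage. Meant to run from cron every 10 minutes.
# /mnt/ramdisk and /mnt/sas_backup are assumed mount points.
import os, shutil, time

RAMDISK_DB  = "/mnt/ramdisk/db"
BACKUP_DIR  = "/mnt/sas_backup/db"
IDLE_MARKER = "/var/run/ramdisk_idle_since"     # when the load first dropped to ~0

def load_is_idle():
    return os.getloadavg()[0] < 0.05            # 1-minute load average (Linux/Unix only)

def mirror_db():
    staging = BACKUP_DIR + ".new"
    if os.path.exists(staging):
        shutil.rmtree(staging)
    shutil.copytree(RAMDISK_DB, staging)        # copy to a staging dir first
    if os.path.exists(BACKUP_DIR):
        shutil.rmtree(BACKUP_DIR)
    os.rename(staging, BACKUP_DIR)              # then swap it into place

if not load_is_idle():
    if os.path.exists(IDLE_MARKER):             # busy again: reset the idle clock
        os.remove(IDLE_MARKER)
elif not os.path.exists(IDLE_MARKER):
    with open(IDLE_MARKER, "w") as f:           # just went idle: start the clock
        f.write(str(time.time()))
elif time.time() - float(open(IDLE_MARKER).read()) > 3600:
    mirror_db()                                 # idle for over an hour: mirror the DB

The shutdown hook is just the same mirror_db() call wired into the shutdown scripts (or the UPS daemon), run unconditionally.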

Now I just wish I had the money to get a massive amount of ram for a ramdisk XD
 
And another clarification just to help:

When talking about RAMdisks, I'm talking about real RAMdisks - pure RAM running at pure RAM speeds, installed directly on the motherboard of a computer, aka "system RAM" - not RAM sitting on PCI cards, etc.

There are devices on the market (have been for many years now) like the Gigabyte iRAM and some others that are PCI cards with RAM slots on the card - the idea is that you populate the slots with actual RAM and when the card is used, the RAM is seen as a "hard drive" in Windows and allows some relatively fast access to it.

The problem is that those types of devices can only provide one real benefit: the pure raw blistering random access times. They don't offer the pure raw blistering transfer rates which is what real RAMdisks provide.

The reason those devices don't provide the speed benefits from the raw transfer rates is that they couple the RAM on the card to either an IDE or SATA controller, which basically hamstrings it, crippling the massive potential bandwidth and making them a waste of money for those that really want or even require that bandwidth.

I've always disliked those types of cards and favored pure RAM, as much as you can fit into a machine, maxing it out whenever possible because I know what the real benefits are.

So while a lot of people might see those PCI cards and think "Wow, that's gonna be awesome once I put RAM on it," the only real benefit will be faster access times; you're still going to be stuck with the rather limited bandwidth. If you had such a PCI card coupled with a SATA II controller, you're going to get two things:

1) The incredible random access speed of RAM, which is measured in nanoseconds - not milliseconds like hard drives (5 to 20 ms is a good range) or even SSD hardware (about 1 ms typically). Put in the same units, RAM access times are thousands of times faster. :D

2) The maximum throughput that SATA II controller is capable of, or about 280-285MB/s solid. Similar in principle to an SSD in that respect, I suppose.

Those PCI cards are only useful in situations where you actually have a lot of RAM you can use (hopefully you got it cheap) and you require the very fast access times that only true RAM can provide. In the access time aspect, RAM rules, period. It rules in pure bandwidth as well:

RAM would look at SATA III with its 6 Gigabit per second theoretical max (call it roughly 600MB/s of usable bandwidth once you account for encoding overhead) and just laugh at it, when that RAM could be doing 9-14 GIGABYTES per second depending on your particular configuration (triple channel DDR3 can do in excess of 12-15GB per second).
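If anybody wants to check the back-of-the-envelope math on those interface numbers, the only real inputs are SATA's 8b/10b encoding (10 bits on the wire per byte of data) and DDR3's 8-byte channel width:

Code:
# Back-of-the-envelope interface bandwidth math.
sata3_line_rate_bits = 6e9                            # SATA III: 6 Gbit/s on the wire
sata3_usable_MBs = sata3_line_rate_bits / 10 / 1e6    # 8b/10b: 10 wire bits per data byte
print("SATA III usable: ~%.0f MB/s" % sata3_usable_MBs)                    # ~600 MB/s

# DDR3 theoretical peak = transfers/sec * 8 bytes per channel * number of channels
ddr3_1333_triple_GBs = 1333e6 * 8 * 3 / 1e9
print("DDR3-1333 triple channel peak: ~%.0f GB/s" % ddr3_1333_triple_GBs)  # ~32 GB/s

Real-world memory copies land well under that theoretical peak, which is roughly where the 12-15GB/s figures you actually measure come from.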

Fun stuff... :)
 
While I agree that minimizing the amount of disk activity on the OS/apps drive will increase performance, a RAM drive is not going to speed up video encoding unless you are using some encoder I don't know about which is many times faster than the common ones. Even using x264 with the fastest presets and decoding with the GPU, you will still be nowhere near maxing out an HDD for read/write speed. One of these green drives gets something like 100 MB/s. I have yet to see any decent encoder (I'm only informed on decent encoding tools) that can produce a file as fast as that.

EDIT:
Just to elucidate, if storage speed were a bottleneck for encoding it would mean that your encoder would be producing, say, a 4 GB HD movie in less than a minute. Roughly 40 seconds. With my Q6600 and a 50% OC that kind of encoding takes me about 10 - 15 hours. So you mean to say that the minuscule fraction of a MB that's written to my drive every second is held up by my slow drive? I have encoded to and from CF micro drives as well as SD cards. I will often encode from a USB drive. These days all of my material is stored over the network. I have compared encoding times for the same files both on a local non OS SSD with a drive on my storage server and there has never been a difference in speed. Even before I had a gigabit switch.
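The arithmetic, for anyone who wants to plug in their own numbers (the sizes and times here are just the example figures from above):

Code:
# How fast does an encoder actually write, versus what the disk could take?
output_size_MB = 4 * 1024                 # a 4 GB HD movie
encode_hours   = 12                       # middle of the 10-15 hour range
avg_write_MBs  = output_size_MB / (encode_hours * 3600.0)
print("average encoder write rate: %.2f MB/s" % avg_write_MBs)                  # ~0.09 MB/s

disk_MBs = 100                            # a slow "green" drive
print("disk-limited encode time: %.0f seconds" % (output_size_MB / disk_MBs))   # ~41 s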
 