Store 1000k+ photos a day on an SSD?

sc3252 · Gawd · Joined: Jan 3, 2005 · Messages: 680
Ok, so I am trying to figure out how to speed up a workstation at work. It is constantly being written to (it acts as an FTP server for security cameras), which makes it very hard to actually look at the files it's writing (slows it down big time). My plan was to write all the files to an SSD instead of a conventional hard drive, and then at the end of the day have a script zip the files and send them to the conventional hard drive (around 85GB, somewhere around 700-1,000k photos). I was hoping an SSD would be fast enough to view and write the files at the same time; am I wrong? Also, can an SSD take doing that every day, or will it just die one day?

Note: I was looking in the price range of an Agility 3 or a Solid 3, and the files written are around 100KB each.
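
Just to be clear about what I mean by the end-of-day script, it would be something along these lines (a rough sketch only; the paths are made up and I haven't tested it at this kind of volume):

# rough sketch of the end-of-day job -- paths are placeholders
import os
import zipfile
from datetime import date

SSD_DIR = r"D:\ftp_incoming"          # where the cameras upload during the day
ARCHIVE_DIR = r"E:\camera_archive"    # the conventional hard drive

def archive_day():
    zip_name = os.path.join(ARCHIVE_DIR, f"cameras_{date.today():%Y%m%d}.zip")
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_STORED) as zf:
        for name in os.listdir(SSD_DIR):
            path = os.path.join(SSD_DIR, name)
            if os.path.isfile(path):
                zf.write(path, arcname=name)  # JPEGs barely compress, so just store them
                os.remove(path)               # free the SSD for the next day

if __name__ == "__main__":
    archive_day()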

Thanks
 
There are 86,400 seconds in a day. Therefore you are writing ~10 images a second, which is about 100 msec per image. At 100 kByte each, that is about 1 MByte/second. The math suggests the per-file time is close enough to a hard drive's seek time that any sort of "interruption" (like someone browsing the files) will be felt. My guess is the SSD will help significantly, since its seek time is near zero, relatively speaking.
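
If you want to plug in your own figures, the back-of-the-envelope math looks roughly like this (the 850k count is just the middle of your 700-1,000k range):

# back-of-the-envelope numbers for the workload described above
photos_per_day = 850_000          # assumed: middle of the 700k-1000k range
file_size_kb = 100
seconds_per_day = 86_400

photos_per_second = photos_per_day / seconds_per_day        # ~9.8 images/s
ms_per_photo = 1000 / photos_per_second                     # ~100 ms per image
throughput_mb_s = photos_per_second * file_size_kb / 1024   # ~1 MB/s sustained

print(f"{photos_per_second:.1f} photos/s, {ms_per_photo:.0f} ms/photo, {throughput_mb_s:.2f} MB/s")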

Also, a small RAM drive (4GB) that copies to the HDD every hour should help as well. That keeps the hard drive easy to manage, since the data gets written in a continuous stream rather than in bursts.
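
Something like this, run hourly from Task Scheduler, would do the sweep (again just a sketch; the drive letters are made up):

# hourly sweep from the RAM drive to the HDD -- drive letters are placeholders
import os
import shutil

RAM_DRIVE = r"R:\ftp_incoming"
HDD_DIR = r"D:\camera_spool"

def sweep():
    for name in os.listdir(RAM_DRIVE):
        src = os.path.join(RAM_DRIVE, name)
        if os.path.isfile(src):
            # cross-device move = copy then delete, so the HDD sees one long sequential write
            shutil.move(src, os.path.join(HDD_DIR, name))

if __name__ == "__main__":
    sweep()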
 
Yeah, it's a problem; we probably lose a hundred photos just trying to view them... The only real question I have is: can an SSD handle doing that every day?
 
You really need to find a way to toss that stuff around in RAM. An SSD will probably be helpful, but still not ideal for what you describe. Maybe there's a setting in the FTP server to increase its buffer size so that it can continue receiving and sending files while the disk is busy. A hard drive, even a fairly slow one, shouldn't have any trouble doing the job; it just needs an intelligent cache of some kind.
 
Agreed on the RAM. SuperSpeed SuperCache is fantastic for things like this. Grab that and as much RAM as your mobo will hold, set the cache on the drive that's doing the writing as large as it needs to be to stay smooth (you'll have to play around with it), but the more the merrier, and you're good to go.

Maybe one of the other guys can help ya figure out how much impact that would have on the health of the SSD though if you wanted to go the SSD route, I couldn't say for sure.
 
I would invest in a RAID controller with 2GB or 4GB of cache and a BBU, and use RAID 5 or RAID 6 with 10K or 15K RPM SAS drives. If that is not an option because of cost (> $1,500 US), look at enterprise SSDs.
 
To store small JPEGs? This seems like throwing money at a problem instead of finding and addressing the true bottleneck, which is the FTP server not allowing incoming files to be buffered until the disk is ready to store them. We're talking about fairly low demand as far as actual data throughput is concerned; FTP servers have done a lot more with a lot less in the past. My understanding of the OP's question is that this FTP server is running on a shared host, a workstation to be specific. The next step after trying to get the cache/buffer configured correctly is to move the FTP server to a dedicated host, which is still a much cheaper option than expensive RAID controllers and 15K SAS drives. I'm pretty sure that a properly configured P4 box with 1GB of RAM and a single SATA drive would have no trouble with this job whatsoever.
 
85GB per day on a consumer SSD is beyond specs (which hover around 20GB/day), so yes it should die more quickly, maybe less than a year depending on the model (and size).
 
85GB per day on a consumer SSD is beyond specs (which hover around 20GB/day), so yes it should die more quickly, maybe less than a year depending on the model (and size).

no, it will not. please don't spread myths.

http://www.xtremesystems.org/forums/attachment.php?attachmentid=119154

the intel 320 listed there has survived 314tb of writes and is still going. even assuming that drive dies at exactly this second, and the OP writes only 85gb of data a day, it would last 3,694 days, or about 10 years. try writing 85gb a day to a normal hard drive and see if it's still around after 10 years.
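
quick sanity check on that math, if anyone wants to plug in their own numbers (the 314tb figure is just what that thread reported, and I'm using decimal GB like above):

# ballpark endurance math using the ~314 TB figure from that xtremesystems thread
reported_writes_gb = 314_000   # ~314 TB written so far, decimal units
daily_writes_gb = 85           # the OP's workload

days = reported_writes_gb / daily_writes_gb   # ~3,694 days
years = days / 365                            # ~10 years
print(f"{days:.0f} days, roughly {years:.0f} years")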

on top of that, if he went with a sandforce drive, it would literally be impossible for him to kill the drive with writes before the warranty expires, because sandforce drives throttle their write speed to line up with the warranty period. so there's no write-endurance issue with the samsung drive, the intel drives, the crucial drives, or any sandforce drives. there's not much left after that; that's nearly every family of new drives that will not die from 85gb a day.

that said, get an m4. do not purchase an agility 3 or solid 3; they use async nand and will have bad performance. if you really want a sandforce drive, grab a 120gb corsair force gt while it still has the mail-in rebate for $184.
 
So because some drives go beyond specs, it's safe to do so?

hahahahahahaahahahaha

the limit of "nand" is a physical thing. if one drive is capable of that much, they all are. you're trying to tell him his drive will die due to nand, and it's simply not true. some nand is not better than other nand, it is all the same.

the drive will die of natural causes before it dies from wearing out the nand, as you were trying to imply above. also, I'd like to point out that your "specs" don't even have a timeline on them. 20gb a day for how long, exactly?
 
The Intel 320 Series is guaranteed to write 20GB of data per day for five years in a consumer environment.

If I follow your logic, how come all CPUs don't overclock the same?
 
The Intel 320 Series is guaranteed to write 20GB of data per day for five years in a consumer environment.

If I follow your logic, how come all CPUs don't overclock the same?

well, because different cpus actually run at different speeds and at different voltages.

nand isn't a cpu. nand is nand. it has a set number of cycles. if one intel 320 will write 300tb of data, they all will.

let's assume your made-up number of less than a year is correct.

there's already an example of a drive that lasts 10 years with 85gb of data written daily (310tb of data total)

are you telling me you honestly believe that some intel 320 series drives can only handle 36tb of data (20gb a day for 5 years) and will die from the nand wearing out, while others can handle 85gb a day for 10 years, a total of 310tb, and still not be dead? you really think nand is that volatile that some nand only has 300 cycles while other nand has 3,000? any ssd that dies is not dying because the nand reached its write cap, which is the myth you are trying to spread in this topic.

maybe I've been overclocking the wrong cpus, but I've never heard of a cpu that can clock 10x better than other cpus in the exact same lineup.
 
Personally, I killed an SSD with even less use than that (a 64GB M225, so supposedly with better NAND than newer drives), but I don't take it as proof of anything. Maybe it just wanted to die.

And I was talking about the same CPUs, obviously (like, a 2600k and another 2600k).

I agree that the Intel guarantee is conservative; in fact, for enterprise use they give it 60TB. But I'm guessing that's because they think there will be compression involved, or the difference doesn't really make sense (does the 320 use compression?).
 
Personally, I killed an SSD with even less use than that (a 64GB M225, so supposedly with better NAND than newer drives), but I don't take it as proof of anything. Maybe it just wanted to die.

definitely just the drive itself dying, then. you can check whether it's your write cycles with certain ssd health-check programs like CrystalDiskInfo. of course... the drive has to be alive to check it, but if you check it often enough, you can see where you were on the write scale.

I don't think the 320s compress, but even if they do, the data written to them is counted all the same. I think the test on the xtremesystems forums is using data that's maybe 50% to 100% compressible, and intel drives don't compress that well anyway, so I definitely believe that the 300tb have actually been written.
 
Just thought I should say I decided on just adding more hard drives (two Samsung F3 drives) and splitting up all the writes (now it's writing to three hard drives instead of one). If that isn't good enough, I will go down the RAM disk path, since it is probably the next cheapest option. I will ask the guy some time next week if that fixed the problem (crosses fingers). Anyways, thanks for the ideas; if this current one doesn't work, I have a whole thread of ideas to come back to :).
 
To store small JPEGs? This seems like throwing money at a problem instead of finding and addressing the true bottleneck, which is the FTP server not allowing incoming files to be buffered until the disk is ready to store them. We're talking about fairly low demand as far as actual data throughput is concerned; FTP servers have done a lot more with a lot less in the past. My understanding of the OP's question is that this FTP server is running on a shared host, a workstation to be specific. The next step after trying to get the cache/buffer configured correctly is to move the FTP server to a dedicated host, which is still a much cheaper option than expensive RAID controllers and 15K SAS drives. I'm pretty sure that a properly configured P4 box with 1GB of RAM and a single SATA drive would have no trouble with this job whatsoever.
I wanted to move it to a dedicated host, but money is always an issue, so I was just looking at the cheapest option, and adding more hard drives seemed to be the easiest and cheapest one (spent $150 instead of closer to $500-600). Anyways, just for reference, he is using FileZilla; maybe it's configured wrong, which wouldn't surprise me, since I am running some backup servers with Debian and vsftpd at a different location and they have no problems keeping up (but they also handle a quarter of the files).
 
don't go with any SandForce drive. they write throttle. you will be throttled in less than a week!
 
While the erase cycles are a physical limitation of NAND, it is not as if a cell with 10,000 guaranteed cycles will stop working after exactly 10,000 cycles. Like all manufactured things, NAND flash cells are subject to a stochastic spread of parameters. This means there are a lot of cells that may survive 100,000 or even more cycles, and there are cells that will not survive 10,000 cycles. The manufacturer just chooses a number at which the vast majority of devices will remain reliable.

Just because one drive can sustain a specific amount of written data definitely doesn't mean that all drives of the same type will. Not even all drives will survive the manufacturer-guaranteed amount of data, but the failure percentage will be very low. It is not possible to determine how many cycles a specific cell can tolerate without using them up.

are you telling me you honestly believe that some intel 320 series drives can only handle 36tb of data (20gb a day for 5 years) and will die from the nand wearing out, while others can handle 85gb a day for 10 years, a total of 310tb, and still not be dead? you really think nand is that volatile that some nand only has 300 cycles while other nand has 3,000?
Exactly that. Please understand how statistics and semiconductors in general work. This is the same for NAND as for all other semiconductors, like CPUs. While one may fail after 1 year, another one of the same type may endure a million years, but ALL will eventually fail if you wait long enough.
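
To illustrate the point, here is a toy simulation. The distribution and every number in it are invented purely for illustration; they are not measured NAND data:

# purely illustrative: endurance of individual parts scatters around the rated figure
import random

random.seed(0)
rated_cycles = 10_000   # hypothetical guaranteed erase-cycle rating

# assume (purely for illustration) a lognormal spread centered around 3x the rating
samples = [random.lognormvariate(0, 0.5) * 3 * rated_cycles for _ in range(100_000)]

below_rating = sum(s < rated_cycles for s in samples) / len(samples)
median = sorted(samples)[len(samples) // 2]
print(f"median ~{median:.0f} cycles, "
      f"{below_rating:.1%} of samples fail before the {rated_cycles}-cycle rating")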
 
85GB per day on a consumer SSD is beyond specs (which hover around 20GB/day), so yes it should die more quickly, maybe less than a year depending on the model (and size).

I was thinking this too. I've killed USB flash drives like this. I had a flash drive I used for movies before I got an HTPC. I'd copy the movie to it, then put it in the TV to watch. After about a month of watching a movie every weekend or so, all that data transferring (among other data, as I used it for other stuff too) just killed the USB stick. I could see the files, but not write or change anything. The last file I had attempted to copy was also corrupted. It was toast. SSDs can tolerate way more, but at that rate I don't see it lasting long.

Personally, I'd do a RAID 0 with high-speed drives, and then have a RAID 5 or 6 for archiving. You could maybe also get away with a big RAID 10, and just keep them there.
 
Throughput of the drive isn't the issue here; the amount of data involved isn't huge. We're talking about a few MB a second at most, with one file being written at a time. It needs proper caching, is all. At most, move the FTP archive to a dedicated drive to avoid excessive seeking, combined with working out whatever is preventing the server from buffering writes while the disk is busy. The OP's problem is mainly on the software side, IMHO. It's just an FTP server; there are many options available on all platforms. If you can't get the result you need with FileZilla, try something else. 80 or 100GB a day is not going to be a problem for a single drive.
 
I was thinking this too. I've killed USB flash drives like this.

this is why we need less irresponsible reposting of myths that lead people to actually believe they are going to hit the write cycle cap on 34 or even 25nm ssds.

guys, you are not going to hit your write-cycle limit. 85gb a day? no, still won't happen. if you purchase a sandforce drive it is literally impossible to hit your write-cycle limit for the first 3 or 5 years; the firmware won't let you.

if you purchase an intel 320 drive, you have over 300tb worth of writes to use. you aren't going to come close.
 
Wow that's awesome. I want one. :D Actually, looking at the specs, it's not THAT impressive, unless there's a typo there:

175MB/s Read rate.
145MB/s Write rate.

For writes I get close to 200MB/sec... I've even clocked 300MB/sec burst on my RAID array. For reads I've clocked 3GB/sec, but 2GB/sec seems to be the average. This is nothing fancy, just WD Black drives using MD RAID.
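
For what it's worth, this is roughly how I time sequential writes (a crude sketch; the file size, block size, and path are arbitrary choices, and it only measures writes, not reads):

# crude sequential-write timer -- file size, block size, and path are arbitrary
import os
import time

TEST_FILE = "throughput_test.bin"
BLOCK = b"\0" * (4 * 1024 * 1024)   # 4 MiB per write
TOTAL_MB = 2048                      # write 2 GiB in total

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB // 4):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())             # make sure the data actually hit the disk
elapsed = time.time() - start

print(f"{TOTAL_MB / elapsed:.0f} MB/s sequential write")
os.remove(TEST_FILE)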
 