Hot? 64GB $75 AR

that's a huge rebate.

seems like a good deal for an SSD

will these fit in a netbook?

looking at getting a netbook soon here, might be fun to swap one in for some extra battery life?
 
The issues are with most MLC-based SSD technology, not just OCZ products. As SSD tech matures it'll get faster, but given the price, and the ability to "get around" some of the MLC issues (using Windows SteadyState, which converts all random writes to such devices into sequential ones), I'd say it's a fantastic deal. It would of course be better if it were just $74.99 out the door, so to speak, instead of after a stupid rebate that could take a while to get to ya.

Not a bad deal at all... but then again, $100 rebates... on a $175 product... that tells ya something. :)
 
ssds can only be read/written a certain number of times before exploding... or something like that...
 
LOL, they don't explode, and basically they'd last about 3 years if you did constant reads and writes to them.
 
We're talking about decades of use from one of these devices - not days, not weeks, not months, not even years of daily regular usage... we're talking decades to do any serious cell damage. People frown upon SSD hardware as soon as someone mentions "they have limited write cycles" but fail to realize that in a given day of typical use, rewriting any given sector even once is almost a miracle, considering the wear-leveling technology all SSD hardware has (at least the modern stuff, as opposed to first-generation SSD hardware from what, 1-2 years ago).

To put it bluntly, here's the simple truth:

With wear-leveling technology in place, the only possible way you can actually "wear out" a cell of SSD memory would be to write an application that specifically wrote/erased/rewrote/erased/etc one single cell over and over and over again. But, because of wear-leveling, it's practically impossible to do that - the wear-leveling technology will not allow you to write to a cell that was just written to even if you just erased the content of that cell.

The data goes to another cell. Think of it this way:

If you have a 64GB SSD (and I couldn't care less about the binary math here, I'm just making this simple to understand), before even 1 cell of those 64 billion bytes gets used twice, you'd have to write a byte of data to the other 63,999,999,999 cells (thinking 1 byte = 1 cell, sue me if that's inaccurate) before that 1st one gets erased and rewritten. Wear-leveling ensures that all the cells get written to at least one time before the erasing/rewriting starts.

This isn't hard drive technology, and most people simply don't get it. If I write a 1 byte file to the 64GB SSD, and I delete it, I'd have to write that file 63,999,999,999 more times before it got written to the exact same cell it was in the very first time I wrote it to the drive.
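That round-robin idea can be sketched in a few lines of Python. This is a toy model only - the cell count is made up, real controllers work on whole erase blocks rather than single cells, and actual allocation policies are vendor-specific - but it shows the core rule: a new write always lands on the least-worn cell.

```python
# Toy model of dynamic wear-leveling: each logical write goes to the
# least-worn cell, so no cell is rewritten until every other cell has
# been written at least once. (Sketch only: real controllers work in
# erase blocks, not single cells, and the policies vary by vendor.)
NUM_CELLS = 8  # stand-in for the drive's 64 billion cells

wear = [0] * NUM_CELLS  # write count per cell

def write_one_byte():
    # Pick the cell with the lowest wear count - the heart of wear-leveling.
    target = min(range(NUM_CELLS), key=lambda c: wear[c])
    wear[target] += 1
    return target

# Write (and "delete") the same 1-byte file over and over:
for _ in range(NUM_CELLS):
    write_one_byte()

# Every cell was touched exactly once before any cell got reused.
print(wear)  # -> [1, 1, 1, 1, 1, 1, 1, 1]
```

Note that deleting the file between writes changes nothing here: the wear counts, not the current contents, decide where the next write goes.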

So, stop worrying about "burning out" an SSD - it simply won't happen. One person showed the math that if you wrote 16GB a day, it takes 4 days to fill up that 64GB drive - that means each cell gets written to 1 time in 4 days, so simple math works out to:

1 write per cell every 4 days, at a lifespan of 10,000 write cycles per cell, works out to 40,000 days = 109 YEARS before you'd have to worry about 1 cell going bad, theoretically speaking. :D
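That back-of-the-envelope math, spelled out (the 10,000 cycles and 16GB/day figures are the round numbers used above, not any drive's actual spec sheet):

```python
# Endurance estimate from the post above. Assumptions (round numbers
# only): 10,000 program/erase cycles per MLC cell, 16GB written per day.
drive_gb = 64
written_per_day_gb = 16
cycles_per_cell = 10_000

days_per_full_pass = drive_gb / written_per_day_gb    # 4 days to touch every cell once
lifespan_days = days_per_full_pass * cycles_per_cell  # 40,000 days
lifespan_years = lifespan_days / 365                  # ~109.6 years

print(f"{lifespan_days:.0f} days ≈ {lifespan_years:.1f} years")  # -> 40000 days ≈ 109.6 years
```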

And one other benefit: you don't need to defrag SSD hardware because:

- the random access time remains the same from start to finish, unlike physical hard drives with spinning platters
- the wear-leveling technology would get in the way as you couldn't "move" data to the "beginning" of the drive, etc.

File placement becomes moot in terms of performance because SSDs don't have the mechanical constraints of moving parts.
 
if the business's acronym was IBM instead of OCZ, I'd be on it like white on rice! Are we still allowed to say that? Can someone ask the great Obama if we are and get back with me... (best Lumbergh voice) "Thaaanks."
 
if the business's acronym was IBM instead of OCZ, I'd be on it like white on rice! Are we still allowed to say that? Can someone ask the great Obama if we are and get back with me... (best Lumbergh voice) "Thaaanks."

WTF ?!?!?! :) Uhmmm... ok, I'm stumped...
 
If you have a 64GB SSD (and I couldn't care less about the binary math here, I'm just making this simple to understand), before even 1 cell of those 64 billion bytes gets used twice, you'd have to write a byte of data to the other 63,999,999,999 cells (thinking 1 byte = 1 cell, sue me if that's inaccurate) before that 1st one gets erased and rewritten. Wear-leveling ensures that all the cells get written to at least one time before the erasing/rewriting starts.

Not quite. For example, say I have a 64GB SSD with 40GB used. If I delete/change a file, I only have the remaining 24GB (plus whatever extra there may be) to wear-level across. For your example to be true, only 1 cell could ever be in use for storage.
 
I did say I was trying to make it simple, yanno. But the concept of wear-leveling still holds: it's designed to keep an eye on every cell that's available for storage and to make sure no uneven distribution of writing happens. Yes, if you fill a 64GB SSD with 40GB of data and never delete/move that data, that would technically leave only ~24GB of space for the normal write/erase/rewrite cycles, but wear-leveling takes that into account as well (it's pretty wicked stuff, actually).

From what I've learned about wear-leveling, it's intelligent enough to actually move data as required to maintain even wear across all the cells, not just the ones that happen to not be holding data at any given time - hence my example above. If the drive's wear-leveling allocation tables or lists show that the ~24GB is accumulating more "wear," the circuitry will move data out of the areas that make up the 40GB so the leveling either returns to an even state or, at the bare minimum, a "more" even state.
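That "move the cold data" idea is usually called static wear-leveling. A rough sketch, with the threshold, cell counts, and data structures all invented for illustration (real controllers track this per erase block, in firmware):

```python
# Rough sketch of static wear-leveling: when the rewritable cells have
# accumulated far more wear than the cells parked with long-lived data,
# relocate that "cold" data onto a worn cell and free up a fresh one.
# All numbers and the threshold below are made up for illustration.
WEAR_GAP_LIMIT = 2  # trigger when the worn/fresh gap exceeds this

cells = [
    {"wear": 5, "static": False},  # free cell, heavily rewritten
    {"wear": 5, "static": False},  # free cell, heavily rewritten
    {"wear": 1, "static": True},   # cold data parked since day one
    {"wear": 1, "static": True},
]

def rebalance(cells):
    worn_free = max((c for c in cells if not c["static"]), key=lambda c: c["wear"])
    fresh_static = min((c for c in cells if c["static"]), key=lambda c: c["wear"])
    if worn_free["wear"] - fresh_static["wear"] > WEAR_GAP_LIMIT:
        # Cold data moves onto the worn cell; the fresh cell rejoins
        # the pool of rewritable space.
        worn_free["static"], fresh_static["static"] = True, False
        worn_free["wear"] += 1  # the relocation itself costs one write

rebalance(cells)
print([c["static"] for c in cells])  # -> [True, False, False, True]
```

The relocation costs a write of its own, which is why controllers only do it when the wear gap gets large enough to be worth it.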

Sorta like those sliding picture puzzles like this:

[image: sliding tile puzzle]


You have a finite amount of space and data capacity (with some room to spare - the empty space) and the data can move around... something like that. ;)
 
Best way to understand just what that stutter is like would be to reset your hard drive controller to PIO mode and do some stuff. Trust me, when you see what PIO mode is like (most people never encounter it, lucky for them), you'll get a good understanding of just how bad that stutter can be. :D

And considering that gaming tends to deal with small data files being read from and written to the storage device the game is installed on, using an SSD strictly for gaming will not alleviate that issue at all.
 
Stay away - all I've heard is bad things about these. Their read/write speeds look fine, but they stutter due to their file system... not really sure if that's true, but they stutter, no doubt!
 
Stay away - all I've heard is bad things about these. Their read/write speeds look fine, but they stutter due to their file system... not really sure if that's true, but they stutter, no doubt!

Then you might wanna read this post as this has been covered, and a "fix" of sorts for the stuttering has been mentioned as well...
 
Then you might wanna read this post as this has been covered, and a "fix" of sorts for the stuttering has been mentioned as well...

That doesn't help us Linux users :) I'm interested in using this in a Linux server. Is there anything equivalent for that?
 
People are saying that using TrueCrypt alleviates a lot of the random write speed issues with SSD hardware, as noted by many posters in that thread and also over at NotebookReview.com in this one:

http://forum.notebookreview.com/showthread.php?t=208242&page=167

The reason I mention this is that I know TrueCrypt has a Linux-compatible version. Because of how TrueCrypt works, it effectively sits between the storage media and the OS as a system-level driver and does the same thing: it turns what would be random writes into sequential ones after the encryption process. While it's not necessary to read all 177 pages of that thread, if you go back a few pages from that link above, you'll find that people are having great success using TrueCrypt and getting spec'ed performance from their SSD hardware.

While I can't speak from experience on that aspect of using Linux with SSD hardware, considering that - once again - TrueCrypt is a free method of addressing the random write speed issues with SSD hardware, surely it must work in a similar fashion as it does under the Windows platform.

Only way to know for sure is do some testing yourself if you have a Linux box with an SSD on it.
 
Thanks for that Joe Average... I think there are a lot of misconceptions about SSDs and I used to believe them.

Definitely considering SSDs - even MLC ones - as a viable alternative to Raptors for an OS drive... I don't store anything important on my OS drive and have daily backups with WHS, so I wouldn't really be concerned about data, just speed/performance. :D
 
People are saying that using TrueCrypt alleviates a lot of the random write speed issues with SSD hardware, as noted by many posters in that thread and also over at NotebookReview.com in this one:

http://forum.notebookreview.com/showthread.php?t=208242&page=167

The reason I mention this is that I know TrueCrypt has a Linux-compatible version. Because of how TrueCrypt works, it effectively sits between the storage media and the OS as a system-level driver and does the same thing: it turns what would be random writes into sequential ones after the encryption process. While it's not necessary to read all 177 pages of that thread, if you go back a few pages from that link above, you'll find that people are having great success using TrueCrypt and getting spec'ed performance from their SSD hardware.

While I can't speak from experience on that aspect of using Linux with SSD hardware, considering that - once again - TrueCrypt is a free method of addressing the random write speed issues with SSD hardware, surely it must work in a similar fashion as it does under the Windows platform.

Only way to know for sure is do some testing yourself if you have a Linux box with an SSD on it.

The problem is that whatever you save in SSD slowness, you get back in encryption overhead, so that doesn't really help too much. Thanks for the suggestion though.
 
Oh, I bit on it... what the heck. I am building an HTPC and this should help the noise floor :)
 
The problem is that whatever you save in SSD slowness, you get back in encryption overhead, so that doesn't really help too much. Thanks for the suggestion though.

I've used TrueCrypt on some machines to test exactly that so-called overhead, and nothing was noted. The response times of the systems were not affected to any degree noticeable by the user (nor by myself) during daily use of the machines being tested, though a benchmark of the file system with encryption enabled and disabled did show a ~2MB/s "hit" on write speeds with encryption in action.

Considering today's high powered CPUs that can handle encryption without breaking a sweat, I'd think that running TrueCrypt to get ~88MB/s write speeds because of the sequential writes on some particular SSD hardware instead of not running TrueCrypt and getting < 10MB/s write speeds because of random writes is worth the so-called overhead, wouldn't you? :)
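Putting the trade-off from those two paragraphs into numbers (all figures are the anecdotal ones quoted above, not benchmarks of any specific drive):

```python
# The TrueCrypt trade-off in the post's own numbers: a ~2MB/s
# encryption hit versus escaping <10MB/s random-write speeds.
random_write_no_tc = 10    # MB/s, worst-case random writes without TrueCrypt
sequential_write_tc = 88   # MB/s, sequential writes through TrueCrypt
encryption_overhead = 2    # MB/s hit measured with encryption enabled

effective_with_tc = sequential_write_tc - encryption_overhead  # 86 MB/s
print(f"{effective_with_tc / random_write_no_tc:.1f}x faster")  # -> 8.6x faster
```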

That sub-PIO mode writing speed and subsequent massive hit to system performance - aka the stuttering - would make most anyone realize "Holy shit, gimme TrueCrypt so I can get work done again..."
 
Let's face the facts here. Putting TrueCrypt on a system to get around a performance issue with hardware is a crappy solution. You're paying for performance in the first place; you expect to get it. I wouldn't touch one of these - I'd wait for the next-gen ones, which have a different controller, to come out.
 