Next generation of SSDs to have a shorter lifespan?

Next generation SSDs will of course have a higher lifespan because they have greater capacity / more flash cells. That means wear leveling has many more cells to play with, and the average predictable lifespan of the unit will increase.
More cells, each with fewer write cycles. This means the total number of write cycles doesn't increase linearly with the increase in cells. For what you say to be true, more and more blocks would have to be set aside for wear leveling, else someone could fill up the SSD to 95% capacity and have the SSD die within a few months.

Better yet, when an SSD has exceeded its write cycles, it will turn read-only. So while it would mean you lost the hardware, you don't lose your data.

Assuming that the bad-block algorithm detects it in time. There's no 100% guarantee that this will always work. Also, generally a block is first written, then read back; if the read is bad, the data is written to another block. I'm not even sure SSDs actually turn read-only at some point, or whether they merely shrink in capacity.
 
else someone could fill up the SSD to 95% capacity and have the SSD die within a few months.
Explain that please? You do know the SSD is able to swap all sectors whether they contain data or not? An SSD that's 99% full could do wear leveling just fine.

The Intel controller doesn't let any two flash cells differ by more than 2% in write-cycle count. It will swap any that crosses this boundary. The fact that a cell contains live data is unimportant; SSDs use internal sector remapping and Windows will never know.

So what Windows thinks are sectors 1, 2, 3 may in fact be sectors 2400, 600, 5000000; i.e. totally random, depending on usage.
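
To picture that remapping, here's a crude toy model in Python - my own illustration of the general idea, not Intel's actual algorithm. The controller keeps a logical-to-physical map and always writes incoming data to the least-worn free block, so even hammering the same "Windows sector" spreads the wear over the whole drive:

class ToyWearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks      # wear per physical block
        self.logical_to_physical = {}             # what "Windows sector N" really maps to
        self.free_blocks = set(range(num_blocks))

    def write(self, logical_sector):
        # pick the least-worn free physical block for the new data
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        # the block that previously held this logical sector becomes free again
        old = self.logical_to_physical.get(logical_sector)
        if old is not None:
            self.free_blocks.add(old)
        self.erase_counts[target] += 1
        self.logical_to_physical[logical_sector] = target

ssd = ToyWearLeveler(num_blocks=8)
for _ in range(1000):
    ssd.write(logical_sector=1)      # the OS rewrites "sector 1" over and over
print(ssd.erase_counts)              # wear ends up spread roughly evenly over all 8 blocks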
 
Explain that please? You do know the SSD is able to swap all sectors whether they contain data or not? An SSD that's 99% full could do wear leveling just fine.

The wear-leveling algorithm generally becomes less efficient when more blocks are in use. Maybe they have fixed that now.
 
What do you mean by 'efficient'? It should be impossible for an Intel SSD to have two flash cells that differ by more than 2% in write cycles. It would stop all write I/O until the data has been remapped. So if you mean the write performance will be lower, then yes; but it will never allow one flash cell to have far more write cycles than the others.
 
I have found that the limiting factor of any drive's life span is its capacity.

Anyway, SSD life span is pretty good right now. If they lower it and it lowers prices a bit, I'm all for it.
 
Next generation SSDs will of course have a higher lifespan because they have greater capacity / more flash cells. That means wear leveling has many more cells to play with, and the average predictable lifespan of the unit will increase.

Did you read the thread? The point is that the smaller process size makes the cells less stable. There will be fewer write cycles. Having more cells to compensate for this is a bad, bad crutch.
 
Perhaps the real point, then, is that fewer write cycles per cell is irrelevant on its own. What *is* relevant is the product of the number of flash cells and their estimated maximum write-cycle count; in other words, the total number of write iterations you can perform.

To make things simple, you can regard that as x GB written per day, and then calculate how many years the drive lasts. Now if you double the number of flash cells, but the write cycles of each cell are reduced by 25%, the overall write iterations increase by 50%. So even though the individual NAND flash cells have fewer write cycles, such a design would yield higher overall endurance.
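
A quick sanity check of that arithmetic in Python (the cell counts are made up, only the ratios matter):

def total_write_iterations(num_cells, cycles_per_cell):
    return num_cells * cycles_per_cell

old_gen = total_write_iterations(num_cells=1_000_000, cycles_per_cell=10_000)
new_gen = total_write_iterations(num_cells=2_000_000, cycles_per_cell=7_500)   # 2x cells, 25% fewer cycles

print(new_gen / old_gen)   # 1.5 -> 50% more total write iterations, as stated above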

Good SSDs like Intel's, with proper wear leveling and low write amplification (1.1x) thanks to intelligent remapping of random writes, will give over 10 years of use before the drive switches to read-only mode and you recover your data and replace the unit.

This kind of failure is not really failure; it's the end of the lifespan without losing your data. It's not like the SSD crashes into a thousand pieces or the flash cells melt together. None of that, and that makes SSD reliability much more relaxed than HDD reliability. With HDDs, failure can come at any moment: CLICK TRRRRRR, bye-bye data.

With an SSD, such failure is extremely uncommon, just like very few CPUs fail without external cause. So in normal cases you can predict the end of life of the SSD unit; say, 3.7 years lasted so far, 5.9 years estimated with your usage pattern, just as an example. That gives you plenty of heads-up on when to move your data over; and even if you wait too long, you can still hook it up to another computer (Linux) and copy the data over in read-only mode.

So both in terms of write iterations and failure reliability, the SSD is superior, I would say.
 
Fewer write cycles means that blocks start dying sooner, meaning that capacity will decrease sooner. MLC started at ~10k cycles; now it's closer to 1k. That means you'd need 10 times the number of blocks just to compensate for the reduced number of write cycles - say, a 200 GB SSD instead of a 20 GB one. I'd say that's pretty darn relevant.

Also, HDDs don't have a set lifespan limit. There's a general MTBF, but I have seen many HDDs that went way past that. Even then, it's never the magnetic media itself which wears out. This seems to make HDDs more desirable than SSDs, which have both an MTBF and a set limit on the number of writes and thus on lifespan.
 
Fewer write cycles means that blocks start dying sooner
Wrong. If one flash cell has exceeded its write-cycle count, then all flash cells are at 98%+ of their write-cycle count. So that means you're talking about the end of the lifetime of the SSD. Blocks don't 'die'; they become read-only. Capacity won't decrease; it will stay the same.

The question of whether HDDs can go beyond their rated lifespan is TOTALLY IRRELEVANT. What IS relevant is that their failure is totally unpredictable, while SSDs would, in normal cases, only fail with you already knowing in advance. And then it didn't really fail; it's just read-only.

Don't you guys see that means SSDs are inherently more reliable? No more "disk crashes"; those should be something of the past.

And most SSDs still have a usable life of 10+ years; I'd rather take that than an HDD that can crash at any time and only in some cases lasts longer than 10 years.
That's nearing the end of its lifetime; the SSD will continue to remap sectors until virtually all sectors are at 100% of their write cycles, and the device probably won't wait for real defects before doing so.
 
I think this is exaggeration in large part.
I have had a Kingston SSD for 70-80 days and so far it has slightly below 800 GB of writes, so more or less 10 GB per day.

Even if the number of writes to a single cell is reduced in a bigger SSD, I won't care about it that much, because then the writes will be shared between, for example, 256 GB of SSD instead of 40.

Also, I still have my JMicron SSD, which is now relegated to a drive for installing games - I suspect it doesn't even get 1 GB of writes per day.
 
Games are something you install on your system drive (SSD) as well, and they shouldn't be write-heavy either. The fact that you write 10GB a day to your system drive may say something about your specific usage pattern, which may be very different from others'.

Also, if you don't have a Kingston with the Intel controller (40GB/80GB versions), then the write amplification will be higher. Meaning you write 128KiB, but sometimes the drive has to erase and rewrite a whole flash block (512KiB), so the writes get 'amplified'. This is especially true for small writes that force a read-erase-program cycle, which is very slow and also increases write iterations and thus causes more wear.

The write amplification of Intel SSDs is very low, 1.1x, while others can go up to 20x. This is very important for the durability of the SSD as well. So in essence, you may have written only 200GB, amplified to 800GB due to small writes triggering more flash write cycles. If that were the case, your average write amplification would be 4x.
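
To put that in numbers (the 200GB/800GB figures are assumptions, not measurements from your drive):

host_writes_gb = 200    # what the OS asked to write
nand_writes_gb = 800    # what the flash actually had to program
write_amplification = nand_writes_gb / host_writes_gb
print(write_amplification)           # 4.0

# at that rate the drive burns through its write cycles 4.0 / 1.1 = ~3.6 times
# faster than an Intel-class controller would for the same host workload
print(write_amplification / 1.1)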
 
Don't you guys see that means SSDs are inherently more reliable?

I think we're waiting to see if that's a fact.

I jumped on this SSD bandwagon in May because I believe in the SSD benefits but these things are fairly new in "consumer-land".

The lack of moving parts should mean increased reliability, but these first generations will have to prove themselves and that may take a while.
 
By the way, the first-generation X25-M doesn't have TRIM support, so you should leave a portion unused. If you don't, it will have to do read-erase-program cycles much more often, causing the write amplification of the Intel SSD to rise.

By unused, I mean that never in its lifetime have you written to that location. So when you get a new SSD, you partition it, make a 30GB partition on the 40GB drive, and leave 10GB unused forever. Those flash cells will actually get used by the controller to accelerate random writes and keep write amplification to a minimum.
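
The arithmetic behind that suggestion, using the 40GB example above (the sizes are just an illustration, not a rule):

drive_gb = 40
partition_gb = 30
never_written_gb = drive_gb - partition_gb
print(f"{never_written_gb / drive_gb:.0%} of the drive kept as extra spare area")   # 25%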
 
By the way, the first-generation X25-M doesn't have TRIM support, so you should leave a portion unused.

IDK anyone with an Intel G1 that did that.

I purchased a second 80GB unit for RAID0 to aid the writes and increase unused sectors but I've never seen any recommendation for a partition.

Is this an Intel recommendation?
 
Not sure, it is my recommendation, though. :)

But you need to do this early on. If you haven't, you need to wipe the SSD using a zero-write or the Secure Erase command with the HDDErase utility (google for it). HDDErase will simply wipe the special memory on the SSD that stores where the SSD is *really* storing its data. Something like:

Windows sector 1 = FLASH cell 2411
Windows sector 2 = FLASH cell 2
Windows sector 3 = FLASH cell 3
Windows sector 4 = FLASH cell 4
Windows sector 5 = FLASH cell 5567
Windows sector 6 = FLASH cell 9984
(actually flash cells are larger than sectors; but that's not relevant to my point)

When you wipe these areas, the table will be empty. That has the same effect as a complete zero-write, without using up write iterations on your SSD. All you really want is to reset those tables. It does lead to total data loss, so you'll have to back up and reinstall if you are to follow this recommendation.
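
In code terms, the effect of that erase is roughly this (a conceptual toy, not how the firmware actually stores its tables):

# the remapping table from the example above, as a simple dict
mapping = {1: 2411, 2: 2, 3: 3, 4: 4, 5: 5567, 6: 9984}

# a secure erase just throws the table away; no flash cells are rewritten,
# which is why it costs no write cycles but does lose all data
mapping.clear()
print(mapping)   # {} - every logical sector is unmapped again, as on a fresh drive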

Then you create a partition smaller than the total capacity, so some capacity stays unused. Good SSDs like Intel's already have some storage space reserved and hidden from Windows/the OS, though. The 80GB Intel really is 128GB raw capacity, I believe. The X25-E has only half the raw capacity usable, if I recall correctly, so this wouldn't be needed there.
 
Not sure, it is my recommendation, though.
It's definitely not Intel's, but I like your line of thought. :)

I used HDDErase when I went from one drive to two and I'll probably use it again when I install W7, but I think I'll leave the partitioning alone.

Your idea may have some merit but unless there's more info about the results I'm not in the mood for experimenting. :)
 
The fact that you have to be careful about how much you write to an SSD makes it inappropriate for many applications. What about people that use bittorrent or unzip things frequently on their hard drives? What about database or logging applications? What if you forget to turn off defrag? Oh no!

The fact that there are threads galore about what you have to turn OFF in Windows (indexing, logs, etc.) means that hard drives are better at some things because they can write an infinite number of times within their life span.

I just bought an SSD, but I know that I have to be careful about how I use it. Nobody has to think like that with an HDD. There is no math you can present which changes that fact. But that being said, I want to worry less, which is why I bought an SLC-based SSD.
 
What about people that use bittorrent or unzip things frequently on their hard drives?
I do all those things and am using my SSD drives just the same as I used my mechanical drives but I've always turned off many of those things you mention and more because I consider them unnecessary.

My aim is to use these drives as normal for my situation.

Many of those guides for "decreasing writes" were written for the first-generation drives with less than optimal controllers.

I bought these things to use them, not baby them.
 
The fact that there are threads galore about what you have to turn OFF in Windows (indexing, logs, etc.) means that hard drives are better at some things because they can write an infinite number of times within their life span.
The things you turn off were made for HDDs and have no relevance to SSDs. Fragmentation at the filesystem level is nothing to worry about when you are using an SSD. However, defragmenting the SSD would cause it to become internally fragmented; thus defragmentation has no useful application and should never be run on SSDs.

The same goes for SuperFetch and the many low-level I/O optimizations that increase performance on HDDs but decrease performance on SSDs. Many of those optimizations were aimed at transforming random I/O into sequential I/O. That makes sense on HDDs, since they would have to seek less often and HDDs are very fast when not seeking. But on SSDs this limits the parallel I/O potential of the unit, limiting performance.

So it's more that traditional filesystems and operating systems are stuck in ancient mechanical technology - and won't be free of that legacy source code for some time. The fact is, Windows doesn't have anything more modern than its lightweight metadata-only journaling NTFS filesystem. That's it, while Linux offers you a wide range of advanced filesystems and UNIXes like FreeBSD/OpenSolaris give you the almighty ZFS.

You would think Microsoft has every opportunity to modernize its I/O subsystem, providing good stackable software layers like the FreeBSD GEOM I/O framework does. Microsoft wasn't reluctant to borrow BSD's networking stack in the past.

I just bought an SSD, but I know that I have to be careful about how I use it.
If you bought an Intel you shouldn't have to worry; other controllers may be less intelligent and have higher write amplification, forcing you to be careful about how you use the drive.

Future-generation SSDs will get better; for now, Intel holds the cards. SandForce and Marvell controllers look nice, but I haven't seen specs on their write amplification (and thus lifespan) yet.
 
....
The 80GB Intel really is 128GB raw capacity, I believe. The X25-E has only half the raw capacity usable, if I recall correctly, so this wouldn't be needed there.

AFAIK, the extra space on Intel drives only amounts to 7% of the total capacity.
So, the 80GB is really only an 85.6GB drive.
 
sub.mesa,
You admit (and we know) that the write capacity is very limited on SSDs and manufacturers have to resort to tricks to make them useful. What if we need to use the whole drive? And, per a statement made earlier in this thread, what about the longevity of these new SSDs when the data is stagnant?

You may say that SSDs are more reliable than HDDs because they have no moving parts. But that simply eliminates one factor. SSDs include their own factors which can account for problems too. The truth is none of us knows how well MLC SSDs will do with normal usage. We'll find out in a couple of years. The marketing of CD-Rs lasting 50 years has already been disproven within 10.
 
I've got news for you: no modern HDD really knows what data it's storing. It needs ECC to correct millions of I/O errors; otherwise it wouldn't really know what data is being stored. That's all due to data density being so high that bit flips may and will occur.

You can view this in SMART logs quite easy: Hardware ECC Recovered, like in this screenshot:
http://eric514.mailpeers.net/hd-tune-ST3500630A.jpg

So the argument that SSDs are in any degree worse than HDDs because they use 'tricks' is false, as both use them, and so do CDs/DVDs. Essentially, the medium needs internal redundancy to yield acceptable reliability.
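
If you want to pull that counter from your own drive, something along these lines should work on Linux with smartmontools installed (run it as root; the exact attribute name varies per vendor, so treat this as a sketch):

import subprocess

# -A prints the vendor-specific SMART attribute table
output = subprocess.run(
    ["smartctl", "-A", "/dev/sda"],
    capture_output=True, text=True, check=True
).stdout

for line in output.splitlines():
    if "ECC" in line:                # e.g. Hardware_ECC_Recovered
        print(line)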

"No one knows" is of course not true; the Intel X25-M has been out for quite a while, and during testing and certification it runs at a 100% duty cycle, 24/7, until after a month it has about '10/20 virtual years' on it. So its endurance was already tested, and NAND flash has been in use for so long that it's not new at all. In the future, Phase Change memory may replace NAND flash, but the drives will still be SSDs because of the lack of moving parts; i.e. they stay electronic storage devices, not mechanical in nature.

Also, your distrust of specific storage media is largely unwarranted; even with a 100% crash-free SSD, you still need to make backups. Viruses, accidental deletion, filesystem damage, etc. may all cause your data to be lost. For example, many people lost data from a RAID array while no HDD/SSD crash had occurred; they just didn't have access to their data and thought they never would, so they reformatted their drives while in fact the data was still recoverable.

So you need backups anyway; what we really want to know is whether buying an SSD means you can enjoy its useful operation for at least a decent 6-10 years before it either becomes economically obsolete or reaches the end of its technical lifespan (max write cycles) and becomes a read-only device.
 
HDDs have been in use since the 50s. Flash-based SSDs for less than a decade. There's no comparison in reliability statistics there. Simulations will never match reality.

PCM could and probably will completely replace Flash within the next ten years. It does not suffer from limited data retention, has no fixed limit on write cycles, offers random access (no more blocks) and is much, much faster than Flash (no block erase, block verify, block write and such). It comes pretty close to DRAM speeds, even.
 
sub.mesa, I am aware of everything you said already. But you're not directly contradicting my argument. Despite the error recovery going on in the background, hard drives still have unlimited write cycles. For a database server which writes 24/7, an SSD couldn't hold up for as long as the manufacturer claims.

I realize that may not directly translate into home usage, but it still means you have to be aware of how you use your computer. Not true with a hard drive. I'm okay with SSD's myself and just ordered one, but they are not the panacea everyone thinks. Just wait until the virus writers create something that writes a block 1000 times per second.
 
I realize that may not directly translate into home usage, but it still means you have to be aware of how you use your computer. Not true with a hard drive. I'm okay with SSD's myself and just ordered one, but they are not the panacea everyone thinks. Just wait until the virus writers create something that writes a block 1000 times per second.

With the right approach it'd be possible to create a script which writes small 4kB files to an SSD a few times a second, killing the SSD in a mere hour or two. A virus could use the same approach, thus wiping out any SSD in a system.

Ultimately TRIM and reshuffling algorithms are just patches on the failings of Flash memory. One could never use an SSD in a video editing system, as the TBs of data written to it every day would absolutely ravage it. An HDD would just keep spinning along. A PCM-based SSD wouldn't care either.
 
HDDs have been in use since the 50s. Flash-based SSDs for less than a decade. There's no comparison in reliability statistics there. Simulations will never match reality.

I agree but don't you think the simulations will be close to real world?

If a HD doesn't fail on me in 30 days, I always figure it'll give me 5 yrs of reliable duty.

I expect about the same from my Intel G1s and from what I see, that's what they are predicting?
 
I agree but don't you think the simulations will be close to real world?

Well, features like TRIM were introduced after the first SSDs had been released. Clearly the need for this wasn't predicted. Flash technology when it comes down to it also keeps degrading as feature sizes shrink, which means that while a 90 nm SLC drive could easily last 5 years in an enterprise system, a 32 nm MLC would get slaughtered in that same setup within 5 months. There are just too many differences between Flash chips and the SSDs containing them to really make any definite statements about the reliability of Flash-based SSDs. HDDs on the other hand have only improved, with SATA HDDs having virtually the same reliability (just a lower warranty and such) as SAS HDDs, the latter being aimed at enterprise environments.

Basically while we have some numbers on the very first Flash chips being used wide-scale since the late 90s now, each year those same statistics get thrown out of the window.
 
While you're correct about HDDs having no particular weakness for write-heavy operations, I think you're underestimating the endurance of SSDs, both now and in the future.

With an SSD you actually "know" the lifetime of the product, and can adjust it to your usage pattern. In the case of write-heavy database I/O that goes on 24/7, a large 8-disk RAID0 array of SSDs is an excellent way to increase endurance. Now you have 8 times as many write iterations, assuming the striping works to keep wear leveling between drives in sync.
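
The back-of-the-envelope version of that claim, assuming the striping really does spread host writes evenly across the members:

def array_write_iterations(drives, capacity_gb, cycles_per_cell):
    return drives * capacity_gb * cycles_per_cell

single = array_write_iterations(1, 80, 10_000)
raid0 = array_write_iterations(8, 80, 10_000)
print(raid0 / single)    # 8.0 -> eight times the total write iterations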

For normal usage this would not be necessary. But I sure prefer SSD endurance over what an HDD offers: an HDD may last 10+ years, but it may also die in 3 months. I'd also like to add that the MTBF ratings for modern SSDs are very comparable to those of high-grade HDDs - though I'm unsure how they calculate this, as SSDs have infinite read cycles but finite write cycles.
 
Isn't the OP's point about SSD longevity originally about data retention? The limited-write issue with SSDs has long been known, but for the average consumer, the SSD will probably outlive the system (in terms of limited writes). This is why I think the future of the SSD is at the consumer level, at least with current flash chips. When the SSD dies due to limited writes, at least you'll know about it. However, as bits start to randomly flip in 10 years with the new 25nm process (in theory), you could be losing data and never know it. That's the scary part. You could argue that this is only a theory with no proof, but personally, I wouldn't take that gamble with my data.

So basically, for long-term archiving of files, perhaps an HDD would be better, especially if you have a dedicated HDD that only gets turned on once a month for archiving. But then again, you don't really know how long an HDD is good for, mechanical issues aside. Sure, a 50-year-old hard drive might still work today, but will a modern-day hard drive work 50 years from now, especially considering how tightly packed everything is on the platters?
 
However, as bits start to randomly flip in 10 years with the new 25nm process (in theory), you could be losing data and never know it. That's the scary part. You could argue that this is only a theory with no proof, but personally, I wouldn't take that gamble with my data.
Bit flips occur frequently and are usually corrected by the ECC, though in some cases the lost bits may exceed the correction capability of the ECC - then you get a 'bad sector' that can be remapped, just like HDDs do.

However, if you're really afraid of corruption, you should really get to know the ZFS filesystem. ZFS keeps checksums of your files so it can detect corruption, and it can correct the corruption by using a redundant data source (RAID-Z/mirror/copies=2). Better yet, you can set the required redundancy per dataset, meaning your personal documents are triple-mirrored while your downloaded files have only basic RAID-Z protection.

So it's not just a question of SSD or HDD, but also a question of which filesystem. Also, I may add that HDDs have more uncorrectable errors as their data density increases, and thus require more redundancy (or 'tricks') to still remember your data. Also, an HDD stored on a shelf for multiple years will degrade magnetically, and should be refreshed (rewritten) to prevent massive bit flips and possibly data loss.
 
Also, an HDD stored on a shelf for multiple years will degrade magnetically, and should be refreshed (rewritten) to prevent massive bit flips and possibly data loss.

If kept in a stable environment, it could take 10 years for any significant degradation to take place, maybe even 20. Magnetic media is quite stable.
 
The fact that there are threads galore about what you have to turn OFF in Windows (indexing, logs, etc.) means that hard drives are better at some things because they can write an infinite number of times within their life span.

I just bought an SSD, but I know that I have to be careful about how I use it. Nobody has to think like that with an HDD. There is no math you can present which changes that fact. But that being said, I want to worry less, which is why I bought an SLC-based SSD.

You actually don't have to turn off indexing or any kind of logging (unless you have an app that does an abhorrent amount of logging); if you moved your docs folders to an HDD, indexing would be configured to monitor those only anyway... And defragging is disabled by default on SSDs (even tho the service itself is not disabled, so it can still be allowed to run on other HDDs).

I agree that we could all benefit from more discussion and testing on the longevity of SSDs, 'specially future models w/smaller flash... But there's a fine line between advocating that and some of the FUD that's propagating in this thread.

Oh and people that would unzip huge files or download a ton of data via bittorrent would usually know better than to do it directly to their SSD unless they actually want to do it for performance reasons... The average user doesn't do any of that. I'd still argue that getting an SLC drive was a completely panic-driven decision on your part. /shrug

Well, features like TRIM were introduced after the first SSDs had been released. Clearly the need for this wasn't predicted. Flash technology when it comes down to it also keeps degrading as feature sizes shrink, which means that while a 90 nm SLC drive could easily last 5 years in an enterprise system, a 32 nm MLC would get slaughtered in that same setup within 5 months.

AFAIK TRIM was introduced to preserve performance as the drive is used, not to prolong its lifespan... And TRIM was in discussion among SSD manufacturers pretty early on, but MS held it back from their OS 'till the Win7 release, so that was out of their hands to an extent. We were reading about TRIM support before the G2 Intel drives had even been announced (almost a year ago now), not long after the first decent SSDs started to undergo real-world usage (the G1s and the first tweaked Indilinx drives).
 
Oh and people that would unzip huge files or download a ton of data via bittorrent would usually know better than to do it directly to their SSD unless they actually want to do it for performance reasons.

What? :confused: You just made my point. This means I have to be careful about what I do. I simply cannot use it as a hard drive. The same is true of using it for video editing or recording. In fact, the best uses of a fast SSD are the very things that would destroy it faster. That's quite an Achilles heel.
 
There's absolutely no reason you can't unzip large files to your SSD; if you want to do it because they'll unzip faster, then by all means... Go ahead. I don't see why you would want to download bittorrent streams directly to the SSD tho, they're not really gonna benefit all that much from the SSD vs a HDD, and you're gonna have to juggle stuff around anyway for logical space reasons (not write-cycle concerns or any other sort of paranoia).

I use mine as I would use a regular drive, with the exception that I keep most of my data on a HDD... But whenever I'm gonna work on a large batch of photos or a video, I move it right to the SSD (I just drag the file to the desktop for convenience); I don't see why you wouldn't do that. You're taking things entirely out of context: the only reason not to do any of that with a current-day SSD is not because it'd kill it faster, but for logical space constraint reasons, period.

On my netbook I do everything on the SSD, obviously I don't work w/large content anywhere near as much on my netbook as I do on the desktop... But I intend to recycle the 40GB X25-V in it on a CULV laptop in a few months and it'll still be a regular daily-use drive. /shrug
 
This issue is being blown out of proportion. It seems like a lot of nitpicking going on.

Intel says you can do 20GB/day of writes for 5 years. That is much more than any normal, or even heavy user can manage. Sure, if you have some database that is going to be hammered 24/7, maybe you can, but if that is you, you know your needs are unique. And even then, you really have to be hammering a database 24/7. Most databases I see would really not have much of an issue on the MLC Intel SSD.

Unzipping files or video editing... we're talking about an 80GB drive here. No one does video editing with 80GB of space. Video editing doesn't need high ops. That said, sure, unzip all the files you want. I'd like to see you manage to unzip 20GB of data per day to an 80GB drive for 5 years nonstop. Unless you're talking about purposely trying to exceed the SSD's write limit; in that case, yeah, if that is your goal, sure, it's possible, but it won't happen unknowingly.
 
Well, to each their own. But I can move 20GB of data around in a day without blinking.

You're missing the point: sure, you can do that in a day, but are you going to be doing that every day for 5 years straight?

7 days a week and twice on sundays :p
 
Intel says you can do 20GB/day of writes for 5 years. That is much more than any normal, or even heavy user can manage.

Actually, according to this article Intel guarantees 100GB/day unless something has changed:

OEMs wanted assurances that a user could write 20GB of data per day to these drives and still have them last, guaranteed, for five years. Intel had no problems with that.

Intel went one step further and delivered 5x what the OEMs requested. Thus Intel will guarantee that you can write 100GB of data to one of its MLC SSDs every day, for the next five years, and your data will remain intact.
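
For reference, what those guarantees add up to over five years (straight multiplication, ignoring write amplification):

for gb_per_day in (20, 100):
    total_tb = gb_per_day * 365 * 5 / 1000
    print(f"{gb_per_day} GB/day for 5 years = {total_tb:.1f} TB written")
# 20 GB/day  -> 36.5 TB
# 100 GB/day -> 182.5 TB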
 
This issue is being blown out of proportion. It seems like a lot of nitpicking going on.

Intel says you can do 20GB/day of writes for 5 years. That is much more than any normal, or even heavy user can manage. ...

My system generates about 1TB worth of I/O writes/day, hence my inclination to worry about the move in the wrong direction.

No one is saying current SSDs are bad.
I own and use an Intel 160GB G2 (well worth the money).

However:
- increasing the block size from 512K to 2MB is bad
- ending up with NAND cells that can only tolerate 1/2 of the writes of the previous generation is bad
- ending up with NAND cells that have a shorter data retention period than the previous generation is bad
- ending up with NAND cells that have greater operation errors than the previous generation is bad

Wasn't this true for the current generation of SSDs (vis-à-vis the generation before it)?
True. What has changed, however, is that we are too close to the threshold where all these "bads" are no longer a good compromise for capacity.

What do we whiners want?
Stick with what works and find a way to produce it cheaper (unless there is something truly better and not just more of a compromise).
Though we want things cheap, we don't want a far too watered-down product for less money.
 