Intel's new 34nm SSDs?

Any more word on new 34nm INTEL drives?

What I have is that they're expected before the end of July, that they'll have a 320-gig model, and that they use the same controller chip. They'll use a bit less power, initially be more expensive than the current Intel SSDs, and offer no fixes for their reliability issues.

I guess you don't quite understand how the wiper.exe app works,
Since the subject hasn't previously come up in this conversation, I'm not sure how you've arrived at your guess. But what is it, specifically, that you think I don't understand?

I imagine it's because smaller writes are less likely to cause a system hang, since you aren't writing sequentially for a long period of time.
Sequential writes aren't required to trigger the problems with the Intel SSDs. Random writes work just as well.

It's called an educated assumption. Because 1) you don't have the space you would like on a data drive, and 2) the typical desktop user is well informed and interested in limiting disk writes.
Everyone limits disk writes; systems that avoid I/O by hitting cache instead are orders of magnitude faster than those that don't. But how does that support your assertion that "you rarely do sustained writes on a 80-160GB hard drive"? If you have data to write, you write it; it doesn't matter how big the drive is. Note that hard drives let you re-write the same sector again. If you store a bank balance for a particular account on a given sector, you can read that sector, change the balance, and write it out again to the same sector. That doesn't take additional space, so the amount of free space, or the amount of total space on the drive, doesn't really limit the number of writes the drive might see.
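To make that in-place rewrite concrete, here's a minimal Python sketch of the read-modify-write cycle; the file name, offset, and record layout are made up purely for illustration, and it assumes the file already exists and is large enough:

```python
import os

SECTOR_SIZE = 512                   # classic 512-byte sector
SECTOR_OFFSET = 7 * SECTOR_SIZE     # hypothetical sector holding this account's balance

# "balances.dat" is a stand-in for the raw device or database file.
fd = os.open("balances.dat", os.O_RDWR)
try:
    # Read the sector, update the balance stored in its first 8 bytes,
    # and write the sector back to the exact same offset.
    sector = bytearray(os.pread(fd, SECTOR_SIZE, SECTOR_OFFSET))
    balance = int.from_bytes(sector[:8], "little")
    balance += 100                  # apply a deposit
    sector[:8] = balance.to_bytes(8, "little")
    os.pwrite(fd, bytes(sector), SECTOR_OFFSET)
finally:
    os.close(fd)
```

Run that in a loop a million times and the file never grows; the write count goes up, the space used doesn't.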

I'm pretty sure benching IOMeter constantly is horrible for the drive. Also, if you're running a database (array or whatnot) you should leave at least 25% free space... clearly.
Mechanical drives hold up to the same test without any trouble. In operation, the access pattern the drive sees is not much different from IOMeter's -- assuming the IOMeter test was set up correctly.
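If you don't have IOMeter handy, a rough stand-in for that kind of access pattern is just a loop of synchronous 4 KiB writes at random aligned offsets. The file name, file size, and write count below are arbitrary placeholders, and the sketch assumes a Unix-like system (for O_DSYNC):

```python
import os
import random

FILE_SIZE = 1 << 30     # 1 GiB test file (arbitrary; only affects how much of the drive is touched)
BLOCK = 4096            # 4 KiB blocks, typical of random-write benchmark settings
WRITES = 50_000         # arbitrary number of I/Os

# O_DSYNC pushes each write through to the device, so the drive rather than
# the OS cache absorbs the load.
fd = os.open("testfile.bin", os.O_RDWR | os.O_CREAT | os.O_DSYNC)
os.ftruncate(fd, FILE_SIZE)

payload = os.urandom(BLOCK)
try:
    for _ in range(WRITES):
        offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
        os.pwrite(fd, payload, offset)
finally:
    os.close(fd)
```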
 
Some info on how moving to 34 nm feature sizes has affected write cycles and data retention would be nice. I doubt it'll be the 100k cycles and 10 years of old SLC drives.
 
Obviously it won't be; they're MLC chips, just like some of the current products. I don't think anyone is using 10-year old technology in their SSD products.
 
Obviously it won't be; they're MLC chips, just like some of the current products. I don't think anyone is using 10-year old technology in their SSD products.

I think you misunderstood. He meant the 10-year lifespan. Also, almost any computer tech has 10 years' worth of technology... the ATA specification is just one example :p
 
Some info on how moving to 34 nm feature sizes has affected write cycles and data retention would be nice. I doubt it'll be the 100k cycles and 10 years of old SLC drives.

I assumed that the 34nm transition would affect the MLC drives mainly?
 
I assumed that the 34nm transition would affect the MLC drives mainly?

Any Flash chip is affected by smaller feature sizes, as the cells are smaller and thus more easily damaged while writing, plus electron leakage has increased.
 
Since the subject hasn't previously come up in this conversation, I'm not sure how you've arrived at your guess. But what is it, specifically, that you think I don't understand?

That write degradation does not happen with those drives.

In any case, I am fairly certain that the next-gen drives (especially from Intel and the HDD manufacturers) will all properly support native TRIM, and this performance degradation will be a thing of the past for all SSDs.
 
That write degradation does not happen with those drives.
Which drives? The Intel drives? It does.

Or do you mean drives that support TRIM? It can happen to such drives, as well. If nothing in the stack is issuing the TRIM command, then the issue happens, same as before. If something does issue the TRIM command, then it ends up resetting the drive's usage tally for the sectors which it thinks are not virgin and pre-erases those sectors.

While subsequent writes would not have to worry about pre-conditioning the cells in that sector, that's a savings if and only if high-bandwidth access to the drive can continue while the TRIM command is working, and the TRIM command's background work rate can keep up with the need to find fresh sectors. Otherwise, it's back to the original situation. And even then, I think there might be issues with the write-leveling algorithm.
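To sketch what that bookkeeping looks like, here's a toy model in Python (this is not Intel's or anyone else's actual FTL, just an illustration of the idea): TRIM boils down to the host telling the drive which logical sectors no longer hold live data, so the controller can reclaim and pre-erase the backing flash in the background instead of on the write path.

```python
class ToyFTL:
    """Toy flash translation layer: maps logical sectors (LBAs) to flash pages."""

    def __init__(self, total_pages):
        self.free_pages = set(range(total_pages))   # pre-erased, ready to write
        self.mapping = {}                           # LBA -> flash page with live data
        self.stale_pages = set()                    # superseded data awaiting erase

    def write(self, lba):
        # Rewriting an LBA leaves its old page stale. Without TRIM, the only time
        # stale pages get reclaimed is here, on the write path, once the drive
        # runs out of fresh pages -- which is the slowdown everyone sees.
        if lba in self.mapping:
            self.stale_pages.add(self.mapping[lba])
        if not self.free_pages:
            self._erase_stale()                     # slow work during the write
        self.mapping[lba] = self.free_pages.pop()

    def trim(self, lbas):
        # TRIM: the host declares these LBAs dead, so their pages can be
        # reclaimed ahead of time, off the write path.
        for lba in lbas:
            page = self.mapping.pop(lba, None)
            if page is not None:
                self.stale_pages.add(page)
        self._erase_stale()

    def _erase_stale(self):
        # Stand-in for block erases: stale pages become writable again.
        self.free_pages |= self.stale_pages
        self.stale_pages.clear()
```

Whether that pre-erasing actually helps is exactly the question above: it only pays off if it can run in the background fast enough to keep the free pool stocked while foreground writes keep coming.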
 
I'm talking about Indilinx drives.

Yes, wiper.exe issues that command.
 
And, as you all know, the latest Vertex firmware has enabled "Advanced Garbage Collection", which is separate from any TRIM or OS-related connections and is great for my ICH10R RAID 0 config. I've been running it for a while and it works. My Vertex drives bench like new after weeks of use now. Good stuff.
 
Unless they've fixed the write leveling issues, these drives will be just as useless as the previous generation.

What's this problem with SSDs making them useless? Is this something that only happens in non-normal conditions, or can it affect normal users? I scanned the rest of the messages for some sort of logical reasoning/explanation but didn't understand the techno-babble in most.

In plain English terms, to normal users of SSDs, what's the problem with the current generation of SSDs making them useless?
 
Is this something that only happens in non-normal conditions, or can it affect normal users? I scanned the rest of the messages for some sort of logical reasoning/explanation but didn't understand the techno-babble in most.
It depends on what you mean by "normal". Since it's a technical issue with a technical product, I'm not sure how to explain it in a way that a non-technical reader can understand, particularly in a way that's different than what's already in the thread and the links.
 
It depends on what you mean by "normal". Since it's a technical issue with a technical product, I'm not sure how to explain it in a way that a non-technical reader can understand, particularly in a way that's different than what's already in the thread and the links.

I'm certainly no techie when it comes to SSDs, nor fully familiar with the proprietary jargon used by data storage experts. However, as I have to deal with high-powered science/engineering/quant type guys on a daily basis, I'm used to reasoning out complex subject matter if it's communicated in a logical manner.

So let's try a logically reasoned-out explanation: what's wrong with current-generation SSDs that makes them useless (or close to useless)?

(This ain't a trick question and I'm not having a go at your communication skills - I am curious, mainly to see if it applies to my usage of SSDs.)

EDIT UPDATE: Nevermind. I've had a search of the Internet about write-leveling and SSD's. Found your name in quite a few of these threads. Read the arguments for and against, and made up my own mind about this. I'm happy now and am not really gonna worry about this ;)
 
"Unless they've fixed the write leveling issues, these drives will be just as useless as the previous generation."

Useless for who? You? Sorry, the company you work for? Except for a very few home users doing some retarded crap, no home users will run into the issues you bring up. It is stupid to suggest that the Intel SSDs, or any SSD, are useless because of a problem that will affect no one (in the context of [H] forum readers).

I could just as easily say all spindle disks are useless because they all produce too much noise to be used in a device I'm working with. I'm not going around saying they're useless based on my random application requirement.
 
I could just as easily say all spindle disks are useless because they all produce too much noise to be used in a device I'm working with.
That's pretty subjective, actually. :p
I'm not going around saying they're useless based on my random application requirement.
Well, when Intel's marketing states that the product is for enterprise applications, and it fails under such conditions, what is it, then? If you can't depend on it to remain functional (like, not just slowing down, but falling off the bus completely), just how useful is it? If a spinning disk can take the beating of the most demanding databases vs. a much faster, yet less reliable SSD, which are you going to use?

The performance of Intel SSDs is great; the reliability, not quite so much.
 
It is stupid to suggest that the Intel SSDs, or any SSD, are useless because of a problem that will affect no one (in the context of [H] forum readers) ...
I could just as easily say all spindle disks are useless because they all produce too much noise to be used in a device I'm working with. I'm not going around saying they're useless based on my random application requirement.
If you're not using the drive to its potential, then it seems remarkable that you've justified its exorbitant cost. Wouldn't you be happier with a drive that's almost as fast, more predictable, more reliable, and about one fifteenth the price?

I think your generalization about nobody at the forum having enterprise-level requirements for their hardware is a little bit hasty. While such users are in the minority, there are many people here doing demanding work with their machines.

That work often involves aggressive access patterns including lots of writes and the expectation that the device will respond within a reasonable amount of time, that the performance won't degrade, and that the device won't prematurely fail ... especially at twice the price per gig as enterprise-quality SAS drives.

I think such expectations from a storage product are far more common than worries about noise requirements. If your priorities lie elsewhere, that's nothing to be upset about. Enjoy your drives and carry on -- though I wonder why you spent so much.
If you can't depend on it to remain functional (like, not just slowing down, but falling off the bus completely), just how useful is it? If a spinning disk can take the beating of the most demanding databases vs. a much faster, yet less reliable SSD, which are you going to use?
Even beyond reliability, predictability is an important issue.

The current SSD drives, when degraded, really aren't much faster than fast mechanical drives. SSDs have pretty slow burst transfer rates, but their advantage is that random accesses are practically free. If transfers are so slow that they're overcome by the seek times of mechanical drives, they're inapplicable.

While the TRIM command might help more casual users, I don't think it does much for enterprise-class storage. I don't see anything in the 34nm product rumors that make it appear that this problem has been solved.
 
As Intel prepares to release the 34nm models, I guessed that the current 80 GB and 160 GB X25-Ms would start to taper down in price. Intel is quite intelligent about their pricing strategies, and they never resort to price slashing.

I was almost stunned to see the X25-M 160GB go up in price at Newegg today.

Do you see this as pure supply and demand?

I have no concern with performance degradation, or the need to back up, wipe, and re-install once every 6 months to a year (if needed), but it would seem to be a dumb time to buy a current X25-M model. Am I wrong?
 
A "good time" is generally a no-man's land. Intel rarely cut prices rather it just EOLs products and slots their newer offerings at the same price brackets.

When the new products come out, it's basically a matter of searching around for whoever is having a fire sale.
 
As Intel prepares to release the 34nm models, I guessed that the current 80 GB and 160 GB X25-Ms would start to taper down in price. Intel is quite intelligent about their pricing strategies, and they never resort to price slashing.

I was almost stunned to see the X25-M 160GB go up in price at Newegg today.

Do you see this as pure supply and demand?

I have no concern with performance degradation, or the need to back up, wipe, and re-install once every 6 months to a year (if needed), but it would seem to be a dumb time to buy a current X25-M model. Am I wrong?

I think you're wrong; they cut prices a while back due to competition. I think this was back in April or so.
 
SSDs have pretty slow burst transfer rates, but their advantage is that random accesses are practically free. If transfers are so slow that they're overcome by the seek times of mechanical drives, they're inapplicable.

There is not a single mechanical drive on the planet that can keep up with even an Indilinx-based drive in ANY performance category. In some cases the difference is comparatively small (factors of ~2x) and in some cases it's rather large (factors of ~20x+).
 
It's a dumb time to buy most hardware right now period, as the sales / clear outs / price cuts usually happen around August / September.
 
I think you're wrong; they cut prices a while back due to competition. I think this was back in April or so.

Yes, they did drop in April, but the 160GB X25-M went up at Newegg yesterday about $20+ to just under $650. It had been double the price of the 80GB which remains at $315.

It's a dumb time to buy most hardware right now period, as the sales / clear outs / price cuts usually happen around August / September.

Right. The "back to school" stuff, etc. You are correct.
 
There is not a single mechanical drive on the planet that can keep up with even an Indilinx-based drive in ANY performance category. In some cases the difference is comparatively small (factors of ~2x) and in some cases it's rather large (factors of ~20x+).
OK, I'll bite. Which Indilinx drive are you using? The OCZ Vertex ones?
 
Many are saying that the Corsair P128/P256 are the best overall value currently in SSDs.
They use the second-generation Samsung controller.

Anyone done any testing or configurations with them?
 
Even beyond reliability, predictability is an important issue.

The current SSD drives, when degraded, really aren't much faster than fast mechanical drives. SSDs have pretty slow burst transfer rates, but their advantage is that random accesses are practically free. If transfers are so slow that they're overcome by the seek times of mechanical drives, they're inapplicable.

While the TRIM command might help more casual users, I don't think it does much for enterprise-class storage. I don't see anything in the 34nm product rumors that make it appear that this problem has been solved.
Thanks for pointing that out, predictability is definitely a big thing, now that you mention it.
 
Yep, but mechanical drives don't degrade just because of ordinary usage.

In real-world use, random access latency is most important because most accesses aren't sequential. This means that burst transfer speed is more important than STR, since each random access constitutes a burst and the transfers aren't sustained.
 
After hearing all of these problems with SSDs, I am glad I just stuck to my 6400AAKS. 30 second boot up, pretty quick apps...Don't need much more :p
 
Yep, but mechanical drives don't degrade just because of ordinary usage.

In real-world use, random access latency is most important because most accesses aren't sequential. This means that burst transfer speed is more important than STR, since each random access constitutes a burst and the transfers aren't sustained.

So with this in mind..

If a mechanical drive bursts at, say, 300 MB/s and has a random access time of 5 ms, that's 300 MB/s of bandwidth (potentially) over 5 ms.

If an SSD bursts at, say, 200 MB/s and has a random access time of 0.1 ms, that's 10,000 MB/s of bandwidth (potentially) over 5 ms.

Right? And this is why in PCMark Vantage SSDs score so much higher, from 5-10x faster.
 
I think your generalization about nobody at the forum having enterprise-level requirements for their hardware is a little bit hasty. While such users are in the minority, there are many people here doing demanding work with their machines.

That work often involves aggressive access patterns including lots of writes and the expectation that the device will respond within a reasonable amount of time, that the performance won't degrade, and that the device won't prematurely fail ... especially at twice the price per gig as enterprise-quality SAS drives.

I think such expectations from a storage product are far more common than worries about noise requirements. If your priorities lie elsewhere, that's nothing to be upset about. Enjoy your drives and carry on -- though I wonder why you spent so much.

What are you talking about regarding enterprise requirements?! I am running a bank of SSDs on a Symmetrix DMX4 and the performance is mind-blowing. It is plenty fast and predictable.
 
What are you talking about regarding enterprise requirements?! I am running a bank of SSDs on a Symmetrix DMX4 and the performance is mind-blowing. It is plenty fast and predictable.

Maybe filling up the drives all the way when you're told you shouldn't?
 
Right? And this is why in PCMark Vantage SSDs score so much higher, from 5-10x faster.
Yep.
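For a quick sanity check of that arithmetic (the units should be milliseconds, and the 4 KiB transfer size below is my assumption, not something stated above), the effective small-random-read throughput works out like this:

```python
# Effective throughput = transfer size / (access time + transfer time)
BLOCK = 4096  # bytes moved per random access (assumed)

def effective_mb_per_s(burst_mb_s, access_ms):
    transfer_ms = BLOCK / (burst_mb_s * 1e6) * 1e3       # time to move one block
    return BLOCK / ((access_ms + transfer_ms) / 1e3) / 1e6

hdd = effective_mb_per_s(burst_mb_s=300, access_ms=5)    # ~0.8 MB/s
ssd = effective_mb_per_s(burst_mb_s=200, access_ms=0.1)  # ~34 MB/s
print(f"HDD: {hdd:.1f} MB/s, SSD: {ssd:.1f} MB/s, ratio: {ssd / hdd:.0f}x")
```

Pure 4 KiB random reads come out around 40x apart; whole-suite benchmarks like PCMark Vantage mix in sequential transfers and cached hits, which is presumably why the overall scores land closer to the 5-10x range.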

What are you talking about regarding enterprise requirements?! I am running a bank of SSDs on a Symmetrix DMX4 and the performance is mind-blowing. It is plenty fast and predictable.
I'm thinking of any application that involves lots of writes and reads, twenty-four hours a day, seven days a week, on internet-scale systems, with predictable response requirements.

For read-mostly applications, the current generation of drives aren't bad. For applications where they're not stressed, the drives perform pretty well. When the write mix gets too thick, they start falling apart. This is why so much has been written about the problem, why workarounds like proprietary "defragmentation" and commands like TRIM are being proposed -- but still aren't stabilized.

These aren't things that people developing sensible enterprise solutions like to rely upon.

Maybe filling up the drives all the way when you're told you shouldn't?
That's not what I'm talking about. That said, a drive that falls over sharply when used to its documented storage capacity probably isn't the right one for any application that expects any service level. Is it? Is it somehow acceptable for drives in a price range that's ten to fifteen times higher than competitive, more-established technology?
 
After hearing all of these problems with SSDs, I am glad I just stuck to my 6400AAKS. 30 second boot up, pretty quick apps...Don't need much more :p
and I'll keep enjoying my SSD and ignoring all these ridiculous scenarios people create where the drive is "useless" :rolleyes::rolleyes::rolleyes:
 
That's not what I'm talking about. That said, a drive that falls over sharply when used to its documented storage capacity probably isn't the right one for any application that expects any service level. Is it? Is it somehow acceptable for drives in a price range that's ten to fifteen times higher than competitive, more-established technology?

I'm sure they warn you to leave 25% free space? I mean the technology is different so you treat it/use it accordingly, no? I guess I'm confused as to how much of the drive you're filling up/using.

Say 400 GB out of 400 GB? Or 200 GB out of 400 GB? Etc.
 
I'm sure they warn you to leave 25% free space?
Who's "they"?

I mean the technology is different so you treat it/use it accordingly, no? I guess I'm confused as to how much of the drive you're filling up/using.
None. I'm just running IOMeter or SQLIo against it. I think the test files these programs create would end up covering about an eighth of the drive, at most.
 
I think you're heavily confusing your "real world" use with that of an average consumer of a drive like this. Saying these drives are useless because they don't perform at the same sustained level as an enterprise spindle-based SAS drive under a heavy simulated load like IOMeter running all day is a bit of a straw man in this argument.

These drives do suffer a level of fragmentation and slowdown from the problems associated with flash and controller design, but it's nowhere near as bad or game-ending as you're trying to suggest in this discussion.

While the number of SSD adopters is a tiny fraction compared to the existing spindle-based drive users, it is growing by leaps and bounds every month as word spreads that, unlike most other components of the current-day PC, this is one area where an upgrade can have a dramatic effect on the everyday performance experience of an end user.
 