1TB SSD $84.99 + TAX @ Microcenter

I still have a 1tb Western Digital Black in my desktop. Might be time for it to go.
 
Keep in mind that there is a $5 off coupon at RetailMeNot. You can text it to your phone and have it scanned at the checkout.

I still have a 1tb Western Digital Black in my desktop. Might be time for it to go.

If you value your time, it is. It will also save some real money by using less power than your WD Black, although I'm not sure it will ever fully offset the cost that way. Still, it is something.
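For a rough sense of scale, here's a quick back-of-envelope sketch; every number in it (drive wattages, hours of use, electricity price) is an assumption, not a measurement:

# Rough estimate of the power savings from swapping the HDD for an SSD.
hdd_watts = 7.0        # assumed average draw of a 1TB WD Black
ssd_watts = 2.5        # assumed average draw of the SSD
hours_per_day = 8      # assumed daily use
price_per_kwh = 0.13   # assumed electricity price in USD

kwh_saved_per_year = (hdd_watts - ssd_watts) / 1000 * hours_per_day * 365
savings_per_year = kwh_saved_per_year * price_per_kwh
print(f"~{kwh_saved_per_year:.1f} kWh/year saved, about ${savings_per_year:.2f}/year")
# -> roughly 13 kWh and under $2/year, so the power savings alone won't pay for the drive.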

Do these bad boys have DRAM?

No, sadly these do not have any DRAM on board. Good for secondary storage, wouldn't suggest running an OS off it.
 
You can always just get a UPS for the whole system and not have to worry about power issues at all. (y)

As far as replacing the black, I would definitely still keep it as a backup drive. I still don't think there's enough information to fully know what ssd failures look like and what the warning signs may be. A sudden failure without a backup would be bad.
 

Not sure what power issue you are referring to that a UPS would solve?

I built the computer with a 128GB SSD and a 1TB WD Black, as well as a spare WD Black. After 67,000 hours the first WD Black died, but the SSD is still going strong. I imagine SSDs will, on average, easily outlast a mechanical drive.
 
It is also a different interface which my motherboard does not have.


I see; however, you can get an adapter card for $10 to $20 to make this work on just about any motherboard. And that would have the advantage of allowing you to take this SSD to your next build and get great value out of it.
 

But then you're looking at $25-35 more between the drive and the adapter, which is 30-40% of the cost of this drive. Real-world usage in an older board probably wouldn't be too much different. I've used the SATA Inland drives and they work fine.
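Quick sanity check on that 30-40% figure, using the $84.99 price from the thread title:

# Extra spend on the drive-plus-adapter route as a share of this drive's price.
drive_price = 84.99
for extra in (25, 35):
    print(f"${extra} extra is {extra / drive_price:.0%} of the drive price")
# -> about 29% and 41%, i.e. roughly the 30-40% mentioned above.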
 
I see; however, you can get an adapter card for $10 to $20 to make this work on just about any motherboard. And that would have the advantage of allowing you to take this SSD to your next build and get great value out of it.

An NVMe to PCIe adapter can actually be had for $3 shipped if you don't mind waiting a while for it.
 
But then you're looking at $25-35 more between the drive and the adapter, which is 30-40% of the cost of this drive. Real-world usage in an older board probably wouldn't be too much different. I've used the SATA Inland drives and they work fine.

True; however, I am thinking that a person who is value-minded thinks about their costs over the long term. I have drives in my computers that are more than 10 years old, so when I think about paying $25 more for a drive that could last 10 years, I imagine what system I might have over the next 3 years or so and what speeds it would be running at in there, as well as the space savings and all the other advantages of NVMe. For instance, you build a new computer and you have to reserve space for 2.5-inch drives due to this purchase. And we aren't talking about, say, going up to Samsung's Pro line, which would cost like $100 more; we are just talking about $25 more, and now we know you could also buy a $3 adapter for even less.

The only downside is you might not be able to use it as a boot drive.
 
Not sure what power issue you are referring to that a UPS would solve?

I built the computer with a 128GB SSD and a 1TB WD Black, as well as a spare WD Black. After 67,000 hours the first WD Black died, but the SSD is still going strong. I imagine SSDs will, on average, easily outlast a mechanical drive.
Others in the thread expressed fears of losing data in the middle of a write if there is a power failure.

I tend to see that same trend on most brand-name SSDs. But now we've got the bargain-brand SSDs entering the market (like the one mentioned in this thread), even from the big names, so how those hold up has yet to be seen. You definitely got some good life out of that Black, though.
 
I think long-term too, but there's a limit to how far cheap upgrades go. This drive wouldn't be what I consider an 'heirloom' type of quality piece--it's a bargain brand from a chain store--it's Walmart-brand fake cheese vs. real cheese. Plus, by the time one builds a new system, the technology has changed. Just consider the fact that 1TB SSDs were $200+ not even a year ago.
 
An NVMe to PCIe adapter can actually be had for $3 shipped if you don't mind waiting a while for it.
And these won't cause slowdowns, degrade the SSD/system somehow, introduce failures, or fail themselves?

Might be in for one... my NVMe M.2 is directly behind the GPU, which is the stupidest thing ever.
 
Not sure what power issue you are referring to that a UPS would solve?

Consumer SSDs generally lack capacitors, so drives with DRAM (not this one) are at risk of losing mapping data on sudden power loss. All SSDs (DRAM-equipped or not) are still susceptible to typical data loss due to write-combining (deferred writes) in system RAM, of course.
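On that last point, the OS-level write caching is independent of which SSD you buy: anything still sitting in the page cache is gone on a power cut regardless. A minimal sketch in plain Python of forcing a write all the way down to the device (this is generic OS behavior, nothing specific to this drive):

import os

# write() may leave the data in the OS page cache (deferred writes);
# fsync() asks the OS to push it to the device before returning.
fd = os.open("important.dat", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"critical bytes")
os.fsync(fd)
os.close(fd)
# Even after fsync, a consumer SSD without power-loss capacitors can still lose
# in-flight mapping-table updates if the power dies at exactly the wrong moment.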
 
And these won't cause slowdowns, degrade the SSD/system somehow, introduce failures, or fail themselves?

Might be in for one... my NVMe M.2 is directly behind the GPU, which is the stupidest thing ever.
These are really simple devices. They aren't converting the interface; they are just passing the PCIe lanes from the slot to the drive. Same goes for power: the slot provides that. This is a low-risk adapter. It is roughly the same quality as the ones that cost $15-20 that you find at stores like Microcenter.
 
For some of the house-branded and generic stuff there, yes, and it's unfortunate that it's garbage-quality soldering. There are also better-quality products at Microcenter too. Anyone who knows what junk soldering looks like versus good will never tell you to buy that crap. It's why these cheap components start having weird issues after about a year of being on and the reliability starts getting tested, while the more expensive ones with better soldering keep working. I've seen enough junk in junk brands and 'house' brands to not even waste my time anymore with that crap, no matter how cheap it is.
 

You're not wrong about garbage soldering. That said, this particular adapter is almost entirely the PCB itself, because it is just passing the lanes and power through. It has 4 or 5 components soldered to the PCB in total. This is a super low-risk item. If it works initially, it should work for a good while.
 
But that type of soldering still needs precision, and a lot of the cheap sweatshops out there will solder by hand rather than by machine, hence the quality issues. Sure, it's cheap because labor is dirt cheap, but it's not something to trust if you need reliability. Plus, it's a 'who cares if the customer has a problem with it, they'll never contact us' type of deal, so QA testing is out the window and defect rates can be much higher than industry standards for the same part.

All retail goods are still held to the 'triple constraint,' as it's known in project management--you have to pick whether you want something that's going to work, is cheap, or is available right now--and you only get to pick two of those characteristics.
 

Well, it does take months to ship... so it's cheap and works?
 
Any reason against buying three of these and striping them for a RAID 0 game drive on an X99 motherboard?
 
Aside from the fact that even a single glitch in writing during a power failure will probably take down the RAID? Speed-wise it would be quite nice. (y)
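That objection is just the usual RAID 0 math: the array only survives if every member does. A tiny sketch, with the per-drive problem probability p made up purely for illustration:

# Chance of losing a striped array = chance that at least one member has a problem.
p = 0.02                      # assumed, illustrative per-drive probability
for n in (1, 2, 3):
    print(f"{n} drive(s) striped: {1 - (1 - p) ** n:.1%} chance of losing the whole volume")
# -> 2.0%, 4.0%, 5.9%: three striped drives are roughly three times as likely
#    to take everything with them as a single drive.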
 
Consumer SSDs generally lack capacitors, so drives with DRAM (not this one) are at risk of losing mapping data on sudden power loss. All SSDs (DRAM-equipped or not) are still susceptible to typical data loss due to write-combining (deferred writes) in system RAM, of course.
Is that kind of functionality only on enterprise-level drives? Does it behave differently than a platter-drive RAID?
 
Does it behave differently than a platter-drive RAID?
It depends on whether the RAID cache is set to write-back or write-through. Write-back can suffer the same power-loss issue as the DRAM caches discussed above, but it's nothing that a UPS on the whole setup can't solve. ;)
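For anyone unfamiliar with the two policies, here's a toy illustration (not any real controller's firmware or API, just the concept):

# Toy model of a RAID controller cache with the two write policies.
class ControllerCache:
    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.dirty = []                  # writes acknowledged but not yet on disk

    def write(self, block):
        if self.write_back:
            self.dirty.append(block)     # ack now, flush later: fast, but lost on a power cut
        else:
            self._write_to_disks(block)  # write-through: ack only after it hits the disks

    def flush(self):
        while self.dirty:
            self._write_to_disks(self.dirty.pop(0))

    def _write_to_disks(self, block):
        pass                             # stand-in for the actual striped/mirrored write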
 
RAID on SSD drives is not worth it unless you are just doing a RAID-1 mirror for redundancy. Doing RAID-0 may actually be slower in some cases with SSD drives.

RAID-0 is intended to overcome the latency and delays of waiting for mechanical data access on the drive. SSD drives are not mechanical and do not have those latency issues.
 
But most drives do burst at higher speeds with smaller bits of data, so wouldn't SSDs share the same trait? That's always what made striped RAID systems faster than a single drive in my experience, even with sequential files.
 

With mechanical drives you have to wait for track seek time, head settle time, and then wait for the correct data to rotate into place under the head.

That is, relatively speaking, a lot of time to wait for mechanical data access. RAID attempts to optimize data access time by not requiring as many seeks and the related latency. The controller can also time its commands and access data on multiple drives to maximize efficiency.

SSD drives suffer none of that mechanical latency, so the RAID techniques that minimize mechanical latency are pointless.

It's not that SSD drives have zero latency, though. If you place multiple SSD drives in a RAID array, you now compound the latency of accessing a single SSD drive. It does take time for the controller to select a different drive, enable that drive on the bus, request data, etc.

Just keep all of the data on a single drive and you have none of the latency inherent in having the controller constantly switch drives.

As far as burst transfers... that would be cached data. You may see similar burst transfer times on both mechanical drives and SSD drives. I haven't looked into that particular comparison, but my bet is that SSD drives are still faster on burst transfers for a few reasons. I can see where you might get better burst transfers from an SSD RAID, mostly because you have more cache across multiple drives.

You could probably achieve the same thing by using a single drive on a RAID controller and taking advantage of the controller's large cache. It would be interesting to test that.

There may also be some cases where you could set up an SSD RAID and see slightly better transfer rates, but it may only be specific cases. Overall it will not be worth it. Any possible latency gains in some cases would probably be offset by latency losses in most cases.

RAID was designed a long time ago, when mechanical drives were the only thing available. It's old technology and SSDs do not benefit from it.

RAID-1 mirroring is a different use case; that is still very viable with SSD drives. I've set up a few servers with SSD RAID-1 arrays. However, that case has nothing to do with trying to increase performance.
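To put some (assumed, ballpark) numbers on the mechanical-latency argument: per-request time is roughly a fixed access latency plus the transfer time, and striping only ever shrinks the transfer part.

# Crude model: request_time = fixed_latency + size / bandwidth.
# All latencies and bandwidths below are ballpark assumptions.
def request_ms(size_kb, fixed_ms, mb_per_s, n_drives=1):
    return fixed_ms + (size_kb / 1024) / (mb_per_s * n_drives) * 1000

hdd = dict(fixed_ms=12.0, mb_per_s=150)   # seek + rotation dominates
ssd = dict(fixed_ms=0.1, mb_per_s=500)    # typical SATA SSD ballpark

for size_kb in (4, 64 * 1024):            # a 4KB random read vs a 64MB chunk
    for name, drive in (("HDD", hdd), ("SSD", ssd)):
        alone = request_ms(size_kb, **drive)
        striped = request_ms(size_kb, n_drives=2, **drive)
        print(f"{name} {size_kb}KB: {alone:.2f} ms alone, {striped:.2f} ms striped x2")
# For 4KB requests the HDD is dominated by ~12 ms of seek that striping cannot touch,
# and the SSD is already near-instant either way; only the large transfers see their
# time roughly halved by the second drive.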
 

This information is incorrect. It is true that RAID is old technology, but it is false that SSDs do not benefit from it. They do, and the performance gains can be massive. The caveat is that the gains are not across the board, they are situational. Here's an example of a benchmark showing how a pair of SSDs in RAID 0 blow away a single SSD.

[Attached image: RAIDBenchmarks.png]


A valid use case for SSDs in RAID 0 is working with very large files. Maybe someone is rendering or editing video on a scratch drive. In this case all that matters is the raw performance, as no data is stored there for any length of time. For maximum speed, this use case calls for RAID 0 (or NVMe, or maybe NVMe RAID 0).

You are correct in your assessment that RAID 0 helps mechanical drives in more circumstances because they have much higher latency. However, it is incorrect to conclude that because SSDs have low latency they do not benefit from RAID 0. Their IOPS will still greatly increase in RAID 0, it is just that it becomes more unusual to see situations where the demand for more than 80-100k IOPS exists.

I used to boot Windows 7 off a three SSD RAID 0 array. This was 10 years ago when SSDs were pretty new, and I had 3 80GB Intel X25-M drives. At that time, such an array was noticeably faster than a single SSD for an OS drive. My PC booted faster, all my apps and games loaded faster, and browsing the web was quite snappy. However, between then and now the performance of individual SSD drives has improved to the point that the situation is different. Those X25-M drives only had 6.6k IOPS for random 4k writes and 35k IOPS for random 4k reads. The Inland 1TB drive that this thread is dedicated to has 2x the 4k read IOPS and 10x the 4k write IOPS.

So, bottom line is any modern SSD or NVME with DRAM cache is going to be sufficient for a boot drive, and booting off a RAID 0 will not provide a benefit. However, there are valid use cases for SSD RAID 0 where the higher IOPS will matter.

One final thought - not all RAID 1 implementations are the same. There are some where RAID 1 actually provides a noticeable performance boost for read operations. So the view that it is just for redundancy is not always true.
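Plugging in the X25-M numbers quoted above, and assuming ideal RAID 0 scaling (which real arrays won't quite reach):

# Worked comparison: old 3-drive X25-M RAID 0 vs a single modern drive,
# using the IOPS figures given in the post above.
x25m_read_iops, x25m_write_iops = 35_000, 6_600
drives = 3
print("3x X25-M RAID 0 (ideal):", drives * x25m_read_iops, "read /",
      drives * x25m_write_iops, "write IOPS")
# The 1TB drive in this thread was described as ~2x the read and ~10x the write IOPS of one X25-M:
print("Single modern SSD:", 2 * x25m_read_iops, "read /", 10 * x25m_write_iops, "write IOPS")
# -> 105,000 / 19,800 for the old array vs roughly 70,000 / 66,000 for one modern drive,
#    which is why a single decent SSD is now plenty for a boot drive.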
 
Ok, no argument from me. If it worked out well for you, great!

>>The caveat is that the gains are not across the board, they are situational.

That's kinda what I said though....

"There may be also some cases where you could setup an SSD RAID and see slightly better transfer rates, but it may only be specific cases.
Overall it will not be worth it. Any possible latency gains in some cases would probably be offset by latency losses in most cases."

Unless the SSD RAID array is taking advantage of a large cache on the RAID controller, I'm still not convinced of the benefits IRL.

An interesting test would be to compare the transfer rates of a single SSD on a typical mobo controller to the same drive connected to a real live RAID controller, but still as a single drive.

In spite of what the benchmarks say, I will never be configuring an SSD RAID-0 array.

ETA: I found the article where you snagged that benchmark pic. Kind of surprising results, but it looks like Intel found a way to speed up SSD RAID, especially with their own drives. Hard to argue with positive results. I'd like to know more about what they are doing to get the increased performance.

https://www.pcworld.com/article/2365767/feed-your-greed-for-speed-by-installing-ssds-in-raid-0.html
 

What I stated and what you stated are not the same, or even close. The key difference is the word 'slightly' in your statement: the speed gains from SSD RAID 0 are not slight. The transfer rate is massively better. That massively better transfer speed just doesn't improve performance across the board; it has to be a high-IOPS scenario that will take advantage of the extra bandwidth.

I linked a benchmark showing clear IRL benefits to SSD RAID 0. Can we agree people write large files to disk? If you do that sort of thing you'll see the huge benefit of SSD RAID 0. I'll give you more evidence.



This is not limited to Intel. It is not a trick. Your argument is basically saying that going from a one lane road to a two lane road will only make traffic go faster if all the vehicles driving on the road are semi trucks. That's just not how it works. RAID 0 is much like adding lanes to the road. If the traffic volume is high enough, more cars will be able to go down the road in a given amount of time and people will notice the improved driving experience. This is a universal situation. The problem is not every road sees enough traffic to warrant adding a second lane, so it is not a one size fits all solution.

It is fine if you want to write off RAID 0 as an option, but there are plenty of others who use it regularly and enjoy the benefits. If anything, this is a lot safer to do with SSDs because they are more reliable than HDDs. All three of the Intel X25-Ms I used to boot off of are still in use to this day. One is the boot drive in a PC I built for my parents, and the other two are boot drives in mining rigs in my basement. If I get a chance I'll post a comparison pic of the AS-SSD score of an individual drive vs the RAID 0 array I used to boot from.
 
Dude wearing nail polish freaked me out in the product shots. I hope Samsung tells him next time to either remove that or have a woman handle the products so it doesn't freak anyone out. :vomit:
 