HOT! Various 1TB NVMe with coveted E12 Controller, $135 approx. retail

Status
Not open for further replies.

UZ7

Limp Gawd
Joined
Dec 5, 2008
Messages
171

SixFootDuo

Supreme [H]ardness
Joined
Oct 5, 2004
Messages
5,654
You promised us the price would never go up.
Do you think it will go back up to $139? It shouldn't but the one thing the powers that be are good at are manipulating the market.

I stand by my promise that it shouldn't go back up under normal circumstances.
 


CruisD64

2[H]4U
Joined
Mar 6, 2007
Messages
2,173
Honestly, I don't think you can go wrong buying this drive. It is an insane amount of capacity and performance for the money. I've been running mine for about 4 months now and it is rad. Getting 3000+ read/write over a pcie x4 adapter as a boot drive. Nuts!
 

Nenu

[H]ardened
Joined
Apr 28, 2007
Messages
19,225
Honestly, I don't think you can go wrong buying this drive. It is an insane amount of capacity and performance for the money. I've been running mine for about 4 months now and it is rad. Getting 3000+ read/write over a pcie x4 adapter as a boot drive. Nuts!
Haha, you said rad :D
A presence I've not felt since ...
 

filip

2[H]4U
Joined
Aug 15, 2012
Messages
2,232
my speeds dropped from 12.2 to 12.3 in almost everything. why you guys lie to me?
 

tempertantrum

Limp Gawd
Joined
Apr 19, 2009
Messages
392
...simple programs that read your wear will tell you how many reads/writes you do. I somehow already have like 9TB of writes on my 970 EVO Plus I bought like a month ago... wtf?

In the past I was averaging like 100-300GB of writes a day and 500GB or more of reads a day. So I am fairly picky on my OS drives for my main rig.
Maybe it's your page file?
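For anyone puzzling over numbers like that, the daily write rate is easy to sanity-check from a SMART "total bytes written" reading. A rough sketch (the 9TB-in-a-month figure comes from the post above; the helper name is just for illustration):

```python
# Average daily writes from a SMART total-bytes-written figure.
# 9 TB over ~30 days is the case described in the post above.
def avg_daily_writes_gb(total_written_tb: float, days: float) -> float:
    """Average gigabytes written per day (decimal units, as most SMART tools report)."""
    return total_written_tb * 1000 / days

print(avg_daily_writes_gb(9, 30))  # 300.0 -> ~300 GB/day
```

At ~300GB/day that sits at the very top of the 100-300GB heavy-use range mentioned above, so something (page file, logging, a backup tool re-copying data) is likely writing in the background.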
 

Destruya

Limp Gawd
Joined
Jul 15, 2007
Messages
405
Supposedly the 12.3 F/W isn't destructive *unless* you're upgrading to it from 11.x. But that shouldn't be an issue unless you bought *very* early on.
 

xx0xx

Gawd
Joined
Oct 20, 2005
Messages
653
This will definitely be a top choice for any future NVMe purchases; the price/perf ratio is pretty great. Thinking any time I need an external drive now, I might start using one of these + an enclosure, mostly because NVMe drives seem to have decent 4K R/W performance, usually significantly better than standard SATA SSDs.
 

Kardonxt

2[H]4U
Joined
Apr 13, 2009
Messages
3,188
NVMe enclosures that don't overheat are kind of spendy, just an FYI. I looked into getting one for the 500GB NVMe this replaced, but am having trouble justifying a $50 enclosure for it.
 

xx0xx

Gawd
Joined
Oct 20, 2005
Messages
653
That's definitely true. I was kinda surprised at how much more expensive they are than standard enclosures

I see the M2X is decently reviewed at $40, and Newegg has some on sale right now for $30 with good reviews, so might be able to find a middle ground.

Not sure why the brand name I tried to post for the ones on Newegg is censored? I might have missed something. Maybe there's something bad about that brand- if anyone knows and can clue me in that'd be great (it starts with an O). Didn't see anything in rules and can't search censored terms.
 

IdiotInCharge

[H]F Junkie
Joined
Jun 13, 2003
Messages
14,514
NVMe enclosures that don't overheat are kind of spendy, just an FYI.
Lots of this. These things just get hot. I picked up a smaller one, and the enclosure itself got to skin-burning temperatures by feel during sustained writes (imaging from another drive). You're going to want one with mass and surface area just to keep the external temperature down to 'really warm'.

I'll note that the drive itself, a 660p, didn't seem to mind.
 

CruisD64

2[H]4U
Joined
Mar 6, 2007
Messages
2,173
I'm having a strange issue now with my 1TB drive where my reads are 3300 MB/s and my writes slowed down to about 1000 MB/s. Just all of the sudden one day...
 

SixFootDuo

Supreme [H]ardness
Joined
Oct 5, 2004
Messages
5,654
How full is the drive?
Careful here, let's not suggest that it's slowing down due to the drive being a certain percentage full. This has already been discussed, explained, and reported on here AFAIK. This drive does not suffer from being 50%, 70%, 80%, or 90% full. I got the same speeds at 90% full vs 5% full, but then again, I have a 9900K @ 5.0GHz across all cores and a slim, lite, decrapified version of Windows 10 that is highly tuned.

I also don't run any of the Meltdown and Spectre or other CPU exploit patches, or firmwares that kill off 20%-30% of performance like most of you do. I don't do anything critical on my desktop, zero banking, shopping, etc. So I have zero worries in that dept.

Other possibilities: PCIe bus over-saturated, older drivers and/or firmware, x4 port speed not selected in the BIOS, a host of other reasons.

Move half the data off your drive and re-bench. Report back here.

If this drive is being used as a boot drive and also to store downloads or large work files, at least to the point of filling it to or near capacity, I'm not sure that would be smart.
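For the re-bench, anyone who doesn't want to install a benchmark tool can get a crude sequential-write number with a few lines of Python. This is only a sketch - it won't match CrystalDiskMark, which controls caching and queue depth properly, and the sizes here are arbitrary:

```python
import os
import tempfile
import time

def seq_write_mbps(size_mb: int = 64, chunk_mb: int = 8) -> float:
    """Time a sequential write of size_mb and return MB/s.
    fsync forces data to the device so OS caching doesn't inflate the number."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb // chunk_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
        return size_mb / (time.perf_counter() - start)
    finally:
        os.remove(path)

print(f"{seq_write_mbps():.0f} MB/s")
```

Run it with the drive half-full and again after moving data off; if the number changes dramatically, fill level (or SLC cache exhaustion) is the likely culprit.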
 

CruisD64

2[H]4U
Joined
Mar 6, 2007
Messages
2,173
How full is the drive?
Probably half? No fuller than it was originally when it was first cloned and getting 3000+ write.

Careful here, let's not suggest that it's slowing down due to the drive being a certain percentage full. This has already been discussed, explained, and reported on here AFAIK. This drive does not suffer from being 50%, 70%, 80%, or 90% full. I got the same speeds at 90% full vs 5% full, but then again, I have a 9900K @ 5.0GHz across all cores and a slim, lite, decrapified version of Windows 10 that is highly tuned.

I also don't run any of the Meltdown and Spectre or other CPU exploit patches, or firmwares that kill off 20%-30% of performance like most of you do. I don't do anything critical on my desktop, zero banking, shopping, etc. So I have zero worries in that dept.

Other possibilities: PCIe bus over-saturated, older drivers and/or firmware, x4 port speed not selected in the BIOS, a host of other reasons.

Move half the data off your drive and re-bench. Report back here.

If this drive is being used as a boot drive and also to store downloads or large work files, at least to the point of filling it to or near capacity, I'm not sure that would be smart.
These are great suggestions! I haven't looked into it too much yet as I haven't really had time but will check when I have some time. Maybe tonight.

have you trimmed it?
Win 10 enables trim by default if a drive supports it so I assume yes?
 

zpackrat

Gawd
Joined
Jan 28, 2002
Messages
761
Probably half? No fuller than it was originally when it was first cloned and getting 3000+ write.



These are great suggestions! I haven't looked into it too much yet as I haven't really had time but will check when I have some time. Maybe tonight.



Win 10 enables trim by default if a drive supports it so I assume yes?
This is true as long as the scheduled task is not altered. Just click Start, type "optimize", and open Defragment and Optimize Drives. It's a scheduled task, and there's no harm in checking to be sure it's running.
 

CMDR1337

n00b
Joined
Jul 17, 2019
Messages
13
Honestly, I don't think you can go wrong buying this drive. It is an insane amount of capacity and performance for the money. I've been running mine for about 4 months now and it is rad. Getting 3000+ read/write over a pcie x4 adapter as a boot drive. Nuts!
/agree - my new Win 10 X570 system boots in 35 seconds tops, but let's see after a few months of Windows bloat.
NO! I don't want Candy Crush, dammit!
 

IdiotInCharge

[H]F Junkie
Joined
Jun 13, 2003
Messages
14,514
/agree - my new Win 10 X570 system boots in 35 seconds tops, but let's see after a few months of Windows bloat.
NO! I don't want Candy Crush, dammit!
It'll take a reload, but I tested this yesterday in a VM with 1903, and it does work well with default settings for the script.

Windows Decrapifyer

If you're willing to reinstall software and save off your data (either by wiping and reinstalling or by using Microsoft's sysprep tool), you can run the script before you log in for the first time and prevent pretty much anything you want from installing in the first place.
 

groebuck

2[H]4U
Joined
Mar 9, 2000
Messages
2,502
One thing to note on these: if you plan on RAID 0'ing them, the performance increase is not that great over a single drive.
 

MixManSC

║▌║█║▌│║▌║▌█
Staff member
Joined
Aug 12, 2004
Messages
7,069
One thing to note on these: if you plan on RAID 0'ing them, the performance increase is not that great over a single drive.
And you would never notice it at all in any real-world typical usage. Now if all you do is run benchmarks or constantly move unimportant gigantic files around, then RAID 0 might be useful.
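To put rough numbers on that (illustrative figures only - a single E12 drive at ~3000 MB/s versus an optimistically scaling RAID 0 at ~5000 MB/s):

```python
def transfer_seconds(file_gb: float, mb_per_s: float) -> float:
    """Seconds to move a file of file_gb at a sustained mb_per_s (decimal units)."""
    return file_gb * 1000 / mb_per_s

single = transfer_seconds(10, 3000)  # one drive
raid0 = transfer_seconds(10, 5000)   # optimistic RAID 0 scaling
print(round(single, 1), round(raid0, 1))  # 3.3 2.0 -> ~1.3s saved on a 10 GB file
```

A second and change on a 10GB file is hard to notice outside a benchmark window.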
 

groebuck

2[H]4U
Joined
Mar 9, 2000
Messages
2,502
And you would never notice it at all in any real world typical usage. Now if all you do is run benchmarks or constantly move unimportant gigantic files around then raid 0 might be useful.
I actually do move large files around on the daily.
 

arnemetis

2[H]4U
Joined
Aug 2, 2004
Messages
2,924
I actually do move large files around on the daily.
Ahh excellent, a real-life example! Could you please share some more details of what you do? I'm quite curious about your setup. Most of my data transfers are limited to my RAID arrays, which can only sustain about 350MB/s reads and writes, and even going to regular SSDs hits their limit around 550MB/s. Do you use these in PCI Express cards for mass storage? I am assuming you're using 40Gbit fiber to your lab in such a case to make use of the speed? I only have 10Gbit fiber to my server, so I could never hope to get full speeds going NVMe to NVMe, unless it was all in the same PC (and I can't figure out the use case there?)
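The line-rate arithmetic behind that question, for reference (ignoring protocol overhead, which only lowers the numbers):

```python
def link_gb_per_s(gbit: float) -> float:
    """Peak payload rate of a network link in GB/s (8 bits per byte)."""
    return gbit / 8

print(link_gb_per_s(1))   # 0.125 GB/s - gigabit: any SSD saturates it
print(link_gb_per_s(10))  # 1.25 GB/s  - well under one E12 drive's ~3 GB/s
print(link_gb_per_s(40))  # 5.0 GB/s   - roughly what NVMe-to-NVMe transfers need
```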
 

groebuck

2[H]4U
Joined
Mar 9, 2000
Messages
2,502
I work as a Sales Engineer for a tech company - I have to set up and break down labs all the time. I move VMs all over my labs - mostly SQL databases and my OVA appliances. I have a Threadripper with the ASUS RAID card and 4 NVMe drives - 2 512GB HP drives and 2 of the Inland 1TB ones. That is my live lab machine. The VMs and OVAs I back up to a Dell server I bought off of eBay with 12TB of storage. I wish I had 40Gbit fiber lol... my LAN backbone is 1Gb.
 

arnemetis

2[H]4U
Joined
Aug 2, 2004
Messages
2,924
I work as a Sales Engineer for a tech company - I have to set up and break down labs all the time. I move VMs all over my labs - mostly SQL databases and my OVA appliances. I have a Threadripper with the ASUS RAID card and 4 NVMe drives - 2 512GB HP drives and 2 of the Inland 1TB ones. That is my live lab machine. The VMs and OVAs I back up to a Dell server I bought off of eBay with 12TB of storage. I wish I had 40Gbit fiber lol... my LAN backbone is 1Gb.
So it sounds like you can only use the speed when transferring within your live lab machine - curious. Everything heading off your machine is going to be limited by the gigabit network, which of course could be saturated by a run-of-the-mill mechanical drive. I didn't really think there were many scenarios where moving files among multiple drives in the same machine made sense. Thank you for sharing more with me.
 

MixManSC

║▌║█║▌│║▌║▌█
Staff member
Joined
Aug 12, 2004
Messages
7,069
I can see that scenario...... I own a sign company and we are frequently working with 4+ GB files. For our needs I'm not going to RAID NVMe drives in the desktops, but I do have everything on 10GbE, which helps when 2 or more designers are pulling/saving huge files. My biggest bottleneck is the RAID 10 arrays on the two servers. I need to eventually look at spending the money and upgrading the servers to all solid-state storage. That is going to hurt my wallet with 28 drives to replace. :grumpy:
 

IdiotInCharge

[H]F Junkie
Joined
Jun 13, 2003
Messages
14,514
My biggest bottleneck is the RAID 10 arrays on the two servers.
Not keeping up with the IOPS, or not maxing out 10Gbit? I figure four mirrored pairs (eight drives total) in a pool with any modern spinners should be able to sustain >10Gbps in read / write, but if there is other stuff going on that might not be the case.

Might even consider an SSD cache?
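The estimate above can be sketched with striped-mirror math (the ~200 MB/s per-drive sustained figure is a hypothetical for a modern spinner):

```python
def pool_read_mbps(pairs: int, per_drive_mbps: float) -> float:
    """Striped mirrors: sequential reads can be serviced by every drive."""
    return pairs * 2 * per_drive_mbps

def pool_write_mbps(pairs: int, per_drive_mbps: float) -> float:
    """Writes hit both sides of each mirror, so only the pairs add up."""
    return pairs * per_drive_mbps

print(pool_read_mbps(4, 200))   # 1600 MB/s ~ 12.8 Gbps: can exceed 10GbE on reads
print(pool_write_mbps(4, 200))  # 800 MB/s  ~ 6.4 Gbps: under 10GbE on writes
```

Which lines up with writes being the harder target: the write side of a four-pair spinner pool sits below 10Gbit even before seek overhead.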
 

MixManSC

║▌║█║▌│║▌║▌█
Staff member
Joined
Aug 12, 2004
Messages
7,069
It's a slightly older setup. MR 9270CV-8i (with the supercap instead of battery). Seagate NL-SAS drives (2TB Constellation ES.1, 64MB cache). 12-drive RAID 10 set (6 spans of 2 drives each). I'm thinking the LSI card and drives are just holding things back. Read speed on the array on the server only hits around 800MB/s, which is really not terrible for what it is. I can probably tweak some settings and get it closer to 900, maybe even a bit more. Much more than that and of course I'll start having to consider 40GbE networking... Of course the 7200rpm drives are not the most responsive either. Tough being a small business. I have to be careful weighing employee time costs (and their aggravation) versus $$$ for new hardware. Heck, only just recently upgraded everything to 10GbE - that was not cheap either.
 

IdiotInCharge

[H]F Junkie
Joined
Jun 13, 2003
Messages
14,514
I hit a sustained 700MB/s / 350MB/s with four 6TB IronWolf drives using onboard SATA and ZFS 0.7.x on CentOS 7 in a pair of mirrors. The platter density certainly helps.

The challenge is that you're simply not going to get much better for large, sustained reads of random files unless you go full SSD- and then you'd want a faster backbone, at least to the distro switch.

You can certainly accelerate common reads as well as sustained writes with smaller SSDs - you just need good ones for the sustained writes, or you'll be seeing lower write speeds than the spinners can handle (tried it).
 

MixManSC

║▌║█║▌│║▌║▌█
Staff member
Joined
Aug 12, 2004
Messages
7,069
Good info, thanks! I can do the upgrade to the switch easily, as it has two 40GbE ports that I'm not using for anything. It's just the cost of going all SSD that kills it for me right now; then again, 40GbE cards and DACs are not exactly super cheap either...
 