HOT! Various 1TB NVMe drives with the coveted E12 controller, approx. $135 retail

Honestly, I don't think you can go wrong buying this drive. It is an insane amount of capacity and performance for the money. I've been running mine for about 4 months now and it is rad. Getting 3000+ MB/s read/write over a PCIe x4 adapter as a boot drive. Nuts!
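(For context: a PCIe 3.0 x4 link tops out around 3.9 GB/s raw, roughly 3.5 GB/s after protocol overhead, so 3000+ MB/s is already close to what the slot itself allows.)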
 
Honestly, I don't think you can go wrong buying this drive. It is an insane amount of capacity and performance for the money. I've been running mine for about 4 months now and it is rad. Getting 3000+ MB/s read/write over a PCIe x4 adapter as a boot drive. Nuts!
Haha, you said rad :D
A presence I've not felt since ...
 
My speeds dropped going from firmware 12.2 to 12.3 in almost everything. Why did you guys lie to me?
 
...Simple programs that read your wear will tell you how many reads/writes you do. I somehow already have like 9TB of writes on my 970 EVO Plus I bought like a month ago... wtf?

In the past I was averaging like 100-300GB of writes a day and 500GB or more of reads a day. So I am fairly picky on my OS drives for my main rig.

Maybe it's your page file?
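For anyone wondering how to read those wear counters themselves: CrystalDiskInfo shows them on Windows, and on Linux smartmontools exposes the same NVMe health log. A minimal sketch, assuming smartctl is installed and the drive shows up as /dev/nvme0 (adjust the device path; will usually need root):

```python
# Rough sketch: pull NVMe wear counters via smartmontools.
# Assumes smartctl is installed and the drive is /dev/nvme0 (may need sudo).
import re
import subprocess

out = subprocess.run(
    ["smartctl", "-A", "/dev/nvme0"],
    capture_output=True, text=True, check=True,
).stdout

# NVMe reports "data units" in thousands of 512-byte blocks,
# so 1 unit = 512,000 bytes.
for line in out.splitlines():
    m = re.match(r"Data Units (Read|Written):\s+([\d,]+)", line)
    if m:
        units = int(m.group(2).replace(",", ""))
        print(f"Data Units {m.group(1)}: {units * 512_000 / 1e12:.2f} TB")
```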
 
Supposedly the 12.3 F/W isn't destructive *unless* you're upgrading to it from 11.x. But that shouldn't be an issue unless you bought *very* early on.
 
This will definitely be a top choice for any future NVMe purchases; the price/perf ratio is pretty great. Any time I need an external drive now, I'm thinking I might start using one of these plus an enclosure, mostly because NVMe drives seem to have decent 4K R/W performance, usually significantly better than standard SATA SSDs.
 
NVMe enclosures that don't overheat are kind of spendy, just an FYI. I looked into getting one for the 500GB NVMe this replaced, but I'm having trouble justifying a $50 enclosure for it.
 
That's definitely true. I was kinda surprised at how much more expensive they are than standard enclosures.

I see the M2X is decently reviewed at $40, and Newegg has some on sale right now for $30 with good reviews, so you might be able to find a middle ground.

Not sure why the brand name I tried to post for the ones on Newegg is censored? I might have missed something. Maybe there's something bad about that brand; if anyone knows and can clue me in, that'd be great (it starts with an O). I didn't see anything in the rules, and I can't search censored terms.
 
NVMe enclosures that don't overheat are kind of spendy, just an FYI.

Lots of this. These things just get hot. I picked up a smaller one, and the enclosure itself felt skin-burning hot during sustained writes (imaging from another drive). You're going to want one with mass and surface area just to keep the external temperature down to 'really warm'.

I'll note that the drive itself, a 660p, didn't seem to mind.
 
I'm having a strange issue now with my 1TB drive where my reads are 3300 MB/s but my writes have slowed down to about 1000 MB/s. Just all of a sudden one day...
 
How full is the drive?

Careful here, let's not suggest that it's slowing down due to the drive being a certain percentage full. This has already been discussed, explained, and reported on here AFAIK. This drive does not suffer from being 50%, 70%, 80%, or 90% full. I got the same speeds at 90% full vs 5% full, but then again, I have a 9900K @ 5.0GHz across all cores and a slim, lite, decrapified version of Windows 10 that is highly tuned.

I also don't run any of the Meltdown and Spectre or other CPU exploit patches, or firmwares that kill off 20% - 30% of performance like most of you do. I don't do anything critical on my desktop, zero banking, shopping, etc. So I have zero worries in that dept.

Possible culprits: PCIe bus over-saturated, older drivers and/or firmware, x4 port speed not selected in the BIOS, a host of other reasons.

Move 1/2 the data off your drive and re-bench it. Report back here.

If this drive is being used as a boot drive while also storing downloads or large work files, at least if doing so fills it to or near capacity, I'm not sure that would be smart.
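If you want a quick sanity check before and after moving data off, here's a rough sequential-write sketch in Python (the D:\bench.tmp path is a placeholder for wherever the drive is mounted; a proper tool like CrystalDiskMark is still the better test):

```python
# Quick-and-dirty sequential write sanity check -- not a real benchmark.
# TEST_PATH is a hypothetical placeholder; point it at the drive being tested.
import os
import time

TEST_PATH = r"D:\bench.tmp"       # placeholder path on the NVMe drive
CHUNK = os.urandom(1024 * 1024)   # 1 MiB of incompressible data
TOTAL_MB = 4096                   # write 4 GiB total

start = time.perf_counter()
with open(TEST_PATH, "wb", buffering=0) as f:
    for _ in range(TOTAL_MB):
        f.write(CHUNK)
    os.fsync(f.fileno())          # make sure it hit the drive, not RAM
elapsed = time.perf_counter() - start

print(f"~{TOTAL_MB / elapsed:.0f} MB/s sequential write")
os.remove(TEST_PATH)
```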
 
How full is the drive?

Probably half? No fuller than it was originally when it was first cloned and getting 3000+ MB/s writes.

Careful here, let's not suggest that it's slowing down due to the drive being a certain percentage full. This has already been discussed, explained, and reported on here AFAIK. This drive does not suffer from being 50%, 70%, 80%, or 90% full. I got the same speeds at 90% full vs 5% full, but then again, I have a 9900K @ 5.0GHz across all cores and a slim, lite, decrapified version of Windows 10 that is highly tuned.

I also don't run any of the Meltdown and Spectre or other CPU exploit patches, or firmwares that kill off 20% - 30% of performance like most of you do. I don't do anything critical on my desktop, zero banking, shopping, etc. So I have zero worries in that dept.

Possible culprits: PCIe bus over-saturated, older drivers and/or firmware, x4 port speed not selected in the BIOS, a host of other reasons.

Move 1/2 the data off your drive and re-bench it. Report back here.

If this drive is being used as a boot drive while also storing downloads or large work files, at least if doing so fills it to or near capacity, I'm not sure that would be smart.

These are great suggestions! I haven't looked into it much yet as I haven't really had time, but I will check when I do. Maybe tonight.

Have you trimmed it?

Win 10 enables TRIM by default if a drive supports it, so I assume yes?
 
Probably half? No fuller than it was originally when it was first cloned and getting 3000+ MB/s writes.



These are great suggestions! I haven't looked into it much yet as I haven't really had time, but I will check when I do. Maybe tonight.



Win 10 enables TRIM by default if a drive supports it, so I assume yes?

This is true as long as the scheduled task is not altered. Just click Start, type "optimize", and open Defragment and Optimize Drives. Worth a check...
Win 10 enables TRIM by default if a drive supports it, so I assume yes?

It's a scheduled task, and no harm in checking to be sure it's running.
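For the command-line inclined, Windows also reports whether real-time TRIM is on (separate from the scheduled Optimize Drives retrim). A small sketch wrapping the stock fsutil query; it may need an elevated prompt:

```python
# Check Windows' TRIM setting (DisableDeleteNotify = 0 means TRIM is ON).
import subprocess

out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True,
).stdout
print(out.strip())  # expect e.g. "NTFS DisableDeleteNotify = 0 (Disabled)"
```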
 
Did some before-and-after tests when I installed the 12.3 firmware. It's not the best test software, but it gave me a baseline.


12.2 firmware

[screenshot: btVKema.jpg]


12.3 firmware

[screenshot: ezXaPky.jpg]
 
Honestly, I don't think you can go wrong buying this drive. It is an insane amount of capacity and performance for the money. I've been running mine for about 4 months now and it is rad. Getting 3000+ MB/s read/write over a PCIe x4 adapter as a boot drive. Nuts!
/agree - my new Win 10 X570 system boots in 35 seconds tops, but let's see after a few months of Windows bloat.
NO! I don't want Candy Crush, dammit!
 
/agree - my new Win 10 X570 system boots in 35 seconds tops, but let's see after a few months of Windows bloat.
NO! I don't want Candy Crush, dammit!

It'll take a reload, but I tested this yesterday in a VM with 1903, and it does work well with the default settings for the script.

Windows Decrapifier

If you're willing to reinstall software and save off your data (either by wiping and reinstalling, or by using Microsoft's sysprep tool), you can run the script before you log in for the first time and prevent pretty much anything you want from installing in the first place.
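(If you go the sysprep route: pressing Ctrl+Shift+F3 at the first out-of-box-experience screen drops Windows into audit mode, which is the usual point to run a script like this before any user profile exists.)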
 
One thing to note on these: if you plan on RAID 0'ing them, the performance increase is not that great over single-drive performance.
 
One thing to note on these: if you plan on RAID 0'ing them, the performance increase is not that great over single-drive performance.

And you would never notice it at all in any real-world typical usage. Now, if all you do is run benchmarks or constantly move unimportant gigantic files around, then RAID 0 might be useful.
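Worth spelling out why: typical desktop use is dominated by low-queue-depth 4K random reads, which striping barely improves. RAID 0 mostly multiplies sequential throughput, and a single E12 drive is already past the point where more of that is noticeable.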
 
And you would never notice it at all in any real-world typical usage. Now, if all you do is run benchmarks or constantly move unimportant gigantic files around, then RAID 0 might be useful.

I actually do move large files around on the daily.
 
I actually do move large files around on the daily.
Ahh, excellent, a real-life example! Could you please share some more details of what you do? I'm quite curious about your setup. Most of my data transfers are limited to my RAID arrays, which can only sustain about 350 MB/s reads and writes, and even going to regular SSDs hits their limit around 550 MB/s. Do you use these in PCI Express cards for mass storage? I'm assuming you're using 40Gbit fiber to your lab in that case to make use of the speed? I only have 10Gbit fiber to my server, so I could never hope to get full speeds going NVMe to NVMe unless it was all in the same PC (and I can't figure out the use case there?)
 
I work as a Sales Engineer for a tech company - I have to set up, break down, and set up labs all the time. I move VMs all over my labs - mostly SQL databases and my OVA appliances. I have a Threadripper with the ASUS RAID card and 4 NVMe drives: 2 of the 512GB HP drives and 2 of the Inland 1TB ones. That is my live lab machine. The VMs and OVAs I back up to a Dell server I bought off eBay with 12TB of storage. I wish I had 40Gbit fiber lol... my LAN backbone is 1Gb.
 
I work as a Sales Engineer for a tech company - I have to set up, break down, and set up labs all the time. I move VMs all over my labs - mostly SQL databases and my OVA appliances. I have a Threadripper with the ASUS RAID card and 4 NVMe drives: 2 of the 512GB HP drives and 2 of the Inland 1TB ones. That is my live lab machine. The VMs and OVAs I back up to a Dell server I bought off eBay with 12TB of storage. I wish I had 40Gbit fiber lol... my LAN backbone is 1Gb.
So it sounds like you can only use the speed when transferring within your live lab machine, curious. Seems like everything heading off your machine is going to be limited by the gigabit network, which of course could be saturated by a run-of-the-mill mechanical drive. I didn't really think there were many scenarios for moving files among multiple drives in the same machine. Thank you for sharing more with me.
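Rough ceilings for reference: gigabit Ethernet is good for about 118 MB/s after overhead, so even one spinner can saturate it; 10GbE is 10/8 = 1.25 GB/s raw, maybe 1.1 GB/s in practice, still well under a single E12 drive's sequential rate; it takes 40GbE (5 GB/s raw) before the network stops being the bottleneck for NVMe-to-NVMe copies.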
 
I can see that scenario... I own a sign company and we are frequently working with 4+ GB files. For our needs I'm not going to RAID NVMe drives in the desktops, but I do have everything on 10GbE, which helps when 2 or more designers are pulling/saving huge files. My biggest bottleneck is the RAID 10 arrays on the two servers. I need to eventually look at spending the money and upgrading the servers to all solid-state storage. That is going to hurt my wallet with 28 drives to replace. :grumpy:
 
My biggest bottleneck is the RAID 10 arrays on the two servers.

Not keeping up with the IOPS, or not maxing out 10Gbit? I figure four mirrored pairs (eight drives total) in a pool with any modern spinners should be able to sustain >10Gbps in read / write, but if there is other stuff going on that might not be the case.

Might even consider an SSD cache?
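Back-of-envelope, assuming roughly 200 MB/s sustained per modern 7200 rpm drive: four striped mirrors can read from all eight disks (~1.6 GB/s, about 13 Gbit), but writes land on four disks' worth (~800 MB/s, about 6.4 Gbit), so reads should fill a 10Gbit pipe while big writes may fall short.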
 
It's a slightly older setup: an MR 9270CV-8i (with the supercap instead of the battery) and Seagate NL-SAS drives (2TB Constellation ES.1, 64MB cache) in a 12-drive RAID 10 set (6 spans of 2 drives each). I'm thinking the LSI card and drives are just holding things back. Read speed on the array only hits around 800 MB/s, which is really not terrible for what it is. I can probably tweak some settings and get it closer to 900, maybe even a bit more. Much more than that and of course I'll start having to consider 40GbE networking... The 7200 rpm drives are not the most responsive either. It's tough being a small business; I have to weigh employee time costs (and their aggravation) against $$$ for new hardware. Heck, I only just recently upgraded everything to 10GbE - that was not cheap either.
 
I hit a sustained 700 MB/s / 350 MB/s with four 6TB IronWolf drives using onboard SATA and ZFS 0.7.x on CentOS 7 in a pair of mirrors. The platter density certainly helps.

The challenge is that you're simply not going to get much better for large, sustained reads of random files unless you go full SSD - and then you'd want a faster backbone, at least to the distro switch.

You can certainly accelerate common reads as well as sustained writes with smaller SSDs - you just need good ones for the sustained writes, or you'll be seeing lower write speeds than the spinners can handle (tried it).
 
Good info, thanks! I can do the upgrade to the switch easily, as it has two 40GbE ports that I'm not using for anything. Just the cost of going all SSD is what kills it for me right now; then again, 40GbE cards and DACs are not exactly super cheap either...
 
Just ran a benchmark on the 512GB version, 12.2 firmware, 10% full (running as my OS drive) - got it for $48 at MC.

Sequential Read (Q= 32,T= 1) : 3463.708 MB/s
Sequential Write (Q= 32,T= 1) : 2127.165 MB/s
Random Read 4KiB (Q= 8,T= 8) : 687.831 MB/s [ 167927.5 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 448.475 MB/s [ 109491.0 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 140.838 MB/s [ 34384.3 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 97.746 MB/s [ 23863.8 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 34.471 MB/s [ 8415.8 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 60.524 MB/s [ 14776.4 IOPS]

Updated to 12.3 and these are my results - looks like it went down.

Sequential Read (Q= 32,T= 1) : 3415.646 MB/s
Sequential Write (Q= 32,T= 1) : 2116.340 MB/s
Random Read 4KiB (Q= 8,T= 8) : 658.611 MB/s [ 160793.7 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 448.565 MB/s [ 109512.9 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 127.505 MB/s [ 31129.2 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 96.942 MB/s [ 23667.5 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 33.831 MB/s [ 8259.5 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 58.138 MB/s [ 14193.8 IOPS]
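Eyeballing the deltas: sequential is basically unchanged (~1%, within run-to-run variance), while the random reads dipped a few percent across the board, worst at Q32T1 (~9%, 140.8 -> 127.5 MB/s). So on this sample, 12.3 looks like a small random-read regression, at least in this one run.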
 
Hi, I just ordered the 1TB version off Amazon after being pretty impressed with the 512GB version. My question is: where do you find these firmwares?
 
Hi, I just ordered the 1TB version off Amazon after being pretty impressed with the 512GB version. My question is: where do you find these firmwares?
I wouldn't worry about flashing the latest version.
The firmware that ships with your unit will be more than acceptable.
 