Discussion in '[H]ot|DEALS' started by SixFootDuo, Mar 6, 2019.
You promised us the price would never go up.
Will this work on any of the Phison E12-based SSDs? Also, will it work on the 512 GB variant? I got the 512 GB version from Sabrent.
The one from Phison works with all of them, but I haven't tried it. I heard it lowered some performance, but I'm not too sure.
I'd probably back up as well; it could be destructive.
Some were saying 12.1/12.2 -> 12.3 is fine and 11.x -> 12.3 is destructive...
Do you think it will go back up to $139? It shouldn't but the one thing the powers that be are good at are manipulating the market.
I stand by my promise that it shouldn't go back up under normal circumstances.
Moving to 12.3, my speeds increased... it was a pretty nice speed bump, and non-destructive going from 12.2 to 12.3.
Got any ADM screens of the increased speeds?
Honestly, I don't think you can go wrong buying this drive. It is an insane amount of capacity and performance for the money. I've been running mine for about 4 months now and it is rad. Getting 3000+ MB/s read/write over a PCIe x4 adapter as a boot drive. Nuts!
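For anyone wondering whether 3000+ MB/s is even believable on a PCIe x4 adapter, here's a back-of-envelope check of the PCIe 3.0 x4 link ceiling (the encoding figures are from the PCIe 3.0 spec; nothing here is specific to this drive):

```python
# Rough line-rate ceiling for a PCIe 3.0 x4 link, to sanity-check
# the "3000+ MB/s" figure. PCIe 3.0 runs at 8 GT/s per lane with
# 128b/130b encoding, so usable bandwidth is slightly below raw rate.
lanes = 4
gt_per_s = 8e9            # transfers per second per lane
encoding = 128 / 130      # 128b/130b line-coding efficiency

bytes_per_s = lanes * gt_per_s * encoding / 8
print(f"PCIe 3.0 x4 ceiling: {bytes_per_s / 1e9:.2f} GB/s")
# → PCIe 3.0 x4 ceiling: 3.94 GB/s
```

So ~3.9 GB/s is the hard cap; 3.0-3.4 GB/s from the drive leaves it link-limited, not controller-limited.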
Haha, you said rad
A presence I've not felt since ...
My speeds dropped from 12.2 to 12.3 in almost everything. Why'd you guys lie to me?
Maybe it's your page file?
I bet it's Literally Unplayable now.
Supposedly the 12.3 F/W isn't destructive *unless* you're upgrading to it from 11.x. But that shouldn't be an issue unless you bought *very* early on.
On an Inland 1 TB, write speeds went up by approx. 50 MB/s with 12.3; reads may have dropped slightly.
This will definitely be a top choice for any future NVMe purchases; the price/perf ratio is pretty great. Any time I need an external drive now, I'm thinking I might start using one of these plus an enclosure, mostly because NVMe drives seem to have decent 4K R/W performance, usually significantly better than standard SATA SSDs.
NVMe enclosures that don't overheat are kind of spendy, just an FYI. I looked into getting one for the 500 GB NVMe this replaced, but am having trouble justifying a $50 enclosure for it.
That's definitely true. I was kinda surprised at how much more expensive they are than standard enclosures.
I see the M2X is decently reviewed at $40, and Newegg has some on sale right now for $30 with good reviews, so might be able to find a middle ground.
Not sure why the brand name I tried to post for the ones on Newegg is censored; I might have missed something. Maybe there's something bad about that brand. If anyone knows and can clue me in, that'd be great (it starts with an O). I didn't see anything in the rules, and I can't search censored terms.
Lots of this. These things just get hot. I picked up a smaller one, and the enclosure itself got to skin-burning temperatures by feel during sustained writes (imaging from another drive). You're going to want one with mass and surface area just to keep the external temperature down to 'really warm'.
I'll note that the drive itself, a 660p, didn't seem to mind.
I'm having a strange issue now with my 1 TB drive where my reads are 3300 MB/s but my writes have slowed down to about 1000 MB/s. Just all of a sudden one day...
How full is the drive?
Careful here; let's not suggest that it's slowing down due to the drive being a certain percentage full. This has already been discussed, explained, and reported on here, AFAIK. This drive does not suffer from being 50%, 70%, 80%, or 90% full. I got the same speeds at 90% full vs. 5% full. Then again, I have a 9900K @ 5.0 GHz across all cores and a slim, decrapified version of Windows 10 that is highly tuned.
I also don't run any of the Meltdown, Spectre, or other CPU-exploit patches, or firmware that kills off 20% - 30% of performance like most of you do. I don't do anything critical on my desktop (zero banking, shopping, etc.), so I have zero worries in that department.
PCIe bus over-saturated, older drivers and/or firmware, x4 port speed not selected in the BIOS, a host of other reasons.
Move 1/2 the data off your drive and re-bench the drive again. Report back here.
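If anyone wants a quick way to re-bench after moving data off, here's a crude sequential-write timer (the file path and sizes are placeholders; point it at the drive in question and compare before/after):

```python
import os
import time

def bench_write(path, total_mb=1024, block_kb=1024):
    """Crude sequential-write benchmark: write total_mb of random data
    in block_kb chunks to `path` and return throughput in MB/s.
    Not a substitute for CrystalDiskMark/ATTO, but fine for before/after."""
    block = os.urandom(block_kb * 1024)
    n_blocks = total_mb * 1024 // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # force the data to the drive, not the OS cache
    elapsed = time.perf_counter() - start
    os.remove(path)            # clean up the test file
    return total_mb / elapsed
```

Run it with something like `bench_write("D:/bench.tmp", total_mb=4096)` so the test is big enough to blow past any SLC cache.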
If this drive is being used as a boot drive and also to store downloads or large work files, at least to the point of filling it to or near capacity, I'm not sure that would be smart.
Have you trimmed it?
Probably half? No fuller than it was originally when it was first cloned and getting 3000+ write.
These are great suggestions! I haven't looked into it too much yet as I haven't really had time but will check when I have some time. Maybe tonight.
Win 10 enables trim by default if a drive supports it so I assume yes?
This is true as long as the scheduled task hasn't been altered. Just click Start, type "optimize", and open Defragment and Optimize Drives. Worth a check...
It's a scheduled task, and no harm in checking to be sure it's running.
Can also run it immediately- takes the place of defrag, more or less.
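You can also check TRIM from the command line with `fsutil behavior query DisableDeleteNotify` (0 means TRIM is enabled). Here's a small helper that runs it and interprets the output; the parsing assumes fsutil's usual `DisableDeleteNotify = 0/1` line:

```python
import re
import subprocess
import sys

def trim_enabled(fsutil_output: str) -> bool:
    """Interpret `fsutil behavior query DisableDeleteNotify` output.
    DisableDeleteNotify = 0 means delete notifications (TRIM) are ENABLED;
    1 means they are disabled."""
    m = re.search(r"DisableDeleteNotify\s*=\s*(\d+)", fsutil_output)
    if not m:
        raise ValueError("unexpected fsutil output")
    return m.group(1) == "0"

if sys.platform == "win32":
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True,
    ).stdout
    print("TRIM enabled:", trim_enabled(out))
```

Newer Windows 10 builds print separate NTFS and ReFS lines; for an NVMe boot drive it's the NTFS one that matters.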
Did some before-and-after tests when I installed the 12.3 firmware. It's not the best test software, but it gave me a baseline.
/agree. My new Win 10 X570 system boots in 35 seconds, tops. But let's see after a few months of Windows bloat.
NO! I don't want CANDY Crush, dammit!
It'll take a reload, but I tested this yesterday in a VM with 1903, and it does work well with default settings for the script.
If you're willing to reinstall software and save off your data (either by wiping and reinstalling or by using Microsoft's sysprep tool), you can run the script before you log in for the first time and prevent pretty much anything you want from installing in the first place.
One thing to note on these: if you plan on RAID 0-ing them, the performance increase is not that great over single-drive performance.
And you would never notice it at all in real-world typical usage. Now, if all you do is run benchmarks or constantly move unimportant gigantic files around, then RAID 0 might be useful.
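The reason small everyday I/O doesn't scale is visible in how RAID 0 maps addresses: a single small read lands entirely on one stripe, so one drive does all the work, while only long sequential runs span drives. A toy sketch of the standard striping math (stripe size and drive count are made-up defaults, not anything specific to these drives):

```python
def raid0_locate(lba: int, stripe_blocks: int = 128, n_drives: int = 2):
    """Map a logical block address to (drive index, drive-local LBA)
    in a simple RAID 0 layout. Illustrative only."""
    stripe = lba // stripe_blocks            # which stripe the block is in
    drive = stripe % n_drives                # stripes rotate across drives
    local = (stripe // n_drives) * stripe_blocks + lba % stripe_blocks
    return drive, local

# A 4 KB read (8 blocks at LBA 0) touches only drive 0:
print({raid0_locate(lba)[0] for lba in range(8)})        # → {0}
# A long sequential run alternates across both drives:
print({raid0_locate(lba)[0] for lba in range(0, 512)})   # → {0, 1}
```

That's why benchmarks with huge sequential transfers show the 2x, but app launches and boot times barely move.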
Sabrent version is $110 at Amazon
I actually do move large files around on the daily.
Ahh excellent, a real-life example! Could you please share some more details of what you do? I'm quite curious about your setup. Most of my data transfers are limited to my RAID arrays, which can only sustain about 350 MB/s reads and writes, and even going to regular SSDs hits their limit around 550 MB/s. Do you use these in PCI Express cards for mass storage? I'm assuming you're using 40 Gbit fiber to your lab in that case to make use of the speed? I only have 10 Gbit fiber to my server, so I could never hope to get full speeds going NVMe to NVMe, unless it was all in the same PC (and I can't figure out the use case there?).
I work as a Sales Engineer for a tech company, so I have to set up and break down labs all the time. I move VMs all over my labs, mostly SQL databases and my OVA appliances. I have a Threadripper with the ASUS RAID card and 4 NVMe drives: 2 512 GB HP drives and 2 of the Inland ones at 1 TB each. That is my live lab machine. The VMs and OVAs I back up to a Dell server I bought off of eBay with 12 TB of storage. I wish I had 40 Gbit fiber, lol... my LAN backbone is 1 Gb.
So it sounds like you can only use the speed when transferring within your live lab machine, curious. Everything heading off your machine is going to be limited by the gigabit network, which of course could be saturated by a run-of-the-mill mechanical drive. I didn't really think there were many scenarios for moving files among multiple drives in the same machine. Thank you for sharing more with me.
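To put numbers on the "gigabit is the bottleneck" point, here's the line-rate arithmetic (decimal units, ignoring Ethernet/TCP overhead, so real-world figures land a bit lower):

```python
def link_mb_per_s(gbit: float) -> float:
    """Convert a network line rate in Gbit/s to MB/s (decimal units,
    ignoring protocol overhead)."""
    return gbit * 1000 / 8

print(link_mb_per_s(1))    # → 125.0  (one modern spinner can saturate this)
print(link_mb_per_s(10))   # → 1250.0 (still well under a single E12 NVMe drive)
```

So over a 1 Gb backbone, the NVMe drives only earn their keep on local drive-to-drive copies.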
I can see that scenario... I own a sign company and we frequently work with 4+ GB files. For our needs I'm not going to RAID NVMe drives in the desktops, but I do have everything on 10 GbE, which helps when 2 or more designers are pulling/saving huge files. My biggest bottleneck is the RAID 10 arrays on the two servers. I eventually need to look at spending the money to upgrade the servers to all solid-state storage. That is going to hurt my wallet with 28 drives to replace.
Not keeping up with the IOPS, or not maxing out 10 Gbit? I figure four mirrored pairs (eight drives total) in a pool with any modern spinners should be able to sustain >10 Gbps in reads/writes, but if there is other stuff going on, that might not be the case.
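Rough math for the mirrored-pair claim, assuming ~180 MB/s sustained per modern 7200 rpm drive (a guess; high-density spinners can do more):

```python
# Back-of-envelope: can four mirrored pairs keep a 10 Gbit link busy?
per_drive = 180          # MB/s sequential per spinner (assumed, not measured)
pairs = 4

# Reads can be served from either side of each mirror; writes must hit
# both drives, so effectively one drive's worth of throughput per pair.
read_mb = pairs * 2 * per_drive    # → 1440 MB/s
write_mb = pairs * per_drive       # → 720 MB/s
link_mb = 10 * 1000 / 8            # → 1250 MB/s (10 Gbit, no overhead)

print(read_mb > link_mb)   # reads can exceed the link
print(write_mb > link_mb)  # writes fall short with drives this slow
```

So with these assumed drive speeds, reads clear 10 Gbit but sustained writes don't; faster spinners (or more pairs) close the write gap.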
Might even consider an SSD cache?
It's a slightly older setup: an MR 9270CV-8i (with the supercap instead of a battery) and Seagate NL-SAS drives (2 TB Constellation ES.1, 64 MB cache) in a 12-drive RAID 10 set (6 spans of 2 drives each). I'm thinking the LSI card and drives are just holding things back. Read speed on the array only hits around 800 MB/s, which is really not terrible for what it is. I can probably tweak some settings and get it closer to 900, maybe even a bit more. Much more than that and of course I'll have to start considering 40 GbE networking... And the 7200 rpm drives are not the most responsive either. It's tough being a small business; I have to weigh employee time costs (and their aggravation) against $$$ for new hardware. Heck, I only just recently upgraded everything to 10 GbE, and that was not cheap either.
I hit a sustained 700 MB/s read / 350 MB/s write with four 6 TB IronWolf drives using onboard SATA and ZFS 0.7.x on CentOS 7 in a pair of mirrors. The platter density certainly helps.
The challenge is that you're simply not going to get much better for large, sustained reads of random files unless you go full SSD, and then you'd want a faster backbone, at least to the distro switch.
You can certainly accelerate common reads as well as sustained writes with smaller SSDs; you just need good ones for the sustained writes, or you'll be seeing lower write speeds than the spinners can handle (tried it).
Good info, thanks! I can do the upgrade to the switch easily, as it has two 40 GbE ports that I'm not using for anything. It's just the cost of going all SSD that kills it for me right now; then again, 40 GbE cards and DACs are not exactly super cheap either...