Which is what I said. Jesus, why do I even post here anymore.
And actually I've spoken to multiple reviewers about this drive (at least two of whom had to return it for a "good" sample) and users (one of whom had to return it), and there were some teething issues. That said, you can check the firmware...
7154GiB vs 7451GiB is indeed just a matter of overprovisioning. Many people complained about having "less space than advertised" with the 8TB Rocket Q - just check Amazon reviews - but it's not an unusual amount of OP. 7154GiB matches up with the typical 960GB usable per marketed TB (960 * 8 = 7680 GB =...
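A quick sketch of that math if anyone wants to check it (the 7451GiB raw figure is the drive above; the rest is just unit conversion):

```python
# Decimal (GB, what's marketed) vs binary (GiB, what Windows shows).
GB = 1000 ** 3
GiB = 1024 ** 3

usable_gb = 960 * 8                 # ~960GB usable per marketed TB on an "8TB" drive
usable_gib = usable_gb * GB / GiB   # convert to what the OS reports

print(f"{usable_gb} GB = {usable_gib:.0f} GiB")  # -> 7680 GB = 7153 GiB, right around the 7154GiB reported

raw_gib = 7451                      # approximate raw flash on the 8TB Rocket Q
print(f"OP: {(1 - usable_gib / raw_gib) * 100:.1f}%")  # ~4% overprovisioning
```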
4 outputs - you can check this against the Aorus Master/Xtreme (6 outputs), where under Specifications it states 4 displays max. I actually own the Gaming OC, which also has 5 outputs, and tried to be cheeky with it, but it's still just 4.
My goal has always been the Gigabyte Gaming OC 3080 (which has a similar layout to the Eagle) and we've had a teardown of that for a while. We can see here that it's using 6xPOSCAPs at 470µF while the cheap Zotac for example is using 6x330µF.
It does seem like NVIDIA hurt the AIBs here with very little lead time and that compounds the issue. I'm hopeful stock will reach an equilibrium, I'm not of the "doom and gloom" crowd, although I also didn't deal with Turing cards so I lack experience on how that went in comparison. In any case...
https://www.reddit.com/r/nvidia/comments/iw61p3/zotac_rep_says_3080_performance_is_below_fe_by/
Absolutely true as I stated above regarding AHOC's PCB breakdown, I realize people will say "but that's just Zotac!" but I've also already heard of tons of issues with the non-OC TUF (the $699 SKU)...
Just link AHOC's Zotac breakdown where it's clear the card - at the same cost as the FE - is essentially inferior. I'm holding out for a Gigabyte Gaming OC ("$750"), but Gigabyte is already livestreaming and touting their Aorus Master & Xtreme, which will be nowhere near "$699." (the Master alone...
Generally consumer workloads aren't all that hard on a drive but there are exceptions, e.g. DRAM-less (particularly SATA/AHCI) or QLC-based. A good, DRAM-equipped TLC drive won't benefit much (AnandTech reviewed two E12 drives with different marketed OP - MP510 and P34A80 - and found virtually 0...
Absolutely, it tends to be a 70/30 read/write mix, and those accesses tend to be random. Although if you're getting a 4.0 SSD like these, you should be intending it for such workloads - and if not, something like the new P31 is superior in every way.
These use full-drive SLC caching (25%/33% of capacity for QLC/TLC), which has significant drawbacks. It's also a bit unreliable in the current iteration. On the first point: dynamic SLC has to be tracked and shifted around based on wear, and it further has terrible performance outside SLC if you...
It's definitely crucial not to fall into the trap of thinking flash levels are linear, e.g. SLC -> MLC -> TLC -> QLC with each one eventually "catching up" to the prior. Rather, the different types of flash are inherently manufactured differently - this is why QLC with just 33% more capacity...
By 25% he means full-drive SLC caching, the resultant SLC is just 25% the capacity of the drive as it's sourced from QLC. This can definitely come from OP space since you don't want to have 0GB of SLC when the drive is full, not least because you need some SLC at all times for things like the...
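As a rough model of what that looks like (my numbers and simplifications, not the vendor's actual algorithm):

```python
# Toy model of dynamic full-drive SLC caching on a QLC drive.
# QLC cells run in SLC mode hold 1 bit instead of 4, so free QLC
# space yields roughly 25% of its capacity as SLC cache. A small
# OP-backed floor keeps some SLC available even when the drive is full.

def slc_cache_gb(capacity_gb: float, used_gb: float, floor_gb: float = 6.0) -> float:
    free_gb = max(capacity_gb - used_gb, 0)
    return free_gb * 0.25 + floor_gb  # 1 bit/cell vs 4 bits/cell

for used in (0, 4000, 7680):
    print(f"{used}GB used -> ~{slc_cache_gb(7680, used):.0f}GB SLC")
# empty drive: ~1926GB of SLC; full drive: only the ~6GB OP-backed floor
```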
Yep, data at rest is generally protected, although there are performance and endurance ramifications depending on how it is achieved. A couple of years ago I couldn't say that with confidence, but back then you mostly had 2D/planar MLC, which was far less prone to issues from the get-go. Now that we...
For consumer devices you will almost never have protection for data-in-flight, however for data-at-rest there are a number of methods used for mitigation (for example, SLC caching, as I mentioned above). Typically the lower pages (e.g. LSB when writing CSB) will be buffered internally (latches)...
Ah, here is one such patent. FYI, performance loss from such methods tends to be quite low, but it depends on the native flash (as this patent discusses).
I've spoken with Crucial engineers about this since they talked it up a lot with the P5 recently and they wouldn't give me a direct answer...
There are multiple ways to do this, and I've actually sourced some patents on the technology; specifically, Crucial/Micron uses it in the MX500, for example. In their case they use a differential device that can basically tell what the original bits (LSB, CSB) are; other methods rely on...
If you absolutely require 8TB in a HDD-replacement SSD, then the 870 QVO isn't bad. It is, however, quite expensive, with a retail price at around $100/TB.
In my opinion, it's best not to use Momentum Cache in 99% of cases. You're adding additional overhead (since the OS still caches) and another point of failure (since RAM by definition is volatile). You're deferring writes, and you'll still be bottlenecked by the speed of the interface and device at...
SN720 is the OEM version of the SN750.
It's a tri-core design that somewhat resembles Samsung's older UAX/UBX controllers - one core for host, one for reads, one for writes. Samsung has since gone from Cortex-R4 to Cortex-R5 and five cores (two each for reads and writes) and the SN720's...
I cover the technology in their flash on my sub-reddit, including BiCS4, as well as their controllers. For SATA they generally stick to Marvell's 88SS1074. BiCS4 is actually slightly faster than BiCS3 but has other advantages (incl. coming in 512Gb/die). The NVMe controller is a custom tri-core...
DRAM doesn't operate as a typical write or data cache. It's for mapping/addressing metadata (table of contents). SLC is the native TLC/flash in single-bit mode, it's a temporary data/write cache. On drives with static SLC like that one, it's quite small, I estimate 3GB on that capacity. The...
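To illustrate why the DRAM scales with capacity rather than acting as a data cache - typical figures, assuming the common ~4 bytes of map per 4KiB page (actual granularity varies by controller):

```python
# The FTL's logical-to-physical map ("table of contents") needs
# roughly one entry per mapped page: ~4 bytes per 4KiB of flash,
# which works out to ~1GB of DRAM per 1TB of NAND.

def map_size_bytes(capacity_bytes: int, page: int = 4096, entry: int = 4) -> int:
    return capacity_bytes // page * entry

TB = 10 ** 12
print(map_size_bytes(1 * TB) / 10 ** 9)  # ~0.98GB of mapping data per 1TB
```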
DRAM isn't directly related to sequential performance. File transfers are single queue/thread so won't be as fast as Q8T1 for example. Copying on the same disk will be slower, too. Lastly, once you are out of SLC cache (which is only 3GB on that SKU) you are hitting TLC, which at that capacity...
It does use full-drive SLC caching like the E16-based drives, which is a mixed blessing. The way it works and the associated algorithms are actually fairly complicated. Generally the amount of SLC available will be approximately one-fourth the amount of flash remaining but this can also include...
Just had the same issue. Tested backup PSU, had the board ready to RMA before I decided to check here and replaced the battery. Rebuilt the machine and it's currently working. All I can say is that at least in my case it wasn't related to anything else (I literally had the board outside the case...
It's a more complicated question than it appears. For example, what stripe size would you use? Most typically you stripe at 64 or 128KB but for SSDs the ideal size would be 4KB, especially if you are running 4Kn. This increases overhead since you're effectively running software RAID. And yes of...
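A toy mapping to show why stripe size matters for small I/O (simplified - real RAID engines differ in details, but the layout math is the same idea):

```python
# Which RAID-0 member a byte offset lands on for a given stripe size.

def member_for(offset: int, stripe: int, disks: int) -> int:
    return (offset // stripe) % disks

KiB = 1024
for stripe in (128 * KiB, 4 * KiB):
    # eight consecutive 4KiB requests across a 2-drive stripe
    hits = [member_for(i * 4 * KiB, stripe, 2) for i in range(8)]
    print(f"{stripe // KiB}KiB stripe -> disks {hits}")
# 128KiB stripe: all eight hit disk 0; 4KiB stripe: they alternate 0,1,0,1...
```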
Definitely didn't need to break it, lol, SSDs are relatively easy to securely wipe and recycle. There are ways to pull more information from the drive but either way I think it was in decline.
If you ignore the raw values and look at the current (normalized) value and threshold, you'll see that AD appears to start at 100 with 5 being the warning point. This almost certainly means it's tracking spare block percentage. Once that value drops from the initial 100 you are living on borrowed time and you WILL get write...
I would definitely consider it. Usually as soon as I see spare blocks being tapped, I retire the drive - but I can't tell for sure here. The flash on this drive can survive several times that many writes but it's variable depending on write amplification (NAND writes vs. host writes), how many...
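Back-of-the-envelope, with illustrative numbers (the P/E cycle rating and WAF values here are assumptions for the example, not this drive's spec):

```python
# Host-writable endurance falls with write amplification:
# WAF = NAND writes / host writes.

def host_tbw(capacity_tb: float, pe_cycles: int, waf: float) -> float:
    return capacity_tb * pe_cycles / waf

print(host_tbw(1, 3000, 1.0))  # ideal sequential: 3000.0 TB of host writes
print(host_tbw(1, 3000, 3.0))  # heavy random workload: 1000.0 TB
```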
I wish I could tell what values AD and B0 actually are, however my thought is that at least one of them is tracking spare blocks (which usually starts at 100 and works backwards). Once a drive starts dipping into spare blocks it's usually the end of the line. The original S700/S700 Pro series...
Phison was showing off reference designs for 8TB NVMe drives at CES 2020. We saw a ton of lines that will go up to 4TB, including ADATA's Indigo, Pearl, and Sage drives, and Mushkin's EON and EON Pro (the latter up to 16TB), which means at least 5 controllers right there: Phison E12/E16, Phison E18, SMI...
Many differences. Usually no SLC caching; they'll have power-loss protection (PLP), firmware options like configurable over-provisioning and security features, optimizations for mixed workloads and steady state, more baseline over-provisioning, different form factors, etc.
Yep, you can use the Hyper with a board that has PCIe bifurcation; there are also adapters with their own controllers (Gigabyte sells one that runs over x8), and you can run multiple single-drive adapters too. Of course the exact speeds are dependent on the motherboard/chipset. You can form a RAID/stripe a...
Sequentially, yes. NVMe as a protocol is just miles beyond AHCI, although I understand people generally not being able to leverage it. Then again with the upcoming consoles, DirectStorage, etc., plus with NVMe/PCIe drives overtaking SATA last year, I think we have a bright future there.