Intel 750 1.2TB AIC only game in town?

strikeout

I haven't been checking lately, but it seems there aren't any larger-capacity drives (>1TB). Where are the consumer rivals from Samsung, Toshiba/Crucial, and SanDisk?
 
Samsung has a 2TB version of the 850 Pro/EVO if you just want a large SSD. Readily available on Newegg. For NVMe, that's it unless you go for Intel's DC line (which will cost you).
 
I haven't been checking lately, but it seems there aren't any larger-capacity drives (>1TB). Where are the consumer rivals from Samsung, Toshiba/Crucial, and SanDisk?

There are a few larger capacity drives, but if you want performance and that kind of capacity there isn't much out there right now. Samsung is going to release a 1.0TB version of the 950 Pro if I am not mistaken, but I don't know when.
 
There are a few larger capacity drives, but if you want performance and that kind of capacity there isn't much out there right now. Samsung is going to release a 1.0TB version of the 950 Pro if I am not mistaken, but I don't know when.
Q3 apparently.
 
Samsung has a 2TB version of the 850 Pro/EVO if you just want a large SSD. Readily available on Newegg. For NVMe, that's it unless you go for Intel's DC line (which will cost you).

I am talking about the AIC versions, the card ones that go into the PCIe slot.

Optane should be out then too, but I don't know about the sizes available.

Was he referring to a need for NVMe/high performance, or just capacity? I didn't see anything leading one way or the other.

It would be cool if they could be cheap and use the PCIe bus, but I think they wouldn't want to make something backwards/slower when NVMe is all the rage now. My M.2 slot is SATA-based, so I'm still content with ~400MB/s speeds.
 
It would be cool if they could be cheap and use the PCIe bus, but I think they wouldn't want to make something backwards/slower when NVMe is all the rage now. My M.2 slot is SATA-based, so I'm still content with ~400MB/s speeds.

that made no sense
 
I am talking about the AIC versions, the card ones that go into the PCIe slot.
Do you actually need that kind of performance? Very few do. On the consumer side, that's really it anyway. There's low demand and little market thus far. You're looking at enterprise drives if you want bigger. Intel's DC P3700 2TB can be had for about $2000 on eBay these days.
 
Do you actually need that kind of performance? Very few do. On the consumer side, that's really it anyway. There's low demand and little market thus far. You're looking at enterprise drives if you want bigger. Intel's DC P3700 2TB can be had for about $2000 on eBay these days.

No, not really. I just don't want to run cables in my box. I wish Nvidia/AMD would make flagship GPUs that were only bus-powered :)
 
So performance is not what you care about, but drive size and lack of cables? -_-

Just get a PCIe card to put a 2TB EVO in, then. Or get a dual-SATA PCIe RAID card and put in 2x 2TB EVOs for 4TB RAID 0 :/
 
BTW, I didn't look, but if it is hardware RAID I would buy a spare just in case the board craps out. You won't be able to plug the drives back in through regular SATA and use them if it does. But if you can software RAID, you should be safe if the board dies.
 
There's the HyperX Predator: PCIe M.2 drives that come with an M.2-to-PCIe board, in 240GB and 480GB. They're not NVMe, though.
 
Optane should be out then too, but I don't know about the sizes available.

Was he referring to a need for NVMe/high performance, or just capacity? I didn't see anything leading one way or the other.

He didn't specify. That's why I worded my post the way I did. There are plenty of larger-capacity SSDs out there, but nothing in the consumer market that offers both the capacity and the performance of the Intel SSD 750.

I am talking about the AIC versions, the card ones that go into the PCIe slot.



It would be cool if they could be cheap and use the PCIe bus, but I think they wouldn't want to make something backwards/slower when NVMe is all the rage now. My M.2 slot is SATA-based, so I'm still content with ~400MB/s speeds.

Most of this made no sense.

Do you actually need that kind of performance? Very few do. On the consumer side, that's really it anyway. There's low demand and little market thus far. You're looking at enterprise drives if you want bigger. Intel's DC P3700 2TB can be had for about $2000 on eBay these days.

While drives like the DC P3700 are amazing, you have to consider one other problem: the firmware. It isn't designed for desktop motherboards. While it may work out okay, it would suck to shell out $2,000 on a drive that doesn't work right in your system. Also keep in mind that those drives are optimized for different workloads than consumer drives are.

No, not really. I just don't want to run cables in my box. I wish Nvidia/AMD would make flagship GPUs that were only bus-powered :)

It would be nice, but not very realistic. Motherboards lack the ability to carry enough current to feed a 300-watt video card, much less two to four of them.

So performance is not what you care about, but drive size and lack of cables? -_-

Just get a PCIe card to put a 2TB EVO in, then. Or get a dual-SATA PCIe RAID card and put in 2x 2TB EVOs for 4TB RAID 0 :/

This.


Yes they do. In fact, several motherboards come with the adapters to do it. (Well, one adapter anyway.) ASUS' Hyper M.2 Kit is packaged with a few motherboards like the X99 Deluxe. It can also be purchased separately. They work on any motherboard, or rather, they should.

BTW, I didn't look, but if it is hardware RAID I would buy a spare just in case the board craps out. You won't be able to plug the drives back in through regular SATA and use them if it does. But if you can software RAID, you should be safe if the board dies.

Onboard RAID isn't hardware based. It's managed via an option ROM, but there isn't much in the way of hardware involved aside from your CPU and system memory. Intel's onboard RAID is backwards compatible with earlier chipsets, and arrays can be imported onto other motherboards using Intel chipsets. The only time you are going to have a problem with that is if you try to move to an AMD processor-based system. Intel's chipsets all basically use the same implementation these days, so you needn't worry about the motherboard dying. Also, Intel RAID volumes can be imported into LSI and potentially other hardware-based RAID controllers.
 
OP: for now and at a non-enterprise price, yes.

If you break out the big bucks, the P3500/3600/3700 go up to 2TB and will work on the same boards the 750 does: proper UEFI version + recent PCH for booting; you can go older if it's not the boot drive. I've had a couple of different ones in my X79 board just to get data off them.

The newer "bonded" P3608 (up to 4TB) is the one you may have consumer-support issues with.
 
OP: for now and at a non-enterprise price, yes.

If you break out the big bucks, the P3500/3600/3700 go up to 2TB and will work on the same boards the 750 does: proper UEFI version + recent PCH for booting; you can go older if it's not the boot drive. I've had a couple of different ones in my X79 board just to get data off them.

The newer "bonded" P3608 (up to 4TB) is the one you may have consumer-support issues with.

I agree with Dan on this... the DC series is a terrible idea for what he wants.

@Dan_D I thought that PCIe card had its own RAID software... That's why I was saying be careful if it's built-in RAID on that card, because you might not be able to use the drives if the PCIe card dies.

Anyway, OP, just double-check on that, because you don't want to burn yourself by missing it. It might support software/hardware RAID, and if that's the case just do software and you're safe.
 
I agree with Dan on this... the DC series is a terrible idea for what he wants.

@Dan_D I thought that PCIe card had its own RAID software... That's why I was saying be careful if it's built-in RAID on that card, because you might not be able to use the drives if the PCIe card dies.

Anyway, OP, just double-check on that, because you don't want to burn yourself by missing it. It might support software/hardware RAID, and if that's the case just do software and you're safe.

The ASUS adapter relies on the UEFI of the motherboard. It has no OROM of its own.
 
I'll probably roll with that. No wires and a nice speed bump from RAID 0. I plan to load all my Adobe and Steam things there.

Those adapters use Marvell chipsets. I would NOT use Marvell's RAID features and would instead simply do RAID-0 in Windows Disk Management. The reasons are:

  • Marvell RAID management is obtuse
  • Software RAID-0 presents almost no load to the CPU (indeed, that's why Marvell only supports 0/1/10)
  • If there is ever a problem, Windows RAID (dynamic) disks can be mounted in Linux via mdadm. Not sure that Marvell RAID is supported there.
 
Those adapters use Marvell chipsets. I would NOT use Marvell's RAID features and would instead simply do RAID-0 in Windows Disk Management. The reasons are:

  • Marvell RAID management is obtuse
  • Software RAID-0 presents almost no load to the CPU (indeed, that's why Marvell only supports 0/1/10)
  • If there is ever a problem, Windows RAID (dynamic) disks can be mounted in Linux via mdadm. Not sure that Marvell RAID is supported there.

Yeah, that's why I was mentioning hardware RAID... it's not worth it unless you're RAIDing on a massive level, because of the downsides.
 
I was referring to the Apricorn VEL-DUO Velocity Duo x2, btw.

I was referring to a PCIe adapter that held two regular 2.5" SSDs onboard.

I was unfamiliar with that device, which is why I didn't consider it. Looking at its specs, I'm not impressed. You're using two SATA drives over a PCIe x2 bus. SATA drives are fine on the SATA bus. The kit indicates that you can get up to 800MB/s of speed out of it, which isn't possible from a single drive, since the SATA III bus each drive uses can't do that. You may see it out of two, but then you're spending $140 on an adapter for a ~200MB/s increase in drive speed. Marvell controllers are also not as good as their Intel counterparts for speed. If you haven't bought the drives, save your money.

You can get something like the ASUS Hyper Kit and grab a Samsung 950 Pro for the same or less money as your oddball dual-SATA-drive configuration and adapter. This will ultimately be an easier configuration to deal with, as your motherboard boots NVMe drives fine.

Velocity Duo x2 - Dual SSD RAID = $139.50
Samsung 850 PRO 512GB 2.5-Inch SATA III Internal SSD Qty. 2 = $441.52
Total Cost = $581.02

Samsung 950 Pro M.2 512GB NVMe = $327.00
Asus Hyper Kit Expansion Card M.2 SAS HD PCI Express 2.5 SSD = $18.13
Total = $345.13

If capacity is the only metric you care about, then I'd just grab a 1TB EVO and be done with it. You brought up the Intel drive initially, so I'm led to believe performance matters at least somewhat. You also mention that you care a lot about cabling. So you could always go with something like this:

Mushkin Enhanced 960GB PCIe 2.0 x2 MLC Internal Solid State Drive (SSD) MKNP22SC960GB = $599.99

This option gives you nearly the capacity you are looking for and does so without the complexity or bullshit that goes with the Marvell-based adapter and a pair of SATA drives; that setup adds two more points of failure vs. one. The M.2 drive and PCIe card give you the best performance, but the lowest capacity, at least for now. That option can also be used with 2.5" Intel drives and a U.2-to-M.2 adapter, which interfaces with the ASUS card I already linked.

If cost isn't a factor and you want that capacity without the cabling, the Intel 750 SSD 1.2TB is the largest and best-performing option today.

Those adapters use Marvell chipsets. I would NOT use Marvell's RAID features and would instead simply do RAID-0 in Windows Disk Management. The reasons are:

  • Marvell RAID management is obtuse
  • Software RAID-0 presents almost no load to the CPU (indeed, that's why Marvell only supports 0/1/10)
  • If there is ever a problem, Windows RAID (dynamic) disks can be mounted in Linux via mdadm. Not sure that Marvell RAID is supported there.

I absolutely agree with your first point. Your second point needs some expansion, I think. RAID 0 and 1 don't load the CPU too much, but keep in mind that the reason RAID 5 and other RAID levels aren't supported is ultimately a lack of real hardware to perform functions like parity calculation. Without a dedicated processor onboard the controller, all of this has to be done by the CPU. Your third point is mostly valid, but Marvell controllers are very common, so mounting disks formatted/set up on a Marvell controller would be relatively easy to do for data recovery. Still, it's more work than you'd need if you skipped that method and went with something more ubiquitous like Intel RAID.

I wouldn't do Windows software RAID, as the performance and CPU usage tend to be a little worse than doing it through the Intel chipset, in my experience.
 
Thanks for the extensive analysis. I agree that the Mushkin x2 960GB is a much better option than the X2 adapter + 2 SSDs for the OP.

I wouldn't do Windows software RAID, as the performance and CPU usage tend to be a little worse than doing it through the Intel chipset, in my experience.

Absolutely, that's been my experience as well -- always go Intel if available -- the difference with SSDs is absolutely noticeable.

Personally, I'm waiting for a bare version of this adapter to become available -- possibly with a PEX built-in for those mobos that don't have one -- 4x512GB NVMe here we come!

[Attached image: zturbo_g3 gallery photo of the adapter card]
 
Thanks for the extensive analysis. I agree that the Mushkin x2 960GB is a much better option than the X2 adapter + 2 SSDs for the OP.



Absolutely, that's been my experience as well -- always go Intel if available -- the difference with SSDs is absolutely noticeable.

Personally, I'm waiting for a bare version of this adapter to become available -- possibly with a PEX built-in for those mobos that don't have one -- 4x512GB NVMe here we come!

Image snip...........

That's pretty badass. A PEX chip wouldn't need to be included in that, though.
 
A PEX chip wouldn't need to be included in that, though.

Sorry, I should have been more specific. Supermicro makes a 2x SFF 2.5" NVMe to one x8 PCIe adapter card. They make two versions: (a) with a PEX chip and (b) without a PEX chip. The one with PEX is $250 and supported in any PCIe x8 slot; the one without it is $150 but restricted to a few of their mobos which already have a PEX switch. The upshot, apparently, is that Intel CPUs don't like it when you directly (electrically) connect multiple x4 devices to one of their x8 or x16 slots.

The Supermicro adapters are neat, and I hope they (or someone more consumer-focused) come up with easily available adapters that will hold at least two M.2s onboard.

Edit: I also noticed that the price of the 1.2TB AIC is increasing steadily at the Egg, while the 1.2TB SFF version has been holding steady at $899. I hate extra, messy cabling as much as anyone else, but at a certain point you don't mind feeling a little green if it'll save you some green ;)
 
Sorry, I should have been more specific. Supermicro makes a 2x SFF 2.5" NVMe to one x8 PCIe adapter card. They make two versions: (a) with a PEX chip and (b) without a PEX chip. The one with PEX is $250 and supported in any PCIe x8 slot; the one without it is $150 but restricted to a few of their mobos which already have a PEX switch. The upshot, apparently, is that Intel CPUs don't like it when you directly (electrically) connect multiple x4 devices to one of their x8 or x16 slots.

The Supermicro adapters are neat, and I hope they (or someone more consumer-focused) come up with easily available adapters that will hold at least two M.2s onboard.

Edit: I also noticed that the price of the 1.2TB AIC is increasing steadily at the Egg, while the 1.2TB SFF version has been holding steady at $899. I hate extra, messy cabling as much as anyone else, but at a certain point you don't mind feeling a little green if it'll save you some green ;)

While I like the AIC version, I think the U.2 cabling, even via an M.2 adapter, can be done cleanly enough, in my case at least. I'd actually prefer it, I think. I've got the AIC 800GB now, which is fine, but saving PCIe slots is always nice, especially when I've been known to throw three and four graphics cards in my machine. I just haven't done that this time because the TITAN X SLI scaling past two GPUs has been pathetic. Hell, it's sucked with two cards in a few shitty console ports like Arkham Knight.
 
Now, OP's desktop is surely going to have cables on the outside. Does anyone think he will mind an external SAS cable added to the mix, snaking along to an out-of-sight SSD DAS? :D
 