[PCIe 3.0 NVMe SSDs] Intel 750 400GB or Samsung SM951 512GB?

jfromeo

Good morning.

I am currently building a new rig and I cannot make up my mind on the SSD part.

From what I have read in reviews, the Intel 750 NVMe and the Samsung SM951 NVMe (yet to be released) are quite similar performance-wise. The SM951 excels at handling small files (more client/desktop oriented) and the Intel 750 at larger files in heavy-load scenarios (more server/workstation oriented).

What I like least about the SM951 is the lack of warranty (apart from the one given directly by the shop) and the overheating issues. And if you apply TIM and a heatsink, you void the warranty, as you have to peel off the sticker. On these two counts the Intel 750 is the clear winner (5-year warranty and an incorporated heatsink).

These are the price tags for each component:

The SM951 512GB AHCI is priced at 330€ (0.65€/GB)
The Intel 750 400GB NVMe is priced at 400€ (1.00€/GB)
The SM951 512GB NVMe is expected to be priced somewhere in between, so it will have much better value than the Intel 750 400GB NVMe.
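
Just as a quick sanity check on those €/GB numbers (a rough sketch; the NVMe SM951 price below is only a placeholder guess, since nothing has been announced):

Code:
# Quick EUR/GB check for the prices quoted above.
drives = {
    "SM951 512GB AHCI": (330, 512),
    "Intel 750 400GB NVMe": (400, 400),
    # Placeholder guess only: the NVMe SM951 has no announced price yet.
    "SM951 512GB NVMe (guess)": (365, 512),
}

for name, (price_eur, capacity_gb) in drives.items():
    print(f"{name}: {price_eur / capacity_gb:.2f} EUR/GB")

# SM951 512GB AHCI: 0.64 EUR/GB
# Intel 750 400GB NVMe: 1.00 EUR/GB
# SM951 512GB NVMe (guess): 0.71 EUR/GB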

Which one would you buy for an enthusiast user? I will be doing mostly CAD / 3D rendering / finite element analysis (AutoCAD, 3ds Max + V-Ray, ANSYS, SAP2000...).

Thanks in advance.
 
Since you're doing CAD, wouldn't the Intel be better? I don't know much about CAD, but I thought it relied on larger files, or is that situational?

Also, if you have a decent case I don't think the card (the SM951) actually overheats. Add a fan that blows on it?

Also, some thermal tape might be weak enough not to damage the sticker and would let you put a passive copper spreader on it, but personally I think a fan on it should be enough.

Also, the price of the SM951 is nice, but do realize that, IIRC, the Samsung hits its endurance wall sooner, meaning that performance under sustained use will drop off much faster, though some OP (over-provisioning) will help considering the size.

Alternatively, if you do not do a lot of sustained reads/writes, you get a lot more space with the Samsung and it's cheap.

So do you expect to thrash the hell out of this? Intel. No? Samsung.

Lastly, Intel's warranted endurance is super low. It is something like sub-200 TBW, I forget, but it's really low. So don't expect to use the warranty if you do a lot of writes: you'll pass the warranty limit before the card dies.

Oh, lastly :) XPoint or whatever comes out next year or something, and it's super duper uber cool and I want Intel to TAKE MY MONEY!!!!!
 
Check out this review if you haven't already.

The gist of it is:

All in all, the SSD 750 remains as the best option for very IO intensive workloads, but for a more typical enthusiast the SM951-NVMe provides better performance, although not substantially better than the AHCI version. If you need an SSD today, I wouldn't wait for the NVMe version because the availability is a mystery to all and you may end up waiting possibly months. Nevertheless, if the SM951-NVMe was easily available and reasonably priced, I would give it our "Recommended by AnandTech" award, but for now one can only drool after it.

Like you, I'm very excited about the prospect of NVMe drives! Bye-bye 550MB/s barrier!

However, it does look like the tech still needs refining in some areas.

Samsung, in this press release, has talked about the future of NVMe tech and how they plan on incorporating it into their NAND technology. IMO, the tech just doesn't seem ripe for purchase yet, even for a prosumer. I will be waiting until Samsung includes it in their NAND tech and we see a polished consumer release.
 
By that point you'll have an XPoint OS/app drive, NVMe general storage, and HDD bulk storage :D GIVE ME!!!
 
I'd lean towards the 750. As others have implied, the SM951 is strong in lighter-duty workloads and is great for 98% of users.

You're not in the 98%, with the applications you mentioned.

The 750 is a workstation drive, and workloads like yours are where it shines.
 
I am looking for an extremely fast SSD, preferably just one, and I don't want to run RAID 0/1. Its primary function will be a game disk drive that will stream games to up to 50-60 diskless clients. I will be using 10Gb SFP+ from the server to the switch, and am thinking the Intel 750 1.2TB might be the drive for me.

Thoughts?

Don't mean to hijack the thread, but it seems you got your answer and I don't want to make a new thread.
 
RAMdisk at that point. :)
 
Yeah, 50-60 clients ain't happening. You really need a RAID array for that many clients. You're better off running a RAID 0 for something like that. Additionally, I don't know how 10Gbps is going to keep up with 50-60 clients. Dual NIC? That is a lot of potential data requests.

@Chris_Lonardo Do you think a RAID 0 of 4-6 SanDisk Extremes would be better, or would a 750 be better? These are independent requests from a lot of clients. I don't know... would this lean toward RAID 0 or a single drive? It is an interesting situation, aye? High or low queue depth? I would think it's high queue depth, right? So which, in your professional opinion, would be better?

Also, what is the game library size? Is 1TB sufficient? I have 1k games personally, so a 750 1.2TB wouldn't be enough. Is that space sufficient?

EDIT: the more I think about it, maybe a 750 is better, but Chris would know better. Never heard of or thought of a situation like this, so I'm unsure what the demand looks like. I do think you want to do dual 10Gbps NICs with SMP (is that right? SMP?). Have you run any tests to see what the 10GbE can do? Like how many connections, requests, file sizes and such. It might actually be the first thing to bottleneck in odd ways depending on how well it can communicate... gah, can't think of the word, lol.

FYI, RAID would be way cheaper and give you more storage as well. Not sure if that's an issue in your situation. I assume this is a gaming cafe.

BTW, this is a review of IRST RAID. I would use a controller if you went the RAID route, of course, but here are some interesting RAID 0 figures for you if you wanted to look at them.
http://www.tomshardware.com/reviews/ssd-dc-s3500-raid-performance,3613-5.html

Sorry I am not much help. Your situation is interesting, but I don't know a lot about it, so take what I said with some good ole salt.
 
Are you running, or will you be running, SLI in the future? This exacerbates any heat issues an M.2 drive may have, or will create issues where there weren't any before. Basically, two high-end video cards sit over most M.2 slots and bake the shit out of the drive. The Intel SSD 750 avoids this in both configurations, either utilizing a PCIe card or an M.2-to-U.2 adapter/cable connection to remote-mount it away from the M.2 slot.

For this reason, I am going Intel 750 in the near future.
 
I would assume he would get a riser (the expansion-slot adapter card for M.2)? But according to Chris the 750 is a much better choice.
 
Thanks for the info!

The software I'm using is called CCBoot. According to them, you really only need more than one NIC when you have 70+ clients.

They also recommend RAID 0 for faster access times on the game drive. I really need something like 2TB because, like you said, 1TB really isn't that much.

I was hoping to get away with just one 1.2TB Intel 750, but two 2TB 850 Pros in RAID 0 might make more sense.

As for the 10Gbps on the server-to-switch side, that is just to balance the load across all the clients. Obviously they will still be limited to 1Gb Ethernet, but during peak times, when all PCs are being utilized, 10Gbps plus extremely fast game disks in RAID 0 come in handy.
 
Beyond the fact that NVMe is designed for high queue depths and lower latencies, even if the 750 used AHCI, it would still be a killer drive. Why? 18 angry channels of NAND fury. Intel took a ridiculous workstation drive, and released it to the consumer market because they have the deep pockets to let it be a loss leader while establishing market-leading credibility. It's a great opportunity for those of us who don't just order new workstations through corporate purchasing. The SM951 is fantastic, and there's a lot of good to be said about a bunch of Extreme Pros or 850 Pros in RAID 0, but the 750 is dominant in demanding workloads.

As for supporting 50-60 clients: obviously there are other variables in the mix, but I wouldn't put it past the 750. NVMe's relatively lock-free approach to data access (which is where this would trounce a big RAID array), coupled with Intel's superb controller, gives it a fighting chance.
 
Then it would come down to budget? Can he make 1.2TB of space work, or get two of them for 2.4TB??? Many said that even 3-4 SSDs in RAID 0 were roughly equal to the 750 in their reviews. Wasn't yours one of them too? Remember, we've got the issue of whether the NICs can even keep up with the 750 in terms of IOs. Will the requests for data be limited by drive or NIC?
 
I would assume he would get a riser (the expansion-slot adapter card for M.2)? But according to Chris the 750 is a much better choice.

I am not sure what you mean. An M.2 drive either fits into the slot, or you use a PCIe adapter, or in the case of the SSD 750, you use a U.2 adapter and cable kit.

I've worked with the enterprise version of these drives. Simply amazing hardware. At my last job we also had Fusion IO cards, which cost a lot more but were slower than the Intel P3700s. There are some Fusion IO models out there which were even faster but far more expensive; we had lower-end models from Fusion IO which cost three times what the Intel P3700s do. Intel's SSDs in the enterprise sector are actually a good value compared to what some other options cost.
 
1 CPU, 1 NVMe

50 VMs


Hmmm, unless nothing is going on in those VMs, you're going to tax something ;)

Can you explain more than "CCBoot"?
What are you doing exactly?

2x SATA SSDs != 1 NVMe, just so you know.
 
I don't want to derail the thread too much, but basically CCBoot is a diskless solution for workstations, or in my case gaming PCs. All the PCs PXE-boot off a locked-down Windows image from a server. On another drive in the server, all the games are stored. This is where the NVMe storage solution was really appealing to me, but now that I think about it, 1.2TB might not be enough. Might just RAID 0 two 850 Evo/Pro 2TB drives. That's why I am looking for input here. In the past people would just RAID 0 regular hard drives, but times have changed and we can use SSDs now. By far the biggest advantage of going SSD is the access times.

A lot of LAN centers around the world are using diskless setups, and some are as large as 250 diskless clients.

A couple of reasons I want it:


  • No need to purchase hard drives/SSDs for every computer. Fewer things to fail.

  • A locked-down Windows image that cannot be modified by the client, so no worrying about viruses or other malicious software.

  • If you need to update all the workstations, simply log in to one of the clients, enable administrator mode, and install any updates or software you want. Turn it off and upload the new image to the server. Reboot all workstations and done.
    Updating games can be done from one PC using the same method as above.

Some people say they don't like having one point of failure in a LAN center using diskless, but when you think about it, if the SmartLaunch server (LAN center billing software with timers) takes a crap you can't work anyway, so it doesn't matter. The SmartLaunch server would be on the same machine as the CCBoot software. I will utilize two servers and most likely use cloning software, so if one server takes a crap all I have to do is reroute to the backup. Sounds good in theory... Was thinking of using Clonezilla Server Edition. Heard good things about it, but it looks a little intimidating for the initial setup.

Not having to purchase hard drives for 50-60 clients saves around $12k. That gets invested directly back into servers and networking, where you need two of everything for backup purposes, but it's still cheaper than installing hard drives in all the PCs.

Also, I will be utilizing Netgear 10Gb SFP+ switches. 10Gb will be fine for 50-60 PCs, might even be overkill. I will be running Cat7 to all workstations to future-proof the LAN center, so when 10GbE to the desktop becomes a thing I won't have to rerun all new cables.
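
A quick way to sanity-check that (a rough sketch, ignoring protocol overhead): even with every seat pulling data at once, the per-client share of a single 10Gb uplink stays well below the clients' 1Gb links.

Code:
# Rough per-client share of one 10Gb server uplink (ignores protocol overhead).
UPLINK_GBPS = 10
CLIENT_LINK_MBPS = 1000  # each client is on 1Gb Ethernet

for clients in (50, 60):
    share_mbps = UPLINK_GBPS * 1000 / clients
    print(f"{clients} clients all loading at once: ~{share_mbps:.0f} Mbps each "
          f"(client link is {CLIENT_LINK_MBPS} Mbps)")

So the uplink only becomes the limit when a large fraction of the seats are pulling data at the same time.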

That's the plan at least!
 
"All the PC's PXE boot off a locked down Widows image from a server. On another drive in the server, all the games are stored."

This is why I was curious how you were using it... :)
 
"All the PC's PXE boot off a locked down Widows image from a server. On another drive in the server, all the games are stored."

This is why I was curious how you were using it... :)

That's the plan. I still need to see if 1.2TB would really even be realistic; I might max it out fairly fast. Only need one copy of every game, obviously. I guess once I figure out what games I plan on offering I will go from there.

I obviously want to offer all the popular free-to-play games such as LoL and Dota 2, but I don't want to offer so many paid games that people aren't able to play local games against each other because everyone is playing something different. People are extremely competitive, especially when that competition is literally right next to them with other people watching. :)
 
I am not sure what you mean. An M.2 drive either fits into the slot, or you use a PCIe adapter, or in the case of the SSD 750, you use a U.2 adapter and cable kit.

When I said riser I meant the adapter, hence why in his case I doubt he wouldn't use an adapter to put it in a regular PCIe slot; plus I don't think his server has GPUs.
 
Then it would come down to budget? Can he make 1.2TB of space work, or get two of them for 2.4TB??? Many said that even 3-4 SSDs in RAID 0 were roughly equal to the 750 in their reviews. Wasn't yours one of them too? Remember, we've got the issue of whether the NICs can even keep up with the 750 in terms of IOs. Will the requests for data be limited by drive or NIC?

I haven't tested the performance of RAID 0 options on the 750, unfortunately, but if the budget's there, it's certainly worth a shot. Or just run two separate volumes, if that makes sense; I'm not really familiar with the details of this workload.

In terms of 3-4 SATA SSDs in RAID 0, I haven't tested this, but will hopefully get a chance to do so in the future. Admittedly, I haven't looked into this specific comparison, but my expectation would be this: in RAID 0 with 4 drives, you're looking at about a +100% improvement vs. a single drive in random performance at very high queue depths, but a negligible increase in random performance at low queue depths. Sequential performance would be maybe +150-200% vs. a single drive.

Looking at, say, 4x 512GB 850 Pros (and applying the optimistic estimates above) vs. a 1.2TB Intel 750, we get the following:

850 Pro peak read IOPS: 100k (maybe 200k with RAID) vs. Intel 750's 440k read IOPS
850 Pro peak sequential read: 550MB/s (~1.6GB/s with RAID) vs. Intel 750's 2.4GB/s sequential read

4x 512GB 850 Pros will run you about $1k (plus a controller if not using onboard RAID), giving you 2TB usable capacity in RAID 0. The Intel 750 is about that much money for 1.2TB.
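
To line those up side by side (a back-of-envelope sketch using the optimistic scaling assumptions above, not benchmark results):

Code:
# Back-of-envelope comparison using the estimates above (not benchmarks).
single_850pro = {"read_iops": 100_000, "seq_mbps": 550, "price_usd": 250, "gb": 512}
intel_750 = {"read_iops": 440_000, "seq_mbps": 2400, "price_usd": 1000, "gb": 1200}

raid0_850pro_x4 = {
    "read_iops": single_850pro["read_iops"] * 2,  # assumed ~+100% at high queue depth
    "seq_mbps": single_850pro["seq_mbps"] * 3,    # assumed ~+200% sequential (~1.6GB/s)
    "price_usd": single_850pro["price_usd"] * 4,
    "gb": single_850pro["gb"] * 4,
}

for label, d in (("4x 850 Pro RAID 0", raid0_850pro_x4), ("Intel 750 1.2TB", intel_750)):
    print(f"{label}: ~{d['read_iops']:,} read IOPS, ~{d['seq_mbps']} MB/s, "
          f"{d['gb']} GB for ~${d['price_usd']}")

Roughly the same money either way: more capacity with the RAID 0, much more random throughput with the 750.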

I would heavily lean towards the 750, particularly if the server uses multiple threads effectively. Lower latency, better handling of file system locking, and a generally better integrated solution. When you're teaming drives together in a RAID array, you've got a bottleneck in accessing each of them through the SATA bus. The 750 already has a massive amount of "teaming" going on onboard, with that 18 channel controller accessing the NAND directly.

In informal/personal testing, I've seen the 750 spank some enterprise drives that it really has no business competing against. For anyone who needs the speed, it's a bargain.
 
Intel:
Better track record with fewer problems across all their SSDs
Data-loss protection
A warranty you can actually use if you ever need it
 
Could he also do JBOD? -_- Derp. No reason to worry about RAID for those 750s, rofl.

That Tom's Hardware review I referenced showed solid scaling all the way to four drives, some improvement with a fifth drive, and marginal improvement with a sixth drive in RAID 0. This was with Intel SSDs... I forget which ones (S3500?). The fifth and sixth drives showed little improvement because they were hitting the DMI bandwidth wall, especially at the sixth drive!

High queue depths scaled great; low queue depths topped out at ~2x, while large queue depths scaled at near-100% efficiency, from what I recall. Check the article out. It is a great read.
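
Rough picture of why the scaling flattens around four drives (a sketch; it assumes the chipset SATA ports share a DMI 2.0 uplink of roughly 2GB/s usable, so treat the exact ceiling as an assumption):

Code:
# Why chipset RAID 0 scaling flattens: all SATA ports share the DMI uplink.
DMI_LIMIT_MBPS = 2000   # assumed usable DMI 2.0 bandwidth (~2GB/s)
PER_DRIVE_MBPS = 500    # typical sequential read of one SATA SSD

for n in range(1, 7):
    ideal = n * PER_DRIVE_MBPS
    capped = min(ideal, DMI_LIMIT_MBPS)
    print(f"{n} drive(s): ideal {ideal} MB/s, DMI-capped ~{capped} MB/s")

Which lines up with the review seeing little gain from the fifth and sixth drives.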

Also, he is limited to 10Gbps, so roughly 1GB/s max, so most of the 750 won't even be used. It might be worth considering dual NICs.

Also, the 480GB Extreme Pros can be had for $200 and the 1TB models for $410. Way cheaper than those 850 Pros.

The only two things I can think would be a major concern are:
1.) Does lower latency even matter, since this is carried out over the network? Is the SSD latency kind of moot at this point, since it's only a small chunk of the overall latency because of the network and such?
2.) Endurance. This could be one of the only reasons to stay with a 750, because SSDs that hit that endurance wall can run worse than HDDs. I periodically have my whole PC hang because of that freakin' BS endurance wall, where the stupid SSD gets choked up. It happened on my Samsung 840 non-Pro (the edition before the EVO) and my SanDisk Extreme Pro. The Extreme Pro is much better and it's harder to hit that wall, but it still happens fairly often. I never had that PC-stall crap with an HDD, so that's worth considering. And don't ask me, as a normal geek, how I throw down the rage on my SSD. I just tinker and do computer stuff.

http://www.tomshardware.com/reviews/ssd-dc-s3500-raid-performance,3613-5.html

[Charts from the review: RAID 0 sequential throughput, 4KB random read, and 4KB random write scaling]
 
There's so much guessing going on in here, and it's mainly because the person asking if it will work has no clue what his load is per VM or per game within the VM.

Without that info everyone is just guessing.

2 SATA
4 SATA

!= 1 NVMe

When he posts his IOPS requirement per VM, then people can actually step in and provide REAL advice.
 
Yes, but as I said, the real issue here is the connection. That 10GbE is the limiting factor, not whether SATA or PCIe is better. That's what I am trying to get at. A RAID array of SATA SSDs vs. an Intel 750 is moot if he is limited to a mere 700-1200MB/s with a single 10GbE connection. That is why I am saying, whichever option he goes with, he should consider upgrading to a dual-NIC setup to actually use those drives. There is zero performance difference between the 750 and a SATA RAID if he is on one NIC. If he has two NICs there could actually be a noticeable difference.
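
To put rough numbers on that (a sketch; the 70-90% efficiency range is just an assumption for protocol overhead):

Code:
# Network ceiling of a single 10GbE link vs. the drives being discussed.
LINE_RATE_GBPS = 10

for efficiency in (0.7, 0.9):  # assumed usable fraction after protocol overhead
    usable_mbps = LINE_RATE_GBPS * 1000 / 8 * efficiency  # Gbit/s -> MB/s
    print(f"10GbE at {efficiency:.0%} efficiency: ~{usable_mbps:.0f} MB/s")

# ~875-1125 MB/s per link, versus ~550 MB/s for one SATA SSD, ~1.6-2GB/s for a
# 4-drive SATA RAID 0, and ~2.4GB/s for the Intel 750's sequential read.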

Granted, the other potential issue is endurance... if he really has that large of a load, but that's situational.
 