Intel Claims Storage Speed Record with First Large-Capacity Optane SSD

Megalith

Intel has begun shipping its first large-capacity Optane SSD, the DC P4800X, which has 375GB of storage and costs $1,520. Many have accused the company of overhyping the new technology, and they were probably right: benchmarks do not seem to reflect earlier claims, and conventional SSDs may remain the better choice for the typical system.

…benchmarks indicate that the new Optane drive, in most real-world uses, won’t reach the levels of performance that Intel has been hyping up to now. On top of that, the benchmarks were conducted in complex environments that made the numbers hard to interpret. In a nutshell, Intel said that if you run sequential tasks, it would be better to use conventional SSDs. Optane lights up when running random reads and writes, which are common in servers and high-end PCs. Optane’s random writes are up to 10 times faster than conventional SSDs’, but only when utilization is pushed to extremes, while reads are around three times faster.
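For anyone unclear on the sequential-vs-random distinction the benchmarks hinge on, here is a minimal Python sketch of what the two access patterns look like. The file path, block size, and read count are hypothetical placeholders, and real benchmark tools such as fio also bypass the page cache and control queue depth, so treat this as illustration only.

```python
import os
import random
import time

# A minimal sketch of what "sequential" vs "random" read numbers measure.
# PATH is a hypothetical test file on the drive under test.
PATH = "/mnt/testdrive/testfile.bin"
BLOCK = 4096            # 4K blocks, the usual benchmark unit
COUNT = 100_000         # number of blocks to read in each pattern

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

# Sequential: consecutive blocks, front to back.
t0 = time.perf_counter()
for i in range(COUNT):
    os.pread(fd, BLOCK, (i * BLOCK) % size)
seq_s = time.perf_counter() - t0

# Random: the same number of blocks at scattered offsets.
offsets = [random.randrange(0, size - BLOCK) for _ in range(COUNT)]
t0 = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)
rand_s = time.perf_counter() - t0
os.close(fd)

print(f"sequential: {COUNT * BLOCK / seq_s / 1e6:.0f} MB/s")
print(f"random 4K:  {COUNT / rand_s:.0f} IOPS")
```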
 
I just read that. Crazy expensive for an SSD, and you thought regular SATA SSDs were expensive.
 
Well Optane was a waste. It will arrive DOA in the consumer space.
 
That website is painful to read and there's no indexing for the info. Yuck.

Curious that it's only really shining in the extreme top end.
 
I stopped noticing storage speeds somewhere around 250MB/s (SATA 2 saturation). I honestly can't tell the difference any more.....it's always something else at this point that holds up the train.

It's like the Cores thing. Ok, you can buy a consumer 6- or 8-core CPU.....now what?

(Cores used to be pivotal for me for ripping movies....but QuickSync has gotten so good I just use that most of the time now, which negates Core-O-Rama).
 
"... may remain..."?

Um.. yeah, you can get a 1TB M.2 NVMe SSD for a fraction of that.. and you don't take it in the hinder compatibility-wise either... count me out...
 
Usually there is a wealth of informed and insightful posts on this forum. This thread is certainly an exception as of yet, but it's early.

I stopped noticing storage speeds somewhere around 250MB/s (SATA 2 saturation).

Intel doesn't even advertise throughput speeds for its new Optane device. The 7-10x IOPS vs. NAND SSDs at low queue depths, performance consistency, increased endurance, and 60% reduction in price compared to DRAM for the targeted systems are the main points. The Intel software that will allow these drives to become an extension of system memory is also a huge deal. Try reading TH's article on the subject. Intel released some impressive charts - check out the latency numbers under different workloads.

This drive isn't designed to load your OS and games faster.
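To make the low-queue-depth point concrete: achievable IOPS is roughly queue depth divided by latency (Little's Law). A quick sketch with illustrative latencies, which are assumptions for the sake of the example rather than Intel's published figures:

```python
# Little's Law: achievable IOPS ~ outstanding I/Os (queue depth) / latency.
# The latencies below are illustrative assumptions, NOT Intel's official specs.
def iops(queue_depth, latency_s):
    return queue_depth / latency_s

nand_latency = 90e-6     # ~90 us per 4K read, assumed for a good NAND NVMe SSD
optane_latency = 10e-6   # ~10 us, the ballpark 3D XPoint is pitched at (assumed)

for qd in (1, 4, 16):
    print(f"QD{qd}: NAND ~{iops(qd, nand_latency):,.0f} IOPS, "
          f"Optane ~{iops(qd, optane_latency):,.0f} IOPS")

# This linear model ignores device-internal parallelism: real NAND SSDs need
# deep queues to hit their headline IOPS, while a low-latency device is
# already near its peak at QD1, which is why the gap shows up at low QD.
```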
 
my 500GB Samsung 950 Pro M.2 NVMe is laughing its ass off right now

2,500 megabytes/sec, a fraction of the cost, the size of a stick of gum, and it was available a long, long time ago.

Suck it, Intel.
 
my 500GB Samsung 950 Pro M.2 NVMe is laughing its ass off right now

2,500 megabytes/sec, a fraction of the cost, the size of a stick of gum, and it was available a long, long time ago.

Suck it, Intel.

Lol, your 950 Pro can't handle a fraction of the IOPS that this thing can. Sequential performance hasn't been a useful metric for years.

Too little space, too much money, dead in the water.

So dead in the water that they'll sell every one they make. The price point is perfectly reasonable for the intended market, and these will sell like hotcakes.

Was optane ever advertised for the consumer market?

While there will likely be consumer caching products based off of Optane, it has been an enterprise play from the beginning.
 
Lol, your 950 Pro can't handle a fraction of the IOPS that this thing can. Sequential performance hasn't been a useful metric for years.

So dead in the water that they'll sell every one they make. The price point is perfectly reasonable for the intended market, and these will sell like hotcakes.



While there will likely be consumer caching products based off of Optane, it has been an enterprise play from the beginning.
Didn't think so, but this is front page news, it attracts all kinds of posters lol.
 
Was optane ever advertised for the consumer market?
don't think so

although I'm sure manufacturers like Asus put it on the box

it's an enterprise thing
wouldn't be surprised if they're constantly in short supply

Optane lights up when running random reads and writes, which are common in servers

Optane SSDs could also be used to expand memory capacity in servers by mimicking DRAM with the help of a hypervisor, Myers said. To make this happen, Intel will sell software called Memory Drive for Optane drives. This feature will only work on servers with Intel’s upcoming Xeon chips based on the Skylake architecture, and won’t work with AMD chips.

Applications that will benefit from Optane include MySQL and Memcached, which are popular with cloud providers. Data movement in servers run by companies like Facebook and Google has to be fast to ensure instant responses to social media or search requests.
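Memory Drive itself works below the OS via a hypervisor, so there is nothing to show at the application level, but as a rough, unofficial illustration of the general "storage as a memory extension" idea, a program can memory-map a file that lives on a fast SSD and treat it like a large byte array. The path below is hypothetical and this is not Intel's product, just a sketch of the concept:

```python
import mmap
import os

# Hypothetical file on the fast SSD -- NOT Intel Memory Drive, just a crude
# illustration of treating storage as an extension of memory via mmap.
PATH = "/mnt/optane/extended_memory.bin"
SIZE = 1 << 30                       # 1 GiB backing file

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as mem:
    mem[0:5] = b"hello"              # looks like an ordinary in-memory write...
    assert mem[0:5] == b"hello"      # ...but the pages are backed by the SSD

os.close(fd)
```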


personally I loved how so many people made a point out of Optane in a Z170 vs Z270 discussion

by the time Optane comes down to enthusiast prices we'll be so many generations of Intel boards later :rolleyes:

and it's not like SSDs are going to stand still
 
I remember a short time ago I was mandated to 5ms-or-better response times in my environments, and now I could have the ability to get that down to 200 microseconds. I could easily see this crushing any SSD on the market when it comes to Cassandra, Elasticsearch, and Hadoop-based applications. Using this technology as cache accelerator cards in Oracle RAC, MSSQL, or VMware environments? Wow. Can't wait till the big tier 1 storage providers start to use it. Specifically EMC and their XtremIO and VMAX lines. *swoon*.

For heavily randomized workloads, this is a game changer.
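Taking the poster's own numbers, the per-thread arithmetic is straightforward; this simple synchronous model ignores parallelism and is only meant to show the scale of the change:

```python
# From the post above: a 5 ms response-time mandate vs ~200 us latency.
# For a single synchronous requester, max ops/sec ~ 1 / latency.
old_latency = 5e-3       # 5 ms
new_latency = 200e-6     # 200 us

print(f"old: {1 / old_latency:,.0f} ops/s per thread")    # 200
print(f"new: {1 / new_latency:,.0f} ops/s per thread")    # 5,000
print(f"speedup: {old_latency / new_latency:.0f}x")       # 25x
```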
 
Usually there is a wealth of informed and insightful posts on this forum. This thread is certainly an exception as of yet, but it's early.



Intel doesn't even advertise throughput speeds for its new Optane device. The 7-10x IOPS vs. NAND SSDs at low queue depths, performance consistency, increased endurance, and 60% reduction in price compared to DRAM for the targeted systems are the main points. The Intel software that will allow these drives to become an extension of system memory is also a huge deal. Try reading TH's article on the subject. Intel released some impressive charts - check out the latency numbers under different workloads.

This drive isn't designed to load your OS and games faster.
A 60% price reduction vs. DRAM just means you might as well spend 2.5x as much and have actual DRAM speed.
The linked article strangely has no comparisons between the SSD and actual DRAM.
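A quick back-of-the-envelope using only the numbers already in the thread: $1,520 for 375GB, and the claimed 60% price reduction versus DRAM. The DRAM figure below is just what that claim implies, not a quoted street price:

```python
# Numbers from the thread: $1,520 for 375 GB, pitched as a 60% price
# reduction compared to DRAM for the targeted systems.
optane_price, optane_gb = 1520, 375
optane_per_gb = optane_price / optane_gb      # ~$4.05/GB

# If that is a 60% reduction, the implied DRAM price is:
dram_per_gb = optane_per_gb / (1 - 0.60)      # ~$10.13/GB (implied, not quoted)

print(f"Optane: ${optane_per_gb:.2f}/GB, implied DRAM: ${dram_per_gb:.2f}/GB")
print(f"375 GB of DRAM at that rate: ${dram_per_gb * optane_gb:,.0f}")  # ~$3,800
```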
 
Using this technology as cache accelerator cards in Oracle RAC, MSSQL, or VMware environments? Wow. Can't wait till the big tier 1 storage providers start to use it. Specifically EMC and their XtremIO and VMAX lines. *swoon*.

For heavily randomized workloads, this is a game changer.

DING DING DING! ZFS systems would LOVE this for write-cache - RAM acts as the read cache, and this could finally be a replacement for the venerable ZeusRAM device. At low queue depths it destroys even the Intel 3700, which was the write king.
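For anyone curious what that looks like in practice, attaching a separate log device (SLOG) to an existing ZFS pool is a one-liner. The pool name and device paths below are hypothetical; this is only a sketch of the general approach, not an endorsement of a particular device:

```python
import subprocess

POOL = "tank"                     # hypothetical pool name
SLOG_DEVICE = "/dev/nvme0n1"      # the low-latency SSD to use for sync writes

# ZFS already uses RAM (the ARC) as its read cache; a separate log device
# only absorbs synchronous writes, which is where low-queue-depth latency
# matters most.
subprocess.run(["zpool", "add", POOL, "log", SLOG_DEVICE], check=True)

# A second device could optionally be added as L2ARC for reads, e.g.:
# subprocess.run(["zpool", "add", POOL, "cache", "/dev/nvme1n1"], check=True)
```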
 
I remember a short time ago I was mandated to 5ms-or-better response times in my environments, and now I could have the ability to get that down to 200 microseconds. I could easily see this crushing any SSD on the market when it comes to Cassandra, Elasticsearch, and Hadoop-based applications. Using this technology as cache accelerator cards in Oracle RAC, MSSQL, or VMware environments? Wow. Can't wait till the big tier 1 storage providers start to use it. Specifically EMC and their XtremIO and VMAX lines. *swoon*.

For heavily randomized workloads, this is a game changer.


well thank God that there is going to be competition ;)

Many column inches have been devoted to the potential uses of 3D XPoint ... but while everyone is waiting, Samsung has snuck up behind everyone with their new Z-SSD product line.



At Cloud Expo Europe, Samsung had a Z-SSD on display and started talking numbers, if not the technology itself. The first drive for select customers to qualify will be 800GB in a half-height PCIe 3.0 x4 card. Sequential R/W will be up to 3.2 GBps, with Random R/W up to 750K/160K IOPS. Latency (presumably read latency) will be 70% lower than current NVMe drives, partially due to the new NAND but also a new controller, which we might hear about during Samsung's next tech day later this year.


http://www.anandtech.com/show/11206/samsung-shows-off-a-z-ssd
 
To those talking about price and speed of desktop SSDs: you don't understand the market or use for this. Take that EVO SSD you have and put it in a 24/7, 100%-load server and see how long those speeds last. If you only need a server to run for 30 minutes, it will probably serve you well.

People say the same thing when enterprise SSDs are a few grand for 1TB yet a 1TB EVO is a fraction of that; the same reasons apply. As time goes on we will see this trickle down into the desktop market, with less capability at a much lower price. However, don't expect to see large gains in performance: just like going from a normal SSD at 500MB/s to one at 2,000MB/s, many people won't even notice outside of very specific uses.
 
DING DING DING! ZFS systems would LOVE this for write-cache - RAM acts as the read cache, and this could finally be a replacement for the venerable ZeusRAM device. At low queue depths it destroys even the Intel 3700, which was the write king.
Ok, write-caching does sound like a good use case. I suppose the industry has moved away from battery-backed RAID cards?
 
Well, with FreeNAS and similar ZFS systems, they let the OS handle EVERYTHING. RAID cards are to be as dumb as possible so that they don't get in the way.
 
Ok, write-caching does sound like a good use case. I suppose the industry has moved away from battery-backed RAID cards?

They have not. In my world, all my main arrays have battery-backed cache; they dump the cache to disk in the event of a power failure. I do not see this going away anytime soon.
 
Well, with FreeNAS and similar ZFS systems, they let the OS handle EVERYTHING. RAID cards are to be as dumb as possible so that they don't get in the way.

I have mixed feelings about the above statement. In my world, you are foolish not to utilize the onboard RAID controllers that come with my infrastructure, i.e., UCS, HP, or Dell servers. It makes no sense to waste CPU cycles on boot-drive mirroring with a software-based solution. You just overcomplicate it, more so when there is a drive failure. Utilizing the onboard card on those enterprise systems is the way to go. If you have a drive failure, you just yank the drive, hot-swap a replacement, and walk away. Something a junior level 1 NOC guy can do all day, every day. If your only experience with RAID cards is Supermicro servers or consumer motherboards, then I completely agree with kicking those chumpy solutions out of the way and going with a software-based solution.
 
I have mixed feelings about the above statement. In my world, you are foolish not to utilize the onboard RAID controllers that come with my infrastructure, i.e., UCS, HP, or Dell servers. It makes no sense to waste CPU cycles on boot-drive mirroring with a software-based solution. You just overcomplicate it, more so when there is a drive failure. Utilizing the onboard card on those enterprise systems is the way to go. If you have a drive failure, you just yank the drive, hot-swap a replacement, and walk away. Something a junior level 1 NOC guy can do all day, every day. If your only experience with RAID cards is Supermicro servers or consumer motherboards, then I completely agree with kicking those chumpy solutions out of the way and going with a software-based solution.

Almost no one doing new deployments is using RAID cards to do simple mirroring. Hell, a large majority of deployed servers aren't even booting off local disks at all. And those that are likely use the onboard C6xx mirroring capabilities, which are free since it's the chipset and it does everything a multi-hundred-dollar RAID controller does. As far as software-based RAID goes, that's pretty much the standard these days. There isn't a RAID card in existence that provides a fraction of the performance.

They have not. In my world, all my main arrays have battery-backed cache; they dump the cache to disk in the event of a power failure. I do not see this going away anytime soon.

No modern RAID cards use battery-backed RAM. They've all switched over to NAND-based flash for their caching layers.
 
Almost no one doing new deployments is using RAID cards to do simple mirroring. Hell, a large majority of deployed servers aren't even booting off local disks at all. And those that are likely use the onboard C6xx mirroring capabilities, which are free since it's the chipset and it does everything a multi-hundred-dollar RAID controller does. As far as software-based RAID goes, that's pretty much the standard these days. There isn't a RAID card in existence that provides a fraction of the performance.

No modern RAID cards use battery-backed RAM. They've all switched over to NAND-based flash for their caching layers.

If you are talking about boot from SAN? Agree. But I've personally come from three different shops with thousands of servers that, when not booting off the SAN, are booting off mirrored drives using RAID cards from the vendor. In my world, you don't cheap out with a software-based RAID solution; you always go with a hardware solution. To say "almost no one doing new deployments is using RAID cards to do simple mirroring" may be correct in your circles, but in my circles and the hardware stacks I support? You'd be laughed out of the building.
 
If you are talking about boot from SAN? Agree. But I've personally come from three different shops with thousands of servers that, when not booting off the SAN, are booting off mirrored drives using RAID cards from the vendor. In my world, you don't cheap out with a software-based RAID solution; you always go with a hardware solution. To say "almost no one doing new deployments is using RAID cards to do simple mirroring" may be correct in your circles, but in my circles and the hardware stacks I support? You'd be laughed out of the building.
I wouldn't be surprised if vendor NAS/SAN systems are internally set up like he suggests, but yeah, for a critical server you want standardised and supported storage.
 
I can agree with the above statement. It's absolutely possible that a NAS/SAN stack works that way, with the stack purpose-built for it. But that's a stack of hardware built to serve storage up to a large chunk of machines.

Throwing a software mirroring package on top of your base OS, then serving up storage and your DB on the same single piece of hardware? Uh, no. Bad idea. It reeks of my time with Veritas Volume Manager. God damn, that thing was a pile, and it's what broke me on software-defined RAID solutions.
 
That's the thing, though: for not a lot more money I can level up to actual RAM. The price is about 30 percent too high, imo. At the point it's at now, I wouldn't bother with yet another complex layer for the VMs to cache to. RAM really does "just work", and adjusting for it is easy.

As density climbs and the price vs. DRAM comes to a bit better percentage, this will change, as new tech always does. Not sure how this might impact AMD server hardware as a choice.
 
If you are talking about boot from SAN? Agree. But I've personally come from three different shops with thousands of servers that, when not booting off the SAN, are booting off mirrored drives using RAID cards from the vendor. In my world, you don't cheap out with a software-based RAID solution; you always go with a hardware solution. To say "almost no one doing new deployments is using RAID cards to do simple mirroring" may be correct in your circles, but in my circles and the hardware stacks I support? You'd be laughed out of the building.

Those circles are basically anyone who actually cares about reliability. Using a complex third-party RAID controller with complex third-party firmware to do simple mirroring, which can run in hardware on a C6XX chipset, is simply asking for additional issues. The C6XX hardware has an installed base that absolutely dwarfs any RAID card, and an installed base of mirrored disks that's even bigger. Literally, you can't buy a new Intel x86 server without a C6XX chipset.

As far as software-defined RAID solutions go... there are no other types. RAID is a software solution and has been for over a decade. Adaptec? Software RAID. LSI? Software RAID. EMC? Software RAID. And down the line with any solution you want to name. Hardware RAID is a myth and has almost always been a myth.
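As a concrete example of what "software RAID" means here: on Linux, the kernel's MD driver will mirror two disks with no add-in controller at all. The device names below are hypothetical and this is only a sketch:

```python
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]   # hypothetical pair of boot disks

# Linux MD (mdadm) RAID1: the kernel maintains the mirror, no vendor card.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2", *DEVICES],
    check=True,
)
```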
 
I may have miscommunicated. When I say RAID card, I'm talking about the integrated RAID management component that is part of the server when shipped from the vendor. I personally don't care if it's a C6XX derivative as long as the vendor supports it when I call 1-800-its-broke for support. When I speak of a Dell, HP, or UCS server's "RAID card", I'm talking about what is currently integrated with that stack, and if it's a C6XX derivative? I'm cool with that for basic boot-disk mirroring. I do not agree with buying a server, forgoing the onboard solution, and buying some random third-party card for boot-drive mirroring and hoping for the best. I also do not agree with buying a server, forgoing the onboard solution, and using a software-defined RAID solution instead on the same system for boot-drive mirroring.

I 100% agree with you when it comes to LSI, Adaptec, and EMC all being "software" RAID. Cause yes.. they all run software RAID. But there is a HUGE difference between a chumpy LSI RAID solution and an EMC solution. There is also a huge difference between mirroring two drives on a local system for boot purposes and an entire hardware stack built to serve up hundreds of TBs of capacity.
 