Intel Announces New Optane DC Persistent Memory

DooKey

[H]F Junkie
Joined: Apr 25, 2001
Messages: 13,552
Intel is launching a new type of memory based on 3D XPoint, and it's persistent memory in a DDR4 form factor. This new memory is expected to be used in datacenters and should really help with database operations, but I'm not sure it will help the mainstream. With that said, I'm sure big data will love it.

Intel clearly sees new storage technology as key to its long-term addressable market spaces, and it’s targeting its initiatives at these areas, possibly in a bid to tie companies more closely to its CPU products. After all, if Optane provides significant performance advantages and Works Best With Intel (or only with Intel), then Intel has a neat method for keeping itself even more relevant in the data center market.
 
This is another tiny step forward toward integrating memory and storage. 10 years from now, they'll no longer be separate components.
 
It will be a challenge for OSs to fully exploit this new technology -- we've never seen byte-rewritable (without erasure) non-volatile memory at anywhere near this scale, outside of exotic battery-backed DRAM drives. As a result, file systems are all block-oriented, a real waste when you can change a value in a file by just writing the byte(s) that express that value.
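To make the contrast concrete, here's a minimal sketch of what byte-granular persistence could look like from user space -- plain POSIX mmap against a file on a DAX-capable filesystem (the /mnt/pmem path and the counters layout are made up for illustration):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical path: a file on a DAX-mounted persistent-memory filesystem.
 * Assumes the file already exists and is at least FILE_SIZE bytes. */
#define PMEM_FILE "/mnt/pmem/counters.dat"
#define FILE_SIZE 4096

int main(void) {
    int fd = open(PMEM_FILE, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* With DAX, this maps the persistent media directly -- no page cache. */
    uint64_t *counters = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (counters == MAP_FAILED) { perror("mmap"); return 1; }

    /* Update one 8-byte value in place: no read-modify-write of a 4K block. */
    counters[7] += 1;

    /* Make the store durable; with DAX this flushes the range to the media. */
    msync(counters, FILE_SIZE, MS_SYNC);

    munmap(counters, FILE_SIZE);
    close(fd);
    return 0;
}

Compare that with a block device, where updating those same eight bytes means reading, modifying, and rewriting an entire sector.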

Still, block-oriented is pretty much required if you're going to encrypt your filesystem, in which case you don't have much use for byte-rewrite anyway. This stuff is still wicked fast, on a wicked fast interface (compared to even PCIe 3.0 x16). Latency and bandwidth are vastly improved over prior-art NV solutions. So if you can treat this as just-another-disk, boot off of it, keep your OS and swap file on it, and maybe use a bit of it to cache your way-slow M.2 SSD, it may radically change the PC experience.

So sometime next decade, I expect I'll build a modest little PC with, say, 16 cores/32 threads at a base clock of 5GHz or more, 64GB of DRAM and 1TB of XPoint or a competing technology in the memory slots, a 24TB (6 bits per cell) M.2 SSD, and a GPU equivalent to 8 or more Titan Vs. Imagine how fast Visio will run on that!* Imagine how good the VR live-streams from the Moon (Bezos) and Mars (Musk) colonies will look! Some things to look forward to.

(* I do my work with Office and Visio, and even on my i7-6700K/GeForce 1080 system, Visio can be annoyingly slow.)
 
I could watch cat videos and Foamy the Squirrel videos faster than I ever could before!
 
My question is: how do you wipe persistent memory? Right now, if you have issues, you reboot or turn off the PC, memory gets wiped, and you start with a more or less clean slate. How will this work with these things?
 
My question is: how do you wipe persistent memory? Right now, if you have issues, you reboot or turn off the PC, memory gets wiped, and you start with a more or less clean slate. How will this work with these things?

Probably we'll still have volatile memory as well, since that evolves in speed too. You might also be able to designate an area to erase at BIOS startup, and that area could be used for scratch or app temp data.
 
My question is: how do you wipe persistent memory? Right now, if you have issues, you reboot or turn off the PC, memory gets wiped, and you start with a more or less clean slate. How will this work with these things?
Same way you'd wipe an SSD or HDD, I imagine. Boot to removable storage and overwrite everything in the XPoint NVM. Of course, with the speed and storage density we'll see in the future, you could also use a journaling file system and just revert it to a previous point in time.
 
My question is: how do you wipe persistent memory? Right now, if you have issues, you reboot or turn off the PC, memory gets wiped, and you start with a more or less clean slate. How will this work with these things?

Same way you wipe volatile memory after a power on or reboot.
 
You'd 'wipe' them by removing the map/page table to the physical memory on the device. Create a new one and it'd be seen as blank.

Can't remember the exact terms as it's 20 years since CS, but it's basically the same as hard disks: a normal delete removes the pointer, not the data. It just gets turned over faster in RAM, and data is much more distributed over the device.
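A toy sketch of that idea in C (purely illustrative -- no real controller works this way literally): 'wiping' just clears the translation table, and the old bytes linger on the media until reused:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy model: a tiny "device" with a translation table mapping logical
 * pages to physical pages. Names and sizes are made up. */
#define PAGES 4
static uint8_t physical[PAGES][8];  /* the actual media */
static int     table[PAGES];        /* logical -> physical map; -1 = unmapped */

static void wipe(void) {
    /* A fast "wipe": drop the mappings. The bytes in `physical`
     * are untouched and only get overwritten on reuse. */
    for (int i = 0; i < PAGES; i++) table[i] = -1;
}

int main(void) {
    table[0] = 2;
    memcpy(physical[2], "secret!", 8);

    wipe();

    /* Logically blank... */
    printf("logical page 0 mapped: %s\n", table[0] < 0 ? "no" : "yes");
    /* ...but the data is still physically present until reused. */
    printf("physical page 2 still holds: %s\n", physical[2]);
    return 0;
}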

I think the updates to use it would come pretty quick, tbh; in-memory DBs have been a thing for a while, but now ACID isn't such a problem. The OS would need some work in terms of power states and reboots, but it's probably already been done on some weird-ass Linux derivative.
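For a flavor of why ACID gets easier, here's a minimal sketch using Intel's PMDK libpmem (the library and calls are real; the /mnt/pmem/ledger path and record layout are made up): an ordinary store followed by an explicit persist, making the value durable with no block I/O or fsync in sight:

#include <libpmem.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical file on a DAX-mounted persistent-memory filesystem. */
#define PATH "/mnt/pmem/ledger"
#define LEN  4096

int main(void) {
    size_t mapped_len;
    int is_pmem;

    /* Map (creating if needed) a region of persistent memory. */
    uint64_t *ledger = pmem_map_file(PATH, LEN, PMEM_FILE_CREATE,
                                     0600, &mapped_len, &is_pmem);
    if (ledger == NULL) { perror("pmem_map_file"); return 1; }

    /* "Commit" a value with an ordinary store... */
    ledger[0] = 42;

    /* ...then force it out of the CPU caches to the media. After this
     * returns, the value survives power loss. */
    if (is_pmem)
        pmem_persist(ledger, sizeof(uint64_t));
    else
        pmem_msync(ledger, sizeof(uint64_t));  /* fallback on non-pmem media */

    pmem_unmap(ledger, mapped_len);
    return 0;
}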

Pretty cool stuff.
 
"DC Persistent Memory"

Does that mean no matter who's in control it'll remember what they did?

Sorry, bad joke but couldn't resist.

 
It will be a challenge for OSs to fully exploit this new technology -- we've never seen byte-rewritable (without erasure) non-volatile memory at anywhere near this scale, outside of exotic battery-backed DRAM drives. As a result, file systems are all block-oriented, a real waste when you can change a value in a file by just writing the byte(s) that express that value.
I'm not sure byte-rewritable is quite the way to go, though. The indirection with a TLB could be a real PITA for the caches -- more addresses than actual data at that point. Further, I'm of the mindset that some form of HBM acts as a high-bandwidth cache between the persistent memory and the processor: DIMMs over NVMe for the higher bandwidth limit, with a large cache on die/package. Write performance of Optane isn't great from a power perspective, so a large cache to absorb frequent modification makes sense. Yes, there are other applications where the cache may be less useful, but in general, replacing NAND with persistent memory seems the better solution.
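To illustrate the write-absorption point, here's a toy sketch in C (sizes, the direct-mapped policy, and all names are made up for illustration) of a small volatile cache in front of the persistent media, turning thousands of hot-byte writes into a single media write:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy model: a volatile write-back cache absorbing repeated writes
 * before they ever reach the (power-hungry) persistent media. */
#define LINES     4
#define LINE_SIZE 64
#define PMEM_SIZE (64 * LINE_SIZE)

static uint8_t pmem[PMEM_SIZE];  /* stand-in for the Optane media */
static int pmem_writes;          /* how often we actually hit it */

struct line { int tag; int dirty; uint8_t data[LINE_SIZE]; };
static struct line cache[LINES];

static void flush_line(struct line *l) {
    if (l->dirty) {
        memcpy(&pmem[l->tag * LINE_SIZE], l->data, LINE_SIZE);
        pmem_writes++;
        l->dirty = 0;
    }
}

static void cached_write(int addr, uint8_t val) {
    int block = addr / LINE_SIZE;
    struct line *l = &cache[block % LINES];  /* direct-mapped, for brevity */
    if (l->tag != block) {                   /* miss: evict, then fill */
        flush_line(l);
        memcpy(l->data, &pmem[block * LINE_SIZE], LINE_SIZE);
        l->tag = block;
    }
    l->data[addr % LINE_SIZE] = val;
    l->dirty = 1;                            /* write absorbed in cache */
}

int main(void) {
    for (int i = 0; i < LINES; i++) cache[i].tag = -1;

    /* 10,000 writes to one hot byte... */
    for (int i = 0; i < 10000; i++) cached_write(100, (uint8_t)i);
    for (int i = 0; i < LINES; i++) flush_line(&cache[i]);

    /* ...cost a single write to the persistent media. */
    printf("pmem writes: %d\n", pmem_writes);
    return 0;
}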
 