How AMD StoreMI Technology Works

DooKey

[H]F Junkie
Joined
Apr 25, 2001
Messages
13,577
Last month Dan wrote up a nice, informative article on AMD StoreMI and provided some test results. The bottom line: it's a nice technology, similar to Intel's SRT, that could help out gamers who can't afford a large SSD. You should give it a read if you missed it. However, if you want the scoop straight from AMD, I have a nice video from Robert Hallock in which he tells you how it works. It's very informative and you should check it out.

Watch the video here.
 
I was a bit perplexed when I found out, after the release of Ryzen, that the pronunciation is "em, eye," not "me." I was thinking it would be very Nihongo. So, no Sense-"me" and Store-"me."


https://www.hardwarezone.com.sg/fea...en-and-it-s-not-just-fast-its-super-smart-too
While no further technical specs have been revealed, we now know that Ryzen will feature a group of new sensing and adaptive prediction technologies collectively known as SenseMI (pronounced Sense Em Ai, not Sense Me).
 
I'd prefer just using the higher-speed NV storage as a cache, rather than moving stuff on and off the HDD.
Not much downside: in the video's example, a reduction from 4TB to 3.75TB of usable storage.
But that assumes I still used HDDs for anything but NAS, which I don't: the only things that spin in my systems are the fans.

Seriously, who wants to deal with all that migrating, location remapping, data fragmentation, and other issues just for a ~6% increase in HDD storage size?
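For what it's worth, the ~6% figure falls out of the video's example numbers. A quick sketch (assumed sizes: a 3.75 TB HDD paired with a 256 GB-class SSD):

```python
# Capacity trade-off between tiering and caching, using the video's example.
# Assumed sizes: ~3.75 TB HDD plus a 0.25 TB (256 GB-class) SSD.
HDD_TB = 3.75
SSD_TB = 0.25

tiered = HDD_TB + SSD_TB   # tiering: the SSD's capacity counts toward the volume
cached = HDD_TB            # caching: the SSD holds only copies, adds no capacity

gain = (tiered - cached) / cached
print(f"tiered: {tiered} TB, cached: {cached} TB, gain: {gain:.1%}")
# → tiered: 4.0 TB, cached: 3.75 TB, gain: 6.7%
```

So the entire capacity upside of tiering over caching is the SSD's size relative to the HDD — small change for the extra complexity.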
 
My first reaction is that they had to sit there and interview people that can write mirrored and backwards.

One nice thing is that you can use pretty much any SSD, as far as I understand it. Intel's has limitations put in place by the marketing department: there's no reason my 750 Series PCIe NVMe drive shouldn't work with their product, but it doesn't; you have to buy their new Optane stuff. Shame.

BTW, where is TR support?
 
I'm going to be the odd man out here, but I'd like to see tiered desktop storage solutions stop focusing on the low-end market. Stop trying to make mechanical disks "faster" for users with really limited systems. In my experience, anyone so tightly coupled to mechanical disks that sit in their desktop rather than a large NAS has also skimped on the rest of the system. Hitting the mechanical disk might be slow, but those users aren't doing much of anything I/O-challenging with their systems in my experience. I don't see this benefiting higher-end systems with users making money off their PCs, or even enthusiasts, with no RAID support.

I like the concept here vs. caching, but it always seems like a really hard sell for the systems/users this targets. IMO, it's almost always upselling HDD-only to HDD + small SSD, not small SSD + large HDD vs. a larger SSD only. Maybe it will expand to cover more use cases and be higher performance.

I'd really like to see tiered storage/caching take a ThreadRipper workstation focus; take the FreeNAS route and make the tiers/caching super high performance. How about a RAM -> SSD -> NAS tier? I'd love to have a ThreadRipper system with a ton of RAM and RAM caching, an Intel Optane SSD (OS + dynamically managed blocks), and then 10GbE/25GbE to a NAS. The OS would always be super fast, random I/O could be boosted beyond even Optane numbers with the RAM cache, and whatever data I've been using is local on the Optane but I don't have to manually pull it across the network and manage it.

This won't sell Ryzen systems, but neither does the existing solution IMHO. It could make AMDs HEDT architecture much more competitive with Intel's and enable it to move upwards in pricing. HEDT/workstation users who aren't doing it just for the LOLs have money and will pay for a better solution...to a point. There's money filling the niche from SSD to NAS and hiding as much latency as possible. Toss in some dynamic prefetching or better index caching.

Storage tiering/caching exists to hide latency. For almost anyone who really cares about system latency/performance and needs more storage than easily fits locally, the latency is out to the network, not to a set of mechanical disks.
 
I'm going to be the odd man out here, but I'd like to see tiered desktop storage solutions stop focusing on the low-end market. Stop trying to make mechanical disks "faster" for users with really limited systems. In my experience, anyone so tightly coupled to mechanical disks that sit in their desktop rather than a large NAS has also skimped on the rest of the system. Hitting the mechanical disk might be slow, but those users aren't doing much of anything I/O-challenging with their systems in my experience. I don't see this benefiting higher-end systems with users making money off their PCs, or even enthusiasts, with no RAID support.

I like the concept here vs caching, but it always seems like a really hard sell for the systems/users this targets.

Yeah, that was my first thought when I read about this. I already do essentially what this thing does as a matter of habit: less-accessed data on an HDD; games, the OS, and stuff I use frequently on the SSD. Makes me wonder who this is for, exactly.
 
And if your mechanical drive kicks the bucket?

It seems like this is adding one more device in line for a chance of failure. At least currently, if I had, say, a 4TB music/movie/game drive that died, my OS would still be intact and functioning on an SSD.

Or the other way around: what if the SSD failed? In this scenario it sounds like ALL data would be lost, as blocks would be scattered here and there on both drives. No thanks.
 
I noticed he didn't write down which video card he's using, LOL! Good, simple explanation though; he wasn't douchy about it.
 
If they said what this really was, I think no one would be paying any attention to it. It's just some third-party software, one of a million niche utilities. The only reason for the attention is that they slapped AMD on the third-party license.

When the marketers looked at who would push adoption of Ryzen first, it was all the Twitch/YouTubers, and this software is likely something a lot of them would use. For the other 99% of us, this doesn't make sense; Steam lets you handle this already.
 
PrimoCache. $20. Works with any vendor's kit. I got it when I couldn't get Intel's horrible Optane software to work; PrimoCache saw the drive immediately and I was able to dedicate it to L2 cache in three clicks. Heck, it even sped up access from my Evo Pro M.2 SSD in a noticeable way.

I don't have a problem with this except it's AMD only. I get that they feel they need stuff to differentiate them, I'd rather have something vendor neutral.
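The caching approach described above is conceptually simple — a fast device holds copies of recently used blocks, and the backing drive always keeps the full data set. Here is a toy LRU model of the general idea (my own sketch; PrimoCache's actual internals aren't public):

```python
# Toy block-level read cache: a fixed-size LRU cache in front of a slow
# backing store. The backing store always holds every block, so losing
# the cache device loses no data -- the key difference from tiering.
from collections import OrderedDict

class BlockCache:
    def __init__(self, backing, capacity):
        self.backing = backing      # dict of block_id -> data (stands in for the HDD)
        self.capacity = capacity
        self.cache = OrderedDict()  # stands in for the SSD/RAM cache
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)  # mark as most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]         # slow path: hit the HDD
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least recently used block
        return data

hdd = {i: f"block-{i}" for i in range(100)}
c = BlockCache(hdd, capacity=10)
for i in [1, 2, 3, 1, 2, 3, 1, 2, 3]:
    c.read(i)
print(c.hits, c.misses)  # → 6 3
```

Note that `hdd` is never modified: evicting from the cache throws away a copy, not the data itself.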
 
I've been using this as a boot drive since the X470 launch. Works great for me. I can install everything I want to my C drive without worrying about it filling up super fast like when I previously only used an SSD for my C drive.

I still have a second NVMe for things that I want to be absolutely sure run as fast as possible.

I keep a Windows image backup in case it ever dies, but I'd do this even if I was running off a single drive.

...

I don't have a problem with this except it's AMD only. I get that they feel they need stuff to differentiate them, I'd rather have something vendor neutral.

You can buy StoreMI from the original maker directly: http://www.enmotus.com/products In spite of the marketing hype, this is not truly an AMD-exclusive technology. Getting an X470 motherboard just gives you a "Free" license.
 
You can buy StoreMI from the original maker directly: http://www.enmotus.com/products In spite of the marketing hype, this is not truly an AMD-exclusive technology. Getting an X470 motherboard just gives you a "Free" license.

Looks interesting. It's quite a bit more expensive for the server version than PrimoCache, and it looks like you have to set up a virtual volume and put your data there; that's more complexity. With PrimoCache you just enable it on your existing box and it starts working; no need to change your drive layouts. PrimoCache can even be configured from the command line, so it works with Hyper-V Server (perfect for home or small-business VM environments).

It would be interesting to do some benchmarking/extended run time comparisons between the two to see if the extra work for their "machine learning" really is worth it or not. I guess if you needed max performance in a production environment the extra cost and configuration overhead could be worth it; my environments aren't that critical.

I'm always interested in stuff like this and have added it to my toolbox for potential use if the need dictates so thanks for the link! It's cheaper than the other more established server caching solutions so that's a good thing.
 
Last month Dan wrote up a nice, informative article on AMD StoreMI and provided some test results. The bottom line: it's a nice technology, similar to Intel's SRT, that could help out gamers who can't afford a large SSD. You should give it a read if you missed it. However, if you want the scoop straight from AMD, I have a nice video from Robert Hallock in which he tells you how it works. It's very informative and you should check it out.

Watch the video here.
It sure seems to me there is a stupid misleading error in the video.

After laboriously explaining how it works at the block level, he then immediately lapses into contradictory references to ~"frequently used files," etc.

If, e.g., the two tiers were fairly equal in size, the algorithm described would probably split most sizable files between both drives, so a "file" would be like swiss cheese on each physical drive.

It also actively hides an important attraction for some: raw storage speed. Tiering, like RAID 0, teams two drives, but differently, in ways that could be advantageous to some apps.
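The swiss-cheese point is easy to demonstrate with a toy model (hypothetical heat values, not AMD's actual algorithm): if blocks are promoted purely by access count, with no notion of which file owns them, hot and cold blocks of the same file land on different drives.

```python
# Toy block-level tiering: promote the hottest blocks, file-blind.
from collections import Counter

BLOCKS_PER_FILE = 8
FAST_TIER_BLOCKS = 8  # the fast tier fits only half of all blocks

# Two files of 8 blocks each; a block is identified as (filename, index)
blocks = [(f, i) for f in ("game.bin", "video.mkv") for i in range(BLOCKS_PER_FILE)]

# Pretend every other block of *each* file is hot
heat = Counter({(f, i): (100 if i % 2 == 0 else 1) for f, i in blocks})

# Promotion looks only at heat, never at file ownership
fast = set(sorted(heat, key=heat.get, reverse=True)[:FAST_TIER_BLOCKS])

for name in ("game.bin", "video.mkv"):
    layout = ["SSD" if (name, i) in fast else "HDD" for i in range(BLOCKS_PER_FILE)]
    print(name, layout)
# Each file ends up interleaved SSD/HDD/SSD/HDD... -- swiss cheese on both drives
```

So talking about "frequently used files" is at best shorthand: the unit of migration is the block, and a single file's blocks can and will straddle both tiers.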
 
I guarantee this will be available on TR2.

I'd use it.

I have 8 spindles in my FreeNAS. I'd iSCSI the NAS drives as bulk storage and StoreMI them into my 512GB 960 Pro.

And I'm not bandwidth limited: I have 10Gb fiber into my NAS and my two desktops at home, as I'm an ex-network engineer with lots of leftover Intel and Cisco crap, haha. Everything else is WiFi.
 
And if your mechanical drive kicks the bucket?

It seems like this is adding one more device in line for a chance of failure. At least currently, if I had, say, a 4TB music/movie/game drive that died, my OS would still be intact and functioning on an SSD.

Or the other way around: what if the SSD failed? In this scenario it sounds like ALL data would be lost, as blocks would be scattered here and there on both drives. No thanks.

Like most things, it assumes you have a backup, but yeah, an HDD is iffy as is (5% failure rate?), so making it one half of a ~striped array seems to invite a time-wasting restore.

The odds improve greatly, IMO, if the tiers are both SSDs, e.g. a 250GB NVMe with a 500GB SATA SSD.

Other X470 combos are:

the full-speed NVMe as tier 1 and the chipset NVMe as tier 2. This has the advantage of minimising traffic on the chipset; the hard work is done by the faster NVMe port.

either of the NVMe drives as tier 1 and a SATA SSD as tier 2.
 
I guarantee this will be available on TR2.

I'd use it.

I have 8 spindles in my FreeNAS. I'd iSCSI the NAS drives as bulk storage and StoreMI them into my 512GB 960 Pro.

And I'm not bandwidth limited: I have 10Gb fiber into my NAS and my two desktops at home, as I'm an ex-network engineer with lots of leftover Intel and Cisco crap, haha. Everything else is WiFi.
If you are suggesting using an array as a tier, you cannot do this.
 
If you are suggesting using an array as a tier, you cannot do this.

Not really suggesting it... just having a little fun. Creative wishful thinking, nothing more. It's a simple hard drive + SSD combo system, nothing more in reality. I know that.

But a serious question: when your NVMe and your spinner are coupled together under StoreMI, can you allocate more than 2GB of RAM? I don't know if it would make a difference, given how volatile RAM is, but some of us have 32GB-plus of RAM, so I can afford to toss half of it at the tech just for fun.

I have quad-channel RAM simply for the looks of having 4 DIMMs all RGB'd up in my X299 board. Nothing more. I have no need for 32GB in any usage scenario of mine.
 
I guarantee this will be available on TR2.

I'd use it.

I have 8 spindles in my FreeNAS. I'd iSCSI the NAS drives as bulk storage and StoreMI them into my 512GB 960 Pro.

And I'm not bandwidth limited: I have 10Gb fiber into my NAS and my two desktops at home, as I'm an ex-network engineer with lots of leftover Intel and Cisco crap, haha. Everything else is WiFi.

Do you have any advice on buying used server drives for desktop use (I'm only familiar with SATA)? Just curious. As you say, it's fun having lots around to play with.
 
FYI, there is similar free software bundled with a cheap RAID card I have, which has an interesting twist: you can opt to have a copy of the small tier automatically kept on the large tier.

In the event of the first tier failing, recovery is possible.

IMO this would also speed things up when the fast tier is making space by moving stuff to the slow tier, as the data already resides there and can just be re-labeled rather than copied.
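That re-label idea can be sketched in a few lines (hypothetical code, not the card vendor's software): because the slow tier keeps a copy of every block, demotion never has to move data.

```python
# Tiering with a full shadow copy on the slow tier. The slow tier is a
# superset of the fast tier, so demotion is just dropping the fast copy.
fast_tier = {"A": "data-A", "B": "data-B"}
slow_tier = {"A": "data-A", "B": "data-B", "C": "data-C"}  # copy of everything

def demote(block_id):
    # Nothing to copy down: the slow tier already holds this block
    fast_tier.pop(block_id, None)

def promote(block_id):
    # Copy up, but keep the slow copy for recovery
    fast_tier[block_id] = slow_tier[block_id]

demote("B")   # instant: no data movement
promote("C")
print(sorted(fast_tier))  # → ['A', 'C']
# If the fast tier dies outright, slow_tier still holds every block
```

The costs are that the slow tier sacrifices capacity equal to the fast tier's size, and writes must land on both tiers to keep the shadow copy current.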
 
Do you have any advice on buying used server drives for desktop use (I'm only familiar with SATA)? Just curious. As you say, it's fun having lots around to play with.

Don't buy used server drives. Just get regular desktop hard drives, couple one to a small SATA SSD, and you'll be happy with the performance. Used server gear is, or can be, rather expensive for a home user, and you're likely not to see any uptick in your day-to-day computing performance.

For instance, if you want SAS or nearline-SAS drives, you can't just snap them into your desktop; you'll need a RAID card that can control SAS drives. In many cases there are more things you have to buy to get enterprise crap working in a home environment.

I have zero need for a Cisco distribution switch in my home, but it has enough bandwidth to serve an entire residential development of houses on just 2 ports, much less the other 46, haha.
 
I guarantee this will be available on TR2.

I'd use it.

I have 8 spindles in my FreeNAS. I'd iSCSI the NAS drives as bulk storage and StoreMI them into my 512GB 960 Pro.

And I'm not bandwidth limited: I have 10Gb fiber into my NAS and my two desktops at home, as I'm an ex-network engineer with lots of leftover Intel and Cisco crap, haha. Everything else is WiFi.
We're in the same boat. I've got 12 spindles and Intel X550-T2 and the new XXV710-DA2 (dual 25GbE). I've been testing, but the XXV will live in my FreeNAS box. Unfortunately, I don't (yet) have a 25GbE switch, so it has to be directly connected to the workstation.
 
We're in the same boat. I've got 12 spindles and Intel X550-T2 and the new XXV710-DA2 (dual 25GbE). I've been testing, but the XXV will live in my FreeNAS box. Unfortunately, I don't (yet) have a 25GbE switch, so it has to be directly connected to the workstation.

Well, you know how iSCSI presents itself as a physical drive to Windows; you might actually be able to use StoreMI with it. We'll have to find out in due time with some testing.

10Gb and higher really isn't that expensive. The cards are about $150 to $300, and fiber or Cat 6 is cheap on Monoprice. It's the damn switches that burn you. However, Ubiquiti has a UniFi 16-port 10Gb SFP+ switch for $500 that I have been eyeballing for many moons now; all ports are 10Gb capable.

Anyway, sorry... back on topic. Here's to hoping TR2 is an affordable beast. Waiting on Kyle's review.
 