AMD's StoreMI Technology @ [H]

FrgMstr

AMD's StoreMI Technology

AMD's StoreMI (Store Machine Intelligence) Technology is a storage performance enhancement technology that can accelerate the responsiveness and the perceived speed of mechanical storage devices to SSD levels. This isn't exactly a new concept, but AMD's approach to this implementation is different from what we've seen in the past.
 
Interesting indeed. Unfortunately it's a complex solution that requires basic technical knowledge to live with, so it's not an option for preloading on my customers' machines; it would probably create more problems than it solved.
 
"AMD StoreMI is not a caching solution; it utilizes advanced machine intelligence, virtualization and automated tiering to analyze the data blocks that are most often accessed, and actually moves those blocks to the fastest storage tier."

This makes it a non-starter for me. Speed isn't the only criterion I use in deciding where data should go: reliability and the ability to change configurations matter as well.

Not being able to handle drive failure, leading to potential loss of important data or an unbootable system, is unacceptable, especially when HDDs are involved, but the article says this technology can't handle it. Furthermore, the review and slides don't address whether this technology is compatible with backup and disk-imaging utilities. If it isn't, then it's a non-starter for serious work.

Furthermore, I want to be able to pull out a drive with full knowledge of what information I'm removing from the system. This technology makes that impossible.

All in all, it looks more like a toy for use in marketing (like the junky little prize in a box of Cracker Jacks) or for cheesing benchmarks than a usable tool.
 
This is interesting tech...

AI is going to revolutionise storage in the next few years. This is interesting tech... my money is on some very cool storage tech coming out over the next few years. WD is moving toward using RISC-V chips in its drives and plans to use that processing power to do the same types of things at the hardware level; the company has stated it plans to ship billions of RISC-V chips on drives within the next few years. Hybrid drives with enough on-board processing power to properly move the right data at the right times will, I'd bet, be very attractive. The next few years should be interesting in the storage field.
 
What about a 256GB NVMe + 2TB SATA SSD? You could really have some fun with this.
 
Thanks for covering this. I saw some stuff on it a few weeks ago and found absolutely no one reviewing it.
 
I wonder if there is any technical reason you couldn't use this on an Intel system. I suspect it's just a check in the code that only allows it to run on Ryzen processors.
 
What about a 256GB NVMe + 2TB SATA SSD? You could really have some fun with this.

You can absolutely do this. In fact, AMD's documentation even recommends pairing a fast 256GB NVMe drive with a larger SSD for the best-performing "large" solution. This is a complex solution and I can think of a number of configurations to try this with, but sadly, it takes a lot of time to test.

"AMD StoreMI is not a caching solution; it utilizes advanced machine intelligence, virtualization and automated tiering to analyze the data blocks that are most often accessed, and actually moves those blocks to the fastest storage tier."

This makes it a non-starter for me. Speed isn't the only criterion I use in deciding where data should go: reliability and the ability to change configurations matter as well.

Right now, the technology is really only about speeding up slower storage.
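
To make the caching vs. tiering distinction concrete, here is a rough conceptual sketch (an illustration of the idea only, not AMD's actual algorithm): in a tiered setup, hot blocks are moved to the fast device rather than copied, so each block lives in exactly one place and the combined volume exposes the capacity of both drives.

```python
# Conceptual sketch of automated tiering (illustration only, not AMD's implementation):
# hot blocks are *moved* to the fast tier, so each block lives in exactly one place.
from collections import Counter

class TieredStore:
    def __init__(self, fast_capacity_blocks):
        self.fast_capacity = fast_capacity_blocks
        self.fast_tier = {}      # block_id -> data on the SSD
        self.slow_tier = {}      # block_id -> data on the HDD
        self.access_counts = Counter()

    def write(self, block_id, data):
        # New data lands on the slow tier until it proves itself hot.
        self.slow_tier[block_id] = data

    def read(self, block_id):
        self.access_counts[block_id] += 1
        if block_id in self.fast_tier:
            return self.fast_tier[block_id]
        data = self.slow_tier[block_id]
        self._maybe_promote(block_id)
        return data

    def _maybe_promote(self, block_id):
        # Promote (move, not copy) the block if it is among the hottest ones.
        hottest = {b for b, _ in self.access_counts.most_common(self.fast_capacity)}
        if block_id in hottest:
            if len(self.fast_tier) >= self.fast_capacity:
                # Demote the coldest block in the fast tier back to the slow tier.
                coldest = min(self.fast_tier, key=lambda b: self.access_counts[b])
                self.slow_tier[coldest] = self.fast_tier.pop(coldest)
            self.fast_tier[block_id] = self.slow_tier.pop(block_id)
```

A cache, by contrast, keeps a duplicate of hot data on the fast device while the authoritative copy stays on the slow one.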

Not being able to handle drive failure, leading to potential loss of important data or an unbootable system, is unacceptable, especially when HDDs are involved, but the article says this technology can't handle it. Furthermore, the review and slides don't address whether this technology is compatible with backup and disk-imaging utilities. If it isn't, then it's a non-starter for serious work.

The technology would work fine with backup and imaging software. That said, I haven't tested this, so I'm not recommending you trust it for that; some products will be inherently problematic with this type of solution. To be perfectly honest, I wouldn't use this on an OS drive. That said, it will work with backup software. Why would you think otherwise? What's presented to the OS is a single drive volume. If you're doing a simple data backup, everything from NT Backup or batch files based on xcopy commands to imaging software would be perfectly usable for this purpose. Where you may run into problems is using imaging software and trying to write to a different volume that's not managed by StoreMI, such as a case where you replace the drive volumes entirely.
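
To illustrate the point about simple data backups: because the StoreMI pair is presented to Windows as one ordinary volume, a plain recursive file copy is all a basic data backup needs. Here's a minimal sketch standing in for the xcopy-style batch files mentioned above (the paths are hypothetical):

```python
# Minimal file-level backup sketch (hypothetical paths). The StoreMI-managed
# volume appears to Windows as a single ordinary drive letter, so a plain
# recursive copy to any other drive or share works as a simple data backup.
import shutil
from pathlib import Path

SOURCE = Path(r"D:\Data")               # StoreMI-managed volume (hypothetical)
DESTINATION = Path(r"E:\Backups\Data")  # any other drive or network share

def backup(src: Path, dst: Path) -> None:
    # dirs_exist_ok=True (Python 3.8+) lets repeated runs refresh an existing
    # backup tree; copy2 (the default) preserves file timestamps.
    shutil.copytree(src, dst, dirs_exist_ok=True)

if __name__ == "__main__":
    backup(SOURCE, DESTINATION)
```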

Furthermore, I want to be able to pull out a drive with full knowledge of what information I'm removing from the system. This technology makes that impossible.

You can't do that now in any multi-drive solution. If I've got a RAID 5 array with three disks, I can take one drive out and toss it on the floor. However, I can't read what's on it alone. I can't pull a drive out of a two disk RAID 0 array and know that either. I can't pull one drive out of a 12 disk SAN and know precisely what's on it, nor be guaranteed of being able to access it in any way shape or form. This technology doesn't change how things are done today in that respect. This is a technology that accelerates slow storage in a surprisingly fast and unique way. You can also deactivate StoreMI and migrate all the data off the cache volume manually should you want to do so.

All in all, it looks more like a toy for use in marketing (like the junky little prize in a box of Cracker Jacks) or for cheesing benchmarks than a usable tool.

How on Earth is performance not usable? It's a solution that works fine in certain situations. Nothing more, nothing less. I can't use it in my system either, but that hardly makes it a marketing gimmick.

The best configuration I can see for this is a fast NVMe based SSD for your OS and then a 256GB cache drive plus a large mechanical hard drive. Add to that whatever backup solution you like to employ and you are good to go. You will have all the benefits of a big mechanical drive without the huge performance gap you normally see transitioning from an SSD to mechanical storage. Granted, this isn't a massive advantage in the sense that a lot of what we throw on those big clunky spinners is data that doesn't need massive performance, but the algorithm that governs this can move whatever does fit that category for you.

You need a backup solution no matter what you are doing if you care about your data. Again, this solution doesn't take the place of that or eliminate the need for that. Even if you are running a RAID 1 setup on your system, any data that resides in that system and remains online and accessible is at risk. Fire, power surges, viruses, and all other security concerns come into play in that scenario.

I also don't see this as someone cheesing benchmarks. Do people sit around and bench their mechanical hard drives and compare virtual dick sizes that way? I don't think they do. When people do benchmarks for bragging rights they aren't including their slow ass mechanical storage in the mix.

This is interesting tech...

AI is going to revolutionise storage in the next few years. This is interesting tech... my money is on some very cool storage tech coming out over the next few years. WD is moving toward using RISC-V chips in its drives and plans to use that processing power to do the same types of things at the hardware level; the company has stated it plans to ship billions of RISC-V chips on drives within the next few years. Hybrid drives with enough on-board processing power to properly move the right data at the right times will, I'd bet, be very attractive. The next few years should be interesting in the storage field.

I believe this is a stepping stone to much better implementations down the line. As I said in the article, one day, if this were applied to accelerating mechanical RAID arrays, or became more flexible in general, I think it would be fantastic.
 
The best configuration I can see for this is a fast NVMe based SSD for your OS and then a 256GB cache drive plus a large mechanical hard drive.

Three drives? Sounds like an expensive hassle. And how much of a boost will this give over, say, a simpler and cheaper XPoint+HDD solution?

I don't see this technology having any real impact on anything.
Seeing "advanced machine intelligence" in the marketing is a pretty good clue that it was just hype.
 
Three drives? Sounds like an expensive hassle. And how much of a boost will this give over, say, a simpler and cheaper XPoint+HDD solution?

I don't see this technology having any real impact on anything.
Seeing "advanced machine intelligence" in the marketing is a pretty good clue that it was just hype.

I would use three drives regardless. I wouldn't want my OS drive on anything but a super fast NVMe drive, leaving StoreMI for use with the larger data drive. If you're using the RAM cache, and the data you are working with can benefit from it, your drives won't make much of a difference. At that point you are primarily executing data from the RAM cache, which is faster than running off the cache drive. What I saw here was a roughly 5x boost in performance without the RAM cache. In some types of workloads it's much greater. An SSD, even a slow one, probably wouldn't benefit as much, but I'm sure it would still be faster. Again, I think the best use of this technology is to speed up large mechanical drives.
 
I would go with the NVMe drive for OS and Dan's fourth option with an additional SSD and mechanical drive for all my games, with RAM cache. That seems to be a perfect way to get more of my Steam games locally for a rather cheap cost and faster speeds without having to move games around. I would have zero concern about data loss in this case since I could always download any Steam game if needed.

I love these types of articles by the way and the testing that shows potential benefits. Great job Dan!
 
I'm thinking this might work well for video editing, where you are working with large file sizes and if on a budget, using massive mechanical drives to store the files? For that application, it would be interesting to see if the 4GB RAM cache made any improvements over the 2GB cache, and if it were worth the $60. Certainly can't be worse than the memory "expander" programs everyone was paying good money for in the DOS days!
 
I'm thinking this might work well for video editing, where you are working with large file sizes and if on a budget, using massive mechanical drives to store the files? For that application, it would be interesting to see if the 4GB RAM cache made any improvements over the 2GB cache, and if it were worth the $60. Certainly can't be worse than the memory "expander" programs everyone was paying good money for in the DOS days!

That would be a good usage case for StoreMI. I'm curious on the RAM Cache question as well.
 
It is interesting. I like the aspects of what it offers, and having this in a laptop would be an overall boon to most laptop users. My laptop has a 256GB SSD and a 1TB platter drive that is slow as dog poodoo. If I could spend 40 bucks and have an across-the-board SSD-like solution, that would be nice. But as this is a software-generated solution, I don't think it is really for laptops unless you are running two SSDs and a SAS drive in a laptop, like in the article: one for boot, one for the tier, and one for slow data storage.
 
Could you include or comment on game loading benchmarks? The synthetic ones are interesting, but how well does it fare in practice?
 
I find myself disagreeing with many of the blanket statements in the introduction to this article.

Dan_D said:
This provides the most improvement in system performance as games don't really benefit from SSDs in most cases outside of reductions in level load times.

This one is accurate, but don't underplay the importance of level load times. They can make a HUGE difference in how enjoyable a game is.

Dan_D said:
SSDs have gotten larger over the years alongside mechanical drives. However, they haven’t reached capacities that are anywhere near large enough to replace such devices and supplant those completely.

For OS/Applications/Games they certainly have. I haven't had a spinning hard drive in any local machine I've built since about 2010.

For mass storage of files/media libraries, hard drives still make sense. I've had them in my NAS for some time, but all my OS/applications/games have been on SSDs for 8 years. I don't even have a particularly large SSD as SSDs go, only a PCIe 400GB Intel SSD 750, and only ~260GB of it is partitioned for Windows. That ~260GB is more than enough for OS/games/apps.

Dan_D said:
At present, my games directory on my system is 553GB and that doesn't count Steam games which were moved off my 1.2TB Intel SSD 750 this past weekend and placed on an 800GB Intel SSD 750 that I had lying around. My Steam folder is 311GB. Together, with my OS and applications I have nearly consumed 1.2TB of storage space. Granted, I may not be a typical case while I may have more applications installed but certainly other people have more pictures, games, and personal stuff than I do. I don't have that many games installed compared to the number I own on Steam and Origin. It doesn't take much to chew through a large amount of space quickly.

Remember, there is no need to keep every title you own installed. 2-3 games at a time is all anyone really needs. Uninstall it when done, and reinstall it if you want to revisit it. It takes less than 15 minutes on a good connection.

Furthermore, you can get a 2TB Samsung 860 EVO for about $500 now. (For all those who were complaining that they wouldn't buy SSDs until they hit $1 per GB, this is $0.25 per GB.) Yes, EVO drives are TLC, but in practical terms this means next to nothing. Even in pretty write-heavy environments, they last a decade or longer. And yes, they aren't as fast as PCIe NVMe drives, but in the grand scheme of things that is irrelevant too. The HUGE leap was in going from mechanical drives to first-gen SATA SSDs. Everything since has provided diminishing returns. The jump to PCIe, M.2 and NVMe has provided some stellar results in benchmarks, but in real-world experience on the desktop (load times, system responsiveness) they have done next to nothing that a good SATA SSD doesn't do. (Note, there are workloads they excel at, but most of these are enterprise-type workloads with high queue depths, like massive heavy databases.)

So, SATA EVO drives are good enough, and (with risk of sounding like Bill Gates) 2TB ought to be enough for anyone (at least today). There really is no need to have a spinning disk in a local machine anymore, unless you have an unusual workload, or are building an extreme budget system.

I'm not implying that AMD's new storage technology isn't any good (it very well may be). I just feel like the hyperbole and blanket statements in the article don't jibe with my experiences at all.

Maybe you discuss this later (I have to continue reading past the introduction, I got stuck here) but what WOULD be really interesting - IMHO - is to see if this could be used with a dual SSD setup: a small super fast Intel Optane drive (I'm still not clear if all of these are Intel-only or not anymore; I feel like the Optane drives that were small and meant for cache are still Intel-only, but the larger PCIe ones work on all platforms?) as the fast drive and a large 2TB Samsung EVO drive as the slow drive.

In general, though, what I don't like about proprietary solutions like this is that when troubleshooting or rescuing data you generally can't (at least not easily) just unplug a drive and move it to another system to read its contents. This is one of the main reasons I always use common or open-source storage methodologies. My ZFS pools on my NAS can be mounted on any system that runs ZFS if something bad happens and I need to rescue data. Same goes for a standalone drive on a client. Smart Response setups and stuff like this? At the very least it makes that more difficult, if it is possible at all.
 
A couple of questions i have after reading the same thing:

1.) Could you do multiple nesting and more than 2 tiers? Like RAMDISK -> NVME -> slower SSD -> HD?

2.) Is it cross platform, or do you have to be running Windows? Can you use it in an environment where you dual or triple boot multiple operating systems?

3.) Some real world tests would have been neat. I have found that disk benchmark apps generally aren't a very good test of how these types of things work in practice. Comparing system boot times or level load times of some popular titles before and after would possibly have more value.
 
Well done and interesting article.

I do wonder, though, about an SSD's endurance from using this. I would assume the SSD will be doing, potentially, a lot of writes as it moves the most-used data to the SSD.
 
Although they state StoreMI doesn't work with RAID, I'm taking that as meaning it doesn't work with a mainboard RAID... or does that mean any RAID? Given a hardware RAID shows up as a single drive to the OS you would think it would work in theory, although I don't know if I'd want to have 20+TB of data go poof due to an SSD failing.
 
Well done and interesting article.

I do wonder, though, about an SSD's endurance from using this. I would assume the SSD will be doing, potentially, a lot of writes as it moves the most-used data to the SSD.


My experience has been that write endurance is a mostly moot concern these days. In the early days of SSD's it might be something to worry about, but these days it is almost completely irrelevant.

Firstly, we have the Tech Report SSD Endurance Experiment (intro, links to all subchapters, conclusion).

The two worst drives in their lineup were the Kingston HyperX 3K, which failed after 728TB written, and the Intel 335 Series, which failed after 750TB written.

The best was the Samsung 840 Pro at 2.4PB (yes, petabytes)

For reference, the 840 EVO model made it to about 900TB.

Keep in mind, the Samsung 840 series are older drives, back before Samsung used 3D NAND in their SSD's

So even a planar NAND TLC drive is hitting 900TB, and 3D NAND vastly improves write endurance.


I have some personal experience to chime in with here as well. In the early days of SSD's I used OCZ drives, and I never had one last 2 years without failing. This just isn't the case anymore.

I have 9 SSD's in my server. Most of them have been in there for many years.

Two 500GB Samsung 850 EVO are mirrored and serve as the boot drives and datastores for my virtual machines. Not heavy write, but medium writes, and they ARE TLC drives. They have been in there for 2 years and 3+ months of 24/7 use in this medium high write environment, and have 84% remaining life according to smart. This suggests they should survive in this setup for ~14 years total.

I have one 1 TB EVO that serves as a video recording drive. It records all my DVR shows. Every day at 4am a script runs and moves the oldest recordings to spinning hard drives, such that it has about 150GB free for the next day. This I would call a medium to high write load. It has 93% remaining life according to smart, in about 2 years and 1 month. At this rate, the expected life is 29.7 years in this mid to high write environment. Again, this is for a TLC drive.

I have two 128GB Samsung 850 Pros which served from 2014 to 2016 as cache drives for my NAS with very high writes. Since 2016 one has been serving as a dedicated swap drive for the server, and the other is a LiveTV ring buffer. These are also pretty high write. So a total of about 4 years of use in heavy write environments. They have 82% and 84% remaining, respectively, so we are talking 22 and 25 years total, respectively.

When the 128GB drives were removed from cache duty, they were replaced by two 512GB Samsung 850 Pro drives about 2 years and 3 months ago, under brutal heavy write conditions. These have 81 and 82 percent remaining, and thus ~12 years in total.

The last two are a set of two 100GB Intel S3700 SSD's. They are a mirrored pair and serve as ZIL/SLOG devices on the system. These store sync writes while they are committed to spinning disks in case of a crash or power loss. So, not exactly write cache, but serve to speed up sync writes. They see near constant writes, but those writes are very small, and the drives stay mostly empty. They have both been in there since 2014, about 4 years, and they both have 99% remaining write endurance. If this holds up, they'll last for 400 years :p
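
All of the life projections above come from the same simple linear extrapolation of the SMART wear indicator, which a quick sketch can reproduce:

```python
# Linear extrapolation of SSD life from the SMART wear indicator: if a drive
# consumed X% of its rated endurance over T years of a given workload, it is
# on pace to last roughly T / (X/100) years in total under that workload.
def projected_life_years(years_in_service: float, percent_life_remaining: float) -> float:
    fraction_used = (100.0 - percent_life_remaining) / 100.0
    return years_in_service / fraction_used

# Reproducing the figures quoted above:
print(projected_life_years(2.25, 84))  # ~14 years   (850 EVO boot/datastore mirror)
print(projected_life_years(2.08, 93))  # ~29.7 years (1TB EVO DVR drive)
print(projected_life_years(4.00, 99))  # ~400 years  (Intel S3700 ZIL/SLOG pair)
```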


And keep in mind, all of the drives in the Tech Report endurance test went WAY beyond the point where their wear leveling indicator hit 0.

I guess, long story short, the point I am trying to make is, if you buy an SSD today, unless you get a shitty one or you are doing some crazy heavy write work loads (or possibly even both) chances are that SSD is still going to be functional when it is obsolete.
 
I find myself disagreeing with many of the blanket statements in the introduction to this article.



This one is accurate, but don't underplay the importance of level load times. They can make a HUGE difference in how enjoyable a game is.



For OS/Applications/Games they certainly have. I haven't had a spinning hard drive in any local machine I've built since about 2010.

That depends on your storage needs. I don't have nearly as massive an array as some people on our forums and SSD's are too expensive for what little capacity they offer.


For mass storage of files/media libraries, hard drives still make sense. I've had them in my NAS for some time, but all my OS/applications/games have been on SSDs for 8 years. I don't even have a particularly large SSD as SSDs go, only a PCIe 400GB Intel SSD 750, and only ~260GB of it is partitioned for Windows. That ~260GB is more than enough for OS/games/apps.

I disagree. Maybe I have ADD with games or something but I have enough games and software installed that I had to move some games to another SSD as the primary in my machine (1.2TB Intel SSD 750) was insufficient for the task.

Remember, there is no need to keep every title you own installed. 2-3 games at a time is all anyone really needs. Uninstall it when done, and reinstall it if you want to revisit it. It takes less than 15 minutes on a good connection.

Wrong. It takes a fuck load longer than 15 minutes to reinstall a game over a 300Mbps internet connection. That's not a bad connection by any means. Back when I had a Gigabit connection, I felt the same way.

Furthermore, you can get a 2TB Samsung 860 EVO for about $500 now. (For all those who were complaining that they wouldn't buy SSDs until they hit $1 per GB, this is $0.25 per GB.) Yes, EVO drives are TLC, but in practical terms this means next to nothing. Even in pretty write-heavy environments, they last a decade or longer. And yes, they aren't as fast as PCIe NVMe drives, but in the grand scheme of things that is irrelevant too. The HUGE leap was in going from mechanical drives to first-gen SATA SSDs. Everything since has provided diminishing returns. The jump to PCIe, M.2 and NVMe has provided some stellar results in benchmarks, but in real-world experience on the desktop (load times, system responsiveness) they have done next to nothing that a good SATA SSD doesn't do. (Note, there are workloads they excel at, but most of these are enterprise-type workloads with high queue depths, like massive heavy databases.)

I'll agree with this point, but again, 2TB at $500 is too rich for my blood, especially when 2TB is nowhere near getting shit done.

So, SATA EVO drives are good enough, and (with risk of sounding like Bill Gates) 2TB ought to be enough for anyone (at least today). There really is no need to have a spinning disk in a local machine anymore, unless you have an unusual workload, or are building an extreme budget system.

This is a myopic viewpoint and you just quoted the phrase that tells you why this is an inaccurate statement. I also explained in the article why this isn't the case.

I'm not implying that AMD's new storage technology isn't any good (it very well may be). I just feel like the hyperbole and blanket statements in the article don't jibe with my experiences at all.

The statements jibe with mine, and where they don't, I point that out. This technology doesn't do me any good in my main desktop rig either. Again, I feel this can be developed into a more robust solution given time.

Maybe you discuss this later (I have to continue reading past the introduction, I got stuck here) but what WOULD be really interesting - IMHO - is to see if this could be used with a dual SSD setup: a small super fast Intel Optane drive (I'm still not clear if all of these are Intel-only or not anymore; I feel like the Optane drives that were small and meant for cache are still Intel-only, but the larger PCIe ones work on all platforms?) as the fast drive and a large 2TB Samsung EVO drive as the slow drive.

You really should read an article before commenting on it. You made a statement about why 2TB should be enough for people and quoted a hilarious statement that shows why this isn't the case. The reasons why I said this come after the introduction. You would also have the answer to the above question had you bothered to read the whole thing. If you want to go point by point and comment on what I said, I'm all for it. But please read the entire statement, article or comment before doing so. It provides crucial context.

In general, though, what I don't like about proprietary solutions like this is that when troubleshooting or rescuing data you generally can't (at least not easily) just unplug a drive and move it to another system to read its contents. This is one of the main reasons I always use common or open-source storage methodologies. My ZFS pools on my NAS can be mounted on any system that runs ZFS if something bad happens and I need to rescue data. Same goes for a standalone drive on a client. Smart Response setups and stuff like this? At the very least it makes that more difficult, if it is possible at all.

This is a reasonable concern, but this is a performance solution. A single drive that's online in any system is a point of failure. You need a good backup strategy if you care about this data. Depending on moving drives to another system for recovery is foolish at best. A NAS-type solution isn't even a good backup either. It can be part of a good backup strategy, but it isn't the end of one. As for "proprietary" RAID arrays like those created on Intel and AMD systems, theoretically you can import foreign RAID types on other controllers if they support that feature. I'll be the first to admit, it's somewhat hit and miss. If you have a good backup solution, or a NAS system at the very least, this is less of a concern.

A couple of questions i have after reading the same thing:

1.) Could you do multiple nesting and more than 2 tiers? Like RAMDISK -> NVME -> slower SSD -> HD?

2.) Is it cross platform, or do you have to be running Windows? Can you use it in an environment where you dual or triple boot multiple operating systems?

3.) Some real world tests would have been neat. I have found that disk benchmark apps generally aren't a very good test of how these types of things work in practice. Comparing system boot times or level load times of some popular titles before and after would possibly have more value.

1.) No. At present, it only supports 2 tiers.
2.) The solution is Windows only currently.
3.) I thought about this, and agree. I may revisit this topic in the future.

Well done and interesting article.

I do wonder, though, about an SSD's endurance from using this. I would assume the SSD will be doing, potentially, a lot of writes as it moves the most-used data to the SSD.

It wouldn't be any worse than running the OS off of an SSD or using one as storage for games.

Although they state StoreMI doesn't work with RAID, I'm taking that as meaning it doesn't work with a mainboard RAID... or does that mean any RAID? Given a hardware RAID shows up as a single drive to the OS you would think it would work in theory, although I don't know if I'd want to have 20+TB of data go poof due to an SSD failing.

It doesn't work with any RAID array at present.
 
One thing that has annoyed me with systems like this in the past is that if you play a brand new title, or if you go back to a title you haven't played in a while, it can take a while for the system to learn that a title is in the "current frequently used" category.

With some more linear titles you may never load a single level more than once before you move on to the next, giving the system no time to adapt and present you with content from the higher tier.

How did you find this worked?
 
I disagree. Maybe I have ADD with games or something but I have enough games and software installed that I had to move some games to another SSD as the primary in my machine (1.2TB Intel SSD 750) was insufficient for the task.

Wrong. It takes a fuck load longer than 15 minutes to reinstall a game over a 300Mbps internet connection. That's not a bad connection by any means. Back when I had a Gigabit connection, I felt the same way.

That's fair. I should consider that not everyone can get enthusiast level internet where they live.

I used to have Fios 150Mbit/150Mbit where I live though, and I don't really recall waiting more than like 25 minutes. Maybe I wasn't getting the largest titles.

If I were internet speed limited, I'd probably use steams backup feature to store games I was uninstalling to my NAS to free up space, and keep the option to reinstall them more quickly.

(Steam does still do this right? I haven't used the feature in years)
 
...especially when 2TB is nowhere near getting shit done.



This is a myopic viewpoint and you just quoted the phrase that tells you why this is an inaccurate statement. I also explained in the article why this isn't the case.



The statements jibe with mine, and where they don't, I point that out. This technology doesn't do me any good in my main desktop rig either. Again, I feel this can be developed into a more robust solution given time.



You really should read an article before commenting on it. You made a statement about why 2TB should be enough for people and quoted a hilarious statement that shows why this isn't the case. The reasons why I said this come after the introduction. You would also have the answer to the above question had you bothered to read the whole thing. If you want to go point by point and comment on what I said, I'm all for it. But please read the entire statement, article or comment before doing so. It provides crucial context.

Well, Mr. Gates' comment was mostly accurate at the time he made it.

I hear what you are saying, but I still can't wrap my head around a scenario where 2TB isn't enough for OS/programs/games. My problem has always been the opposite. I want the top-performance storage, but it's annoying because the smaller versions never have as good performance as the largest ones, and I don't want to buy a ton of storage I'll never use.

As mentioned before, my Windows 10 partition (granted, this one is completely dedicated to games, as I do everything else in Linux) is 260GB and usually has over 100GB free. I keep maybe 2-4 games installed at a time. Usually one or two multiplayer FPS, a Sid Meier's Civ game and then the story-mode game du jour, which I play through, then delete as soon as I've seen the credits.

My stepson is on a single 256GB Samsung 850 Pro, and he's never run out of space. My Fiance is also on a single 256GB drive, but she is neither a power user, nor plays any games, so her example may not be very representative.

Different usage scenarios I guess. ¯\_(ツ)_/¯
 
I am using StoreMI for a boot drive that pairs a 256GB NVMe with a 2TB HDD. It's great. I can install to my C drive all day long without worrying about running out of space (unlike with only an SSD). Things I use are fast. Things I don't use are probably slow... I don't know, I don't use them. I also have a 512GB NVMe for things that have to be fast all of the time.

I don't care about the data risk of either OS drive failing. My important stuff is backed up to my file server and the cloud (which I'd do even if I was running a redundant OS drive). I have a restore OS image saved to speed up a rebuild if this ever does happen. For me the risk of needing to spend an hour or two to rebuild in the unlikely event of a drive failure is acceptable.

For those interested about how games work with StoreMI, watch this video:

For me, StoreMI is definitely a win-win. The convenience of a large boot drive with reasonable speed is perfect. This may not be the right solution for everyone, but that doesn't mean no one will find value in it.
 
One thing that has annoyed me with systems like this in the past is that if you play a brand new title, or if you go back to a title you haven't played in a while, it can take a while for the system to learn that a title is in the "current frequently used" category.

With some more linear titles you may never load a single level more than once before you move on to the next, giving the system no time to adapt and present you with content from the higher tier.

How did you find this worked?

Unfortunately, I didn't spend enough time with StoreMI to know this. I may revisit this topic again from a pure gaming perspective. I'll play around with it and see.

That's fair. I should consider that not everyone can get enthusiast level internet where they live.

I used to have Fios 150Mbit/150Mbit where I live though, and I don't really recall waiting more than like 25 minutes. Maybe I wasn't getting the largest titles.

If I were internet speed limited, I'd probably use steams backup feature to store games I was uninstalling to my NAS to free up space, and keep the option to reinstall them more quickly.

(Steam does still do this right? I haven't used the feature in years)

It depends on the game. Older titles, sure. I can reinstall them pretty quickly. The larger ones that are 50GB or so are a different matter. That seems to take around an hour or more. It's not the end of the world, but I like to keep games installed for the most part. When I'm in the mood to play something, I don't want to wait too long to install it, as I may just end up doing something else and not getting back to that game I wanted to play so badly. Maybe that's just me. I don't know. You can also handle Steam games manually in a similar fashion. You can move them out of the common folder and place them somewhere else for faster retrieval.
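
As a rough sketch of that manual shuffle (the paths here are hypothetical, and the assumption that Steam only re-verifies existing files after you move a folder back, rather than re-downloading them, is my understanding rather than something tested in the article):

```python
# Sketch of manually parking an installed Steam game outside the library and
# restoring it later. Paths are hypothetical; adjust to your own setup.
import shutil
from pathlib import Path

STEAM_COMMON = Path(r"C:\Program Files (x86)\Steam\steamapps\common")
ARCHIVE = Path(r"D:\GameArchive")

def park_game(name: str) -> None:
    # Move the game's folder out of Steam's library to free up space.
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    shutil.move(str(STEAM_COMMON / name), str(ARCHIVE / name))

def restore_game(name: str) -> None:
    # Move it back, then trigger an "install" in Steam; the existing files
    # should only need verification/patching rather than a full download.
    shutil.move(str(ARCHIVE / name), str(STEAM_COMMON / name))
```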
 
I find myself disagreeing with many of the blanket statements in the introduction to this article.



This one is accurate, but don't underplay the importance of level load times. They can make a HUGE difference in how enjoyable a game is.



For OS/Applications/Games they certainly have. I haven't had a spinning hard drive in any local machine I've built since about 2010.

For mass storage of files/media libraries, hard drives still make sense. I've had them in my NAS for some time, but all my OS/applications/games have been on SSDs for 8 years. I don't even have a particularly large SSD as SSDs go, only a PCIe 400GB Intel SSD 750, and only ~260GB of it is partitioned for Windows. That ~260GB is more than enough for OS/games/apps.



Remember, there is no need to keep every title you own installed. 2-3 games at a time is all anyone really needs. Uninstall it when done, and reinstall it if you want to revisit it. It takes less than 15 minutes on a good connection.

Furthermore, you can get a 2TB Samsung 860 EVO for about $500 now. (For all those who were complaining that they wouldn't buy SSDs until they hit $1 per GB, this is $0.25 per GB.) Yes, EVO drives are TLC, but in practical terms this means next to nothing. Even in pretty write-heavy environments, they last a decade or longer. And yes, they aren't as fast as PCIe NVMe drives, but in the grand scheme of things that is irrelevant too. The HUGE leap was in going from mechanical drives to first-gen SATA SSDs. Everything since has provided diminishing returns. The jump to PCIe, M.2 and NVMe has provided some stellar results in benchmarks, but in real-world experience on the desktop (load times, system responsiveness) they have done next to nothing that a good SATA SSD doesn't do. (Note, there are workloads they excel at, but most of these are enterprise-type workloads with high queue depths, like massive heavy databases.)

So, SATA EVO drives are good enough, and (with risk of sounding like Bill Gates) 2TB ought to be enough for anyone (at least today). There really is no need to have a spinning disk in a local machine anymore, unless you have an unusual workload, or are building an extreme budget system.

I'm not implying that AMD's new storage technology isn't any good (it very well may be). I just feel like the hyperbole and blanket statements in the article don't jibe with my experiences at all.

Maybe you discuss this later (I have to continue reading past the introduction, I got stuck here) but what WOULD be really interesting - IMHO - is to see if this could be used with a dual SSD setup: a small super fast Intel Optane drive (I'm still not clear if all of these are Intel-only or not anymore; I feel like the Optane drives that were small and meant for cache are still Intel-only, but the larger PCIe ones work on all platforms?) as the fast drive and a large 2TB Samsung EVO drive as the slow drive.

In general, though, what I don't like about proprietary solutions like this is that when troubleshooting or rescuing data you generally can't (at least not easily) just unplug a drive and move it to another system to read its contents. This is one of the main reasons I always use common or open-source storage methodologies. My ZFS pools on my NAS can be mounted on any system that runs ZFS if something bad happens and I need to rescue data. Same goes for a standalone drive on a client. Smart Response setups and stuff like this? At the very least it makes that more difficult, if it is possible at all.

Not sure if what you wrote is what you really meant: "2-3 games at a time is all anyone really needs", "2TB ought to be enough for anyone (at least today)":
  • Just because it makes you happy or satisfies what you think is good does not mean that is the case for others
  • For me 2-3 games would not work - multiple gamers here, and with VR alone I play more than that in a week, plus other games. I also like to go back and play some levels here and there (it would be interesting to see whether the AI notices a directory being used and automatically feeds it to the SSD and RAM, since this is not a typical cache but a smarter method for loading up the SSD)
  • 2TB can get rather small if you are storing videos, especially 4K ones
  • Plus, why would I spend $500 on a 2TB SSD when less than $150 for just my games would work just as well, and in fact maybe better because of the RAM cache?

I am almost ready to get either a 256GB or 512GB WD M.2 drive at Newegg ($79/$155) for this for the HTPC. It has a SATA III 860 EVO for the OS and a 1TB hard drive which may be upgraded later. This would be perfect for the types of games I like to play there.
 
I disagree. Maybe I have ADD with games or something but I have enough games and software installed that I had to move some games to another SSD as the primary in my machine (1.2TB Intel SSD 750) was insufficient for the task.

So get more SSD. It's cheap and getting cheaper.

From TechReport: "HP EX920 1-TB NVMe drive. This unit comes in the M.2 form factor and uses 3D TLC NAND. The manufacturer posts a maximum sequential read figure of 3200 MB/s and 1800 MB/s for writes. Those are healthy numbers, but random I/O is much more impressive at 350 K random read IOPS and 250 K write IOPS. You can get your hands on this drive for only $328.99 from Rakuten with the checkout code SAVE15." 33 cents a gigabyte.

And also from TechReport: "SanDisk SSD Plus 480 GB drive. This unit can post 535 MB/s read and 445 MB/s write figures, and it's currently selling for a measly $98.99 at Amazon." 21 cents a gigabyte.

Someone who has paid for over 1.2TB of games shouldn't have a problem forking out for some SSD to store them on. Heck, at these prices, I could get 3TB of NVMe SSD or 4TB of SATA SSD for the cost of my primary monitor. Point is, as time goes on, SSD will get cheaper, and there will be less and less reason for technologies like StoreMI.
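
For reference, the per-gigabyte figures quoted above are just price divided by capacity:

```python
# Price per gigabyte for the deals quoted above (price in USD / capacity in GB):
def dollars_per_gb(price_usd: float, capacity_gb: float) -> float:
    return price_usd / capacity_gb

print(round(dollars_per_gb(328.99, 1000), 2))  # 0.33 - HP EX920 1TB NVMe
print(round(dollars_per_gb(98.99, 480), 2))    # 0.21 - SanDisk SSD Plus 480GB
print(round(dollars_per_gb(500.00, 2000), 2))  # 0.25 - the 2TB Samsung 860 EVO mentioned earlier
```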

Besides, the performance improvement seems to mainly come from RAM caching:

At that point you are primarily executing data from the RAM cache, which is faster than running off the cache drive. What I saw here was a roughly 5x boost in performance without the RAM cache

But RAM caching of disk drives, that's hardly new technology, is it?
So I'll stick with my opinion that this is just a marketing toy, more so than Intel's XPoint drive caches, which unlike this does involve new tech.
 
So get more SSD. It's cheap and getting cheaper.

From TechReport: "HP EX920 1-TB NVMe drive. This unit comes in the M.2 form factor and uses 3D TLC NAND. The manufacturer posts a maximum sequential read figure of 3200 MB/s and 1800 MB/s for writes. Those are healthy numbers, but random I/O is much more impressive at 350 K random read IOPS and 250 K write IOPS. You can get your hands on this drive for only $328.99 from Rakuten with the checkout code SAVE15." 33 cents a gigabyte.

And also from TechReport: "SanDisk SSD Plus 480 GB drive. This unit can post 535 MB/s read and 445 MB/s write figures, and it's currently selling for a measly $98.99 at Amazon." 21 cents a gigabyte.

Someone who has paid for over 1.2TB of games shouldn't have a problem forking out for some SSD to store them on. Heck, at these prices, I could get 3TB of NVMe SSD or 4TB of SATA SSD for the cost of my primary monitor. Point is, as time goes on, SSD will get cheaper, and there will be less and less reason for technologies like StoreMI.

Besides, the performance improvement seems to mainly come from RAM caching:



But RAM caching of disk drives, that's hardly new technology, is it?
So I'll stick with my opinion that this is just a marketing toy, more so than Intel's XPoint drive caches, which unlike this does involve new tech.

SSDs are definitely getting cheaper, no question, but if you have vast storage needs, buying several 1-2TB SSDs is still pricey. I agree that there may come a day when we won't need mechanical spinning disks. Hell, I look forward to that, but it may be further off than you think. We thought the days of the HDD were much more numbered years ago, but our data capacity needs kept growing right alongside the increases in SSD and mechanical HDD capacity.

Lastly, the performance boost from RAM Cache is much larger than what StoreMI provides on its own. However, the boost from StoreMI without cache was still substantial.
 
This is basically just a more limited SuperCache/FancyCache in the case of the RAM cache, which is the most exciting part of StoreMI. I certainly hope the 2GB RAM limitation is raised to 4GB, like the better license, for Ryzen 7nm CPUs if they introduce a new chipset to pair with them. I don't think the $60 license is worth it at all; at that point just buy SuperCache or FancyCache, which can access as much RAM as you have available. It doesn't seem perfect, but it's better than Rapid Storage, and for something free that comes bundled with X470 it's totally worthwhile; hell, for $20 I'd get the basic license on an X370 board if I owned one already. If you don't own an X370/X470 board and are shopping for a Ryzen CPU, I'd strongly recommend getting the X470 for the extra $20-30 you'd likely pay for the better StoreMI Technology bundled with it. This is a bit of a game changer really. It's been possible for years, though it has gone mostly unspoken about. Now, since we are on this subject, I wonder if AMD will implement a way to utilize this alongside HBCC on cards that have M.2 storage, because that could be a bit game-changing.
 
Too bad the $20 license is limited to 128GB for the SSD and 2GB RAM, while the $60 one allows a 1TB SSD and 4GB RAM. Looks like the $20 one is good enough for the HTPC. It would also aid in testing how smart the AI is by running a number of games. A license for a 256GB SSD at around $29 would have been more ideal. There's just a large disparity between the two available licenses.
 
Too bad the $20 license is limited to 128GB for the SSD and 2GB RAM, while the $60 one allows a 1TB SSD and 4GB RAM. Looks like the $20 one is good enough for the HTPC. It would also aid in testing how smart the AI is by running a number of games. A license for a 256GB SSD at around $29 would have been more ideal. There's just a large disparity between the two available licenses.

StoreMI comes with 400-series chipsets and gives you 256GB for the SSD and a 2GB RAM cache. It's when you use it with 300-series chipsets that the base license is $20 and limits you to 128GB.
 
It doesn't work with any RAID array at present.

That's interesting, because the User's Guide for StoreMI only states it can't be used with AMD software RAID solutions; it doesn't seem to address hardware RAID solutions at all. Did AMD relay that information through a press release or testing guide?

https://www.amd.com/system/files/2018-04/AMD-StoreMI-Users-Guide.pdf

Check the following prior to upgrading your system to StoreMI:
• Your system meets the minimum configuration: AMD socket AM4 processor and 4xx series motherboard with a minimum of 4G RAM (6G RAM to support RAM cache),
• Secure Boot is NOT enabled. Consult your system documentation for further details.
• There are no SSD caching or AMD software RAID solutions installed.
• The BIOS SATA disk settings are set to AHCI, not RAID and there is no software RAID installed on the system.
• Microsoft’s chkdisk or other third-party disk scan tools run error free on the boot drive
• A new unused SSD or HDD is available
• If wishing to use bootable tiers > 2TB in size, the system must be configured to boot in UEFI mode with a UEFI bootable Windows OS installation as Windows 10 does not support > 2TB boot drives in legacy boot mode.
 
I meant hardware based RAID. I didn't think about software RAID at all in this case.

OK, I think I understand it now. According to the guide, you can't have AMD software RAID or any bootable RAID system. Given the three scenarios listed in the review, it seems to want to integrate into the boot drive in any of those scenarios, which keeps you from using RAID since it needs to be bootable. But looking at the guide, you could also use it to create non-boot tiered storage as well; they call it a Data (non-bootable) StoreMI Tierdrive, which, at least according to the documentation, doesn't have a limitation in regards to hardware RAID.
 
StoreMI comes with 400-series chipsets and gives you 256GB for the SSD and a 2GB RAM cache. It's when you use it with 300-series chipsets that the base license is $20 and limits you to 128GB.
Exactly, X370 and B350 here, not worth buying a whole new motherboard for this. May build a TR system later.

Now I am wondering if you can have two separate setups/StoreMI devices in the same machine, as in two SSDs and two hard drives, each pair forming a StoreMI device?
 
Hey guys, new user here, just wanted to chime in because I bought the premium FuzeDrive mostly out of curiosity.
I'm currently using it for my games drive, which has an ADATA SX8000 512GB NVMe SSD fuzed with a shucked 8TB Seagate Archive drive.
(Yes, earlier in the thread someone was talking about not needing more than "2-3" games at one time. I'm not that scenario. I have over 5 terabytes of Steam games.)

I'm running a Ryzen 1700 on an ASRock X370 motherboard, both of which I got at launch. I'm not really interested in upgrading either at the moment, so I had to bite the bullet and buy the program.

I've been using it for about 3 weeks now. Once you get it working, it works, and it works well. I have multiple games installed that approach or break the 100GB barrier. Instead of trying to move them off and on my SSD to a slower secondary drive, which can take multiple minutes, I can just keep them all on the same drive. When I feel like playing something new, it has a slow first-time boot, but I've managed to get load times in games like FFXV down to about 15 seconds compared to the 2+ minutes when it was purely on a mechanical drive. Especially noticeable within that game is the 4GB cache, which can bring fast-travel load times down to mere seconds. For small games the benefit is almost instantaneous, but for really large games you need to "train" the FuzeDrive for faster loading by hitting a couple of load zones. It's not perfect, but it's absolutely faster than moving 100GB+ folders around to try and satiate my need for a super large library while simultaneously having NVMe-speed load times.

Now for the bad:

I had some major complications getting this FuzeDrive installed. I think the real reason that StoreMI caps out at 256GB/2TB for the drives is convenience. Anything more than that and you start getting into some really weird MBR/GPT formatting issues, because Windows just wants to instantly format anything smaller than 2TB as MBR. I was originally just going to expand my 512GB SSD with the 8TB secondary drive, but the SSD was legacy formatted as MBR due to its small size and there was literally no way to expand it to anything bigger than 2TB when I fuzed it, which left 6TB completely unusable. I tried to use a program to switch from MBR to GPT, which ended up making my computer unbootable. (This was, very likely, a user error that could have been avoided.)
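
The 2TB wall here is the classic MBR addressing limit: MBR partition tables store 32-bit sector counts, so with 512-byte sectors the largest addressable partition is 2^32 x 512 bytes, which is why the fused volume has to be GPT to expose more than 2TB. A quick sanity check:

```python
# MBR uses 32-bit LBA sector counts; with 512-byte sectors the largest
# addressable partition is 2^32 * 512 bytes, i.e. 2 TiB (~2.2 TB decimal).
mbr_limit_bytes = (2 ** 32) * 512
print(mbr_limit_bytes)                              # 2199023255552 bytes
print(mbr_limit_bytes / 1024 ** 4, "TiB")           # 2.0 TiB
print(round(mbr_limit_bytes / 1000 ** 4, 2), "TB")  # 2.2 TB
```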

So after some effort and a little googling around, I was able to unfuze the drives, reinstall Windows, and fuze them again. This created a bootable Windows, and as I got back to reinstalling my backup of my programs off the cloud I, yet again, came across the same fucking 2TB limitation. I have since learned enough to fix this (create a Win10 GPT-only boot disk with Rufus, etc.), but by the time it happened a second time I decided that maybe the FuzeDrive was a little too volatile to have as my main OS drive, even with my cloud backup.

My motherboard has two NVMe slots, so I decided to just buy a small, new, super fast SSD for my OS and non-game programs. I went with the ADATA SX8200 due to its price and the reliability I've had with my SX8000.
Reinstalled Windows on that new SSD (I love NVMe drives; reinstalling Windows took < 10 minutes).
Unfuzed, formatted, and refuzed my 512GB SSD + 8TB FuzeDrive into a crazy thing that has 7.73TB of usable space. When it's not the boot disk, this task goes smooth as butter and didn't require the multiple reboots that creating a bootable FuzeDrive did. I then got to moving my ridiculous terabytes of data from my hard drives over to the FuzeDrive, and since then things have worked wonderfully*.

*Except HWInfo64 crashes at boot, so I had to change my Rainmeter widgets.

In short, when it works, it's awesome. I think a best-case scenario would be a system with only two drives; fuzing a single NVMe drive and a single HDD into one usable partition would probably be really smooth. It would be amazing for laptops (although it's not supported there yet?).
For desktops, I'm beginning to think that PrimoCache is probably the superior option; it doesn't have anywhere near as many limitations and it keeps your data safer.

If you have any other questions I'll be glad to answer em.

Also, I'm curious how these tests were performed. If they were run one right after the other without any repeat testing, then I can see why performance slowed down in the later tests: FuzeDrive hadn't yet re-allocated the files to the fast tier. This takes some time, and it has been getting noticeably more efficient. I'm sure there will be some drastic variance if you run the same test multiple times in a row, with it probably increasing in speed after every test.
 
This: https://www.anandtech.com/show/12826/intel-persistent-memory-event-live-blog looks like it will demolish AMD's StoreMI tech (and everything else). Why bother with RAM cache when you can put a 1TB NVM byte-readable byte-writeable drive in the CPU's DDR slots? 100X performance improvement in data-heavy scenarios. Server-only for the near-term it seems, but no technical reason it can't come down to HEDT.

Drool drool. And it will be even nicer when it's actually shipping. :)
 
This: https://www.anandtech.com/show/12826/intel-persistent-memory-event-live-blog looks like it will demolish AMD's StoreMI tech (and everything else). Why bother with RAM cache when you can put a 1TB NVM byte-readable byte-writeable drive in the CPU's DDR slots? 100X performance improvement in data-heavy scenarios. Server-only for the near-term it seems, but no technical reason it can't come down to HEDT.

Drool drool. And it will be even nicer when it's actually shipping. :)


Sounds like very expensive enterprise technology to me. It doesn't solve the same problem that StoreMI does, namely maintaining high SSD speeds while keeping mass storage costs low.
 
I'm just going to put this out there: this is AMD making software that enterprise storage arrays have used for a decade. Where you need it, this sort of thing can be great. As someone who works in the enterprise storage world, I have seen some catastrophic failures from multiple vendors.

If you don't want to buy SSDs or you want to try to put some lipstick on the pig that is old spindle drives, this is a fun solution that could provide some real-world benefit. But it may not be for everyone. Pricing, though, does not look bad at all for non-AMD users. I'll probably snag a license for my NAS :D
 