Another 6 months before VEGA?

Although that was comparing to SSD storage and loading rather than system RAM, developers could do much more with system RAM when it comes to creating a more dynamic unified memory pool, especially where people have 16GB or 32GB, but there is no incentive in the same way for developers to use VRAM efficiently.
For SSG to benefit games, though, the developer still needs to interact with it, and Raja touched upon this very briefly in an interview; without that intervention its benefits are a bit more limited.
Cheers
That was also comparing with Polaris as opposed to Vega with a HBCC. I believe they tested streaming from memory as well, but it's been a while since I watched that video. For the SSG or even HBCC to really make a difference the game would still have to present the hardware with all the resources. All that storage is useless without something to store.

Data centers, servers, HPC: AMD is in competition with nV, and nV already has similar tech in that regard for those markets.
You have a link to Nvidia's SSG? Seems rather odd to say they have similar tech when everyone is going to AMD for a feature that wasn't on the market. Or is this one of those "technically works but too slow to use" arguments?

And would an SSG be cost-beneficial to the game market? Above you stated they seem to be more for data centers... and Scott Wasson already stated the SSG's benefits are limited on current platforms. Actually his words were more like there is no benefit currently in gaming.
I guess it depends on whether an extra $50-100 for a card that enables 4k or higher gaming with minimal stuttering is of interest to gamers. Where did I state they were more for data centers? GPUs are for gaming but they're in datacenters... I'm not sure you're understanding those SSG limitations either. Without the ability to fetch in data, a Fiji based SSG would be somewhat limited on current platforms. This thread and discussion however is about Vega. I guess you could be correct if a High Bandwidth Cache Controller doesn't actually control any caching. Seems oddly named if that's the case.
 
That was also comparing with Polaris as opposed to Vega with a HBCC. I believe they tested streaming from memory as well, but it's been a while since I watched that video. For the SSG or even HBCC to really make a difference the game would still have to present the hardware with all the resources. All that storage is useless without something to store.


You have a link to Nvidia's SSG? Seems rather odd to say they have similar tech when everyone is going to AMD for a feature that wasn't on the market. Or is this one of those "technically works but too slow to use" arguments?


I guess it depends on whether an extra $50-100 for a card that enables 4k or higher gaming with minimal stuttering is of interest to gamers. Where did I state they were more for data centers? GPUs are for gaming but they're in datacenters... I'm not sure you're understanding those SSG limitations either. Without the ability to fetch in data, a Fiji based SSG would be somewhat limited on current platforms. This thread and discussion however is about Vega. I guess you could be correct if a High Bandwidth Cache Controller doesn't actually control any caching. Seems oddly named if that's the case.
Well, it seems they definitely did not compare to memory; this was confirmed by someone (one of the reputable posters) on B3D. I remember because they responded to my post on the subject.
If you find evidence otherwise please post it, as it would be quite important since it shifts the original context and performance comparison.

SSG has less of an impact in HPC and data servers due to the unified memory and the structure of the frameworks that also manage resources, especially with full nodes and scale-out nodes, and to some extent offload mode.
Workstation modelling/rendering, sure (it still needs to be seen benchmarked with the professional applications; in theory it should work well, but we also need to see how those professional CAD/rendering/modelling applications work in ideal environments), but at larger scale Nvidia's approach works very well, as they do massive modelling-mapping solutions.
Cheers
 
You have a link to Nvidia's SSG? Seems rather odd to say they have similar tech when everyone is going to AMD for a feature that wasn't on the market. Or is this one of those "technically works but too slow to use" arguments?

Servers, HPC, etc.: nV doesn't need SSGs, nor does AMD; the high bandwidth transports solve that issue.

SSGs are more for standalone systems, or quick fixes for people who are testing applications intended for bigger systems, for now.

I guess it depends on whether an extra $50-100 for a card that enables 4k or higher gaming with minimal stuttering is of interest to gamers. Where did I state they were more for data centers? GPUs are for gaming but they're in datacenters... I'm not sure you're understanding those SSG limitations either. Without the ability to fetch in data, a Fiji based SSG would be somewhat limited on current platforms. This thread and discussion however is about Vega. I guess you could be correct if a High Bandwidth Cache Controller doesn't actually control any caching. Seems oddly named if that's the case.

You think SSGs are going to be 50-100 bucks more? lol, guess again; they are going to be thousands more.
 
Well, it seems they definitely did not compare to memory; this was confirmed by someone (one of the reputable posters) on B3D. I remember because they responded to my post on the subject.
If you find evidence otherwise please post it, as it would be quite important since it shifts the original context and performance comparison.
The storage performance would ultimately be a property of the storage media used and there's no reason traditional memory couldn't effectively be used on the GPU. I'd imagine that XPoint is the ultimate goal here and it's approaching RAM performance. There would be a lot of variables on the implementation that could affect performance like host system specs, storage media used, and application requirements. Dataset size, bus speeds, and scaling could all have an impact on performance.

SSG has less of an impact in HPC and data servers due to the unified memory and the structure of the frameworks that also manage resources, especially with full nodes and scale-out nodes, and to some extent offload mode.
Workstation modelling/rendering, sure (it still needs to be seen benchmarked with the professional applications; in theory it should work well, but we also need to see how those professional CAD/rendering/modelling applications work in ideal environments), but at larger scale Nvidia's approach works very well, as they do massive modelling-mapping solutions.
Cheers
Not necessarily, as the technology improves scaling past the node size that Nvidia is adhering to. It's also the method AMD apparently chose to connect a large number of devices. Remember that each Vega 10 had 32x PCIE3 lanes. So unless a theoretical 32x PCIE slot arrives, even system memory at best would be constrained by the 16x slot and command traffic. For servers, that's far more lanes than would be available to a typical system. For very large deployments that should be the preferred method. If the pool on each device is large enough to meet application needs, the only real limit on GPUs in a node is space and power. A 4U node with 16+ GPUs wouldn't be unreasonable.

For an enthusiast, that ultimately still leaves half the lanes unaccounted for, which would seem to mean dual GPU, a second processor (e.g. FPGA, encoder/decoder), or secondary storage pools.
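As a rough sanity check on those lane counts, here is some back-of-envelope arithmetic (it assumes ~0.985 GB/s of raw bandwidth per PCIe 3.0 lane after encoding; real-world throughput is lower still):

```python
# Rough PCIe 3.0 bandwidth arithmetic. Assumes ~0.985 GB/s per lane after
# 128b/130b encoding; protocol overhead and command traffic eat into this.
per_lane_gbs = 0.985
for lanes in (4, 8, 16, 32):
    print(f"x{lanes}: ~{lanes * per_lane_gbs:.1f} GB/s per direction")
# A x16 slot tops out around ~15.8 GB/s, so a chip wired for 32 lanes has
# roughly double the aggregate off-chip bandwidth of a single x16 slot.
```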

You think SSGs are going to be 50-100 bucks more? lol, guess again; they are going to be thousands more.
Do you have any evidence that less than $100 worth of currently available consumer hardware would cost a manufacturer thousands to implement on a consumer-oriented part? The professional products will obviously be priced higher, but this technology isn't that expensive.

Servers, HPC, etc.: nV doesn't need SSGs, nor does AMD; the high bandwidth transports solve that issue.

SSGs are more for standalone systems, or quick fixes for people who are testing applications intended for bigger systems, for now.
The SSG is a high bandwidth transport option that is less constrained by transport size. AMD apparently thinks there is a need or they wouldn't have created them in the first place or bothered with more lanes than are practical on their chips. An entry level SSG is geared towards standalone, but you've completely ignored the options available when that technology is scaled up. The Instinct presentation with data fabrics/SANs and existence of a HBCC would seem to be at odds with your opinion here. They've actually shown systems with the technology scaled up, yet you deny those systems have cause to exist.
 
The storage performance would ultimately be a property of the storage media used and there's no reason traditional memory couldn't effectively be used on the GPU. I'd imagine that XPoint is the ultimate goal here and it's approaching RAM performance. There would be a lot of variables on the implementation that could affect performance like host system specs, storage media used, and application requirements. Dataset size, bus speeds, and scaling could all have an impact on performance.


Not necessarily, as the technology improves scaling past the node size that Nvidia is adhering to. It's also the method AMD apparently chose to connect a large number of devices. Remember that each Vega 10 had 32x PCIE3 lanes. So unless a theoretical 32x PCIE slot arrives, even system memory at best would be constrained by the 16x slot and command traffic. For servers, that's far more lanes than would be available to a typical system. For very large deployments that should be the preferred method. If the pool on each device is large enough to meet application needs, the only real limit on GPUs in a node is space and power. A 4U node with 16+ GPUs wouldn't be unreasonable.

For an enthusiast, that ultimately still leaves half the lanes unaccounted for, which would seem to mean dual GPU, a second processor (e.g. FPGA, encoder/decoder), or secondary storage pools.
Here is the simplest way to look at it: show me one HPC publication that is getting excited about AMD's SSG :)
None of the ones I follow show the level of interest that they do for certain other technologies because its use is limited.
Cheers
 
Do you have any evidence that less than $100 worth of currently available consumer hardware would cost a manufacturer thousands to implement on a consumer-oriented part? The professional products will obviously be priced higher, but this technology isn't that expensive.

No I don't, but do you have any evidence that they are? I doubt this is that cheap a product; it's highly specialized, with specialized drivers and hardware components, and it's not meant for high-volume sales, hence why the price is going to be high.
The SSG is a high bandwidth transport option that is less constrained by transport size. AMD apparently thinks there is a need or they wouldn't have created them in the first place or bothered with more lanes than are practical on their chips. An entry level SSG is geared towards standalone, but you've completely ignored the options available when that technology is scaled up. The Instinct presentation with data fabrics/SANs and existence of a HBCC would seem to be at odds with your opinion here. They've actually shown systems with the technology scaled up, yet you deny those systems have cause to exist.

The need is there, it's just that the need is for very specific people who don't have access to a full server rack. And that artificially limits the uses.

Once volume sales aren't there, price per product goes up. This was never meant for volume sales, and by the rules of economics, as volume drops and demand stays the same versus the investment of resources in the product, the selling price must go up to recover the investment.

http://www.amd.com/en-us/press-releases/Pages/amd-radeon-pro-2016jul25.aspx

Here is AMD's press release on their SSGs.

When was the last time you saw the same price for the same GPU on professional cards vs gaming cards? Now add the SSG to it...

Oh sorry yes I do have a price for it.

These are available for $9,999.

It's right there in their press release.

There is our 10k gaming card.

This quote is from ExtremeTech:

Given that beta developer kits are going on sale for a cool $10,000, we don’t expect to see many of these units ship, period — but if the technology proves as useful as AMD’s demo implies, we may see Nvidia move towards this concept as well. Faster PCI Express storage and higher-end GPUs may make the pairing more attractive in the future once Vega arrives.
 
Here is the simplest way to look at it: show me one HPC publication that is getting excited about AMD's SSG :)
None of the ones I follow show the level of interest that they do for certain other technologies because its use is limited.
Cheers
You probably won't until NDAs lift and they start a marketing campaign. All that's been announced so far are deals with current tech.

No I don't, but do you have any evidence that they are? I doubt this is that cheap a product; it's highly specialized, with specialized drivers and hardware components, and it's not meant for high-volume sales, hence why the price is going to be high.
I could go google prices of DIMMs and a Nano for you, but seems a bit of a waste of time. For whatever reason you're ignoring the potential of Vega using the technology where all these specialized needs are already baked in. I have no idea why you think a feature included on every single GPU of the Vega line is specialized.

Once volume sales aren't there, price per product goes up. This was never meant for volume sales, and by the rules of economics, as volume drops and demand stays the same versus the investment of resources in the product, the selling price must go up to recover the investment.
PCB volume? Every partner designs one or more of those for all the products. That's still assuming Vega doesn't push wider adoption of the technology for consumer tech.

When was the last time you saw the same price for the same GPU on professional cards vs gaming cards? Now add the SSG to it...

Oh sorry yes I do have a price for it.

It's right there in their press release.

There is our 10k gaming card.
Hopefully you realize that the arbitrarily priced beta dev kit from last generation's architecture isn't representative of actual costs for a consumer product.
 
You probably won't until NDAs lift and they start a marketing campaign. All that's been announced so far are deals with current tech.

I think we are going to disagree.
So these HPC publications get all excited about other future techs and yet remain quiet on AMD's SSG.
There are technical reasons why it will not be great for HPC, some of which we hinted at earlier, but maybe there should be a specific thread on the AMD SSG.
Most of the publications are more interested in NVDIMM; this fits better with HPC than an SSG in a GPU.
Cheers
 
You probably won't until NDAs lift and they start a marketing campaign. All that's been announced so far are deals with current tech.


I could go google prices of DIMMs and a Nano for you, but seems a bit of a waste of time. For whatever reason you're ignoring the potential of Vega using the technology where all these specialized needs are already baked in. I have no idea why you think a feature included on every single GPU of the Vega line is specialized.


PCB volume? Every partner designs one or more of those for all the products. That's still assuming Vega doesn't push wider adoption of the technology for consumer tech.


Oh sorry yes I do have a price for it.

It's right there in their press release.

There is our 10k gaming card.
Hopefully you realize that the arbitrarily priced beta dev kit from last generation's architecture isn't representative of actual costs for a consumer product.


Who is making the drivers, who is validating these cards, and why do you think Pro cards are so much more expensive when they are pretty much gaming GPUs? They shouldn't be more expensive, not to the degree they are; I know your line of thinking, because we know they don't cost much more since they are products that are in volume elsewhere. But by god AMD can ask for that price when it's a niche and it's for professionals. These products are not meant for general gamers. Driver updates for pro products are much faster, and the quality control is much higher on the hardware, because businesses can't afford downtime; if they have downtime because of an IHV's hardware, they will not use them again. Just as an example, take Fox at the Super Bowl this year, which used Quadros for all their in-game analysis and camera wrap-around and whatnot: if they had driver problems with them for just one minute, how much money would they lose? Millions, tens of millions of dollars. They can't mess around with things, and there is no way IHVs can overlook that.

Let's say Lockheed Martin is working on a large-scale government project, and they have driver-related issues with Tesla for a mission-critical portion that is deadline driven and can't be stretched out. What will happen when the government gets "it's not our problem that our hardware had issues"? The world doesn't work that way.

Let's take the stock market: some firms have already deployed Tesla products for automated trading. What happens with just 30 secs of downtime? Millions of dollars down the tube, maybe even hundreds of millions.

Are they going to go back to nV products if that happens?

It's the same thing for this product: the SSG's price is high because it's for a specific professional market.

This product is not and will never be made for gamers.

You think game companies are the target for these products?

They are not; the targets are the movie industry, special effects, things like small architectural firms, etc., specifically smaller independent companies that do 3D work and need lower cost than a full server rack. So they can spend 10k on something instead of 150k on a full rack.

SSGs aren't even for HPC either. The need isn't there for them. For HPC you gotta go to a full rack, multiple racks even.
 
Now, I wouldn't limit what AMD is doing with the cache controller to just what AMD or an AIB will install on the card to use it. There could be options for users to configure, as in adding their own SSD, RAM, or an interface to other devices or GPU cards. For example, a PCIe 3.0 4x slot or USB 3.1 etc. could be on a card. Maybe even CFX will be updated, hooking up the two cache controllers with a very fast interface. Just don't think inside the box; there are many options this can lead into.

Now, will AMD go down that route or allow AIBs to do it? Drivers? How transparent to the system will this be, other than more video memory being available? Software access? It could just end up for specialized uses such as video production on ultra-high-resolution projects. VR 360 video comes to mind with Ansel resolutions. Anything that needs a very large dataset that a GPU can process is a case for it. Future games may fall into that category as well (having 4x+ the objects in a game means way more textures + shaders + interactions with those additional assets; memory requirements will soar).
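To make the "more video memory than is physically on the card" idea concrete, here is a minimal toy sketch of the kind of on-demand paging a cache controller could do, with a small fast pool (HBM/VRAM) backed by a bigger, slower pool (system RAM, SSD, whatever). This is purely illustrative and not AMD's actual HBCC design; all names and sizes are made up.

```python
from collections import OrderedDict

class TieredPool:
    """Toy model of a small fast pool (e.g. HBM) backed by a large slow pool.
    Purely illustrative; not AMD's actual HBCC behaviour."""
    def __init__(self, fast_capacity_mb):
        self.fast_capacity = fast_capacity_mb
        self.resident = OrderedDict()   # resource name -> size in MB, LRU order
        self.used = 0
        self.page_ins = 0

    def touch(self, resource, size_mb):
        """The GPU references a resource; page it in if it isn't resident."""
        if resource in self.resident:
            self.resident.move_to_end(resource)      # mark most recently used
            return
        # Evict least-recently-used resources until the new one fits.
        while self.used + size_mb > self.fast_capacity and self.resident:
            _, evicted = self.resident.popitem(last=False)
            self.used -= evicted
        self.resident[resource] = size_mb
        self.used += size_mb
        self.page_ins += 1         # each page-in is traffic from the slow pool

pool = TieredPool(fast_capacity_mb=8192)              # pretend 8 GB of "cache"
for frame in range(3):
    for tex in ("albedo_4k", "normal_4k", "terrain_8k", "skybox_8k"):
        pool.touch(tex, size_mb=256)
print(pool.page_ins, "page-ins,", pool.used, "MB resident")
```

The point is just that the application would see one big pool while the controller decides what is actually resident in the fast memory.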
 
If you're trying to argue that SSG is a bad thing to test or try, you're out of your mind or paid to do it.

It could enable new gaming scenarios and types of games, texture densities and qualities we never thought of. Does it hurt to try?
Here's a simple one: imagine a 4k VR minigame where you rapidly jump through portals and race sections of other worlds, with rapidly changing, incredibly detailed scenery. Like No Man's Sky + CryEngine on roids and LSD, each with their own challenges.

The reduction in PCIe bandwidth usage and latency alone is a good thing worth looking at, along with other overheads.

Let's not shit on everything all the time just because it doesn't have a green or blue badge.

TLDR:'technology progress is bad, it's a waste of time'

edit:

SSG is a big boost for oil + gas exploration (an order of magnitude IIRC) and other industries with large datasets. It's already in use. We just wait for the trickle-down to gaming and see if anything happens with it.
 
If you're trying to argue that SSG is a bad thing to test or try, you're out of your mind or paid to do it.

It could enable new gaming scenarios and types of games, texture densities and qualities we never thought of. Does it hurt to try?
Here's a simple one: imagine a 4k VR minigame where you rapidly jump through portals and race sections of other worlds, with rapidly changing, incredibly detailed scenery. Like No Man's Sky + CryEngine on roids and LSD, each with their own challenges.

The reduction in PCIe bandwidth usage and latency alone is a good thing worth looking at, along with other overheads.

Let's not shit on everything all the time just because it doesn't have a green or blue badge.

TLDR:'technology progress is bad, it's a waste of time'

edit:

SSG is a big boost for oil + gas exploration (an order of magnitude IIRC) and other industries with large datasets. It's already in use. We just wait for the trickle-down to gaming and see if anything happens with it.

How fast is that SSG again? Is it...2 or 4GB/sec?

Latency-wise the SSG loses massively to main memory over PCIe as well. As in a factor of 1000 if it's not 3D XPoint based; then it's only a factor of 100. There is a reason why U.2 interfaces get no latency penalty, no matter if it's NAND or 3D XPoint based.

How much has the SSG sold so far? Nothing?
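For rough context on those latency factors, here are ballpark order-of-magnitude numbers (assumptions, not measured specs; real figures vary a lot with device, controller and queue depth):

```python
# Ballpark access latencies, orders of magnitude only (assumptions, not specs;
# exact figures vary widely with device, controller and queue depth).
latency_us = {
    "GDDR5/HBM on card":     0.1,    # ~100 ns
    "system DRAM over PCIe": 1.0,    # ~1 us once bus/driver overhead is added
    "3D XPoint NVMe":       10.0,    # ~10 us
    "NAND NVMe":           100.0,    # ~100 us
}
base = latency_us["GDDR5/HBM on card"]
for name, lat in latency_us.items():
    print(f"{name:>22}: {lat:7.1f} us  (~{lat / base:,.0f}x local memory)")
```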
 
How fast is that SSG again? Is it...2 or 4GB/sec?

Latency-wise the SSG loses massively to main memory over PCIe as well. As in a factor of 1000 if it's not 3D XPoint based; then it's only a factor of 100. There is a reason why U.2 interfaces get no latency penalty, no matter if it's NAND or 3D XPoint based.

How much has the SSG sold so far? Nothing?
Is SSG available now besides for testing and developers? I would think a Vega solution in the end would be the real goal here. I don't know.

Doesn't matter if it comes from RAM directly; it still must come from a storage device unless you have 100gb+ of main memory. Main memory has to be written to by the CPU or by DMA from an SSD, then the data goes from main memory to the CPU and then to the graphics card. I do not see it being faster overall than direct access from an onboard SSD, or fast RAM if that's the case.

We just don't know if this is designed for what AMD is talking about: games really designed for 4K, with real texture and other asset support, making a true 4k-and-beyond game, including future VR titles. It looks like it, but it would have to work. AMD has a number of patents on this tech, so how Nvidia would use something similar will be interesting.
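A crude back-of-envelope on the two paths being argued about, using assumed bandwidths (NVMe SSD ~3 GB/s, PCIe 3.0 x16 ~16 GB/s) for a 1 GB asset:

```python
# Crude comparison of the two paths for a 1 GB asset. Bandwidth figures are
# assumptions: NVMe SSD ~3 GB/s, PCIe 3.0 x16 ~16 GB/s.
asset_gb = 1.0
ssd_gbs  = 3.0        # SSD -> system RAM, or on-card SSD -> GPU
pcie_gbs = 16.0       # system RAM -> GPU over a x16 link

two_hop = asset_gb / ssd_gbs + asset_gb / pcie_gbs   # SSD -> RAM -> GPU
one_hop = asset_gb / ssd_gbs                         # on-card SSD -> GPU

print(f"via system RAM: ~{two_hop * 1000:.0f} ms")
print(f"on-card SSD:    ~{one_hop * 1000:.0f} ms")
# Either way the SSD itself is the bottleneck; what the on-card path mainly
# saves is the extra copy, CPU involvement and contention on the shared bus.
```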
 
If you're trying to argue that SSG is a bad thing to test or try, you're out of your mind or paid to do it.

It could enable new gaming scenarios and types of games, texture densities and qualities we never thought of. Does it hurt to try?
Here's a simple one: imagine a 4k VR minigame where you rapidly jump through portals and race sections of other worlds, with rapidly changing, incredibly detailed scenery. Like No Man's Sky + CryEngine on roids and LSD, each with their own challenges.

The reduction in PCIe bandwidth usage and latency alone is a good thing worth looking at, along with other overheads.

Let's not shit on everything all the time just because it doesn't have a green or blue badge.

TLDR:'technology progress is bad, it's a waste of time'

edit:

SSG is a big boost for oil + gas exploration (an order of magnitude IIRC) and other industries with large datasets. It's already in use. We just wait for the trickle-down to gaming and see if anything happens with it.


For gaming it's a bit different: an increased amount of data would become useful, but to create that amount and those types of assets we don't have the processing power unless we start looking at multiple GPU setups, and that doesn't work too well with the current tools we have either. 8k textures are bad enough right now for GPUs to process when making PBR textures. For 8k textures in the PBR tools you need a minimum of 64 gigs of RAM and a top-end GPU, and it's still slower than molasses, and mGPU setups don't help here. If we go higher, forget about it.
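To put a number on why 8k PBR work gets so heavy, a rough uncompressed-footprint calculation (assumes 4 bytes per texel and a typical 5-map PBR set; real pipelines compress and stream, so treat it as an upper bound):

```python
# Rough uncompressed footprint of one 8k PBR material (4 bytes per texel,
# 5 maps, full mip chain ~ +33%). Real pipelines compress, so treat this as
# an upper bound -- the point is only how fast the numbers grow.
res         = 8192
bytes_texel = 4
maps        = ["albedo", "normal", "roughness", "metallic", "ao"]

per_map_mb   = res * res * bytes_texel / 2**20      # 256 MB per map
with_mips_mb = per_map_mb * 4 / 3                   # ~341 MB with mips
material_gb  = with_mips_mb * len(maps) / 1024

print(f"one 8k map:          ~{per_map_mb:.0f} MB")
print(f"one 8k PBR material: ~{material_gb:.2f} GB uncompressed")
# A few dozen such materials is already tens of GB of working data, which is
# why the authoring tools want 64 GB of system RAM.
```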

Now, having said that, bandwidth is important for those types of tools too, but with the SSG the tools have to be made so they take advantage of the SSG. Substance Painter, depending on the texture sizes being created, uses around 1 gig and up of memory for caching purposes; not that much, but when working on large sets that cache will increase exponentially, so yeah, things like that will come in handy.

Take it from the real-time game side: the asset development is what's heavy, the actual rendering isn't. It's going to be a long time before we see game assets, in VR or wherever, take advantage of an SSG.

For specific HPC setups like oil and gas, they also need processing power, and one GPU isn't enough for the more powerful systems; they could use SSGs for a prototype, but they will need to go to a full server for a full production setup. Using more than one SSG defeats the purpose of the SSG and caching, as memory management gets extremely complex across multiple boards and multiple drives. If the programmers don't have control over this, the system will fall apart.
 
Who is making the drivers, who is validating these cards, and why do you think Pro cards are so much more expensive when they are pretty much gaming GPUs? They shouldn't be more expensive, not to the degree they are; I know your line of thinking, because we know they don't cost much more since they are products that are in volume elsewhere. But by god AMD can ask for that price when it's a niche and it's for professionals. These products are not meant for general gamers. Driver updates for pro products are much faster, and the quality control is much higher on the hardware, because businesses can't afford downtime; if they have downtime because of an IHV's hardware, they will not use them again. Just as an example, take Fox at the Super Bowl this year, which used Quadros for all their in-game analysis and camera wrap-around and whatnot: if they had driver problems with them for just one minute, how much money would they lose? Millions, tens of millions of dollars. They can't mess around with things, and there is no way IHVs can overlook that.
...
It's the same thing for this product: the SSG's price is high because it's for a specific professional market.

This product is not and will never be made for gamers.

You think game companies are the target for these products?

They are not; the targets are the movie industry, special effects, things like small architectural firms, etc., specifically smaller independent companies that do 3D work and need lower cost than a full server rack. So they can spend 10k on something instead of 150k on a full rack.

SSGs aren't even for HPC either. The need isn't there for them. For HPC you gotta go to a full rack, multiple racks even.
AMD has already said the intention for HBCC is to remove resource management from game developers. I'd have to go dig up the link, but I dropped it at B3D a while back. So yeah I'm thinking that means it's targeted at game developers. Being a feature they intend to bring to the gaming industry I'd assume the drivers are part of the standard package. Unlike the original Fiji based SSG the HBCC does the lifting instead of requiring programmer intervention. That's far more than a "niche" market.

How fast is that SSG again? Is it...2 or 4GB/sec?
The dev kit was around 4GB/s with the Samsung NVMe drives and x4 PCIE configuration. The Vega implementation with 16 (or more) PCIE3 lanes I'd assume is up around 16GB/s or more, ultimately dependent on the storage media used. They could still use system memory for cost savings, or load up an enthusiast-class product with its own memory for performance.

We just don't know if this is designed for what AMD is talking about: games really designed for 4K, with real texture and other asset support, making a true 4k-and-beyond game, including future VR titles. It looks like it, but it would have to work. AMD has a number of patents on this tech, so how Nvidia would use something similar will be interesting.
One of AMD's senior engineers said it was for gaming. 4k+ would be implied, but even 1080p could use large quantities of memory. It was in one of the videos following the first Horizon events as I recall.

For gaming it's a bit different: an increased amount of data would become useful, but to create that amount and those types of assets we don't have the processing power unless we start looking at multiple GPU setups, and that doesn't work too well with the current tools we have either. 8k textures are bad enough right now for GPUs to process when making PBR textures. For 8k textures in the PBR tools you need a minimum of 64 gigs of RAM and a top-end GPU, and it's still slower than molasses, and mGPU setups don't help here. If we go higher, forget about it.

Now, having said that, bandwidth is important for those types of tools too, but with the SSG the tools have to be made so they take advantage of the SSG. Substance Painter, depending on the texture sizes being created, uses around 1 gig and up of memory for caching purposes; not that much, but when working on large sets that cache will increase exponentially, so yeah, things like that will come in handy.

Take it from the real-time game side: the asset development is what's heavy, the actual rendering isn't. It's going to be a long time before we see game assets, in VR or wherever, take advantage of an SSG.

For specific HPC setups like oil and gas, they also need processing power, and one GPU isn't enough for the more powerful systems; they could use SSGs for a prototype, but they will need to go to a full server for a full production setup. Using more than one SSG defeats the purpose of the SSG and caching, as memory management gets extremely complex across multiple boards and multiple drives. If the programmers don't have control over this, the system will fall apart.
As I mentioned above, AMD already indicated this is the way they're moving for gaming. As you said, it's not the processing power that is the limitation, but the memory/storage performance. The very issue this tech would address. Ultimately the configuration comes down to how large of a dataset is being used and if TBs or 10's of GBs are required.
 
Just throwing this in because it is going to take time to see SSG mature, and it is competing against what Intel has been slow to bring to market with Optane.
The performance even when not ideal is eye-watering, and once they bring the NVDIMM cache along with the OS hooks it is going to be even faster. Tbh I am annoyed by how Intel has approached this, but their loss has opened the way for Ryzen to sell; I think Ryzen would have had challenges if Intel had managed to get the NVDIMM cache onto consumers with the OS hooks and game developers on board with Kaby Lake.
But this seems some time away for consumers, and the same can be said about SSG IMO.
https://www.pcper.com/news/Storage/...tane-SSD-DC-P4800X-Enterprise-SSD-Performance

Allyn at PCPer, who did the article, is a pretty smart engineer and based this on the engineering specs for Optane (Intel with enterprise/HPC is usually pretty accurate) and on experience with SSDs.
The latency improvement is actually 10x better than the charts indicate as that is the worst case for Optane.
I think some really underestimate the potential of Optane tech, not just as a high-GB disk cache but importantly as the NVDIMM solution Intel/Micron are also looking to provide, which is even faster.

(Charts from the linked PCPer article: random performance and read QoS/latency comparisons.)
Unfortunately this is what SSG is going to compete against, and I see HPC journalists more weighted towards Optane/NVDIMM.
For workstations and rendering/modelling it may be closer in competition, but it still depends on how long it takes both to provide mature products, importantly with broad support; the Intel/Micron approach gives them more flexibility.
Cheers
 
AMD has already said the intention for HBCC is to remove resource management from game developers. I'd have to go dig up the link, but I dropped it at B3D a while back. So yeah I'm thinking that means it's targeted at game developers. Being a feature they intend to bring to the gaming industry I'd assume the drivers are part of the standard package. Unlike the original Fiji based SSG the HBCC does the lifting instead of requiring programmer intervention. That's far more than a "niche" market.

It's a niche market right now; it's going to be a few generations down the road before it becomes viable, and at that point it will still be enthusiast level, so not for the masses, and game developers will still have to focus on what is best for most people. Let's say around 5-7 gens, I'm guessing, would be a good fit for this type of tech for mass gaming, so 7.5 to 10.5 years. At that point we don't even know what else will come up.

As I mentioned above, AMD already indicated this is the way they're moving for gaming. As you said, it's not the processing power that is the limitation, but the memory/storage performance. The very issue this tech would address. Ultimately the configuration comes down to how large of a dataset is being used and if TBs or 10's of GBs are required.

I agree it's the data set that is going to push this tech, and getting there from what we've got now (we have to think about what is being seen on the screen vs. what is in a total level) means it's going to be some time before data sets of that size are used in games.
 
It's a niche market right now; it's going to be a few generations down the road before it becomes viable, and at that point it will still be enthusiast level, so not for the masses, and game developers will still have to focus on what is best for most people. Let's say around 5-7 gens, I'm guessing, would be a good fit for this type of tech for mass gaming, so 7.5 to 10.5 years. At that point we don't even know what else will come up.



I agree it's the data set that is going to push this tech, and getting there from what we've got now (we have to think about what is being seen on the screen vs. what is in a total level) means it's going to be some time before data sets of that size are used in games.
The Microsoft Scorpio Xbox is supposed to render at 4K and also render VR well. I see lower cost and a larger graphics cache really coming into play here, without the high cost of 16gb or more of fast HBM or GDDR6 RAM. I would think it will be an 8gb HBM RyZen/Vega hybrid with a cache for the GPU. Since a console is a consistent platform for developers, the tech might take off here first. It looks pretty much ready to be implemented and designed for. Streaming high-resolution textures on demand, a.k.a. megatextures, via old-style mechanical drives was done in the past. With a local SSD it could really stream some rather large textures if need be. Almost nothing new here other than hardware with a more direct path to the GPU, decreasing latency and contention with main memory for the APU. If all game assets are loaded onto the cache SSD for the console GPU, it would free up a lot of main memory bandwidth for CPU use. This also allows a separate pool of fast graphics memory for the APU's GPU as well. It will be very interesting, but it looks like AMD will have a very stout design for the next Xbox even with only some of the capabilities incorporated.

All meaning large-dataset-style programming could be coming soon with the next console generation.
 
The Microsoft Scorpio Xbox is supposed to render at 4K and also render VR well. I see lower cost and a larger graphics cache really coming into play here, without the high cost of 16gb or more of fast HBM or GDDR6 RAM. I would think it will be an 8gb HBM RyZen/Vega hybrid with a cache for the GPU. Since a console is a consistent platform for developers, the tech might take off here first. It looks pretty much ready to be implemented and designed for. Streaming high-resolution textures on demand, a.k.a. megatextures, via old-style mechanical drives was done in the past. With a local SSD it could really stream some rather large textures if need be. Almost nothing new here other than hardware with a more direct path to the GPU, decreasing latency and contention with main memory for the APU. If all game assets are loaded onto the cache SSD for the console GPU, it would free up a lot of main memory bandwidth for CPU use. This also allows a separate pool of fast graphics memory for the APU's GPU as well. It will be very interesting, but it looks like AMD will have a very stout design for the next Xbox even with only some of the capabilities incorporated.

All meaning large-dataset-style programming could be coming soon with the next console generation.


The res is not really what pushes the data set; even at 1080p, higher quality textures are better for fidelity. Just to give you an example, for movies back in the early 2000s we were using 8k textures with procedural effects.

AMD would need to introduce a higher-cost product into a console environment, which is a low-cost market. In other words, that isn't up to them; that is up to the console manufacturer, and they don't like their consoles to go up in price. That's a last resort for them.
 
The res is not really what pushes the data set; even at 1080p, higher quality textures are better for fidelity. Just to give you an example, for movies back in the early 2000s we were using 8k textures with procedural effects.

AMD would need to introduce a higher-cost product into a console environment, which is a low-cost market. In other words, that isn't up to them; that is up to the console manufacturer, and they don't like their consoles to go up in price. That's a last resort for them.
Cost is one premium requirement; next would be best performance for that cost, I would say, and then the development cost for developers to use the new system. Just guessing/thinking.

So the possible design I see is an 8gb HBM2 APU: a 4-core/8-thread CPU combined with a Vega GPU of around 32 CUs, plus a cache SSD of 64gb-128gb. Obviously low-cost HBM will need to be available. Now, if the CPU and GPU are separated instead of using an APU, a single interposer could be used, but you lose that fast coupling between the GPU and CPU in the APU, so I would not think that would be the case. You would have a very low-power solution, small size, and a huge asset cache for the APU. Combine that with a cheap mechanical drive (no need for an SSD because you just read right into the cache as the game starts and plays). I see base systems at $399; add more for VR and SSD etc.
 
Cost is one premium requirement; next would be best performance for that cost, I would say, and then the development cost for developers to use the new system. Just guessing/thinking.

So the possible design I see is an 8gb HBM2 APU: a 4-core/8-thread CPU combined with a Vega GPU of around 32 CUs, plus a cache SSD of 64gb-128gb. Obviously low-cost HBM will need to be available. Now, if the CPU and GPU are separated instead of using an APU, a single interposer could be used, but you lose that fast coupling between the GPU and CPU in the APU, so I would not think that would be the case. You would have a very low-power solution, small size, and a huge asset cache for the APU. Combine that with a cheap mechanical drive (no need for an SSD because you just read right into the cache as the game starts and plays). I see base systems at $399; add more for VR and SSD etc.


Not that simple. To give you an example, the Cell CPU from Sony: that thing was fricken powerful for its time, and developers liked its power (hard to code for initially till they got used to it, but that is another thing). Developers were disappointed that Sony dropped the Cell and went with AMD's cores; they even complained about it not being powerful enough. But Sony did it to cut costs. The cost of the Cell processor is pretty much the reason why the PS3 had to be more expensive than the Xbox, and that hurt them, because if they sell fewer consoles, then fewer games are sold, and that is where the money is in the console market: the game sales.
 
Not that simple. To give you an example, the Cell CPU from Sony: that thing was fricken powerful for its time, and developers liked its power (hard to code for initially till they got used to it, but that is another thing). Developers were disappointed that Sony dropped the Cell and went with AMD's cores; they even complained about it not being powerful enough. But Sony did it to cut costs. The cost of the Cell processor is pretty much the reason why the PS3 had to be more expensive than the Xbox, and that hurt them, because if they sell fewer consoles, then fewer games are sold, and that is where the money is in the console market: the game sales.
Zen is way more powerful than Bobcat cores; a 4-core/8-thread Zen is something developers should be very happy about. Microsoft said 6 teraflops for the Scorpio, so it will be a hefty increase in performance over the Xbox One, like 4x. With that performance you will need some fast memory to drive both the CPU and GPU, and that pretty much leaves HBM as the first choice; otherwise you're talking about a bigger board, a wider memory bus for DDR, and more power. What allows for the cost savings, as I see it, is the cache ability of Vega: this can keep the memory size down from like 16gb to 8gb, keep the OS in the SSD cache, and use a lower-cost high-capacity mechanical drive for storing games. This will be one very powerful console and the biggest jump from one generation to the next. I don't see it costing more than $599, but it would have to drop rather fast for it to be successful. That is what is coming; how it's configured to get that performance level is up for debate. Bottom line: large data sets for games, especially with HDR and VR, are here for development.
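A quick back-of-envelope on that "like 4x" figure using the standard GCN shader-throughput formula (Xbox One's 12 CUs at 853 MHz are public numbers; everything else here is just arithmetic, not a leak):

```python
# Back-of-envelope on the "like 4x" claim, shader throughput only.
# GCN FP32 TFLOPS = CUs * 64 shaders * 2 ops/clock * clock (GHz) / 1000.
def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

xbox_one = gcn_tflops(12, 0.853)     # 12 CUs @ 853 MHz -> ~1.31 TF
scorpio  = 6.0                       # Microsoft's quoted target
print(f"Xbox One ~{xbox_one:.2f} TF, Scorpio target {scorpio} TF "
      f"(~{scorpio / xbox_one:.1f}x)")
# A 32-CU part would need unusually high console clocks to get there:
print(f"32 CUs would need ~{scorpio / gcn_tflops(32, 1):.2f} GHz for 6 TF")
```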
 
The dev kit was around 4GB/s with the Samsung NVMe drives and x4 PCIE configuration. The Vega implementation with 16 (or more) PCIE3 lanes I'd assume is up around 16GB/s or more, ultimately dependent on the storage media used. They could still use system memory for cost savings, or load up an enthusiast-class product with its own memory for performance.

Is there nothing here that screams cost issue to you?

It's funny how placing an SSD on a graphics card is "innovation".
 
Just throwing this in because it is going to take time to see SSG mature, and it is competing against what Intel has been slow to bring to market with Optane.
The performance even when not ideal is eye-watering, and once they bring the NVDIMM cache along with the OS hooks it is going to be even faster. Tbh I am annoyed by how Intel has approached this, but their loss has opened the way for Ryzen to sell; I think Ryzen would have had challenges if Intel had managed to get the NVDIMM cache onto consumers with the OS hooks and game developers on board with Kaby Lake.
But this seems some time away for consumers, and the same can be said about SSG IMO.
https://www.pcper.com/news/Storage/...tane-SSD-DC-P4800X-Enterprise-SSD-Performance

Allyn at PCPer, who did the article, is a pretty smart engineer and based this on the engineering specs for Optane (Intel with enterprise/HPC is usually pretty accurate) and on experience with SSDs.
The latency improvement is actually 10x better than the charts indicate as that is the worst case for Optane.
I think some really underestimate the potential of Optane tech, not just as a high-GB disk cache but importantly as the NVDIMM solution Intel/Micron are also looking to provide, which is even faster.

(Charts from the linked PCPer article: random performance and read QoS/latency comparisons.)
Unfortunately this is what SSG is going to compete against, and I see HPC journalists more weighted towards Optane/NVDIMM.
For workstations and rendering/modelling it may be closer in competition, but it still depends on how long it takes both to provide mature products, importantly with broad support; the Intel/Micron approach gives them more flexibility.
Cheers

SSG and Optane have nothing to do with one another. SSG could be 3D XPoint based too, and could even use NVDIMMs.
 
The Microsoft Scorpio Xbox is supposed to render at 4K and also render VR well. I see lower cost and a larger graphics cache really coming into play here, without the high cost of 16gb or more of fast HBM or GDDR6 RAM. I would think it will be an 8gb HBM RyZen/Vega hybrid with a cache for the GPU. Since a console is a consistent platform for developers, the tech might take off here first. It looks pretty much ready to be implemented and designed for. Streaming high-resolution textures on demand, a.k.a. megatextures, via old-style mechanical drives was done in the past. With a local SSD it could really stream some rather large textures if need be. Almost nothing new here other than hardware with a more direct path to the GPU, decreasing latency and contention with main memory for the APU. If all game assets are loaded onto the cache SSD for the console GPU, it would free up a lot of main memory bandwidth for CPU use. This also allows a separate pool of fast graphics memory for the APU's GPU as well. It will be very interesting, but it looks like AMD will have a very stout design for the next Xbox even with only some of the capabilities incorporated.

All meaning large-dataset-style programming could be coming soon with the next console generation.

We already know enough about Xbox S to say that's not going to happen. Say hello to ~320GB/sec on a 384bit bus using GDDR5 and 12GB ;)
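For what it's worth, that ~320 GB/s figure is easy to sanity check from the bus width (arithmetic only; the rumoured configuration itself is unconfirmed):

```python
# Sanity check on "~320 GB/s on a 384-bit bus using GDDR5":
# bandwidth (GB/s) = bus_width (bits) * per-pin rate (Gbps) / 8
bus_bits   = 384
target_gbs = 320
per_pin_gbps = target_gbs * 8 / bus_bits
print(f"needs ~{per_pin_gbps:.2f} Gbps per pin, i.e. ordinary ~6.6-7 Gbps GDDR5")
# For comparison, one HBM2 stack is specced at up to ~256 GB/s, so two stacks
# already beat that figure -- the argument here is really about cost, not speed.
```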

Zen cores would be great, but it would kill a lot of backwards compatibility. We do know it will have 8 cores and that's not 4C/8T. Improved Jaguar cores is the guess.

Zen is way more powerful than Bobcat cores; a 4-core/8-thread Zen is something developers should be very happy about. Microsoft said 6 teraflops for the Scorpio, so it will be a hefty increase in performance over the Xbox One, like 4x. With that performance you will need some fast memory to drive both the CPU and GPU, and that pretty much leaves HBM as the first choice; otherwise you're talking about a bigger board, a wider memory bus for DDR, and more power. What allows for the cost savings, as I see it, is the cache ability of Vega: this can keep the memory size down from like 16gb to 8gb, keep the OS in the SSD cache, and use a lower-cost high-capacity mechanical drive for storing games. This will be one very powerful console and the biggest jump from one generation to the next. I don't see it costing more than $599, but it would have to drop rather fast for it to be successful. That is what is coming; how it's configured to get that performance level is up for debate. Bottom line: large data sets for games, especially with HDR and VR, are here for development.

6Tflops is close to RX480 with 5.7Tflops. And forget your HBM dreams.
 
We already know enough about Xbox S to say that's not going to happen. Say hello to ~320GB/sec on a 384bit bus using GDDR5 and 12GB ;)

Zen cores would be great, but it would kill a lot of backwards compatibility. We do know it will have 8 cores and that's not 4C/8T. Improved Jaguar cores is the guess.
Can you please explain to me how replacing one X86 cpu with another would kill a lot of backward compatibility?
 
SSG and Optane have nothing to do with one another. SSG could be 3D XPoint based too, and could even use NVDIMMs.
SSG is using Samsung tech and is based upon GPU on-board caching; I am talking about AMD's use of the tech, and they have no plans for it outside of that, because otherwise you are just equal to normal SSD loading.

While they are different tech and implementations, they will be, or can be, used for similar things.
And so it is important to understand just how much faster Optane/NVDIMM is than a standard SSD.
3D XPoint makes the SSG 'cache' in Vega redundant, especially with NVDIMM and Optane (which is the only solution coming out for a while with support from motherboards, CPU, chipset and OS).

Anyway, remember that the AMD SSG with GPU was only compared to standard SSD loading (not surprising as Optane is only just shipping for sampling now) and not to the massive performance gain of Optane.
Cheers
 
Can you please explain to me how replacing one X86 cpu with another would kill a lot of backward compatibility?

Because they code so tightly to the CPU that even instruction timings can mess things up. Same reason why you just can't raise the clocks either without testing and perhaps patching, as with the new "boost modes" for legacy titles on the PS4 Pro, for example. You get more performance, but you also lock yourself in.
 
SSG is using Samsung tech and is based upon GPU on-board caching; I am talking about AMD's use of the tech, and they have no plans for it outside of that, because otherwise you are just equal to normal SSD loading.

While they are different tech and implementations, they will be, or can be, used for similar things.
And so it is important to understand just how much faster Optane/NVDIMM is than a standard SSD.
3D XPoint makes the SSG 'cache' in Vega redundant, especially with NVDIMM and Optane (which is the only solution coming out for a while with support from motherboards, CPU, chipset and OS).
Cheers

Oh I agree. But SSG was irrelevant from the start. It's nothing but a useless PR feature unless you start placing GPUs on x1 lanes and preload them for more or less individual compute.
 
Oh I agree. But SSG was irrelevant from the start. It's nothing but a useless PR feature unless you start placing GPUs on x1 lanes and preload them for more or less individual compute.
I agree :)
And my post was to show just how much so when considering the performance of the Optane PCIe 375GB add-in card, which is not as flexible or even as fast as the NVDIMM cache solution Intel/Micron are also launching.
Cheers
 
Because they code so tightly to the CPU that even instruction timings can mess things up. Same reason why you just can't raise the clocks either without testing and perhaps patching, as with the new "boost modes" for legacy titles on the PS4 Pro, for example. You get more performance, but you also lock yourself in.

Makes sense, thanks for explaining it. I am not a console guru.
 
We already know enough about Xbox S to say that's not going to happen. Say hello to ~320GB/sec on a 384bit bus using GDDR5 and 12GB ;)

Zen cores would be great, but it would kill a lot of backwards compatibility. We do know it will have 8 cores and that's not 4C/8T. Improved Jaguar cores is the guess.



6Tflops is close to RX480 with 5.7Tflops. And forget your HBM dreams.
Microsoft's Project Helix is meant to merge Xbox and PC games into one platform. That is supposed to be completed with the Xbox Scorpio.
  • Meaning console code and PC code will run on any type of compliant x86/x64 CPU, so CPU differences will not make a difference.
Polaris at 5.7 TFlops is too high-powered for an APU with 8 cores. That would never work, at least in its current state. Vega, with its additional perf/W advantage, would be the only true solution.
A custom CPU using Jaguar cores is possible, but at what speed, the same speed? If a different speed then, as you already mention, software would break. In reality most would work with RyZen cores as well as Intel and other AMD cores. All software should be compatible at that point with PC Win 10 gaming machines.



We are getting too far out for this thread. I do believe Vega will be used for the next Xbox like Polaris was used for the PS4 Pro, which did not break too much stuff in the end. You are basically saying Microsoft will just duplicate a PS4 Pro with a speed bump a year later (not likely).
 
Unfortunately this is what SSG is going to compete against, and I see HPC journalists more weighted towards Optane/NVDIMM.
For workstations and rendering/modelling it may be closer in competition, but it still depends on how long it takes both to provide mature products, importantly with broad support; the Intel/Micron approach gives them more flexibility.
SSG is just the first generation of adding secondary memory pools to the adapter. That same engineer I mentioned above even said the SSG/HBCC was designed for use with high-performance non-volatile technologies. For supercomputers AMD likely won't use SSGs as much as CPU/APUs, which represent the majority of that market. SSG and 3DXPoint/Optane/NVDIMM all work together, the SSG being the HBCC implementation with HBM as cache and NVDIMM as storage. The storage used would depend on the application's needs and price sensitivity.

It's a niche market right now; it's going to be a few generations down the road before it becomes viable, and at that point it will still be enthusiast level, so not for the masses, and game developers will still have to focus on what is best for most people. Let's say around 5-7 gens, I'm guessing, would be a good fit for this type of tech for mass gaming, so 7.5 to 10.5 years. At that point we don't even know what else will come up.
I'd guess we start seeing it in a matter of months. Just need a game that uses sparse resources, then some high-resolution texture packs to push storage needs. Plenty of currently released games are already doing that. All it should really take is the graphics card reporting an absurdly large available memory pool and a game that uses all available memory.

Is there nothing here that screams cost issue to you?

It's funny how placing an SSD on a graphics card is "innovation".
Decreasing relative HBM capacity while increasing capacity of far more affordable memory for a storage pool? Doesn't really scream cost issue to me, just more options. No more than someone could get 1TB of ram on their system if they really wanted.

Oil/gas and video apparently like the tech that wasn't available on the market. Seems reasonably innovative along with saving devs from resource management in the future.

SSG is using Samsung tech and is based upon GPU on-board caching; I am talking about AMD's use of the tech, and they have no plans for it outside of that, because otherwise you are just equal to normal SSD loading.

While they are different tech and implementations, they will be, or can be, used for similar things.
And so it is important to understand just how much faster Optane/NVDIMM is than a standard SSD.
3D XPoint makes the SSG 'cache' in Vega redundant, especially with NVDIMM and Optane (which is the only solution coming out for a while with support from motherboards, CPU, chipset and OS).

Anyway, remember that the AMD SSG with GPU was only compared to standard SSD loading (not surprising as Optane is only just shipping for sampling now) and not to the massive performance gain of Optane.
Cheers
The original SSG was using the fastest storage available with a large capacity. 3DXPoint isn't the SSG "cache". The cache is the HBM/GDDR and the NVDIMM/3DXPoint/SSD the "video memory/storage". It was compared to standard SSD loading to keep the comparison apples to apples. The SSD is ultimately whatever storage media is chosen: system memory, NVDIMM, SSD, SAN. The SSG/HBCC could use any of those so comparing different media would be somewhat pointless beyond benchmarking storage system performance.
 
Zen is way more powerful than Bobcat cores; a 4-core/8-thread Zen is something developers should be very happy about. Microsoft said 6 teraflops for the Scorpio, so it will be a hefty increase in performance over the Xbox One, like 4x. With that performance you will need some fast memory to drive both the CPU and GPU, and that pretty much leaves HBM as the first choice; otherwise you're talking about a bigger board, a wider memory bus for DDR, and more power. What allows for the cost savings, as I see it, is the cache ability of Vega: this can keep the memory size down from like 16gb to 8gb, keep the OS in the SSD cache, and use a lower-cost high-capacity mechanical drive for storing games. This will be one very powerful console and the biggest jump from one generation to the next. I don't see it costing more than $599, but it would have to drop rather fast for it to be successful. That is what is coming; how it's configured to get that performance level is up for debate. Bottom line: large data sets for games, especially with HDR and VR, are here for development.

I'm not talking about what is better and what is worse; I'm giving an example of why Sony didn't reuse a Cell architecture (updated, of course) for their PS4, which would have been way better than the Jaguar cores it's got now. It all came down to the cost of the console. They couldn't compete with the Xbox if their cost was going to be 100 bucks more per console. End result: they lost hundreds of millions or billions in game sales because of it.
 
I'm not talking about what is better and what is worse; I'm giving an example of why Sony didn't reuse a Cell architecture (updated, of course) for their PS4, which would have been way better than the Jaguar cores it's got now. It all came down to the cost of the console. They couldn't compete with the Xbox if their cost was going to be 100 bucks more per console. End result: they lost hundreds of millions or billions in game sales because of it.
10-4
 
The original SSG was using the fastest storage available with a large capacity. 3DXPoint isn't the SSG "cache"; the cache is the HBM/GDDR, and the NVDIMM/3DXPoint/SSD is the "video memory/storage". It was compared to standard SSD loading to keep the comparison apples to apples. The backing store is ultimately whatever storage medium is chosen: system memory, NVDIMM, SSD, SAN. The SSG/HBCC could use any of those, so comparing different media would be somewhat pointless beyond benchmarking storage system performance.

You mistake my context: it is 'cache' storage in terms of what it is used for, journalists refer to it that way as well, and it's not my fault AMD is trying to rename HBM rather than just calling it VRAM.
You do agree that it has only been shown working with SSD tech and not 3D XPoint, and that there is no indication it will get that?

The HBCC that you touch upon has nothing to do with SSG in my context.
SSG is just on-card NAND storage, while the HBCC is meant as a more advanced unified, dynamic memory pool manager.

And come on, it is a bit nonsensical to go along with AMD renaming HBM as 'High Bandwidth Cache' instead of VRAM.
When Nvidia does anything like this, everyone complains about them renaming things for the sake of it.

And as we have all mentioned repeatedly, the HBCC itself just manages the diverse memory pool options, along with being more dynamic in terms of what is loaded into VRAM, but as mentioned that also has its limitations until developers code for it.

It's late, so sorry if I do not give a lengthy response.
Cheers
 
APUs are still bound by the same metrics: your direct memory pool or cache pool will still be RAM instead of VRAM, and then your bottleneck shifts to the other component connectors, which have less bandwidth than the RAM-to-APU link.
Actually, as a user of ENB Boost, that isn't entirely correct. Using it allows for insane textures and dynamic shifting of loads, giving smooth gameplay where using just the GPU VRAM would cause stutter and CTDs. It isn't as simple as the basic belief that you keep parroting.
 
You mistake my context: it is 'cache' storage in terms of what it is used for, journalists refer to it that way as well, and it's not my fault AMD is trying to rename HBM rather than just calling it VRAM.
You do agree that it has only been shown working with SSD tech and not 3D XPoint, and that there is no indication it will get that?
It's only been shown with SSD tech, but 3DXPoint is the next logical step and was unavailable at the time. 3DXPoint is only now starting to hit the market, as is Vega, which would have improved on the functionality; 4x vs 16x+(?) lanes would be beneficial with the faster drives.

The HBCC that you touch upon has nothing to do with SSG in my context.
SSG is just on-card NAND storage, while the HBCC is meant as a more advanced unified, dynamic memory pool manager.
The original was NAND, but at its most basic it was a block IO device, which could be interesting. For oil/gas and video, capacity would have been the primary concern.

And come on, it is a bit nonsensical to go along with AMD renaming HBM as 'High Bandwidth Cache' instead of VRAM.
When Nvidia does anything like this, everyone complains about them renaming things for the sake of it.

And as we have all mentioned repeatedly, the HBCC itself just manages the diverse memory pool options, along with being more dynamic in terms of what is loaded into VRAM, but as mentioned that also has its limitations until developers code for it.
Consider it from the context of an APU. A Vega APU with an HBCC would probably see system memory as its video memory and the HBM as an L3(?) data cache. That system memory could be DRAM, NVDIMMs, etc. For a discrete GPU it might make sense to retain that design, working around the PCIe bottleneck by adding an additional storage pool to the device.

In my view SSG was first-generation 'tiered memory' (for lack of a better term atm): a GPU with a block IO device added for additional storage when presented with a large dataset. With a flat memory space you could substitute the block device for VRAM and run a program accordingly. Performance would have its obvious limitations depending on what you were doing. Extremely ALU-heavy workloads, where you load data once and then spend forever processing it, should work well with that model.
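As a rough CPU-side analogy for that flat-memory idea (my own sketch, not how the SSG driver actually works), you can mmap a large file sitting on an SSD and let the OS fault pages in as they are first touched. The file name below is a placeholder:

#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Linux/POSIX sketch: map a big dataset file and read it like an in-memory
// array. Each page is pulled from the SSD the first time it is touched and
// served from RAM afterwards, i.e. the "load once, process forever" pattern.
int main() {
    const char* path = "large_dataset.bin";  // hypothetical dataset file
    int fd = open(path, O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }

    struct stat st {};
    if (fstat(fd, &st) != 0) { std::perror("fstat"); return 1; }

    auto* data = static_cast<unsigned char*>(mmap(
        nullptr, static_cast<size_t>(st.st_size), PROT_READ, MAP_PRIVATE, fd, 0));
    if (data == MAP_FAILED) { std::perror("mmap"); return 1; }

    // ALU-heavy stand-in: checksum every byte of the mapped dataset.
    unsigned long long sum = 0;
    for (off_t i = 0; i < st.st_size; ++i)
        sum += data[i];
    std::printf("checksum: %llu\n", sum);

    munmap(data, static_cast<size_t>(st.st_size));
    close(fd);
    return 0;
}

The GPU version is obviously far more involved, but the model being pitched seems essentially the same: a huge flat address space backed by something slower than local memory, which works best when data is read once and then chewed on for a long time.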

Second generation would be the HBCC. That block IO device still exists, but a new hardware memory manager is added to improve efficiency, using the HBM as a cache with better bandwidth utilization. Which device it maps as memory is probably selectable: if the cache holds everything, a separate pool isn't required; in the case of an APU it could be system RAM; and it may also be a storage or network device if capacity or redundancy is the larger concern.

For developers it should "just work", but it could obviously be more efficient with some hints. In the most basic implementation, each created resource could be a single reference for the HBCC; sparse resources, where a texture for example gets broken into tiles, would allow this with finer granularity and less memory wasted on data that never ultimately gets used. At its most basic it "just works" with the current model, loading entire resources as required.

The benefit of the HBCC is that hardware is more flexible and efficient when it comes to managing a large quantity of resources and issuing commands. A software model would likely keep a list with the most recently used resource at the front; when RAM needs to be freed, the last item on the list gets evicted. Once that list becomes thousands or millions of elements long and is updated in parallel, it becomes problematic. Tracking metrics for a million cache lines in software would likely get "fun" really fast, keeping in mind that each GPU core could be generating hits on random portions of a sparse resource. The eviction scheme they use could be more complex as well: access frequency likely matters more than last-accessed time, alignment, etc.

It will be interesting to see the details on just how many tiles can be handled. Fiji already has a software approach to my understanding (that is how they manage the 4GB in many games), and Nvidia would likely do something similar. Vega 10/11 would be hardware implementations, possibly with different capabilities between them; 8GB vs 16GB potentially changes the number of tiles ideally tracked, with resident tiles handled somewhat dynamically based on capabilities (virtual addresses and metrics).
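To make that software model concrete, here is a minimal LRU tile cache sketch in C++ (my own illustration, not AMD's or Nvidia's actual driver code); the tile IDs, the resident-tile budget, and the commented-out backing-store hooks are hypothetical:

#include <cstdint>
#include <cstdio>
#include <list>
#include <unordered_map>

// Sketch of the software eviction model described above: the most recently
// used tiles sit at the front of a list, and the least recently used tile is
// evicted when the resident budget is exceeded.
class TileCache {
public:
    explicit TileCache(std::size_t maxResidentTiles) : capacity_(maxResidentTiles) {}

    // Called on every tile access; returns true on a hit, false on a miss.
    bool touch(std::uint64_t tileId) {
        auto it = index_.find(tileId);
        if (it != index_.end()) {
            // Hit: move the tile to the front (most recently used).
            lru_.splice(lru_.begin(), lru_, it->second);
            return true;
        }
        if (lru_.size() >= capacity_) {
            // Miss with a full cache: evict the least recently used tile.
            std::uint64_t victim = lru_.back();
            lru_.pop_back();
            index_.erase(victim);
            // writeBackToStorage(victim);  // hypothetical SSD/NVDIMM write-back
        }
        lru_.push_front(tileId);
        index_[tileId] = lru_.begin();
        // fetchFromStorage(tileId);        // hypothetical SSD/NVDIMM read
        return false;
    }

private:
    std::size_t capacity_;
    std::list<std::uint64_t> lru_;  // MRU at the front, LRU at the back
    std::unordered_map<std::uint64_t, std::list<std::uint64_t>::iterator> index_;
};

int main() {
    TileCache cache(2);  // tiny resident budget just for demonstration
    const std::uint64_t accesses[] = {1, 2, 1, 3, 2};
    for (std::uint64_t id : accesses)
        std::printf("tile %llu: %s\n",
                    static_cast<unsigned long long>(id),
                    cache.touch(id) ? "hit" : "miss");
    return 0;
}

Even this toy version does a hash lookup and a list splice on every access, and it would need locking the moment it is updated in parallel. Scale that to millions of tiles touched by thousands of GPU threads, add a smarter frequency-aware eviction policy, and it is easy to see why moving that bookkeeping into dedicated hardware is attractive.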
 
Actually, as a user of ENB Boost, that isn't entirely correct. Using it allows for insane textures and dynamic shifting of loads, giving smooth gameplay where using just the GPU VRAM would cause stutter and CTDs. It isn't as simple as the basic belief that you keep parroting.


ENB-based software still requires individual system setup. It's not simple when you are talking about hardware with different latencies, different caching systems, and different memory technologies. It might work well for a set memory type that needs to shuffle data from system memory to VRAM, but add in other variables, like a hard drive, and you dramatically increase the complexity.

I'm not parroting, JustReason; you are just not able to comprehend the nuances of what we are talking about.
 
I'd guess we start seeing it in a matter of months. It just needs a game that uses sparse resources and some high-resolution texture packs to push storage needs; plenty of currently released games are already doing that. All it should really take is the graphics card reporting an absurdly large available memory pool and a game that uses all the available memory.


Really? We have games that at max right now use around 6GB of VRAM, AMD is saying they really don't use that much, and by using their HBCC they could cut that down by as much as half. How do you expect to see games in the next few months not only double memory requirements, but actually show a four-fold increase (if AMD is going to push developers to use the HBCC)? It's got to be one or the other, because pushing both at the same time makes no sense; they contradict each other. Do you think developers were planning to push 12 or 16GB games when only one card, the Titan X, had 12GB?

Remember, devs weren't planning around SSG tech when they started making their games. Asset development is very time consuming, and costly if you need to redo assets. I think we talked about how long assets take to make, and from that it's easy to figure out the costs; of course certain things won't increase, like concept art, but that is a fairly small portion of the actual cost.
 