Vega Rumors

I'm very interested in the Infinity Fabric. I'd assume it will use PCIe x16 like the CPU, but what will the speed be? Will it be tied to RAM speed? I'm not sure how much carries over from the CPU side.
RAM speed is where it gets interesting, given the differences of the HBM design. This is why everyone's unsure about it. No doubt if they have done it, it will be in some logical way.
But that also leads to Simplyfuns' post, which is probably bang on the money.
I think AMD is about to make mGPU a "thing" that is invisible to whatever is making the calls. Given their history, I think it will work, but version 1 will have quirks we don't like.
This sounds about right. E.g. only 50% scaling, or some sort of frame-time/pacing issue.

This is RTG's only real ticket to high-end competition for the next few years, though, and it could be as disruptive as Naples potentially will be.

E.g. we may see something like the following:
Vega 10: OC'd 1080 to Ti range, or faster in some games.
GV104: ~Ti speed range.
Nvidia is still tied with AMD for overall performance until they get GV102 out later. Of course, the power-use shilling will be in overdrive during this time.
GV102 launches in 2018: the usual +50-60%, with a much bigger die than GP102, a clock bump, some process refinements as usual, Titan, ten gorillion dollars.
AMD counters with Vega 10 x2, matches or takes the crown back (I doubt it will be ready this year, but please prove me wrong, AMD), costs a little less.
>PowerUseShillingIntensifies.webm
Late 2018/December: Nvidia releases a Volta Ti, the "watered-down Titan edition because we can't sell any more for HPC first".
AMD releases Navi, drawing nearly even on process this time (7 nm GloFo).
Nvidia releases a Ti ("watered-down Titan edition with a cost cut and less RAM") and some "even less watered-down edition" Titan with more RAM than the last one and a clock bump.
The MCM process repeats (note how they expect Vega 10 x2 much sooner than the usual year between single- and dual-GPU launches...) as MCM/mGPU cards become part of key R&D.
The high end is now muddied. AMD offers higher performance at slightly higher power usage.
Epeen upgraditis types are now confused 'cus muh Nvidia is not so stronk all of a sudden, and "AMD SUXX LOLOLOL" can't be so easily played when they're competitive or better on performance.


The next few years are going to be a great time for GPU consumers... 4K, 8K, VR, mGPU/MCM, bring it on!
 
The high end is now muddied. AMD offers higher performance at slightly higher power usage.
Epeen upgraditis types are now confused 'cus muh Nvidia is not so stronk all of a sudden, and "AMD SUXX LOLOLOL" can't be so easily played when they're competitive or better on performance.

The next few years are going to be a great time for GPU consumers... 4K, 8K, VR, mGPU/MCM, bring it on!

Yeah, I'm not making any performance claims here. I still recall the Fury launch turning into nothing more than an interesting experiment I didn't want to buy. If they launch this the way I think they will, there won't be frame-pacing issues in the traditional sense; this won't be two GPUs in CrossFire, it will be one GPU composed of a couple of chips, with some driver-managed latency issues or something. This memory caching they've been busting their ass talking about is what is leading me down this thought trail. It's not like AMD can't design a unique memory controller; they have a few years behind them here.

But I know nothing and I don't claim to.
 
All this talk of Infinity Fabric / HyperTransport / 'Intel Fabric' etc. reminds me of how some of the old SGI workstations were built: processor, memory, and graphics subsystem were all linked together.

No real relation to what y'all are talking about, but I thought it interesting considering all the chatter going on in this thread. ;)
 
All this talk of Infinity Fabric / HyperTransport / 'Intel Fabric' etc. reminds me of how some of the old SGI workstations were built: processor, memory, and graphics subsystem were all linked together.

No real relation to what y'all are talking about, but I thought it interesting considering all the chatter going on in this thread. ;)


Yep, and that is why it all doesn't really matter; what matters is whether these new "fabrics" have the bandwidth available to sustain memory pooling. Splitting up workloads across GPUs isn't an issue, but each GPU needs to be aware of and have access to what the other GPUs are doing. Otherwise mGPU has to fall back to SFR or AFR, neither of which is ideal at the moment because of the type of renderers being used now and in the near future.
 
I think AMD is about to make mGPU a "thing" that is invisible to whatever is making the calls. Given their history, I think it will work, but version 1 will have quirks we don't like.

Hoping for magical pixie-fairies is cute... but that is about it.
 
All this talk of Infinity Fabric / HyperTransport / 'Intel Fabric' etc. reminds me of how some of the old SGI workstations were built: processor, memory, and graphics subsystem were all linked together.

No real relation to what y'all are talking about, but I thought it interesting considering all the chatter going on in this thread. ;)

If you're talking about something like the O2 class, it was a straightforward unified memory architecture. They implemented it well for the time. Basically every APU uses the same concept.

Yep, and that is why it all doesn't really matter; what matters is whether these new "fabrics" have the bandwidth available to sustain memory pooling. Splitting up workloads across GPUs isn't an issue, but each GPU needs to be aware of and have access to what the other GPUs are doing. Otherwise mGPU has to fall back to SFR or AFR, neither of which is ideal at the moment because of the type of renderers being used now and in the near future.

I think it's quite possible that we might see mGPU that operates as a single traditional GPU. Concepts we are used to regarding AFR and SFR might well be out the window, and we could be looking at a Ryzen-type situation: the OS has no idea whether a Zen-based design is multi-module or not.

I really, seriously don't think they want HBM2 because it's a cool thing to have or because there's good marketing hype; I think they feel they absolutely NEED HBM2 to make this concept work. The only thing I can't wrap my head around is the latency across whatever fabric approach they might use to attach the modules.

This is a seriously curious situation, because at the end of the day they could scale this approach to whatever they can handle from a power-management perspective.
 
Hoping for magical pixie-fairies is cute... but that is about it.

Fair enough. I just think some of the patent work we've seen moving through, and some of the concepts we've seen introduced with Zen, can apply here. As I said, I have no idea. Strictly speculating out my ass based on available data.

But that doesn't mean I believe in fairies no matter how hard you try and pick me up by complimenting my cuteness. Save it for your boyfriend.
 
If you're talking about something like the O2 class, it was a straightforward unified memory architecture. They implemented it well for the time. Basically every APU uses the same concept.



I think it's quite possible that we might see mGPU that operates as a single traditional GPU. Concepts we are used to regarding AFR and SFR might well be out the window, and we could be looking at a Ryzen-type situation: the OS has no idea whether a Zen-based design is multi-module or not.

I really, seriously don't think they want HBM2 because it's a cool thing to have or because there's good marketing hype; I think they feel they absolutely NEED HBM2 to make this concept work. The only thing I can't wrap my head around is the latency across whatever fabric approach they might use to attach the modules.

This is a seriously curious situation, because at the end of the day they could scale this approach to whatever they can handle from a power-management perspective.

It's a combination of the interposer and the memory type; the drop in latency and sufficient bandwidth eliminate the need for traditional mGPU. Now, having said that, we can see what happens to the "effective bandwidth" with the CCX crosstalk on Ryzen. You can't have that in a GPU setting at all; GPUs are all about throughput. Any delay would be magnified much more than what we see with Ryzen.
 
Yep, and that is why it all doesn't really matter; what matters is whether these new "fabrics" have the bandwidth available to sustain memory pooling. Splitting up workloads across GPUs isn't an issue, but each GPU needs to be aware of and have access to what the other GPUs are doing. Otherwise mGPU has to fall back to SFR or AFR, neither of which is ideal at the moment because of the type of renderers being used now and in the near future.

I wonder how much IF and other utility besides the HBCC has been added to Vega versus Fiji.

Same number of stream processors, but it seems a whole lot of other circuitry has been added for it to be as big as it is with a process shrink. I guess we'll find out soon™.
 
Fair enough. I just think some of the patent work we've seen moving through, and some of the concepts we've seen introduced with Zen, can apply here. As I said, I have no idea. Strictly speculating out my ass based on available data.

But that doesn't mean I believe in fairies no matter how hard you try and pick me up by complimenting my cuteness. Save it for your boyfriend.

Then you need to provide some arguments that are not based on fuzzy warm feelings.

Going by AMD's history in multi-GPU (master/slave cards, micro-stutter, etc.), I see no technical backstory for your hope... ball in your court.
 
Then you need to provide some arguments that are not based on fuzzy warm feelings.

Going by AMD's history in multi-GPU (master/slave cards, micro-stutter, etc.), I see no technical backstory for your hope... ball in your court.

I'm just looking for the rules that say speculation is not allowed here, especially when I state everywhere that I really am speculating. Why that upsets you so is beyond me. Maybe stand up and get a stretch, and approach this what-if thread as what it is.

What I've written is not technically impossible. I can already see, and have mentioned, flaws in the ideas I proposed, and I'm looking to work those through. Some people are adding to this, saying why the idea might not work, what they think will happen, and how this might all roll out.

I just don't have this burning keyboard-warrior desire to be proven right, and you're not really adding much to the conversation here.
 
I'm not upset... or using fallacies... I am simply asking you to base your speculation on something more than wishful thinking.

It seems you are unable to do so... no surprise... but being mad at me for your own shortcomings is the opposite of intelligent.

Have a nice day.

And you too have a truly wonderful day!!
 
Then you need to provide some arguments that are not based on fuzzy warm feelings.

Going by AMD's history in multi-GPU (master/slave cards, micro-stutter, etc.), I see no technical backstory for your hope... ball in your court.

I used SLI, and micro-stutter sucks ass there as well; it's not unique to AMD CrossFire. Both companies have had their issues over the years.
 
Yeah, I'm not making any performance claims here. I still recall the Fury launch turning into nothing more than an interesting experiment I didn't want to buy. If they launch this the way I think they will, there won't be frame-pacing issues in the traditional sense; this won't be two GPUs in CrossFire, it will be one GPU composed of a couple of chips, with some driver-managed latency issues or something. This memory caching they've been busting their ass talking about is what is leading me down this thought trail. It's not like AMD can't design a unique memory controller; they have a few years behind them here.

But I know nothing and I don't claim to.

Neither do I, beyond what has happened in history and that it's almost always cyclical. Excellent point about AMD mentioning memory. It's almost like they have not talked about Vega or Navi directly, but have let the cat out of the bag with the more unusual aspects of the designs they are working on.
Fury was indeed an experiment if you really look at it; nice way to put it. I think, process/core clock limitations aside, Fury did better than 4 GB of VRAM should have. Quite a few don't know that some managed to get HBM to clock really well... I wonder what that does for certain mining?

Yep, and that is why it all doesn't really matter; what matters is whether these new "fabrics" have the bandwidth available to sustain memory pooling. Splitting up workloads across GPUs isn't an issue, but each GPU needs to be aware of and have access to what the other GPUs are doing. Otherwise mGPU has to fall back to SFR or AFR, neither of which is ideal at the moment because of the type of renderers being used now and in the near future.

What sort of bandwidth are we talking about for this? Would some HBM-tier HBCC be enough?

I'm not upset... or using fallacies... I am simply asking you to base your speculation on something more than wishful thinking.

It seems you are unable to do so... no surprise... but being mad at me for your own shortcomings is the opposite of intelligent.

Have a nice day.

It's not just wishful thinking. There is potential to this; look at the roadmap.
Roadmap-640x360.jpg


Scalability.

Scalability? What in the hell would they put that there for if they don't mean mGPU?
Next-gen memory is SSG, which is already being tested with Polaris in some form for large-dataset customers (oil and gas, imaging, etc.). So there are PCIe lanes already available today. The "Polaris is larger and slightly less efficient because it was autorouted" excuse obviously does not cover it all. AMD has pulled a Xeon, opening up capabilities to certain customers at the silicon/hardware level as needed, on a bigger basis than the usual pro/consumer driver-level split.
Like Fiji, Polaris is also a bit of an experiment in this way.
Vega combines the two, with Navi expanding on it.

Now we have GPUs with PCIe lanes on board not being used for SSG in the consumer environment. If they have enough of them, do you think they could possibly use them for mGPU memory pooling? I think they could, or will at least investigate it if the capability is already there. What about another set of HBM-like transceivers between dies in close proximity, plus extra circuitry/routing area on die? Surely that would be enough? As I mentioned earlier, the loss of die area would be offset by scalability if done correctly, especially for smaller dies, which are more cost-effective. E.g. design for quad-die usage. Now imagine a Fiji-performance-level die (e.g. small Navi) on 7 nm at 4× 150 mm² in a quad MCM. The yield rate of the dies would be far higher than for a 600 mm² equivalent high-end card, and the only added cost would be the interposer and HBM. Interposer tech is more mature now; it'd be what Naples is shaping up to be all over again.
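Just to put rough numbers on that yield argument, here's a back-of-envelope sketch using a simple Poisson defect-density model. The defect density and the exact die sizes are assumptions for illustration, not anything AMD has published.

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2=0.2):
    """Fraction of defect-free dies under a simple Poisson model:
    Y = exp(-A * D0), A in cm^2, D0 in defects/cm^2 (assumed value)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

big, small = 600, 150  # hypothetical monolithic vs quad-MCM die sizes, mm^2

y_big, y_small = poisson_yield(big), poisson_yield(small)
print(f"600 mm^2 monolithic yield: {y_big:.1%}")    # ~30%
print(f"150 mm^2 die yield:        {y_small:.1%}")  # ~74%

# Silicon you must fabricate per working "big GPU equivalent"
# (4 small dies vs 1 big die), ignoring interposer/packaging cost and
# the extra die area the inter-die links themselves would eat.
cost_ratio = (4 * small / y_small) / (big / y_big)
print(f"Relative silicon cost, quad-MCM vs monolithic: {cost_ratio:.2f}")  # ~0.41
```

Even with those made-up numbers the small dies come out way ahead on good silicon per product; the open question is how much of that advantage the interposer, packaging, and link overhead give back.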
 
Neither do I, beyond what has happened in history and that it's almost always cyclical. Excellent point about AMD mentioning memory. It's almost like they have not talked about Vega or Navi directly, but have let the cat out of the bag with the more unusual aspects of the designs they are working on.
Fury was indeed an experiment if you really look at it; nice way to put it. I think, process/core clock limitations aside, Fury did better than 4 GB of VRAM should have. Quite a few don't know that some managed to get HBM to clock really well... I wonder what that does for certain mining?



What sort of bandwidth are we talking about for this? Would some HBM-tier HBCC be enough?



It's not just wishful thinking. There is potential to this; look at the roadmap.
Roadmap-640x360.jpg


Scalability.

Scalability? What in the hell would they put that there for if they don't mean mGPU?
Next-gen memory is SSG, which is already being tested with Polaris in some form for large-dataset customers (oil and gas, imaging, etc.). So there are PCIe lanes already available today. The "Polaris is larger and slightly less efficient because it was autorouted" excuse obviously does not cover it all. AMD has pulled a Xeon, opening up capabilities to certain customers at the silicon/hardware level as needed, on a bigger basis than the usual pro/consumer driver-level split.
Like Fiji, Polaris is also a bit of an experiment in this way.
Vega combines the two, with Navi expanding on it.

Now we have GPUs with PCIe lanes on board not being used for SSG in the consumer environment. If they have enough of them, do you think they could possibly use them for mGPU memory pooling? I think they could, or will at least investigate it if the capability is already there. What about another set of HBM-like transceivers between dies in close proximity, plus extra circuitry/routing area on die? Surely that would be enough? As I mentioned earlier, the loss of die area would be offset by scalability if done correctly, especially for smaller dies, which are more cost-effective. E.g. design for quad-die usage. Now imagine a Fiji-performance-level die (e.g. small Navi) on 7 nm at 4× 150 mm² in a quad MCM. The yield rate of the dies would be far higher than for a 600 mm² equivalent high-end card, and the only added cost would be the interposer and HBM. Interposer tech is more mature now; it'd be what Naples is shaping up to be all over again.

To try to push your point further:

If memory is an issue, look at their HBCC demo with Tomb Raider: accessing system RAM for additional resources is pretty impressive, especially when you see that the minimum FPS was still respectable.

Perhaps the HBCC has some more tricks up its sleeve for working across two GPUs, one HBCC talking directly to another? Onboard VRAM bandwidth connected via IF, pooled by two controllers working together?
 
RAM speed is where it gets interesting, given the differences of the HBM design. This is why everyone's unsure about it. No doubt if they have done it, it will be in some logical way.
But that also leads to Simplyfuns' post, which is probably bang on the money.

This sounds about right. E.g. only 50% scaling, or some sort of frame-time/pacing issue.

This is RTG's only real ticket to high-end competition for the next few years, though, and it could be as disruptive as Naples potentially will be.

E.g. we may see something like the following:
Vega 10: OC'd 1080 to Ti range, or faster in some games.
GV104: ~Ti speed range.
Nvidia is still tied with AMD for overall performance until they get GV102 out later. Of course, the power-use shilling will be in overdrive during this time.
GV102 launches in 2018: the usual +50-60%, with a much bigger die than GP102, a clock bump, some process refinements as usual, Titan, ten gorillion dollars.
AMD counters with Vega 10 x2, matches or takes the crown back (I doubt it will be ready this year, but please prove me wrong, AMD), costs a little less.
>PowerUseShillingIntensifies.webm
Late 2018/December: Nvidia releases a Volta Ti, the "watered-down Titan edition because we can't sell any more for HPC first".
AMD releases Navi, drawing nearly even on process this time (7 nm GloFo).
Nvidia releases a Ti ("watered-down Titan edition with a cost cut and less RAM") and some "even less watered-down edition" Titan with more RAM than the last one and a clock bump.
The MCM process repeats (note how they expect Vega 10 x2 much sooner than the usual year between single- and dual-GPU launches...) as MCM/mGPU cards become part of key R&D.
The high end is now muddied. AMD offers higher performance at slightly higher power usage.
Epeen upgraditis types are now confused 'cus muh Nvidia is not so stronk all of a sudden, and "AMD SUXX LOLOLOL" can't be so easily played when they're competitive or better on performance.


The next few years are going to be a great time for GPU consumers... 4K, 8K, VR, mGPU/MCM, bring it on!

I am seriously confused right now. Vega MCM? What... multiple 530 mm² dies on an interposer? lol
 
What sort of bandwidth are we talking about for this? Would some HBM-tier HBCC be enough?




Well, I would imagine enough bandwidth to sustain cache-to-cache interaction without any bottleneck. So at least 5 times greater than what we have now?
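To put a rough number on that, here's a back-of-envelope sketch. It assumes Fiji-class HBM at ~512 GB/s as the "what we have now" baseline; the 5x multiplier is just the guess above, not anything official.

```python
# Back-of-envelope: what "5x current bandwidth" would mean for an
# inter-die fabric. Baseline assumed to be Fiji-class HBM at ~512 GB/s.
hbm_baseline_gbs = 512
target_gbs = 5 * hbm_baseline_gbs  # the "at least 5x" guess from above

pcie3_x16_gbs = 15.75  # PCIe 3.0 x16, one direction

print(f"Target inter-die bandwidth: {target_gbs} GB/s")                    # 2560 GB/s
print(f"Equivalent PCIe 3.0 x16 links: {target_gbs / pcie3_x16_gbs:.0f}")  # ~163
```

If anything like that is in the right ballpark, spare PCIe lanes don't even get close, which points back at wide HBM-style links over an interposer.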
 
Well, I would imagine enough bandwidth to sustain cache-to-cache interaction without any bottleneck. So at least 5 times greater than what we have now?

Not just that; if they go the MCM route they will inevitably have redundant logic. I guess a big reduction in cache bandwidth requirements will also come from the introduction of tiled rasterization.
 
I am seriously confused right now. Vega MCM? What... multiple 530 mm² dies on an interposer? lol
Vega 10 x2 is coming nearly 6 months sooner than a usual mGPU card comes from AMD or Nvidia. I'm wondering if they will do it traditionally or with some MCM/joined weirdness as a test for Navi, like Fury was also an HBM/interposer test. The next step up is MCM, either with Navi or before. But yes, dual Vega dies. PS: my Vega die measurements are closer to the mid-400s, but I might be way off... another guy got 470 mm² too (from the Raja photos nearly 6 months ago). I haven't seen an official die size released yet.
A dual mid-400 mm² die card could be slightly more feasible as a high-end water-cooled product.

AMD-Vega-GPU_2.jpg
AMD-Vega-GPU-740x493.jpg
AMD-Vega-GPU_1.jpg

The TR pic is the best, and I got ~440 mm² from that, but I'm being conservative.


Well, I would imagine enough bandwidth to sustain cache-to-cache interaction without any bottleneck. So at least 5 times greater than what we have now?
Jesus, okay... so like 15-20% of the die, plus somehow making the vias/routing for it chip-to-chip x:
 
Vega 10 x2 is coming nearly 6 months sooner than a usual mGPU card comes from AMD or Nvidia. I'm wondering if they will do it traditionally or with some MCM/joined weirdness as a test for Navi, like Fury was also an HBM/interposer test. The next step up is MCM, either with Navi or before. But yes, dual Vega dies. PS: my Vega die measurements are closer to the mid-400s, but I might be way off... another guy got 470 mm² too (from the Raja photos nearly 6 months ago). I haven't seen an official die size released yet.
A dual mid-400 mm² die card could be slightly more feasible as a high-end water-cooled product.

AMD-Vega-GPU_2.jpg
AMD-Vega-GPU-740x493.jpg
AMD-Vega-GPU_1.jpg

The TR pic is the best, and I got ~440 mm² from that, but I'm being conservative.



Jesus, okay... so like 15-20% of the die, plus somehow making the vias/routing for it chip-to-chip x:

Yeah, you're way off with your measurements; virtually every website to have posted results was between 520-530 mm².

470 mm² is GP102.

There's no way in hell they are going to fit more than a single Vega 10 on an interposer with 4 HBM stacks.
https://www.computerbase.de/2017-01/amd-vega-preview/
 
Yeah, you're way off with your measurements; virtually every website to have posted results was between 520-530 mm².

470 mm² is GP102.

There's no way in hell they are going to fit more than a single Vega 10 on an interposer with 4 HBM stacks.
https://www.computerbase.de/2017-01/amd-vega-preview/

I know where I screwed up now. Raja is a tiny mofo. I've been overestimating his hand size!

Just measured my 7970 die, getting within 5 mm² of the official figure. Noted a plasticky-ceramic-looking package at 1.7×0.95 mm which looks identical to the larger Vega components.
If those are the larger components on the Vega die, I'm way off. But they appear to be the same.
Working from that, I measure a 23.2×30 mm die size; that puts it in the 600s!? Probably too high.
So ~500 is probably right... middle ground, lol.
 
From Hynix's specs, HBM2 is 11.87 mm × 7.75 mm, so yeah, it's quite a large chip if you use the dimensions of the HBM2 packages as a ruler.
That's what I have been trying to find for a while. Thank you.

I get a 20×26 mm die size. So exactly 520 mm²... take the border out and it'll probably be spec'd at around 512 mm², within ~1.3%.
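For anyone who wants to redo the measurement, the scaling arithmetic is simple; here's a quick sketch. Only the 11.87 × 7.75 mm package size comes from the Hynix spec; the pixel values are placeholders for whatever you measure off the photo.

```python
# Estimate die size from a photo, using the HBM2 package as the ruler.
HBM2_MM = (11.87, 7.75)  # SK Hynix HBM2 package, width x height

hbm_px = (190.0, 124.0)  # HBM2 stack as measured off the photo (placeholder)
die_px = (320.0, 416.0)  # GPU die as measured off the photo (placeholder)

# mm-per-pixel from each HBM2 axis, averaged to smooth measurement error
mm_per_px = (HBM2_MM[0] / hbm_px[0] + HBM2_MM[1] / hbm_px[1]) / 2

w_mm, h_mm = die_px[0] * mm_per_px, die_px[1] * mm_per_px
print(f"Die ~ {w_mm:.1f} x {h_mm:.1f} mm = {w_mm * h_mm:.0f} mm^2")  # ~20 x 26 -> ~520
```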
 
Well, I would imagine enough bandwidth to sustain cache-to-cache interaction without any bottleneck. So at least 5 times greater than what we have now?


Yep, that is why AMD has CCX latency issues with Ryzen ;). CPUs definitely don't need as much bandwidth as GPUs, so......

I want to see what happens with Threadripper and/or Epyc, because I think we'll see that problem exacerbated.

WhyCry from VideoCardz just posted up some "leaked" percentages vs Intel, and even though Epyc looks good, the numbers don't seem that good compared to what they are being compared against.
 
I am seriously confused right now. Vega MCM? What... multiple 530 mm² dies on an interposer? lol
Multiple GPUs on multiple interposers, most likely. A single interposer would be best, but rather large. AMD could barely fit Threadripper on a single interposer if they went that direction.

Yep, that is why AMD has CCX latency issues with Ryzen
Yet the interconnect bandwidth isn't a concern for NUMA-aware workloads, like graphics.
 
Multiple GPUs on multiple interposers, most likely. A single interposer would be best, but rather large. AMD could barely fit Threadripper on a single interposer if they went that direction.


Yet the interconnect bandwidth isn't a concern for NUMA-aware workloads, like graphics.

Graphics code is not NUMA-aware; I don't know where you got that from.

NUMA-aware code is code that understands the latency of the different pieces of silicon and can adjust for it by caching information or doing things in a specific order to hide that latency.

Graphics card drivers can do that to some extent through compilation, but after the code is compiled, there is no more control.

Graphics doesn't need NUMA-type systems anyway; if they went that route it would create a whole new level of programming complexity that would really create headaches for game developers.

We are not talking about offline rendering, which is slow; NUMA-aware code is great at hiding latency, but graphics code is not good at hiding latency, because GPUs get their speed from throughput.
 
Hahahaha, I was right: they list identical clocks on the liquid- and air-cooled versions. If confirmation was ever required that these clocks are only hit on liquid, this is it.

EDIT: Hohoho, they list a 300 W TDP for the blower AND a 375 W TDP for the liquid. Dayum.
 
Hahahaha, I was right: they list identical clocks on the liquid- and air-cooled versions. If confirmation was ever required that these clocks are only hit on liquid, this is it.

EDIT: Hohoho, they list a 300 W TDP for the blower AND a 375 W TDP for the liquid. Dayum.

A lesser man would have posted "I told you so" in massive neon-green font, but I humbly limit myself to standard formatting.

Same clocks, more stable, 375 W.
 
Same clocks, more stable, 375 W.
I am conflicted, tbf.

Because I kind of want to see an [H] review of this monstrosity (I mean, that makes it what, the first single-die 375 W stock GPU?), but I understand perfectly well that it makes no sense to do it, either.
 
Hahahaha, I was right: they list identical clocks on the liquid- and air-cooled versions. If confirmation was ever required that these clocks are only hit on liquid, this is it.

EDIT: Hohoho, they list a 300 W TDP for the blower AND a 375 W TDP for the liquid. Dayum.

I don't really understand your first statement. If the clocks are only hit on liquid, how is it that they're also listed for air?

Edumacate me please.
 
Hahahaha, I was right: they list identical clocks on the liquid- and air-cooled versions. If confirmation was ever required that these clocks are only hit on liquid, this is it.

EDIT: Hohoho, they list a 300 W TDP for the blower AND a 375 W TDP for the liquid. Dayum.

Hey, I'm not seeing the TDP listed on the actual store pages; where are they pulling those numbers from?
 
It's all FUD until the product is in hand, because you can't trust these sources OR AMD to put the right numbers in front of us.

Where the frig is Kyle when you need him? He's laughing at us, because he already knows.
 
Well, of course you can trust that site when you like what they say, but it's garbage when you don't. Obviously it makes no sense to clock the same but somehow use 75 watts more power. Something is missing there.
 
I don't really understand your first statement. If the clocks are only hit on liquid, how is it that they're also listed for air?
Throttling. AMD likes to play it a little tricky with their claimed clocks by listing the maximum boost clock you can possibly achieve at stock. In practice, for one reason or another, the reference air-cooled cards often throttle below that claimed mark.
Hey, I'm not seeing the TDP listed on the actual store pages; where are they pulling those numbers from?
Some company that sells workstation GPUs has a slide claiming a 300 W TDP for the air version and 375 W for the liquid; you can find it on WhyCry_'s blog. That slide is kinda hilarious, really.
 
Well, of course you can trust that site when you like what they say, but it's garbage when you don't. Obviously it makes no sense to clock the same but somehow use 75 watts more power. Something is missing there.
The fact that AMD's boost clock is a maximum, not the actual real clock. I thought any AMD user knew that.
 
Same as Nvidia's boost is not guaranteed. They both give a minimum the card will run at.
Not really the same.
I just looked at quite a few sites that monitor clock speeds, and even when the 1080 Ti or Titan (Pascal) throttled, it was still above the Nvidia boost spec of 1531 MHz for the Titan X and 1582 MHz for the 1080 Ti.
So Nvidia is still hitting and sustaining its rated boost clocks even when thermally throttling (looking at HardOCP/ComputerBase.de/TechPowerUp/etc.).

Cheers
 
Same as Nvidia's boost is not guaranteed. They both give a minimum the card will run at.
No, AMD's boost clock is the actual maximum clock the card hits.
Nvidia's "boost clock" is their measured average from their own testing; not even Nvidia can predict what clocks their cards will hit with their boost shenanigans. As such, they only set the minimum: the base clock. AMD sets the whole range.
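A toy illustration of the difference being described, with made-up sample clocks:

```python
# Toy illustration of the two "boost clock" conventions described above:
# AMD lists the maximum achievable clock, Nvidia lists a measured average.
observed_mhz = [1480, 1560, 1600, 1582, 1545, 1610, 1530]  # made-up samples

amd_style_spec = max(observed_mhz)                         # peak the card can hit
nvidia_style_spec = sum(observed_mhz) / len(observed_mhz)  # typical average

print(f"AMD-style 'boost' spec:    {amd_style_spec} MHz")      # 1610 MHz
print(f"Nvidia-style 'boost' spec: {nvidia_style_spec:.0f} MHz")  # ~1558 MHz
```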
 