Vega Rumors

Cognitive dissonance occurs when the brain feels threatened because someone presents a belief that is counter to the subject's own. A perfect example of this is precious little snowflakes on campus who throw an epic fit because a conservative speaker was invited. It doesn't matter if there are facts involved or not. The thought that somebody might be wrong causes their brain to respond with "threat" reactions at the basal, core levels where the most basic instincts of survival and fear come from. (Proven with MRI)

No one is presenting you with a belief; we're dealing with the facts of a genuine, honest-to-Dog (it's not flat-earth or sky-ghost stuff) launch of a Vega - a nice fat full-sized Vega chip with double the HBM too....

I'm sure massive amounts of engineering went into it. It's kind of neat... like... oh ... Howard Hughes Spruce Goose sort of neat.
 
1 & 2. AOTS and Doom
3. Mantle is Vulkan (more or less), so it lives on and is officially supported by Microsoft.
4. I wasn't talking about raw speed. I said % degradation. If you buy a product from 2-4 years ago, is it still viable for AAA today? In terms of performance loss on average, AMD has fared better, particularly as DX12 titles have shown up. Sure, I could buy a 970, then buy a 1070 a year later. But that's going to cost you more money in the end.

So, 2 games that even a 1060 can STILL run just fine; again, Async Compute adds little value.
Vulkan is not Mantle at this point, and NV hardware runs it just fine.
I ran my 780 Ti from the time it came out till last December, when I replaced it with a 1070, so NV hardware performance over time is fine.
 
Dissonance would be if people were drawing conclusions that went counter to the evidence at hand.

This isn't doom and gloom; no company has a perfect product track record. AMD and RTG just have a worse track record than most and look especially bad compared to their only competition. I have no doubt they have more designs, more plans, and future cards that will be received more favorably. They might even have a giant-slayer in the pipeline, but I wouldn't take a 100-to-one bet on that.

But FE is basically it. If you expect something more than five to ten percent better than FE, I have a bridge to sell you.
So, 2 games that even a 1060 can STILL run just fine; again, Async Compute adds little value.
Vulkan is not Mantle at this point, and NV hardware runs it just fine.
I ran my 780 Ti from the time it came out till last December, when I replaced it with a 1070, so NV hardware performance over time is fine.

For all intents and purposes, an off-the-shelf RX 580 and a 1060 are equivalent. But as soon as you throw in Vulkan (which was based on the performance improvements found in Mantle), it makes a huge difference in favor of AMD. So yeah, I will grant you two titles don't make much of a difference. Except that all that low-level API stuff also exists on PS4 and XBOne hardware too. So the likelihood of it showing up in later titles is greater. So you'll have to place your bets there.

And it has been shown time and time and time again that NVIDIA suffers in its DX12 implementation (in some cases a step backwards in performance on the same title).

So when you factor in things like long-term longevity, AMD in some cases is just as compelling as NVIDIA at some price points.

Now, if you're going to run DX11 titles all day and don't mind forking out more money for a G-Sync monitor, then by all means, NVIDIA is for you.

It's not a one size fits all.
 
1 & 2. AOTS and Doom
3. Mantle is Vulkan (more or less), so it lives on and is officially supported by Microsoft.
4. I wasn't talking about raw speed. I said % degradation. If you buy a product from 2-4 years ago, is it still viable for AAA today? In terms of performance loss on average, AMD has fared better, particularly as DX12 titles have shown up. Sure, I could buy a 970, then buy a 1070 a year later. But that's going to cost you more money in the end.


1 & 2) Mantle isn't Vulkan, nor will it ever be, nor was it or is it DX12. MS's Xbox had an LLAPI, and features of that and of Sony's PS4 APIs were put together to create Mantle. Now, in the beginning DX12 on AMD looked better; it's gotten much closer now. The reason AMD "looked" better was their focus on DX12 drivers and nV's focus on DX11 drivers. So by the time DX12 is really important, nV has already caught up. Too bad AMD lost all that ground with DX11 drivers though.

At the moment in AOTS, nV has caught up, and Doom's intrinsic shaders aren't the same thing as API advantages. It's very rare that an IHV will ever have API advantages unless there are flaws in the competitor's architecture. Just look at the 1060 vs Polaris. Do you see flaws? Nope, it seems to do DX12 just fine, and with 30% less TFLOPs and 20% less power draw. If nV equalized those to Polaris, Polaris would be screwed, right?

Async isn't DX12, and Pascal does it just fine too. We have examples of that as well with nV-sponsored titles (those don't work as well on AMD hardware); in AMD-sponsored titles, on the other hand, async seems to not work so well on nV hardware.

So at the end of it all, it's the same as before; nothing has changed.

Now, comparing DX11 to DX12, AMD sure looks good, but that is just their DX11 driver overhead coming into play.

4) Typical graphics card buyers go at most 2 generations before they upgrade; that's for the midrange and low-end performance segments. Enthusiast and high-end performance buyers upgrade every gen, and low-end gaming buyers have to upgrade every gen, though they might stretch to 2 generations too.

Again, it's AMD's focus that needs to shift if they want to make good products that span the lifetime of a typical gamer's upgrade cycle. Gamers shouldn't be the ones to decide AMD's fate; AMD should do it.
 
No one is presenting you with a belief; we're dealing with the facts of a genuine, honest-to-Dog (it's not flat-earth or sky-ghost stuff) launch of a Vega - a nice fat full-sized Vega chip with double the HBM too....

I'm sure massive amounts of engineering went into it. It's kind of neat... like... oh ... Howard Hughes Spruce Goose sort of neat.

Oh plenty of people are like, "It's not worth it. It's lost. It's a joke. It's too hot. It's lost because it consumes 100 Watts more. Once again AMD is a day short and a dollar too much" before we have official reviews. You can't deny that people are crying doom and gloom. I'm just saying "Wait to find out. We don't have evidence one way or another yet. Only guesses from people who aren't hardware engineers working for AMD"

Drawing a conclusion before applicable evidence can be drawn shows bias and is a sign of cognitive dissonance.
 
Oh plenty of people are like, "It's not worth it. It's lost. It's a joke" before we have official reviews. Drawing a conclusion before applicable evidence can be drawn shows bias and is a sign of cognitive dissonance.


It's pretty much over, man; 10% is the max performance increase via drivers at launch. To expect more is really overselling AMD's capabilities in one month's time. Shit, they had 1 year for Vega, and the numbers haven't changed since their first showings of Doom and BF1 with Vega FE... That should give us quite a bit of understanding right there.

If you had a program and had been working on it for 1 year, you'd expect to see changes in performance, right? Will one more month make the difference?

We already know the TDPs of these cards, and we know how AMD minimizes those TDP figures; we have seen them do it with Ryzen, Polaris, BD, Fiji, R3xx, R2xx. Looks to be the same with Vega too, right?

Now, if those prices are accurate: V11 (cut-down Vega) at 1070 performance, V10 (full Vega) at GTX 1080 performance, V10 with water just above GTX 1080 performance. That still beats nV at price/performance, which is something AMD promised, but at the cost of a lot of power, and of course nothing special in the end, nothing we haven't seen before.

We have a lot of evidence pointing to Vega's not so great performance and crazy power usage. This isn't out of the blue.

We also have historical data of GCN failing miserably when upping the unit counts, with power consumption going crazy.

This is what I've been saying all along: unless they redo the transistor layouts, they are not going to fix GCN's power and scaling issues. It just will not happen. Essentially they need an entirely new architecture.
 
1 & 2) Mantle isn't Vulkan, nor will it ever be, nor was it or is it DX12. MS's Xbox had an LLAPI, and features of that and of Sony's PS4 APIs were put together to create Mantle. Now, in the beginning DX12 on AMD looked better; it's gotten much closer now. The reason AMD "looked" better was their focus on DX12 drivers and nV's focus on DX11 drivers. So by the time DX12 is really important, nV has already caught up. Too bad AMD lost all that ground with DX11 drivers though.

This I will agree with you on, conditionally. A lot of the API functions operate the same between Mantle and Vulkan even if the calls are different. And DX12 exposes a lot of the same low-level functionality, e.g. unified memory pool functions. So porting from one to another is possible.

I will concede that I should have originally said "low-level API access has resulted in massive gains compared to NVIDIA," which would have been technically more correct.
 
Oh plenty of people are like, "It's not worth it. It's lost. It's a joke. It's too hot. It's lost because it consumes 100 Watts more. Once again AMD is a day short and a dollar too much" before we have official reviews. You can't deny that people are crying doom and gloom. I'm just saying "Wait to find out. We don't have evidence one way or another yet. Only guesses from people who aren't hardware engineers working for AMD"

Drawing a conclusion before applicable evidence can be drawn shows bias and is a sign of cognitive dissonance.

But we do have evidence. And we have seen it. It's called Vega FE, and Vega RX isn't going to be that different.

I think you're a bit wrapped up in this. It is what it is.
 
It's pretty much over, man; 10% is the max performance increase via drivers at launch. To expect more is really overselling AMD's capabilities in one month's time. Shit, they had 1 year for Vega, and the numbers haven't changed since their first showings of Doom and BF1 with Vega FE... That should give us quite a bit of understanding right there.

If you had a program and had been working on it for 1 year, you'd expect to see changes in performance, right? Will one more month make the difference?

We already know the TDPs of these cards, and we know how AMD minimizes those TDP figures; we have seen them do it with Ryzen, Polaris, BD, Fiji, R3xx, R2xx. Looks to be the same with Vega too, right?

Now, if those prices are accurate: V11 (cut-down Vega) at 1070 performance, V10 (full Vega) at GTX 1080 performance, V10 with water just above GTX 1080 performance. That still beats nV at price/performance, which is something AMD promised, but at the cost of a lot of power, and of course nothing special in the end, nothing we haven't seen before.

We have a lot of evidence pointing to Vega's not so great performance and crazy power usage. This isn't out of the blue.

We also have historical data of GCN failing miserably when upping the unit counts, with power consumption going crazy.

This is what I've been saying all along: unless they redo the transistor layouts, they are not going to fix GCN's power and scaling issues. It just will not happen. Essentially they need an entirely new architecture.

Razor, have you ever, EVER worked on large multi-scale SAFe development teams? Do you know what code branches are? Do you know that sometimes a team may focus on ONE code branch before they go and work on another code branch, which results in performance increases, due to limited resources?

Again, as someone who has been programming for 30 years now and working with hardware, I can show you plenty of examples where there have been significant gains in 1 month's time. We used to generate data sequentially because data access was not thread-safe. When we switched data sources, we threaded everything and found a 5x improvement. And that's just one example from my own personal history. We had a FORTRAN program from 1968 that was not thread-safe because it wrote the results to a static flat file. Well, we fixed that and it is also 5x faster (8-thread processor) on average.
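
A minimal sketch of that kind of change (hypothetical, I/O-bound dummy workload, not the actual code we had): once each work item is independent and the data source is thread-safe, the sequential loop can simply be handed to a thread pool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def generate_record(item):
    # Stand-in for an I/O-bound generation step against a thread-safe source.
    time.sleep(0.05)        # simulated fetch latency
    return item * item      # simulated transform

work_items = range(64)

# Sequential version: one item at a time.
start = time.perf_counter()
sequential = [generate_record(i) for i in work_items]
print("sequential:", round(time.perf_counter() - start, 2), "s")

# Threaded version: same items, handed to a pool of 8 workers.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    threaded = list(pool.map(generate_record, work_items))
print("threaded:  ", round(time.perf_counter() - start, 2), "s")
```

On sleep-bound dummy work like this the pool lands close to an 8x win; real gains depend on how much of the loop is actually waiting on the data source.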

It might only be 5%, it might be -5%. It doesn't matter, if price-to-performance is comparable. That's what I've been claiming all along. I never once said it will be faster than a 1080 Ti. I'm not making ANY claims one way or another.
 
But we do have evidence. And we have seen it. It's called Vega FE, and Vega RX isn't going to be that different.

I think you're a bit wrapped up in this. It is what it is.

Yes, because a baseline Mustang runs exactly the same as a Roush Stage 3 Mustang. *eye roll*

Again, for the goddamn thick-headed: "It's too early to tell." I'm not claiming one way or the other.
 
Razor, have you ever, EVER worked on large multi-scale SAFe development teams? Do you know what code branches are? Do you know that sometimes a team may focus on ONE code branch before they go and work on another code branch, which results in performance increases, due to limited resources?

Again, as someone who has been programming for 30 years now and working with hardware, I can show you plenty of examples where there have been significant gains in 1 month's time. We used to generate data sequentially because data access was not thread-safe. When we switched data sources, we threaded everything and found a 5x improvement. And that's just one example from my own personal history. We had a FORTRAN program from 1968 that was not thread-safe because it wrote the results to a static flat file. Well, we fixed that and it is also 5x faster (8-thread processor) on average.

It might only be 5%, it might be -5%. It doesn't matter, if price-to-performance is comparable. That's what I've been claiming all along. I never once said it will be faster than a 1080 Ti. I'm not making ANY claims one way or another.


Yes, I have worked on multiple multi-million-dollar programs: games, special effects, 3D modellers, AI for HPC in datacenters for stocks, etc. We never did optimizations till the final steps, but those final steps take about 50% of the time and more than 50% of the budget.

It's not about the branches or how many branches you have. It's about the ability to do those optimizations while the code base is still being changed. It just can't be done that way, right? So when did Vega get its base done? It was done when they showed off Doom, or whichever game they showed off in the beginning. There was no Fiji driver for Vega. They had plenty of time to get the hardware features ready to go. After tape-out the driver team can go full blown on the drivers. The base code should be done, via hardware emulation, by the time the chip comes back from the foundry.

Look, we have seen this with the G2xx series from nV and with Fermi. Do you think they didn't know they were in trouble with those 2 architectures months before their release? I am sure they knew it. These things don't happen by accident.
 
Yes, because a baseline Mustang runs exactly the same as a Roush Stage 3 Mustang. *eye roll*

Again, for the goddamn thick-headed: "It's too early to tell." I'm not claiming one way or the other.

Err, too many differences in the engine to make that comparison.

We aren't talking about different chips; it's the same chip, the same engine, just different, let's say, air intakes. Will taking a baseline Mustang and increasing its airflow by 50% increase its performance? Sure it will, but it's also going to depend on compression, right?
 
Yes, I have worked on multiple multi-million-dollar programs: games, special effects, 3D modellers, AI for HPC in datacenters for stocks, etc. We never did optimizations till the final steps, but those final steps take about 50% of the time and more than 50% of the budget.

It's not about the branches or how many branches you have. It's about the ability to do those optimizations while the code base is still being changed. It just can't be done that way, right? So when did Vega get its base done? It was done when they showed off Doom, or whichever game they showed off in the beginning. There was no Fiji driver for Vega. They had plenty of time to get the hardware features ready to go. After tape-out the driver team can go full blown on the drivers. The base code should be done, via hardware emulation, by the time the chip comes back from the foundry.

Look, we have seen this with the G2xx series from nV and with Fermi. Do you think they didn't know they were in trouble with those 2 architectures months before their release? I am sure they knew it. These things don't happen by accident.

If hardware emulation were perfect, there wouldn't have been so many respins on the RX 480. You can tune for individual titles and situations, but that takes time, profiling, and hand optimizations. You should know this. You establish a firm base code which is stable, then use profilers to determine where the weak points are on a case-by-case basis. Then optimize around them.

I have a piece of code I'm staring at right now that I can reduce to 3 SQL statements when dealing with a specific product line. However, when running the general algorithm, which is more flexible, it takes over 700 SQL calls. I hand-tuned that.
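
For illustration only, a toy sqlite3 sketch (made-up table and column names, nothing like the real codebase) of the difference between the general per-row-query path and a hand-tuned set-based statement for one product line:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, line TEXT, price_cents INTEGER);
    INSERT INTO products VALUES (1, 'widgets', 999), (2, 'widgets', 450),
                                (3, 'gadgets', 1200);
""")

# General path: fetch the ids for a line, then one query per id
# (the kind of N+1 pattern that balloons into hundreds of calls on real data).
ids = [r[0] for r in con.execute(
    "SELECT id FROM products WHERE line = ?", ("widgets",))]
total_slow = sum(
    con.execute("SELECT price_cents FROM products WHERE id = ?", (i,)).fetchone()[0]
    for i in ids)

# Hand-tuned path for that product line: one set-based statement.
total_fast = con.execute(
    "SELECT SUM(price_cents) FROM products WHERE line = ?", ("widgets",)
).fetchone()[0]

assert total_slow == total_fast == 1449
```

Same answer either way; the difference is one round trip instead of N+1, which is where the hand tuning pays off.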
 
Yes, because a baseline Mustang runs exactly the same as a Roush Stage 3 Mustang. *eye roll*

Again, for the goddamn thick-headed: "It's too early to tell." I'm not claiming one way or the other.


LOL, what a piss-poor analogy. This is more like the baseline Mustang with new ECM code.

There will be no miracles here. The chip is the chip is the chip no matter what they do with it. The only way this chip will rock the world is if they put many of them into a pipe bomb.

I'll wait patiently while they try again. It literally makes no difference to me.
 
If hardware emulation were perfect, there wouldn't have been so many respins on the RX 480. You can tune for individual titles and situations, but that takes time, profiling, and hand optimizations. You should know this. You establish a firm base code which is stable, then use profilers to determine where the weak points are on a case-by-case basis. Then optimize around them.


It's not perfect, but after they get the chip back they go in and fix the bugs. For the most part, yeah, they get it pretty much done. There is a good video on YouTube on how nV does this. AMD has the same process too.



They already know the performance of the chip before it's in mass production, for the most part. They also know the power usage, disadvantages, advantages, etc.
 
LOL, what a piss-poor analogy. This is more like the baseline Mustang with new ECM code.

There will be no miracles here. The chip is the chip is the chip no matter what they do with it. The only way this chip will rock the world is if they put many of them into a pipe bomb.

I'll wait patiently while they try again. It literally makes no difference to me.

I never claimed there would be a bloody miracle. I'm saying it might be comparable for the price. But we don't know.
 
Err, too many differences in the engine to make that comparison.

We aren't talking about different chips; it's the same chip, the same engine, just different, let's say, air intakes. Will taking a baseline Mustang and increasing its airflow by 50% increase its performance? Sure it will, but it's also going to depend on compression, right?

Yep. Minor hardware differences don't make a difference, like single-rank to dual-rank memory? </sarcasm>
What percentage of bandwidth does ECC take up? What is the overhead on HBM2?
Is the HBM2 memory the limiting factor heat-wise? What if it has better cooling?
What if drivers give us 5%? What if better cooling gives us another 5%?

If we were running an Edsel against a Ferrari F40, sure, I would happily call it for the Ferrari. But it's somewhat too close to call without the final hardware and drivers out the door.

Too many questions without knowing for sure. Again, it all boils down to the performance-to-price ratio.
 
Look, I'm not a fanboi. I slammed the Fury and the RX 480.

What I get f'n tired of is f'n idiots who can't be impartial.

Now, why would someone buy a Vega? There are pros and cons to each manufacturer's approach. It's akin to choosing a Lotus versus a Dodge Demon. I'm not here to make those arguments. And quite frankly, if you think one size fits all, then you would be mistaken.

But in Vega's PRO category:

1. Async Compute
2. Freesync is cheaper
3. When properly implemented, Mantle pretty much beats NVIDIA's offerings at similar price points
4. If it's anything like Fury, its long-term longevity will fare better than NVIDIA's in terms of future titles.


In NVIDIA's PRO category:
1. Raw speed overall
2. Gameworks appears to work better
3. Lower power consumption

THE FINAL UNKNOWN:
1. How the FINAL VEGA CONFIGURATION/DRIVERS AND AIR COOLING/WATER COOLING PERFORM AND HOW THAT CALCULATES ON A PRICE/PERFORMANCE BASIS. And this is where most of you are showing you're fan-boi idiots. You're spelling doom and gloom before it's all said and done. You just can't f'n wait to pounce.

I swear to f'n god, I get sick of this bullshit and people's cognitive dissonance on both sides. Closed- and narrow-minded.



This seems really contrived, to be honest. Async Compute? Relevance? If it doesn't make the card perform better, its relevance is precisely zero, so it's not a pro for shit. I need async compute like I need a nipple on my elbow if, in the end, the card still performs worse than competing solutions that don't require such a scheme to perform competitively.

Mantle is irrelevant and no longer supported, and there is no substance whatsoever to "when properly implemented, Mantle pretty much beats NVIDIA's offerings at similar price points," because those few games that implemented Mantle (which is 100% GCN-locked, far more anti-competitive than Gameworks; this is a locked-down, hardware-specific API) did not show substantial improvements over the usual DX paths on the NVIDIA side.

As for Fury faring better in the long term? It has actually lost more ground vs the 980 Ti when comparing recent games to launch.

Basically what Vega has going for it is

1. Freesync
2. Freesync 2

Edit: As for this

How the FINAL VEGA CONFIGURATION/DRIVERS AND AIR COOLING/WATER COOLING PERFORM AND HOW THAT CALCULATES ON A PRICE/PERFORMANCE BASIS


This is already the final Vega configuration; this is just denial at this point. Price/performance could still be relevant, but at the end of the day it's a lost cause in terms of margins.
 
Oh plenty of people are like, "It's not worth it. It's lost. It's a joke. It's too hot. It's lost because it consumes 100 Watts more. Once again AMD is a day short and a dollar too much" before we have official reviews. You can't deny that people are crying doom and gloom. I'm just saying "Wait to find out. We don't have evidence one way or another yet. Only guesses from people who aren't hardware engineers working for AMD"

Drawing a conclusion before applicable evidence can be drawn shows bias and is a sign of cognitive dissonance.

According to cognitive dissonance theory, there is a tendency for individuals to seek consistency among their cognitions (i.e., beliefs, opinions). When there is an inconsistency between attitudes or behaviors (dissonance), something must change to eliminate the dissonance.

I could just as easily argue that there is plenty of applicable evidence to draw a conclusion; even if we look strictly at Pro benchmarks, Vega has a large handicap vs competing NV solutions. I don't give a shit how many times people try to bring up the Titan Xp comparisons again; I am quite comfortable that most reasonable people who have been following this launch have long abandoned any notion of the TXp vs Vega FE comparison posted by AMD being valid.

There is no evidence WHATSOEVER that TBR is not functioning in the drivers; it's just an assumption many people have made, probably because it gets repeated so much. The insistence of some people on finding excuses to justify this really disappointing performance/power can just as easily be labelled cognitive dissonance.

1. Vega performs amazingly
2. Vega FE performs horribly
3. Therefore Vega FE cannot be "true Vega"

This is also very similar to a No True Scotsman fallacy: you are denying the relevance of Vega FE by claiming it is essentially not Vega.
 
One way or the other, I'll sure be glad when this is over and you guys can all quit going around in endless circles. It's tiresome. I come to this thread to see if there are any ACTUAL developments, and I have to weed through all this bullshit.
 
I could just as easily argue that there is plenty of applicable evidence to draw a conclusion; even if we look strictly at Pro benchmarks, Vega has a large handicap vs competing NV solutions. I don't give a shit how many times people try to bring up the Titan Xp comparisons again; I am quite comfortable that most reasonable people who have been following this launch have long abandoned any notion of the TXp vs Vega FE comparison posted by AMD being valid.

There is no evidence WHATSOEVER that TBR is not functioning in the drivers; it's just an assumption many people have made, probably because it gets repeated so much. The insistence of some people on finding excuses to justify this really disappointing performance/power can just as easily be labelled cognitive dissonance.

1. Vega performs amazingly
2. Vega FE performs horribly
3. Therefore Vega FE cannot be "true Vega"

This is also very similar to a No True Scotsman fallacy: you are denying the relevance of Vega FE by claiming it is essentially not Vega.
You know what would be hilarious? If Vega FE performed really great, what would the same guys who now seem to think that Vega RX will somehow perform differently (a lot better) be saying? They would surely be shouting all over the place about how awesome Vega RX is going to be, even though "it's not the same card".
 
You know what would be hilarious? If Vega FE performed really great, what would the same guys who now seem to think that Vega RX will somehow perform differently (a lot better) be saying? They would surely be shouting all over the place about how awesome Vega RX is going to be, even though "it's not the same card".

My thoughts are that they've had a bear of a time with the DSBR in driver programming, and they're worried they might not get it done in time.

If you're having a coding issue like that, it's probable you can't give a definitive timeline for how long it'll take to figure it out.

In the Beyond3D suite we see that there is a very small difference between the 100%- and 50%-culled triangle results, vs even the RX 580 (let alone the Nvidia cards), in the polygon benchmarks.

I really think this might be one of those cards where we're not going to see the whole performance picture for a while yet, with a rough launch-day showing given the significant rasterization changes that AMD has to code for.
 
My thoughts are that they've had a bear of a time with the DSBR in driver programming, and they're worried they might not get it done in time.

If you're having a coding issue like that, it's probable you can't give a definitive timeline for how long it'll take to figure it out.

In the Beyond3D suite we see that there is a very small difference between the 100%- and 50%-culled triangle results, vs even the RX 580 (let alone the Nvidia cards), in the polygon benchmarks.

I really think this might be one of those cards where we're not going to see the whole performance picture for a while yet, with a rough launch-day showing given the significant rasterization changes that AMD has to code for.

Polygon throughput is essentially a combination of the DSBR and primitive shaders. Typical polygon throughput for Vega should end up like Polaris at 4 tris per clock (add in the increased clocks and it should be just above Polaris by ~20%). To get the rest of the performance you need FP16 via primitive shaders, and that will be variable based on whatever else the shader array is doing; that is up to 11 tris per clock. Once that is done via programming, they will match Pascal in polygon throughput.
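
Quick back-of-envelope check on that "~20% above Polaris" figure from the fixed-function rate alone; the clocks here are assumptions (roughly an RX 580 boost clock vs a rumored RX Vega clock), not official numbers:

```python
# Assumed clocks, NOT official figures: ~1340 MHz RX 580 boost vs ~1600 MHz RX Vega.
polaris_clock_mhz = 1340
vega_clock_mhz = 1600
tris_per_clock = 4            # fixed-function geometry rate claimed for both

polaris_rate = polaris_clock_mhz * 1e6 * tris_per_clock   # triangles per second
vega_rate = vega_clock_mhz * 1e6 * tris_per_clock

print(f"Polaris: {polaris_rate / 1e9:.2f} Gtris/s")
print(f"Vega:    {vega_rate / 1e9:.2f} Gtris/s "
      f"(+{(vega_rate / polaris_rate - 1) * 100:.0f}%)")
```

With those assumed clocks the clock bump alone works out to roughly +19%, in line with "just above Polaris by ~20%"; anything toward 11 tris per clock would have to come from the primitive-shader path.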
 
One way or the other I'll sure be glad when this is over and you guys can all quit it with the going around in endless circles. It's tiresome. I come to this thread to see if there are any ACTUAL developments and I have to weed through all this bullshit.
They'll just move on to overhyping Navi after RX Vega is released, and this cycle of nonsense will repeat itself. I've already heard the excuse that Vega is simply a stopgap for Navi.
 
They'll just move on to overhyping Navi after RX Vega is released, and this cycle of nonsense will repeat itself. I've already heard the excuse that Vega is simply a stopgap for Navi.


Seriously, lol, so Fiji and Vega were just test products for the real thing? Great to burn a few hundred million bucks on testing, right?
 
Polygon throughput is essentially a combination of the DSBR and primitive shaders. Typical polygon throughput for Vega should end up like Polaris at 4 tris per clock (add in the increased clocks and it should be just above Polaris by ~20%). To get the rest of the performance you need FP16 via primitive shaders, and that will be variable based on whatever else the shader array is doing; that is up to 11 tris per clock. Once that is done via programming, they will match Pascal in polygon throughput.

Polygon throughput should be affected by improved discard from culling, right?

In that case, shouldn't the DSBR working properly improve discard speed, which is what those bar graphs are indicating (discarded polys)?

I'm not 100% on that; I'll have to go back and read it again, my head is cooked today.
 
Polygon throughput should be affected by improved discard from culling, right?

In that case, shouldn't the DSBR working properly improve discard speed, which is what those bar graphs are indicating (discarded polys)?

I'm not 100% on that; I'll have to go back and read it again, my head is cooked today.


The DSBR works on a pixel level, so it can affect polygon throughput, but we're talking about pixel amounts, not the objects that those polygons make up, and most polygons aren't pixel-sized. Early discard will affect it more, and right now early discard is very similar to Polaris (and Fiji is also similar to Polaris, if I'm not mistaken, at least for part of the pipeline, discarding the tris; Polaris's front-end optimizations do show improvements over Fiji to some degree, but not as much as AMD made them out to be, so there has to be something else there that held Polaris back in most applications).

Culling and early discard work on the full mesh. So let's say part of the mesh is hidden and the rest of the mesh is still visible; guess what, those polygons that are hidden will still be rendered. This is where the DSBR comes into play: those pixels on the hidden polygons don't need to be rendered, though.

Think of it this way: you have two systems. One is a per-polygon ray trace or z-buffer approach that looks at visible polygons on a per-object basis, based on bounding boxes, and then you have the DSBR, which looks at visible pixels.
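
A toy illustration of those two levels (plain Python, nothing like real driver or GPU code): object-level culling can only skip whole meshes, while a tile/bin pass resolves visibility per pixel, so the covered-up pixels never need shading.

```python
# Each "triangle" is just a set of covered pixels in one tile plus a depth value.
triangles = [
    {"mesh": "wall",   "z": 1.0, "pixels": {(x, y) for x in range(4) for y in range(4)}},
    {"mesh": "statue", "z": 2.0, "pixels": {(1, 1), (1, 2), (2, 1)}},  # behind the wall
]

# 1) Object-level culling: only meshes whose bounding boxes are completely
#    occluded can be dropped. Neither mesh is fully hidden here, so every
#    triangle of both meshes still reaches the rasterizer.
fully_occluded = set()
surviving = [t for t in triangles if t["mesh"] not in fully_occluded]

# 2) Pixel-level pass over the tile (the DSBR-style idea): per-pixel depth test,
#    closest z wins, so the statue's covered-up pixels never get shaded.
depth, shaded_by = {}, {}
for tri in surviving:
    for px in tri["pixels"]:
        if px not in depth or tri["z"] < depth[px]:
            depth[px] = tri["z"]
            shaded_by[px] = tri["mesh"]

print(sorted(set(shaded_by.values())))  # ['wall']: only front-most pixels get shaded
```

That's the distinction in a nutshell: the hidden geometry still flows through the front end, but the binned per-pixel pass is what saves the shading work on hidden surfaces.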
 
The DSBR works on a pixel level, so it can affect polygon throughput, but we're talking about pixel amounts, not the objects that those polygons make up, and most polygons aren't pixel-sized. Early discard will affect it more, and right now early discard is very similar to Polaris (and Fiji is also similar to Polaris, if I'm not mistaken, at least for part of the pipeline, discarding the tris; Polaris's front-end optimizations do show improvements over Fiji to some degree, but not as much as AMD made them out to be, so there has to be something else there that held Polaris back in most applications).

Culling and early discard work on the full mesh. So let's say part of the mesh is hidden and the rest of the mesh is still visible; guess what, those polygons that are hidden will still be rendered. This is where the DSBR comes into play: those pixels on the hidden polygons don't need to be rendered, though.

Think of it this way: you have two systems. One is a per-polygon ray trace or z-buffer approach that looks at visible polygons on a per-object basis, based on bounding boxes, and then you have the DSBR, which looks at visible pixels.

It is also suspected to heavily assist with power efficiency, as that was one of the bigger changes for Maxwell vs Kepler, right?
 
It is also suspected to heavily assist with power efficiency, as that was one of the bigger changes for Maxwell vs Kepler, right?

Well, it is supposed to save a lot of power; no one really knows how much, though, lol.

The power savings come from the reduction in L2 cache thrashing and from bandwidth savings. How effective that is on this architecture is unknown. In all honesty, from the way AMD described their DSBR, when it's working it sounds really limited. They stated it will only work (well, be turned on) when there are bandwidth constraints and things of that nature. That sounds to me like it's not going to be active most of the time. For whatever reason that is, I don't know; we'll probably get a better understanding of it once the white papers are released, hopefully.

Also, this might be why David Kanter's program didn't show Vega doing anything with the DSBR; it might be there, but Vega's driver is not activating it because the conditions are not met for it to turn on?
 
Well, it is supposed to save a lot of power; no one really knows how much, though, lol.

The power savings come from the reduction in L2 cache thrashing and from bandwidth savings. How effective that is on this architecture is unknown. In all honesty, from the way AMD described their DSBR, when it's working it sounds really limited. They stated it will only work (well, be turned on) when there are bandwidth constraints and things of that nature. That sounds to me like it's not going to be active most of the time. For whatever reason that is, I don't know; we'll probably get a better understanding of it once the white papers are released, hopefully.

Also, this might be why David Kanter's program didn't show Vega doing anything with the DSBR; it might be there, but Vega's driver is not activating it because the conditions are not met for it to turn on?

Yeah, as I said before, I think they'll have to code the driver for each game after confirming it doesn't break or need tweaks.
 
When I hear "per-app basis" though, for me it's a red flag :). How much can they really do? LLAPIs are supposed to drop driver work, right? So how much control do the IHVs really have through drivers to change the rendering pipelines?
Granted, things like latency switches for cache (or hiding latency in a pipeline) are one thing; this is a whole different set of problems, because it involves the entire graphics pipeline, front end to back end, everything.
 
When I hear "per-app basis" though, for me it's a red flag :). How much can they really do? LLAPIs are supposed to drop driver work, right? So how much control do the IHVs really have through drivers to change the rendering pipelines?
I think only time will tell.

It could be as simple as flagging it to be "on" after QA testing it to make sure it doesn't fuck up?
 
It shouldn't screw up though, because nV's architecture worked from day one, with old and new programs. AMD did something differently, and that is probably what is stopping them from using it all the time. If that video is accurate (and the guy talking in the video is an nV engineer who works in the emulation lab, so it should be accurate, lol), these problems should be ironed out even before the chip hits risk production. At that point, if they haven't figured out how to get all the functionality up and going in drivers, they are in deep shit. The engineering team should give enough info to the driver team to get things functional.

Actually, nV has much more experience with TBR too; did you forget they had Tegra for 2 generations prior to Maxwell? So in essence Maxwell is their 3rd-generation TBR, not their first.

Now, AMD/ATi did have experience with TBR prior to Vega too, but when they sold off their mobile division, those engineers were taken by Qualcomm, so the main GPU engineers at AMD had to reinvent the wheel, so to speak.
 
It shouldn't screw up though, because nV's architecture worked from day one, with old and new programs. AMD did something differently, and that is probably what is stopping them from using it all the time. If that video is accurate (and the guy talking in the video is an nV engineer who works in the emulation lab, so it should be accurate, lol), these problems should be ironed out even before the chip hits risk production. At that point, if they haven't figured out how to get all the functionality up and going in drivers, they are in deep shit. The engineering team should give enough info to the driver team to get things functional.

Actually, nV has much more experience with TBR too; did you forget they had Tegra for 2 generations prior to Maxwell? So in essence Maxwell is their 3rd-generation TBR, not their first.

Now, AMD/ATi did have experience with TBR prior to Vega too, but when they sold off their mobile division, those engineers were taken by Qualcomm, so the main GPU engineers at AMD had to reinvent the wheel, so to speak.

With Imagination Technologies losing their Apple business going forward, maybe AMD can snap them up for cheap?
 