Surprise: AMD cancels implicit Primitive Shader driver

Ieldra · I Promise to RTFM · Joined Mar 28, 2016 · Messages: 3,539

Surprise!! Definitely wasn't a feature they knew they could never implement but needed to justify disappointing performance! 1080Ti performance is cancelled.
 
I only just found out, don't think it was posted here! Couldn't pass on the opportunity given the long, long history of drawn-out arguments we've had about this on the forum lol
 

*Shrug* No one really cares anymore. Those of us who purchased Vega hardware, myself included, are content if not happy. For those who had no intention of purchasing Vega, this makes no difference anyway. (I would comment on those who are considering purchasing, but with mining prices what they are...)
 
It was posted here a while ago, search.
[attached screenshot: upload_2018-3-23_13-17-58.png]
 
Definitely wasn't a feature they knew they could never implement but needed to justify disappointing performance! 1080Ti performance is cancelled.
Why is that a surprise? Explicit > Portable Compute > Implicit. Makes more sense to focus on the portable option for developers as it works on all hardware. The 1080Ti performance has already been achieved when everything is working. Doesn't mean it won't be added eventually either. That primitive shaders patent would seem to refute what all the anti-PS guys were claiming was impossible. It can be automatic, but gains from the explicit or compute method will likely exceed it. Implicit would only be useful for titles with a shitload of geometry or tessellation being culled. Old-school Gimpworks titles with the 64x tessellation factors, for example. Most devs would likely choose the compute method alongside indirect execution to reduce CPU load with the newer APIs.
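To make the "compute plus indirect execution" part concrete: the GPU itself decides what gets drawn and writes the draw arguments, so the CPU records one indirect call instead of thousands of draws. A rough CUDA-flavoured sketch of the idea (in a real engine this would be a D3D12/Vulkan compute shader feeding ExecuteIndirect or vkCmdDrawIndexedIndirect; the struct and kernel names here are invented for illustration):

```cuda
#include <cuda_runtime.h>

// Hypothetical record mirroring the usual indexed indirect-draw arguments
// (same field order as VkDrawIndexedIndirectCommand / D3D12_DRAW_INDEXED_ARGUMENTS).
struct DrawArgs {
    unsigned int indexCount;
    unsigned int instanceCount;
    unsigned int firstIndex;
    int          baseVertex;
    unsigned int firstInstance;
};

// One thread per object: coarse visibility check on a bounding sphere (just a
// view-space near/far test here, standing in for a full frustum test), then
// append draw arguments for the survivors. The CPU never touches this list.
__global__ void buildIndirectArgs(const float4* spheres,        // xyz = centre (view space), w = radius
                                  const unsigned int* idxCounts,
                                  const unsigned int* firstIdx,
                                  int numObjects, float zNear, float zFar,
                                  DrawArgs* outArgs, unsigned int* outCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numObjects) return;

    float4 s = spheres[i];
    if (s.z + s.w <= zNear || s.z - s.w >= zFar) return;   // culled on the GPU

    unsigned int slot = atomicAdd(outCount, 1u);            // compact surviving draws
    DrawArgs args = { idxCounts[i], 1u, firstIdx[i], 0, 0u };
    outArgs[slot] = args;
}
```

The host (or the command processor) then issues a single multi-draw indirect sized by outCount, which is where the CPU-load saving comes from.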
 
It isn't a surprise, which is my point. You kept insisting this feature would see the light of day and would propel Vega to impressive performance numbers. It didn't.

1080Ti performance is mythical, just like implicit PS.
 
So AMD found a preferable solution that is applicable to both IHVs and uses established APIs, and that means it never saw the light of day? Of course they could always go the explicit route and prevent devs from optimizing for Nvidia hardware. Then sink a lot of cash and devrel time into those optimizations. Screwing over all gamers and their experience for a competitive advantage. Not like Nvidia would attempt any anti-competitive bullshit.

Implicit is laid out in a published patent as being the original intent. Everyone arguing against it was arguing that it wasn't possible, despite what AMD engineers and the patent suggest. The only difference is AMD found a better solution that still accomplishes the same goals in the process. It might even be a bit more future-proof, and the pipelines could get reworked for upcoming APIs. Everyone seems to agree the current pipeline is problematic as none of the hardware actually matches the model. No way of knowing if explicit will be that eventual solution, but the compute model AMD is pushing seems valid.
 
What you're doing boils down to starting a completely unrelated discussion that is tangential to the subject at best and making a supposedly snarky comment as a prelude to your usual insufferable 'gimpworks' rants. It was marketed as a major feature with the Vega launch, and it became Vega's crutch much like asynchronous compute was to Fiji. The latest GCN iteration is no more capable in terms of geometry throughput than Hawaii unless I'm very badly mistaken (4/cy), and in the absence of a magical solution that could be applied retroactively to all the titles in which Vega underperforms (13 TFLOPS, was it?) this is quite the problem. Now you could argue that AMD's marketing certainly didn't make claims as far-fetched as the ones made daily on forums/reddit etc., but let's be realistic: it's expected, they knew what they were doing with that announcement, and they knew it wasn't even remotely feasible.

There was never going to be a magical driver that brought performance up to par; you're just going to rely on those two or three games per generation that actually make use of those features, and how/why is this even related to preventing developers from optimizing for NV hw??? It's AMD that needs the special attention in order to be able to perform competitively (supposedly; I have yet to see an actual meaningful perf comparison), and it's hilarious that you claim they've found a "better solution". Better? Better than what? The complete and total absence of a key feature they promised close to a year ago? Vega seems to be performing well in FC5, I've been out of the loop for quite a while but FC5 is on the front page of most review sites. LC Vega nearing 1080ti performance, and if I'm not mistaken that's with prim shaders, shader intrinsics, support for packed math for water simulation among other things...

While we're on the topic though, remember when you INSISTED, for MONTHS, that DSBR was not working, hence the lackluster performance, and that once it was enabled it would improve performance significantly?

[attached screenshot: upload_2018-3-28_21-18-31.png]


Great job anarchist, now please respond to this very direct post by writing some laborious, but eloquently worded, reply that deftly dodges all the main points in the discussion in an attempt to sound well-informed by way of vague remarks alluding to the validity of your argument in light of "everyone" subscribing to your view.

[attached screenshot: upload_2018-3-28_21-21-45.png]


Well damn.

[attached screenshot: upload_2018-3-28_21-22-37.png]




Can't turn it on overnight eh? Well what a fucking surprise. Anarchist4000 will surely weasel his way out of this one without anyone noticing, eh?


Here's the link: https://techreport.com/news/33153/radeon-rx-vega-primitive-shaders-will-need-api-support. Couldn't make this stuff up if I wanted to lol.

Good to be back :D

Your move antichrist, don't let me stop you from talking about how AMD would never screw over their customers, and as a Maxwell card owner I can confirm that my posterior is so enlarged by the savage shafting I have gotten from NV that I need to buy an extra seat for air travel.

[attached screenshot: upload_2018-3-28_21-31-1.png]


You can see that GTX 980 awkwardly eyeing the Fury X below it with a bewildered look of 'the fuck happened to you, mate?'
 
Vega has been the most colossal f**kup we've ever seen from AMD.

So many unkept promises...
 
That's really unfair man, the Fury X is a strong contender.

Are you high? The Fury X actually matched and sometimes beat the 980Ti (the top-end Nvidia GPU at the time). The Vega comes nowhere close to doing the same to the 1080Ti.
 
Ummm, AMD has had lots of big fuck-ups, but this is not one of them. I mean, it trades blows with the 1080; that is not Bulldozer.

It trades blows with a 1080? In what world? It is constantly 10-15% behind, while using nearly twice as much power and using a slab of silicon twice the size.

Bulldozer at least kept up with the 2600k in tasks that could leverage all 8 corelets. Don't get me wrong, bulldozer was a failure, but Vega is a COMPLETE failure. It fails in every way conceivable. There is no world where buying one is a good idea... Unless you're mining.
 
Are you high? The Fury X actually matched and sometimes beat the 980Ti (the top-end Nvidia GPU at the time). The Vega comes nowhere close to doing the same to the 1080Ti.
The Fury X promised to improve massively over time. It didn't. Overclocked 980Tis are closer in performance to 1080s than to 980s/Fury X, which amusingly lands 980Ti performance closer to Vega than to Fiji.
 

Agreed, but Vega can't even claim the same success as Fury because it was never even in the same league as the top-end cards. At least Fury could be considered competitive AT THE TIME, though it is completely flaccid now. Vega is really just a die-shrink of Fury, so not only is it slower than its peers, it will probably age just as poorly.
 
It trades blows with a 1080? In what world? It is constantly 10-15% behind, while using nearly twice as much power and using a slab of silicon twice the size.

Bulldozer at least kept up with the 2600k in tasks that could leverage all 8 corelets. Don't get me wrong, bulldozer was a failure, but Vega is a COMPLETE failure. It fails in every way conceivable. There is no world where buying one is a good idea... Unless you're mining.

Oh shoot, I better hang my head in shame because Kazeo thinks my purchase of my Vega 56 was not a good idea..... Oh, the shame, the shame...….. :D Let's see, $469 for my Vega 56 or over $700 for a 1080Ti, no brainer for me but, then again, buying that 980 Ti for $649 back in the day was dumb for me as well. Burn me once, shame on you, burn me twice..... not going to happen. :D
 

I'm not going to say that you made a BAD decision. My best friend bought TWO, count 'em, TWO Vega 56 cards for his build. You know how I felt when Far Cry 5 benchmarks came out? I was HAPPY. I was HAPPY that his investment is maybe paying off, because in the end I want competition in the market, and I want my friend to enjoy his games.

Some parents spoil their child and refuse to see their child do any wrong; others discipline their child when they do wrong because they know the child is capable of more.

I trash Vega because I KNOW AMD is capable of more. I know AMD has made truly generation-defining products in the GPU space in the past. Vega is not good work. Vega is a poor excuse for innovation. If my child presented me with Vega when I know they've accomplished the 7970 or 9700 before, I'd feel they didn't even try.

Vega is not acceptable as a product from AMD.

Its value as a product for sale is all dependent on price and your situation, and I don't scorn ANYONE who's bought a Vega-based card.
 
Oh shoot, I better hang my head in shame because Kazeo thinks my purchase of my Vega 56 was not a good idea..... Oh, the shame, the shame...….. :D Let's see, $469 for my Vega 56 or over $700 for a 1080Ti, no brainer for me but, then again, buying that 980 Ti for $649 back in the day was dumb for me as well. Burn me once, shame on you, burn me twice..... not going to happen. :D


Vega 56... $470. Okay. $700 is 48% more money. According to the performance data posted above, the 1080Ti is ~33% faster for that 48% more money.

Similarly, the LC Vega 64 edition is $700 and 13% faster than Vega 56.

VALUE.

Buying a 980Ti for $650 2+ years ago and having it perform within 10% of a 1080, aka Vega 64? Fucking Nvidia buyers and their stupid purchasing decisions, eh?
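Spelling out the arithmetic, with the prices and percentages above taken at face value (my rough numbers, not a new benchmark):

```latex
\tfrac{700}{470} \approx 1.49 \quad (\text{about } 48\text{--}49\%\ \text{more money})
\text{1080Ti: } \tfrac{1.33}{1.49} \approx 0.89\times\ \text{the perf per dollar of Vega 56}
\text{LC Vega 64: } \tfrac{1.13}{1.49} \approx 0.76\times\ \text{the perf per dollar of Vega 56}
```

So at those prices neither $700 card matches the 56 on performance per dollar, and the LC 64 trails the 1080Ti.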
 
The latest GCN iteration is no more capable in terms of geometry throughput than Hawaii unless I'm very badly mistaken (4/cy), and in the absence of a magical solution that could be applied retroactively to all the titles in which Vega underperforms (13 TFLOPS, was it?) this is quite the problem.
If you completely discount the clockspeed in the 4/clock then I guess you could be right. Few if any titles actually need that much throughput. The Vega issue looks more like a temperature issue holding back clockspeeds. It's doing well, but not maintaining the clocks that it should, which hurts throughput.
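To spell out the "don't discount the clockspeed" arithmetic: peak primitive rate is primitives per clock times clock. Using ballpark reference clocks (my assumption, roughly 1.0 GHz for Hawaii and ~1.5 GHz boost for Vega 64):

```latex
R_{\text{peak}} = \frac{\text{prims}}{\text{clk}} \times f_{\text{clk}}
\text{Hawaii: } 4 \times 1.0\ \text{GHz} = 4\ \text{Gprims/s}
\qquad
\text{Vega 64: } 4 \times {\sim}1.5\ \text{GHz} \approx 6\ \text{Gprims/s}
```

Same 4 per clock, but not the same throughput unless the clocks are equal.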

There was never going to be a magical driver that brought performance up to par
You and some others were the ones that created the magical driver. All I said was performance would eventually get up there when all the features were ironed out and appropriately coded. We have non-cherrypicked FarCry benchmarks everywhere showing Vega right up there with 1080ti and that's still without everything fully implemented.

it's hilarious that you claim they've found a "better solution". Better? Better than what? The complete and total absence of a key feature they promised close to a year ago? Vega seems to be performing well in FC5, I've been out of the loop for quite a while but FC5 is on the front page of most review sites. LC Vega nearing 1080ti performance, and if I'm not mistaken that's with prim shaders, shader intrinsics, support for packed math for water simulation among other things...
AMD said they found a better solution. Not sure why you're attributing that claim to me. Primitive shaders work but the extensions are still being finalized for upcoming APIs. Evidenced in the quotes you provided if you'll read them. Extensions for DX12/Vulkan take time and are still coming as your evidence says. I haven't seen any evidence Vega can't actually run a Primitive Shader, only that the hardware feature isn't widely exposed yet. That will likely change in the future and be valid for upcoming architectures. The mining boom kind of makes targeting a feature only available on Vega a bit pointless for developers. I haven't seen anything about FarCry using primitive shaders, just packed math.
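On the packed math point, for anyone unfamiliar: it just means packing two FP16 operations into each 32-bit lane per instruction (what AMD brands Rapid Packed Math). A minimal CUDA analogue using half2, purely illustrative and nothing to do with FC5's actual shaders:

```cuda
#include <cuda_fp16.h>

// Two FP16 values share one 32-bit register (__half2); __hfma2 issues two fused
// multiply-adds in a single instruction. Toy update: height += dt * velocity,
// two lanes at a time. Compile with nvcc -arch=sm_60 (or newer).
__global__ void integrateHeights(__half2* height, const __half2* velocity,
                                 __half2 dt, int nPairs)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < nPairs)
        height[i] = __hfma2(dt, velocity[i], height[i]);
}
```

The same idea in a game shader would use fp16/min16float types in HLSL; the gain is the doubled rate, nothing magical.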

The point of primitive shaders is to efficiently cull geometry. There also exists a compute shader which can do the same and run on nearly all hardware. From a development standpoint one makes far more sense.
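For the culling part, the compute version is basically a kernel that tests each triangle and compacts the survivors into a fresh index buffer for an indirect draw. A toy CUDA sketch of that idea (not any particular engine's code; a production version would add frustum and small-triangle tests in the same pass):

```cuda
#include <cuda_runtime.h>

// One thread per triangle: drop back-facing/degenerate triangles (screen-space
// signed area <= 0 for CCW front faces) and append the survivors' indices to a
// compacted index buffer that a later indirect draw consumes.
__global__ void cullTriangles(const float2* clipXY,        // post-transform xy positions
                              const unsigned int* indices, // 3 per triangle
                              int numTris,
                              unsigned int* outIndices,
                              unsigned int* outTriCount)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numTris) return;

    unsigned int i0 = indices[3*t], i1 = indices[3*t + 1], i2 = indices[3*t + 2];
    float2 a = clipXY[i0], b = clipXY[i1], c = clipXY[i2];

    float area = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    if (area <= 0.0f) return;                    // culled: never reaches the rasterizer

    unsigned int slot = atomicAdd(outTriCount, 1u);
    outIndices[3*slot]     = i0;
    outIndices[3*slot + 1] = i1;
    outIndices[3*slot + 2] = i2;
}
```

outTriCount then feeds the indirect draw's index count, so the fixed-function front end only ever sees triangles that survived.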

While we're on the topic though, remember when you INSISTED, for MONTHS, that DSBR was not working, hence the lackluster performance, and that once it was enabled it would improve performance significantly?
It doesn't appear to be fully working at this time. Surely you can understand the difference between a checkbox feature and a robustly implemented one. For DSBR to work well, geometry needs to be presented in an optimal fashion. I haven't seen anything suggesting any devs have hit that point as it likely requires primitive shaders or the implicit driver path.
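For anyone wondering what "presented in an optimal fashion" means in practice: a binning rasterizer first sorts incoming triangles into screen-space tiles, so submission order and spatial locality decide how well those bins batch up. A toy CUDA sketch of the binning step only, nothing to do with AMD's actual hardware (tile size and names invented):

```cuda
#include <cuda_runtime.h>

#define TILE 32  // toy tile size in pixels

// One thread per triangle: take the screen-space bounding box and count the
// triangle into every tile it may touch. A real binning rasterizer also records
// which triangles landed in each bin and then shades tile by tile.
__global__ void binTriangles(const float2* screenXY, const unsigned int* indices,
                             int numTris, int tilesX, int tilesY,
                             unsigned int* tileTriCount)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numTris) return;

    float2 a = screenXY[indices[3*t]];
    float2 b = screenXY[indices[3*t + 1]];
    float2 c = screenXY[indices[3*t + 2]];
    float minX = fminf(a.x, fminf(b.x, c.x)), maxX = fmaxf(a.x, fmaxf(b.x, c.x));
    float minY = fminf(a.y, fminf(b.y, c.y)), maxY = fmaxf(a.y, fmaxf(b.y, c.y));

    int tx0 = max(0, (int)minX / TILE), tx1 = min(tilesX - 1, (int)maxX / TILE);
    int ty0 = max(0, (int)minY / TILE), ty1 = min(tilesY - 1, (int)maxY / TILE);

    for (int ty = ty0; ty <= ty1; ++ty)
        for (int tx = tx0; tx <= tx1; ++tx)
            atomicAdd(&tileTriCount[ty * tilesX + tx], 1u);
}
```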

Great job anarchist, now please respond to this very direct post by writing some laborious, but eloquently worded, reply that deftly dodges all the main points in the discussion in an attempt to sound well-informed by way of vague remarks alluding to the validity of your argument in light of "everyone" subscribing to your view.
If you mean all the off-topic attempts at derailing and misconstruing the conversation with uninformed strawman arguments, yeah I try to dodge those.

Can't turn it on overnight eh? Well what a fucking surprise. Anarchist4000 will surely weasel his way out of this one without anyone noticing, eh?
Not really up to AMD to fully implement features affecting all IHVs and get them implemented in Khronos and Microsoft APIs. Especially when some parties have an interest in dragging their feet for competitive reasons. You're the one weaseling out of all the claims you were making that got proved wrong.

Vega has been the most colossal f**kup we've ever seen from AMD.

So many unkept promises...
In what regard? The huge increase in market share, revenue, or adoption in mobile platforms that it achieved? From a business standpoint Vega may be AMD's most successful architecture ever, and the real gains haven't even hit yet. The mobile platforms are a huge revenue market for game devs that all engines seem to be focused on lately. The success of Nintendo's Switch, mobile devices, and consoles really makes that the more ideal market for developers. There are simply that many more devices.
 

Vega is by no means responsible for any expanded market share, Vega on mobile (as in the Vega on APUs and such) is completely irrelevant to AMD's push to the market.

We have seen that Vega brings no IPC improvements at all, and efficiency is basically the exact same as Polaris. So in other words, AMD slapped the name "Vega" on a bunch of Polaris cores integrated into CPUs and suddenly Vega is now the epicentre of innovation? Are you saying that if AMD used Polaris cores instead of Vega cores on their APUs that they would perform ANY differently?

Take a step back and you realise that Vega is just the name given to their mobile parts now. It's not innovation.
 

They would perform differently... They would have lower clocks, and lower performance as a result. :)
 
Vega is by no means responsible for any expanded market share, Vega on mobile (as in the Vega on APUs and such) is completely irrelevant to AMD's push to the market.

We have seen that Vega brings no IPC improvements at all, and efficiency is basically the exact same as Polaris. So in other words, AMD slapped the name "Vega" on a bunch of Polaris cores integrated into CPUs and suddenly Vega is now the epicentre of innovation? Are you saying that if AMD used Polaris cores instead of Vega cores on their APUs that they would perform ANY differently?

Take a step back and you realise that Vega is just the name given to their mobile parts now. It's not innovation.

Cannot say I agree with anything you just said. Vega is not Polaris, although the previous head of RTG certainly did not do anything but the minimum he had to do, unfortunately. Even without mining, Vega would sell quite well, at least at MSRP. :) Also, when it is fully used, it performs very well, but it does have to be fully used, just as Nvidia cards do, although they have Gimpworks to go with them. (Yes, it does gimp the performance of their competition.)

What I like is that we do not even know what is coming, because they finally got rid of the loudmouths who would hype without substance. The real CEO has basically got RTG to just shut up and let their actions speak for themselves.
 
Vega, when clocked down to Fury X levels, performs just like Fury X (without using any Vega-specific features). HardOCP already measured this. However, Vega is clocked much higher than Fury X because it was designed to do so.

Now, if developers were to use Vega-specific features, it's fair to presume that it would perform much better than a Fury X. The problem is that Vega silicon has been out there for quite some time (not too long after Polaris was released, in fact). With that much lead time, it is also fair to presume that AMD/RTG had enough time to prepare the developer side to take advantage of those features. Some people, who I won't point at, were adamant that these features would be seamless to developers and would 'just work'. This thread, and the Vega rumors thread that meandered far beyond its useful life, is pointing out that the expectation that Vega features would 'just work' without AMD/RTG intervention was incorrect.
 

The problem is that those features will never be the norm. Without driver-level integration and enabling, these features are going to go unused 99% of the time.
 
It trades blows with a 1080? In what world? It is constantly 10-15% behind, while using nearly twice as much power and using a slab of silicon twice the size.

Bulldozer at least kept up with the 2600k in tasks that could leverage all 8 corelets. Don't get me wrong, bulldozer was a failure, but Vega is a COMPLETE failure. It fails in every way conceivable. There is no world where buying one is a good idea... Unless you're mining.
That's not true; Vega is great for Blender.
 
If you completely discount the clockspeed in the 4/clock then I guess you could be right. Few if any titles actually need that much throughput. The Vega issue looks more like a temperature issue holding back clockspeeds. It's doing well, but not maintaining the clocks that it should, which hurts throughput.

What a stupid comment; what do you mean I'm discounting the frequency? I am comparing it to Hawaii at clock parity, you mongoose. Few if any titles need that throughput, yet whenever a title does you blame "gimpworks" and cry us a river. The comments about clockspeeds are outright lies because, fortunately, PCGH tests with fixed clocks they specify in all their graphs, as well as testing in-game as opposed to using the built-in benchmarks that produce skewed results when compared to actual gameplay testing (Deus Ex, AotS come to mind).

You and some others were the ones that created the magical driver. All I said was performance would eventually get up there when all the features were ironed out and appropriately coded. We have non-cherrypicked FarCry benchmarks everywhere showing Vega right up there with 1080ti and that's still without everything fully implemented.

I didn't create anything; I expressed my disbelief at the claim that there would be a driver bringing support for automated implementation of a key feature (for Vega) nobody is actually using. You insisted that DSBR was inactive and that between that and the Primitive Shader driver missing, that explained Vega's lackluster performance. Even if we entertain the notion that Techspot is the only review website worth a damn (central to the validity of your flimsy argument), one game out of hundreds in which it performs like a three-year-old overclocked Maxwell part (with equivalent, or worse, efficiency) hardly makes for a compelling argument. Direct cooperation with AMD and one of the largest AAA studios on the planet integrating pretty much all of Vega's new features, and it's still behind a 1080Ti in a built-in benchmark that is 100% predictable and can easily be tuned to perform better than the game itself (with its inherent randomness).

AMD said they found a better solution. Not sure why you're attributing that claim to me. Primitive shaders work but the extensions are still being finalized for upcoming APIs. Evidenced in the quotes you provided if you'll read them. Extensions for DX12/Vulkan take time and are still coming as your evidence says. I haven't seen any evidence Vega can't actually run a Primitive Shader, only that the hardware feature isn't widely exposed yet. That will likely change in the future and be valid for upcoming architectures. The mining boom kind of makes targeting a feature only available on Vega a bit pointless for developers. I haven't seen anything about FarCry using primitive shaders, just packed math.

AMD says whatever they have to say to save face; the fact of the matter is they lied about this driver. It was never even remotely viable. Now they've completely changed their rhetoric and are almost mocking the expectation of that driver actually being released by saying it "can't happen overnight". I don't see how this is a strawman argument. It's all in there in the article I linked and your long history of funny posts.

The point of primitive shaders is to efficiently cull geometry. There also exists a compute shader which can do the same and run on nearly all hardware. From a development standpoint one makes far more sense.

It doesn't appear to be fully working at this time. Surely you can understand the difference between a checkbox feature and a robustly implemented one. For DSBR to work well, geometry needs to be presented in an optimal fashion. I haven't seen anything suggesting any devs have hit that point as it likely requires primitive shaders or the implicit driver path.

Oh, so it doesn't appear to be fully working because the results are not satisfactory to someone who has been expending an impressive amount of effort to cast AMD in the best light possible no matter the situation? Hire me as a ghostwriter; I will maintain the essential abject disdain for honesty that is so prevalent in your writing and spice it up with some good humor, which you evidently seem to lack. Expecting developers to go back and do more work to patch up the crumbling mess that is Vega is unrealistic. Doom didn't save it. Wolfenstein didn't save it. AotS didn't save it. Far Cry 5 is the crowning glory as far as you're concerned, and all it is is Vega performing like it should given its size and specs, but only in a strictly repeatable canned benchmark. Kappa.

If you mean all the off-topic attempts at derailing and misconstruing the conversation with uninformed strawman arguments, yeah I try to dodge those.

Considering I am the thread author, I determined what the thread topic was the moment I posted it. I deem your claim that my posts are off-topic to be off-topic, and I encourage you to accelerate head-first into a wall of your choosing.

Not really up to AMD to fully implement features affecting all IHVs and get them implemented in Khronos and Microsoft APIs. Especially when some parties have an interest in dragging their feet for competitive reasons. You're the one weaseling out of all the claims you were making that got proved wrong.

Ah yes, "not up to AMD to make sure AMD products perform adequately, NV are turds because they have proprietary libraries that make use of the strengths of their hardware, but AMD's patchwork attempts to improve the performance of their aging design don't work out because they developers have virtually no incentive to invest time and effort to extract an extra bit of performance from GPUs nobody actually uses in the grand scheme of things and I take offense at the notion that there really is no excuse for any of this at this point and I've invested too much time and effort supporting AMD on the forum for me to yield my position"

Features affecting all IHVs? What are you on about? NV managed to implement a tiled rasterizer, completely transparently, without anyone realizing, and with absolutely zero excuses. This affects AMD, and AMD only.

Having said that, you are more than welcome to tell me what claims I have made that were proven wrong.
 

Ieldra, I think boycotting is something you are giving more and more credence to. :D The hardware is there and that is just the way it is. Deal with it or do not; that is on you and no one else.
 

The hardware is there and that is just the way it is. It's not even competitive with similarly sized chips, but we're going to give them a pass anyway in the name of selfless promotion of practices that encourage competition and promote AMD's continued efforts to produce overhyped products that consistently fail to deliver on the promises made at launch. That is correct. I don't have to deal with the absence of an implicit primitive shader path, hence my amusement.
 
Why do you give a shit?
 

That ship sailed a long time ago. It never has anything to do with any logical reason; it's just to bash AMD.
Surprise!! Definitely wasn't a feature they knew they could never implement but needed to justify disappointing performance! 1080Ti performance is cancelled.

He has an unrealistic view of AMD products and no grasp on reality, yet clearly thinks he is the only person on this forum who has come to the right conclusion; the odd thing is that he posts about it in a way where, if you do not agree with him, you must be the problem...
 

>Unrealistic views
>Peddles marketing tripe for years
>Polaris = 1080
>Vega = 1080Ti

but keep on telling me that I'm the delusional one :D
 
Because his masters would be displeased with him otherwise. :D Oh well, AMD bashing has been a thing around these parts for a while, and yet AMD is still ALIVE! :D

Says the guy whose forum signature is an AMD marketing wet dream


AMD - Because I use what I want and spend my money on what I want. (And I want fast, long-term, stable investments in hardware, which AMD is on all fronts.)

The only thing that has been FAST, LONG-TERM and STABLE has been AMD's decline in the graphics market.

Savage.
 