Ieldra
I Promise to RTFM - Joined: Mar 28, 2016 - Messages: 3,539
Surprise!! Definitely wasn't a feature they knew they could never implement but needed to justify disappointing performance! 1080Ti performance is cancelled.
I only just found out; I don't think it was posted here! Couldn't pass up the opportunity, given the long, long history of drawn-out arguments we have had about this on the forum lol
It was posted here a while ago, search.
Why is that a surprise? Explicit > portable compute > implicit. It makes more sense to focus on the portable option for developers, as it works on all hardware. 1080 Ti performance has already been achieved when everything is working, and that doesn't mean it won't be added eventually either. That primitive shaders patent would seem to refute what all the anti-PS guys were claiming was impossible. It can be automatic, but gains from the explicit or compute method will likely exceed it. Implicit would only be useful for titles with a shitload of geometry or tessellation being culled: old-school Gimpworks titles with the 64x tessellation factors, for example. Most devs would likely choose the compute method alongside indirect execution to reduce CPU load with the newer APIs.
It isn't a surprise, which is my point. You kept insisting this feature would see the light of day and would propel Vega to impressive performance numbers. It didn't.
So AMD found a preferable solution that is applicable to both IHVs and uses established APIs, and that means it never saw the light of day? Of course, they could always go the explicit route and prevent devs from optimizing for Nvidia hardware, then sink a lot of cash and devrel time into those optimizations, screwing over all gamers and their experience for a competitive advantage. Not like Nvidia would attempt any anti-competitive bullshit.
1080Ti performance is mythical, just like implicit PS.
What you're doing boils down to starting a completely unrelated discussion that is tangential to the subject at best, and making a supposedly snarky comment as a prelude to your usual insufferable 'gimpworks' rants. Primitive shaders were marketed as a major feature at the Vega launch, and they became Vega's crutch much like asynchronous compute was to Fiji. The latest GCN iteration is no more capable in terms of geometry throughput than Hawaii unless I'm very badly mistaken (4 primitives/cycle), and in the absence of a magical solution that could be applied retroactively to all the titles in which Vega underperforms (13 TFLOPS, was it?), this is quite the problem. Now, you could argue that AMD's marketing certainly didn't make claims as far-fetched as the ones made daily on forums/Reddit etc., but let's be realistic: they knew what they were doing with that announcement, and they knew it wasn't even remotely feasible.
Implicit is laid out in a published patent as being the original intent. Everyone arguing against it was claiming it wasn't possible, despite what AMD engineers and the patent suggest. The only difference is that AMD found a better solution that still accomplishes the same goals in the process. It might even be a bit more future-proof, and the pipelines could get reworked for upcoming APIs. Everyone seems to agree the current pipeline is problematic, as none of the hardware actually matches the model. There's no way of knowing if explicit will be that eventual solution, but the compute model AMD is pushing seems valid.
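For anyone following the explicit/compute/implicit argument: whichever stage ends up running it (a primitive shader, the fixed-function front end, or a compute pre-pass), the per-triangle culling test itself is simple. Here's a toy CPU-side sketch of the two standard checks, backface via signed area and zero-area/degenerate; the function names and the 2D-projected coordinates are my own illustrative assumptions, not anything from AMD's patent:

```python
def signed_area(tri):
    """Twice the signed area of a 2D-projected triangle (positive = counter-clockwise)."""
    (ax, ay), (bx, by), (cx, cy) = tri
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)

def cull(tris, eps=1e-9):
    """Keep front-facing (CCW), non-degenerate triangles; drop the rest.
    This is conceptually the per-triangle test a culling pass performs
    before the rasterizer ever sees the geometry."""
    return [t for t in tris if signed_area(t) > eps]

tris = [
    [(0, 0), (1, 0), (0, 1)],  # CCW: kept
    [(0, 0), (0, 1), (1, 0)],  # CW (backfacing): culled
    [(0, 0), (1, 1), (2, 2)],  # zero area (degenerate): culled
]
survivors = cull(tris)  # only the first triangle survives
```

The point of doing this early, wherever it runs, is simply that culled triangles never consume rasterizer or pixel-shader throughput.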
Vega has been the most colossal f**kup we've ever seen from AMD.
So many unkept promises...
That's really unfair man, the Fury X is a strong contender.
Vega has been the most colossal f**kup we've ever seen from AMD.
So many unkept promises...
Ummm, AMD has had lots of big fuck-ups; this is not one. I mean, it trades blows with the 1080. That is not Bulldozer.
Are you high? The Fury X actually matched and sometimes beat the 980Ti (the top-end Nvidia GPU at the time). The Vega comes nowhere close to doing the same to the 1080Ti.

The Fury X promised to improve massively over time. It didn't. Overclocked 980Tis are closer in performance to 1080s than 980s/Fury X, which amusingly lands 980Ti performance closer to Vega than to Fiji.
It trades blows with a 1080? In what world? It is constantly 10-15% behind, while using nearly twice as much power and a slab of silicon twice the size.
Bulldozer at least kept up with the 2600k in tasks that could leverage all 8 corelets. Don't get me wrong, bulldozer was a failure, but Vega is a COMPLETE failure. It fails in every way conceivable. There is no world where buying one is a good idea... Unless you're mining.
Oh shoot, I better hang my head in shame because Kazeo thinks my purchase of my Vega 56 was not a good idea... Oh, the shame, the shame... Let's see: $469 for my Vega 56, or over $700 for a 1080Ti? No-brainer for me. Then again, buying that 980 Ti for $649 back in the day was dumb for me as well. Burn me once, shame on you; burn me twice... not going to happen.
The latest GCN iteration is no more capable in terms of geometry throughput than Hawaii unless I'm very badly mistaken (4/cy) and in the absence of a magical solution that could be applied retroactively to all the titles in which Vega underperforms (13tflops was it?) this is quite the problem.

If you completely discount the clockspeed in the 4/clock figure, then I guess you could be right. Few if any titles actually need that much throughput. The Vega issue looks more like a temperature problem holding back clockspeeds: it's doing well, but not maintaining the clocks that it should, which hurts throughput.
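To put the clock-parity argument in concrete terms, here's a quick back-of-the-envelope sketch. The 4 primitives/clock figure comes from the discussion above; the clock values are rough illustrative assumptions on my part, not measured numbers:

```python
# Peak geometry throughput = primitives per clock * core clock.
# 4/clock is the figure under discussion; the clocks below are assumptions.
def peak_prim_rate(prims_per_clock: int, clock_ghz: float) -> float:
    """Peak primitive rate in billions of primitives per second."""
    return prims_per_clock * clock_ghz

hawaii = peak_prim_rate(4, 1.0)  # e.g. Hawaii at roughly 1.0 GHz
vega = peak_prim_rate(4, 1.5)    # e.g. Vega 64 if it sustained ~1.5 GHz

# At clock parity the per-clock figure is identical, so any real-world
# difference comes entirely from clockspeed and whether it is sustained.
print(f"Hawaii ~{hawaii:.1f} Gprims/s, Vega ~{vega:.1f} Gprims/s")
```

Which is the whole disagreement in miniature: identical per-clock throughput, but a gap opens or closes depending on the clocks you assume are actually held.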
There was never going to be a magical driver that brought performance up to par.

You and some others were the ones who created the magical driver. All I said was that performance would eventually get up there when all the features were ironed out and appropriately coded. We have non-cherry-picked Far Cry benchmarks everywhere showing Vega right up there with the 1080ti, and that's still without everything fully implemented.
It's hilarious that you claim they've found a "better solution". Better? Better than what? The complete and total absence of a key feature they promised close to a year ago? Vega seems to be performing well in FC5; I've been out of the loop for quite a while, but FC5 is on the front page of most review sites. LC Vega nearing 1080ti performance, and if I'm not mistaken that's with prim shaders, shader intrinsics, support for packed math for water simulation, among other things...

AMD said they found a better solution; not sure why you're attributing that claim to me. Primitive shaders work, but the extensions are still being finalized for upcoming APIs, as evidenced in the quotes you provided, if you'll read them. Extensions for DX12/Vulkan take time and are still coming, as your own evidence says. I haven't seen any evidence that Vega can't actually run a primitive shader, only that the hardware feature isn't widely exposed yet. That will likely change in the future and be valid for upcoming architectures. The mining boom kind of makes targeting a feature only available on Vega a bit pointless for developers. And I haven't seen anything about Far Cry using primitive shaders, just packed math.
While we're on the topic though, remember when you INSISTED, for MONTHS, that DSBR was not working, hence the lackluster performance, and that once it was enabled it would improve performance significantly?

It doesn't appear to be fully working at this time. Surely you can understand the difference between a checkbox feature and a robustly implemented one. For DSBR to work well, geometry needs to be presented in an optimal fashion, and I haven't seen anything suggesting any devs have hit that point, as it likely requires primitive shaders or the implicit driver path.
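For context, the "binned" part of DSBR is easy to sketch: triangles get bucketed into screen-space tiles (conservatively, by bounding box) so each tile's shading and depth work can stay on-chip. A toy pure-Python illustration; the 32-pixel tile size and coordinates are made-up assumptions, not Vega's actual bin dimensions:

```python
from collections import defaultdict

TILE = 32  # assumed tile size in pixels; real bin sizes vary by GPU

def bin_triangles(tris, width, height):
    """Bucket triangle indices into screen tiles by bounding box (conservative)."""
    bins = defaultdict(list)
    for i, tri in enumerate(tris):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins[(tx, ty)].append(i)
    return bins

# A small triangle touching one tile vs. one whose bounding box spans nine:
bins = bin_triangles([[(2, 2), (10, 2), (2, 10)], [(0, 0), (90, 0), (0, 90)]], 128, 128)
```

The "geometry presented in an optimal fashion" point above is visible even in this toy: submission order and triangle size decide how full each bin gets, which is exactly what the hardware's binning heuristics depend on.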
Great job anarchist, now please reply to this very direct post by writing some laborious, but eloquently worded, reply that deftly dodges all the main points in the discussion in an attempt to sound well informed by way of vague remarks alluding to the validity of your argument in light of "everyone" subscribing to your view.

If you mean all the off-topic attempts at derailing and misconstruing the conversation with uninformed strawman arguments, then yeah, I try to dodge those.
Can't turn it on overnight, eh? Well, what a fucking surprise. Anarchist4000 will surely weasel his way out of this one without anyone noticing, eh?

It's not really up to AMD to fully implement features affecting all IHVs and get them adopted in Khronos and Microsoft APIs, especially when some parties have an interest in dragging their feet for competitive reasons. You're the one weaseling out of all the claims you made that got proved wrong.
Vega has been the most colossal f**kup we've ever seen from AMD. So many unkept promises...

In what regards? The huge increase in market share, revenue, or adoption in mobile platforms that it achieved? From a business standpoint, Vega may be AMD's most successful architecture ever, and the real gains haven't even hit yet. Mobile platforms are a huge revenue market for game devs, and all engines seem to be focused on them lately. The success of Nintendo's Switch, mobile devices, and consoles really makes that the more ideal market for developers; there are simply that many more devices.
Vega is by no means responsible for any expanded market share, Vega on mobile (as in the Vega on APUs and such) is completely irrelevant to AMD's push to the market.
We have seen that Vega brings no IPC improvements at all, and efficiency is basically the exact same as Polaris. So in other words, AMD slapped the name "Vega" on a bunch of Polaris cores integrated into CPUs and suddenly Vega is now the epicentre of innovation? Are you saying that if AMD used Polaris cores instead of Vega cores on their APUs that they would perform ANY differently?
Take a step back and you realise that Vega is just the name given to their mobile parts now. It's not innovation.
Vega, when clocked down to Fury X levels, performs just like Fury X (without using any Vega-specific features). HardOCP already measured this. However, Vega is clocked much higher than Fury X because it was designed to do so.
Now, if developers were to use Vega-specific features, it's fair to presume that it would perform much better than a Fury X. The problem is that Vega silicon has been out there for quite some time (not too long after Polaris was released, in fact). With that much lead time, it is also fair to presume that AMD/RTG had enough time to prepare the developer side to take advantage of those features. Some people, who I won't point at, were adamant that these features would be seamless to developers and would 'just work'. This thread, and the Vega rumors thread that meandered far beyond its useful life, is pointing out that the expectation that Vega features would 'just work' without AMD/RTG intervention was incorrect.
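The clock-for-clock framing is easy to sanity-check on paper, since Fiji and Vega 10 both carry 4096 shaders. A rough sketch (the clocks are approximate assumptions; note that the same formula at Vega's ~1.6 GHz boost is where the ~13 TFLOPS number earlier in the thread comes from):

```python
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: shaders * 2 ops per clock (FMA) * clock."""
    return shaders * 2 * clock_ghz / 1000.0

fury_x = peak_tflops(4096, 1.05)            # Fury X at its ~1050 MHz stock clock
vega_downclocked = peak_tflops(4096, 1.05)  # Vega 64 forced to the same clock
vega_boost = peak_tflops(4096, 1.6)         # Vega 64 near its rated boost

# At matched clocks the paper throughput is identical (~8.6 TFLOPS each),
# consistent with the HardOCP observation above that a downclocked Vega
# performs like a Fury X when no Vega-specific features are used.
```

In other words, absent the new features, the only paper difference between the two chips is the clock they sustain.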
It trades blows with a 1080? In what world? It is constantly 10-15% behind, while using nearly twice as much power and a slab of silicon twice the size. Bulldozer at least kept up with the 2600k in tasks that could leverage all 8 corelets. Don't get me wrong, Bulldozer was a failure, but Vega is a COMPLETE failure. It fails in every way conceivable. There is no world where buying one is a good idea... Unless you're mining.

That's not true; Vega is great for Blender.
If you completely discount the clockspeed in the 4/clock then I guess you could be right. Few if any titles actually need that much throughput. The Vega issue looks more a temp issue holding back clockspeeds. It's doing well, but not maintaining the clocks that it should which hurts throughput.
You and some others were the ones that created the magical driver. All I said was performance would eventually get up there when all the features were ironed out and appropriately coded. We have non-cherrypicked FarCry benchmarks everywhere showing Vega right up there with 1080ti and that's still without everything fully implemented.
AMD said they found a better solution. Not sure why you're attributing that claim to me. Primitive shaders work but the extensions are still being finalized for upcoming APIs. Evidenced in the quotes you provided if you'll read them. Extensions for DX12/Vulkan take time and are still coming as your evidence says. I haven't seen any evidence Vega can't actually run a Primitive Shader, only that the hardware feature isn't widely exposed yet. That will likely change in the future and be valid for upcoming architectures. The mining boom kind of makes targeting a feature only available on Vega a bit pointless for developers. I haven't seen anything about FarCry using primitive shaders, just packed math.
The point of primitive shaders is to efficiently cull geometry. There also exists a compute shader which can do the same and run on nearly all hardware. From a development standpoint one makes far more sense.
It doesn't appear to be fully working at this time. Surely you can understand the difference between a checkbox feature and a robustly implemented one. For DSBR to work well, geometry needs to be presented in an optimal fashion. I haven't seen anything suggesting any devs have hit that point as it likely requires primitive shaders or the implicit driver path.
If you mean all the off-topic attempts at derailing and misconstruing the conversation with uninformed strawman arguments, yeah I try to dodge those.
Not really up to AMD to fully implement features affecting all IHVs and get them implemented in Khronos and Microsoft APIs. Especially when some parties have an interest in dragging their feet for competitive reasons. You're the one weaseling out of all the claims you were making that got proved wrong.
What a stupid comment; what do you mean I'm discounting the frequency? I am comparing it to Hawaii at clock parity, you mongoose. Few if any titles need that throughput, yet whenever a title does, you blame "gimpworks" and cry us a river. The comments about clockspeeds are outright lies, because fortunately PCGH tests with fixed clocks, which they specify in all their graphs, as well as testing in-game as opposed to using the built-in benchmarks that produce skewed results compared to actual gameplay testing (Deus Ex and AotS come to mind).
I didn't create anything; I expressed my disbelief at the claim that there would be a driver bringing support for automated implementation of a key feature (for Vega) that nobody is actually using. You insisted that DSBR was inactive and that, between that and the missing primitive shader driver, that explained Vega's lackluster performance. Even if we entertain the notion that Techspot is the only review website worth a damn (central to the validity of your flimsy argument), one game out of hundreds in which it performs like a three-year-old overclocked Maxwell part (with equivalent, or worse, efficiency) hardly makes for a compelling argument. Direct cooperation with AMD and one of the largest AAA studios on the planet, integrating pretty much all of Vega's new features, and it's still behind a 1080Ti in a built-in benchmark that is 100% predictable and can easily be tuned to perform better than the game itself (with its inherent randomness).
AMD says whatever they have to say to save face, the fact of the matter is they lied about this driver. It was never even remotely viable. Now they've completely changed rhetoric and are almost mocking the expectation of that driver actually being released by saying "can't happen overnight". I don't see how this is a strawman argument. It's all in there in the article I linked and your long history of funny posts.
Oh, so it doesn't appear to be fully working because the results are not satisfactory to someone who has been expending an impressive amount of effort to cast AMD in the best light possible no matter the situation? Hire me as a ghost writer; I will maintain the essential abject disdain for honesty that is so prevalent in your writing and spice it up with some good humor, which you evidently seem to lack. Expecting developers to go back and do more work to patch up the crumbling mess that is Vega is unrealistic. Doom didn't save it. Wolfenstein didn't save it. AotS didn't save it. Far Cry 5 is the crowning glory as far as you're concerned, and all it is is Vega performing like it should given its size and specs, but only in a strictly repeatable canned benchmark. Kappa.
Considering I am the thread author, I determined what the thread topic was to be the moment I posted the thread, I deem your claim that my posts are offtopic to be offtopic, and I encourage you to accelerate head first into a wall of your choosing.
Ah yes, "not up to AMD to make sure AMD products perform adequately; NV are turds because they have proprietary libraries that make use of the strengths of their hardware, but AMD's patchwork attempts to improve the performance of their aging design don't work out because developers have virtually no incentive to invest time and effort to extract an extra bit of performance from GPUs nobody actually uses in the grand scheme of things, and I take offense at the notion that there really is no excuse for any of this at this point, and I've invested too much time and effort supporting AMD on the forum for me to yield my position"
Features affecting all IHVs? What are you on about? NV managed to implement a tiled rasterizer completely transparently, without anyone realizing, and with absolutely zero excuses. This affects AMD, and AMD only.
Having said that, you are more than welcome to tell me what claims I have made that were proven wrong.
Ieldra, I think boycotting is something you are giving more and more credence to. The hardware is there and that is just the way it is. Deal with it or do not; that is on you and no one else.
The hardware is there and that is just the way it is.

Why do you give a shit? It's not even competitive with similarly sized chips, but we're going to give them a pass anyway in the name of selfless promotion of practices that encourage competition and promote AMD's continued efforts to produce overhyped products that consistently fail to deliver on the promises made at launch. That is correct, I don't have to deal with the absence of an implicit primitive shader path, hence my amusement.
Pretty sure it was in the Vega thread, razor one discussed it a while ago
Why do you give a shit?
Surprise !! Definitely wasn't a feature they knew they could never implement but needed to justify disappointing performance ! 1080Ti performance is cancelled.
That ship sailed a long time ago. It never has anything to do with any logical reason; it's just to bash AMD.
An unrealistic view of AMD products and no grasp on reality: clearly he is the only person on this forum who has come to this conclusion. The odd thing is that he posts about it in a way that implies that if you do not agree with him, you must be the problem...
Why do you give a shit?
Because his masters would be displeased with him, otherwise. Oh well, AMD bashing has been a thing around these parts for a while and yet, AMD is still ALIVE!
AMD - Because I use what I want and spend my money on what I want. (And I want fast, long-term, stable investments in hardware, which AMD is on all fronts.)