Tell that to Killing Floor 2 and BLIronic, considering GPU PhysX is dead.
On which titles? All I've seen was that it worked on the SPECviewperf energy test, which is somewhat meaningless and still doesn't mean they are optimally setting tile and bin sizes.

And it is enabled in the performance numbers presented by AMD a few days ago.
Not a tirade, I just repeatedly pointed out that a GCN core could provide similar functionality as the tensor cores on Volta. The change being a 32/64 wide wave running as a single thread with 16 wide vectors for the tensor. More of a software change than anything in hardware, but there are ways to make it more efficient.

No, what you did was go off on a long tirade about how tensor cores are a cursory addition of little relevance that can be easily implemented on current GCN hardware; moreover, you claimed that it is performing tensor products, which indicates to me that you hadn't even bothered to read what little information NV had published about it. My favorite kind of poster.
Not bias, I just take a position outside that of various marketing narratives. I never said the 480 as we know it would be up in 1070/80 territory. I said that a potential bonded version with HBM would be competing. Fiji does quite well in certain titles at higher resolution.

This is like when someone comes home to their dog having ravaged the furniture and the dog just acts like nothing happened. You have shown bias every chance you got, the RX 480 never encroached on 1070 territory, Fiji never performed as you claimed it would...
I didn't complain about the community, I said discussions about AMD have a habit of running off course with pro-Nvidia marketing lines from a handful of posters making it difficult to follow. As I said, a few ignores and the SNR goes up substantially so I've dealt with it. Still sucks for everyone else though.

I'm not the one complaining about the community or the thread I'm posting in. That's you, and only you can change it.
Not sure it's even possible to increase primitive discard from Polaris. Now making the process more efficient as Mantor laid out in that video is another matter entirely. Primitive shaders would be a method to discard, or more appropriately not create, the triangles in the first place. That's what devs have been presenting SIGGRAPH papers on for the past few years. Not sure what the percentage of discarded triangles has to do with geometry throughput. In Linux drivers at the very least AMD has already merged the initial stages to facilitate some of those savings. Wouldn't be surprising if the driver is executing a limited "primitive shader" for all titles to achieve those savings and handle the binning with DSBR.

Ok, AMD just clearly stated primitive discard is at Polaris levels; to get any more, primitive shaders MUST be used. So my initial assumptions about AMD triangle throughput performance from over a year ago are now 100% validated by AMD. There is absolutely no way AMD can match up against Pascal without extra work being done by developers with this alone.
Anarchist, I think you might want to take your numbers and throw them out, if you are still going along the lines of improved throughput on Vega.
PS: this has been the biggest problem of GCN and why its shader array isn't fully utilized. This explains a lot about why Vega isn't scaling well.
Has AMD actually said what RAM is on RX? The 8-Hi stacks were doing 2200MHz overclocks and 4-Hi could come faster stock.

This is incorrect; FE and RX have virtually the same rated bandwidth. FE is 484 GB/s, RX isn't even 5% higher.
I doubt it's the controller, but a thermal issue. Some tests of FE showed the memory running at thermal limits which would cause throttling. What isn't mentioned is that simply running hot under the limit will require more refresh cycles on the ram. Vega might be able to manage more memory bandwidth just by turning up the fans or water cooling at the same clocks, but nobody checked that.

Or it could even be a side-effect of the HBCC<--->memory controller with some quirks in the real world, but then again that would apply to all cards.
I guess more waiting for review benchmarks
Cheers
I doubt it's the controller, but a thermal issue. Some tests of FE showed the memory running at thermal limits which would cause throttling. What isn't mentioned is that simply running hot under the limit will require more refresh cycles on the ram. Vega might be able to manage more memory bandwidth just by turning up the fans or water cooling at the same clocks, but nobody checked that.
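To put a rough number on the refresh point: in typical DRAM specs the refresh interval is halved above 85 °C, while each refresh still blocks the bank for the same recovery time, so the fraction of cycles lost to refresh roughly doubles. A minimal sketch, using illustrative JEDEC-style DDR timings as assumptions (not actual HBM2 figures):

```python
# Rough illustration of why hot DRAM loses bandwidth to refresh.
# tREFI = average interval between refresh commands; tRFC = time a
# refresh occupies the bank. Numbers below are generic assumptions.
def refresh_overhead(trefi_ns: float, trfc_ns: float) -> float:
    """Fraction of time the memory spends refreshing instead of serving requests."""
    return trfc_ns / trefi_ns

cool = refresh_overhead(trefi_ns=7800, trfc_ns=350)  # <= 85 C
hot  = refresh_overhead(trefi_ns=3900, trfc_ns=350)  # > 85 C: tREFI halved

print(f"cool: {cool:.1%} of cycles lost to refresh")  # ~4.5%
print(f"hot:  {hot:.1%} of cycles lost to refresh")   # ~9.0%
```

A few percent of effective bandwidth either way, which would be consistent with cooler cards measuring slightly higher throughput at the same clocks.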
Not sure it's even possible to increase primitive discard from Polaris. Now making the process more efficient as Mantor laid out in that video is another matter entirely. Primitive shaders would be a method to discard, or more appropriately not create, the triangles in the first place. That's what devs have been presenting SIGGRAPH papers on for the past few years. Not sure what the percentage of discarded triangles has to do with geometry throughput. In Linux drivers at the very least AMD has already merged the initial stages to facilitate some of those savings. Wouldn't be surprising if the driver is executing a limited "primitive shader" for all titles to achieve those savings and handle the binning with DSBR.
AMD stated its usefulness will be limited to resource-limited GPUs; don't overhype something the manufacturer itself is not.

On which titles? All I've seen was that it worked on the SPECviewperf energy test, which is somewhat meaningless and still doesn't mean they are optimally setting tile and bin sizes.
Yep, just like you are saying a potential driver/features will improve Vega dramatically; we can safely assume this will go down the same way your prediction for the 480 did, into thin air.

Not bias, I just take a position outside that of various marketing narratives. I never said the 480 as we know it would be up in 1070/80 territory. I said that a potential bonded version with HBM would be competing. Fiji does quite well in certain titles at higher resolution.
It is not just memory BW but an issue with the cache; it is having issues in both cache and BW tests. I think it is more of a stretch to say it is a thermal issue rather than the cache<---->HBCC<---->memory controller; the issue may be the implementation and structure of the HBCC.
That was theoretical vs actual bandwidth
Has AMD actually said what RAM is on RX? The 8-Hi stacks were doing 2200MHz overclocks and 4-Hi could come faster stock.
Not sampling to review sites would be a pretty bad move, and certainly would show a considerable lack of confidence. I think Vega looks like a good 2016 card and that is all.

On a separate note, it appears AMD didn't sample a single site with RX Vega cards. We are 6 days away from launch now; are they repeating the Vega FE situation again?
I didn't complain about the community, I said discussions about AMD have a habit of running off course with pro-Nvidia marketing lines from a handful of posters making it difficult to follow. As I said, a few ignores and the SNR goes up substantially so I've dealt with it. Still sucks for everyone else though.
If any prospective consumer wants an even remotely educated opinion from reading forums, this is not the place to do it. The general understanding of engineering and how electronics function is far too limited, with discussion falling to individuals that only understand what is directly in front of them, with no ability to anticipate the effects of changing the system. That, and a bunch of viral marketers that probably put a bunch of heat on their boss following the whole async debacle.
Not sampling to review sites would be a pretty bad move, but certainly would show a considerable lack of confidence. I think Vega looks like a good 2016 card and that is all.
Same with not sampling to any site. You send the message you don't want consumers to make an informed purchasing choice. It's not uncommon to not sample Pro cards but pretty odd to not sample consumer GPUs.

Honestly, they're better off not sampling any cards vs. sampling to a select few sites. Only sampling to AMD/RTG-friendly sites would be a clear indicator that they don't believe in their own product, IMO.
Fuck, we're like 6 days away from launch and there ain't shit for leaks.
Know what else was shit and didn't have leaks? Bulldozer.
God damn it, AMD.
Sell the Radeon division and dump all the R&D into their CPUs, which actually look to have a rosy picture right now. Radeon has to be a real drag on the R&D budget.

They're not finished with the magical drivers yet that will make it faster than Titan Xp SLI. It takes a while to track down the rainbow-colored unicorns that are the key ingredient to AMD's "fine wine".
On a separate note, it appears AMD didn't sample a single site with RX Vega cards, we are 6 days away from launch now, are they repeating Vega FE situation again?
Unclear what's going on, but I remember the HD5850/70 released early and AMD sandbagged the cards with a dumbed-down driver until Nvidia released their product, so reviewers had to retest the AMD cards against the new Nvidia products, and the AMD cards gained like 10-20% performance just from a new magical driver.
On this subject, does anyone else find it amusing that AMD's reputation was previously for great hardware and shit drivers (back in the 2000-2010 time frame) and now we're at a point where the drivers are pretty great but the hardware has slipped? Heh.
I don't agree with this.
As far as computational power goes, AMD has had the edge for a very long time.
1080TI - tflops = 11.3
Vega - tflops = 12.6
980TI - tflops = 5.6
Fury X - tflops = 8.7
780TI - tflops = 5.0
290x - tflops = 5.6
680 - tflops = 3.1
HD7970 - tflops = 3.8
580 - tflops = 1.5
HD6970 - tflops = 2.7
480 - tflops = 1.3
HD5870 - tflops = 2.7
You can see that in every generation going back a half dozen years, AMD had stronger hardware -- and thus weaker driver support, at least in regards to pulling out all the raw performance potential, since the NVidia card typically had the edge on actual in-game performance. That's why AMD cards have traditionally been better at password cracking or mining in raw performance -- because the compute power is there -- but NVidia has a stronger driver team and has been able to make a higher-performing card with less compute power. I think this also plays into the Fine Wine theme: AMD can make improvements over a longer period of time because they left so much on the table to begin with.
TFLOPs mean absolutely fuck all. This is a forum mostly dedicated to gamers, not HPC use, and personally I couldn't care less about any theoretical performance numbers, only how much they deliver in the real world.

I don't agree with this.
As far as computational power goes, AMD has had the edge for a very long time.
1080TI - tflops = 11.3
Vega - tflops = 12.6
980TI - tflops = 5.6
Fury X - tflops = 8.7
780TI - tflops = 5.0
290x - tflops = 5.6
680 - tflops = 3.1
HD7970 - tflops = 3.8
580 - tflops = 1.5
HD6970 - tflops = 2.7
480 - tflops = 1.3
HD5870 - tflops = 2.7
285 - tflops = 0.7
HD4870 - tflops = 1.2
sources:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units
It might go back further than this - I didn't search further back - but to this point, for the modern GPU era, you can see in every generation going back a decade that AMD had stronger hardware. As such, they must always have had weaker driver support, at least in regards to pulling out all the raw performance potential, since nVidia cards typically had the edge on actual in-game performance.
That's why AMD cards have traditionally been better at password cracking, computational workloads, or raw performance at mining - because the compute power is there. NVidia has a stronger driver team and has been able to make a higher-performing gaming card with less compute power.
I think this hardware capability disparity also plays into the Fine Wine theme. AMD can make improvements over a longer period of time, because they left so much on the table to begin with.
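For anyone sanity-checking the list, those theoretical figures are just shader count × clock × 2 FLOPs per cycle (one fused multiply-add). A quick sketch; the shader counts and reference clocks below are assumptions taken from public spec sheets:

```python
# Theoretical peak FP32 throughput: shaders * clock * 2 FLOPs/cycle (one FMA).
# Shader counts and reference/base clocks are assumed from public spec sheets.
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * clock_mhz * 1e6 * 2 / 1e12

cards = {
    "Fury X":  (4096, 1050),  # GCN3: 4096 SPs @ 1050 MHz
    "980 Ti":  (2816, 1000),  # Maxwell: 2816 CUDA cores @ 1000 MHz base
    "HD 7970": (2048, 925),   # GCN1: 2048 SPs @ 925 MHz
    "GTX 680": (1536, 1006),  # Kepler: 1536 CUDA cores @ 1006 MHz base
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: {peak_tflops(shaders, clock):.1f} TFLOPs")
```

Which lands on the same ~8.7 vs ~5.6 and ~3.8 vs ~3.1 splits as the list above; boost clocks obviously shift the exact numbers a bit.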
Ok old guy lol. The 285 is referencing the Nvidia 285, not AMD's 285.

TFLOPs mean absolutely fuck all. This is a forum mostly dedicated to gamers, not HPC use, and personally I couldn't care less about any theoretical performance numbers, only how much they deliver in the real world.
I'm also either a lot older than you or you're just newer to the scene, but I was thinking back to the 4000-series days when AMD went with a smaller die approach to combat NVIDIA by undercutting their prices. The R9 285 is like 3 years old, lol.
I've been around this a long time too. An avid PC gamer since the late 80s.
TFlops directly relates to compute performance. So yes it is very much related to this exchange and my challenge to your point that AMD "hardware has slipped."
Try to follow along...
In the last 10 years every AMD top tier card 1:1 has had more raw computational power than every nvidia card it competed against ---cards that were released at the same time in the same class ---- as I laid out in the list above. Yet in almost every case (every case?) the Nvidia card had better gaming performance. I was challenging your point that AMD's hardware was inferior. (As opposed to their software/drivers)
My bad, I'm apparently retarded.

Ok old guy lol. The 285 is referencing the Nvidia 285, not AMD's 285.
Look, I understand what you are saying. I get it. And I apologize for misreading the TFLOPS post. What I have been saying is that in practical terms, AMD cards haven't really challenged NVIDIA in performance - in games - the past two generations (1000 series, 900 series), and certainly not in perf/watt. And in the generations prior to that, they were mostly "almost as fast but not quite", with the exception of the 7000 series on release. Meanwhile, the common perception of AMD's drivers was that they were buggy and games were not as compatible (more or less).

I've been around this a long time too. An avid PC gamer since the late 80s.
No biggie. I was afraid you might be offended by my old man jab. I am almost old, fighting the daylights out of it, but not there yet...

My bad, I'm apparently retarded.
I asked around some friends I know; nobody that I know has Vega, despite them receiving ThreadRipper samples 10 days in advance for reviews. And the clock is ticking, 5 days to go now.

Where exactly are you getting this from? I'd be amazed if they didn't send out review samples considering the wait for this card. If anything, sites probably already have them or will get them at some point this week.
They could be dropping them late so that reviewers "had them" but not really enough time to post a review ahead of release. Dunno?

I asked around some friends I know, nobody that I know has Vega, despite them receiving ThreadRipper samples 10 days in advance for reviews. And the clock is ticking, 5 days to go now.
Reviewers are free to inform people they have the cards, and they often do; for example, all reviewers announced they had ThreadRippers in their hands way ahead of reviews, and we are already seeing many leaks for ThreadRipper as well. But Vega is complete radio silence; no one has cards.

It's called an NDA, guys; there are not supposed to be leaks or anything of that sort. Most likely we'll see them a day or two before at the earliest, and they can't tell you if they have them or not.
Could be, they could be planning to sample them just before launch, and then we have to wait for reviews several days later.

They could be dropping them late so that reviewers "had them" but not really enough time to post a review ahead of release. Dunno?
Reviewers are free to inform people they have the cards, they often do so frequently, for example all reviewers announced they have ThreadRippers in their hands way ahead of reviews. we are already seeing many leaks for ThreadRipper as well. But Vega is complete radio silence, no one has cards.
Could be, they could be planning to sample them just before launch, and then we have to wait for reviews several days later.
That is not true, my friend. An NDA can allow you to disclose it, or it will tell you that you cannot disclose it; it's up to the manufacturer how much info they want out. Been in the business for years and it's always been that way. Not uncommon to get them 24 hours ahead of time; you will be able to tell how long they had them by the depth of the reviews done, or by them complaining they didn't have the card very long.