TaintedSquirrel ([H]F Junkie, joined Aug 5, 2013, 12,705 messages)
So this guy was able to get across a clear message without calling anyone NVIDIOTS, FAN BOY or SHILL?
Cooler heads prevail.
If I were doing the video, the invectives would have blotted out the sun like those arrows in 300!
Another good question. So since the AMD cards gradually overtake the Nvidia cards, does that mean Nvidia never gets its drivers right, or does that mean AMD has better hardware but needs to learn how to write drivers?
Just imagine the landscape if AMD could get their drivers together on release and not years later. Years later doesn't matter to the bottom line, just to people hanging on for dear life to their aging video cards.
Ok, then AMD has the superior hardware if it overtakes the Nvidia hardware consistently over time.
When you haven't changed uarch in ~5 years and developers get more and more experience with it, that's why.
But at this point, even small changes like primitive discard can send older cards out in the cold very fast.
Ok, then AMD has the superior hardware if it overtakes the Nvidia hardware consistently over time.
Not all the games in the tests were updated, meaning it was AMD improving performance through the drivers themselves. This also gives credence to the idea that Nvidia in the past simply had better drivers at the launch of games. The 480, meanwhile, has improved rather dramatically over a shorter period of time, most likely due to the new hardware additions, which also suggests RTG is running the graphics division more efficiently.
We just don't know if AMD's current cards will age like the 7970 and 290X did in the past. I don't see Fiji aging particularly well in the long run due to its 4GB memory restriction. The past may hint at what the future brings, but there's no guarantee it will flow that way. Yes, the 480 has improved since launch due to drivers.
Did the 480 improve dramatically? Or is it just a matter of which games were benched?
The perf/watt of 14nm LPP parts is close to that of the competition's 28nm parts. That's how big of a change there has been. Five years back the two were equal.
There are pros and cons to both ways, but only one of them is viable long term. And this is why the cards today are what they are.
We just don't know if AMD's current cards will age like the 7970 and 290X did in the past. I don't see Fiji aging particularly well in the long run due to its 4GB memory restriction. The past may hint at what the future brings, but there's no guarantee it will flow that way. Yes, the 480 has improved since launch due to drivers.
Power is much better for AMD but still not in Nvidia's league. Hopefully AMD will be able to gain some more ground with Vega.
AMD just needs to win the race with a good race car and driver, not build a race car that initially loses all the races but could beat all the other cars after two years with a better driver. Except that after two years all the other racers have new, faster cars anyway, so that race car never wins even with the better driver.
Yes, Kyle had a heads-up on the issues prior to launch, but AMD has at least made this GPU sell well enough in the meantime. The 1060 has some disadvantages as well: no SLI, 6GB or less of memory, and an almost unusable 3GB version. Still, when I built my daughter an SFF system, the 1060 won over the 480 for exactly the last reason you mention: power = heat. AMD's lack of competition at the high end for over 6 months is the biggest failure I've seen from them. Vega needs to be good, if not awesome, with a near-perfect launch.
The problem is those things require fundamental changes. Picking low-hanging fruit and focusing on bigger titles isn't going to win the race. Fiji was pretty much obsolete the day it was released.
Polaris 10, for example, uses 33% more memory bandwidth, 30% more transistors, and 37% more power than GP106. From a consumer perspective you can ignore the first two as such, because that's something AMD has to pay for. But it shows the big problem.
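The percentages above are straightforward ratio arithmetic over the two chips' specs. As a sketch, using approximate public figures (the RX 480 power value here is an assumed measured gaming load, not the official board-power rating):

```python
# Rough resource comparison: Polaris 10 (RX 480) vs GP106 (GTX 1060 6GB).
# Spec numbers are approximate public figures; the RX 480 power value is an
# assumed typical measured load, used only for illustration.
specs = {
    "mem_bandwidth_gbps": (256, 192),   # GB/s
    "transistors_billion": (5.7, 4.4),
    "power_watts": (164, 120),          # assumed measured load
}

for name, (polaris, gp106) in specs.items():
    delta = (polaris / gp106 - 1) * 100
    print(f"{name}: Polaris 10 uses {delta:.0f}% more")
# mem_bandwidth_gbps: Polaris 10 uses 33% more
# transistors_billion: Polaris 10 uses 30% more
# power_watts: Polaris 10 uses 37% more
```

With those assumed inputs, the ratios land on the same 33%/30%/37% figures the post quotes.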
If you want to play titles like Divinity: Original Sin 2, Skyrim SE, Cossacks 3, or Dishonored 2, or even their own titles like Civ6 and Warhammer, then there isn't much aging or driver benefit, just to mention a few examples.
Did you actually watch the video?
I'm very happy with my R9 290x 3 years on. The only card that's lasted me longer was my Voodoo 3 way back in the day. I still feel no pressure to upgrade.
It plays Kerbal Space Program and my other games just fine on Linux Mint. I was worried I was going to have to pick up an Nvidia card to play anything on Linux based on all the anti-AMD FUD I'd read.
AMD cards often have more raw processing power than their Nvidia competitors at launch, but the drivers aren't as fine-tuned out of the gate.
Case in point: the Fury X has 8.5 teraflops of processing power while the 1070 has 6.5 teraflops. That explains why the Fury X has been able to gain so much ground on its original launch benchmarks. It had a lot of untapped potential at launch and still does.
The Fury X has 30% more potential via hardware than the 1070, if the drivers on each card were 100% efficient, and the Fury X was released a full year earlier than the 1070. Benchmarks between the two are getting more equitable overall as AMD matures its drivers, and ultimately I expect the Fury X to overtake the 1070 across the board. But by then the elite consumers will have long since left the Fury X and the 1070 for the newest generation or two, so this type of info is known in the deeper-diving community but not often published at the benchmark sites, which typically only show battles between the newest-gen stuff.
I.e., all the benchmarks these days pit the newer RX 480 against Nvidia's stuff, meanwhile the 1.5-year-old Fury X is a MUCH faster card than it was at launch and MUCH faster than the RX 480 in pretty much everything. At ~$300 right now the Fury X is about the best GPU value available (considering all variables, including the price and availability of FreeSync displays).
Generation after generation, AMD has produced the computationally faster cards, but they aren't optimized as well via software/drivers at launch (maybe never fully optimized). In direct raw computational performance they are stronger. That's why the password crackers and miners primarily use AMD cards.
Power is much better for AMD but still not in Nvidia's league. Hopefully AMD will be able to gain some more ground with Vega.
Is that necessarily true though? Power curves are exponential. Equalize performance between a 480 and a 1060 and how much of a power difference still exists? If a 480 with proper drivers ends up 10% ahead of a 1060, the actual perf/watt would probably be in the same neighborhood. Then you have to get past people constantly using results from the very first, less-than-optimal benchmarks to draw a conclusion.
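The "power curves are exponential" point can be sketched with a toy model. Dynamic power scales roughly with frequency times voltage squared, and near the top of the curve voltage has to track frequency, so power behaves roughly like clock cubed. The cubic exponent here is an assumption of the sketch, not a measured figure:

```python
# Toy model: dynamic power ~ f * V^2, and V roughly tracks f near the top
# of the frequency/voltage curve, so P ~ f^3 (the exponent is an assumed
# rule of thumb, not a measurement).
def relative_power(clock_ratio, exponent=3.0):
    """Power relative to baseline when the clock scales by clock_ratio."""
    return clock_ratio ** exponent

# If a card is clocked 10% past what it needs to match a rival, dropping
# the clock back to performance parity saves far more than 10% power.
power_at_parity = relative_power(1 / 1.10)
print(f"Power after equalizing performance: {power_at_parity:.2f}x")
# Power after equalizing performance: 0.75x
```

Under that assumption, a card that ends up 10% ahead on performance could give back roughly a quarter of its power draw at equal performance, which is why the perf/watt gap at matched performance can look much smaller than the stock numbers suggest.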
So since the AMD cards gradually overtake the Nvidia cards, does that mean Nvidia never gets its drivers right or does that mean AMD has better hardware, but needs to learn how to write drivers?
It has a bit to do with what Oz explained about the raw amount of flops. Nvidia is probably better at exposing their hardware for the most part, and AMD is not as fast but eventually gets there.
There is a video out with a Mantle Q&A from APU13 where the driver guy from AMD touches on this problem (DX11 optimization). The AMD driver team has to figure out what exactly is going on before they can improve things, and that is where AMD is slower; even if they're involved with a title in the development phase, it does not always mean they are able to get maximum performance.
It has very little to do with that. Applications are made to run based on the hardware that is out there, and AMD bet the house on shader needs increasing at a much higher rate than the fixed-function units. That did not happen, hence why with Polaris they spent transistors on fixed-function units, and on feeding those units, rather than on increasing the size of their shader array. They know where the issues are and have addressed some of the problems they were having.
AMD's DX11 problem exists across the board in all DX11 games; the games that don't exhibit it are the ones where they specifically worked with the devs and sponsored through their dev program. AMD knows the weaknesses and bottlenecks in their hardware, and they are well equipped to advise game companies in their dev program on the best way to avoid those bottlenecks.
I couldn't care less if my old 7950 is able to marginally beat Kepler cards by 2016 when I can get a 1070 that is more than 3x faster.
Seems Fury models are hurting more than in the past, with only a rare few modern games having the performance one expects from such a card, and usually pretty poor at launch with AAA titles.
To see how dire it is at launch for the Fury X, look at PCGamesHardware, who use PresentMon, frame-analysis tools, etc., go into careful detail, and repeat testing early on when issues are identified. They also do not use canned benchmark capture measurements.
Dishonored 2, Watch Dogs 2, Call of Duty: Infinite Warfare, Forza Horizon 3.
I am just listing the games where the performance could be deemed to have really dropped off, sometimes nearly the same as a Polaris 480, just ahead of it, or even occasionally behind it; I am not listing games where the 980 Ti could be deemed to outperform the Fury X, such as Shadow Warrior 2 by 20%.
I think that most of the performance increase of AMD cards has more to do with the Nvidia cards not having a base of 4GB of VRAM. Like he said in the video, the 600 series only came with 2GB of VRAM (they were available with 4GB, so I'm not sure why he didn't show that in the vid when he did say the AMD cards came with multiple memory configs). I just upgraded from a 4GB GTX 670 and it ran nearly everything but the newest AAA titles at max settings. I'd be surprised if a 3GB 7970 was able to pull more than a few fps more than a 4GB 670/680.
No, that is only one aspect of it. Kepler and the 6xx/7xx generations aging relative to the GCN 1.0 7xxx/2xx series is due to a large combination of factors, many of which were one-off situational factors.
At the basic level the situation is really this: graphics cards at launch need to be priced against each other based on what they offer at that time, because future benefits are often extremely difficult to convey as a selling point on a tangible level to consumers. So this is what determines at the onset which graphics cards compete with each other for consumers. But that does not necessarily mean those graphics cards are equal in other aspects or considerations (for example, where each company positions them).
Also, a side note here: FLOPS is becoming one of those buzz-jargon terms that people throw around with very little understanding. The FLOPS rating for a graphics card is just a theoretical calculation based on clock speed x cores x ops per core (basically 2 for FP32 for everyone). A graphics card with half the memory speed, for example, will have the same FLOPS rating. It is not even derived from a standardized benchmark (supercomputer FLOPS ratings for the TOP500, for example, are at least produced by actually running Linpack).
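That formula is simple enough to compute directly. As a sketch, plugging in the public shader counts and clocks for the two cards discussed earlier in the thread (Fury X at 4096 shaders and 1050 MHz, GTX 1070 at 1920 cores and a 1683 MHz boost clock):

```python
# Theoretical FP32 FLOPS = shader cores * clock * 2 ops per cycle
# (one fused multiply-add per core per clock).
def peak_tflops(cores, clock_mhz, ops_per_cycle=2):
    return cores * clock_mhz * 1e6 * ops_per_cycle / 1e12

fury_x = peak_tflops(4096, 1050)    # R9 Fury X: 4096 shaders @ 1050 MHz
gtx_1070 = peak_tflops(1920, 1683)  # GTX 1070: 1920 cores @ 1683 MHz boost
print(f"Fury X: {fury_x:.2f} TFLOPS, GTX 1070: {gtx_1070:.2f} TFLOPS")
# Fury X: 8.60 TFLOPS, GTX 1070: 6.46 TFLOPS

# Note the formula never references memory bandwidth: halving the memory
# clock would leave these ratings unchanged, which is exactly the point.
```

Those are the ~8.5 and ~6.5 TFLOPS figures quoted upthread, and the calculation makes the limitation obvious: it says nothing about memory, fixed-function throughput, or how efficiently a real workload can keep the shaders fed.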
I pretty much 100% agree with this statement. It has been the reason for years that we update game patches and retest cards with pretty much every driver rev. It is why you do not see a graph with 30 video cards from us... because we don't re-use old results, as those have the tendency to change greatly over time for both Red and Green, and can be very game-dependent as well.
Basically, this "fine wine" thing has ALWAYS existed, for both AMD and Nvidia.
AMD/Nvidia driver optimizations, or game engine patch optimizations improve performance on existing cards. And developers who now have previous experience with a card can reuse the same tricks with new games, delivering high performance on-release.
But newer games are still a mixed bag. Some games are just more like an expansion pack, mostly reusing an engine. But others are completely new tech, and a completely new optimization problem. And in many of these newer games, the older cards hit bottlenecks just not seen by newer generations. As CSI_PC pointed out above, sometimes it's hard to make new features work on older tech.
So labeling this as "fine wine" is just self-selective marketing bullshit. To make you forget that AMD will be around a year late with their consumer GTX 1080 competitor.