From ATI to AMD back to ATI? A Journey in Futility @ [H]

It's true. It was just a random forum post that was repeated enough that people started believing it. If anything, AMD will be cancelling their high end or pushing it out further to avoid embarrassment. Polaris revealed they can't clock high enough to go hard on the high end.

Seems to be rumours.
You could be right. They also are supposed to have two different chips for Vega, so they may just stagger them and push one early when it's good enough but with limited headroom.
 
The 480 we are looking at is a 100W card with a ceiling of 150W. The low power allows everything to be made more cheaply; 170% is due to the new FinFET node and 210% with arch improvements, giving up to 2.8x better perf/W.
Wrong math. It's more like 1.7x from 14nm FinFET and 1.65x from arch improvements (1.7 x 1.65 = ~2.8).
 
Wrong math. It's more like 1.7x from 14nm FinFET and 1.65x from arch improvements (1.7 x 1.65 = ~2.8).
The arch-improvements figure carries a footnote superscript that makes it an unviable metric for now. So, 1.7x until proven otherwise.
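For what it's worth, spelling out the corrected math (assuming the two gains compose multiplicatively, which is how perf/W factors combine):

    1.70 (14nm FinFET) x 1.65 (arch) = 2.805 ≈ 2.8x perf/W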
 
Just got this from AMD and found it humorous. :ROFLMAO:

[attached image: upload_2016-6-7_11-11-10.png]
 
So wasn't your last AMD GPU the HD5000 series, like 7 years ago? Honestly, I am buying a 1080 and I am not bragging about having money; are you trying to make us jealous or some shit?

I don't know if his was; however, mine definitely was, and I went there strictly as a cheapskate play (which I made no bones about).

I also pointed out exactly WHY the HD5450 made a great cheapskate play - at the time.

However, Intel's HD4400 - a non-discrete GPU - has replaced the HD5450 in the "cheapskate play" category, entirely due to better DX11 API support.

nVidia Baby Maxwell (GTX 750Ti) and Baby Maxwell 2.0 (GTX 950 and GTX 960 2 GB) remain my picks in the mainstream category - where they have been since their respective launches. That is the category that RX 480 and GTX 1060 are launching into.

This is also the entry-level DX12 space (despite the GTX 8xxM, which is priced too high for the segment).
 
I don't know if his was; however, mine definitely was, and I went there strictly as a cheapskate play (which I made no bones about).
I also pointed out exactly WHY the HD5450 made a great cheapskate play - at the time.
However, Intel's HD4400 - a non-discrete GPU - has replaced the HD5450 in the "cheapskate play" category, entirely due to better DX11 API support.
nVidia Baby Maxwell (GTX 750Ti) and Baby Maxwell 2.0 (GTX 950 and GTX 960 2 GB) remain my picks in the mainstream category - where they have been since their respective launches. That is the category that RX 480 and GTX 1060 are launching into.
This is also the entry-level DX12 space (despite the GTX 8xxM, which is priced too high for the segment).

Was curious about what you could expect performance-wise, so I looked it up over at PassMark.

I was a little skeptical the 4400 could compare to a 5450... The Radeon 5450 ranks at 812. The Intel HD 4400 ranks at 603 (lower rank = faster). Wow.

Have the HD 520 in my notebook. The HD 520 (Rank 421) is three slots above the GeForce 8800 GT (Rank 424), four above the Radeon HD 6570 (Rank 425).

Intel's made some significant progress on the integrated GPU front, especially considering the 520 is a DX12 part. It's no add-in card performance-wise, but a hell of a lot better than I thought it was.
 
So you're saying this card will run hotter than a ref 290/290X or GTX 480? Because that's the meaning of hot in my mind, as I just can't believe 150 watts is hot compared to the 300 watt cards.
 
Intel's made some significant progress on the integrated GPU front, especially considering the 520 is a DX12 part. It's no add-in card performance-wise, but a hell of a lot better than I thought it was.

Actually, has anyone seen Intel iGPUs in DX12 benchmarks?
 
Wow, AMD really made us resent them with the 300 and Fury series of graphics cards....

That's quite a non-sequitur. In any case, I understand the Fury series, but what's wrong with the 300s (which includes Hawaii, Tonga, Pitcairn...) exactly? Rather broad statement there...
 
Was curious about what you could expect performance-wise, so I looked it up over at PassMark.

I was a little skeptical the 4400 could compare to a 5450... The Radeon 5450 ranks at 812. The Intel HD 4400 ranks at 603 (lower rank = faster). Wow.

Have the HD 520 in my notebook. The HD 520 (Rank 421) is three slots above the GeForce 8800 GT (Rank 424), four above the Radeon HD 6570 (Rank 425).

Intel's made some significant progress on the integrated GPU front, especially considering the 520 is a DX12 part. It's no add-in card performance-wise, but a hell of a lot better than I thought it was.

The HD4400 (GT1) is still a low-end Haswell iGPU. The true advancements in Intel iGPUs are found in Broadwell and Skylake, which double the performance of the old GT2-GT3 iGPU solutions.
 
So you're saying this card will run hotter than a ref 290/290X or GTX 480? Because that's the meaning of hot in my mind, as I just can't believe 150 watts is hot compared to the 300 watt cards.

From the presentation, it looks like AMD took a page from Intel's book. They have focused on improving IPC as well as getting better performance from the node shrink. Pascal has been dubbed Paswell in many forums, since it's similar to Maxwell with better overclocking capabilities from the node switch. From AMD, we might see less overclocking capability, but stronger IPC and therefore better returns from overclocking.

We see this in the CPU space. Intel has much stronger IPC and you get better results from overclocking clock for clock, while AMD has gone the brute-force way with weaker cores and higher frequency.

It's going to be interesting to see if this is true when the cards are finally in the hands of reviewers. Will AMD be the ones with stronger IPC and more return from overclocking, while Nvidia has weaker IPC but higher clocks this time? :)

AMD will be hotter then, if you compare heat vs. frequency, but then again, AMD will have better performance per watt due to stronger IPC.


GP104 Chip: Nvidia Dumps IPC For Clockspeed
Pascal Secrets: What Makes Nvidia GeForce GTX 1080 Fast?
 
From the presentation, it looks like AMD took a page from Intel's book. They have focused on improving IPC as well as getting better performance from the node shrink.
Par for the course for a node shrink.

Pascal has been dubbed Paswell in many forums, since it's similar to Maxwell with better overclocking capabilities from the node switch. From AMD, we might see less overclocking capability, but stronger IPC and therefore better returns from overclocking.
A clock speed increase provides a percentage-based change from its original performance.
The degree of this will be the same, not better.
The IPC gives the base performance. The % overclock gives a % performance increase on top of it.
This is what we are interested in.
 
Par for the course for a node shrink.


A clock speed increase provides a percentage-based change from its original performance.
The degree of this will be the same, not better.
The IPC gives the base performance. The % overclock gives a % performance increase on top of it.
This is what we are interested in.

Not sure if we are agreeing or disagreeing here.

IPC at default clocks will give base performance, and overclocking from default will give a % performance increase. (Or, more correctly, reviewers are comparing FPS @ default clock as base performance and then the FPS increase per clock increase.) That's a given, so I don't get the point of telling me this?

Higher IPC will give better performance per clock increase than lower IPC will. My post that you are replying to was about the possibly different approaches AMD and Nvidia might be using in this generation of GPUs: AMD going for higher IPC but lower clocks, while Nvidia sacrifices IPC for higher clocks (compared to Maxwell). This was in response to crazycaves' post about heat. "Hotter" depends on which metric you are comparing: watts vs. frequency, or performance per watt. Nvidia sacrificed IPC for clockspeed; did AMD sacrifice clockspeed for IPC?

What we (who are we?) are interested in is the final performance result after overclocking, regardless of which approach AMD and Nvidia choose.
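As an aside, that reviewer methodology boils down to a one-line calculation. A minimal sketch (the clocks and FPS figures below are made up for illustration, not from any actual review):

    # Quantifying how well an overclock scales, from measured FPS.
    base_clock, base_fps = 1266.0, 60.0   # stock clock (MHz) and measured FPS
    oc_clock, oc_fps = 1380.0, 64.5       # overclocked clock and measured FPS

    clock_gain = oc_clock / base_clock - 1   # ~9.0% higher clock
    fps_gain = oc_fps / base_fps - 1         # ~7.5% higher FPS

    # 1.0 means perfectly clock-bound; below 1.0 means something else
    # (memory bandwidth, CPU, etc.) is absorbing part of the gain.
    efficiency = fps_gain / clock_gain
    print(f"{clock_gain:.1%} clock -> {fps_gain:.1%} FPS (efficiency {efficiency:.2f})")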
 
How to tell an article is sort of clickbait: it mentions IPC and a GPU in the same sentence.

So what you are saying is that almost all articles that describe architectural changes between GPUs are clickbait? Like this one:

Moving on, along with the SMM layout changes NVIDIA has also made a number of small tweaks to improve the IPC of the GPU.
The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2

You are missing out on several good reviews if you have that opinion. :)
 
So what you are saying is that almost all articles that describe architectural changes between GPUs are clickbait? Like this one:


The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2

You are missing out on several good reviews if you have that opinion. :)
I had to actually click that one because I had to make sure it was not mentioned in the slides. Phew.
But yes, the term "IPC" has hardly been applicable to GPUs since god knows how long ago. Throughput is, but it's often workload-defined (see: AMD GPUs and memecoins).
 
Not sure if we are agreeing or disagreeing here.

IPC at default clocks will give base performance, and overclocking from default will give a % performance increase. (Or, more correctly, reviewers are comparing FPS @ default clock as base performance and then the FPS increase per clock increase.) That's a given, so I don't get the point of telling me this?

Higher IPC will give better performance per clock increase than lower IPC will. My post that you are replying to was about the possibly different approaches AMD and Nvidia might be using in this generation of GPUs: AMD going for higher IPC but lower clocks, while Nvidia sacrifices IPC for higher clocks (compared to Maxwell). This was in response to crazycaves' post about heat. "Hotter" depends on which metric you are comparing: watts vs. frequency, or performance per watt. Nvidia sacrificed IPC for clockspeed; did AMD sacrifice clockspeed for IPC?

What we (who are we?) are interested in is the final performance result after overclocking, regardless of which approach AMD and Nvidia choose.
Regardless of the method, a % overclock will result in that % maximum performance increase.
Higher "IPC" doesn't change this; it's not something to consider. (I'm using the term IPC for convenience.)
What we need to consider is the base performance, which includes any IPC, and then how far it overclocks.

Overclocking does not need to take the IPC into account because it has already been accounted for in the base performance.
If you consider it again, you are squaring its effect, which does not happen in reality.
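To make the no-double-counting point concrete, a toy sketch (the IPC and clock values are hypothetical):

    # Toy model: performance = IPC * clock. A given % overclock yields the
    # same % gain whether IPC is high or low, because IPC is already baked
    # into the base number -- counting it again would square its effect.
    for ipc in (1.0, 1.5):                # hypothetical per-clock throughput
        base = ipc * 1000.0               # base performance at 1000 MHz
        oc = ipc * 1100.0                 # +10% clock
        print(f"IPC {ipc}: {oc / base - 1:.0%} performance gain")
    # Both iterations print "10%": the relative gain tracks the clock alone.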
 
Regardless of the method, a % overclock will result in that % maximum performance increase.
Higher "IPC" doesn't change this; it's not something to consider. (I'm using the term IPC for convenience.)
What we need to consider is the base performance, which includes any IPC, and then how far it overclocks.

Overclocking does not need to take the IPC into account because it has already been accounted for in the base performance.
If you consider it again, you are squaring its effect, which does not happen in reality.

I don't think you get it... You are talking about something different than the post I am replying to. I am talking about the percentage performance increase per percentage overclock on GPUs with different IPCs, while you are talking about the method of how you determine the baseline and then how you measure the overclock afterwards on GPUs, regardless of IPC.

You need to take the different IPCs into account when you talk about overclocking, since the gains will differ when you compare them. A % overclock on one GPU doesn't equal a % overclock on another, different GPU in terms of performance.

Nvidia sacrificed IPC for higher clocks. That means it does less per clock, but can be clocked higher and result in more performance in total.

AMD, from their presentation and talk about IPC gains, combined with the "leaked" clock rates for the RX 480, seems to indicate that they went in another direction. Instead of aiming for higher clocks, they went for their GPUs to do more per clock.

That's a bit interesting, since in the CPU space AMD goes for weaker cores but higher clocks, while Intel has stronger cores and can do more with lower clockspeed. AMD must have taken a page out of Intel's book. :)
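To put a number on "doesn't equal in terms of performance": with the same % overclock, the absolute FPS gained differs once the bases differ. Another toy sketch, all figures invented for illustration:

    # Same 10% overclock, different absolute gains when IPC (and hence
    # base FPS) differs.
    def fps(ipc, clock_mhz):
        return ipc * clock_mhz / 20.0     # toy linear performance model

    for name, ipc, clock in (("high-IPC/low-clock", 1.3, 1266),
                             ("low-IPC/high-clock", 1.0, 1733)):
        base, oc = fps(ipc, clock), fps(ipc, clock * 1.10)
        print(f"{name}: {base:.1f} -> {oc:.1f} FPS (+{oc - base:.1f})")

Both parts gain exactly 10% relative, but the absolute FPS added (and the ceiling each reaches) depends on where the base sits.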
 
I don't think you get it... You are talking about something different than the post I am replying to. I am talking about the percentage performance increase per percentage overclock on GPUs with different IPCs, while you are talking about the method of how you determine the baseline and then how you measure the overclock afterwards on GPUs, regardless of IPC.

You need to take the different IPCs into account when you talk about overclocking, since the gains will differ when you compare them. A % overclock on one GPU doesn't equal a % overclock on another, different GPU in terms of performance.

Nvidia sacrificed IPC for higher clocks. That means it does less per clock, but can be clocked higher and result in more performance in total.

AMD, from their presentation and talk about IPC gains, combined with the "leaked" clock rates for the RX 480, seems to indicate that they went in another direction. Instead of aiming for higher clocks, they went for their GPUs to do more per clock.

That's a bit interesting, since in the CPU space AMD goes for weaker cores but higher clocks, while Intel has stronger cores and can do more with lower clockspeed. AMD must have taken a page out of Intel's book. :)

Cutting to the crux of it, you are saying that AMD cards will overclock by a higher %?
 
Cutting to the crux of it, you are saying that AMD cards will overclock higher?

On the contrary, I think they will overclock less. They might have more gain clock for clock, but will not overclock as high as Pascal.
 
On the contrary, I think they will overclock less. They might have more gain clock for clock, but will not overclock as high as Pascal.
Then I can't see where there is a benefit.

NVidia have chosen lower IPC with much higher starting clocks, resulting in a bit less overclock headroom.
AMD have chosen higher IPC with lower starting clocks (as they have in current and previous cards). But they will also get a lesser overclock.
I'm not seeing the win for AMD here.
 
Then I can't see where there is a benefit.

NVidia have chosen lower IPC with much higher starting clocks, resulting in a bit less overclock headroom.
AMD have chosen higher IPC with lower starting clocks (as they have in current and previous cards). But they will also get a lesser overclock.
I'm not seeing the win for AMD here.

My post was not about the pissing contest between AMD and Nvidia fans, so I haven't even discussed the benefits of either method. There are benefits with both methods. We won't know the winner until the cards have been tested out in the wild. :)

That a card overclocks higher doesn't mean it performs better than another card when both are overclocked to the max. There are more factors in play than just the clock speed.
 
Wow. Tamlin, be careful, the nvidia fanboys are going to slay you. Even Kyle liked Nenu's post against you, and Kyle is only the 4th or 5th biggest fanboi on this site :I

Lol! I don't think it's for or against me. Hopefully this is still a discussion, with perhaps a difference of opinion. :) Kyle is not a fanboi though. He's probably pissed about being ignored by AMD and naturally a bit more on the negative side now, but he is still a "straight shooter". I think in the time I have been reading [H], he has lashed out against every manufacturer that has done something that rubbed him the wrong way. That's a bit of what I like about [H]. Might not always be right, but at least honest.
 
AMD being run into the ground on the CPU side has only been bad for enthusiasts. When that happens on the GPU side, it is just as bad for enthusiasts. I speak my mind, which I believe to be the truth. If you think I am mad about a trip to China, which I personally would not have attended anyway, you are a fool.
 
My post was not about the pissing contest between AMD and Nvidia fans, so I haven't even discussed the benefits of either method. There are benefits with both methods. We won't know the winner until the cards have been tested out in the wild. :)

That a card overclocks higher doesn't mean it performs better than another card when both are overclocked to the max. There are more factors in play than just the clock speed.
You compared methods used by both manufacturers.
I added some clarity.
 
There are benefits with both methods.
There are more factors in play than just the clock speed.

Indeed. I like to look at history. We've already been down this path on the CPU side of things. AMD was on a winner but faltered due to some pretty dirty tricks by Intel and bad moves by AMD.

I'd bet a RAM stick that we'll see this play out again if Nvidia keeps at it. Perhaps around 10nm or so. Nvidia is big on marketing, and clocks are free marketing. They may fall on their own sword if not careful.
 
I think a lot of AMD negativity was created by the hype train from AMD itself and the subsequent failure to deliver on said hype. I've been on Team Red for a decade, but that doesn't make me an AMD fanboi who can only see AMD good, nVidia bad. If I go with another Red Team card, it's going to be tempered by the fact that I'm not just buying a graphics card; I'm buying a graphics card with a driver stack associated with it. Regardless of how much DX12/Vulkan shove the responsibility back onto the game developer, these puppies still need drivers, and, well, if AMD and RTG split, how much support am I going to have for my shiny red card if I go that route again?

Really getting tired of that fanboi moniker being flung around here (and, well, just about everywhere).
 
That AMD comes out with a 380X(?) replacement (RX 480) that is less than 150W (a hard limit due to the single 6-pin connector: 75W from the slot plus 75W from the 6-pin) and it turns out it's between a 970 and a 980 in performance for $199 - I don't feel sad about that. It's actually great!

I was also at first appalled by Kyle's pure negativism. But then, it is not aimed at the product as such; rather, it seems to be criticism of AMD's incompetence at producing something better, which seems to be driven (according to Kyle's sources) by AMD/RTG leadership not focusing on making the best product possible, but rather on some internal political bickering. And by them "covering it up" and trying to look like they made the best product possible (though what else could they do, right?).

Frankly, if all they had was a 232mm2 14nm performance equivalent of the Radeon 390, it would be a pretty underwhelming product line. That does not mean 390-level performance is not great at $199; it surely is. But AMD as a company had better have something more powerful in hand - compare to the 1070, which is 3/4 of a 313mm2 16nm chip. We will hopefully see in 19 days.

As to the Intel rumour, the only way I see it could work would be Intel paying so much for RTG that AMD could repay all the debt, close the business, and pay all shareholders some premium over the going share price. Not going to happen. Without the GPU part of the business, AMD would (i) lose their only competitive advantage vs. Intel, and (ii) have no product for notebooks, or even AIOs. Sure, AMD could then do some reverse acquisition with nVidia to continue the business, but as an nVidia shareholder, would you say YES to going head-to-head vs. Intel in x86? Not going to happen. What may however be quite possible is JV'ing RTG out to some 3rd party to (i) get cash, and (ii) have less restricted R&D resources.

(Edited typos and grammar)
 
I was also at first appalled by Kyle's pure negativism. But then, it is not aimed at the product as such; rather, it seems to be criticism of AMD's incompetence at producing something better, which seems to be driven (according to Kyle's sources) by AMD/RTG leadership not focusing on making the best product possible, but rather on some internal political bickering. And by them "covering it up" and trying to look like they made the best product possible (though what else could they do, right?).

Frankly, if all they had was a 232mm2 14nm performance equivalent of the Radeon 390, it would be a pretty underwhelming product line. That does not mean 390-level performance is not great at $199; it surely is. But AMD as a company had better have something more powerful in hand - compare to the 1070, which is 3/4 of a 313mm2 16nm chip. We will hopefully see in 19 days.

As to the Intel rumour, the only way I see it could work would be Intel paying so much for RTG that AMD could repay all the debt, close the business, and pay all shareholders some premium over the going share price. Not going to happen. Without the GPU part of the business, AMD would (i) lose their only competitive advantage vs. Intel, and (ii) have no product for notebooks, or even AIOs. Sure, AMD could then do some reverse acquisition with nVidia to continue the business, but as an nVidia shareholder, would you say YES to going head-to-head vs. Intel in x86? Not going to happen. What may however be quite possible is JV'ing RTG out to some 3rd party to (i) get cash, and (ii) have less restricted R&D resources.

(Edited typos and grammar)


Well, the RTG deal is after RTG spins off, not before. AMD has no position, and no company is interested in any part of AMD, because of their debt (they could just wait till AMD folds and pick up the pieces at a much lower price). Also, AMD can't spin off RTG because of its debt, at least until Zen is released and is good enough to put AMD on a strong footing; otherwise the only profitable portion of AMD would be RTG. Also, the companies that would possibly be interested in RTG would be companies already in the graphics business, and if they aren't interested in PC discrete graphics then RTG is going to change quite a bit.
 
More like 7-8 billion.

Yeah, gotta add the debt in there. 5 billion just for that.

Buzzkill. Here I was thinking that I could clean out my sofa and buy AMD. Oh, the things that I would do to Roy Taylor...

"Roy, you're cleaning Kyle's house today. Wear the French maid outfit he likes so much."
 