"You sound like an AMD employee. Sorry, can't trust a word."

No need to trust, just wait and see and buy when you actually need something better, either Nvidia or AMD.
It's true. It was just a random forum post that was repeated often enough that people started believing it. If anything, AMD will be cancelling their high end or pushing it out further to avoid embarrassment. Polaris revealed they can't clock high enough to go hard on the high end.
"The 480 we are looking at is a 100 W card with a ceiling of 150 W. The low power allows everything to be made more cheaply; 170% is due to the new FinFET node and 210% with arch improvements, giving up to 2.8x better perf/W."

Wrong math. It's more like 1.7x from 14nm FinFET and 1.65x from arch improvements (1.7 x 1.65 = ~2.8).
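As a sanity check on the correction above: independent scaling factors multiply. A quick sketch using the figures claimed in this thread (1.7x node, 1.65x architecture - not official numbers):

```python
# Perf/W factors as claimed in the thread (not official figures).
node_gain = 1.7   # claimed 28nm -> 14nm FinFET process gain
arch_gain = 1.65  # claimed architecture improvement gain

# Independent multiplicative factors combine by multiplication.
combined = node_gain * arch_gain
print(f"combined perf/W gain: ~{combined:.1f}x")  # ~2.8x
```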
"Wrong math. It's more like 1.7x from 14nm FinFET and 1.65x from arch improvements (1.7 x 1.65 = ~2.8)."

The arch-improvement figure carries a superscript (a footnote caveat) that makes it an unviable metric for now. So, 1.7x until proven otherwise.
"Wrong math. It's more like 1.7x from 14nm FinFET and 1.65x from arch improvements (1.7 x 1.65 = ~2.8)."

Yep.
"Translation: 'Are you salty yet?'"

You're making me hungry.
So wasn't your last AMD GPU the HD 5000 series, like 7 years ago? Honestly, I am buying a 1080 and I am not bragging about having money; are you trying to make us jealous or some shit?
By the way... what were the contents? Some bad jokes about the Macau event?
"Roy Taylor troll face."

That's his normal face.
I don't know if his was; however, mine definitely was, and I went there strictly as a cheapskate play (which I made no bones about).
I also pointed out exactly WHY the HD5450 made a great cheapskate play - at the time.
However, Intel's HD4400 - a non-discrete GPU - has replaced the HD5450 in the "cheapskate play" category, and entirely due to better DX11 API support.
nVidia Baby Maxwell (GTX 750Ti) and Baby Maxwell 2.0 (GTX 950 and GTX 960 2 GB) remain my picks in the mainstream category - where they have been since their respective launches. That is the category that RX 480 and GTX 1060 are launching into.
This is also the entry-level DX12 space (despite GTX 8xxm - which is priced too tall for the segment).
Intel's made some significant progress on the integrated GPU front. Especially considering the 520 is a DX12 part. It's no add-in card performance wise but a hell of a lot better than I thought it was.
Wow AMD really made us resent them with the 300 and Fury series of graphics cards....
Was curious about what you could expect performance-wise, so I looked it up over at PassMark.
I was a little skeptical the 4400 could compare to a 5450... The Radeon HD 5450 ranks at 812; the Intel HD 4400 ranks at 603. Wow.
I have the HD 520 in my notebook. The HD 520 (rank 421) is three slots above the GeForce 8800 GT (rank 424) and four above the Radeon HD 6570 (rank 425).
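The PassMark positions quoted above can be compared directly; note that a lower rank number means a higher (faster) position in the chart. A small sketch using the ranks as posted:

```python
# PassMark ranks quoted in the posts above (lower rank = faster).
ranks = {
    "Radeon HD 5450": 812,
    "Intel HD 4400": 603,
    "Intel HD 520": 421,
    "GeForce 8800 GT": 424,
    "Radeon HD 6570": 425,
}

# Sort fastest-first by rank number.
chart = sorted(ranks, key=ranks.get)
print(chart[0])  # Intel HD 520 tops this subset
```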
So you're saying this card will run hotter than a reference 290/290X or GTX 480? Because that's the meaning of "hot" in my mind, as I just can't believe 150 watts is hot compared to the 300-watt cards.
"From the presentation, it looks like AMD took a page from Intel's book. They have focused on improving IPC as well as getting better performance from the node shrink. Pascal has been dubbed 'Paswell' in many forums, since it's similar to Maxwell with better overclocking capability from the node switch. From AMD, we might see less overclocking capability, but stronger IPC and therefore better returns from overclocking."
Par for the course for a node shrink.
Clock speed increase provides a % based change in performance from its original performance.
The degree of this will be the same, not better.
The IPC will give the base performance. The % overclock will give a % performance increase.
This is what we are interested in.
How to tell an article is sort of clickbait: it mentions IPC and a GPU in the same sentence.
"So what you are saying is that almost all articles that describe architectural changes between GPUs are clickbait? Like this one: The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2 - 'Moving on, along with the SMM layout changes NVIDIA has also made a number of small tweaks to improve the IPC of the GPU.' You are missing out on several good reviews if you have that opinion."

I had to actually click that one because I had to make sure it was not mentioned in the slides. Phew.
Not sure if we are agreeing or disagreeing here.
IPC at default clocks will give base performance, and overclocking from default will give a % performance increase. (Or, more correctly, reviewers compare FPS at default clock as base performance and then the FPS increase per clock increase.) That's a given, so I don't get the point of telling me this?
Higher IPC will give better performance per clock increase than lower IPC will. My post that you are replying to was about the possibly different approaches AMD and Nvidia might be using in this generation of GPUs: AMD going for higher IPC but lower clocks, while Nvidia sacrifices IPC for higher clocks (compared to Maxwell). It was in response to crazycaves' post about heat. "Hotter" depends on which metric you are comparing: watts vs. frequency, or performance per watt. Nvidia sacrificed IPC for clock speed; did AMD sacrifice clock speed for IPC?
What we (who are we?) are interested in is the final performance result after overclocking, regardless of which approach AMD and Nvidia choose.
Regardless of the method, a % overclock will result in that % max performance increase.
Higher "IPC" doesn't change this; it's not something to consider. (I'm using the term IPC for convenience.)
What we need to consider is the base performance, which includes any IPC, and then how far it overclocks.
Overclocking does not need to take the IPC into account, because it has already been accounted for in the base performance.
If you consider it again, you are squaring its effect, which does not happen in reality.
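The argument above can be illustrated with a toy model (all numbers made up): if performance = IPC x clock, then IPC only sets the baseline, and a given percentage overclock yields the same percentage gain on any card.

```python
def perf(ipc, clock):
    # Toy model: performance proportional to IPC times clock speed.
    return ipc * clock

# Two hypothetical GPUs: high-IPC/low-clock vs. low-IPC/high-clock.
base_amd = perf(1.3, 1266)  # made-up "higher IPC, lower clock"
base_nv = perf(1.0, 1700)   # made-up "lower IPC, higher clock"

# A 10% overclock yields a 10% performance gain on both,
# because the IPC is already baked into the baseline.
oc = 1.10
gain_amd = perf(1.3, 1266 * oc) / base_amd
gain_nv = perf(1.0, 1700 * oc) / base_nv
# gain_amd and gain_nv are both ~1.10, regardless of IPC.
```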
I don't think you get it... You are talking about something different from the post I am replying to. I am talking about the percentage performance increase per percentage overclock on GPUs with different IPCs, while you are talking about the method of determining a baseline and then measuring the overclock afterwards on GPUs regardless of IPC.
You need to take the different IPCs into account when you talk about overclocking, since the gains will differ when you compare them. A % overclock on one GPU doesn't equal a % overclock on another, different GPU in terms of performance.
Nvidia sacrificed IPC for higher clocks. That means it does less per clock but can be clocked higher, resulting in more performance in total.
AMD, from their presentation and talk about IPC gains, combined with the "leaked" clock rates for the RX 480, seems to have gone in another direction. Instead of aiming for higher clocks, they went for their GPUs to do more per clock.
That's a bit interesting, since in the CPU space AMD goes for weaker cores but higher clocks, while Intel has stronger cores and can do more at lower clock speeds. AMD must have taken a page out of Intel's book.
Cutting to the crux of it, you are saying that AMD cards will overclock higher?
On the contrary, I think they will overclock less. They might have more gain clock for clock, but will not overclock as high as Pascal.
Then I can't see where the benefit is.
NVidia have chosen lower IPC with much higher starting clocks, resulting in a bit less overclocking headroom.
AMD have chosen higher IPC with lower starting clocks (as they have in current and previous cards), but they will also get a lesser overclock.
I'm not seeing the win for AMD here.
Wow. Tamlin, be careful, the Nvidia fanboys are going to slay you. Even Kyle liked Nenu's post against you, and Kyle is only the 4th or 5th biggest fanboi on this site :I
"My post was not about the pissing contest between AMD and Nvidia fans, so I haven't even discussed the benefits of either method. There are benefits with both methods. We don't know the win until the cards have been tested out in the wild. That a card overclocks higher doesn't mean it performs better than another card when both are overclocked to the max. There are more factors in play than just the clock speed."

You compared methods used by both manufacturers.
That AMD comes out with a 380X(?) replacement (RX 480) that is less than 150 W (hard limit due to the single 6-pin connector), and it turns out it's between a 970 and a 980 in performance for $199 - I don't feel sad about that. It's actually great!
I was also at first appalled by Kyle's pure negativism. But then, it is not aimed at the product as such; it rather seems to be criticism of AMD's incompetence in producing something better, which seems to be driven (according to Kyle's sources) by AMD/RTG leadership not focusing on making the best product possible, but on some internal political bickering. And them "covering it up" and trying to look like they made the best product possible (though what else could they do, right?).
Frankly, if all they had was a 232 mm² 14nm performance equivalent of the Radeon 390, it would be a pretty underwhelming product line. That is not to say 390-level performance is not great at $199; it surely is. But for AMD as a company, they had better have something more powerful in hand - compare the 1070, which is 3/4 of a 313 mm² 16nm chip. We will hopefully see in 19 days.
As to the Intel rumour, the only way I see it could work would be Intel paying so much for RTG that AMD could repay all the debt, close the business, and pay out to all shareholders some premium over the going share price. Not going to happen. Without the GPU part of the business, AMD would (i) lose their only competitive advantage vs. Intel, and (ii) have no product for notebooks, or even AIOs. Sure, AMD could then do some reverse acquisition with Nvidia to continue the business, but as an Nvidia shareholder, would you say YES to going head-to-head vs. Intel in x86? Not going to happen. What may however be quite possible is JV'ing RTG out to some third party to (i) get cash, and (ii) have less restricted R&D resources.
(Edited typos and grammar)
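For scale, the die-area comparison in the post above works out roughly as follows (the die sizes and the 3/4 cut-down figure are the thread's numbers, not verified specs):

```python
# Die sizes as quoted in the thread (mm^2); the GTX 1070 is described
# as using roughly 3/4 of the full 16nm chip.
polaris_die = 232.0  # RX 480, 14nm
full_16nm_die = 313.0  # full chip behind the GTX 1070

gtx1070_effective = 0.75 * full_16nm_die
print(f"{gtx1070_effective:.1f} mm^2")  # ~234.8 mm^2, close to Polaris
```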
It would cost 4 billion to buy AMD...
More like 7-8 billion.
Yeah, gotta add the debt in there. 5 billion just for that.