Confirmed: AMD's Big Navi launch to disrupt 4K gaming

Apple is a tough customer when it comes to pricing and their standards of retail. We had a MacBook with a recalled/bad battery and were overcharged for the swap. Guess what happened next? Apple came to the shop and shut them down for misrepresentation/fraud. I actually had Apple's fraud investigators call me to document what happened.
 
I can get a 'good 4k experience' with an APU... in Counter-Strike.

The statement is meaningless.
Theoretically I can agree with you, but don't tell me you read this CEO's words as meaning an APU for CS :D
“There’s a lot of excitement for Navi 2, or what our fans have dubbed as the Big Navi”
“Big Navi is a halo product”
“Enthusiasts love to buy the best, and we are certainly working on giving them the best”.
“RDNA 2 architecture goes through the entire stack”
“It will go from mainstream GPUs all the way up to the enthusiasts and then the architecture also goes into the game console products... as well as our integrated APU products.”
“This allows us to leverage the larger ecosystem, accelerate the development of exciting features like ray tracing and more.”
via AMD's CFO, Devinder Kumar
 
The 5700 XT is at rough perf-per-watt parity with current-gen Nvidia. Nvidia will get a process efficiency jump and a uarch jump, or maybe mostly process if more resources went into the migration.

So if Nvidia gets a generous 15% from process and 30% from uarch in perf-per-watt terms, they'll still be at parity in the midrange to high end, if not slightly behind.
RDNA2 forces Nvidia to go with more die area, higher prices, or lower margins just to stick with AMD in their main revenue area, if their [NV] rumored numbers are true.
This upcoming generation might finally be interesting from a competition POV. My question is whether AMD will be enough for 4K120. It'll definitely drive 10-bit, though (I can't believe this is still a petty segmentation issue for NV in 2020).
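To make the compounding explicit (the 15% and 30% figures are this post's hypotheticals, not confirmed numbers), a quick back-of-the-envelope sketch:

```python
# Hypothetical figures from the post above, not confirmed data.
process_gain = 1.15   # assumed perf/W gain from the node shrink
uarch_gain = 1.30     # assumed perf/W gain from the new architecture

nvidia_ppw = process_gain * uarch_gain   # gains multiply rather than add
amd_ppw = 1.50                           # AMD's claimed +50% perf/W for RDNA 2

print(f"Nvidia combined perf/W scaling: {nvidia_ppw:.2f}x")   # ~1.5x
print(f"Nvidia vs AMD: {nvidia_ppw / amd_ppw:.2f}x")          # ~1.0x, i.e. rough parity
```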
 
I feel like that is jumping ahead a bit. We need to get to 4K60 solid (and I mean including ray tracing) or 1440p/21:9 high refresh before we start talking about 4K120.
Fair call; I guess it depends on the games you play. I'd be happy with 90 fps with RT; having run CRTs up to 120Hz in the past, I found anything beyond 100Hz was diminishing returns. That said, with more experience today, maybe it's different.
 
Fair call; I guess it depends on the games you play. I'd be happy with 90 fps with RT; having run CRTs up to 120Hz in the past, I found anything beyond 100Hz was diminishing returns. That said, with more experience today, maybe it's different.
>100Hz or thereabouts becomes less about 'smoothness' in terms of seeing a difference in framerates and more about 'responsiveness' and 'motion resolution'. I doubt I could tell the difference for anything above 100Hz with respect to smoothness, but in terms of responsiveness we could get to 1,000Hz and still be left wanting.
 
>100Hz or thereabouts becomes less about 'smoothness' in terms of seeing a difference in framerates and more about 'responsiveness' and 'motion resolution'. I doubt I could tell the difference for anything above 100Hz with respect to smoothness, but in terms of responsiveness we could get to 1,000Hz and still be left wanting.
This is why, when I wrote my game engine, I decoupled input handling from the graphics (this was back in like 2004, before 100 fps was common)... That way, even if you had low-end hardware and crap frame rates, all inputs were still highly responsive. Higher frame rates still helped, of course, but it didn't 'feel' less responsive even at 25 fps because your inputs were still handled and processed much faster than the actual frame rate. With all the cores available nowadays you'd think this would be common. I guess the synchronization between the two is still difficult to handle, since you can't be moving objects in the middle of a scene render. It takes a bit of extra effort to handle inputs this way because you need to keep track of inputs/values between renders and then make a copy to use during rendering. I would imagine that if someone did something similar nowadays, you could easily process inputs at 1000Hz and render at 100Hz. The responsiveness would be there and your eyes probably couldn't tell the difference.
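Not the poster's actual engine, obviously, but a minimal self-contained sketch of the idea (in Python, with a fake input device standing in for real hardware): one thread samples input at ~1000Hz into a shared state object, while the render loop runs far slower and only ever works from a copied snapshot, so nothing is mutated mid-frame.

```python
import copy
import random
import threading
import time

class InputState:
    def __init__(self):
        self.mouse_dx = 0.0     # mouse movement accumulated since the last frame
        self.fire = False       # whether fire was pressed since the last frame
        self.last_sample = 0.0  # timestamp of the most recent sample

latest = InputState()
lock = threading.Lock()
running = True

def poll_device():
    """Stand-in for reading real hardware; returns fake input events."""
    return {"mouse_dx": random.uniform(-1, 1), "fire": random.random() < 0.01}

def input_loop(poll_hz=1000):
    """Sample input at a fixed high rate, independent of the frame rate."""
    period = 1.0 / poll_hz
    while running:
        event = poll_device()
        with lock:
            latest.mouse_dx += event["mouse_dx"]      # accumulate between frames
            latest.fire = latest.fire or event["fire"]
            latest.last_sample = time.monotonic()
        time.sleep(period)

def render_loop(frames=10, fps=25):
    """'Render' at a much lower rate, using a consistent per-frame snapshot."""
    for _ in range(frames):
        with lock:
            snapshot = copy.copy(latest)   # copy so the frame sees one consistent state
            latest.mouse_dx = 0.0          # reset accumulators for the next frame
            latest.fire = False
        print(f"frame: dx={snapshot.mouse_dx:+.2f} fire={snapshot.fire}")
        time.sleep(1.0 / fps)              # pretend the frame took this long to draw

threading.Thread(target=input_loop, daemon=True).start()
render_loop()
running = False
```

The key design choice is that the render loop resets the accumulators when it takes its snapshot, so even at 25 fps no input is lost between frames; inputs sampled mid-frame just get folded into the next frame's state.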
 
Here's the problem: responsiveness includes output :). It's literally the time between the user making an input (button press, mouse click, mouse movement, etc.) and seeing the result of that input on the screen. The speed and consistency of responses weighs heavily on how games 'feel'. id Software is an example of a company that puts quite a bit of focus here, and whoever Bethesda has making Elder Scrolls and Fallout games is an example of the opposite.
 
Not that you're wrong, but if they increase perf/watt by 50% and then keep power draw the same, shouldn't that result in 50% more performance or thereabouts?
Math-wise, yes, assuming the exact same test method: the same load in a particular game, tested the same way, and not limited by something else such as the CPU, PCIe bandwidth, or anything else.

AMD appears to be more ambiguous than before; all we know is 50% better performance per watt in The Division 2. Is that a 5700 XT (40 CU) running at 275W compared to an 80 CU part running at 275W? Navi 10's gains are very much non-linear from 225W to 275W. Or is it a 40 CU RDNA part at 225W compared to an 80 CU RDNA part at 225W? Or a 40 CU 5700 XT at 225W compared to an 80 CU part at 275W (best case)? Or some other combination, and when you think about it, there can be many. In any case, AMD made the claim in general terms, so I think most would expect it to be generally true and not some typical marketing ploy, which would go over like a lead brick. Probably best to evaluate once AMD and Nvidia release actual hardware and proper testing is done, to see the advances and the problems. Both had serious issues with their last launch.
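To spell out the simple case from the question above (same power, same workload), with made-up wattages and frame rates purely for illustration:

```python
# Made-up numbers to illustrate the perf/W arithmetic; 275 W and 60 fps are not AMD figures.
baseline_power_w = 275.0
baseline_fps = 60.0            # hypothetical Division 2 result for the baseline part
ppw_gain = 1.50                # the claimed +50% performance per watt

baseline_ppw = baseline_fps / baseline_power_w
new_ppw = baseline_ppw * ppw_gain

# Same power draw -> performance scales directly with perf/W:
print(new_ppw * baseline_power_w)             # ~90 fps, i.e. ~50% more performance

# But the same +50% perf/W claim is also satisfied by e.g. +30% fps at ~87% of the power,
# which is why the exact comparison point (CU count, wattage) matters so much:
alt_fps, alt_power = baseline_fps * 1.30, baseline_power_w * 0.867
print((alt_fps / alt_power) / baseline_ppw)   # ~1.5x perf/W with only 30% more fps
```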
 
Here's the problem: responsiveness includes output :). It's literally the time between the user making an input (button press, mouse click, mouse movement, etc.) and seeing the result of that input on the screen. The speed and consistency of responses weighs heavily on how games 'feel'. id Software is an example of a company that puts quite a bit of focus here, and whoever Bethesda has making Elder Scrolls and Fallout games is an example of the opposite.
I find that so true. Doom Eternal is so damn responsive, like you are right there in real time, while in other games I feel like I am in the future controlling the past with a time delay. While it's only a fraction of a second, the mind seems to exaggerate that feeling that something is not right or off.
 
Here's the problem: responsiveness includes output :). It's literally the time between the user making an input (button press, mouse click, mouse movement, etc.) and seeing the result of that input on the screen. The speed and consistency of responses weighs heavily on how games 'feel'. id Software is an example of a company that puts quite a bit of focus here, and whoever Bethesda has making Elder Scrolls and Fallout games is an example of the opposite.
I agree; that's why I said higher frame rates still made it better, but it didn't feel super laggy, because from the time you started your movement until you stopped, it was much more precise than what was displayed on screen. So your movements were consistent and repeatable. Frame times didn't lead to inconsistent or mis-timed movements because they were irrelevant to, and separate from, the input. It's hard to describe, but if you clicked to shoot at someone it would process that request even if you were between rendering frames or in the middle of a frame. Our brains are really good at predicting things like where something will be even if we can't actually see all the in-between steps. But if the timing of an action taking place and where you predict the event should happen don't line up, it throws you off (this is why people focus on frame times, so you don't get that little stutter and miss your target because it fired your weapon late). Now, seeing the frames in between would help (to a point). This is why games at 240Hz feel faster even though our eyes can't really benefit: more consistent input timing, less variance. The timing is closer to real time, without frame-time variations. That's not to say a game at 20 fps will play like one at 100 fps, but it does make it much easier to play at 20 fps when you move your mouse at a specific time and, when the screen finally updates, you are at the same place someone playing at 240Hz would be. If it waited to handle the input until the next frame, your movement would be delayed (jumping, shooting), which makes it much more difficult because our ability to predict is screwed up when it's not consistent.
Anyways, sorry about the side-tracking. Still looking forward to RDNA2 ;). Hopefully it'll bring some much-needed competition.
 
I've seen some recent claims of 50% faster than a 2080 Ti. But 50% faster at what? Is it 50% faster at everything, at every resolution? I doubt that.

Can't wait to see some competitive reviews.
 
DLSS 2.0 seems like gravel in the oil for the claim that AMD will disrupt 4K gaming.

Ampere + DLSS will have a far bigger chance of that.
 
Manufacturers' suggested retail price is usually about 10-15 percent above resellers' price before market action. Retail prices vary between 60 and 90 percent of suggested pricing.

This isn't just video cards, this is all MSRP markets.
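As a rough illustration of those rules of thumb (the $500 reseller cost here is an assumed example, not a real card's pricing):

```python
reseller_cost = 500.0                                               # assumed example, not real data

# MSRP set roughly 10-15% above what resellers pay:
msrp_low, msrp_high = reseller_cost * 1.10, reseller_cost * 1.15    # $550 - $575

# Actual retail landing anywhere from 60% to 90% of MSRP once the market moves:
retail_low, retail_high = 0.60 * msrp_low, 0.90 * msrp_high         # ~$330 - ~$517
print((msrp_low, msrp_high), (retail_low, retail_high))
```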
February 2014 (290X came out in October 2013). This is before Newegg had marketplace sellers. This is why I went with a GTX 780 at the time.

[attached screenshot: Newegg 290X pricing, February 2014]
 
And to think, I just used mine to play video games.

Like I said, the mining period wasn't a normal time for the market. We should use it as a reminder about things being weird, but not how it do.
 
RUMOUR:

Prominent leaker KatCorgi has taken to Twitter with the claim that green team’s Ampere-based RTX 3080 graphics card will boast a 20-percent performance increase over the current flagship, the RTX 2080 Ti. Who knows if that figure is accurate or not, but another well-known leaker, kopite7kimi, has thrown their hat in the ring and claimed that “it’s true.”

https://www.thefpsreview.com/2020/0...ercent-faster-than-rtx-2080-ti-claims-leaker/

Taken along with the other rumor that "Big Navi" is 50% faster than the RTX 2080 Ti, that would in turn make "Big Navi" about 25% faster than the RTX 3080.
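For what it's worth, the arithmetic behind that conclusion, taking both rumours at face value:

```python
# Both figures are rumours, not benchmarks.
rtx_3080 = 1.20   # rumoured: 20% faster than an RTX 2080 Ti
big_navi = 1.50   # rumoured: 50% faster than an RTX 2080 Ti

print(big_navi / rtx_3080)   # ~1.25, i.e. Big Navi roughly 25% faster than a 3080
```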
 
Apple is a tough customer when it comes to pricing and their standards of retail. We had a MacBook with a recalled/bad battery and were overcharged for the swap. Guess what happened next? Apple came to the shop and shut them down for misrepresentation/fraud. I actually had Apple's fraud investigators call me to document what happened.
Yeah, like or dislike their product line, they don't let anybody fuck with their brand, that's for sure.
 
RUMOUR:

Prominent leaker KatCorgi has taken to Twitter with the claim that green team’s Ampere-based RTX 3080 graphics card will boast a 20-percent performance increase over the current flagship, the RTX 2080 Ti. Who knows if that figure is accurate or not, but another well-known leaker, kopite7kimi, has thrown their hat in the ring and claimed that “it’s true.”

https://www.thefpsreview.com/2020/0...ercent-faster-than-rtx-2080-ti-claims-leaker/

Taken along with the other rumor that "Big Navi" is 50% faster than the RTX 2080 Ti, that would in turn make "Big Navi" about 25% faster than the RTX 3080.
The question then is how it will fare against the 3080 Ti or the rumoured 3090... I do hope AMD can put something out, because I would love a choice, but I am far more interested in their answer to the Quadros. I need to replace workstations in mid-2021, and a few options would be great.
 
Taken along with the other rumor that "Big Navi" is 50% faster than the RTX 2080 Ti, that would in turn make "Big Navi" about 25% faster than the RTX 3080.
Landing between the top-tier Nvidia consumer part and the number-two Nvidia consumer part would be an above-average showing for AMD, and probably a spot that both companies are comfortable with. Any faster and Nvidia would have to push things, and would take away AMD's margins in the process.

Personally I'm just about settled on a CX48, so whatever can drive that reasonably is where I want to be.
 
Landing between the top-tier Nvidia consumer part and the number-two Nvidia consumer part would be an above-average showing for AMD, and probably a spot that both companies are comfortable with. Any faster and Nvidia would have to push things, and would take away AMD's margins in the process.

Personally I'm just about settled on a CX48, so whatever can drive that reasonably is where I want to be.

I'm starting to lean toward waiting for 5nm, since it might be coming next year.
 
I'm starting to lean toward waiting for 5nm, since it might be coming next year.
I'm on a 1080 Ti; I can't even really make a good case for something faster. I just want to be able to run a 4K OLED at 120Hz with VRR without compromising the signal chain. Short of that, I'd wait too.
 
I would call that not really playable... with the exception of the Titan RTX 24GB using DLSS, which is just on the cusp of acceptable. That said, it's still impressive :). It's still not anywhere close to playable at native 8K on anything current, but let's be honest: how many could tell the difference between 4K and 8K, and who even has an 8K monitor? The market is very slim for 8K gaming, and things like DLSS are the only way it'll be possible with next-gen stuff. But if it looks good and I can't tell a difference, tomato tomato (ok, that loses its meaning when I type it, just imagine I said them differently). I'm still using a 1080p monitor so none of this really matters to me at the moment. I'm excited to see what comes out in the sub-$200 range this cycle. If I can get 2060 performance for under $200, I'd be happy. I don't care so much for $1000+ cards.
 
4 core / 8 thread = 2010 i7 ~$350
4 core / 8 thread = 2017 i7 ~$340
Introduce Zen
4 core / 8 thread = 2020 i3 ~$150

>AMD didn't disrupt nuffin!!

Exactly. AMD both forced Intel's hand in adding more cores to their lower-tier chips and pushed better-priced performance in the upper tiers.

Had it not been for Ryzen, we'd likely still be buying 4-core non-hyperthreaded i5s for $200+. Such is the state of things to expect when there's no competition.
 
4 core / 8 thread = 2010 i7 ~$350
4 core / 8 thread = 2017 i7 ~$340
Introduce Zen
4 core / 8 thread = 2020 i3 ~$150

>AMD didn't disrupt nuffin!!
Maybe we can credit AMD with influencing Intel's decision to put more than four cores on consumer 14nm parts... but Intel had already planned to move forward with 10nm which was scheduled before Zen was a known quantity.
 
Maybe we can credit AMD with influencing Intel's decision to put more than four cores on consumer 14nm parts... but Intel had already planned to move forward with 10nm which was scheduled before Zen was a known quantity.
Yeah, and they would still be at 4 cores, maybe 6, without AMD.
 
Maybe we can credit AMD with influencing Intel's decision to put more than four cores on consumer 14nm parts... but Intel had already planned to move forward with 10nm which was scheduled before Zen was a known quantity.
AMD brought Intel out of stagnation. Up until a year or two ago, I still knew plenty of people who said "This i5 2500 still works just fine" in their main PC. While that Sandy Bridge generation was a huge step for x86 processors, that was the last huge step Intel made.

They got complacent. You can say it was greed, laziness, mismanagement (<-- my personal belief), but whatever the cause, they weren't pushing to innovate and they had no competition thanks to FX. This caused a slowdown in the desktop market in particular, where people could replace components individually and sell them one at a time. In 2015, Sandy Bridge i5s were still selling for $125-150 on eBay. Intel could slow production of their desktop processors down because there was no real reason for people to upgrade, and they could get what they needed for their older systems on the used market. People wonder how Intel could suddenly start running into desktop CPU shortages as AMD was taking market share; that was how and why.

In comes AMD with Zen, they pushed Intel to increase their cores, yes. They also pushed Intel users to upgrade their machines because software was starting to leverage more threads. They also pushed Intel to squeeze every last drop of performance they can out of this node that they are still stuck on. If locked at the same frequency, Zen 2 has better IPC than Coffee Lake or Comet Lake, so Intel has to push everything to the edge of that 5.xxGHz ceiling of diminishing returns.

Die-hard Intel users should be thanking AMD for throwing a few bags of ice into Intel's warm, comfortable, stagnant bath. We haven't seen this in well over a decade, and many thought it would never happen again. Before Zen, I honestly thought ARM was going to swoop in and make x86 obsolete by 2022 or so. And they probably would have, IMHO.

You talk about 10nm, but when was the original launch date supposed to be? 2017? 2018? Then pushed back to 2019, only to be released with very limited low-power mobile parts. We still haven't seen any compelling parts on 10nm. This is Intel under the gun, too. If AMD didn't see a resurgence, how long before anyone would see 10nm in a desktop?

Every x86 enthusiast should be grateful that AMD came back. Hate AMD, love AMD... doesn't matter. The desktop PC is interesting again.
 
Up until a year or two ago, I still knew plenty of people who said "This i5 2500 still works just fine" in their main PC.
The only reason it wouldn't work fine, or at least in my case, is that the board died. I still have a Sandy-based laptop that works 'just fine', so long as you don't mind the keys on the keyboard that only work occasionally and the grinding noise that the fan makes half the time ;).

that was the last huge step Intel made.
Skylake was a pretty big step, but again, we've long since reached 'peak office desktop'. The bigger differences were in mobile, where AMD shows promise but falls short of leadership.
Intel could slow production of their desktop processors down because there was no real reason for people to upgrade
If you count enthusiasts as 'the market', sure. Unfortunately for your argument, that's not the case. Dell, HP, Lenovo etc. are 'the market'. AMD couldn't supply those three if Intel ceased production today. Not even if they ported Zen to Samsung too.
In comes AMD with Zen, they pushed Intel to increase their cores, yes.
Intel was already pushing for this. Not that it was needed for 99% of their desktop customers then or today.
They also pushed Intel users to upgrade their machines because software was starting to leverage more threads.
Yeah, I upgraded to another Intel CPU that's still faster for my purposes than anything AMD produces, and was available before a competing AMD product was close in performance, let alone having a stable platform!
Die-hard Intel users
Who are these people? This isn't r/AMD here.
We still haven't seen any compelling parts on 10nm.
The laptop parts are extremely compelling. AMD has nothing that competes.
If AMD didn't see a resurgence, how long before anyone would see 10nm in a desktop?
Which / whose definition 10nm?


The desktop PC is interesting again.
If you mean 'interesting' in terms of having something to talk about, okay?

The common desktop workload and gaming scene hasn't budged. AMD has sold slower cores for less, which have needed more expensive, harder to find motherboards and memory kits to keep working. Even today, manufacturers have to guide users through arcane memory variables in motherboard BIOSes that most of us don't even know exist. Getting memory to run at rated speed on an AMD platform may involve dozens of BIOS entries versus just selecting 'XMP' on an Intel platform!

And that's needed if users are going to extract the 'competitive' performance that proponents of the platform use for comparison.
 
The only reason it wouldn't work fine, or at least in my case, is that the board died. I still have a Sandy-based laptop that works 'just fine', so long as you don't mind the keys on the keyboard that only work occasionally and the grinding noise that the fan makes half the time ;).


Skylake was a pretty big step, but again, we've long since reached 'peak office desktop'. The bigger differences were in mobile, where AMD shows promise but falls short of leadership.

If you count enthusiasts as 'the market', sure. Unfortunately for your argument, that's not the case. Dell, HP, Lenovo etc. are 'the market'. AMD couldn't supply those three if Intel ceased production today. Not even if they ported Zen to Samsung too.

Intel was already pushing for this. Not that it was needed for 99% of their desktop customers then or today.

Intel wasn't pushing anything except small incremental upgrades. You keep alluding to software never catching up to the hardware, but how often does software get made for hardware that either doesn't exist, or (for consumer-grade programs) doesn't have a large market share?

I see no signs that Intel was going to release more cores on mainstream platforms any time soon (let alone more than twice the number of cores/threads on their mainstream platforms) prior to Zen, and their own past releases are evidence of that. For 7+ years, the highest-end Intel CPU you could get on a mainstream desktop was 4 cores / 8 threads. Hell, 10-core parts were $1700 on their HEDT boards in 2017. Then Zen comes along with 8 cores for less than an i7... and suddenly Intel says, "Oh, we were gonna do that!"

Yeah, I upgraded to another Intel CPU that's still faster for my purposes than anything AMD produces, and was available before a competing AMD product was close in performance, let alone having a stable platform!

Who are these people?
lol, the cope

The laptop parts are extremely compelling. AMD has nothing that competes.

Which / whose definition 10nm?

If you mean 'interesting' in terms of having something to talk about, okay?

The common desktop workload and gaming scene hasn't budged. AMD has sold slower cores for less, which have needed more expensive, harder to find motherboards and memory kits to keep working. Even today, manufacturers have to guide users through arcane memory variables in motherboard BIOSes that most of us don't even know exist. Getting memory to run at rated speed on an AMD platform may involve dozens of BIOS entries versus just selecting 'XMP' on an Intel platform!

And that's needed if users are going to extract the 'competitive' performance that proponents of the platform use for comparison.

Intel's 10nm... are you playing dumb now? Or is this your way of getting into the "Intel's 10nm is the same or better than TSMC's 7nm" argument... which is just "Yeah, well, my fictional dad can beat up your dad"... lol. By the time that argument even matters, TSMC will be well into 5nm production, maybe even 3nm, considering Intel's 10nm is STILL not being sampled for desktop, last I checked.

Laptops are a different market than what this thread is discussing. AMD still has some ground to gain there. It's not because they can't perform, as you can find benchmarks all over the internet. I'll leave you to come to your own conclusions as to why they are fighting for market share in mobile; I'm sure you have a few, and they aren't at all removed from reality. Here's a hint though:

[attached image: Intel-vs-AMD.png]

I'm not sure where your XMP diatribe comes from. I've had no issue enabling XMP on any 2000- or 3000-series Ryzen at the stated supported 3200 or 3600MHz, even on cheap B450 ASRock boards. Maybe you're still hung up on first-gen Ryzen and 300-series boards having memory issues. That was 3 1/2 years ago. Life moves pretty fast.
I haven't looked into it yet, but I hear RAM is locked to a max of 2933MHz on any board below the Z-series on the LGA 1200 socket. Odd argument to make, I suppose, in defense of a company that still arbitrarily locks out features that are baked into the silicon but aren't accessible unless you pay more for the higher-end SKUs. Hey, at least they finally unlocked SMT for most of their line-up. Baby steps, Intel... baby steps...

I'll recycle your joke from earlier... "This isn't UserBenchmarks here"

I've seen all these arguments before, and none of them hold water with anyone who considers themselves an enthusiast. "Laptops are where it's at... more cores aren't important... AMD is unstable, Intel JUST WORKS!" etc... "Intel just works" is my favorite; wasn't that an Apple slogan 15-20 years ago when they were struggling to sell Macs? More recently it was used by Jensen to push Nvidia's underwhelming RTX features. You aren't trying to sell a Dell here; you do know people can look this up or experience these things for themselves, right? If any of those arguments were true, AMD wouldn't have sprung back at all. Their tiny, delusional fan-base that has been keeping their homes heated with their Bulldozer/290X builds isn't keeping AMD afloat.

You seem to think I'm one of them. I carry NO WATER for any multi-million/billion dollar company. I don't blindly buy from a company just because I have bought from them in the past. I'm about as "anti-brand-name" as you can get, and I advocate researching individual models of anything tech related, or otherwise. I gladly build PCs with both Intel and AMD processors. I just built a system with my recommended 9700/2080S for a buddy who only wants to game on it. I can proudly say I've never built an FX system, though I did build a couple of little FM2 systems as HTPCs, and those would likely have been Intel if their integrated graphics hadn't sucked so badly in comparison. My last "flagship" AMD processor was a Phenom II 965 BE (bought in 2009), up until 2 years ago when I got an R7 2700. In those 9 years, my personal PCs were all Intel: an Ivy Bridge i5-3550, then a Haswell i7-4770K when I wanted newer features on the board. After building a little file server using a $120 R5 2400G, I was thoroughly impressed with the price/performance and built this 2700 for myself. I do more than game, and whatever Adobe products still can't figure out SMT, so I will take advantage of the value brand this time around. I might upgrade again for the 4000 series, but I'll see if this upcoming back-ported Intel design offers anything compelling.

Like it or not, AMD is ahead in performance. Intel still barely holds the crown for gaming, but as software developers (gaming and productivity devs) leverage more threads, Intel is going to slip further and further behind overall unless they can do more than just push toward that 5GHz ceiling. There's no doubt they will catch up, possibly even next year. My only hope is that AMD won't get caught unprepared when that happens, like they have in the past, because Intel will likely go right back to over-charging and under-delivering, just like they have since around Ivy Bridge.

If you want to knock AMD, their discrete GPUs currently kinda suck, particularly at the high end, where they're non-existent. Discrete Vega to the 5700 XT was a side-grade, trading a poorly optimized space heater for "who needs working drivers?"

So I hope these rumors are true, maybe they can kick Nvidia into gear.

But anyone who has paid attention to AMD over the past couple of decades has been burnt by performance "leaks" before. That lesson was learned on the CPU side after FX tanked so badly, but their GPU division might not have gotten the memo that raising expectations and then under-delivering isn't a good marketing strategy.
 
Me too. I really wonder what I should call that... well, let's try this: that's not playable.

Honestly, the bigger takeaway for me (since DLSS is FAR from universal) was the fact that the 5700 XT was outperforming the 2080 Super and almost matching the 2080 Ti at 8K.
 