Radeon 6000 series speculation

GTX 1070...would the benefits of the faster card be worth it over the benefits of G-Sync?
The 1070 is still a respectable card, and G-Sync is good to have.
I went from having a FreeSync monitor and an RX 580 to a 1060, and really missed FreeSync. So much so that I went back to AMD just for the FreeSync itself.
If you're tied to your monitor that much, maybe it makes sense to use the G-Sync features and stick with Nvidia. *I am a FreeSync user who prefers AMD junk.
 
GTX 1070...would the benefits of the faster card be worth it over the benefits of G-Sync?

After going back and forth over the past few years to find the best upper-end experience, I've found that while adaptive sync is nice, raw FPS is preferable. Ideally, stick with your 1070 for a few months and get a 3070 or 3060. But if you really can't wait and want to go AMD, get a 6700 next month (assuming you can find one).
 
No one cares about ray tracing.
Other than ray tracing, I have no reason to upgrade. I'm happy with my Radeon VII otherwise.
The only goal/game I care about playing in the immediate future is Cyberpunk 2077. And ideally I'd play it maxed out.
As it stands, the easiest way to do this might be just to buy a PS5. Otherwise it's a graphics card capable of giving me all the eye candy.
 
I'm thinking the same, as I have no need to replace my current graphics card for performance reasons. The Series X is looking very promising, especially since keyboard and mouse should be supported. Hate the friggen controllers; always have.
 
I'm thinking the same, as I have no need to replace my current graphics card for performance reasons. The Series X is looking very promising, especially since keyboard and mouse should be supported. Hate the friggen controllers; always have.

The controllers are pretty good for games that don't need fast-paced aiming. I use the Xbox One X controller for my PC to great satisfaction, and I have large guitar-playing hands.
 
Other than ray tracing, I have no reason to upgrade. I'm happy with my Radeon VII otherwise.
The only goal/game I care about playing in the immediate future is Cyberpunk 2077. And ideally I'd play it maxed out.
As it stands, the easiest way to do this might be just to buy a PS5. Otherwise it's a graphics card capable of giving me all the eye candy.

PS5 and "maxed out" settings is funny.
Good joke...
 
Not a chance. Several of the YouTubers and journos who got hands-on with the game said it could still do better while running on an RTX 2080 Ti, and there was a lot of eye candy running at lower settings or in cheat mode (sparse sampling on reflections, object pop-in, noise) in DF's analysis video.
I guarantee this will look better and run smoother at ultra settings on a 3xxx-series card than on PS5. I wouldn't be surprised if it also ran slightly better on XSX, since it uses RT and is a third-party game.
 
PS5 and "maxed out" settings is funny.
Good joke...
The recommended settings for Cyberpunk are pretty low, so it might come close to being maxed out on PS5...
Not a chance. Several of the YouTubers and journos who got hands-on with the game said it could still do better while running on an RTX 2080 Ti, and there was a lot of eye candy running at lower settings or in cheat mode (sparse sampling on reflections, object pop-in, noise) in DF's analysis video.
I guarantee this will look better and run smoother at ultra settings on a 3xxx-series card than on PS5. I wouldn't be surprised if it also ran slightly better on XSX, since it uses RT and is a third-party game.

While the PS5 will definitely not have the power of top-end RDNA 2, the PS5's greatest advantage is optimization. It for sure will have the best mixture of fidelity to performance (tile rendering, load times, etc.). Ray tracing will be included. It will likely be at least a locked 30 fps at 4K. And it will require zero effort on the part of users.

It's more than likely that no graphics card will be capable of maxing CP2077 out at launch. It might be the first title in a while that is more demanding than what is available. And that is only compounded by the current supply issues with Nvidia, and likely with AMD too when they launch their cards at the end of the month.

If all this has to be spelled out for you, chuckle it up.
 
I read today that there will be no board partner cards for AMD at launch; a bad sign indeed, if true.
 
While the PS5 will definitely not have the power of top-end RDNA 2, the PS5's greatest advantage is optimization. It for sure will have the best mixture of fidelity to performance (tile rendering, load times, etc.). Ray tracing will be included. It will likely be at least a locked 30 fps at 4K. And it will require zero effort on the part of users.

It's more than likely that no graphics card will be capable of maxing CP2077 out at launch. It might be the first title in a while that is more demanding than what is available. And that is only compounded by the current supply issues with Nvidia, and likely with AMD too when they launch their cards at the end of the month.

If all this has to be spelled out for you, chuckle it up.

People forget the fastest GPU when the Witcher 3 launched was the $1000 Titan X. It managed a grand total of 25fps at 4K with hairworks OFF.

Ray tracing is even worse than Hairworks. Expect your $1500 3090 to be lucky to hit above 30fps at 4K with it off, and with it on it will probably be good for 1080p60...lol
 
People forget the fastest GPU when the Witcher 3 launched was the $1000 Titan X. It managed a grand total of 25fps at 4K with hairworks OFF.

Ray tracing is even worse than Hairworks. Expect your $1500 3090 to be lucky to hit above 30fps at 4K with it off, and with it on it will probably be good for 1080p60...lol
Wonder if it will support mGPU? Seems like one of those games that should do that.
 
People forget the fastest GPU when the Witcher 3 launched was the $1000 Titan X. It managed a grand total of 25fps at 4K with hairworks OFF.

Ray tracing is even worse than Hairworks. Expect your $1500 3090 to be lucky to hit above 30fps at 4K with it off, and with it on it will probably be good for 1080p60...lol
I don’t see a 3080/3090 struggling too much to hit 4K/60 (w/o RT) in CP2077 if CDPR recommends only a GTX 1060 for 1080p High Settings. A 3080/3090 is approximately 4x faster than a GTX 1060 and 4K is 4x the resolution of 1080p. But the moment you enable the RT features it’ll bring those cards to their knees. Hoping DLSS 2.0 will allow for decent performance with some of the RT features enabled.

We're still probably at least 2 generations away from adequate RT performance (and fully ray-traced games are probably a decade away still).
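As a quick sanity check of that back-of-envelope math, here's a tiny sketch. The ~4x speedup, the assumption that FPS scales roughly inversely with pixel count, and the 60 fps baseline are all rough assumptions, not benchmarks:

```python
# Rough back-of-envelope check of the pixel-count argument above.
# Assumptions (not measurements): a 3080/3090 is ~4x a GTX 1060, FPS scales
# roughly inversely with pixel count, and the 1060 holds ~60 fps at 1080p High.

pixels_1080p = 1920 * 1080                 # 2,073,600 pixels
pixels_4k    = 3840 * 2160                 # 8,294,400 pixels
res_scale    = pixels_4k / pixels_1080p    # = 4.0 exactly

gpu_speedup    = 4.0                       # assumed 3080/3090 vs GTX 1060
fps_1060_1080p = 60.0                      # hypothetical baseline

est_fps_3080_4k = fps_1060_1080p * gpu_speedup / res_scale
print(f"4K / 1080p pixel ratio: {res_scale:.1f}x")
print(f"Estimated 4K fps on the faster card: {est_fps_3080_4k:.0f}")
```

So under those simplifications the 4x speedup and the 4x pixel count roughly cancel, which is the point the post is making.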
 
I don’t see a 3080/3090 struggling too much to hit 4K/60 (w/o RT) in CP2077 if CDPR recommends only a GTX 1060 for 1080p High Settings. A 3080/3090 is approximately 4x faster than a GTX 1060 and 4K is 4x the resolution of 1080p. But the moment you enable the RT features it’ll bring those cards to their knees. Hoping DLSS 2.0 will allow for decent performance with some of the RT features enabled.

We're still probably at least 2 generations away from adequate RT performance (and fully ray-traced games are probably a decade away still).

Well, it's gonna be E3 downgrade part 2 then, because they'd have to turn down a lot of details given the 2080 Ti could barely handle the alpha build of the game at 1080p.
 
Well, it's gonna be E3 downgrade part 2 then, because they'd have to turn down a lot of details given the 2080 Ti could barely handle the alpha build of the game at 1080p.

If you notice something, CDPR have changed the way they present stuff.
The Witcher 3 had a lot of pre-release PC footage.
But the shitty consoles could not run those settings, so they had to downgrade everyone's version... because of crap-consoles.

This time around they have been sharing a LOT of crap-console image quality in their previews.
Why?
So console-dorks will not whine about a "downgrade"... but I will be surprised if they don't whine about the higher settings on PC anyway.
 
Well, it's gonna be E3 downgrade part 2 then, because they'd have to turn down a lot of details given the 2080 Ti could barely handle the alpha build of the game at 1080p.
Not necessarily; as you mentioned, it's the alpha build. A lot can be done to increase performance without impacting the visual fidelity of the game. Optimized game code and game-ready GPU drivers will help tremendously. And the 2080 Ti could barely handle the game at 1080p with some RT features turned on. That's expected. Turn those RT features off and it's a different ball game.
 
Would it make sense to buy a Big Navi card if I have a 1440p 144Hz G-Sync monitor (not FreeSync compatible)?... The monitor (ViewSonic XG2703-GS IPS) is only 2 years old, so I have no intention of buying a new one anytime soon.
IMO, if you have a monitor with an actual G-Sync module, you should get another Nvidia card. VRR technologies give you lower input lag, and G-Sync will sync every frame, not just a range.
 
IMO, if you have a monitor with an actual G-Sync module, you should get another Nvidia card. VRR technologies give you lower input lag, and G-Sync will sync every frame, not just a range.
G-Sync only syncs in a range also... whatever that range happens to be. The only real difference is that G-Sync has stricter requirements to meet, so the range is guaranteed to be at least that good, whereas FreeSync monitors can have very small ranges. It's not as if G-Sync is superior to FreeSync if the monitors have the same range; it's just that with the FreeSync monitor you have more choices of GPU. If you bought into Nvidia G-Sync, then you're stuck with Nvidia GPUs until you upgrade/change monitors (or can live without VRR).
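If it helps to picture what "syncing in a range" means, here's a toy sketch (my own illustration, not G-Sync's or FreeSync's actual logic) of how a VRR window behaves, including the low-framerate compensation trick monitors use below their minimum:

```python
# Illustrative sketch of a VRR window (not any vendor's actual algorithm).
# Within [vrr_min, vrr_max] the refresh tracks the frame rate; below vrr_min,
# low-framerate compensation (LFC) repeats frames to stay in range; above
# vrr_max you're back to fixed refresh (V-Sync or tearing).

def effective_refresh(fps: float, vrr_min: float, vrr_max: float) -> str:
    if fps > vrr_max:
        return f"{vrr_max:.0f} Hz fixed (above VRR window)"
    if fps >= vrr_min:
        return f"{fps:.0f} Hz, synced 1:1"
    # LFC: show each frame N times so the panel stays inside its window
    multiplier = 2
    while fps * multiplier < vrr_min:
        multiplier += 1
    return f"{fps * multiplier:.0f} Hz (each frame shown {multiplier}x)"

# Example: a hypothetical 48-144 Hz FreeSync window
for fps in (160, 90, 40):
    print(fps, "->", effective_refresh(fps, 48, 144))
```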

Anywho, Zen 3 announcement soon, then RDNA 2 in a few weeks; can't wait to see how AMD's cache system handles the narrow memory bus. Hopefully it works well. In the past (pre-RDNA), bandwidth heavily affected speeds; that got much better with RDNA's caching design, so hopefully with the RDNA 2 cache design they can get by with lower bandwidth = cheaper to build (simpler board layouts, as well as cheaper memory chips).
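For what it's worth, the back-of-envelope reason a big on-die cache can make up for a narrow bus is that only cache misses have to go out to GDDR. Here's a toy model with made-up numbers (the traffic figure, the hit rates, and the 256-bit / 16 Gbps bus are assumptions for illustration, not AMD specs):

```python
# Toy model of why a big on-die cache can compensate for a narrow memory bus.
# Only cache misses go out to GDDR, so the DRAM bandwidth actually needed
# shrinks with the hit rate. All numbers are made up for illustration.

def required_dram_bw(raw_traffic_gbs: float, cache_hit_rate: float) -> float:
    """DRAM bandwidth needed when a fraction of accesses hit the on-die cache."""
    return raw_traffic_gbs * (1.0 - cache_hit_rate)

raw_traffic = 700.0   # GB/s the shaders would demand with no cache (assumed)
bus_256bit  = 512.0   # GB/s from 16 Gbps GDDR6 on a 256-bit bus

for hit_rate in (0.0, 0.3, 0.5, 0.7):
    need = required_dram_bw(raw_traffic, hit_rate)
    verdict = "fits" if need <= bus_256bit else "exceeds"
    print(f"hit rate {hit_rate:.0%}: needs {need:.0f} GB/s -> {verdict} a 256-bit bus")
```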
 
Not necessarily; as you mentioned, it's the alpha build. A lot can be done to increase performance without impacting the visual fidelity of the game. Optimized game code and game-ready GPU drivers will help tremendously. And the 2080 Ti could barely handle the game at 1080p with some RT features turned on. That's expected. Turn those RT features off and it's a different ball game.

No, I'm talking about early on, when it could barely handle 1080p before RT was even introduced. If the 2080 Ti could handle the game at 1080p with RT on, they wouldn't have turned on DLSS to upsample to 1080p in the later RT performance reveals. At the time they were still working to make the game run fluidly at 1080p with RTX on using a 2080 Ti-class card.
 
No, I'm talking about early on, when it could barely handle 1080p before RT was even introduced. If the 2080 Ti could handle the game at 1080p with RT on, they wouldn't have turned on DLSS to upsample to 1080p in the later RT performance reveals. At the time they were still working to make the game run fluidly at 1080p with RTX on using a 2080 Ti-class card.
The only report I've seen of it struggling at 1080p with DLSS 2.0 was when RT features were enabled. Nothing about it struggling without RT enabled; do you have a source?

“We play in Full HD – a fact that we first have to digest a bit, given the RTX 2080 Ti in the Alienware presentation computer, there should actually be a little more in the pixel density,” wrote PCGH’s Philipp Reuther. “In addition, DLSS is activated, so the internal render resolution is again less than 1,920 × 1080 pixels. But ray tracing is also active, in our preview version in the form of shadows, ambient coverage and indirect lighting (Ray Traced Diffuse Illumination).”

https://www.thefpsreview.com/2020/0...077s-ray-tracing-at-1080p-even-with-dlss-2-0/

I believe Metro uses the same RT as mentioned in the article and there are massive performance hits when it’s enabled.
 
Wow, if you watch the PS5 teardown, the Zen 2 + RDNA 2 chip is small, and that is a cut 40 or 60 CU version? It includes 8 full-fat x86 cores and still comes in pretty small compared with a modern GPU die.

That heatsink though....

 
Wow, if you watch the PS5 teardown, the Zen 2 + RDNA 2 chip is small, and that is a cut 40 or 60 CU version? It includes 8 full-fat x86 cores and still comes in pretty small compared with a modern GPU die.

That heatsink though....


Well, the PS5 and XSX have 8 Zen 2 cores, but they have cut down on the L3 cache significantly compared with the desktop version to save on die space. The PS5 has a 40 CU GPU with 36 of them active to improve yield.
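If anyone's curious why fusing off a few CUs helps yield so much, here's a toy binomial model. The 2% per-CU defect rate is made up purely for illustration, not a real figure:

```python
# Toy illustration of why shipping 36-of-40 active CUs improves usable yield.
# Assume defects land randomly and independently in CUs; a die is sellable as
# a 36-CU part if at most 4 CUs are defective (they get fused off).
import math

def p_at_most_k_bad(n_cu: int, p_bad: float, k: int) -> float:
    """Probability that at most k of n_cu CUs are defective (binomial)."""
    return sum(
        math.comb(n_cu, i) * p_bad**i * (1 - p_bad)**(n_cu - i)
        for i in range(k + 1)
    )

n_cu, p_bad = 40, 0.02          # 2% chance any given CU is bad (assumed)
perfect = p_at_most_k_bad(n_cu, p_bad, 0)   # need all 40 CUs good
salvage = p_at_most_k_bad(n_cu, p_bad, 4)   # 36-active SKU tolerates 4 bad

print(f"Dies with all 40 CUs good:   {perfect:.1%}")
print(f"Dies usable as a 36-CU part: {salvage:.1%}")
```

With those made-up numbers, only around 45% of dies would be fully perfect, but nearly all of them are usable once 4 CUs can be fused off.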
 
It will likely be at least a locked 30 fps at 4K. And it will require zero effort on the part of users.

You do realize this is a tech forum based heavily on overclocking and user effort?

I mean yes, you can get an inferior experience relatively hitch-free on a console, but a decent PC will outperform consoles (even the new ones, despite console marketing claims).
 
Very nice video; very impressed with how Sony designed the physical layout of the PS5, plus the ease of disassembly. While the outside shape may appear oddball to some, it should help keep it from being sandwiched with stuff on top, etc., and overheating. The vent ducts are also designed to help prevent them from being blocked.
 
The only report I've seen of it struggling at 1080p with DLSS 2.0 was when RT features were enabled. Nothing about it struggling without RT enabled; do you have a source?

“We play in Full HD – a fact that we first have to digest a bit, given the RTX 2080 Ti in the Alienware presentation computer, there should actually be a little more in the pixel density,” wrote PCGH’s Philipp Reuther. “In addition, DLSS is activated, so the internal render resolution is again less than 1,920 × 1080 pixels. But ray tracing is also active, in our preview version in the form of shadows, ambient coverage and indirect lighting (Ray Traced Diffuse Illumination).”

https://www.thefpsreview.com/2020/0...077s-ray-tracing-at-1080p-even-with-dlss-2-0/

I believe Metro uses the same RT as mentioned in the article and there are massive performance hits when it’s enabled.


I just hope it's like Control, where you can enable just some parts. For instance, global illumination RT I really get off to, and it was only like a 10% impact. Reflections I couldn't care less about, and that's a huge ~40% impact.

There's a good chance people will be able to turn on some forms of RT with a small impact performance-wise.

I found Control to be best with medium settings + RT rather than ultra + RT off. In BF5 some forms of RT on "low" destroy rasterized on "ultra".

This is where a HardOCP "user experience" type of review would have been great...
 
I just hope it's like Control, where you can enable just some parts. For instance, global illumination RT I really get off to, and it was only like a 10% impact. Reflections I couldn't care less about, and that's a huge ~40% impact.

There's a good chance people will be able to turn on some forms of RT with a small impact performance-wise.

I found Control to be best with medium settings + RT rather than ultra + RT off. Even some forms of RT on "low" destroy rasterized on "ultra".
I’ll have to find the source but I believe CP2077 is exactly like that where you can toggle each individual RT feature. I remember reading that or seeing it in a YouTube video.
 
From the Zen 3 launch:
[Image: RX 6000 series Big Navi]
[Image: Borderlands 3 at 4K, ~60 fps, with a 5900X and Big Navi]
 
[Attached benchmark screenshot]


I believe this is roughly the performance between a 2080 Ti and 3080. Closer to a 3080 than a 2080 Ti.
 
People forget the fastest GPU when the Witcher 3 launched was the $1000 Titan X. It managed a grand total of 25fps at 4K with hairworks OFF.

Ray tracing is even worse than Hairworks. Expect your $1500 3090 to be lucky to hit above 30fps at 4K with it off, and with it on it will probably be good for 1080p60...lol

This release is made with being able to run on middle-of-the-road 2013 hardware in mind.

If the game is able to run on PS4/Xbox (at, say, 720p low settings), I would imagine there will be good FPS at reasonably high settings on an RTX 3090 without ray tracing, barring some really bad port issue (but considering how close the PlayStation and Xbox have become to PCs, that doesn't sound likely).
 
[Attached benchmark screenshot]

I believe this is roughly the performance between a 2080 Ti and 3080. Closer to a 3080 than a 2080 Ti.

Looks like ~20-25% faster than a 2080 Ti and about 10-15% slower than a 3080. Of course overclocking and TDP also matter. It could be that AMD has lower power consumption and, with overclocking, can get close to the 3080, which has basically no OC headroom. Or it could already be clocked to the moon. I guess where it lands in terms of OC headroom will determine its value relative to Nvidia for gamers.
 
What is a Board Partner?
Same as what some people call an AIB. Honestly, AIB isn't even the correct terminology, but it's what people use and know, so if it's understood, then I guess that makes it the right terminology :). AIB = Add-In Board... so it's more like a 'custom' board. Technically the reference design from AMD and the FE from Nvidia are add-in boards... as they are boards you put in your PC. So 'board partner' is probably more accurate, although less common.
 
Same as what some people call an AIB. Honestly, AIB isn't even the correct terminology, but it's what people use and know, so if it's understood, then I guess that makes it the right terminology :). AIB = Add-In Board... so it's more like a 'custom' board. Technically the reference design from AMD and the FE from Nvidia are add-in boards... as they are boards you put in your PC. So 'board partner' is probably more accurate, although less common.
Having board partners would be a good thing, I would think. Just like Nvidia. Cooler, quieter GPUs FTW, right? I know the prices would go up, but that's fine with me. I'm all for an AMD custom-designed board from EVGA!
 
Looks like ~20-25% faster than a 2080 Ti and about 10-15% slower than a 3080. Of course overclocking and TDP also matter. It could be that AMD has lower power consumption and, with overclocking, can get close to the 3080, which has basically no OC headroom. Or it could already be clocked to the moon. I guess where it lands in terms of OC headroom will determine its value relative to Nvidia for gamers.

Hmmm, I honestly wonder if what AMD presented is the performance of their top-end 6000 SKU. Why give a performance 'teaser' of your top-end card that everyone can see is slightly slower than the RTX 3080?? Unless they really plan to undercut on price. I think this might have been a teaser of their ~6800 SKU, and then later this month they pull out a ~6900 that has even higher performance. Guess we will find out in a few weeks....
 
Having board partners would be a good thing, I would think. Just like Nvidia. Cooler, quieter GPUs FTW, right? I know the prices would go up, but that's fine with me. I'm all for an AMD custom-designed board from EVGA!
They do have board partners; they just aren't going to use them until after launch. You will see custom boards in Q1 2021 with higher power draws, I'm sure. AMD is only doing 275W, which leaves a large gap between it and the 3080.
 