RX 5950 XT, RX 5950, and RX 5800 XT

erek

[H]F Junkie
Joined: Dec 19, 2005 · Messages: 10,785
Excited? This could be it!!!

"The Radeon RX 5950-series is completely new. This could very well be a new large "Navi" silicon, since dual-GPU is dead. Just as AMD carved out the RX 5700 XT, the RX 5700, and the RX 5600 XT, it could carve out the three new SKUs from this silicon. AMD CEO Dr Lisa Su already confirmed that her company is working to upscale the RX 5000-series "Navi" family. The RX 5950-series could be competition for the likes of the RTX 2080 and RTX 2080 Super. It remains to be seen if the three new SKUs are based on the existing RDNA architecture or the new RDNA2 architecture designed for 7 nm EUV, featuring variable-rate shading."

https://www.techpowerup.com/262951/...-5800-xt-new-amd-radeon-skus-reach-regulators
 
Might put the kibosh on the TechSpot hypothesis that the OpenVR result is from an unannounced NVIDIA SKU.
 

I didn’t really buy that theory anyway.

Wonder if Big Navi will be announced in March, and that's why Nvidia is now going to announce Ampere.

edit for spelling
 
No. The Big Navi 2 cards will not be announced until June.
 
Countdown to Computex on 6/1. :D

I'm buying a new video card this year. Hopefully AMD actually delivers something higher than mid-range. Doesn't need to be halo-level.
 

I'm in the same boat. I want to get at least the performance realm of a 2070S, but I think I might wait until we get more info on Navi2. If they release something that will at least match 2080/2080S performance for around a 2070/2070S price tag, then I may jump to Radeon.
 
I'm waiting for Ampere and Navi 2, and unless Intel really knocks it out this year, I'm going AMD for my processor for the first time since Bulldozer.
 
I'm in the same boat. I want to get at least the performance realm of a 2070S

Not sure what counts as "at least the performance realm of a 2070S," but...

[attached benchmark chart: 1440p.png]

Seems pretty close to me.
 
I'm waiting for Ampere and Navi 2, and unless Intel really knocks it out this year, I'm going AMD for my processor for the first time since Bulldozer.

I've already got a MoBo in mind to pair with a 3700X, moving back to AMD for the first time since the X2 6400+ BE that preceded my current 3770K. I'm waiting on a backorder for my 1440p 144Hz monitor that should arrive later this month, so I'm really wanting a GPU that delivers stellar performance at around the $400-500 mark.
The 2070S and 5700 XT stand out right now, but I think I'm going to wait until the 5800- and 5900-series Radeon cards emerge, and then decide which offers the best bang for the buck.
 
Why not compare the 5700 XT Anniversary to the 2070S? That would be a fairer comparison.

I don't think the 5700 XT AE would be a fair comparison even to the 5700 XT. It was a card produced in a limited run, 10% more expensive, and maybe 1 fps faster on average?
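Just to make the value math explicit (the dollar and fps figures below are hypothetical placeholders; only the ~10% price premium and ~1 fps average uplift come from the point above):

```cpp
// Back-of-the-envelope perf-per-dollar check. The dollar and fps values are
// hypothetical placeholders; only the ~10% price premium and ~1 fps average
// uplift come from the discussion above.
#include <cstdio>

int main() {
    double price_xt = 400.0, fps_xt = 90.0;   // hypothetical 5700 XT baseline
    double price_ae = price_xt * 1.10;        // AE: ~10% more expensive
    double fps_ae   = fps_xt + 1.0;           // AE: ~1 fps faster on average
    std::printf("5700 XT   : %.3f fps per dollar\n", fps_xt / price_xt);
    std::printf("5700 XT AE: %.3f fps per dollar\n", fps_ae / price_ae);
    // ~10% more money for ~1% more performance is a strictly worse fps/$ ratio.
    return 0;
}
```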
 
Excited? This could be it!!!

"The Radeon RX 5950-series is completely new. This could very well be a new large "Navi" silicon, since dual-GPU is dead. Just as AMD carved out the RX 5700 XT, the RX 5700, and the RX 5600 XT, it could carve out the three new SKUs from this silicon. AMD CEO Dr Lisa Su already confirmed that her company is working to upscale the RX 5000-series "Navi" family. The RX 5950-series could be competition for the likes of the RTX 2080 and RTX 2080 Super. It remains to be seen if the three new SKUs are based on the existing RDNA architecture or the new RDNA2 architecture designed for 7 nm EUV, featuring variable-rate shading."

https://www.techpowerup.com/262951/...-5800-xt-new-amd-radeon-skus-reach-regulators

That really isn't a sign that these are imminent.

EEC registrations are not reliable.

Here they are registering the Ampere chip, and GTX 2070/2080, before Turing came out:
https://videocardz.com/76967/nvidia-board-partner-registers-ampere-ga104-400-gpu-at-eec
https://www.guru3d.com/news-story/n...ga104-400-ampere-and-lists-2070-and-2080.html
 
At least TPU did some additional research before reposting this bit of info this time. It still doesn't really mean anything. There's no reason to get all worked up until some substantive information comes out to chew on. Names mean nothing.
 
Dual GPU isn't exactly dead. Apple is doing it for their latest Mac Pro. Not just dual Vega 20 but also a dual RX 5700 option is in the pipeline. While generally ignored thus far on the PC side, those GPUs are linked via AMD's Infinity Fabric. The two dual Vega 20 cards even come with bridges like Crossfire used to years ago.

While Apple's cards are proprietary with a 32x PCIe link, I would like to see some of those features hit the PC side. Putting the Thunderbolt controllers on the card is a good move for Thunderbolt adoption and system simplicity. If AMD was going to put a PCIe bridge on the card for another dual GPU model, tossing in a Thunderbolt controller would be straightforward.
 
Dual GPU isn't exactly dead. Apple is doing it for their latest Mac Pro. Not just dual Vega 20 but also a dual RX 5700 option is in the pipeline. While generally ignored thus far on the PC side, those GPUs are linked via AMD's Infinity Fabric. The two dual Vega 20 cards even come with bridges like Crossfire used to years ago.
When paired together, the cards are actually also using an Infinity Fabric link and not Crossfire. I think it's fair to say that Crossfire in its previous iteration (using XDMA, etc.) is dead. I guess they could come out with this Infinity Fabric link for other desktop cards and rebrand that as "Crossfire 5th gen" or whatever. But I'm not holding my breath.
As a side note, I don't think multi-GPU is worth it for desktop use. I noted not too long ago that less than 1% of people have a 2080 or 2080 Ti combined. And generally speaking, multi-GPU was only worth it for people with top-end GPUs who needed even more power. Otherwise a single faster card has more or less always been better, as there aren't any issues with synchronization or overhead. When doing multi-GPU for rendering or science applications, those things aren't issues.
I also don't personally think that there are many game devs that want to even bother with optimizing for multi-GPU. If DX12/Vulkan can really make rendering to multi-GPU seamless on the programming side, then it's a possibility. But we haven't had any modern examples to test that on.

While Apple's cards are proprietary with a 32x PCIe link, I would like to see some of those features hit the PC side. Putting the Thunderbolt controllers on the card is a good move for Thunderbolt adoption and system simplicity. If AMD was going to put a PCIe bridge on the card for another dual GPU model, tossing in a Thunderbolt controller would be straightforward.
Something tells me it's not so simple to throw a Thunderbolt controller onto a card and make it work. There would have to be controller support on any and all motherboards that the video card would be on, and up to this point there are very few motherboards (especially on the consumer side) that have even bothered. The notable ones are Gigabyte's Designare series and a couple of top-end Asus boards that support 1x Thunderbolt 3.
To get a bunch of ports like Apple has done, my understanding (which I admit is limited) is that it would require motherboard architecture changes that I doubt PC manufacturers care to make, as they haven't cared to yet.
 
Yeah doing that on my phone makes it impossible to actually look at.

You'd think it wouldn't look like shit on mobile.
Looks fine here on Chrome mobile after zooming in. Check your browser settings; it may be compressing images through a proxy server to save bandwidth.
 
Plz plz don't let the high-end card be another case of "kinda, almost, not really as fast, but consumes way more power" like it has been.
 

Saw a video yesterday where the team responsible for the greatly reduced power consumption of Zen went over to the Radeon Technologies Group. I think the first iteration of Big Navi will be reasonably competitive; AMD knows they need and want to stay competitive, and the money from Zen-based products is giving them the resources to invest in better R&D overall. Keep in mind that our GPUs make them a lot less money than server chips, for example. However, with Apple migrating to 5 nm, 7 nm capacity really freed up, and AMD is now apparently the biggest customer on 7 nm. It takes time and a hell of a lot of R&D, planning, etc. AMD wants to be on top in both CPUs and GPUs, and it will happen.

I personally am waiting for RDNA2 + Big Navi. I don't care too much about ray tracing, though it would be a nice-to-have; dedicated hardware isn't strictly necessary, as demonstrated by a tech demo. 2020 is going to kick ass, just give it a little while to wake up, get out of bed, and wash the 2019 out of its mouth.
 
When paired together, the cards are actually also using an Infinity Fabric link and not Crossfire. I think it's fair to say that Crossfire in its previous iteration (using XDMA, etc.) is dead. I guess they could come out with this Infinity Fabric link for other desktop cards and rebrand that as "Crossfire 5th gen" or whatever. But I'm not holding my breath.
As a side note, I don't think multi-GPU is worth it for desktop use. I noted not too long ago that less than 1% of people have a 2080 or 2080 Ti combined. And generally speaking, multi-GPU was only worth it for people with top-end GPUs who needed even more power. Otherwise a single faster card has more or less always been better, as there aren't any issues with synchronization or overhead. When doing multi-GPU for rendering or science applications, those things aren't issues.

Yeah, I mentioned that, hence the term 'like'. It is noteworthy that AMD is actually putting forth the effort to create a new GPU-to-GPU link, whereas XDMA was just done over the PCIe bus. This is on top of the transition to PCIe 4.0 on the RX 5700 series.
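To put rough numbers on why a dedicated link is interesting (back-of-the-envelope, per direction, counting only line-encoding overhead, so approximate):

```cpp
// Rough per-direction bandwidth of an x16 slot, to show what XDMA-over-PCIe has
// to share versus a dedicated GPU-to-GPU link. Figures ignore packet overhead
// and only account for the 128b/130b line encoding, so they're approximate.
#include <cstdio>

int main() {
    const double lanes = 16.0;
    const double encoding = 128.0 / 130.0;
    double gen3 = 8.0  * encoding / 8.0 * lanes;   // 8 GT/s per lane  -> ~15.8 GB/s
    double gen4 = 16.0 * encoding / 8.0 * lanes;   // 16 GT/s per lane -> ~31.5 GB/s
    std::printf("PCIe 3.0 x16: ~%.1f GB/s per direction\n", gen3);
    std::printf("PCIe 4.0 x16: ~%.1f GB/s per direction\n", gen4);
    // A bridged link means GPU<->GPU traffic no longer competes with GPU<->CPU
    // traffic for this same budget, whatever the link's own bandwidth ends up being.
    return 0;
}
```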

I also don't personally think that there are many game devs that want to even bother with optimizing for multi-GPU. If DX12/Vulkan can really make rendering to multi-GPU seamless on the programming side, then it's a possibility. But we haven't had any modern examples to test that on.

I have a feeling that with the arrival of ray tracing, there will be renewed interest in the ultra high end, due to the extra resources needed to run ray tracing without compromises. You are correct that game developers won't want to put in that much effort unless that market segment expands a bit. The solution I'm seeing is that AMD and nVidia will move forward with the idea of virtualizing the multiple GPUs as a single logical unit to expose to software. The result is that game developers wouldn't have to do anything special to account for the extra GPU. That's the theory anyway, and nVidia has already made some moves in this direction.
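For what it's worth, some of the API plumbing for that already exists: Vulkan 1.1 has device groups, where the driver can advertise several physical GPUs that get created as a single logical VkDevice, with work spread across them via device masks. A minimal enumeration sketch (purely illustrative, not tied to anything AMD or nVidia have announced):

```cpp
// Illustrative only: enumerating Vulkan device groups, the existing mechanism
// for exposing several physical GPUs as one logical device. Nothing here is
// specific to AMD or nVidia plans; it just shows what "one logical unit" can
// look like on the API side.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;  // device groups are core in Vulkan 1.1

    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    // Ask the driver which physical devices it is willing to link into one group.
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups) g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i) {
        std::printf("device group %u: %u physical GPU(s)\n",
                    i, groups[i].physicalDeviceCount);
        // A group with more than one GPU can be created as a single VkDevice by
        // chaining VkDeviceGroupDeviceCreateInfo into vkCreateDevice; work is
        // then spread across the GPUs with per-submit device masks.
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Whether any driver ever advertises a multi-GPU group for consumer cards is the open question, but the exposure-to-software part is already there.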

Something tells me it's not so simple to throw a Thunderbolt controller onto a card and make it work. There would have to be controller support on any and all motherboards that the video card would be on, and up to this point there are very few motherboards (especially on the consumer side) that have even bothered. The notable ones are Gigabyte's Designare series and a couple of top-end Asus boards that support 1x Thunderbolt 3.
To get a bunch of ports like Apple has done, my understanding (which I admit is limited) is that it would require motherboard architecture changes that I doubt PC manufacturers care to make, as they haven't cared to yet.

There need to be PCIe lanes allocated to the Thunderbolt controller, which is why there would need to be a PCIe bridge chip on the card as well. For a dual GPU card for PCs, such a bridge chip would also be necessary, so the extra cost for Thunderbolt is already factored into the design. Single-GPU setups with Thunderbolt on the GPU board are another matter, as that jumps up the price, and the GPU market is VERY sensitive to pricing. The only way it'd become more mainstream on single GPUs is if it were part of a reference design. Thunderbolt would also need an additional cable to the motherboard to pass interrupt information. The various Asus and Gigabyte boards that come with Thunderbolt cards already have the necessary header, and it is becoming more widespread.
 
Why wouldn't they just use another bridge? There's a reason Nvidia has been using them all these years.
 