Who's planning to buy a 5700(?) GPU?

Even a mid-range x570 board is going to come in north of $200. PCI-E 4.0 requires extremely short copper runs in order to keep the signal intact... it can go 3.5-4" or so, but after that you need either a redriver (costly) or a PCB with much thicker copper traces, which means many more layers, which then drives the cost of the board up...

Part of the issue is that AMD didn't just do a first-gen launch with PCI-E 4.0 hanging off the CPU for, say, one x16 slot and an x4 NVMe slot... They went ahead and made the entire CPU-to-southbridge link use 4.0, which is a long run that again requires a higher quality, higher layer-count board.

AMD did this as a slap in the face to Intel, since Intel is stuck on PCI-E 3.0, especially for their CPU-SB interconnect. This gives them a huge marketing point, as they are once again the "first" in the field.

Forgot to add that every board needs a VRM setup that can run a 16-core at least at a mild OC, which means a higher number of VRM phases and higher quality IR parts.
 
PCI-e 4 won't matter much right now. While I am excited to see something new from AMD, PCI-e 4, while great, is not something I will be basing any purchasing decisions on for a mid-range GPU. It will be helpful for NVMe drives immediately, and for GPUs later.
 
This gives them a huge marketing point, as they are once again the "first" in the field.

It's a 'marketing point', but it should be pointed out that it is solely a marketing point that will not make a performance difference in end-user workloads.

Which is unfortunate, but it is what it is.
 
It's a 'marketing point', but it should be pointed out that it is solely a marketing point that will not make a performance difference in end-user workloads.

Which is unfortunate, but it is what it is.


From a GPU perspective, unless AMD has some crazy secret sauce that is going to make multi-die appear as a single GPU to the OS (à la chiplets, but on separate PCBs for now), then I agree to a point.


I realize there is a snowball's chance in hell of that happening with launch Navi, but it could happen in time. If RDNA removes the 4096-shader limit and AMD sees good scaling, then a massive 6k+ shader large Navi next year could very well need it.

There is a legit removal of the NVMe bottleneck, and we have already seen quite a few drives announced with tons more to come. It won't make your games load that much quicker, but it will benefit those who use the storage for work/content creation purposes.


The major thing that I love is the updating of the CPU-to-SB link with a doubling of bandwidth. This link has been proven to be a huge bottleneck at times, and with an NVMe drive hanging off the SB, it's a welcome change.

Having more lanes on the platform, with each of those lanes having double the bandwidth, is a welcome change, and not even the most zealous Intel fan (not saying you are, just in general) can deny that.
 
I realize there is a snowball's chance in hell of that happening with launch Navi, but it could happen in time.

I remain pretty optimistic about this. In particular, we should see a resurgence of multi-GPU support for VR; essentially, both GPUs rendering the same FOV side by side. That should be doable without imparting input lag or frametime issues due to frame render staggering. I'm betting that if the VR market were there, the GPU vendors could support it with today's components.

Beyond multi-GPU, I don't think it would be hard to do a multi-die solution that's transparent to software. I just don't think it's economical, yet. What we might see is a setup where multiple smaller GPU 'chiplets' have local HBM memory, and are then connected to a die that serves as a bulk GDDR controller as well as the PCIe interface and the display output. May even have the video transcoding circuitry, or most of it.

There is a legit removal of the NVMe bottleneck, and we have already seen quite a few drives announced with tons more to come. It won't make your games load that much quicker, but it will benefit those who use the storage for work/content creation purposes.

Well, I see that there's a bottleneck if we're benchmarking, but I'd need to see a real workload where the extra speed matters - we're talking about going from 3.5GB/s reads to ~4.5GB/s reads with today's drives, perhaps 7.5GB/s per drive tops - and I don't think it's going to make any difference soon. Don't mind being wrong, though.
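Rough napkin math on where those numbers come from (128b/130b line encoding only, so real drives land a bit below the ceiling; the same math applies to the x4 chipset link):

Code:
# Back-of-the-envelope PCIe x4 throughput in GB/s (128b/130b encoding only;
# real drives come in lower due to controller and protocol overhead).
def pcie_x4_gbps(transfers_per_s):
    per_lane = transfers_per_s * (128 / 130) / 8  # GT/s -> GB/s per lane
    return 4 * per_lane

print(f"PCIe 3.0 x4: {pcie_x4_gbps(8):.2f} GB/s")   # ~3.94 GB/s ceiling (drives hit ~3.5)
print(f"PCIe 4.0 x4: {pcie_x4_gbps(16):.2f} GB/s")  # ~7.88 GB/s ceiling (launch drives ~4.5-5)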

The major thing that I love is the updating of the CPU-to-SB link with a doubling of bandwidth. This link has been proven to be a huge bottleneck at times, and with an NVMe drive hanging off the SB, it's a welcome change.

This would mean more if, say, Thunderbolt were more ubiquitous on AMD platforms. As it stands, there's very little to take advantage of here versus the cost of PCIe 4.0 over PCIe 3.0 - the board price floor basically doubles from US$100 to US$200. To really push the current interconnect, you'd need enterprise-grade gear, and at that point you might as well do it right with HEDT or a proper workstation setup.

Having more lanes on the platform, with each of those lanes having double the bandwidth, is a welcome change, and not even the most zealous Intel fan (not saying you are, just in general) can deny that.

I'm not going to disagree that it's welcome, from a 'cool technology' standpoint, but I am skeptical of technology that has little to no utility. Intel is guilty of this as well, without a doubt; it's just that PCIe 4.0 seems like more of a Pyrrhic victory for AMD. Getting more cores into a consumer socket without incurring notable side effects, to me, is a much bigger deal.
 
I remain pretty optimistic about this. In particular, we should see a resurgence of multi-GPU support for VR; essentially, both GPUs rendering the same FOV side by side. That should be doable without imparting input lag or frametime issues due to frame render staggering. I'm betting that if the VR market were there, the GPU vendors could support it with today's components.

Beyond multi-GPU, I don't think it would be hard to do a multi-die solution that's transparent to software. I just don't think it's economical, yet. What we might see is a setup where multiple smaller GPU 'chiplets' have local HBM memory, and are then connected to a die that serves as a bulk GDDR controller as well as the PCIe interface and the display output. May even have the video transcoding circuitry, or most of it.



Well, I see that there's a bottleneck if we're benchmarking, but I'd need to see a real workload where the extra speed matters - we're talking about going from 3.5GB/s reads to ~4.5GB/s reads with today's drives, perhaps 7.5GB/s per drive tops - and I don't think it's going to make any difference soon. Don't mind being wrong, though.



This would mean more if, say, Thunderbolt were more ubiquitous on AMD platforms. As it stands, there's very little to take advantage of here versus the cost of PCIe 4.0 over PCIe 3.0 - the board price floor basically doubles from US$100 to US$200. To really push the current interconnect, you'd need enterprise-grade gear, and at that point you might as well do it right with HEDT or a proper workstation setup.



I'm not going to disagree that it's welcome, from a 'cool technology' standpoint, but I am skeptical of technology that has little to no utility. Intel is guilty of this as well, without a doubt; it's just that PCIe 4.0 seems like more of a Pyrrhic victory for AMD. Getting more cores into a consumer socket without incurring notable side effects, to me, is a much bigger deal.


Where are you getting the 100% cost increase in mobos for x570? I thought the leaks we saw were saying 30% due to the thicker PCB and redrivers needed. If the ODMs decide to tack on a bunch of features and that is the result of the extra 70+% you are talking about, that's a different story.

Regardless, I am staying on x470 for now. I'd rather put $160-200 into 32GB of 3400-3600 once I see how the I/O die handles it. If there are no meaningful gains over 3200, then I will add another 16GB kit and put the rest toward a new 1TB NVMe to replace my 512GB 960 Evo boot drive. That will allow me to move the single 256GB MLC-based Samsung data drive I have into a StoreMI config for an 8-10TB data and game storage setup.

I agree it's a bigger deal. I'd guess that 86%+ of the workstation market could get by with dual-channel DDR4 (especially since 3200 is the new "floor"), at speeds of 3600~4000MT/s, for the 12- and 16-core setups we are going to see more of tomorrow, I hope!

As far as the PCI-E 4.0 backbone goes, that single NVMe drive can make use of it. Or a 3rd GPU doing compute, etc., etc. We
 
Looks like it will be the 5700XT 50th Anniversary Edition for me...
 
Now that there's no royalty associated with Thunderbolt, it may show up on non-Intel motherboards.

The biggest issue, IIRC, was that Thunderbolt requires a video signal; I'm not sure if that's been lifted. AMD refusing to put a GPU on all but their most basic consumer CPUs likely didn't help there either.
 
The biggest issue, IIRC, was that Thunderbolt requires a video signal; I'm not sure if that's been lifted. AMD refusing to put a GPU on all but their most basic consumer CPUs likely didn't help there either.

ASRock is using a loop-back, with a DisplayPort In on the rear I/O...
 
Knowing Navi wasn't going to outperform the Radeon VII, I went ahead and got that to replace my Vega 64. The 50th Anniversary Edition does look sick though. As for pricing, it should be interesting to see how Nvidia responds when they release their "Super" lineup of cards and how they reprice their whole stack. This is good for consumers and competition; AMD now has a competing product in the midrange that they can finally profit from. Vega, with its requirement for HBM, was killing their margins.

That said, I don't know who's going to buy these cards en masse. At this price point and performance, you're not getting value; people willing to drop $380 or $450 for a card likely already have a Vega 56/GTX 1080/RTX 2060, and these cards aren't an upgrade over those. Plus, like Vega, they're offering parity performance months late at the same price. The people upgrading would be RX 570/580 and GTX 1050/1060 owners, and they are at the $200-300 price point. Even with the "new" features like anti-lag, image sharpening, and whatnot, it's not as compelling a technology as ray tracing.

Had they priced these at $299 and $399 I could see them moving, and at the rumored $250/$350 they would have sold like hot cakes. Let's hope as consumers we force Nvidia/AMD to actually deliver generational changes with more performance per dollar (FPS), versus the same performance per dollar with "extra" useless features that they market and use to justify the price, like we now get with CPUs.
 
So neither of these cards is an RX 580 replacement. It would be nice to know what their plans are for the $200-300 market. If it's the RX 580 and RX 590, I don't even have an upgrade option.
 
So neither of these cards is an RX 580 replacement. It would be nice to know what their plans are for the $200-300 market. If it's the RX 580 and RX 590, I don't even have an upgrade option.
I was thinking the same thing on my way to work. More than likely they'll release cut-down versions with 20/25 CUs later this year. The fab process is still new and TSMC more than likely can't generate enough volume to satisfy the higher demand for those GPUs, so they'll stockpile dies, improve yields, then release the 5500/5600/5600XT series.
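Quick sanity check on what hypothetical cut-down parts would look like, assuming RDNA keeps 64 stream processors per CU (the 20/25 CU figures are guesses, not announced SKUs):

Code:
# Shader counts at 64 stream processors per CU (RDNA).
# The cut-down CU counts below are speculation, not announced parts.
SP_PER_CU = 64
parts = [("5700 XT", 40), ("5700", 36), ("hypothetical cut-down", 25), ("hypothetical cut-down", 20)]
for name, cus in parts:
    print(f"{name}: {cus} CUs -> {cus * SP_PER_CU} shaders")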
 
I was thinking the same thing on my way to work. More than likely they'll release cut-down versions with 20/25 CUs later this year. The fab process is still new and TSMC more than likely can't generate enough volume to satisfy the higher demand for those GPUs, so they'll stockpile dies, improve yields, then release the 5500/5600/5600XT series.
Or, more likely, REBRAND, which will be a big middle finger to the gaming audience.
 
So neither of these cards is an RX 580 replacement. It would be nice to know what their plans are for the $200-300 market. If it's the RX 580 and RX 590, I don't even have an upgrade option.

GTX 1660TI?

Or: #waitformorenavipricedthesamewayasnvidia
 
GTX 1660TI?

Or: #waitformorenavipricedthesamewayasnvidia
OSS drivers are important to me. Surprisingly, AMD has better OSS drivers, but not so much better as to justify that price premium.

Yeah - probably going to wait and hope Nvidia smacks back with Super/Ti and drops prices. But the sad thing is that people are now using Nvidia's price creep as their basis to say Navi isn't overpriced. Both RTX and Navi are overpriced. The value prop of Polaris (480/470, then 580/570) is totally gone now.
 
I notice a pattern. So many people are butthurt because this card is not a 2080ti devastator for $59.99

In reality, which many seem to not live in or understand, a really good GPU will deliver smooth, beautiful visuals to your experience. It isn't necessarily about having a card that can pump 379 fps as much as it's about a card with high minimums compared to its maximums, with a fast frame buffer to prevent obnoxious stuttering events.
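To put the minimums-vs-maximums point in numbers (the frame rates here are made up, just to show the scale):

Code:
# Average fps hides hitching: convert fps to frametime in milliseconds.
# The numbers below are made up for illustration.
def frametime_ms(fps):
    return 1000.0 / fps

avg_fps, one_percent_low = 90, 30
print(f"average {avg_fps} fps -> {frametime_ms(avg_fps):.1f} ms per frame")
print(f"1% low {one_percent_low} fps -> {frametime_ms(one_percent_low):.1f} ms per frame (a visible hitch)")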

The 2070 is not a weak card; it's a pretty damn fast card considering the average gamer's expectations.

There are countless people online that, while playing with them randomly, I have asked in voice chat "what kind of GPU do you have?" and gotten back "I have no idea, but it's fast enough."

For many of them a 1050ti or even a 970 is what they are using, and they are impressed enough that they never care what it is, just that it works well for them.

Thus, if the 5700xt is fast enough to beat the 2070 in almost all metrics while maintaining very smooth minimum frame rates and fast frame times, at a third or more less in price, and given the average gamer uses 1080p or 1440p at best, I'd say the card is a fucking winner.

So why is it that so many of you just bitch and complain that NAVI is not the equivalent of an unreleased 3080ti for a 10 dollar bill?

The lack of reality given the goal of this card is what makes me scratch my head reading some of the commentary on Hard forums.
 
I think it's too early to determine whether the 5700/5700XT will be a success or failure. Sales will determine that. I think the complaints are a product of expectations vs reality, wishful thinking fueled by all the "leaks", as well as the current state of the gaming GPU market. The midrange used to be $200-$300, which has now been bumped to $350-$600!!! Before, we used to get a 30% generational improvement at the same price point, and now we get the same performance, with new features, at the same price. Are we all supposed to just accept this new reality?
 
I think it's too early to determine whether the 5700/5700XT will be a success or failure. Sales will determine that. I think the complaints are a product of expectations vs reality, wishful thinking fueled by all the "leaks", as well as the current state of the gaming GPU market. The midrange used to be $200-$300, which has now been bumped to $350-$600!!! Before, we used to get a 30% generational improvement at the same price point, and now we get the same performance, with new features, at the same price. Are we all supposed to just accept this new reality?

We used to live in a time where die shrinks resulted in getting double the transistors for about the same price.

Now die shrinks cost so much that double the transistors cost nearly double the price.
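A made-up example of why that kills the old cadence (the wafer costs here are invented; only the ratios matter):

Code:
# Illustration only: invented wafer costs, same die area on both nodes.
# Old cadence: density doubles, wafer cost roughly flat   -> cost per transistor halves.
# Today:       density doubles, wafer cost nearly doubles -> cost per transistor barely moves.
def cost_per_transistor(wafer_cost_usd, transistors_per_wafer):
    return wafer_cost_usd / transistors_per_wafer

old_node = cost_per_transistor(5000, 1e12)   # hypothetical older node
new_node = cost_per_transistor(9500, 2e12)   # ~2x density, ~1.9x wafer cost
print(f"old node: {old_node:.2e} $/transistor")
print(f"new node: {new_node:.2e} $/transistor ({new_node / old_node:.0%} of the old cost)")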

You really have no choice but to accept the current reality, which is, that those days of easy gains are largely in the past.

Progress will still be there but the pace will be much slower. At times it will look like it is standing still, interspersed with a nice jump once in a while.

On the upside, when you buy something it will remain closer to state of the art for much longer.
 
I agree, it's a hard pill to swallow since on the CPU side AMD is making incredibly large strides with IPC and core count (frequency not so much) without drastically increasing price, and that was the bar/expectation most people had. I mean, I bought the Radeon VII, upgrading from a perfectly acceptable Vega 64, so yes, I've definitely accepted it and helped enable it :) but hey, I wouldn't be in this forum if I didn't love and support tech.
 
My Vega 64 still works fine, but I was wanting to get HDMI 2.1 and that is not happening, so…

Will see how Stadia goes, may never buy another PC gaming card.
 
I'll probably still hang onto my Vega 64, but if I find a good deal, I might pick one up.
 
My Vega 64 still works fine, but I was wanting to get HDMI 2.1 and that is not happening, so…

Will see how Stadia goes, may never buy another PC gaming card.

So, because you're not getting HDMI 2.1 at this time, you're never going to buy another PC graphics card again? What is so special about 2.1 that you're drawing such a hard line?
 
I notice a pattern. So many people are butthurt because this card is not a 2080ti devastator for $59.99

In reality, which many seem to not live in or understand, a really good GPU will deliver smooth, beautiful visuals to your experience. It isn't necessarily about having a card that can pump 379 fps as much as it's about a card with high minimums compared to its maximums, with a fast frame buffer to prevent obnoxious stuttering events.

The 2070 is not a weak card; it's a pretty damn fast card considering the average gamer's expectations.

There are countless people online that, while playing with them randomly, I have asked in voice chat "what kind of GPU do you have?" and gotten back "I have no idea, but it's fast enough."

For many of them a 1050ti or even a 970 is what they are using, and they are impressed enough that they never care what it is, just that it works well for them.

Thus, if the 5700xt is fast enough to beat the 2070 in almost all metrics while maintaining very smooth minimum frame rates and fast frame times, at a third or more less in price, and given the average gamer uses 1080p or 1440p at best, I'd say the card is a fucking winner.

So why is it that so many of you just bitch and complain that NAVI is not the equivalent of an unreleased 3080ti for a 10 dollar bill?

The lack of reality given the goal of this card is what makes me scratch my head reading some of the commentary on Hard forums.

Actually, if that leak from AdoredTV/Fudzilla had been right - 1080 performance for $250 - then I would have been happy. For me the problem does not lie with beating the 2080 Ti, because that is not where my interest lies. Mid-range card, mid-range price, unless it is something that can last several more years beyond the 2 to 3 years a mid-range card usually does...
 
I thought it was also the hardware agnostic VRR support but I could be remembering wrong.
 
I thought it was also the hardware agnostic VRR support but I could be remembering wrong.

That's been back-hacked into HDMI 2.0; it's hardware-agnostic, but availability is extremely narrow. HDMI 2.1 should guarantee it across far more products.
 
I agree, it's a hard pill to swallow since on the CPU side, AMD is making incredibly large strides with IPC and core count (frequency not so much) without drastically increasing price, and that was the bar/expectation that most people had. I mean, I bought the Radeon VII, upgrading from a perfectly acceptable Vega 64 so yes, I've definitely accepted it and helped enable it :) but hey, I wouldn't be in this forum if I didn't love and support tech.

The Ryzen core-count explosion happened for two reasons:

1. Intel had left mainstream volume on the table for years, even though it was there for the taking. 10-core Broadwell-E is testament to that, and Intel could have easily had a planned introduction of 6 cores with Kaby Lake (instead of rushed refreshes).

2. AMD built the CPU cores as simply as they could, but this time they didn't castrate FP/decode as much as Bulldozer did. This meant that you had almost the same IPC in normal loads, and could rely on double the core counts for AVX-heavy stuff.

The GPUs have already been pushing the highest SP count available for years, so there's no room for quick growth. But the switch to GDDR6 and 7nm means AMD is no longer losing money as they were on Vega 64, and there's potential for the cut part to fall below $300 in the near future.
 
Looks like it will be the 5700XT 50th Anniversary Edition for me...
I wonder if these will gain anything with a custom water loop. Will BIOS mods work?

Fury X -> this would be an interesting side-by-side since the memory bandwidth is similar. Fewer CUs, but much higher frequencies and cooler running.
 
Why do you want HDMI 2.1 for 4K/120 or 8K on a video card that can't hit those specs when gaming? If it's for media, what TV/monitor supports it? I'm all about new, faster interfaces, but there's no point in buying a product for a future-proof interface when the device itself can't use it, i.e. buying the 5700/XT for PCIe 4.0 or a 2060 for ray tracing.
 
1. Intel had left mainstream volume on the table for years, even though it was there for the taking. 10-core Broadwell-E is testament to that, and could have easily introduced 6 cores with Kaby Lake.

Intel mostly wasn't prepared for their 10nm process to be delayed. Everything since Skylake was made up on the spot, and it seems that Kaby Lake was mostly due to OEM demand to have something 'new' to sell, on the desktop at least. They made real gains on mobile.

AMD's advantage is only now bearing fruit with 7nm, and exists entirely because of Intel's 10nm stumble: Intel had planned to have eight-core 10nm CPUs out to the desktop to succeed Skylake, and would have beaten AMD to the punch by years. History aside, the base Zen core is competitive much in the same vein as the K7 was vs. the P6 (Athlon vs. Pentium III), but they did need a few revisions to sort out the platform. Assuming that Zen 2 / Ryzen 3000 hits expected performance levels, we can largely consider Ryzen sorted.

But we should also be cognizant of how there was a gap for it in the first place; we can expect Intel to come on strong with 10nm and 7nm, perhaps at different markets, and we can expect them to very quickly reclaim the clockspeed and IPC lead if AMD doesn't have some strong revisions of Zen going forward.
 
Why do you want HDMI 2.1 for 4K/120 or 8K on a video card that can't hit those specs when gaming?

We can hit >4K60 today, so why do you believe that GPUs arriving with HDMI 2.1 won't run higher?

Also, having HDMI 2.1 on this Navi release would be indicative of AMD having it on the platform, so essentially, while it wouldn't be as useful for these AMD parts, we'd be 'past' that problem point and able to test functionality with other equipment.
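For reference, the napkin math on why 4K120 needs HDMI 2.1 in the first place (active pixels only, 8-bit RGB, blanking ignored, so real requirements are a bit higher):

Code:
# Uncompressed video bandwidth, active pixels only (blanking ignored).
# HDMI 2.0 tops out around 18 Gbps raw; HDMI 2.1 raises that to 48 Gbps.
def video_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(f"4K60:  {video_gbps(3840, 2160, 60):.1f} Gbps")   # fits within HDMI 2.0
print(f"4K120: {video_gbps(3840, 2160, 120):.1f} Gbps")  # needs HDMI 2.1 (or DSC / chroma subsampling)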
 
Why do you want HDMI 2.1 for 4K/120 or 8K on a video card that can't hit those specs when gaming? If it's for media, what TV/monitor supports it? I'm all about new, faster interfaces, but there's no point in buying a product for a future-proof interface when the device itself can't use it, i.e. buying the 5700/XT for PCIe 4.0 or a 2060 for ray tracing.

Yeah, this is only used in a handful of TVs. Nvidia has long delayed their inclusion of higher-resolution HDMI ports. Kepler was when they finally brought HDMI 1.4b (2012), and Maxwell 2.0 was when they brought HDMI 2.0 (late 2014).

By comparison, HDMI 1.4 was created in 2009 and spurred the 3D HD TV revolution. And the first 4K TV with HDMI 2.0 support (through a firmware update) was the LG 84LM9600, released in late 2012. So expect about 3 years before anybody adds the spec to their cards. Late 2018 was the year of the first 8K TV with HDMI 2.1, so expect 2021 to see it in video cards.
 
The 5700 isn't a top-end card. I'm still planning on getting a Radeon VII, which I imagine I'll use for at least 2 years until AMD releases something faster, unless of course the dual-GPU Vega II (that's being used in the Mac Pro) gets a PC-based card. Then that becomes a possibility. It is a piece of slightly newer silicon than the VII and is different in a number of ways. I'm just not entirely sure if it's a desktop or workstation-class card, as the price differential might make it cost-prohibitive.

I also may just choose to save the money and get a used Vega 64. With either card it's more about the compute rather than gaming performance. And it's not entirely necessary for me to have the best of the best.
 