Big Navi is coming

Good thread, lots of good talk, both criticism and praise.

My problem is I would build a new system with a 3900X and new Navi, but there are simply no good games right now. So building a new system with no good games to play on it just feels like a waste.
 
Good thread, lots of good talk, both criticism and praise.

My problem is I would build a new system with a 3900X and new Navi, but there are simply no good games right now. So building a new system with no good games to play on it just feels like a waste.
^^^^^^^^^^^^
Wake me up when they make a game as cool as ARK with the optimization of the new Battlefront II (or insert other highly optimized game here), and the multiplayer funness of Smash Bros or Quake Live...
 
AMD did not "win" financially

AMD promoted an open standard. They don't get licensing fees. They don't get to sell more cards because of any exclusivity. It doesn't even significantly burden their competition or make AMD more competitive in the field.

Maybe it's a moral win, if you want to look at it as something positive in the news for AMD, but it doesn't move the needle on anything else for AMD.
 
AMD promoted an open standard. They don't get licensing fees. They don't get to sell more cards because of any exclusivity. It doesn't even significantly burden their competition or make AMD more competitive in the field.

Maybe it's a moral win, if you want to look at it as something positive in the news for AMD, but it doesn't move the needle on anything else for AMD.


I'd argue that it opens up buyers of high-end monitors to no longer care about what company makes their GPUs. It's still on AMD to make something worth purchasing, but it's no longer a case of "I have to choose GPU A if I want to buy display A!" The number of new G-Sync-only displays will evaporate over the next few years, making it moot.

And no, I wouldn't depend on HDMI VRR to fix things. We're still a year away from video cards supporting it, and displays will take even longer (it was around 2 years after Maxwell before we saw HDMI 2.0 inputs on anything besides TVs). So yeah, the win from FreeSync is important for the PC gaming community.
 
Or DXR is like PhysX and goes the way of the dodo, and AMD is betting on something not proprietary.

DXR is now a feature-set of DX12 and is not proprietary. RTX, nVidia's method of dealing with DXR calls, is proprietary but DXR is not.

I can't see how DXR or something else along those lines could make my gaming better / more enjoyable. I think just getting to DX12 will be just fine for me, and then I will leave the "megapixel race" to the few elitist players.

Since DXR is part of DX12, you get DXR. Whether you have the hardware to translate DXR calls into pixels on your screen, or a driver that enables software translation instead, is another matter.
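For anyone curious, that split is visible right in the API: a DX12 app just asks the runtime whether the installed driver reports a raytracing tier, and never touches vendor specifics. A minimal sketch of that check (standard D3D12 calls; the printed wording is mine, not from any post here):

```cpp
// Minimal sketch (illustrative, not from this thread): a DX12 app asking the runtime
// whether the installed driver exposes a DXR tier. Vendor hardware details never
// appear; the driver just reports whether it can service DXR calls.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    // Default adapter, minimum feature level 12.0.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No DX12-capable device found.");
        return 1;
    }

    // DXR support is reported through the OPTIONS5 feature struct.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));

    if (opts5.RaytracingTier == D3D12_RAYTRACING_TIER_NOT_SUPPORTED)
        std::puts("DXR calls exist in the API, but this GPU/driver can't accelerate them.");
    else
        std::puts("Driver reports a DXR tier; how it maps rays to silicon is its own business.");
    return 0;
}
```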

To me it seems like AMD won the FreeSync / G-Sync battle, and I think they can still win the next level of graphics too, and next-level graphics doesn't absolutely have to be DXR.

AMD may have won in the sense that nVidia is now supporting some monitors' FreeSync implementations, but this is a two-edged sword: there have been countless times here on this very forum where someone would suggest not purchasing an nVidia card because FreeSync didn't work with nVidia products, and that's now no longer the case for some people.

I'd say that consumers win but AMD lost a rather big trump card that they could play...
 
Will be interesting to see how big Navi is launched - with an R7-style triple-fan cooler or the usual reference blower fan. Either way, I'll bet water block makers will be ready for it from day 1. Performance will likely be on par with the 2080S and $100 cheaper.
 
Somewhat unpopular opinion, but personally I would not consider any AMD card over $500 without RT.

I know it doesn't matter at all to some, but I have had some fun with RTX and expect more to come this winter.

So hopefully big Navi will bring some of that to the table. But probably not for a while...
 
I'd agree, Oldmodder. AMD did basically win the FreeSync/G-Sync battle. You can obviously spin it either way (since Nvidia did gain an advantage by taking away AMD's last big feature), but I think in the long run you will see standards-based features winning out over proprietary ones (all else being equal).

With DXR it is a little different. While Nvidia's RTX hardware is proprietary, they were smart to work with Microsoft and incorporate it into DX12. This basically forces AMD's hand; they will be left behind even further if they can't come up with a viable RT solution.

And DXR does look nice. I actually chose to "upgrade" from a 1440p monitor to 1080p so I can experience DXR with higher framerates. And I'm happy I made that trade-off. It is obviously the future, and it will only get better as developers learn to use it.
It wasn't taken away; they offered it up for free to everyone, including Nvidia, like most of their tech... Nvidia was the one that chose not to support it for so long, until they were running out of choices. It was just a driver update to enable, and it was available in hardware for a long time. They just wanted to milk G-Sync as long as possible. Granted, I wish FreeSync defined minimum standards better, but G-Sync requiring licenses and custom Nvidia hardware was a crappy solution.
 
What exactly did AMD get by “winning”?
They won by not having to compete against a proprietary feature. If they had gone with a proprietary FreeSync, they would have had to compete with Nvidia for monitor makers to adopt it. Now that Nvidia has had to fold and start supporting FreeSync, people are free to switch from Nvidia to AMD or AMD to Nvidia and not have to purchase another monitor.
 
You keep saying Tensor cores (and RTX cores, I assume) are not a long-term solution. Based on what? I mean, it's only the 1st generation of RTX.

Based on transistor count. They hardly have enough room at 12-15 billion transistors to jam enough shader cores in as it is. If Ampere has a full tensor core (or RT cores; it doesn't matter what we call them) it's going to push 20 billion transistors. At that point they are again going to have a chip that is much larger than the competition's... and larger chips are harder to fabricate. It's just physics: every wafer is going to have defects, and if your die size is large, the chance of getting fully functioning chips goes down. This is why yields on NV's highest-end Turing chips suck.

At 7nm physics again becomes a problem with voltage leakage. Super high clock speeds become a problem with that many transistors packed in that tight. It's why Ryzen 2 isn't pushing crazy 5GHz clocks... Intel's won't either. Once we move past 7nm to 5nm that becomes even more of a problem. Voltage leakage is one reason why everyone is designing chiplets. Ampere may be a chiplet design... I guess it's possible NV is planning to fab 2 chips, one shader and one tensor. At that point, though, they are going to be looking hard at how they can perform that math on shaders to avoid having to ship tensor hardware on consumer parts. If it's not chiplet they will have to choose: ultimate shader performance, or lower clocks and fewer shaders so they can dedicate 3-5 billion transistors to tensor/RT cores.
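To put a rough number on the die-size point, here's a quick Poisson yield estimate (the defect density and approximate die areas are my own illustrative ballpark figures, not anything from this thread):

```cpp
// Rough illustration (my numbers, purely illustrative): the Poisson yield model
// Y = exp(-defect_density * die_area) shows why a big monolithic die is
// disproportionately harder to harvest than a small one.
#include <cmath>
#include <cstdio>

int main() {
    const double defects_per_mm2 = 0.001;  // assumed ~0.1 defects/cm^2, illustrative only
    const double big_die_mm2   = 754.0;    // roughly a TU102-class die
    const double small_die_mm2 = 251.0;    // roughly a Navi 10-class die

    auto yield = [&](double area) { return std::exp(-defects_per_mm2 * area); };

    std::printf("~%.0f%% of big dies defect-free vs ~%.0f%% of small dies\n",
                100.0 * yield(big_die_mm2), 100.0 * yield(small_die_mm2));
    return 0;
}
```

Double the area and the defect-free fraction drops multiplicatively, which is the chiplet argument in a nutshell.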
 
Based on transistor count. They hardly have enough room at 12-15 billion transistors to jam enough shader cores in as it is. If Ampere has a full tensor core (or RT cores; it doesn't matter what we call them) it's going to push 20 billion transistors. At that point they are again going to have a chip that is much larger than the competition's... and larger chips are harder to fabricate. It's just physics: every wafer is going to have defects, and if your die size is large, the chance of getting fully functioning chips goes down. This is why yields on NV's highest-end Turing chips suck.

At 7nm physics again becomes a problem with voltage leakage. Super high clock speeds become a problem with that many transistors packed in that tight. It's why Ryzen 2 isn't pushing crazy 5GHz clocks... Intel's won't either. Once we move past 7nm to 5nm that becomes even more of a problem. Voltage leakage is one reason why everyone is designing chiplets. Ampere may be a chiplet design... I guess it's possible NV is planning to fab 2 chips, one shader and one tensor. At that point, though, they are going to be looking hard at how they can perform that math on shaders to avoid having to ship tensor hardware on consumer parts. If it's not chiplet they will have to choose: ultimate shader performance, or lower clocks and fewer shaders so they can dedicate 3-5 billion transistors to tensor/RT cores.

AMD needs to have courage and push the future with RT like nVidia is.
 
Well, if they release a 5800 and 5900, that pretty much makes the 5700 cards their lowest offering if we don't count old inventory, and I don't think the low-end Nvidia cards support RT either, so I would not say it's sad.
I will agree that if the 5800, or at least the 5900, doesn't have RT then it will be a bit sad, but then again did anyone say (in regard to future consoles and so on) that it would be hardware RT?
Didn't they already demo some form of RT on older Vega cards? Not hardware RT, of course.
 
Well, if they release a 5800 and 5900, that pretty much makes the 5700 cards their lowest offering if we don't count old inventory, and I don't think the low-end Nvidia cards support RT either, so I would not say it's sad.
I will agree that if the 5800, or at least the 5900, doesn't have RT then it will be a bit sad, but then again did anyone say (in regard to future consoles and so on) that it would be hardware RT?
Didn't they already demo some form of RT on older Vega cards? Not hardware RT, of course.

Well, without real hardware support it's really a non-starter. Even the best current shipping RT hardware gets blasted for performance right now by many, and that is leaps and bounds beyond the cards without dedicated RT hardware.

If they launch without raytracing, they'd be planting a foot very firmly in the "these are for titles RIGHT NOW" camp. That's valid, but people who buy a card to look forward a couple years may naturally balk at that proposition. Depending upon price, of course.
 
Well, if they release a 5800 and 5900, that pretty much makes the 5700 cards their lowest offering if we don't count old inventory, and I don't think the low-end Nvidia cards support RT either, so I would not say it's sad.
I will agree that if the 5800, or at least the 5900, doesn't have RT then it will be a bit sad, but then again did anyone say (in regard to future consoles and so on) that it would be hardware RT?
Didn't they already demo some form of RT on older Vega cards? Not hardware RT, of course.

The 5700 cards compete with the RTX2060 and RTX2070 cards. Both are RTX. Super and non-super.

The Vega RT thing was a somewhat curious, very canned 30fps tech demo movie.
 
-IF- AMD can deliver a card by the holiday season that can compete with the 2080 Ti in NON-ray-tracing performance for $100 less, it will be a huge success.

It would have to be cheaper than that if it doesn't support any form of hardware-based ray tracing. Either that, or be at least 10% faster in rasterized games.
 
Well they took the easy path with the 5700. Sad.

Yes, they released a mainstream card that has been selling extremely well... Indeed sad.

They also gave 5700 users things like Radeon sharpening for nothing... and it doesn't need developer implementation. The 5700 XT is actually very, very good at 4K with downsampling and sharpening. I saw an A/B test the other day at the local geek shop, and I could only tell which one was downsampled because it was running a lot smoother. The IQ looked identical to my eye. NV's solution is DLSS, which is horrid.

AMD is offering gamers features they can use today... not promising them features that won't work for real for another few generations of hardware.
 
AMD needs to have courage and push the future with RT like nVidia is.

Well, I guess if they had AI customers to design chips for as well, and really wanted to boost their margins by designing one chip for both markets, they too could include a bunch of matrix-math acceleration units.
 
Personally I'm more looking forward to "small" Navi. I want to replace my GTX 1060 3GB with a newer $200 model...
 
They also gave 5700 users things like Radeon sharpening for nothing... and it doesn't need developer implementation. The 5700 XT is actually very, very good at 4K with downsampling and sharpening. I saw an A/B test the other day at the local geek shop, and I could only tell which one was downsampled because it was running a lot smoother. The IQ looked identical to my eye. NV's solution is DLSS, which is horrid.
Yeah, RIS works well and gives some extra performance boost when you need it. I only have a 1080p monitor on that rig, but running at 900p it is very good and only on close inspection can you tell it's not native res. At 1800p w/ a 4K screen I'm sure it would look even better.

I know you can do it on Reshade now, but only because AMD did it first and released it for free. Doesn't seem like it was heavily marketed, but it's a pretty huge feature. It may make 4K viable on this mid-range card.
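For what it's worth, 900p-on-1080p and 1800p-on-4K are the same ~83% scale per axis, so the GPU only shades about 69% of the native pixels before sharpening. A quick back-of-the-envelope (my arithmetic, not from either post):

```cpp
// Back-of-the-envelope (illustrative): rendering at 900p for a 1080p panel, or 1800p
// for a 4K panel, is the same ~83% scale per axis, so only ~69% of the native pixel
// count gets shaded before RIS-style sharpening is applied.
#include <cstdio>

int main() {
    struct Case { const char* name; int rw, rh, nw, nh; } cases[] = {
        {"900p on 1080p", 1600,  900, 1920, 1080},
        {"1800p on 4K",   3200, 1800, 3840, 2160},
    };
    for (const auto& c : cases) {
        double ratio = double(c.rw) * c.rh / (double(c.nw) * c.nh);
        std::printf("%s: %.0f%% of native pixels shaded\n", c.name, 100.0 * ratio);
    }
    return 0;
}
```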
 
AMD has Radeon Rays, which is a limited ray-tracing SDK, so they have some experience in the field, though this is not a competitor to DXR.

https://www.amd.com/en/technologies/radeon-prorender-developers

They were marketing it as a solution for professional CAD-like applications, and it has been around since before Nvidia announced their RTX plans.

NVIDIA's RTX came after Microsoft's DX12 DirectX Raytracing (DXR) plans, FYI.

Somehow people got this idea that DXR is all NVIDIA's doing, when in fact it is Microsoft doing the enabling:
https://devblogs.microsoft.com/directx/announcing-microsoft-directx-raytracing/

This says it all:
"3D Graphics is a Lie"

This part is also worth noting:
"In addition, while today marks the first public announcement of DirectX Raytracing, we have been working closely with hardware vendors and industry developers for nearly a year to design and tune the API. "

AMD, Intel, NVIDIA... they all knew DXR was coming way in advance... this was no surprise to them.

Final quote:
"This is the power of DirectX Raytracing: it lets developers have their cake and eat it."
 
Yeah, RIS works well and gives some extra performance boost when you need it. I only have a 1080p monitor on that rig, but running at 900p it is very good and only on close inspection can you tell it's not native res. At 1800p w/ a 4K screen I'm sure it would look even better.

I know you can do it on Reshade now, but only because AMD did it first and released it for free. Doesn't seem like it was heavily marketed, but it's a pretty huge feature. It may make 4K viable on this mid-range card.

I was surprised to see how good it looked at the store the other day. It almost tempted me to spend more money than I was out to spend that day. lol
I'm probably not going to bump up to 4K for a while... but if AMD does get a 5800 big Navi out soon, it will be interesting to see how it does vs a 2080 Ti with RIS at 1800p vs DLSS (at least in the games that actually have DLSS).
 
More like the AMD customers won

If you own an AMD card nothing really changes so it’s not clear what they won. Nvidia customers can now use freesync displays.

Nvidia and the monitor makers lose as it’ll be harder to extract that gsync premium now.
 
I will agree that if the 5800, or at least the 5900, doesn't have RT then it will be a bit sad, but then again did anyone say (in regard to future consoles and so on) that it would be hardware RT?

We’ve had RT running on general purpose GPU shaders for over a decade and there have been multiple attempts at hardware RT so the challenges are well understood at this point.

Given Navi's power efficiency it's unlikely the upcoming consoles will be much faster than a 5700, but hopefully AMD can improve things by then. Suffice it to say, though, that running RT on a Navi-like architecture is going to be very slow.

For any useful number of rays it’ll need custom hardware or a massive increase in flops and bandwidth over current cards.
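For a rough sense of scale (illustrative numbers of my own, not from the post), even a light hybrid effect at 1080p60 works out to roughly half a billion ray casts per second:

```cpp
// Rough ray-budget arithmetic (assumed, illustrative numbers): even a modest hybrid
// effect at 1080p60 needs hundreds of millions of ray casts per second, which is why
// shader-only RT on a Navi-class part would struggle to keep up.
#include <cstdio>

int main() {
    const double pixels         = 1920.0 * 1080.0; // 1080p
    const double fps            = 60.0;
    const double rays_per_pixel = 4.0;             // e.g. shadows + reflections + a GI bounce
    const double rays_per_sec   = pixels * fps * rays_per_pixel;

    std::printf("~%.2f billion rays/s just for a light hybrid workload\n",
                rays_per_sec / 1e9);
    return 0;
}
```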
 
If you own an AMD card nothing really changes so it’s not clear what they won. Nvidia customers can now use freesync displays.

Nvidia and the monitor makers lose as it’ll be harder to extract that gsync premium now.

Yeah, I should have said customers in general have won, while AMD looks like the good guy.
 
NVIDIA's RTX came after Microsoft's DX12 DirectX Raytracing (DXR) plans, FYI.

Somehow people got this idea that DXR is all NVIDIA's doing, when in fact it is Microsoft doing the enabling:
https://devblogs.microsoft.com/directx/announcing-microsoft-directx-raytracing/

This says it all:
"3D Graphics is a Lie"

This part is also worth noting:
"In addition, while today marks the first public announcement of DirectX Raytracing, we have been working closely with hardware vendors and industry developers for nearly a year to design and tune the API. "

AMD, Intel, NVIDIA... they all knew DXR was coming way in advance... this was no surprise to them.

Final quote:
"This is the power of DirectX Raytracing: it lets developers have their cake and eat it."



Still at it?

I think you are still confused about Nvidia's RTX cards.
Turing doesn't do native DXR, there is a team who works with each game to get "RTX On" to work, because Nvidia needs to write a specific API for each game to get RTX to emulate DXR calls. "Interfaces with...." = emulation


Turing was built for the Enterprise world, not gaming... it doesn't matter if Nvidia knew DXR was coming; they have not released a chip that can natively do DirectX Raytracing. As such, Nvidia's AI/server cards take a 30% performance hit... so essentially Turing can't do DXR either.

When you have a true DXR card, it will do DX12 ray tracing natively. Those cards are coming in the next 13 months, both from Nvidia and AMD.
 
Yes, they released a mainstream card that has been selling extremely well... Indeed sad.

They also gave 5700 users things like Radeon sharpening for nothing... and it doesn't need developer implementation. The 5700 XT is actually very, very good at 4K with downsampling and sharpening. I saw an A/B test the other day at the local geek shop, and I could only tell which one was downsampled because it was running a lot smoother. The IQ looked identical to my eye. NV's solution is DLSS, which is horrid.

AMD is offering gamers features they can use today... not promising them features that won't work for real for another few generations of hardware.

Do we have sales numbers? I wouldn't doubt Navi is selling well; I am just always interested in numbers, and they are tough to find.
 
Still at it?

I think you are still confused about Nvidia's RTX cards.
Turing doesn't do native DXR, there is a team who works with each game to get "RTX On" to work, because Nvidia needs to write a specific API for each game to get RTX to emulate DXR calls. "Interfaces with...." = emulation


Turing was built for the Enterprise world, not gaming... it doesn't matter if Nvidia knew DXR was coming; they have not released a chip that can natively do DirectX Raytracing. As such, Nvidia's AI/server cards take a 30% performance hit... so essentially Turing can't do DXR either.

When you have a true DXR card, it will do DX12 ray tracing natively. Those cards are coming in the next 13 months, both from Nvidia and AMD.

Do you have a source for this?
 
Already dropped.... plz follow along.

It is as if... some of you people who don't know things... do so purposely.
 
Turing doesn't do native DXR, there is a team who works with each game to get "RTX On" to work, because Nvidia needs to write a specific API for each game to get RTX to emulate DXR calls. "Interfaces with...." = emulation.

Really, post the source that says RTX is not utilizing DXR.

Yeah Gamer X, I'd also like to see that source, because that "emulation" bit doesn't make any sense. I've explained the DXR/RTX distinction several times in these forums, I understand what's going on there, and there are zero mentions of emulation from Microsoft or Nvidia anywhere that I've seen - and I've read a bunch on it. Your mention of "emulation" would imply that Nvidia's RTX code "emulates" Microsoft's DXR code, and well, that's just not true.

DXR is a DX12 code path to process raytracing instructions, but neither Nvidia nor AMD have to use exactly that path. They use their own APIs to process instructions closer to the metal (which, by definition, will be a different API for Nvidia and for AMD), which then spit out results in a form that DXR understands. So RTX is processing DXR data, but it does so in an Nvidia-specific path. This is literally the same way graphics have worked for years - Direct3D isn't processed "natively" on any GPU. D3D sends the instructions, then Nvidia/AMD use their own APIs to process that data their way, as fast as possible, on their hardware architecture.

This is what RTX is doing: it is essentially an accelerator of DXR code. No "emulation" going on anywhere in that process (that's why regular shaders can also process DXR code in the GTX 16 and 10 series... just much more slowly, because they're not designed to accelerate DXR code like RTX is)... so I too would like to understand what you mean, and see a source.
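To illustrate that layering (a sketch with assumed variable names, not a complete renderer): the application only ever records vendor-agnostic DXR calls, and the driver decides how the rays actually get traced on the hardware.

```cpp
// Sketch of the point above (assumed names, not a full renderer): the app records a
// vendor-agnostic DXR dispatch; whether RT cores, shader ALUs, or anything else
// services it is entirely the driver's business.
#include <d3d12.h>

void record_dxr_dispatch(ID3D12GraphicsCommandList4* cmdList,
                         ID3D12StateObject* rtPipeline,         // DXR state object (RTPSO)
                         const D3D12_DISPATCH_RAYS_DESC& desc)  // shader tables + grid size
{
    cmdList->SetPipelineState1(rtPipeline);  // same call on GeForce or Radeon
    cmdList->DispatchRays(&desc);            // DXR entry point; no vendor API in sight
}
```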
 
"Nvidia is bringing support." (Do those cards magically now do hardware raytracing?)

Plz, understand you are being fooled by Nvidia's marketing and subtle play with words. You are missing the distinction between software support and hardware support. Navi can already do ray tracing... but there is no reason for it now, because it would impact the game's performance...

Jensen just hid really well behind all His convoluted "It just Works" bs.



Turing & Ray Tracing is not a selling point (moot). It is ONLY a marketing point. You are trying to market Nvidia, but Gamers are knowledgeable folk and are able to see through your gimmicks as well.
 
"Nvidia is bringing support." (Do those cards magically now do hardware raytracing?)

Plz, understand you are being fooled by Nvidia's marketing and subtle play with words. You are missing the distinction between software support and hardware support. Navi can already do ray tracing... but there is no reason for it now, because it would impact the game's performance...

Jensen just hid really well behind all His convoluted "It just Works" bs.



Turing & Ray Tracing is not a selling point (moot). It is ONLY a marketing point. You are trying to market Nvidia, but Gamers are knowledgeable folk and are able to see through your gimmicks as well.

You still post zero in the way of sources while speaking about Navi as if AMD has paid off your mortgage, saved your marriage and given you a handy.

All bow in the presence of Lisa Su.

Also, you don't even own a Navi-based GPU, sooooooooo I'm really failing to see where your support comes from, other than blatant trolling.
 
Why does Nvidia always tend to make up the bulk of AMD threads? Do people really want AMD to turn into Nvidia, or just to compete hands down on a playing field Nvidia almost exclusively laid down?
If that's the case AMD will never win, but maybe that's what some people want, even outside of Nvidia, a company that I am sure would like to be the Google of computer graphics.

I don't need RT; I just want a good deal on a card for my 1080p / 144Hz monitor, and the 5700 XT seems like a good deal. As I said, I could even fool myself into buying an even larger model, though I think time will prevent that this time around.

I feel a little like people are saying RT is the be-all and end-all of computer graphics, and to me that's like saying wind turbines are the only thing in renewable energy, which is of course silly to say.

Oh, and then I want someone to make a bloody game I can play on my fine new computer / GFX card... but I somehow doubt that, so the rest of my gaming career will probably be one letdown after another, like it has been for the past 10 years.
 