Navi Rumors

Well, AMD would sure have to pull a rabbit out of a hat if what the video is claiming comes to pass.

Navi on 7nm would have to be almost twice as efficient as Vega on 7nm.

You know it's only an 85-watt difference from a 2080 to a VII, or 215 to 300. Your extreme exaggerations are why a decent chunk of us dismiss your ideas. I am also tired of your personal grudge against AdoredTV; if you have an issue with him, then make a thread on it, but stop constantly derailing threads with your hate towards him. It's a rumor thread, not a fact thread; it's a collection of all the rumors and a discussion about them. So far the rumors about Navi have been good, which makes me optimistic that Navi will be a good choice for people. Also, others have made good points on why they don't like a prediction, and their reasons go beyond your two lines of incredible insight.
 
You know it's only an 85-watt difference from a 2080 to a VII, or 215 to 300. Your extreme exaggerations are why a decent chunk of us dismiss your ideas. I am also tired of your personal grudge against AdoredTV; if you have an issue with him, then make a thread on it, but stop constantly derailing threads with your hate towards him. It's a rumor thread, not a fact thread; it's a collection of all the rumors and a discussion about them. So far the rumors about Navi have been good, which makes me optimistic that Navi will be a good choice for people. Also, others have made good points on why they don't like a prediction, and their reasons go beyond your two lines of incredible insight.

You conveniently forgot to mention that Radeon VII is on TSMC 7 nm while GeForce RTX 2080 is on TSMC 12 nm.
 
You know it's only an 85-watt difference from a 2080 to a VII, or 215 to 300. Your extreme exaggerations are why a decent chunk of us dismiss your ideas. I am also tired of your personal grudge against AdoredTV; if you have an issue with him, then make a thread on it, but stop constantly derailing threads with your hate towards him. It's a rumor thread, not a fact thread; it's a collection of all the rumors and a discussion about them. So far the rumors about Navi have been good, which makes me optimistic that Navi will be a good choice for people. Also, others have made good points on why they don't like a prediction, and their reasons go beyond your two lines of incredible insight.


You're missing the whole argument here. AdoredTV is arguing that you get the following performance at a 150 W TDP:

Navi 10 at 150 W TDP = 15% faster than Vega 64 = Radeon VII (currently 7nm with a 295 W TDP) = slightly slower than an RTX 2080.

There is no way within the laws of physics to shrink that level of performance down, especially when it's already been painfully shown that 7nm only improves performance by around 25%. You'd have to DOUBLE the efficiency of Radeon VII (already shrunken) to make it happen.

This is to say nothing of the complete horse-shit dream that Navi's 256-bit GDDR6 memory controller could ever hope to feed that much performance (Nvidia is pushing 256-bit GDDR6 TO THE LIMIT to make the RTX 2080 run, and you know how much more advanced their tech is).

The best you could feed off that (with a small increase in controller efficiency between generations) is Vega 56, which is why those are the most credible rumors for Navi 10.
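As a rough sanity check of that claim (a sketch only, using the rumored 150 W figure and the ~295 W Radeon VII board power quoted above, and assuming the two parts land at roughly the same performance):

```python
# Back-of-envelope perf-per-watt comparison using the rumored/quoted figures above.
# These numbers are assumptions from the thread, not measurements.
radeon_vii_power_w = 295.0      # Radeon VII board power as quoted in the post
navi10_rumored_power_w = 150.0  # rumored Navi 10 TDP
relative_performance = 1.0      # assume roughly equal performance (~Vega 64 + 15%)

vii_perf_per_watt = relative_performance / radeon_vii_power_w
navi_perf_per_watt = relative_performance / navi10_rumored_power_w

print(f"Implied efficiency gain: {navi_perf_per_watt / vii_perf_per_watt:.2f}x")
# -> about 1.97x, i.e. Navi 10 would need roughly double Radeon VII's perf/watt
```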
 
AMD would sure have to pull a rabbit out of a hat if what the video is claiming comes to pass.

Navi on 7nm would have to be almost twice as efficient as Vega on 7nm.

No, it is not. Navi RX 3080 would end up, on 14nm, at 300 watts with performance below an RX Vega 56.
What part of the math do you fail to understand? It is 2 times less density and 2 times more power, and you subtract 1.35% from the rated performance of RX Vega 64 +15%.

Any 7nm die-shrink Nvidia card would beat _any_ AMD card at 7nm, with better thermals and better power usage.

Before you start about Vega 7nm: that is a compute card for the professional market, and the idea that it can be compared simply holds no water, since the professional card uses all of the features while the Radeon VII simply has many floating-point cores disabled.
 
No, it is not. Navi RX 3080 would end up, on 14nm, at 300 watts with performance below an RX Vega 56.
What part of the math do you fail to understand? It is 2 times less density and 2 times more power, and you subtract 1.35% from the rated performance of RX Vega 64 +15%.

Any 7nm die-shrink Nvidia card would beat _any_ AMD card at 7nm, with better thermals and better power usage.

Before you start about Vega 7nm: that is a compute card for the professional market, and the idea that it can be compared simply holds no water, since the professional card uses all of the features while the Radeon VII simply has many floating-point cores disabled.

You have to be really foolish to believe the manufacturer's marketing material.
 
You're missing the whole argument here. AdoredTV is arguing that you get the following performance at a 150 W TDP:

Navi 10 at 150 W TDP = 15% faster than Vega 64 = Radeon VII (currently 7nm with a 295 W TDP) = slightly slower than an RTX 2080.

There is no way within the laws of physics to shrink that level of performance down, especially when it's already been painfully shown that 7nm only improves performance by around 25%. You'd have to DOUBLE the efficiency of Radeon VII (already shrunken) to make it happen.

This is to say nothing of the complete horse-shit dream that Navi's 256-bit GDDR6 memory controller could ever hope to feed that much performance (Nvidia is pushing 256-bit GDDR6 TO THE LIMIT to make the RTX 2080 run, and you know how much more advanced their tech is).

The best you could feed off that (with a small increase in controller efficiency between generations) is Vega 56, which is why those are the most credible rumors for Navi 10.

You need to ask yourself the following questions: What does Vega on 7nm have to do with a new Navi architecture?
What exactly do the laws of physics have to do with a die shrink of an inefficient architecture (Vega)?
Why was Navi made at all if all they had to do was die-shrink Vega?
When do the laws of physics change so that something inefficient suddenly becomes efficient on a different, lower process node?

I haven't seen anything yet but apples and oranges.

[attached image: 7nm process benefits]


This is the only thing that shows you what you need to know on how things work on the 7nm process.
It shows you the benefits of the process and AMD has demonstrated this.
You have to be really foolish to believe the manufacturer's marketing material.
https://www.overclock3d.net/news/cp...en_3000_series_cpu_-_beats_intel_s_i9-9900k/1

Seeing is believing: on the 7nm process, a 65-watt-rated AMD CPU beating the 135-watt-rated i9-9900K.
How long are all of you willing to ignore facts ....
 
You're missing the whole argument here. AdoredTV is arguing that you get the following performance at a 150 W TDP:

Navi 10 at 150 W TDP = 15% faster than Vega 64 = Radeon VII (currently 7nm with a 295 W TDP) = slightly slower than an RTX 2080.

There is no way within the laws of physics to shrink that level of performance down, especially when it's already been painfully shown that 7nm only improves performance by around 25%. You'd have to DOUBLE the efficiency of Radeon VII (already shrunken) to make it happen.

This is to say nothing of the complete horse-shit dream that Navi's 256-bit GDDR6 memory controller could ever hope to feed that much performance (Nvidia is pushing 256-bit GDDR6 TO THE LIMIT to make the RTX 2080 run, and you know how much more advanced their tech is).

The best you could feed off that (with a small increase in controller efficiency between generations) is Vega 56, which is why those are the most credible rumors for Navi 10.

I will not go into the "leaks" angle or whatever.
I will pose a different way to look at it:
they did Radeon VII; if they cut a bunch of the BS that gamers "do not really need" to focus on what they likely want, they can claw back a 25%+ deficit no problem, in my honest opinion.
Call BS on that all you want.
Nvidia obviously did this going forward from, say, the 900 series to the GTX 10 series to the current RTX 20 series; there is nothing at all stopping AMD from doing exactly that, cutting stuff back to prop up what they need.

That is a realistic way of looking at it: you don't need a whole pile of transistors to do fancy math or whatever, so put that transistor budget towards whatever else. It's a "cake batter" they just have not had the time, for a number of years, to go back to and adjust the mix the way Nvidia has had the chance to do.

Navi will likely be the crust of some of the ideas for post-Navi (post-GCN as well). That being said, GCN grew out of VLIW-4/VLIW-5 the way HyperTransport became Infinity Fabric, so GCN will likely still be "alive" but rolled into an even more advanced form. You do not throw away billions of dollars of man-hours and design work, after all; you just keep what works. AMD now has the working capital that they did not have for a number of years, so they likely have a bunch of magic nummies they are working on full tilt. Time will tell ^.^
 
You need to ask yourself the following questions: What does Vega on 7nm have to do with a new Navi architecture?
What exactly do the laws of physics have to do with a die shrink of an inefficient architecture (Vega)?
Why was Navi made at all if all they had to do was die-shrink Vega?
When do the laws of physics change so that something inefficient suddenly becomes efficient on a different, lower process node?

I haven't seen anything yet but apples and oranges.

[attached image: 7nm process benefits]

This is the only thing that shows you what you need to know on how things work on the 7nm process.
It shows you the benefits of the process and AMD has demonstrated this.

https://www.overclock3d.net/news/cp...en_3000_series_cpu_-_beats_intel_s_i9-9900k/1

Seeing is believing: on the 7nm process, a 65-watt-rated AMD CPU beating the 135-watt-rated i9-9900K.
How long are all of you willing to ignore facts ....

Wow. One cherry-picked benchmark.

Why not use World War Z to show that the Radeon RX Vega 64 is faster than the GeForce RTX 2080?
 
I didn't watch the video. I used to like Adored, but now it seems like he is manufacturing these rumors himself.

AMD does need to do something crazy, but 2080 Ti performance for $430 is a bit too crazy.
 
I didn't watch the video. I used to like Adored, but now it seems like he is manufacturing these rumors himself.

AMD does need to do something crazy, but 2080 Ti performance for $430 is a bit too crazy.

Same, I used to enjoy his videos. Now it's almost as if he just makes shit up. When he's right he's a messiah, and when he's wrong it was his sources' fault. Personally, I'd rather just wait for AMD to set us straight. I do believe Navi might be a bit of a surprise. If they do cut out a lot of the compute, then it could well end up being a really low-cost, cool-running, quiet gaming GPU. At least that is what I'm hoping for.
 
You have to love how worked up some people get here, arguing about things of which we have zero proof. Some of you think Navi will be X powerful, others think it'll be a disappointment. Great! But arguing about it is pointless when none of you have any idea what the 7nm process or the new architecture will entail. Even if it's the last form of GCN, that means nothing: technically, Fermi, Maxwell, and Pascal are the same architecture with tweaks. Nvidia calls it a "new architecture" every time they change something, while AMD is a bit more transparent: mostly the same with some updates, aka GCN 1.X, etc.

Bottom line: speculate all you want, but arguing is pointless. None of you know anything about this. The only rational thing to do is wait until next month. Speculate as much as you want; it's fun for me to read... but I don't get it when people get angry, considering nobody has even a shred of facts in this thread.
 
Well, we just got confirmation that the PS5 is using an AMD Navi GPU. This is not a rumor; it's from Sony.

https://www.eurogamer.net/articles/2019-04-18-ps5-specs-details-games-6300

Most interesting is that Sony says it will have ray-tracing support. Granted, it's a custom chip and may not be the same as what is coming out on PC, but still.

I think AMD has some cards to play. I don't know either way what will happen, but I think it will be exciting.
 
Exciting for sure. However, any DX12 card can do ray tracing; you just need to enable it via a software driver. Since the PS5 is now expected to support it, I'd guess some minimal form of hardware support, otherwise it'll suck at it - like Pascal cards. If the PS5 has hardware-based DXR acceleration, I bet Navi parts for PC will have it too. Recall how AMD last year said they wouldn't enable DXR acceleration until they can do it top to bottom. It may be that Navi parts have RTX 2060-like minimal hardware (complete speculation on my part), so they could enable it at all levels (even the PS5, which would be on that level), and then Navi 20 next year will flesh out the DXR capabilities.
 
Same, I used to enjoy his videos. Now it's almost as if he just makes shit up. When he's right he's a messiah, and when he's wrong it was his sources' fault. Personally, I'd rather just wait for AMD to set us straight. I do believe Navi might be a bit of a surprise. If they do cut out a lot of the compute, then it could well end up being a really low-cost, cool-running, quiet gaming GPU. At least that is what I'm hoping for.

It seems that everyone is pinning this one way or another on AdoredTV. Let's say that his speculation about chiplets is separate from the information that he gets from leaks.

If you can separate those two things, then you are seeing things as they should be.

AdoredTV (Jim) is nothing more than a person (an enthusiast) like us. He reminds me a bit of JC (JCnews used to be a website with some interesting information around CPUs at the time of the AMD K5/K6). JC sometimes got his stuff from close sources, and that sometimes caused a bit of controversy; when he posted something that was not exactly on the ball, people inside companies (working on CPUs) would contact him and explain things.

Jim has a habit of always wanting to explain the why and how of things (persistently), and that shows even in this video, when he corrects people yet again about how those leaks came in later than the CPU stuff posted on Reddit, even though he has explained that three times already in previous videos.

AdoredTV has two options: sit on his information and do nothing, or make a video and hope that the leaks pan out. You can say a lot of things about this process, but how many people would go out on a limb and do this? I know of exactly one person who did, and his name is Kyle Bennett, when he released the story about GPP, and what disbelief and hatred that caused.

If this is not important, why the videos? It is not like the information you get from wccftech or Fudzilla or other rumour sites, where you know there is a good chance none of it is true. They don't intend to post wrong information, but it ends up that way because their sources are not informed well enough.

If anything, Navi shows that it is something decent. The CU count is where we will need to hear more; if there is a synergy between the desktop product and the new console APU, then that would explain what is going on, and also why there is a low price attached to this (besides die size).

If all of this information on Navi was/is false and AdoredTV knows this, why post it to begin with? It would be the end of his channel.
No one wants to post false information and feed false hope into the AMD user base; the backlash from that is far greater than the supposed click-bait reward.

What is obvious about these leaks is that the pricing is very low compared to Nvidia, but then again, find me a time when Nvidia was not overcharging the crap out of their products. Using Nvidia as a metric on pricing is not a good idea to begin with.
 
Exciting for sure. However, any DX12 card can do ray tracing; you just need to enable it via a software driver. Since the PS5 is now expected to support it, I'd guess some minimal form of hardware support, otherwise it'll suck at it - like Pascal cards. If the PS5 has hardware-based DXR acceleration, I bet Navi parts for PC will have it too. Recall how AMD last year said they wouldn't enable DXR acceleration until they can do it top to bottom. It may be that Navi parts have RTX 2060-like minimal hardware (complete speculation on my part), so they could enable it at all levels (even the PS5, which would be on that level), and then Navi 20 next year will flesh out the DXR capabilities.

Don't forget that the console chips are custom. If Sony/MS asked for some implementation that would allow some form of ray tracing, it can be as broad or as narrow as they want to define it.

On the ray-tracing topic, look at what has been implemented so far: it comes down to effects being scaled back. And the lack of titles means that it is either hard to implement or requires such serious tuning to make it playable.

On the other hand, if the console SDK makes implementing ray tracing easy, then dedicated hardware is not a problem on that platform (consoles are all the same), since most if not all titles can use it without any drawbacks.
 
On the other hand, if the console SDK makes implementing ray tracing easy, then dedicated hardware is not a problem on that platform (consoles are all the same), since most if not all titles can use it without any drawbacks.

I think that's the key. Devs have a known platform and an assured installed base with that platform. It makes the problem much more bounded, and I think we'll start getting reasonable raytracing integration over time with that kickstart.
 
On the ray-tracing topic, look at what has been implemented so far: it comes down to effects being scaled back. And the lack of titles means that it is either hard to implement or requires such serious tuning to make it playable.

It's not really either- it's that AAA games using it today are having it retrofitted to various degrees rather than being built with ray-tracing in mind. That's a reality of AAA-game development.

Actually implementing it is as easy as Nvidia's CEO claimed- but using it properly in a performant manner for the content in question is a larger issue.

With respect to Navi, especially on consoles, this isn't an issue. The hardware is easy enough and developers will have a single target. On the desktop it'll be a little harder, but there isn't anything that Nvidia is doing that AMD doesn't already know how to do- it's whether or not AMD is willing to put the hardware out there.

They could have rolled out a double-Polaris at any time to put pressure on Pascal. They didn't.
 
Don't forget that the console chips are custom. If Sony/MS asked for some implementation that would allow some form of ray tracing, it can be as broad or as narrow as they want to define it.

Absolutely. Not that past actions determine future development, but these custom designs are never THAT custom, meaning they're not a complete departure from the PC hardware versions. In purely economic terms, it would make little sense for AMD to add ray-tracing-capable hardware but then release PC hardware without those capabilities. Surely the custom designs have implications for the chip design, but that rarely means features will be disabled.

I'd expect Navi to have been partially designed with the PS5/XBnext ray-tracing requirements in mind, and so the GPU will have very similar hardware on PC release. The work has already been done, so why shoot themselves in the foot? Whatever the performance level, it'd at least match next-gen console specs; that's a win right there, even if it's not super powerful.

See, the mistake I see many people make is thinking that if AMD doesn't top the 2080 Ti, then it's garbage. Really, if it doesn't topple a $1,300+ card it's garbage? I never spend more than $300 on a GPU; anything above that is worthless for me in terms of value. At sub-$300, the majority of the market, people will take whatever goodies they get, as long as it's good value. I'd happily pay $300 for an RX 3080 that basically equals the RTX 2060 and its ray-tracing capabilities: that's both a rational expectation and good value (for me). Even if the PS5/XBnext are more performant, you'd still be able to accelerate DXR to a certain degree and have a good performance/price ratio.

Finally, and this may be a provocative point for many, I feel like I've been seeing many videos like this one from Techspot, where it certainly looks like AMD cards are aging better than Nvidia cards (partly because AMD just gives us more VRAM on their cards). Raw performance is raw performance, but if AMD keeps giving us more VRAM than Nvidia and that makes a difference for those of us who only upgrade every 2-3 years, that's just another little push to consider Navi later this year (not trying to make this an Nvidia vs. AMD shouting match, just making a point about other not-so-obvious benefits we could get with Navi, and I know this is debatable anyway).
 
Absolutely. Not that past actions determine future development, but these custom designs are never THAT custom, meaning they're not a complete departure from the PC hardware versions. In purely economic terms, it would make little sense for AMD to add ray-tracing-capable hardware but then release PC hardware without those capabilities. Surely the custom designs have implications for the chip design, but that rarely means features will be disabled.

I'd expect Navi to have been partially designed with the PS5/XBnext ray-tracing requirements in mind, and so the GPU will have very similar hardware on PC release. The work has already been done, so why shoot themselves in the foot? Whatever the performance level, it'd at least match next-gen console specs; that's a win right there, even if it's not super powerful.

See, the mistake I see many people make is thinking that if AMD doesn't top the 2080 Ti, then it's garbage. Really, if it doesn't topple a $1,300+ card it's garbage? I never spend more than $300 on a GPU; anything above that is worthless for me in terms of value. At sub-$300, the majority of the market, people will take whatever goodies they get, as long as it's good value. I'd happily pay $300 for an RX 3080 that basically equals the RTX 2060 and its ray-tracing capabilities: that's both a rational expectation and good value (for me). Even if the PS5/XBnext are more performant, you'd still be able to accelerate DXR to a certain degree and have a good performance/price ratio.

Finally, and this may be a provocative point for many, I feel like I've been seeing many videos like this one from Techspot, where it certainly looks like AMD cards are aging better than Nvidia cards (partly because AMD just gives us more VRAM on their cards). Raw performance is raw performance, but if AMD keeps giving us more VRAM than Nvidia and that makes a difference for those of us who only upgrade every 2-3 years, that's just another little push to consider Navi later this year (not trying to make this an Nvidia vs. AMD shouting match, just making a point about other not-so-obvious benefits we could get with Navi, and I know this is debatable anyway).

There are definitely benefits to RAM and the bandwidth attached to it.
People tend to be a little bit too binary when it comes to performance or performance crowns, while in reality most people still game at 1080p, and your run-of-the-mill budget card will do fine at that resolution. The only thing I find exciting about the next-gen consoles is that they're going to get to a place of 60 fps at 2160p, which for consoles should allow a broader audience for gaming without breaking the bank.

I'm not too sure what can be reused from the console APU; if it is custom, I think it is off the table for other uses. However, if AMD implemented a way to use shaders/compute differently to accommodate ray tracing, then nothing should keep AMD from using this on a desktop variant.

I think we have not yet heard everything regarding Navi to get a better picture of it (feature-wise). And the problem is that everything is now tied to Computex 2019(?), and after that it will take a good while to turn up in shops.

There is a good chance that I would purchase two Navi-based cards this year if things turn out; both the RX 3080 and Navi 20 look really attractive. And with a bit of luck both will have a release date this year. And the "worst" part is that Navi is really not the architecture that is going to break certain barriers...
 
So what's wrong with showing what the architecture can do in Vulkan? It's not labeled in a misleading way; also, AMD has been so tight-lipped about Navi that everyone is just guessing at what it can do.

I was talking about cherry-picking.
 
Well, I have 430 USD just burning a hole in my piggy bank, but yeah, that does sound too good to be true.
But also, "performance as xxxx Nvidia card" - well, is that performance over a slew of games, or is that just one cherry-picked AMD game (who can tell)?
And the new Navi cards are mid-range gaming cards, but is that current AMD mid-range or Nvidia mid-range, or could AMD maybe be setting a whole new meaning for what is mid-range in summer/fall 2019?

And why couldn't AMD have hit paydirt with manufacturing and chiplet design, and actually be able to sell GFX cards... nice GFX cards, dirt cheap, and still make money?
Sure, if you have a good card you could price it like the competition's similar card, and make even more money if you have made that card really cheap... BUT! That's not how you finish off your enemy; that just creates a stalemate, and you make some money (which is fine short term).
I dunno, guess we will have to see. I still would like my machine to be all AMD, but it's not a sure thing yet.
 
Right. AMD needs to get aggressive here. If they have a shot they need to take it.

They don't really have a shot.

Undercutting Nvidia too much will just get Nvidia to lower prices too- good for us in the short term, but the reality is that AMD will just make less. Thus they'll follow Nvidia's pricing.

They can't command pricing until they have much closer parity in performance, production capacity, and market share- and those all depend on each other.

Quick example: they could best the 2080Ti by 20%, RT or not, and sell at a lower price- and then they'd sell out. Just like Vega during the crypto craze. Need to make more? Well, should have done that from the beginning. How do you do that? Have to have the market share in the first place to command preference from the fabs.

They have to claw their way up by executing consistently- that's how Nvidia did it, even when they didn't have the very best product. Same for Intel.

Navi might help, but it's not really 'AMD's shot'. It can, though, be a step showing that they can consistently execute. If they can show that they can consistently nip at Nvidia's heels, then they can build the market share that they need to take the kinds of risks that Nvidia takes.
 
Undercutting Nvidia too much will just get Nvidia to lower prices too- good for us in the short term, but the reality is that AMD will just make less. Thus they'll follow Nvidia's pricing.
I agree, but something has to give. If AMD really has a 2080 Ti competitor, they shouldn't waste "their shot", as I'm saying, by pricing it at $1,200.
 
Well, I have 430 USD just burning a hole in my piggy bank, but yeah, that does sound too good to be true.
But also, "performance as xxxx Nvidia card" - well, is that performance over a slew of games, or is that just one cherry-picked AMD game (who can tell)?
And the new Navi cards are mid-range gaming cards, but is that current AMD mid-range or Nvidia mid-range, or could AMD maybe be setting a whole new meaning for what is mid-range in summer/fall 2019?

And why couldn't AMD have hit paydirt with manufacturing and chiplet design, and actually be able to sell GFX cards... nice GFX cards, dirt cheap, and still make money?
Sure, if you have a good card you could price it like the competition's similar card, and make even more money if you have made that card really cheap... BUT! That's not how you finish off your enemy; that just creates a stalemate, and you make some money (which is fine short term).
I dunno, guess we will have to see. I still would like my machine to be all AMD, but it's not a sure thing yet.

The problem is that people have been brainwashed by Nvidia for too long. You can see it where people who bought Nvidia cards would suggest to people buying a Radeon VII that they should not purchase it because, according to them, it is overpriced.

The idea that AMD can make something and use a smart manufacturing process to get there is still baffling to some people, even though it has been almost three years with Zen.

We will see something when Navi gets announced. I hope they release all the information regarding CUs, pricing, and their future roadmap for Navi for the rest of the year as well; then we will know what is true and what is not.

One thing is for sure: if those $430 Navi 20 cards have ray tracing, then for $1,290 you could match Nvidia's pricing and probably get better ray tracing with three of those cards than you would ever get from one 2080 Ti. Never mind the serious difference in teraflops...
 
The problem is that people have been brainwashed by Nvidia for too long. You can see it where people who bought Nvidia cards would suggest to people buying a Radeon VII that they should not purchase it because, according to them, it is overpriced.

The idea that AMD can make something and use a smart manufacturing process to get there is still baffling to some people, even though it has been almost three years with Zen.

We will see something when Navi gets announced. I hope they release all the information regarding CUs, pricing, and their future roadmap for Navi for the rest of the year as well; then we will know what is true and what is not.

One thing is for sure: if those $430 Navi 20 cards have ray tracing, then for $1,290 you could match Nvidia's pricing and probably get better ray tracing with three of those cards than you would ever get from one 2080 Ti. Never mind the serious difference in teraflops...

But the reality is that the VII does give you less for the same money as an RTX 2080. Care for them or not, RTX and DLSS are features.

Brainwashed? If you buy the best card for the $$$ out there, you're brainwashed???
So when AMD has a better-featured card for the same or less $$$ and I buy one, I guess I got brainwashed by AMD....
 
But the reality is that the VII does give you less for the same money as an RTX 2080. Care for them or not, RTX and DLSS are features.

Brainwashed? If you buy the best card for the $$$ out there, you're brainwashed???
So when AMD has a better-featured card for the same or less $$$ and I buy one, I guess I got brainwashed by AMD....

The Radeon VII has double-precision floating point; 16 GB of HBM2 is something which is far more valuable than whatever marketing gimmicks Nvidia has on the RTX 2080.

With Nvidia you will never buy the best card; you will always pay for the best-marketed card, and that is why you pay more. Nvidia convinced people to buy ray tracing and DLSS and said they're worth the money at $1,200. And for that money you get to play games (3) with ray tracing and some with DLSS (a blurry mess more often than not).

When you go along with what Nvidia marketing deems to be good value, then yes, you are brainwashed.
 
The Radeon VII has double-precision floating point; 16 GB of HBM2 is something which is far more valuable than whatever marketing gimmicks Nvidia has on the RTX 2080.

With Nvidia you will never buy the best card; you will always pay for the best-marketed card, and that is why you pay more. Nvidia convinced people to buy ray tracing and DLSS and said they're worth the money at $1,200. And for that money you get to play games (3) with ray tracing and some with DLSS (a blurry mess more often than not).

When you go along with what Nvidia marketing deems to be good value, then yes, you are brainwashed.

I was 100% on board with the idea that more HBM2 is the better deal. Turns out it had zero impact in games.

We have something in common: we play Anthem, you and I. DLSS allows me to play it at 4K with an RTX 2070. Nothing else does that for $500.
Blurry mess? No, not really. You should try it.

 
I was 100% on board with the idea that more HBM2 is the better deal. Turns out it had zero impact in games.

We have something in common: we play Anthem, you and I. DLSS allows me to play it at 4K with an RTX 2070. Nothing else does that for $500.
Blurry mess? No, not really. You should try it.


I did not say DLSS does not function, but I was saying that the result varies per title. And it does not surprise me that you get an excellent result, because if the VRAM footprint of DLSS is small enough, that would be a definite advantage. Then again, few titles ....

Btw, I think that the [H] review of the R9 390X 8 GB had the same reflection on the extra 4 GB, but in the end, as games evolve, that extra memory tends to allow you to get more time out of your card.
 
Nvidia convinced people to buy ray tracing and DLSS and said they're worth the money at $1,200. And for that money you get to play games (3) with ray tracing and some with DLSS (a blurry mess more often than not).

2 things: 1) You are right re: Nvidia marketing. Then again, does it matter? If they manage to get people paying their over-inflated prices, they've won (those prices are subsidizing work for their next architecture, like Apple subsidizes macOS development with high prices on Macs).

2) DLSS is not necessarily blurry; it can be over-sharpened too. Either way, it blows my mind that people are touting its miracles: you get such a performance uplift! Well, duh, you're rendering everything at a waaaaaay lower resolution, so of course performance improves. I can do the same by switching from 4K to 1440p or 1080p. The fact that Nvidia shouts about the performance "improvement" but neglects to explain to users that they manage it by lowering the rendering resolution just baffles me. It proves how far gone the market is, how sadly effective their marketing has been to date. People will buy whatever garbage Nvidia produces (not saying it is ALL garbage, I'm a happy 1060 user here, but you have to admit people buying Nvidia over the RX 570 is mindblowingly shortsighted). DLSS has some benefits (temporal quality of AA), but at the same time it fails miserably in many ways, with lots of rendering artifacts.
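To put rough numbers on the render-resolution point (assuming, purely for illustration, that DLSS renders internally at about 1440p for a 4K output; the exact internal resolution varies by title and is an assumption here):

```python
# Pixel-count comparison behind the "lower internal resolution" argument.
# The 1440p internal render resolution is an assumption, not a confirmed spec.
native_4k_pixels = 3840 * 2160         # ~8.3 million pixels
assumed_internal_pixels = 2560 * 1440  # ~3.7 million pixels

ratio = native_4k_pixels / assumed_internal_pixels
print(f"Native 4K shades {ratio:.2f}x the pixels of the assumed internal resolution")
# -> 2.25x, so per-frame shading work drops sharply before the upscaling cost is added back
```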

I was 100% on board with the idea that more HBM2 is the better deal. Turns out it had zero impact in games.

On this, you are correct. Then again, anyone spending 5 minutes thinking about it could have predicted that when Vega 56/64 were released. If a game requires 6, 8, 10 or 12 GB of RAM, it doesn't matter if you have 256 GB of RAM on that GPU. It doesn't matter how fast HBM2 is if you've already reached your GPU's processing potential. More/faster memory helps nothing at that point, and GDDR5X or GDDR6 have not really been bottlenecks for 95% of use scenarios. Besides that, HBM2 has never been about gaming; it's about video editing and number crunching. AMD have a more compute-focused GPU and they've reused it for gaming. Navi is supposed to begin the change to more gaming, less compute focus. Thus we shouldn't see much HBM2 on Navi, bringing prices down too. We'll soon find out.
 
With Nvidia you will never buy the best card; you will always pay for the best-marketed card, and that is why you pay more.
I'm not being funny here, but have you missed a word or two out of the bolded part? Because if you meant "the best value card" then I could agree with you, but if you mean "the best-performing card" then that's obviously wrong when we're talking about the 2080 Ti or the previous-generation top-end cards.
 
On this, you are correct. Then again, anyone spending 5 minutes thinking about it could have predicted that when Vega 56/64 were released. If a game requires 6, 8, 10 or 12 GB of RAM, it doesn't matter if you have 256 GB of RAM on that GPU. It doesn't matter how fast HBM2 is if you've already reached your GPU's processing potential.
Yep, I've yet to see proof of any game needing 16 GB of VRAM (and I do mean needing and not utilising), or in fact any game needing more than the 11 GB of the 1080 Ti or 2080 Ti.

On the other hand, I think HBM2 has a benefit as far as the VII goes - not necessarily in terms of raw speed, but in terms of bandwidth. 1 TB/s is monstrous compared to Vega 64, and more than double what that card has.
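For reference, a quick check of that comparison using the commonly cited (approximate) memory specs:

```python
# Memory bandwidth comparison using commonly cited, approximate specs.
vega64_bandwidth_gb_s = 484       # GB/s, 2048-bit HBM2 on Vega 64
radeon_vii_bandwidth_gb_s = 1024  # GB/s (~1 TB/s), 4096-bit HBM2 on Radeon VII

ratio = radeon_vii_bandwidth_gb_s / vega64_bandwidth_gb_s
print(f"Radeon VII has about {ratio:.2f}x the memory bandwidth of Vega 64")
# -> roughly 2.1x, consistent with "more than double"
```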
 