GeForce RTX 2060 VRAM Usage: Is 6GB Enough for 1440p?

Please watch and learn.

https://www.techspot.com/article/1785-nvidia-geforce-rtx-2060-vram-enough/

Quote"
"
Bottom Line
It's clear that right now, even for 4K gaming, 6GB of VRAM really is enough. Of course, the RTX 2060 isn’t powerful enough to game at 4K, at least using maximum quality settings, but that’s not really the point. I can hear the roars already, this isn't about gaming today, it’s about gaming tomorrow. Like a much later tomorrow…
The argument is something like, yeah the RTX 2060 is okay now, but for future games it just won’t have enough VRAM. And while we don’t have a functioning crystal ball, we know this is going to be both true, and not so true. At some point games are absolutely going to require more than 6GB of VRAM for best visuals.
The question is, by the time that happens will the RTX 2060 be powerful enough to provide playable performance using those settings? It’s almost certainly not going to be an issue this year and I doubt it will be a real problem next year. Maybe in 3 years, you might have to start managing some quality settings then, 4 years probably, and I would say certainly in 5 years time.


 
I own a 2070; it is too slow and stutters while playing BF5 with DX12 and RTX, fact. It was also tested here by Brent, who found the same issues. Why do I need Steve to tell me otherwise?

It's not a bad card, but we are not in 2015 either.
 
I think I'll keep my GTX 1070 and wait for the next-gen RTX cards... I think Nvidia realized that giving the 2060 8GB of VRAM would be too good a deal at that price point.
 
I own a 2070; it is too slow and stutters while playing BF5 with DX12 and RTX, fact. It was also tested here by Brent, who found the same issues. Why do I need Steve to tell me otherwise?

It's not a bad card, but we are not in 2015 either.
You need to watch this video.
Battlefield 5 DXR vs RTX 2060: Is 1080p60 Ray Tracing Really Possible?
 
Explain the stuttering? I really don't need to watch a video to "convince" me that BF5 runs like absolute shit on a 2070. Now it's a great thing on a 2060? Give me a break.
 
Shadow of the Tomb Raider, Hidden City, running around on each test at 1440p:
1440p, max graphics settings, DX12, with the following VRAM readings using a Vega FE:
  1. SDR, TAA - 6.2GB
  2. HDR, TAA - 6.8GB
  3. HDR, SMAA4x - 7.8GB
HDR added about 0.5GB; better AA added another 1GB. HDR in this game is fantastic with a good HDR monitor, and I really don't think the 2060 would do well here, especially if you run SMAA4x, which is jaggy-free at 1440p. Anyone rendering at a higher resolution and downsampling would also be affected. Unless someone actually tests the 2060 with HDR and increased AA, I won't know for sure whether it would run smoothly. I certainly don't want to be on the borderline right off the bat when buying a $360+ card.
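For anyone who wants to log the same kind of numbers on their own system, here is a minimal, hypothetical sketch using the NVML Python bindings (the pynvml package, assumed installed). It reports total VRAM in use on the device, not what a game strictly needs, and NVML only covers Nvidia cards, so on a Vega FE like the one above you would read the equivalent figure from the driver overlay or GPU-Z instead.

Code:
# Minimal VRAM logger sketch (assumes the nvidia-ml-py / pynvml package is installed).
# Samples total used vs. total VRAM once per second while you run around the test area.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {info.used / 2**30:.2f} GB / {info.total / 2**30:.2f} GB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()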
 
You need to watch this video.
Battlefield 5 DXR vs RTX 2060: Is 1080p60 Ray Tracing Really Possible?

lol, the stuttering with Ultra textures and then smooth with High textures is a classic example of being VRAM limited, and this was at 1080p using DXR. Throw in HDR and that would make it even worse. So if you compromise and decrease quality settings here and there, you can have DXR at 1080p in BFV - that doesn't sound like much of a win overall. Plus, for each game you may have to spend time with the settings to find a combination that works for you - like he said, he went through many settings before he found the one that gave him 60 fps consistently.
 
Shadow of the Tomb Raider, Hidden City, running around on each test at 1440p:
1440p, max graphics settings, DX12, with the following VRAM readings using a Vega FE:
  1. SDR, TAA - 6.2GB
  2. HDR, TAA - 6.8GB
  3. HDR, SMAA4x - 7.8GB
HDR added about 0.5GB; better AA added another 1GB. HDR in this game is fantastic with a good HDR monitor, and I really don't think the 2060 would do well here, especially if you run SMAA4x, which is jaggy-free at 1440p. Anyone rendering at a higher resolution and downsampling would also be affected. Unless someone actually tests the 2060 with HDR and increased AA, I won't know for sure whether it would run smoothly. I certainly don't want to be on the borderline right off the bat when buying a $360+ card.
That's how much RAM is being allocated, not how much RAM is needed.
Watch the first video.
 
That's how much RAM is being allocated, not how much RAM is needed.
Watch the first video.
Until it's properly tested I don't think we will really know. HDR will use more RAM because it needs HDR-10 frame buffers, light maps, a lower compression ratio, etc.
SMAA4x uses multisampling, which will increase the frame buffer size - those will be real increases in VRAM usage.

I did watch the video; his tests were weak in determining what conditions could cause VRAM limitations. Why would you make a video about whether the 2060's 6GB of RAM is or is not enough if you don't really test it fully? No HDR tests, no AA that takes more RAM to run? DXR? How about DX12 tests as well? To me the video was more about justifying that you don't need more than 6GB rather than actually testing it to the max - maybe just hoping Nvidia will be nicer to him later, is my first thought.

Another problem with that video is that it lacks the best indicator of VRAM issues, which is frame times. You can have stutters that frame times will show, yet the 1% lows would not be significantly affected by them in many cases. The video is just not convincing, at least to me, on the viability of the 2060's 6GB of VRAM.
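To make the frame-time point concrete, here is a rough sketch of how average FPS, 1% lows and the worst frame time can be pulled from a capture. The file name and the one-frame-time-in-milliseconds-per-line format are assumptions for illustration (roughly what an OCAT/FRAPS-style export gives you), not any reviewer's actual tooling.

Code:
# Rough sketch: average FPS, 1% low FPS and worst frame time from a frame-time log.
# Assumes "frametimes.csv" holds one frame time in milliseconds per line, no header.
import statistics

with open("frametimes.csv") as f:
    frame_ms = sorted(float(line) for line in f if line.strip())

avg_fps = 1000.0 / statistics.mean(frame_ms)

# "1% low" here = average FPS over the slowest 1% of frames; a handful of long
# frames (stutter) drags this down even when the overall average looks fine.
slowest_1pct = frame_ms[int(len(frame_ms) * 0.99):]
low_1pct_fps = 1000.0 / statistics.mean(slowest_1pct)

print(f"avg: {avg_fps:.1f} fps, 1% low: {low_1pct_fps:.1f} fps, worst frame: {max(frame_ms):.1f} ms")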
 
For the resolutions the card is intended to run at, 6GB will not be a bottleneck anytime soon.

We've only just begun to hit the GTX 1060's 3GB RAM barrier at 1440p in the last year, so you still have several years to go before you hit that VRAM limit.

People get pissed off that there's no doubling of RAM size every two years, but it's completely unnecessary. You don't double your system RAM every two years just to play the latest games, do you?
 
I own a 2070; it is too slow and stutters while playing BF5 with DX12 and RTX, fact. It was also tested here by Brent, who found the same issues. Why do I need Steve to tell me otherwise?

It's not a bad card, but we are not in 2015 either.

Because you're turning RTX on. The 2060 really shouldn't even have RTX support. It's kind of like saying "but at 4K" when the card really isn't powerful enough for 4K as is.
 
That's how much RAM is being allocated, not how much RAM is needed.
Watch the first video.

It seems we have to explain this in every thread regarding VRAM.

Until it's properly tested I don't think we will really know. HDR will use more RAM because it needs HDR-10 frame buffers, light maps, a lower compression ratio, etc.
SMAA4x uses multisampling, which will increase the frame buffer size - those will be real increases in VRAM usage.

I did watch the video; his tests were weak in determining what conditions could cause VRAM limitations. Why would you make a video about whether the 2060's 6GB of RAM is or is not enough if you don't really test it fully? No HDR tests, no AA that takes more RAM to run? DXR? How about DX12 tests as well? To me the video was more about justifying that you don't need more than 6GB rather than actually testing it to the max - maybe just hoping Nvidia will be nicer to him later, is my first thought.

Another problem with that video is that it lacks the best indicator of VRAM issues, which is frame times. You can have stutters that frame times will show, yet the 1% lows would not be significantly affected by them in many cases. The video is just not convincing, at least to me, on the viability of the 2060's 6GB of VRAM.

It has been "properly tested" by most recently [H], HUB, and others. There were several cases where the 8gb cards were showing 8gb used. In every case with no penalty to the 2060.
 
It seems we have to explain this in every thread regarding VRAM.



It has been "properly tested" by most recently [H], HUB, and others. There were several cases where the 8gb cards were showing 8gb used. In every case with no penalty to the 2060.
You failed to understand my point. Hell, I could get my Fury to play at 4K just fine - just reduce some settings.

Longer term I doubt 6GB will be enough; RTX already shows it is too little.

The 2060 at $249 would be a kick-ass card; at $349+, not.
 
You failed to understand my point. Hell, I could get my Fury to play at 4K just fine - just reduce some settings.

Longer term I doubt 6GB will be enough; RTX already shows it is too little.

The 2060 at $249 would be a kick-ass card; at $349+, not.

Fair enough, and many are hoping that the GTX 1160 Ti (?) will basically be that.
 
You failed to understand my point. Hell, I could get my Fury to play at 4K just fine - just reduce some settings.
You fail to understand my point: you can play a game at ultra settings with an 8GB 2070 and use 7.8GB of memory, and you can play the same game at ultra settings with a 2060 and use 5.8GB of memory with no stuttering and no loss of performance from a VRAM perspective.
Games will allocate what's needed from what's available on a per-card basis.

But I'm not saying a 4GB card would do the same; there is a limit, depending on the game.

As of now there are no games that I know of that NEED more than 6GB of memory at 1440p.
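That allocate-what-fits behaviour is roughly what a texture streamer does. Purely as an illustration of the idea (a toy sketch, not any engine's actual code), with made-up texture names and sizes:

Code:
# Toy illustration of why an 8GB card can show ~8GB "used" while a 6GB card runs the
# same settings fine: the streamer simply keeps fewer / lower mips resident.
def pick_mip_levels(textures, vram_budget_mb):
    """textures: list of (name, full_res_size_mb). Returns a mip level per texture
    (0 = full resolution); each mip down costs roughly 1/4 the memory of the one above."""
    mips = {name: 0 for name, _ in textures}

    def resident_total():
        return sum(size / 4 ** mips[name] for name, size in textures)

    while resident_total() > vram_budget_mb:
        # Drop the currently largest resident texture by one mip level and re-check.
        name, _ = max(textures, key=lambda t: t[1] / 4 ** mips[t[0]])
        mips[name] += 1
    return mips

scene = [("terrain", 2048), ("characters", 1024), ("buildings", 1536), ("effects", 512)]
print(pick_mip_levels(scene, vram_budget_mb=7000))  # roomy budget: everything stays at mip 0
print(pick_mip_levels(scene, vram_budget_mb=5000))  # tighter budget: the biggest texture drops a mip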
 
Because you're turning RTX on. The 2060 really shouldn't even have RTX support. It's kind of like saying "but at 4K" when the card really isn't powerful enough for 4K as is.
Removing the RT cores could decrease die size and with it production cost. Maybe it would even save Nvidia as much as a few bucks per chip. XD

Not including RT cores would not benefit users at all, and not even Nvidia, for which the best long-term strategy is to get RTX features on the whole range of cards, from the lowest end to the top end, as soon as possible to avoid market segmentation and encourage developers to use these features. Note: ray tracing does not need to be used for full-blown reflections, shadows, etc. that tank performance. It can also be used, for example, for calculating realistic audio propagation, in which case performance without RT cores would be good, but with them, better. Same with using more artificial intelligence in games: Tensor cores will encourage developers to use it.

GPUs like TU116, which will be used in the GTX 1660 Ti, were developed only because of the shit-coin mining trend going on at the time. The mining trend collapsed, but they had already invested in development of the chip, so it didn't make sense not to go through with it, especially since a lot of people tend to have the stupid idea that not having RTX features makes a card better...
 
Shadow of the Tomb Raider, Hidden City, running around on each test at 1440p:
1440p, max graphics settings, DX12, with the following VRAM readings using a Vega FE:
  1. SDR, TAA - 6.2GB
  2. HDR, TAA - 6.8GB
  3. HDR, SMAA4x - 7.8GB
HDR added about 0.5GB; better AA added another 1GB. HDR in this game is fantastic with a good HDR monitor, and I really don't think the 2060 would do well here, especially if you run SMAA4x, which is jaggy-free at 1440p. Anyone rendering at a higher resolution and downsampling would also be affected. Unless someone actually tests the 2060 with HDR and increased AA, I won't know for sure whether it would run smoothly. I certainly don't want to be on the borderline right off the bat when buying a $360+ card.

Thanks for that. Tech Jesus is an NDA-signing cuck boy.
4GB is most definitely a bottleneck at 1440p in older games for me, so 6GB is barely out of the weeds. The V64 chews through it no problem though, and an FE would do even better.
 
For the resolutions the card is intended to run at, 6GB will not be a bottleneck anytime soon.

We've only just begun to hit the GTX 1060's 3GB RAM barrier at 1440p in the last year, so you still have several years to go before you hit that VRAM limit.

People get pissed off that there's no doubling of RAM size every two years, but it's completely unnecessary. You don't double your system RAM every two years just to play the latest games, do you?
Bullshit.
My 290X was running out and stuttering at 1440p with 4GB, especially in NMS. Went to Vega and it all magically went away - how surprising!
 
You fail to understand my point: you can play a game at ultra settings with an 8GB 2070 and use 7.8GB of memory, and you can play the same game at ultra settings with a 2060 and use 5.8GB of memory with no stuttering and no loss of performance from a VRAM perspective.
Games will allocate what's needed from what's available on a per-card basis.

But I'm not saying a 4GB card would do the same; there is a limit, depending on the game.

As of now there are no games that I know of that NEED more than 6GB of memory at 1440p.
Yes, I understand that, if the game has good memory management (newer games, DX12, drivers, etc. have improved tremendously over the years). Still, when you start having very complex scenes with not hundreds but thousands of objects, each with its own shaders/polygons/textures and longer view distances, 6GB is really cutting it short.

When DXR is used, more geometry comes into play for ray tracing, adding to the memory footprint; add in the shaders/textures etc. needed to render the reflections and memory starts to climb fast. You cannot just cull everything that isn't in a direct shot from the camera view - reflected objects can include a lot of otherwise unseen objects. The whole active world becomes much larger, with more of everything needing to be loaded - here even 8GB has been found by [H]ardOCP to be too limited.

Without HDR testing I would not call 6GB good enough just yet. The reason is that you're now dealing with way more colors, which for a given buffer means less lossless compression can be done, since the chance of having duplicate colors drops dramatically as the number of colors goes up. This will hit RAM as well as GPU performance. Plus your buffers become HDR-10, making them larger to begin with.

Using some of the higher-quality AA options also eats VRAM.

Every test calling 6GB good enough has been lacking in really pushing the settings, and in finding when it will break and cause frame-time issues and stutters. Run Shadow of the Tomb Raider with HDR and SMAA4x at 1440p and see how 6GB fares. At the moment the 2060 would make a good gaming card, except that the Vega 64 will perform better in virtually everything (except ray tracing, which so far looks to be pointless on the 2060) for virtually the same cost, and not only has more VRAM but a very smart memory controller. Really, every Turing card Nvidia has out is priced too high.
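As a rough back-of-envelope on the buffer-size part of that argument, here is a sketch of per-render-target memory at 1440p, assuming an 8-bit RGBA target for SDR, a 16-bit float target for HDR, and a 4x multisampled variant for the SMAA4x-style case. Real engines keep many more intermediate targets and use compression, so treat these strictly as illustrative lower bounds.

Code:
# Back-of-envelope render-target sizes at 2560x1440. Formats are assumptions:
# RGBA8 (4 bytes/px) for SDR, RGBA16F (8 bytes/px) as a common HDR intermediate.
W, H = 2560, 1440
PIXELS = W * H

def target_mb(bytes_per_pixel, samples=1):
    return PIXELS * bytes_per_pixel * samples / 2**20

print(f"SDR RGBA8 target:             {target_mb(4):6.1f} MB")
print(f"HDR RGBA16F target:           {target_mb(8):6.1f} MB")
print(f"HDR RGBA16F, 4x multisample:  {target_mb(8, samples=4):6.1f} MB")

# Per-target numbers look small; the ~0.5-1GB jumps reported above come from the whole
# chain of intermediate buffers growing, plus (as argued) less effective lossless
# color compression once the color range widens.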
 
You failed to understand my point. Hell, I could get my Fury to play at 4K just fine - just reduce some settings.

Longer term I doubt 6GB will be enough; RTX already shows it is too little.

The 2060 at $249 would be a kick-ass card; at $349+, not.

This might sound sarcastic, but you know that the point of developing new technology and IP is to monetize it, right?

Selling an RTX 2060 at $249 would be really stupid.

Why?

First off, TU106 is way too large a chip to sell at that price point, not only from a margin standpoint but from a volume standpoint. The $250 price point is a very high-volume market, and at 440mm² the die is more than twice the size of any of Nvidia's previous x06-class chips. That means not only is it more than twice as expensive to produce (even more so because of the cost of FinFET and lower yields on larger chips), you also get less than half the number of good chips per wafer because of the die-size difference. Price it too low and you will run out of chips very quickly. Combine that with the smaller margin on the big chip and you are basically running yourself out of business. And on top of this, GDDR6 costs 70% more than GDDR5.

So, to summarize: does it make any sense whatsoever to sell a chip that is twice the size of your old chip (which means double the cost), with half the number of chips produced, and with memory that costs 70% more, at the same price as your outgoing card? Especially considering the amount of money spent developing these cards, along with the R&D for the architecture.

And consider that even AMD is milking the consumer at this price point with the $280 RX 590, which uses the same old Polaris chip that you defended in the post below.

https://hardforum.com/threads/xfx-r...-card-review-h.1971704/page-5#post-1043944475

The effort AMD put in was absolutely nothing compared to the RTX 2060, and it shows in the performance. With the RX 580 to RX 590 transition, all AMD did was port the design to 12nm (same-size chip) and keep the same memory, which led to a 10% performance boost with a 21% increase in price. Costs were about the same for AMD (perhaps less if they improved yields, which is highly likely on the third iteration of the chip), minimal R&D was put in, and it's okay for AMD to price it at $280?

But it's not okay for Nvidia to move the price of their product from $250 to $350 when there is a 60% boost in performance, using a chip double the size, with memory that's 70% more expensive, on top of moving to 12nm (the only thing AMD did) and a new design and architecture? And they should charge $250, absorb the increase in production cost, and not recover their investment in R&D? This is friggin' backwards. AMD should have been the ones to lower the price of the RX 590 at this point, particularly now that the RTX 2060 is 45% faster, according to TechPowerUp and ComputerBase.de, for 25% more money.

The RX 590 should have been hammered as a terrible card, because old technology should not come with a higher price. In your original post you were trying to make AMD look like the good guy for not increasing the prices of their rebadges. That is not how it has worked in the past, and it sets a poor precedent for consumers. Typically, when a company rebadges and revises its technology, it has come with a lower price: GTX 480 to 570, GTX 680 to 770, 7970 to 280X, 7870 to 270X, 290X to 390X. All of these came with better performance than the previous iteration/rebadge and a lower price. But with Polaris, AMD priced the cards at the same price as the outgoing chip to increase margins (from yields improving over time), and they got extra greedy with the RX 590 and raised the price on top of that. There is another company that uses this type of pricing structure, and it has led to the stagnation of performance increases: Intel. It's a crappy tactic when Intel does it, and it's a crappy tactic when AMD does it.

I would rather a company not rebadge its products and instead develop new technology, if that means bigger performance gains and new cards more frequently. I would rather get a card that costs 40% more money if it is 60% faster than one that is 10% faster for 21% more money.
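To put rough numbers on the die-size argument, here is a back-of-envelope sketch. The ~440 mm² TU106 figure is taken from the post above; the ~200 mm² assumed for the previous-generation GP106, the wafer cost and the defect density are placeholder assumptions, not real foundry numbers.

Code:
# Back-of-envelope dies-per-wafer and yield comparison for the die-size argument.
import math

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.1   # placeholder assumption
WAFER_COST_USD = 6000          # placeholder assumption

def gross_dies(area_mm2):
    # Common approximation: usable wafer area divided by die area, minus edge loss.
    r = WAFER_DIAMETER_MM / 2
    return math.pi * r**2 / area_mm2 - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * area_mm2)

def yield_fraction(area_mm2):
    # Simple Poisson yield model: Y = exp(-area * defect_density).
    return math.exp(-(area_mm2 / 100.0) * DEFECT_DENSITY_PER_CM2)

for name, area in [("GP106-class (~200 mm^2)", 200), ("TU106 (~440 mm^2)", 440)]:
    good = gross_dies(area) * yield_fraction(area)
    print(f"{name}: ~{good:.0f} good dies per wafer, ~${WAFER_COST_USD / good:.0f} per good die")

Under those assumptions the bigger die roughly halves the gross dies per wafer and roughly triples the cost per good die, which is the shape of the argument above, whatever the exact foundry numbers are.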
 
Nvidia's die-size problem comes from unproven and mostly unused tech. Take out the tensor cores and even the RT cores and you have a better-suited chip for what it will actually be used for. DLSS sounds great on paper - the lack of adoption after this amount of time says a lot. In other words, a lot of the space on the GPU is wasted.

Now, I was comparing the 590 to the 1060 in the same price range, where AMD has a very clear advantage, plus FreeSync support - well, Nvidia now supports adaptive sync, so that is a moot point now. In the $200 range AMD has a very clear advantage. In the 2060's price range AMD again has a very clear performance advantage with Vega 64/56, with more usable features (my opinion on that). It will be interesting to see how the Radeon VII performs; it may extend AMD's advantage up to the 2080. So from $700 on down, AMD becomes the better choice.
 
Nvidia's die-size problem comes from unproven and mostly unused tech. Take out the tensor cores and even the RT cores and you have a better-suited chip for what it will actually be used for.

How much room on the chip does this unused tech take up? What percentage?
 
How much room on the chip does this unused tech take up? What percentage?
Here is an image from VideoCardz of the Turing Streaming Multiprocessor showing, I presume, the relative sizes of the Tensor cores and RT core per SM. They do take up a significant amount of space; I don't know the percentages.

https://wccftech.com/nvidia-turing-gpu-architecture-geforce-rtx-graphics-cards-detailed/

[Image: Turing.png - Turing SM block diagram]
 
Yes, I understand that, if the game has good memory management (newer games, DX12, drivers, etc. have improved tremendously over the years). Still, when you start having very complex scenes with not hundreds but thousands of objects, each with its own shaders/polygons/textures and longer view distances, 6GB is really cutting it short.

When DXR is used, more geometry comes into play for ray tracing, adding to the memory footprint; add in the shaders/textures etc. needed to render the reflections and memory starts to climb fast. You cannot just cull everything that isn't in a direct shot from the camera view - reflected objects can include a lot of otherwise unseen objects. The whole active world becomes much larger, with more of everything needing to be loaded - here even 8GB has been found by [H]ardOCP to be too limited.

Without HDR testing I would not call 6GB good enough just yet. The reason is that you're now dealing with way more colors, which for a given buffer means less lossless compression can be done, since the chance of having duplicate colors drops dramatically as the number of colors goes up. This will hit RAM as well as GPU performance. Plus your buffers become HDR-10, making them larger to begin with.

Using some of the higher-quality AA options also eats VRAM.

Every test calling 6GB good enough has been lacking in really pushing the settings, and in finding when it will break and cause frame-time issues and stutters. Run Shadow of the Tomb Raider with HDR and SMAA4x at 1440p and see how 6GB fares. At the moment the 2060 would make a good gaming card, except that the Vega 64 will perform better in virtually everything (except ray tracing, which so far looks to be pointless on the 2060) for virtually the same cost, and not only has more VRAM but a very smart memory controller. Really, every Turing card Nvidia has out is priced too high.

Even with more VRAM, I doubt the 2060 could handle DXR - at least not without drastically cutting back other graphics settings.

HDR is an interesting thing to look at. None of the big tech sites or YouTubers test HDR right now. I think it's still such a niche thing, only supported by a handful of games, that it's probably not worth investing the money in the equipment to do the testing right now. It doesn't help that the only monitors with good HDR are close to two grand, and even for TVs you're looking at around $600, at minimum, to get something with WCG, FALD, and good enough brightness.
 
I think I'll keep my GTX 1070 and wait for the next-gen RTX cards... I think Nvidia realized that giving the 2060 8GB of VRAM would be too good a deal at that price point.
I’m also hanging on to my 1070 for the next round. Hoping for some competition.
 
And HDR does not have nearly the same performance impact as RTX; it's more like the impact of enabling 2x MSAA.

Just expect 20 to 30 percent lower performance, with a similar increase in memory usage. It's not going to break the memory bank like RTX does.
 
This whole thread and topic will be a non-issue in 5 months when 7nm Navi releases.
Navi will be cheaper and offer better performance than the 2060.
The 2060 will then be lowered to $299 or $279, where it should be and at a price that justifies 6GB of VRAM for a mid-range GPU.
For those that don't want to wait, just pay the $350, and the rest of us can let them be happy with their product.
 
Thanks for that. Tech Jesus is an NDA-signing cuck boy.
4GB is most definitely a bottleneck at 1440p in older games for me, so 6GB is barely out of the weeds. The V64 chews through it no problem though, and an FE would do even better.

What kind of comment is this? First off, HUB Steve is not Tech Jesus; that is Steve from Gamers Nexus. I doubt short-haired Steve is in Nvidia's favor just because they shipped him an RTX 2060 at launch.
I doubt either of them lets anyone fuck their wives, as the "cuck boy" comment suggests. Please grow up.

Please see the above comments for memory allocation versus memory needed.

Bullshit.
My 290X was running out and stuttering at 1440p with 4GB, especially in NMS. Went to Vega and it all magically went away - how surprising!

No frickin way, that is so amazing. Well then, that ends the conversation about 6 GB being enough...
 
This whole thread and topic will be a non-issue in 5 months when 7nm Navi releases.
Navi will be cheaper and offer better performance than the 2060.
The 2060 will then be lowered to $299 or $279, where it should be and at a price that justifies 6GB of VRAM for a mid-range GPU.
For those that don't want to wait, just pay the $350, and the rest of us can let them be happy with their product.

This whole thread and topic is about whether 6GB is enough, NOT Navi vs. the RTX 2060.

I don't know the latest Navi rumors, but it is entirely possible that there could be 6GB and 12GB versions of a model. No doubt the 6GB version would offer great savings if the performance drop was minimal.
 
Nvidia's die-size problem comes from unproven and mostly unused tech. Take out the tensor cores and even the RT cores and you have a better-suited chip for what it will actually be used for. DLSS sounds great on paper - the lack of adoption after this amount of time says a lot. In other words, a lot of the space on the GPU is wasted.

Now, I was comparing the 590 to the 1060 in the same price range, where AMD has a very clear advantage, plus FreeSync support - well, Nvidia now supports adaptive sync, so that is a moot point now. In the $200 range AMD has a very clear advantage. In the 2060's price range AMD again has a very clear performance advantage with Vega 64/56, with more usable features (my opinion on that). It will be interesting to see how the Radeon VII performs; it may extend AMD's advantage up to the 2080. So from $700 on down, AMD becomes the better choice.


The size also brings a significant uplift in performance. That performance jump is mostly what you're paying for, and the lead is likely to grow because Nvidia finally brought its feature set up to AMD's when it comes to async compute and variable-rate shading, and beyond, to the point where we see very big gains versus AMD in games like Wolfenstein II.

Not only that, there is a card coming out without RT, with a lower price and a smaller die, to address the under-$300 market. It's also not a recycle; it's a brand-new chip and card, which is a lot more than can be said for the RX 590.

The Tensor cores and RT do have questionable uses at the moment, but they needed to be implemented for developers to even consider the technology. I don't have much faith in RT becoming useful at this level, but there is a chance. DICE's DX12 implementation is horrific and already brings a massive drop in performance; RT needs to be given another chance with another game and studio before we close the book on it. DLSS, however, is a feature that is likely to stay and improve over time - that is the nature of anything developed with AI.

For now, at least, we are seeing it implemented, and it is slated for future games. Why does Vega 56/64 have more usable features? Ironically, talking about paying for features of questionable use: AMD's NGG and primitive-shader implementation are broken on current Vega, its DSBR seems limited compared to what Nvidia is doing, and Nvidia basically has all the hardware features of Vega and then some, including variable-rate shading (a more robust version of it) and async, along with RT/DLSS.

Vega 56/64 is not a superior product to the RTX 2060, and most reviews reflect this. The RTX 2060's performance splits the gap between the two, sitting closer to Vega 64. There are some videos online (for gamers) that try to inflate Vega 64's lead, but we can blame AMD's guerrilla marketing team for that; I debunked that review, as the results are fake if you cross-reference them with any existing reviews online. Add in the lower power consumption and the better coolers, and the RTX 2060 is a compelling product versus the two Vega cards. The only advantage AMD has is the additional 2GB of framebuffer, but as we can see from the RTX 2080's lead over the GTX 1080 Ti growing at higher resolutions, Nvidia's memory usage is more efficient.

The RX 590's price is way too close to the RTX 2060's, and that's why it has been slowly falling since the RTX 2060's release. The $280 and $350 markets are close enough that the cards compete with one another. The RTX earns its premium because it is priced well even against the reduced street pricing of Vega 56/64, and it has been causing the prices of Vega and the RX 590 to fall as a result. However, AMD needs the prices to fall further, because the cards typically selling at prices competitive with the RTX 2060 are reference cards, which are generally not recommended due to noise and heat concerns.
 
I think you said it all:
  • "Tensor cores and RT have questionable uses" - so those features are not really a factor between the two similarly priced cards, the 2060 vs. Vega 56/64, at today's pricing.
  • "DSBR seems limited to what Nvidia is doing" - so pretty much feature parity on culling non-visible polygons. For AMD this gives up to a 10% performance lift when the driver enables it for a game it helps, which seems to be most of them.
  • "AMD NGG and primitive shader implementation are broken on current Vega" - Really? They have been shown to work; the issue is that there are no API calls yet to use them. Whether they get added to DX12 and/or Vulkan, given the rather small segment of the market this represents, or added once the next-gen consoles come out, remains to be seen. Now, if you do know they are broken, you might be able to get a lawsuit going and reap some rewards. In reality it was a plan that did not work out, since it requires basically assembly-level programming with no direct API calls, while open solutions like Wolfenstein II's compute-shader primitive culling in Vulkan worked great. Can we say the same for DLSS? The one and only game that has it shows flashing objects and reduced texture clarity, and you can get better results with equivalent performance by upscaling from a lower resolution with TAA.
Other feature comparisons aside, all said and done what matters most for gamers is performance in today's games and tomorrow's - I think the Vega 64 has more performance overall than a 2060. BF5, Vega 64 vs. 2080 (and please note the RAM usage): I've found DX12 much better in BF5 with Vega; the video below was in DX11.

At this time I do believe the Vega 64 is in general the better buy, not only for games but also for other areas such as compute, professional applications, etc.

As for overclocking, very few sites know how to really push Vega. Only Gamers Nexus really showed how to push one - a Vega 56 that performed on par with an RTX 2070.
  • For Vega, undervolting is key, and keeping the HBM cool allows some very good HBM overclocking. Cooling is king for really pushing it, and the performance uplift becomes rather large.
 
I failed to even compare the driver features of the Vega 64 against the RTX 2060 - usable features:
  • Built-in OC software that can be configured per application - absent on Nvidia; you must use a separate external program
  • Built-in monitoring, plus remote monitoring of hardware performance from your mobile phone, tablet or another computer - absent on Nvidia
  • Built-in video capture without having to load GeForce Experience
  • Driver updates right in the driver UI - with Nvidia you must use GeForce Experience
  • Much better multi-monitor support (hardware/software)
  • A modern, efficient UI compared to Nvidia's hideous one, in my opinion
 
All said and done, what matters most for gamers is performance in today's games and tomorrow's - I think the Vega 64 has more performance overall than a 2060. BF5, Vega 64 vs. 2080 (and please note the RAM usage): I've found DX12 much better in BF5 with Vega; the video below was in DX11.

At this time I do believe the Vega 64 is in general the better buy, not only for games but also for other areas such as compute, professional applications, etc.

As for overclocking, very few sites know how to really push Vega. Only Gamers Nexus really showed how to push one - a Vega 56 that performed on par with an RTX 2070.
  • For Vega, undervolting is key, and keeping the HBM cool allows some very good HBM overclocking. Cooling is king for really pushing it, and the performance uplift becomes rather large.


Vega 64 looks to have some advantages over the RTX 2080 in this title.
The RTX 2080 was surely 15-20% faster and used slightly less VRAM, at 3.7GB compared to about 4.1GB (better compression?).

However, Vega used far less system memory, at 7.4GB compared to around 9.2GB, at least in the opening section. This could potentially be an issue for users with 16GB of system memory.
Also, Vega used way less CPU, at around 40% compared to 60% with the RTX. Not significant for most, but it could be a factor for live streamers and those with limited CPUs.
 
Vega 64 looks to have some advantages over the RTX 2080 in this title.
The RTX 2080 was surely 15-20% faster and used slightly less VRAM, at 3.7GB compared to about 4.1GB (better compression?).

However, Vega used far less system memory, at 7.4GB compared to around 9.2GB, at least in the opening section. This could potentially be an issue for users with 16GB of system memory.
Also, Vega used way less CPU, at around 40% compared to 60% with the RTX. Not significant for most, but it could be a factor for live streamers and those with limited CPUs.
Interesting differences. I would also expect more CPU usage from a higher-performing card. In BFV, DX12 on the Vega FE can be dramatically better with an overclocked 2700, so there are some big variations across different systems.

There are a lot of videos now comparing the 2060 and Vega 64 which one can search for; most just show the Vega 64 ahead overall. Here is a video that caught my attention since it includes 1% lows (i.e. 99% of frames render faster than the given number): at 1440p the 2060 takes a much larger hit than the Vega 64. RAM amount? RAM bandwidth? The Vega 64 has far more bandwidth than the RTX 2060, as a thought. Also note that the RTX 2060 is factory overclocked by Nvidia, while the Vega 64 is a reference card with a lot of OC headroom left.

 
Interesting differences. I would also expect more CPU usage from a higher-performing card. In BFV, DX12 on the Vega FE can be dramatically better with an overclocked 2700, so there are some big variations across different systems.

There are a lot of videos now comparing the 2060 and Vega 64 which one can search for; most just show the Vega 64 ahead overall. Here is a video that caught my attention since it includes 1% lows (i.e. 99% of frames render faster than the given number): at 1440p the 2060 takes a much larger hit than the Vega 64. RAM amount? RAM bandwidth? The Vega 64 has far more bandwidth than the RTX 2060, as a thought. Also note that the RTX 2060 is factory overclocked by Nvidia, while the Vega 64 is a reference card with a lot of OC headroom left.




I don't know about NJtech. If you are putting that much work into these benchmarks, could you put in a little intro and not just a bunch of colored graphs with trance music? Their format just doesn't feel legit.

Assuming it's all legit, I doubt it is bandwidth, as 336GB/s seems enough for that weight class. Lack of VRAM is possible, but it seemed to do fine against a 1070 Ti in another video. I would need to see more independent tests, with playability feedback, of the games that appear to be in trouble to verify it was not just driver issues.
 
I don't know about NJtech. If you are putting that much work into these benchmarks, could you put in a little intro and not just a bunch of colored graphs with trance music? Their format just doesn't feel legit.

Assuming it's all legit, I doubt it is bandwidth, as 336GB/s seems enough for that weight class. Lack of VRAM is possible, but it seemed to do fine against a 1070 Ti in another video. I would need to see more independent tests, with playability feedback, of the games that appear to be in trouble to verify it was not just driver issues.
Those are just canned benchmarks, and you can find many of them to compare. You can find some pitting an overclocked Vega 64 against an overclocked RTX 2060 where the gap really widens. I don't think too many folks will go out and get an RTX 2060 (unless for another specific type of build) if they already have a Vega 64, because that would be a downgrade - more like a 2080, 2080 Ti or Radeon VII - so we are left with what is available for comparisons. Now, if I had a sample or a loaner, I would do a full HDR comparison review plus FreeSync, finding out when 6GB or even 8GB becomes limiting, bandwidth limitations, etc.
 
I have the RTX 2060 and don't recommend it for anything above 1080p. If you do buy it and play at a higher resolution, you may find yourself wanting to upgrade sooner than you'd like.
 