Radeon VII (Vega 20, 7nm, 16GB) - $699, available Feb 7th with 3 games

Brent and Kyle already ran into 8GB VRAM limitations on the 2080 and 2070 in BFV, causing stutter and sporadic low frame rates.

HDR uses 10-bit light maps, textures, etc., which adds to VRAM usage. In BFV, up to 12GB of VRAM can be seen in use keeping assets ready for smooth gameplay, and that is without RTX.
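Some rough back-of-the-envelope numbers (mine, not from any review) on what the format change costs per render target at 4K: the 10-bit RGB10A2 swapchain format is actually the same 4 bytes per pixel as 8-bit RGBA, so most of the extra VRAM comes from FP16 intermediate buffers and higher-precision assets rather than the swapchain itself.

```cpp
// Rough sketch: per-render-target memory at 3840x2160 for common formats.
// Real VRAM use is dominated by textures and the many intermediate buffers
// a deferred engine keeps around, so treat these as illustrative only.
#include <cstdio>

int main() {
    const double pixels = 3840.0 * 2160.0;      // 4K
    const double mib    = 1024.0 * 1024.0;
    printf("RGBA8   (8-bit SDR) : %6.1f MiB\n", pixels * 4 / mib);  // 4 bytes/px
    printf("RGB10A2 (10-bit HDR): %6.1f MiB\n", pixels * 4 / mib);  // 4 bytes/px
    printf("RGBA16F (FP16 HDR)  : %6.1f MiB\n", pixels * 8 / mib);  // 8 bytes/px
    return 0;
}
```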

The 6GB RTX 2060 is a bad joke and, as far as I'm concerned, DOA. Plus another $100 added to it. Seriously, will it be able to really use RTX in anything? RAM is what will make this card pointless and a disappointment - Fury 2.

As for FreeSync, there are many happy folks using it. My FreeSync 2 monitor works flawlessly and is HDR. There is another aspect that perturbs me: HDR on the 1080 Ti makes the desktop look like utter crap. The same monitor on the Vega 64 with the HDR desktop correctly converts SDR content over, videos etc., yet you see zero analysis or review of this aspect.

I will see how VRR works with Nvidia.
I've never seen 12GB VRAM use with BFV at 4K on my setup.
 
I haven’t seen any benchmarks go past 7 GB of usage.


[attached benchmark image]
 
Wow. That makes the 2080 look pretty silly for that game at 4K. Even the 2080 Ti might struggle. I was looking around at benchmarks, but didn’t think to check [H]. Thanks. Makes AMD’s decision to go with 16 GB look pretty good right now.

It would, if they had DXR ;)
 
Wow. That makes the 2080 look pretty silly for that game at 4K. Even the 2080 Ti might struggle. I was looking around at benchmarks, but didn’t think to check [H]. Thanks. Makes AMD’s decision to go with 16 GB look pretty good right now.

The decision might be good for professional or content-creation situations, but it really seems like massive overkill for gaming. 12GB would have made a lot more sense and likely would have allowed them to lower the price a little.
 
The decision might be good for professional or content-creation situations, but it really seems like massive overkill for gaming. 12GB would have made a lot more sense and likely would have allowed them to lower the price a little.

An odd configuration doesn't make much sense at first pass, but really, they could literally leave off a stack of HBM2 (dropping to 12GB on a 3072-bit bus) without affecting performance. If they could leave off two stacks, they could shrink the interposer and likely significantly reduce their BOM.
 
Ray tracing is a joke. So far they got it to work in one game, then gimped it so they had some frame rates.
OH BOY, ray tracing... everyone stop shooting and come look at this shadow... ain't it purty


DLSS, on the other hand, helps... can AMD do it???
 
The Frostbite engine has been slower in DX12 for years. I don't know why that is, it doesn't make sense, but it's a fact.
Depends on your system; my Ryzen 2700/Vega FE system is faster in DX12 than DX11 in BFV - plus DX11 seems to be more erratic in frame rates. Nvidia also has better DX11 support, so AMD users just need to test which one works best for them.

Also, different aspects of any game can differ between the two APIs. For example, low-content scenes (few objects, shaders, etc.) may excel under DX11's low-latency single-threaded submission (one thread controlling the draw calls), while DX12 will excel at highly complex scenes with tens of thousands of objects, using the combined strength of multiple cores.
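For anyone wondering what "multiple cores at work" looks like in code, here is a minimal D3D12 sketch (mine, not from Frostbite or any shipping engine) of the pattern: each thread records its own command list from its own allocator, and a single thread submits them all with one ExecuteCommandLists call. Under DX11, that recording work would all funnel through the one immediate context.

```cpp
// Minimal sketch: several threads each record their own D3D12 command list,
// one thread submits them all. Link against d3d12.lib; the lists here record
// nothing - only the threading structure matters.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Default adapter, minimum feature level 11_0.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC qdesc = {};
    qdesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));

    const int kThreads = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
    std::vector<std::thread> workers;

    for (int i = 0; i < kThreads; ++i) {
        // One allocator + command list per thread; no sharing, no locks.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // A real engine records its draws here (SetPipelineState,
            // DrawIndexedInstanced, ...) in parallel with the other threads.
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // Submission is still one call from one thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());

    // Fence so the GPU is idle before everything is released.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);
    HANDLE evt = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, evt);
    WaitForSingleObject(evt, INFINITE);
    CloseHandle(evt);
    return 0;
}
```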
 
Interesting, and directly at odds with HardOCP's findings that enabling DX12 incurs an immediate performance drop.
That was with current Intel/Nvidia hardware, drivers, and OS updates - that does not mean other systems with different hardware/configurations will show the same behavior or findings.
 
Wow. That makes the 2080 look pretty silly for that game at 4K. Even the 2080 Ti might struggle. I was looking around at benchmarks, but didn’t think to check [H]. Thanks. Makes AMD’s decision to go with 16 GB look pretty good right now.
VRAM used and VRAM needed are two different things, of course. Games will cache as much as possible to memory even if that doesn't actually improve real-world performance.
 
VRAM used and VRAM needed are two different things, of course. Games will cache as much as possible to memory even if that doesn't actually improve real-world performance.

Yup, it's the same thing as system RAM where unutilized RAM is useless RAM. So it makes sense to fill it to the brim with cached content for faster access if there is enough content to fill it in the first place. Doesn't mean it is needed.
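To illustrate the used-vs-needed distinction, here is a toy sketch (not how Frostbite actually manages memory) of an asset cache that opportunistically fills whatever budget it is given and evicts only when forced: the reported "usage" sits at the budget even though the per-frame working set is far smaller.

```cpp
// Toy asset cache: fills its VRAM budget opportunistically and evicts the
// least-recently-used entries only when new data needs room. "Used" bytes
// sit at the budget even when the per-frame working set is much smaller.
#include <cstdio>
#include <list>
#include <string>
#include <unordered_map>

class AssetCache {
    size_t budget_, used_ = 0;
    std::list<std::pair<std::string, size_t>> lru_;              // front = most recent
    std::unordered_map<std::string, decltype(lru_)::iterator> index_;
public:
    explicit AssetCache(size_t budget) : budget_(budget) {}
    void touch(const std::string& name, size_t bytes) {
        auto it = index_.find(name);
        if (it != index_.end()) {                                // already resident
            lru_.splice(lru_.begin(), lru_, it->second);
            return;
        }
        while (used_ + bytes > budget_ && !lru_.empty()) {       // evict only when forced
            used_ -= lru_.back().second;
            index_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(name, bytes);
        index_[name] = lru_.begin();
        used_ += bytes;
    }
    size_t used() const { return used_; }
};

int main() {
    const size_t MiB = 1024 * 1024;
    AssetCache cache(11ull * 1024 * MiB);                        // pretend 11GB card
    for (int i = 0; i < 400; ++i)                                // stream lots of textures
        cache.touch("texture_" + std::to_string(i), 64 * MiB);
    printf("VRAM 'used': %zu MiB\n", cache.used() / MiB);        // ~budget, not ~need
    return 0;
}
```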
 
Depends on your system; my Ryzen 2700/Vega FE system is faster in DX12 than DX11 in BFV - plus DX11 seems to be more erratic in frame rates. Nvidia also has better DX11 support, so AMD users just need to test which one works best for them.

Also, different aspects of any game can differ between the two APIs. For example, low-content scenes (few objects, shaders, etc.) may excel under DX11's low-latency single-threaded submission (one thread controlling the draw calls), while DX12 will excel at highly complex scenes with tens of thousands of objects, using the combined strength of multiple cores.
Agreed.
My friend's low-core-count system with a 580 prefers DX12. Anyone assuming it's just bad is just being lazy. Patches change things.
 
With AMD's poor DX11 support, I can see why DX12 might seem attractive in Frostbite - but in DX12 it's still slower than Nvidia's DX11 ;)
 
The decision might be good for professional or content-creation situations, but it really seems like massive overkill for gaming. 12GB would have made a lot more sense and likely would have allowed them to lower the price a little.
If AMD could cut costs, wouldn't they have done that?
Since when is it to AMD's advantage to use high-quality components that drive up costs in a market where they have trouble competing at the same level as Nvidia (being outsold all the time)?
It must be that they couldn't do it any other way (guess why Vega originally needed HBM as well).
 
With AMD's poor DX11 support, I can see why DX12 might seem attractive in Frostbite - but in DX12 it's still slower than Nvidia's DX11 ;)
And in the RE2 demo, DX11 was far and away better than DX12 for me, and at least one other person mentioned it in the thread.
 
With AMD's poor DX11 support, I can see why DX12 might seem attractive in Frostbite - but in DX12 it's still slower than Nvidia's DX11 ;)
lol, or Nvidia DX12 just plain sucks :D. It is what it is. I have two videos processing that compare DX11 and DX12, short and sweet. They will be placed in this thread once done and if OK (should be the third video post):

https://hardforum.com/threads/battlefield-v-ryzen-cpu-and-vega-testing.1975050/

As for RAM usage, in DX12 I've not seen anything over 7GB; in DX11 it climbs to around 12GB and then stays there. This is at 1440p, single-player stuff. Once I get to mediocre at multiplayer, or good enough to hide and not get killed in 5 seconds, I may do some more boring videos ;).
 
Given that it's a game-to-game thing for DX12, not really. But AMD's DX11 (and DX10) drivers have struggled since their inception, of that there is no doubt.
Yes and yes, but sometimes no - for example, Far Cry 5 is supported very well by AMD's drivers and works about perfectly with CFX too. When AMD puts the effort in, they can have some good DX11 performance or efficiency.
 
Yes and yes, but sometimes no - for example, Far Cry 5 is supported very well by AMD's drivers and works about perfectly with CFX too. When AMD puts the effort in, they can have some good DX11 performance or efficiency.

Sometimes they get lucky- I like to think it has to do with less ratfuckery from the devs on particular titles, but who knows.
 
Buildzoid put up a video talking about the card. It's a pretty good rundown of the card and some speculation on why the price is where it's at. As usual with Buildzoid he rambles at times, but he still does a good job providing a bit of a different take on the card compared to a lot of the tech press.



I think his premise is flawed. He seems to be under the assumption that AMD has big sales in the data center, and this was just a lark to sell it to consumers.

I think both of those points are questionable.

1: I really doubt AMD has big data center sales. Nvidia dominates the data center even more than it dominates gaming, and CUDA dominates scientific computing.
2: I really doubt AMD ever planned this as a data center only chip. AMD is all about using a chip in as many applications as possible.
 
2: I really doubt AMD ever planned this as a data center only chip. AMD is all about using a chip in as many applications as possible.

It's an advantage and a disadvantage - obviously some, like Apple, appreciate the multi-use capability, but it does put them at a basic inefficiency disadvantage that has to be accounted for one way or another in final products.
 
If the VII proves itself as a legit 4K gaming choice and also as a decent creative/pro performer, the current price seems OK to me.

 
So this is 60 CUs with 16GB; I would love to see a 50 CU/8GB version for $475.

Why would you want to waste money on that? Just grab a Vega 56 or 64 for cheaper, lol. It's not just the 16GB of memory; the extra HBM2 stacks more than double the bandwidth, which is one of the primary reasons for the performance increase. If 8GB got the same performance, AMD would have loved to do that. There is a reason it has double the memory bandwidth.
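Rough numbers behind the bandwidth point, from the published HBM2 specs (each stack is a 1024-bit interface, so bandwidth scales directly with stack count); the 3-stack line is purely hypothetical:

```cpp
// HBM2 bandwidth = stacks * 1024 bits/stack * pin speed (Gbps) / 8 bits per byte.
#include <cstdio>

static double hbm2_gbps(int stacks, double pin_gbps) {
    return stacks * 1024 * pin_gbps / 8.0;   // GB/s
}

int main() {
    printf("Vega 64     (2 stacks @ 1.89 Gbps): %6.0f GB/s\n", hbm2_gbps(2, 1.89));
    printf("Radeon VII  (4 stacks @ 2.00 Gbps): %6.0f GB/s\n", hbm2_gbps(4, 2.00));
    printf("3-stack VII (hypothetical)        : %6.0f GB/s\n", hbm2_gbps(3, 2.00));
    return 0;
}
```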
 