Cranky1970
As per usual....a great review Kyle!
Brent_Justice does all the heavy lifting.
"Sure."
Thanks, got it now.
Since the performance difference between the Radeon VII and the GeForce RTX 2080 is so large in Shadow of the Tomb Raider running in DirectX 11, I wonder whether the same performance gap persists when running the game in DirectX 12.
Awesome review! Looks like a decent but not perfect card. Certainly nice that AMD is hanging at 4K now.
Quick question: do you think it would be worthwhile to sell my 2x Vega 64s and buy 1x Vega VII (for use on 4K TV)?
Looks like cost would be even, and I can avoid Crossfire issues/incompatibility but probably get less performance in the "best case" games.
Some games support Crossfire, yes, but it's hit or miss.
However, I have this FreeSync TV and previously 1x Vega 64 was not enough for 4K.
With the VII, though, it looks like 4K is viable, so I might make the swap.
Shadow of the Tomb Raider should be tested on DirectX 12 to see if the gap closes somewhat.
Silver rating?
The fan is louder than most modern HVAC systems.
Some games showed double-digit frame rate deficits (one with 30% less performance) against an equivalently priced card that came out 2.5 years ago.
Your personal system locked up constantly with an X399 motherboard from the same company.
You can’t overclock without crashing.
Silver rating???
Tomb Raider is faster with the RTX 2080 using DX12, and even faster when both cards are using DX12.
Is that what you wanted to hear?
Just looked at availability; they are sold out everywhere.
Good review. I agree with the conclusion: if AMD came in a bit lower in price ($649), I would have no problem recommending this over the RTX 2080. At the same price, though, I'd rather go with the RTX, since it offers more consistent performance across different games and you have the option of using the RTX features (not that I think they are a big selling point at the moment; I've found DLSS pretty disappointing, but if I'm paying the same price I might as well get them).
No, I don't want to "hear" anything.
I also want to see Radeon VII and GeForce RTX 2080 compared in Shadow of the Tomb Raider (DirectX 12).
Other websites show Radeon VII and GeForce RTX 2080 to be much closer in Shadow of the Tomb Raider, so I am surprised to see this huge performance gap.
"What I am saying is that Overclocked3D tested Shadow of the Tomb Raider in DirectX 12, and their result shows a much smaller difference between the Radeon VII and the GeForce RTX 2080. I am wondering if this is because they were testing the game in DirectX 12. The review from GameSpot also doesn't show this huge gap."
We do not comment on other websites' results. I can tell you that we have repeatable results, many times over, using real gameplay.
"Quick question: do you think it would be worthwhile to sell my 2x Vega 64s and buy 1x Vega VII (for use on 4K TV)?"
Yes.
"I really wonder if they could shave off one of the HBM stacks, leave the card with 12 GB and 3/4 of the bandwidth, and charge $599-$649 for the card."
I did talk to AMD about this at CES and, to be exact, I did not get an answer, but the reaction I got did not seem to be a positive one.
I don't think it's a bad card, but it's hard to recommend over an RTX 2080. It's slower, hotter, and noisier, it draws more power, and it costs the same while being hard to find.
I'm really impressed at what AMD has achieved bringing Vega to these heights, but it's time for something else. Unfortunately, Navi may not be up to the task either, but there's hope.
I mean, unless you really hate Nvidia, the RTX 2080 is a much better choice.
"What clocks are you seeing on the core?"
Have seen some sustained clocks over 2K, but that was far and away from any kind of real testing.
I really wonder if they could shave off one of the HBM stacks, leave the card with 12 GB and 3/4 of the bandwidth, and charge $599-$649 for the card.
To explore this idea a bit, admittedly somewhat off-topic, a real improvement would be to shave off two HBM stacks so that the size of the interposer could also be shrunk. That would drop the cost both in terms of the bill of materials and, very likely, in terms of improved yields.
Now the problem with that is that you'd also be cutting memory back to 8 GB, and more importantly you'd be cutting bandwidth back to Vega 64 levels as well, basically half of what the Radeon VII has to work with. That would likely erase much of the Radeon VII's performance benefit.
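To put rough numbers on that, assuming the stock configuration (each HBM2 stack on the Radeon VII is 4 GB on a 1024-bit link, and Vega 64's 2048-bit bus does 484 GB/s):
4 stacks: 16 GB, 4096-bit, ~1 TB/s (Radeon VII as shipped)
3 stacks: 12 GB, 3072-bit, ~750 GB/s (the hypothetical cut-down card)
2 stacks: 8 GB, 2048-bit, ~500 GB/s (essentially Vega 64 territory)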
"That's why I said to leave it at 12 GB of HBM. It might keep the bandwidth high enough not to neuter the gains."
Let's get back on topic. Enough threads about hypotheticals already.
"I did. It said: 'In this game the differences between DX12 and DX11 on the AMD Radeon VII are much closer than in any other game. However, you can still see that technically DX11 is faster. We also tested at 4K to verify, and the 4K results also show DX11 being faster than DX12. Therefore, we will test in DX11 since that shows us the highest performance.'"
Testing using the fastest API for the system is the most honest way to evaluate. Anyway, on my 4.1 GHz 2700, DX12 is usually the faster API and at times way faster. That is not universal; very fast Intel processors seem to do better with DX11. I think [H] has already proven that more cores does not mean better performance over a quad core in general if clocked high, as speed is still king for games.