AMD Radeon VII Benchmarks Leak Out

Actually, from what I've read in the past, the main thing that makes it expensive isn't really the die itself- it's the whole assembly. The parts themselves come at market cost, but getting them all put together successfully is what has kept HBM from being the low-cost solution it was widely expected to be. That's why Vega's memory inefficiency is an issue. If they could cut two stacks off, they'd have a significantly smaller overall assembly, and cost per unit would drop dramatically as a result.



Don't see it. Games still aren't loading up the assets, and resolution increases don't hit memory nearly as hard- remember, pixels are small, and even having over eight million of them for 4K works out to only a few tens of megabytes of framebuffer memory. What actually eats up VRAM is the per-pixel processing- that's why ray tracing increases actual needed VRAM- but that's also not a problem here, since AMD neglected to implement that functionality ;)
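For a rough sense of scale, here's a minimal back-of-envelope sketch of raw render-target sizes. The numbers are purely my own illustrative assumptions (a single 32-bit color target; depth buffers, HDR formats, MSAA, and G-buffers multiply the figure):

```python
# Back-of-envelope render-target math: pixels x bytes per pixel.
# Assumes a single 32-bit (4-byte) color target. Depth buffers, HDR
# targets, MSAA, and G-buffers multiply this, but it stays small next
# to the multi-gigabyte texture/asset pools games keep in VRAM.
def target_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 ** 2)

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    print(f"{name}: {w * h / 1e6:.1f} Mpixels, ~{target_mb(w, h):.0f} MB")
```

Under these assumptions, 4K comes out to ~8.3 Mpixels and roughly 32 MB.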

C'mon man don't be so naive. Apparently you haven't seen this:

https://www.techspot.com/review/1784-resident-evil-2-benchmarks/

It's just one of a number of examples showing that 4K games released right now are already hitting the 8GB ceiling. In another year the 2080 will be falling behind the 1080 Ti due to VRAM limits at 4K. This is the R9 Fury all over again.
 
For reference, my V64 pulled 8482 in Time Spy @ 1725/1150 vs 8700 for the stock VII. Looking forward to the VII being waterblocked and overclocked. Let's see what 7nm can do for Vega!
 
Let's not forget Radeon VII will come with 16GB of VRAM. This is where the RTX 2080 will struggle with only 8GB. I see 7-7.5GB of VRAM usage on my Vega 64 now in 4K. Upcoming titles will need more VRAM for 4K. The RTX 2080 is looking like Nvidia's Fury.
AMD needs to bump up VSR. If this card had the exact same performance but DSR in place of VSR, it would be a bit more exciting to image-quality purists.

The 1080 Ti is a little slower than the RTX 2080 overall, but it is faster than the RTX 2080 in Fire Strike, FYI.

https://www.guru3d.com/articles-pages/geforce-rtx-2080-founders-review,32.html


Like piscian18 said, these benchmarks are worthless since it's only Fire Strike.
Worthless, no, but we need more independent benchmarks for context to get a better picture of where it sits in terms of performance- where it does well and where it does more poorly. Adjust purchasing decisions based on the context of usage, plain and simple. There will definitely be instances where the Radeon VII performs better and worse, and I wouldn't expect the Final Fantasy benchmark to be one of the better ones, given it's optimized for Nvidia.
 
The last conventional wisdom I heard was that this was a way to use up 'defective' leftover chips that couldn't be used in the $2k+ machine learning card, and that it was never really intended for gamers.
Also, the cost of that 16GB of HBM2 memory means that they're losing money on every card they sell @ $700.

It's still probably ideal for machine learning or CAD/engineering/scientific apps - PC gaming, probably not so much
If they are using defective dies on GPUs that would otherwise be tossed... they aren't losing money now, are they?
 
Ask that again in 5-10 years when the majority of games use real ray tracing. Nvidia's hybrid approach exists in one game with an abysmal implementation.
Try to look at the bright side...at least they are doing ray tracing. Sometimes great things have small beginnings....
 
C'mon man don't be so naive. Apparently you haven't seen this:

https://www.techspot.com/review/1784-resident-evil-2-benchmarks/

It's just one of a number of examples showing that 4K games released right now are already hitting the 8GB ceiling. In another year the 2080 will be falling behind the 1080 Ti due to VRAM limits at 4K. This is the R9 Fury all over again.
R9 Fury all over again? Hmmm... perhaps not. The vast majority of people don't game at 4K. I have yet to see my 2080's VRAM usage top 60% at UWQHD. Pretty sure I'll be OK for two years at the very least.
 
The last conventional wisdom I heard was that this was a way to use up 'defective' leftover chips that couldn't be used in the $2k+ machine learning card, and that it was never really intended for gamers.
Also, the cost of that 16GB of HBM2 memory means that they're losing money on every card they sell @ $700.

It's still probably ideal for machine learning or CAD/engineering/scientific apps - PC gaming, probably not so much


AMD knows how much the card costs to build; the rest of us don't. People throwing random numbers about in supposed "leaks from sources" doesn't really count for much. The HBM pricing numbers thrown around are probably also wrong, as it's obvious AMD would be buying it in large quantities, so the price would be lower.
 
Why would you use that as an example?

I see 8GB cards that are faster than 11GB cards. Is another 5GB supposed to make the difference?
You didn't read the article.

Techspot said:
...given the RTX 2080 Ti only saw an allocation peak of 8GBs when gaming at 4K

We are seeing ~8GB of VRAM usage @ 4K in some games right now. I'm not sure why anyone here would believe that number will go down as time moves on.


R9 Fury all over again? Hmmm... perhaps not. The vast majority of people don't game at 4K. I have yet to see my 2080's VRAM usage top 60% at UWQHD. Pretty sure I'll be OK for two years at the very least.

There are multiple games that hit or exceed 8GB of VRAM right now at 3840 x 2160.
 
Try to look at the bright side...at least they are doing ray tracing. Sometimes great things have small beginnings....

Oh, I agree. I even said the same thing when the cards were announced. However, I don't think anyone should be looking at ray tracing as a primary reason to buy the current generation of RTX cards. I also don't believe RTX itself is a value add to the current generation of cards; the support simply isn't there, and likely won't be until long after these cards are woefully out of date.
 
You didn't understand the results.
I understand the results. They're not relevant to this discussion. What is relevant is the comment which I quoted above that states that Techspot saw RE2 saturate 8GB of VRAM. You claimed earlier in the thread it isn't happening.

You can remain in denial as long as you want. It doesn't change the fact that Nvidia cheaped out on the 2080 by only including 8GB. That WILL hamper this card in future (and some current) games at 4K.
 
Thanks. Given that reference point, I find it very difficult to believe that they only eked a 1.5% improvement out of the 7nm Vega VII w/ HBM2 relative to a 14nm Vega 64 LC w/ HBM2. I'm not in a place where I can do a 1-to-1 comparison of stock specs, but at a glance it just doesn't add up.
It's actually a 7nm Vega 56, not the 64.
 
Well, it's not an advantage. It's not an emotional reaction; AMD is losing their shirt on the Radeon VII. Beyond that, it's again going to be hungrier, hotter, and louder, and it's not going to perform any better. Because it's just Vega, again; compute- instead of gaming-focused, again. They've doubled the memory to get the memory bandwidth up, which is apparently necessary; which means that their compute architecture is significantly less efficient than the competition on yet another front.

The 'hate' is that AMD refuses to produce a top-end gaming part, when we all know that they could do it with their existing intellectual property.
Are you sure about that? These are harvested Instinct GPUs that couldn't make the cut for the full part, and that part sells for thousands. It's probably more expensive to make because it's on 7nm than because it's on an interposer, with the HBM itself being cheaper than GDDR5.
 
Some people just like efficient, performance-focused products.

Seems like AMD fans have been complaining about us for over a decade... despite the fact that we were AMD fans first ;)
50 watts more than the 2080 doesn't really matter much and doesn't require a nuclear plant to run. If people are that concerned about 50 watts, then they'd all better be recycling every plastic bottle and going solar for their power. Otherwise they can stop bitching. lol
 
50 watts more than the 2080 doesn't really matter much and doesn't require a nuclear plant to run. If people are that concerned about 50 watts, then they'd all better be recycling every plastic bottle and going solar for their power. Otherwise they can stop bitching. lol

People would bitch over 1 watt if the 1 watt was in favour of the brand they're currently blowing. Pretty sad really but that's the mentality of fanboys. :confused:
 
50 watts more than the 2080 doesn't really matter much and doesn't require a nuclear plant to run.

That's enough to light a house with today's LED lights-

But more so, it's unnecessary heat and noise.
I understand the results. They're not relevant to this discussion. What is relevant is the comment which I quoted above that states that Techspot saw RE2 saturate 8GB of VRAM.

Without an associated performance decrease ;)
 
That's enough to light a house with today's LED lights-

But more so, it's unnecessary heat and noise.


Without an associated performance decrease ;)

A 40 m² shithole apartment with at least 3 partitions isn't a modern house, unless you like living in poorly lit conditions.
Lights operate far longer than most people game, and that also assumes you are using the maximum power of your computer all the time. In reality computers are mostly idle, and idle power is usually ~50W without a screen. Your damn screen uses more power than that, ffs.
Getting caught up over 50W for an hour or two of gaming is clutching at straws and you know it. And the muh-desert-AC fringe scenario, lol- better not invite any friends over, your AC bill will be horrific, much worse than 50W!
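For what it's worth, a quick back-of-envelope sketch of what an extra 50W under load actually costs- the 2 hours/day of gaming and the $0.12/kWh rate are purely my own illustrative assumptions:

```python
# Yearly energy and cost of an extra 50 W while gaming (illustrative assumptions).
extra_watts   = 50
hours_per_day = 2       # assumed gaming time per day
rate_per_kwh  = 0.12    # assumed electricity price, $/kWh

kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"~{kwh_per_year:.1f} kWh/year, about ${kwh_per_year * rate_per_kwh:.2f}/year")
# -> ~36.5 kWh/year, roughly $4-5/year under these assumptions.
```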
 
And Nvidia and AMD both having assembled interposers for years, and in multiple countries, means the price is far lower than it was for e.g. the Fury. Costs are not as high as you think. Either way they are reject cards, which really don't cost shit. And that 8GB already is a limitation in at least two games, and we are in the first month of '19. You're going to get hammered on that one, hope you bought lube. History is on the side of increasing VRAM use...
 
That's enough to light a house with today's LED lights-

But more so, it's unnecessary heat and noise.


Without an associated performance decrease ;)

What if the card runs cool and the triple fans are doing a great job? What then? If it had a blower-style cooler I'd see your point. Let's see the reviews. Until then you can't really complain about the heat and noise. I think AMD did that for a reason.
 
Getting caught up over 50W for an hour or two of gaming is clutching at straws and you know it.

Not really- 50w for a card is quite a bit of extra heat, and that has to be dealt with in terms of noise. Power draw may or may not be an issue in a particular locale, but heat and noise are. That heat and noise is next to you.

And Nvidia and AMD both having assembled interposers for years, and in multiple countries, means the price is far lower

This has been the hope all along- but until then, using HBM for lower-end parts is going to cost AMD. The continued development of GDDR is a testament to the difficulty HBM still faces.
 
Not really- 50w for a card is quite a bit of extra heat, and that has to be dealt with in terms of noise. Power draw may or may not be an issue in a particular locale, but heat and noise are. That heat and noise is next to you.



This has been the hope all along- but until then, using HBM for lower-end parts is going to cost AMD. The continued development of GDDR is a testament to the difficulty HBM still faces.

As long as AMD is using decent fans and has a properly set up fan curve, noise is unlikely to be an issue. As for heat? Eh. 50W isn't a huge deal over the course of a few hours of gaming or whatever. I've had incredibly hot components (two 9800 GTXs crammed next to each other in SLI, an overclocked 580, heavily OC'd C2Ds and older i5s) while living in a tiny efficiency apartment and it wasn't a big deal. People certainly make a mountain out of a molehill when it comes to fairly small power increases.
 
It's another 50w of heat dumped into the room next to you.

You're trying to dismiss that- no one wants that next to them just because. And that's the argument for it: another 50w, just because.
 
It's another 50w of heat dumped into the room next to you.

You're trying to dismiss that- no one wants that next to them just because. And that's the argument for it: another 50w, just because.

Let's put it this way: if I were in the market for a $700 card, 50W of power would not even register as a reason to pick one card over another. It's such a small amount of power, and heat, that it makes no real difference one way or the other. I've had hotter cards in systems next to me back in the days before cases were properly designed for airflow. That tiny amount of heat is definitely not going to matter in the winter (especially when it's freaking 20 below zero outside), and in the summer it's insignificant. For me, 50W sits solidly at a "who cares?" level of difference.
 
Then the card is efficiently dumping the heat in the room ;).

It's 50 watts, man. LoL. You think it's dumping 10°C into the room? C'mon now. It doesn't matter when you're already at 225+ watts. If it gets cooled properly it's not dumping the heat into your room like a blower. Not even the same thing to begin with.
 
Sure it would, if that's the only difference.

Well, sure, if it was basically the only appreciable difference. However, there are usually much more important differences between GPUs. At 75-100+W, I'd be writing it off pretty quickly unless there was a good performance difference.
 
Well, sure, if it was basically the only appreciable difference. However, there are usually much more important differences between GPUs. At 75-100+W, I'd be writing it off pretty quickly unless there was a good performance difference.

Sure, just overclock the AMD card- if it's +50W versus its closest competitor at stock, then it's approaching +100W over its closest competitor with both overclocked :D
 
To put it into perspective, you probably have about 50W of fans in your case (even if you only have 5). Your LEDs probably use 10-20W... (0.1A × 5V = 0.5W per LED, × 20 LEDs = 10W)
Edit: well, maybe overestimating the LED power. 100mA is a lot for a single, small LED, and they may not be running at 5V.
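A minimal sketch of that same back-of-envelope math- the per-fan wattage and the 100mA @ 5V per LED are the rough (and generous) assumptions from the post above, not measurements:

```python
# Rough case-fan and LED power, using the assumed (generous) figures above.
fans, watts_per_fan = 5, 10                     # ~10 W per fan is a high-side guess
leds, amps_per_led, led_volts = 20, 0.1, 5.0    # 100 mA at 5 V per LED, also generous

fan_watts = fans * watts_per_fan
led_watts = leds * amps_per_led * led_volts
print(f"Fans: ~{fan_watts} W, LEDs: ~{led_watts:.0f} W")
# -> Fans: ~50 W, LEDs: ~10 W under these assumptions.
```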
 
Lol, to put it into further perspective, this is [H]- we buy 1000W+ PSUs just because. In fact, having to buy a 650W PSU for our ITX cases hurts our souls, unless it weighs like 10 lbs, then that is acceptable.
 
And Nvidia and AMD both having assembled interposers for years, and in multiple countries, means the price is far lower than it was for e.g. the Fury. Costs are not as high as you think. Either way they are reject cards, which really don't cost shit. And that 8GB already is a limitation in at least two games, and we are in the first month of '19. You're going to get hammered on that one, hope you bought lube. History is on the side of increasing VRAM use...
8GB here. No lube required ty. Would you like a chill pill?
 
Lol, to put it into further perspective, this is [H]- we buy 1000W+ PSUs just because. In fact, having to buy a 650W PSU for our ITX cases hurts our souls, unless it weighs like 10 lbs, then that is acceptable.
This. People will find anything to complain about to justify their purchases and favoritism, however minuscule it is.
 
Don't worry folks, I have it on good authority that extra power consumption only matters in certain situations. (In reference to Intel using an additional 79W on the 9900 vs. spec.)

"Oh no, Intel is letting its CPUs run as fast as possible! Someone stop them!"

This behavior is desirable. Particularly if the cooling solution and power delivery can handle it, which most can; hell, what if it's running in a cooler environment?

The key here is stability, and that has not proven to be an issue. You can quibble about the marketing, but the performance stands.

Edit: typo
 