IdiotInCharge
You mean like people spouting off 'but mAh rtx' and you saying you can find used 1080 Tis for $500?
No, like people who like competitive graphics cards.
Actually, the main thing I've read in the past that makes it expensive isn't really the die itself, but the whole assembly. The parts themselves come at market cost; getting them all put together successfully is what has kept HBM from being the low-cost solution it was widely expected to be. That's why Vega's memory inefficiency is an issue: if they could cut two stacks off, they'd have a significantly smaller overall assembly, and cost per unit would drop dramatically as a result.
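(For a rough sense of the trade-off being argued here, a back-of-the-envelope sketch; the bus widths and data rates below are the commonly quoted figures for these cards, treated as assumptions rather than verified specs:)

```python
# Peak HBM2 bandwidth ~= stacks * bits-per-stack * data-rate (Gbps) / 8.
# Figures are the commonly quoted specs, used here as assumptions.
def hbm2_bandwidth_gbs(stacks, bits_per_stack=1024, data_rate_gbps=2.0):
    """Peak bandwidth in GB/s for a given number of HBM2 stacks."""
    return stacks * bits_per_stack * data_rate_gbps / 8

print(hbm2_bandwidth_gbs(4))   # Radeon VII-style: 4 stacks @ 2.0 Gbps -> ~1024 GB/s
print(hbm2_bandwidth_gbs(2))   # cut two stacks off -> ~512 GB/s, half the bandwidth
print(hbm2_bandwidth_gbs(2, data_rate_gbps=1.89))  # Vega 64-style 2 stacks -> ~484 GB/s
```

So a two-stack assembly would indeed be smaller and cheaper, but it would also give up roughly half the bandwidth that the compute-heavy design apparently needs.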
Don't see it. Games still aren't loading up the assets, and resolution increases don't hit memory nearly as hard as people assume. Remember, pixels are small: even the roughly eight million of them at 4K only amount to a few dozen megabytes of framebuffer. What actually eats up a bit of VRAM is the per-pixel processing; that's why ray tracing increases the VRAM actually needed. But that's also not a problem here, since AMD neglected to implement that functionality!
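(Quick sanity check on the pixel math; 4 bytes per pixel and the render-target count are assumptions, but the orders of magnitude hold:)

```python
# Raw framebuffer cost at 4K: ~8.3 million pixels, not billions.
width, height = 3840, 2160
pixels = width * height                      # 8,294,400 pixels
mb_per_buffer = pixels * 4 / (1024 ** 2)     # 32-bit color -> ~32 MB per full-res buffer
print(pixels, round(mb_per_buffer, 1))
# Even a dozen full-resolution render targets (G-buffer, HDR, depth, etc.) stays
# well under 1 GB; it's textures and other working data that fill an 8 GB card.
```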
C'mon man don't be so naive. Apparently you haven't seen this:
AMD needs to bump up VSR. If this card had the same exact performance but DSR in place of VSR, it would be a bit more exciting to image-quality purists.

Let's not forget the Radeon VII will come with 16GB of VRAM. This is where the RTX 2080 will struggle with only 8GB. I see 7-7.5GB of VRAM usage on my Vega 64 now in 4K. Upcoming titles will need more VRAM for 4K. The RTX 2080 is looking like Nvidia's Fury.
Worthless, no, but for context we need more independent benchmarks to get a better picture of where it sits in terms of performance, where it does well and where it does poorly. Adjust purchasing decisions based on the context of usage, plain and simple. There will definitely be instances where the RV7 performs better and worse, and I wouldn't expect the Final Fantasy benchmark to be one of the better ones given it's optimized for Nvidia.

The 1080 Ti is a little slower than the RTX 2080 overall, but it is faster than the RTX 2080 in Fire Strike, FYI.
https://www.guru3d.com/articles-pages/geforce-rtx-2080-founders-review,32.html
Like piscian18 said, these benchmarks are worthless since it's only Fire Strike.
If they are using defective dies on GPUs that would otherwise be tossed... they aren't losing money now, are they?

The last conventional wisdom I heard was that this was a way to use up 'defective' leftover chips that couldn't be used in the $2k+ machine learning card and that was never really intended for gamers.
Also the cost of that 16G HBM2 memory means that they're losing money on every card they sell @ $700.
It's still probably ideal for machine learning or CAD/Engineering/Scientific apps - pc gaming, probably not so much
Try to look at the bright side... at least they are doing ray tracing. Sometimes great things have small beginnings...

Ask that again in 5-10 years when the majority of games use real ray tracing. Nvidia's hybrid approach exists in one game with an abysmal implementation.
R9 Fury all over again? Hmmm... perhaps not. The vast majority of people don't game at 4K. I have yet to see my 2080's VRAM usage top 60% at UWQHD. Pretty sure I'll be OK for two years at the very least.

C'mon man, don't be so naive. Apparently you haven't seen this:
https://www.techspot.com/review/1784-resident-evil-2-benchmarks/
It's just one of a number of examples that show 4K games released right now are hitting the 8GB ceiling. In another year the 2080 will be falling behind the 1080 Ti due to VRAM limits at 4K. This is the R9 Fury all over again.
The last conventional wisdom I heard was that this was a way to use up 'defective' leftover chips that couldn't be used in the $2k+ machine learning card and that was never really intended for gamers.
Also the cost of that 16G HBM2 memory means that they're losing money on every card they sell @ $700.
It's still probably ideal for machine learning or CAD/Engineering/Scientific apps - pc gaming, probably not so much
I see- that would put you well out of the mainstream, so I'll let you be on your way!
Even if AMD did this, people would still find an excuse to continue to buy nVidia.
You didn't read the article.

Why would you use that as an example?
I see 8GB cards that are faster than 11GB cards. Is another 5GB supposed to make the difference?
Techspot said:...given the RTX 2080 Ti only saw an allocation peak of 8GBs when gaming at 4K
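(Worth flagging the allocation-vs-usage distinction driving this argument: most tools, and most review numbers, report what the driver has allocated, not what the game strictly needs. A minimal polling sketch, assuming an NVIDIA card with `nvidia-smi` on the PATH:)

```python
# Poll VRAM "used" (i.e. allocated) via nvidia-smi; assumes an NVIDIA GPU and that
# nvidia-smi is on the PATH. Allocation can sit well above what a game actually needs.
import subprocess
import time

def vram_used_mib():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True)
    used, total = (int(x) for x in out.strip().splitlines()[0].split(", "))
    return used, total

if __name__ == "__main__":
    while True:
        used, total = vram_used_mib()
        print(f"{used} / {total} MiB allocated")
        time.sleep(5)
```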
I understand the results. They're not relevant to this discussion. What is relevant is the comment which I quoted above that states that Techspot saw RE2 saturate 8GB of VRAM. You claimed earlier in the thread it isn't happening.

You didn't understand the results.
It's actually a 7nm Vega 56, not the 64.

Thanks. Given that reference point, I find it very difficult to believe that they only eked out a 1.5% improvement from a 7nm Vega VII w/ HBM2 relative to a 14nm Vega 64 LC w/ HBM2. I'm not in a place where I can do a 1-to-1 comparison of stock specs, but at a glance it just doesn't add up.
Are you sure about that? These are harvested Instinct GPUs that couldn't make the full thing, and that thing sells for thousands. It's probably expensive to make more because it's on 7nm than because it's on an interposer, the HBM itself being cheaper than GDDR5.

Well, it's not an advantage. It's not an emotional reaction; AMD is losing their shirt on the Radeon VII. Beyond that, it's again going to be hungrier, hotter, and louder, and it's not going to perform any better. Because it's just Vega, again; compute- instead of gaming-focused, again. They've doubled the memory to get the memory bandwidth up, which is apparently necessary; which means that their compute architecture is significantly less efficient than the competition on yet another front.
The 'hate' is that AMD refuses to produce a top-end gaming part, when we all know that they could do it with their existing intellectual property.
More like V60.

It's actually a 7nm Vega 56, not the 64.
That 5324 score, YIKES!
Not a $700 score, that's for sure.
50 watts more than a 2080 doesn't really matter much and doesn't require a nuclear plant to run. If people are that concerned about 50 watts, then they had all better be recycling every plastic bottle and going solar for their power. Otherwise they can stop bitching. lol

Some people just like efficient, performance-focused products.
Seems like AMD fans have been complaining about us for over a decade... despite the fact that we were AMD fans first!
That's enough to light a house with today's LED lights-
But more so, it's unnecessary heat and noise.
Without an associated performance decrease!
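(The "light a house" line is roughly right with modern bulbs; assuming ~9 W per 800-lumen LED:)

```python
# How many LED bulbs 50 W of extra draw equates to (9 W per ~800-lumen bulb assumed).
extra_watts = 50
watts_per_led_bulb = 9
bulbs = extra_watts / watts_per_led_bulb
print(round(bulbs, 1))  # ~5.6 bulbs' worth of lighting; on a GPU, all 50 W ends up as heat in the room
```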
Getting caught up over 50W for an hour or two of gaming is clutching at straws and you know it.
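(For the cost side of that, a quick estimate; the ~$0.13/kWh electricity price is an assumption, adjust for your locale:)

```python
# What 50 W of extra draw costs over a gaming session and a year (price assumed).
extra_watts = 50
hours_per_day = 2
price_per_kwh = 0.13                                    # assumed residential rate, USD

kwh_per_session = extra_watts * hours_per_day / 1000    # 0.1 kWh per two-hour session
print(round(kwh_per_session * price_per_kwh, 3))        # ~$0.01 per evening
print(round(kwh_per_session * price_per_kwh * 365, 2))  # ~$4.75 per year at 2 h/day
```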
And with Nvidia and AMD both assembling interposers for years and in multiple countries, the price is far lower
What if the card runs cool and the triple fans are doing a great job? What then?
Not really- 50w for a card is quite a bit of extra heat, and that has to be dealt with in terms of noise. Power draw may or may not be an issue in a particular locale, but heat and noise are. That heat and noise is next to you.
This has been the hope all along- but until then, using HBM for lower-end parts is going to cost AMD. The continued development of GDDR is a testament to the difficulty HBM still faces.
It's another 50w of heat dumped into the room next to you.
You're trying to dismiss that- no one wants that next to them just because. And that's the argument for it: another 50w, just because.
50w of power would not even register as a reason to pick one card over another.
Then the card is efficiently dumping the heat in the room.
Sure it would, if that's the only difference.
It's 50 watts, man. LoL. You think it's dumping 10°C into the room? C'mon now. It doesn't matter when you are at 225+ watts already.
Well, sure, if it was basically the only appreciable difference. However, there are usually much more important differences between GPUs. At 75-100+W, I'd be writing it off pretty quickly unless there was a good performance difference.
8GB here. No lube required, ty. Would you like a chill pill?

And with Nvidia and AMD both assembling interposers for years and in multiple countries, the price is far lower than for e.g. Fury. Costs are not as high as you think. Either way they are reject cards which really don't cost shit. And that 8GB is already a limitation in at least two games, and we are in the first month of '19. You're going to get hammered on that one, hope you bought lube. History is on the side of increasing VRAM use...
This. People will find anything to complain about to justify their purchases and favoritism, however minuscule it is.

Lol, to put it into further perspective, this is [H]; we buy 1000W+ PSUs just because. In fact, having to buy a 650W PSU for our ITX cases hurts our soul, unless it weighs like 10 lbs, then that is acceptable.
"Oh no, Intel is letting its CPUs run as fast as possible! Someone stop them!"
This behavior is desirable. Particularly if the cooling solution and power delivery can handle it, which most can; hell, what if it's running in a cooler environment?
The key here is stability, and that has not presented itself to be an issue. You can quibble about the marketing but the performance stands.