AMD Chief Confirms New High-End GPU: "We Will Be Competitive in High-End Graphics"

'Yawn' right back at you, with your insistence on criticizing a technology that's been proven and absolutely is the future of real-time graphics, just because you can't wait a month.

Almost two months now. I don't believe in magic improvements; I've been around much too long for that. The proof is always in the pudding, and this one is so far empty, much like Star Citizen.
 
The most important takeaway is that pricing is not one-sided; if you believe the value proposition is poor, either look for alternatives or don't buy it. If enough people actually did that, Nvidia would respond by rebalancing supply/demand pricing to remain profitable. But it does mean that in the short term, some of us would actually have to "do without". =)

Want to say that I agree with your points almost completely, and that generally speaking, this is what is happening. It's also why I don't disparage Nvidia, as they're properly taking advantage of their market position, generally capitalizing on early adopters that can't wait to get the next newest, fastest thing (many here at the [H]!, whom I also don't disparage ;) ), because Nvidia is still innovating and, as mentioned earlier in the thread, doing so while competing largely with themselves.

If you don't want to pay their prices for the performance they're offering, you can wait. I'd even recommend waiting, for most; in fact, I have.
 
Almost two months now. I don't believe in magic improvements; I've been around much too long for that. The proof is always in the pudding, and this one is so far empty, much like Star Citizen.

Two months waiting on games that were shown at a convention and still aren't slated for release?

Do you bitch that you can't yet buy a 2021 model-year car in 2018, too?
 
Let's hope they bring the competition back, just to bring Nvidia back down from the stratosphere in terms of pricing. They're my go-to because of the bang for your buck they provide. My Vega plays everything I want at 75 fps on my UWQHD 75Hz FreeSync monitor; I don't need anything more currently. But I can't believe folks are spending the money they are on the 2080 and 2080 Ti, which is $1699 here in Canada.

Competition is key, so let's hope for a Ryzen moment in GPUs to bring things back level. They did say they're investing much of that Ryzen money into RTG, so let's hope they can come up with something formidable. I do love the chiplet idea, as long as they can keep latency low. 7nm is gonna be great for a performance uplift, but it's only a band-aid until they can bring the next big thing for gaming. Vega is a jack of all trades; they need a gaming-focused killer chip, and hopefully Navi can give us some of that.
 
Really, the main reason I'd be rooting for AMD (and to be clear, I'm never rooting against AMD) is that they'll likely be tapped again for both the MS and Sony console refreshes, so I hope they get their ray-tracing tech up to par fast and into said refreshes.
 
Uh-huh. So exactly what do they mean by competitive? Do they mean it will actually perform like Nvidia's high-end cards or will they try to be price competitive? Or maybe they'll pull the same "you can't tell the difference" bullshit as they did with the Vega cards. I would love for AMD to come out with something that is actually on par with Nvidia's high-end cards, but there have been too many years of disappointing results for me to believe it without something beyond empty promises.
 
How quickly will RTX be put on hold? It does not seem like even the 2080 Ti is able to drive it at 1080p, not to mention 1440p, where most people are moving. I'm wondering if Nvidia will sponsor every game in development to have it implemented.
It would seem like a lot of $ to accomplish that. What titles in development are supporting it? The new Metro will most likely run like shit with RTX on, and BFV is a joke.
 
:ROFLMAO:

Because it's not already working in all of the engines?

And five years? At worst, it will be next year, and that will be due to typical developer release slippage; ray tracing is the easy part. And I love the half-assed PhysX comparisons, no better way to say "I have no idea what's going on", lol!

It's not half-assed, it's bang on. "But what about CUDA and PhysX" was the question on buyers' lips during the 200-series vs. HD 4000-series days. It never amounted to a hill of beans in the end. What real difference is there this time? In fact, how many times in the past have we expected things to turn out one way, only to have them turn out another in the end? Go back over the years; this isn't the first time a "wonder feature X" has come along to work everybody up into a frenzy.

From a marketing standpoint, sure- but it's incomparable from a technology standpoint. PhysX made sense from the perspective of limited threading on CPUs the same way that hardware audio made sense- now that's all software too.

Ray tracing is an entirely different level of processing.

But isn't that how every other next big thing is framed? They're always new and very different, and so far it's all just been a bump in the road in the end. TruForm was a classic; it took many years before tessellation took off. CUDA and PhysX, years later: again, the capabilities got rolled up into the DX API so that generic shader cores can process them. Ray tracing is getting incorporated into the API, but it's still going to be a big old wait; until then it's just a toy feature with a handful of more substantial implementations.

And even then, the chip area used in 12nm Turing is pretty freaking enormous, and what is it expected to achieve at present, 1080p partially ray-traced gaming? And that's one of the meatier Turings. So we need a hefty die shrink to get that moved up to, say, 4 megapixels and 60 fps. Now we're into the realm of 7nm cards and looking at that future hardware to do ray tracing decently in today's games.
Coding for ray tracing is one thing, but having hardware to do it justice is quite another.
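To put a rough number on that claim, here's a back-of-the-envelope sketch. The assumptions are mine: ray-tracing cost scales roughly linearly with pixels times frames, today's partial ray tracing runs around 45 fps at 1080p, and "4 megapixels" maps to roughly 1440p-class resolution. It ignores BVH traversal, shading, and denoising costs entirely.

```python
# Hypothetical scaling estimate: how much more ray-tracing throughput
# would be needed to move from ~1080p/45fps partial RT to ~4MP/60fps.
# Linear pixels-x-frames scaling is an assumption, not a measurement.

base_pixels = 1920 * 1080      # ~2.07 MP at 1080p
base_fps = 45                  # assumed frame rate for today's partial RT
target_pixels = 2560 * 1440    # ~3.7 MP, close to the "4 megapixels" above
target_fps = 60

factor = (target_pixels * target_fps) / (base_pixels * base_fps)
print(f"Needed throughput multiplier: ~{factor:.1f}x")  # ~2.4x
```

A ~2.4x jump is at least consistent with the post's point: that's more than raw clocks usually deliver, hence the appeal to a node shrink plus future hardware.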
 
That statement is too vague... competitive at what price point? If they put out a $900 card that goes toe to toe with the 2080 Ti, then great; if they put out a $450 card that struggles to keep up with the 2070, that is much less so.
 
So you're OK with the 2080 Ti costing $500 more than the card it replaces? The 2080 costing $300 more and the 2070 costing $200 more than their predecessors? By that logic, when the 3080 Ti comes out, it should cost $2000, amirite.

A Corvette costs about 3x what a Camaro does, goes just a "little bit" faster, and actually has fewer seats. Neither car is "really" a race car, track car, nor drag car stock. Apparently, people ARE actually willing to spend money for better performance. Maybe you should join the real world and start seeing how things actually work out here. Nvidia is more than justified in charging what they're charging for their 20x0-series cards.

Maybe you should complain some about why AMD's/Nvidia's professional cards are basically 5x-10x more expensive than the consumer cards, for what effectively amounts to a driver switch setting.
 
Uh-huh. So exactly what do they mean by competitive? Do they mean it will actually perform like Nvidia's high-end cards or will they try to be price competitive? Or maybe they'll pull the same "you can't tell the difference" bullshit as they did with the Vega cards. I would love for AMD to come out with something that is actually on par with Nvidia's high-end cards, but there have been too many years of disappointing results for me to believe it without something beyond empty promises.
I also wonder what they classify as high-end.
 
It's not half-assed, it's bang on. "But what about CUDA and PhysX" was the question on buyers' lips during the 200-series vs. HD 4000-series days.

PhysX became unnecessary; CUDA is the HPC standard; and neither is similar to ray tracing, so "half-assed" was being nice. They're both shitty comparisons. Ray tracing is the new standard, and unlike both PhysX and CUDA, AMD is embracing it too, largely because it's already in the same APIs they use.

But isn't that how every other next big thing is framed?

Only if you don't have a fucking clue what ray tracing is.
 
All of us know what it is. Its best-case scenario will be 45 fps at 1080p. So what the fuck are you trying to argue here?
 
I really hope AMD is able to deliver. We need some competition at the 4K capable performance level, and I'd love to be able to get FreeSync monitors as a future upgrade.

A Corvette costs about 3x what a Camaro does, goes just a "little bit" faster, and actually has fewer seats. Neither car is "really" a race car, track car, nor drag car stock. Apparently, people ARE actually willing to spend money for better performance. Maybe you should join the real world and start seeing how things actually work out here. Nvidia is more than justified in charging what they're charging for their 20x0-series cards.

Maybe you should complain some about why AMD's/Nvidia's professional cards are basically 5x-10x more expensive than the consumer cards, for what effectively amounts to a driver switch setting.
To stick with your car analogy, when a Camaro gets old and people are looking for a new one, they just get a new Camaro at about the same price point as what they paid for the previous one. They don't normally trade in the Camaro to get a Corvette. What you're comparing by going from a Camaro to a Corvette is going from a 1080 to a Titan V - in which case you're absolutely spending 3x as much for not 3x worth of performance increase.

But we're not talking about going from a 1080 to a Titan V, we're talking about going from a 1080 Ti to a 2080 Ti - which is equivalent to going from a 2017 Camaro to a 2018 Camaro. Or if you want to look at the 1080 to 2080, that's a 2016 Camaro to a 2018 Camaro. I don't follow car pricing very closely, but I'm pretty sure there wasn't a 45% price increase in going from the 2016/2017 Camaro to the 2018 model.
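For what it's worth, the GPU side of that comparison is easy to sanity-check. A quick sketch, using launch prices as commonly cited; treat the exact figures as assumptions, since MSRP vs. Founders Edition pricing muddies the comparison:

```python
# Pascal -> Turing generational launch-price jumps, using announced MSRPs.
# (Founders Edition cards launched $100-200 higher; figures approximate.)
launch_msrp = {
    ("GTX 1080 Ti", "RTX 2080 Ti"): (699, 999),
    ("GTX 1080",    "RTX 2080"):    (599, 699),
    ("GTX 1070",    "RTX 2070"):    (379, 499),
}

for (old, new), (p_old, p_new) in launch_msrp.items():
    pct = (p_new - p_old) / p_old * 100
    print(f"{old} -> {new}: ${p_old} -> ${p_new} (+{pct:.0f}%)")
# 1080 Ti -> 2080 Ti comes out around +43%, roughly the "45%" figure
# above; the FE-to-FE comparison ($699 -> $1199) is the "+$500" upthread.
```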
 
As long as it's better than the current gen, I will buy it if it means getting this 1070 out of my rig.

Don't care if Nvidia doubles their performance with a driver update on launch day of whatever AMD drops. I cannot in good conscience support a company that can't handle honest tech journalism calling them out on their bullshit.
 
That, "sir", is exactly why Ngreedia "wins": it is way too easy to fuck around and get the higher FPS numbers even if the end result is anything but a "smooth" and "good looking" experience. It's far, far too easy to screw with software to make the "numbers" appear larger than they actually are. Intel has been doing this for decades, and Nv is on the same bandwagon as well (both open pocketbooks to 3rd-party devs to "tweak").

Objects in mirror are not always as they appear.

As for some of the other folks saying AMD has never had a faster GPU than Nv, or for that matter faster for less:

That depends on your thinking.
The HD 4000, HD 5000, HD 6000, and HD 7000 series, and "some" of the R5/R7/R9 cards (some of course being pure rebrands), absolutely were competitive with, if not trashing, Nv if one takes every side into account:
features, performance, build quality, sound profile, cost in power or heat or temperatures, game performance quality (eye-candy quality, not "BS" but as "real" as possible), density/mm², performance per watt, and so forth.

Yes, Intel has been the "better" choice for CPUs if one chases overall performance for the price, power use, etc., and build quality, at least until the 3000/4000 series (mostly), but GPU-wise Nv is not as clear-cut a winner in my opinion, mainly because since the 500 series they have been cutting things away just to get raw FPS up more than anything else.

-------------------------------------------

I guess some folks really do like their trucks (so to speak) to be as loud and fast-sounding as possible, instead of having "sleepers" that are all grunt, without having to resort to BS tricks to make you think they are faster than they actually are.

Anyways... yes, the "most modern ones" from Nv, being the 900/1000 and now 2000 series, are "fast" and do not use all that much power to get there, BUT there are other shady BS things that Nv has done all along, more so over the last few years than in prior generations, to "achieve" this.

Mehh, I know where my $ goes, and it's certainly not into Nv's pocket, that is for damn sure. The difference between AMD and Nv (in my books) is that the former admits to mistakes either ahead of time OR when "discovered" (and tries to avoid repeating them in a future product release), and the latter never admits to any wrongdoing even when given the proof of it.

Folks keep throwing $$$$$$ at them (Nv) like they're a shriveled-up hooker on a pole waving saggy tits around for bus fare, who then demands more because she's the best you can get (while blinding you with acid).

Feel free to die on that cross; however, people will buy what they think is worth their money, and AMD has done a poor job of making GPUs people want to buy.

Nobody cries when AMD sells Threadripper CPUs for $1000, because they perform great and are a luxury item. You know who "needs" 16-core CPUs? Basically nobody. Then people lose their shit when others spend that much on Nvidia, because holy fuck, corporate justice!

Don’t be mad other people spend money, be mad AMD isn’t giving them anything to buy.

And if you want to talk BS, never forget this gem where AMD previously declared the upcoming Vega cards were going to skip right over Pascal cards and beat Volta (which was the original code name of the post-Pascal Nvidia architecture):

[attached image: AMD's Vega vs. Volta marketing tease]


AND PEOPLE BOUGHT INTO THIS HYPE! The only thing dumber than an Nvidia fan is an AMD fan.
 
Both Nvidia and Intel need to overcharge a lot when they have a huge lead. It is the moral thing to do. The higher their profit margin, the more attractive the market is to competition and the easier it is for AMD or others to raise cash.

If they kept within, say, 10% of their costs, they could slow progress right down and really milk the market. AMD would be squeezed out, and no other competition would show up, because it wouldn't be worth the attempt for such low margins. Keeping prices low to stop competition from entering or raising capital is a very anti-competitive defensive move.
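To make that concrete with a toy example (the cost and price figures below are made up for illustration, not actual Nvidia numbers):

```python
# Toy gross-margin comparison for the argument above: a thin margin
# leaves a challenger almost no room to undercut and still profit,
# while a fat margin makes the market attractive to enter.
def gross_margin(price: float, cost: float) -> float:
    """Gross margin as a fraction of the selling price."""
    return (price - cost) / price

cost = 400.0              # hypothetical cost to build and ship a card
thin_price = cost * 1.10  # incumbent prices ~10% over cost
fat_price = 1199.0        # incumbent prices like a flagship FE card

for label, price in (("thin", thin_price), ("fat", fat_price)):
    print(f"{label}: price ${price:.0f}, margin {gross_margin(price, cost):.0%}")
# thin: ~9% margin; fat: ~67% margin.
```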
 
You can add me to the "I'll believe it when I see it" list. Unfortunately, AMD have form for making this sort of pronouncement and not delivering. We need competition, but that requires action, not words.
 
If AMD doesn't have hardware acceleration for RT, I won't be considering them.

I know a lot of professionals won't be considering them either. It would be in their best interest to follow Nvidia's lead.
 
Feel free to die on that cross; however, people will buy what they think is worth their money, and AMD has done a poor job of making GPUs people want to buy.

Nobody cries when AMD sells Threadripper CPUs for $1000, because they perform great and are a luxury item. You know who "needs" 16-core CPUs? Basically nobody. Then people lose their shit when others spend that much on Nvidia, because holy fuck, corporate justice!

Don’t be mad other people spend money, be mad AMD isn’t giving them anything to buy.

And if you want to talk BS, never forget this gem where AMD previously declared the upcoming Vega cards were going to skip right over Pascal cards and beat Volta (which was the original code name of the post-Pascal Nvidia architecture):

[attached image: AMD's Vega vs. Volta marketing tease]

AND PEOPLE BOUGHT INTO THIS HYPE! The only thing dumber than an Nvidia fan is an AMD fan.

You do realize what the word hype means, right? You did not realize that RTX is hype: there are no games for ray tracing.
You call people names without any regard for people that have valid points for why they would not buy Nvidia.

To call people dumb reflects more on you than it does on others...
Name-calling, "fanboy" and all of that sort, does not win any arguments; facts do...
Sorry for that :)
 
It's too much to hope for that AMD can be competitive in both the GPU and CPU markets at the same time...
 
From a marketing standpoint, sure- but it's incomparable from a technology standpoint. PhysX made sense from the perspective of limited threading on CPUs the same way that hardware audio made sense- now that's all software too.

Ray tracing is an entirely different level of processing.
PhysX became unnecessary; CUDA is the HPC standard; and neither is similar to ray tracing, so "half-assed" was being nice. They're both shitty comparisons. Ray tracing is the new standard, and unlike both PhysX and CUDA, AMD is embracing it too, largely because it's already in the same APIs they use.



Only if you don't have a fucking clue what ray tracing is.

I should've said CUDA for the home user. In that sense it went nowhere, whereas it might have done; it was a genuine possibility at the time.

Ray tracing is a different technology to all the other Next Big Thing ones. They always are. The running-around-in-circles-riding-the-hype-train part stays the same. The only difference this time is that Microsoft is on board, getting it into DirectX early. Ray tracing will become a thing *eventually*, but it won't be more than a toy feature in Turing's lifetime.
 
The only difference this time is Microsoft, plus game developers and the professional market essentially begging for this.

MS is only a small part of the whole picture; the software and capability have existed for a rather long time now. Incorporating hardware acceleration as it's being released shouldn't take too long, as we have two of the three things ready; we just need the developers to do what they can as human beings and implement it.
 
As for Navi, I do think it is getting a bit overhyped, as most AMD GPUs tend to be. I do expect a massive overhaul of the internal design, as Vega wasn't as extensive a change from Polaris as marketing would lead you to believe: they have evolved from GCN roots.

Dug this up and posted it in a more obscure thread a while back, but you might find it interesting after your comment about how the architectures are similar. It goes much further than you realize and is well documented. These are memory-structure block diagrams; R700 is the pre-GCN HD 4870, etc., and very little has changed in basic functionality since then, let alone within GCN itself.

[attached images: memory-structure block diagrams for the R700/HD 4800 series (pre-GCN), R9 280/HD 7900 (GCN, Southern Islands), R9 200/290X (GCN3), and Vega]


Check out the family block structures as well; they're quite similar.

[attached images: unit block diagrams for the R700/HD 4800, R9 280/HD 7970, R9 200 (GCN3), and Vega]


Edit: Also, regarding chiplets/MCM: AMD will do it, and do it well. I expect them to first deploy this on Zen 2 with an active-interposer design; if not, maybe on the 5nm node. It would likely allow 16-core dies to be fabricated. It solves all the die-space/interlink issues in one go, and then, with luck, it will come to Navi or the next-generation uarch after that. They already have 500GB/sec IF links in Vega, which is enough to test this principle to a decent extent. Remember, Polaris led the market with SSG for some niche industries for a while, due to utilising Infinity Fabric. Why do you think they chose that name?
 
Well, nearly all of that was on AMD, which is not atypical of the "competition" that they bring. Crap drivers were a lot of it, but it was also loud and hot, and by the time they had that sorted, the competition had superseded them.
Thermi was slower and comparatively worse at launch than the 290X was, and yet the brand-marketing suckers lapped it up; same with Presshot vs. Athlon 64, where Intel paid retailers to carry and push their shit. This time around, though, AMD has growing mindshare thanks to Ryzen, so they may do better.
 
I hope so, we've never needed it more. Nvidia has proven they can't be trusted to operate without competition to keep them honest.

You do know that competition is the only way to keep prices down? It has nothing to do with Nvidia being honest or dishonest. What do you mean, dishonest? They have a monopoly on high-performance GPUs and they take advantage of it. Personally, I will not buy the RTX series at today's prices, but any manager of a corporation tries to maximize profits. If no one buys, they'll have to lower prices, but if they sell in sufficient volume, they'll keep prices up.
 
Feel free to die on that cross; however, people will buy what they think is worth their money, and AMD has done a poor job of making GPUs people want to buy.

Nobody cries when AMD sells Threadripper CPUs for $1000, because they perform great and are a luxury item. You know who "needs" 16-core CPUs? Basically nobody. Then people lose their shit when others spend that much on Nvidia, because holy fuck, corporate justice!

Don’t be mad other people spend money, be mad AMD isn’t giving them anything to buy.

And if you want to talk BS, never forget this gem where AMD previously declared the upcoming Vega cards were going to skip right over Pascal cards and beat Volta (which was the original code name of the post-Pascal Nvidia architecture):

[attached image: AMD's Vega vs. Volta marketing tease]

AND PEOPLE BOUGHT INTO THIS HYPE! The only thing dumber than an Nvidia fan is an AMD fan.

Actually, they were right in some ways. Where is consumer Volta? Not talking about a $3k Titan V, but consumer Volta? It didn't happen, did it? Because Nvidia probably got wind of AMD working on ray tracing, then decided to out-ray-trace them, and here we end up, with AMD looking at a strongly raster-focused uarch next year... maybe "poor Volta" was all part of a corporate espionage joke?
I'm certainly giving zero fucks about a 1080p ray-tracing experience. If it's not 4K-playable, or at minimum 1440p60+, Nvidia can fuck off with their new shiny. We were only just getting to high Hz, which is why it's hilarious to hear all those people suddenly sticking up for 45Hz ray tracing. I thought Hz was far more important than IQ? Now, with the new shiny 2080 Ti, console Hz is fine as long as it's ray tracing. Oh, at 1080p. Lmao. Justification for a rather overpriced card that has wholly under-delivered. Let's be honest, the 2080 launch has been a fucking Fiji-level flop. Like Raja parading CFX 480s vs. a 1080, or a chilled 28-core to "beat" a competitor's air-cooled product. Pathetic.
 
You do realize that the 290X was competing with Kepler, right? Would you mind adding some historical accuracy in with your condescension?
The 290X launched competing with and easily beating Thermi (which was already beaten by the prior generation; that was my point) and the Titan. Then the 780s came out. For a period of time the 780 Ti held a tiny lead, especially over the shitty 290X reference-cooler models. Then the aftermarket cards came out, the drivers got better, and guess what: since then, the 290X is faster even today by quite a decent margin, especially in VRAM-limited scenarios. Keep in mind I run the very best model of these cards and am quite aware of its history.
 
Keep in mind I run the very best model of these cards and am quite aware of its history.

I'm aware of the history too, and I don't run noisy space heaters for the hell of it. And yeah, AMD did eventually unfuck their drivers for the most part. But they were still behind.
 
I hope so, we've never needed it more. Nvidia has proven they can't be trusted to operate without competition to keep them honest.

All those years Intel had no competition, they kept their prices in check. Yeah, they only bumped performance by 3% each generation, but the 8000-series chips didn't cost twice what the 7000s did.

No company is your friend. And don't compare Intel to Nvidia either. Is the 2080 Ti being 5% faster than the 1080 Ti worth almost double the price? I don't like the price increase either, but when it comes to GPUs, Nvidia is the only one that still provides at least a 30% performance increase every year.
 