Discussion in '[H]ard|OCP Front Page News' started by Megalith, Feb 10, 2019.
Oh man, you missed a golden chance here. I would have gone with Feliz Navi-dud.
Everything from AMD has been a stopgap since the 290X...
Navi is not based on GCN. AMD is, thankfully, finally putting GCN out to pasture with Navi. It will be 10 years that AMD has been on the same microarchitecture when Navi finally comes out. Even NVIDIA only milked their highly successful Tesla microarchitecture for 4 years.
Isn't Navi the last generation to be still based on GCN?
Yes, GCN as far as the wiki says. But GCN was/is modular or some such... I mean, if the changes are significant enough, what difference does it make?
Not sure. According to an old old slide, it's supposed to use "nexgen" memory, whatever that means.
Nah, it is well known that GCN is very underutilized. The Vega 64 Liquid Cooled edition barely competed with the GTX 1080 at launch, and the air-cooled version consistently underperformed; now the air-cooled version gives the GTX 1080 a hard time, and the Liquid Cooled edition matches or outperforms an overclocked GTX 1080 more often than not. Just like the RX 480 being slower than the GTX 1060 at launch and now being faster. It's just that newer drivers, along with more complex games, are able to exploit the parallelism of GCN, which suffers from underutilization, especially in DX11 titles where draw calls are single-threaded on AMD.
No, it was confusion over compute units vs precision.
A lot of people here are touting the Radeon VII as a productivity card that can also game - which it may very well be, but AMD didn't market it as that. They marketed it as a gaming card and a direct competitor to the RTX 2080. During their reveal they showed benchmarks against the RTX 2080, demo'd Devil May Cry 5, brought a The Division 2 dev up on stage, and to top it off bundled the GPU with three $60 games. I believe they had only one slide covering productivity performance, and that's it. While it's a decent gaming card, it overall fell short of the mark when it comes to gaming performance, power consumption, and noise levels compared to the equally priced RTX 2080 and even the GTX 1080 Ti.
That statement doesn't mean there's no room for improvement. It's performing as expected upon release, but over time things 'could' get better. As I said in my last post, given time and in enough people's hands, we'll more than likely see what the card is truly capable of.
AMD's statement implies it is doing as well as it ever will. But if you want to hold your breath waiting on a 3-8% improvement that even the manufacturer does not think is coming, so be it.
3-8 percent... so then it would only be 4% slower than a 2080.
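For what it's worth, the arithmetic roughly checks out. A minimal sketch (Python), assuming the card launched about 10% behind the 2080 on average - a number supplied here for illustration, not one taken from the thread:

```python
# Quick sanity check on the "only 4% slower" claim.
# ASSUMPTION (not from the thread): the Radeon VII trails the
# RTX 2080 by roughly 10% at launch in averaged gaming benchmarks.
launch_relative = 0.90   # R7 performance relative to the 2080 (assumed)
uplift = 0.07            # midpoint of the hypothetical 3-8% driver uplift

after_uplift = launch_relative * (1 + uplift)
print(f"Relative to 2080 after uplift: {after_uplift:.3f}")  # ~0.963, i.e. about 4% slower
```

With a 10% launch gap and a 7% driver uplift, the remaining gap lands right around 4%; a smaller 3% uplift would leave it closer to 7% behind.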
Card is dead to me as it probably is for everyone else except for the most diehard AMD fans.
Considering the inconsistency in some of these benchmarks, I think AMD may have released these with half-baked drivers. Just look at the Battlefront II versus Battlefield V benchmarks - both running on the same engine. If I were AMD, I would be optimizing for BFV as a priority over something older and less played like Battlefront II. The difference is a spread between -10% vs the 2080 (unoptimized?) and +13% vs the 2080 in BFV.
From another review, they undervolted it and ended up at 284W @ 1800MHz on the GPU.
The 2080 is what, 350W?
I got mine for £649 shipped which is about the same cost as the cheapest 2080 around me. But the VII comes with three games all of which I was looking to buy - something I couldn't say for the 2080. All in all, it probably cost me £549 net (649 - the £100 I likely would have forked out for the games) so not hugely off what I'd consider the sweet spot.
Also, it does look like Sapphire is coyly hinting at a Nitro+ version, so it may still make sense to wait.
YAY! As a SolidWorks power user, this is f'n EPIC news.
With 8GB HBM2, it would have half the memory bandwidth and performs worse.
This is completely false.
AMD had to use HBM2.
Vega (and GCN in general) is too inefficient and, in comparison to GDDR5/X available at the time of release, HBM2 offers the additional memory bandwidth and lower power consumption that Vega needs.
It clearly does, otherwise there would be less memory bandwidth, and gaming performance would be worse.
I don't know that I totally blame AMD for lagging (somewhat) behind NVIDIA, considering they seem to have been focusing on their CPU business to some extent. Their CPU business, prior to Ryzen, was dead for nearly all intents and purposes, and they likely see more growth coming out of that area, as Intel doesn't have a lot to hit Ryzen/Threadripper with at the moment. They can stave off NVIDIA for a while longer with rehashes of the existing Vega/Polaris architecture while they shore up their CPU position, and then come back later in 2019 and hit NVIDIA with Navi.
Additionally, from what I can tell, the Radeon VII looks like it may do well with the crypto miners that are still left standing.
It sucks because the market has been so broadly competition-starved that we'll take just about anything, but for gaming purposes I would definitely call the Radeon VII the lower end of 'just about anything.'
Seriously looking forward to Kyle's review. I would like to know what the clock speeds are in each game, because my guess is that anything underperforming is not fully utilizing the card and the clocks don't peak out.
You are incorrect on a few points.
Well my observations were based on the reveal presentation and launch reviews from various tech websites. I'll be interested to see what HardOCP's review will reveal especially if you have access to updated/newer drivers.
What you stated is not true about AMD and it's marketing of the part.
How so? I'm assuming we watched the same public release presentation. The majority of the time spent during that presentation on the reveal of the Radeon VII involved gaming elements. Even AMD's website lists the Radeon VII as "THE WORLD'S FIRST 7nm GAMING GPU". The focus has been on its gaming performance, not its productivity. I'm not saying they're ignoring productivity.
One observation about the sentence "the world's first 7nm gaming GPU" I'd like to point out is that 7nm comes first and gaming GPU comes second, if that makes sense.
AMD isn't going to come out and say, look, a Radeon Pro WX 9100 with a die-shrunk GPU for 700 bucks. 'Cause we know you all hated that our 16GB workstation workhorse was only $700 cheaper than the NVIDIA alternative. So let's just make the VII 700 bucks. lol
They may not have talked about it as much on stage... but trust me, everyone watching thought, oh my, it's a 16GB workstation card for under a grand. I have been asked about it four or five times in the last week by clients that do video. That user base has taken notice. And now AMD just announced the Radeon VII can use their pro driver. Game changer for the workstation market. AMD just made every workstation card under $2500 obsolete.
The 2080 goes past 1.8 GHz out of the box and it averages 215W, peaking around 220-230W when gaming. I don't know where you got the 350W number from. Even the 2080 Ti doesn't go past 290W when gaming.
System draw was the number I was going by; the 2080 looks like a power-hungry pig now.
I think we all are. It sucks that they probably finished the review using the broken press-kit pre-release drivers, and then the "fixed" public drivers dropped and they have to do some of it all over again.
I bet Kyle and gang are pulling their hair out.
Is it a pig? Or is it damn near equivalent to the R7?
And do they now test them with the Radeon Pro enterprise software as well? lol, joking
Brent has actually not had any real issues with his RVII card. All the issues were on my end, and AMD did warn of known issues on X399 platforms.
I'm giddy about it. I can't sell the purchase of a 2080 to the wife. I can easily sell "but I can do work too!" to her
What happened to Ashes of the Singularity? I thought that was the #1 benchmark for AMD cards just a while back (Async Compute, anybody?)
Now it's not even on the list.
Kidding aside, I think the RVII is pointing things in the right direction for AMD. So it's not a grand slam, but it's got a lot of potential. Definitely turned a few heads in the DX12/4K stuff.
Really? They're both within 2-3W of each other stock. I'm sure the 2080 would show similar numbers if it was also undervolted.
Then they should test that. AMD offers undervolting directly in their software, so it's a simple feature every end user has access to - 50W less for the same performance. Not sure how the 2080 would perform at lower voltages.
Another positive is that the card is cheaper, has more RAM, and comes with a good three-game package. It isn't looking too bad for AMD this round.
The BFV optimization argument doesn't hold water, as the boost in BFV was 31% while Battlefront II's was 32% when compared to the Vega 64. There is no reason why AMD wouldn't have tried to optimize Vega 64 for Battlefront II last year, as it was a big game then.
That's quite the strawman. To paraphrase. "Undervolting the R7 probably won't have problems, and undervolting the 2080 probably will. So the AMD is 50 Watts less! Definitely better."
I am not even sure if I know what strawman means. Is it the same as scapegoat? I feel like strawman has been overused in the last 6 months.
Or " i can make you look even better in videoes"
"an intentionally misrepresented proposition that is set up because it is easier to defeat than an opponent's real argument."
In other words, his best argument was inventing a theory that the R7 could undervolt and save 50W, where the 2080 could not, and therefore made the R7 better. It might be true, of course, but he had no evidence. Just made something up and used that to justify his position.
The Vega 56/64 bottleneck, from the reviews I read, was bandwidth.
Radeon VII speeds up Vega by giving it more bandwidth, removing the bottleneck and eking out performance that was in Vega but couldn't be harnessed, in addition to just being faster due to the smaller process node.
The Achilles' heel is that it only has 60 compute units rather than the 64 which is the max of the architecture (perhaps to accommodate yields?).
If this were a full 64 compute units, it would be much closer to, if not better than, the 2080.
From the reviews, the drivers at launch are apparently god-awful as well. Clearly this was rushed out the door. Hopefully there will be some pickup with drivers, but how much is hard to say.
Nevertheless, I'll be waiting for Navi.
Again, this card was intended just to use up non-optimal chips intended for the Vega 20 cards. It was never intentionally planned or designed to be a top-tier 'gaming' GPU.
It was also never intended to make a profit... just reduce what would otherwise be a greater inventory loss of otherwise unsalable product.
As a scientific or production card it is exceptional - that is the true market for this card.
AMD was crawling near bankruptcy for several years; now their CPUs are letting them walk briskly again. Top-performance GPUs will come... perhaps with Navi, but only after they're prepared to start a real footrace with nVidia.
/an RTX-killing product means a price war... which AMD isn't ready for yet
Let me spare everyone the suspense.
The Radeon VII was a lucky happenstance for AMD, as they originally didn't have plans to release a "high end" card. They found something they intended for content creators could get "close enough" in gaming, so they released this card.
Navi is NOT going to be the nVidia killer either - it is the Polaris replacement. Although, if the rumors of it being better than a Vega 64 at $250 or cheaper end up being true, that in and of itself would make it a damn good steal at that price point.
If you want something to contend with nVidia, it is going to be the NextGen architecture, which likely will not hit until late 2020 at the earliest - and, based on recent reports, likely not till 2021. It will also be the first post-GCN architecture, and developing a new architecture takes time.
That's the roadmap, folks. Best to keep your expectations in check. Although even if it's slower than a Radeon VII and an RTX 2080, if Navi can deliver on those rumors of better-than-Vega performance at less than $250, I'll finally have not only a reason but a means to convince my wife to let me finally upgrade my Radeon 390...