AMD Radeon VII Interview with Scott Herkelman @ [H]

2+ years ago, when I bought the GTX 1080 at launch, I wasn't so fed up with Nvidia. I was upset about the Founders Edition crap, but I was naive enough back then to believe they wouldn't keep exploiting and wallet-abusing the hardcore PC gamer. After witnessing the 1080 Ti, Titan X, Titan Xp, and now this RTX series, I'm actively rooting against them. I have to admit, though, that AMD played a small part in this by not competing as aggressively as it could have.
Certainly not rooting against NVIDIA here, but surely want AMD to bring more competition to the market. This will also keep a lot of heat on Intel as well with its upcoming GPUs. This is all great news for those of us that buy video cards. :)
 
and Nvidia really did succeed in riling up the public with it.

To a negative reaction, yes. Jensen talked some complete and utter shit about FreeSync not working. He may as well have said that any of the tech press who positively reviewed a FreeSync monitor is a dumbfuck, since according to him it simply doesn't work.
 
Is there any word on what VR performance is like?

I would love to dump my Pascal cards and go back to AMD, but VR makes up a major part of my gaming time, and the last bit of reading I did suggested that there was more to nVidia's advantage in VR performance than just plain being faster. I guess it was that simultaneous multi-projection feature they talked about back in 2016?

Edit: I should probably admit that I haven't thought about that in a while, so that last bit of reading I did may be pre-Vega.
 
Maybe someone with the right email addresses puts out a feeler towards a Radeon VII Nano to support the SFF movement...?!? ;^p
I do not see that happening.
 
FP64 support was the major selling point of this card for me and for many other amateur professionals, students, and budget-conscious researchers for whom Instinct is not a good fit at all. Without it I've lost interest, and so have many of the people I've talked to. It went from a sellout hit to a mediocre flop, imo. They should have increased the price and marketed it against the Titan V, which is the only recent FP64-enabled gaming card.



No, it is nothing like the Titan. The Titan V has FP64 acceleration; this doesn't, so it is a half-baked compute card.
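For readers wondering what the fuss over FP64 is about: single precision carries roughly 7 significant decimal digits versus about 16 for double, which is the whole ballgame for scientific workloads. A minimal CPU-side sketch of that gap (plain Python and `struct`, nothing GPU-specific):

```python
import struct

def to_f32(x: float) -> float:
    # Round-trip a value through IEEE-754 single precision (FP32)
    return struct.unpack('f', struct.pack('f', x))[0]

big = 1e8  # needs more significant digits than FP32 can hold

# In FP32 the +1 vanishes entirely to rounding:
print(to_f32(big + 1.0) == to_f32(big))  # True
# Python floats are IEEE-754 double (FP64), so the +1 survives:
print((big + 1.0) == big)                # False
```

Cards without dedicated FP64 hardware can still compute doubles, just at a small fraction of their FP32 rate, which is what makes them a poor fit for double-precision work.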

It's not even available yet and already has gone from hit to flop?

Amazing.

 
Get the waterblocks ready! Let's see what 7nm Vega can do... 1900-2000MHz on the core would be sweet. HBM, I'm guessing, will hit the 1200 numbers; that's a shit-ton of bandwidth. Let's open 'er up!
 
I suspect this will be a great card for a matching use case. Hopefully they drop the price by at least $50, or $100 to make it a no-brainer.

Releasing directly allows AMD to maintain the pipeline, so hopefully we don't get "shortages" from AIBs trying to milk the crypto wave. Plus, it keeps more $$ in their pocket.

Seeing Jensen get all flustered in his leather jacket was amusing. Looks like competition is back!
 

I wouldn't say competition is completely back, since the Radeon VII is only competing with Nvidia's second best, but AMD fans no longer have to endure the eternal wait that was Vega 64/56.
 
For all those people saying they did it so that it doesn't compete with their Instinct line: it's not like anyone buys AMD cards for enterprise anyway. During the Q3 2018 earnings call, Lisa Su confirmed that enterprise GPU sales were about $20M. If you assume an ASP of around $2,000, that means they sell about 10K units per quarter.

Might as well take a chance, toss a low-cost FP64 card out there, and maybe you'll get some actual market penetration.
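The 10K-units estimate above is just this division, for anyone who wants to vary the assumed ASP:

```python
# Figures from the post above: ~$20M quarterly enterprise GPU revenue,
# with an *assumed* average selling price of ~$2,000 per card.
revenue_usd = 20_000_000
asp_usd = 2_000

units_per_quarter = revenue_usd // asp_usd
print(units_per_quarter)  # 10000
```

Halve the assumed ASP and the implied volume only doubles to ~20K units per quarter, which is still tiny for an enterprise GPU business.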
 
Why is a new GPU model releasing in 2019 with no HDMI 2.1, and no adapter included to make up for that shortcoming?
 
I wish there had been a question about Nvidia's claim that 95%+ of FreeSync/VRR screens are "broken."
 

I'm curious as to the split of gamers vs. pro users; to increase the price further (which is already a point of contention) so that you can have FP64 would be silly, I feel. If you're a professional, pay for the MI50 it is based on... just like my production rigs use Xeons, because at the time they were the best and most reliable solution for what was needed.
For me and many other ML users, FP16 is enough, and this card delivers for those who game and work. For more serious FP64 you have typically been shoehorned toward more expensive cards anyway.
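To illustrate the FP16-is-enough trade-off: half precision keeps only about 3 significant decimal digits and tops out at 65504, which is tolerable for ML weights and activations but nowhere near FP64-grade. A small sketch using Python's `struct` half-precision format code (requires Python 3.6+):

```python
import struct

def to_f16(x: float) -> float:
    # Round-trip a value through IEEE-754 half precision (format code 'e')
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_f16(3.14159))   # 3.140625 -- only ~3 decimal digits survive
print(to_f16(65504.0))   # 65504.0  -- the largest finite FP16 value
```

The payoff is that each value occupies 2 bytes instead of 4, so the same VRAM holds twice the model, and FP16-accelerated hardware runs it faster too.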
 
That may be true; it may offer little or no benefit (though the system being able to use some of that 16GB of HBM as system memory could be useful). However, that's also not how you go about promoting a technology you've developed and touted as capable of greatness.

AMD currently has people's attention, which it really didn't back with HSA, so back then it lacked the ability to push people to support it (HSA was something I very much looked forward to, so I was bummed when it never really got utilized). This time they do have that attention, and while AMD isn't really one to push people to use what it has cooked up the way nV is... the point is that people CAN'T use it if AMD doesn't integrate it into products. If the framework is there but no product ships with it, how can they expect anyone to utilize it?

Furthermore, even if it doesn't help Vega all that much, how can they convince people to start developing for the consumer market if there aren't consumer products to develop for? It's like RTX: it's not a huge hit yet, but it's there to utilize at least, and there's been adoption. It would just be nice if AMD took that approach. After all, it's "if you build it, they will come," not "if you lay the groundwork, they will build it for you," heh.
Uh, OK. I'm not sure shoehorning technology in where it is not needed is a really great approach, but if you think so, roll with it.
 
Does a car really need a twin-turbo, compound-supercharged W12 with well over 1000HP?
Where does the industry end up when products aren't pushed out with "unneeded" features? Intel seems like the poster child for that outcome.

But whatever, maybe I'm just smoking something. :\
Vega was shat on for having tech that wasn't ready or was poorly implemented ("wait for Vega," lol). Nvidia's 2000 series is making the same mistakes, while AMD is learning from them and listening to us, the consumers and buyers, for a change. I think by focusing on core markets AMD has delivered this time: plenty of VRAM, good FP16 for the majority of mainstream compute tasks, pricing the same as the 2080 but with more and better bundled games, and always room to reduce pricing in the future.
 
That FreeSync-doesn't-work line is exactly the narrative he has been pushing, and the majority of people fell for it.
 

This shows a terrible ignorance of what an Instinct card actually is. First off, I'm not a professional; I'm a student. Two, an Instinct card costs $5K minimum; it's probably more like $10K if I'm honest. Hell, Nvidia wants $20K for comparable professional cards. Three, it's not really available to consumers; I'd have to contact data-center suppliers who might not even be willing to sell it. Four, it's Linux-only, with no display output and zero gaming capability.

So basically I'm shoehorned into getting a Titan V because Nvidia, unbelievable I know, is the only company that provides a product that meets my needs. And that $2,999 MSRP is hard to swallow.

CUDA is terribly entrenched. If AMD wants amateurs, students, and budget-conscious researchers to use OpenCL and later Vulkan, they need to provide the resources for them to do so in the first place. People will write code for the card that works for them. It would give AMD a better footing in the market than the very poor one they have now.
 
How's it going to OC? My 2070 OCs to over-2080 levels. So is this rx7 just a card that can't OC to 2070 levels, doesn't have RTX, and isn't future-proof in any way?

But costs $200 more?

Wasn't this place [H] once?
 

The RX7 is a sports car from Mazda, the Radeon VII is a GPU from AMD...

The Radeon VII cannot OC to RTX 2070 levels because it is already at RTX 2080 levels of performance...

We will not know how the Radeon VII actually does with overclocking until we see some reviews, but I would bet undervolting the GPU & overclocking the HBM2 will yield better performance...

I would say the 16GB of HBM2 is a good bit of future-proofing...

The Radeon VII is actually 100 bucks cheaper than the RTX 2080...
 

16GB of VRAM is good for 4K gaming now, but who knows about the future. The VRAM amount might hold up, but there is no way the GPU itself has the power to push games at settings that would require that kind of VRAM usage. When it comes primarily to gaming, future-proofing really doesn't exist. It could be a different story on the professional side of things, however.

It's the same price as the MSRP of the 2080, and there are some models at that price point. How well those MSRP 2080s perform compared to this or the 2080 FE I have no idea, but I'd imagine they're all going to end up pretty close to each other.
 
Sorry, I thought Instinct was like the Frontier Edition, etc. But if you think that's expensive, try being a student pilot...
Where I'm going with that is that your university should have access to this equipment; otherwise, what is the point in teaching you? You're not a professional as such yet, so I don't expect you to afford that equipment; ask your uni? If not, hit up AMD. They are pretty active on twatbook and plebbit (/r/amd) and often respond; explain your situation and see what they say. I get the feeling you're an edge case, though, as most FP64 use I'm aware of is scientific, where numerical precision is required.
Good luck.
 

Honestly, I was looking at 2080s after this came out. All the good ones with triple-fan cooling seem to be priced close to or above the Founders Edition card. So I honestly don't think AMD priced it badly at $699, if the cooler lives up to its looks and handles the temps right.
 

The 16GB HBM2 in the Radeon VII is more "future proof" than the 11GB GDDR6 in the RTX 2080 Ti or the 8GB of GDDR6 in the RTX 2080...

When we need more than 16GB of video memory, we will also need better GPUs; so that is upgrade time, faster GPU & 32GB of video memory...! ;^p
 

Considering I just bought an RTX 2080 from Newegg for $633 ($720 minus a 10% instant discount minus a $15 rebate), the Radeon VII is most definitely NOT cheaper for anyone watching for deals. Even without discounts, they're easy to find for $700.

I really hope AMD does well with this card as nVidia needs competition, but silly price comparisons are pointless.
 
LOL, when I was in college they had us writing C code in Notepad and compiling via gcc in a command-line terminal. This was back in 2003-2004, and college budgets have only gotten worse. Do you honestly think college campuses are readily stocked with computers running $5-10K professional GPUs?
 
I've heard you want more system RAM than VRAM. I wonder if this card will make 32GB of system RAM the default now.
 
Thanks for the interview Kyle.

I like that it has 16GB of VRAM. It's good that they have a card with performance around a 2080 but with more RAM. If I were in the market for a card in that price or performance range, the 16GB would be very compelling. I'm not yet convinced that HBM2 is actually faster than GDDRx; if it were, I think we would see it in more graphics cards by now.

A few questions about how the memory is accessed in the paragraph below, perhaps [H] can submit this to AMD/nVidia for clarification?

Questions: The bandwidth spec they give us is for all of the chips being accessed at once, isn't it (for both GDDRx and HBM)? A 4096-bit-wide memory interface for HBM2 sounds amazing, but a texture read in a game is maybe what, 32 bits wide? It would likely come out of a single HBM2 stack, which I think is good for about 256GB/s per package; that doesn't sound nearly as fast as the headline 1TB/s, which is obviously a figure counting all the packages together. The same question applies to the GDDR flavors: a 1080 Ti has 11GB of GDDR5X rated at 11Gbps per pin, so how does a single chip's throughput actually compare? And beyond raw chip speed, how do either of these technologies store individual textures? Is a texture spread out across all of the chips, or is it a complete item in an individual chip? If the latter, then individual-chip throughput would be a more important spec than total bandwidth, wouldn't it? If items in memory are spread across all of the chips, then the total bandwidth measurement would be the most useful one. I haven't found those questions asked or answered anywhere; we just get the marketing saying "bajillions of GBs!" or how wide the memory bus is.

Hopefully the above questions, and the reasons for asking them, have been articulated well enough to get some kind of answer from someone. Reading the specs on Wikipedia for HBM2 and GDDR5X didn't answer them.

Competition is good; can't wait for the review on this one. [H], please add a 1080 Ti to the review, since it has more VRAM than a 2080 and about the same performance.
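On the bandwidth arithmetic in the questions above: the headline figures are aggregates across the full bus width, and they are quoted in bytes per second. A quick sketch of how the commonly published numbers combine (per-pin rates and bus widths quoted from memory, so double-check against official spec sheets):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbit_s: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * pin_rate_gbit_s / 8

# Radeon VII: 4096-bit HBM2 interface at ~2 Gbit/s per pin
print(peak_bandwidth_gb_s(4096, 2.0))   # 1024.0 GB/s, the ~1 TB/s headline (bytes, not bits)

# GTX 1080 Ti: 352-bit GDDR5X interface at 11 Gbit/s per pin
print(peak_bandwidth_gb_s(352, 11.0))   # 484.0 GB/s
```

So the "11Gbps" quoted for GDDR5X is a per-pin rate while the "1TB/s" quoted for HBM2 is the whole 4096-bit bus; comparing the two directly mixes units.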
 
With reference to my RX-480 system at home and how easy it was to do 590/480 X-fire: the only thing I had to add was the RX-590. I just dropped it in and booted up, and that was it. Nothing else required. It made a huge difference @ 4K.

So I should sell my Vega 56 or X-Fire it with Radeon VII?
 

I would sell it and buy the VII, since the VRAM amounts are not equal between the two cards and the memory bandwidth is more than double on the VII vs. the 56. Plus, selling the old card makes the upgrade more affordable.
 

Honestly, it's not for me to say...;) It's too early to really look at the RVII--the product isn't finalized yet. It could turn out to be much faster than the hints from the engineering samples AMD has benched that we've seen thus far.

All I can say is that I've been very happy with this RX-590/480 system--much more than I actually expected when I tried it. My original plan was to shunt the 480 to the wife and keep the 590, but it works so well I decided to keep both (she is completely undemanding of all things 3D, thankfully). It's also got 64 ROPs--just like the RVII--but of course there are *other* differences, too *cough* The RVII is of course much newer engineering! I had purchased the RX-480 about a year ago, so buying the RX-590 just before x-mas kept with my self-inflicted budget for GPUs these days--<$300 is my sweet spot.

But anyway... to try and answer your question: your Vega 56 could be worth more, or less, after the RVII ships--it will depend, I think, on the quality of the RVII product. So I might counsel keeping your V56, getting the RVII, and then analyzing for yourself whether to keep the 56 or sell it. But again, it might be worth more to sell it now, or after the RVII ships--can't say which.

One other thing AMD has done with multi-GPU support (Crossfire): they've eliminated the need for the cards to be clock-synchronized as used to be the case, so no worries about the master card running at a higher clock than the slower card. Also, the current drivers do a good job in the default Crossfire setting of automatically disabling Crossfire if your game just doesn't support it--I have not had to manually turn off Crossfire even once, and I have a whole passel of games installed...!
 

At this point it should be finalized; you can buy one in less than three weeks, and suppliers are probably shipping them out to their e-tailers/retailers as we speak.
 
I am really hoping that we see a price decrease soon, or that independent reviews show a reason to buy the Radeon VII over the RTX 2080. In the interview he kept saying that he's glad to see new games taking advantage of all the Radeon VII's "new technology," but there is no new technology here. It's a die shrink plus 8GB more HBM2 memory, which is why I find it a hard sell compared to an RTX 2080. If the rasterization performance is the same, the price is the same, and G-Sync ends up working on many/most quality FreeSync panels, then why would you buy something without RTX and DLSS capability (even if they're not really ready yet)? It's clear that ray tracing isn't just an Nvidia gimmick like HairWorks, even if it's not ready for mainstream use. The visual clarity and realism it provides is the future of lighting in games and just needs development time. Buying an RTX 2080 at least gives you the ability to use it and DLSS (which should be a nice improvement over traditional anti-aliasing).

AMD has regained prominence by providing budget/mid-tier performance significantly cheaper than the Nvidia competition. Coming to the table four months late with RTX 2080 performance and zero new technologies just doesn't move the needle when you are also paying RTX 2080 prices.
 
Did you watch AMD's keynote? They gave specs and showed benchmarks. I think you might be thinking of Zen 2.
Try to keep up, mmmkay!
 
Nice Q&A there! Thanks, Kyle! So he indirectly confirms that Vega 7 is a one-off halo product, buying time for big Navi to be made properly to battle for the performance crown vs. nVidia, while small Navi is on track to take the place of Polaris in price and of Vega 56/64 in performance. A very sensible strategy, imho, for a company that has been behind in R&D money for so long.
 
Wouldn't it be nice if AMD sampled a card to [H]? :) (Nice to see more and more open dialogue like this.) I can't imagine purchasing yet another card so quickly, lol. Hopefully these ones won't start smoking or having all sorts of... issues.
 
So I was looking at RTX 2080s, and the ones that I like are all over $800, lol. RTX isn't worth spending money on until it's mainstream, and DLSS I refuse to use; I want real resolution and I will never upscale. One can sugarcoat it all they like: DLSS is upscaling in a nutshell, and I am not willing to sacrifice any quality whatsoever. It seems like the Radeon VII ends up being $150 or so cheaper than an RTX 2080 with a triple-fan cooler.

On top of the price, I am terrified of the space invaders, lol.
 