AMD Radeon VII Interview with Scott Herkelman @ [H]

I can't wait for the review
I've been waiting for Navi this long so I might just hold off unless the review tells me otherwise.
 
LOL, when I was in college they had us writing C code in Notepad and compiling via gcc in a command-line terminal. This was back in 2003-2004, and college budgets have only gotten worse. Do you honestly think college campuses are readily stocked with computers running $5-10k professional GPUs?
They should with what they charge and the churn rate, but my experience then was the same as yours. I left after a semester because I felt university was almost a scam for many vocations. You can learn most of it yourself, then begin to gain experience, and if you're good you don't need that paper. Only one of my friends actually ended up doing what they went to uni for; most were in average-to-shitty jobs while I was doing okay going the experience route. Eng/Dr etc., yeah, uni is for you. The rest, especially 'business studies' or whatever they call it... if you need to do a course to be interested in business, you are not cut out for it.
Universities at higher levels have some juicy gear though for smaller numbers of students, I do work with some of them in my field.
 
I am really hoping that we see a price decrease soon, or that independent reviews show a reason to buy Radeon VII over the RTX 2080. In the interview he kept saying that he's glad to see new games taking advantage of all Radeon VII "new technology", but there is no new technology here.

He never said "new", he said "all":

We were pleased that Ubisoft announced that The Division 2 will support all Radeon VII technology features and we are working with many other developers as well.
 
The one question I wanted to ask wasn't there:

Why should someone buy a Radeon VII instead of a Geforce RTX 2080?

Faster, better memory and superior Vulkan performance. Seems better than slow ray tracing and supposed DLSS support. Also, if a title supports Rapid Packed Math, it does speed things up quite a bit on AMD. You also have to consider the large failure rate, so unless these Radeon cards start dying after they release, that alone would drive me to buy the Radeon over the 2080 at the same price. Just a matter of what is important to you.
 
The one question I wanted to ask wasn't there:

Why should someone buy a Radeon VII instead of a Geforce RTX 2080?

I'll have to see the benchmarks, but if it has around RTX 2080 performance I think I would get the Radeon VII over the 2080. The extra VRAM is nice; I played the RE2 remake demo recently, and the max settings in that can bring you to over 10GB of usage.

I'm also curious whether they left SR-IOV support enabled (which the Instinct card has...). If so, it would let you pass a single GPU to multiple virtual machines. That would make the card a must-buy for me. Unlikely that they did, though.
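If anyone wants to poke at this themselves once the card lands, here's a rough sketch using the standard Linux SR-IOV sysfs interface. Big assumption: that the consumer driver exposes SR-IOV at all (which is exactly the part in doubt), and the PCI address below is made up.

```python
# Sketch: check for / enable SR-IOV virtual functions on a PCI GPU through the
# generic Linux sysfs interface. Needs root, and only works if the driver
# actually exposes SR-IOV -- which is the open question for Radeon VII.
from pathlib import Path

PCI_ADDR = "0000:0b:00.0"  # hypothetical PCI address; check `lspci` for yours
dev = Path("/sys/bus/pci/devices") / PCI_ADDR

totalvfs = dev / "sriov_totalvfs"  # max virtual functions the device supports
numvfs = dev / "sriov_numvfs"      # how many VFs are currently enabled

if not totalvfs.exists():
    print("No SR-IOV capability exposed for this device/driver")
else:
    print(f"Device supports up to {int(totalvfs.read_text())} virtual functions")
    numvfs.write_text("2")  # carve out 2 VFs, e.g. one per VM to pass through
```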
 
No, they didn’t fall for it. That’s the point. He just made himself look like a petulant child to everyone.

And that's why I said people fell for it. He wants people to see what he was doing as bad and to hate it. That guy is no fool. We can hate it all we want, but the reality is his company is still going to generate profit. It's all the better if there is no "true" competitor that can reduce that profit. If a little bit of drama can make that happen, then so be it.
 
Great piece, thanks guys. I haven't seen many, if any, direct interviews with AMD folks on their CES announcements.

I'm still lukewarm on the VII considering it doesn't look to be pushing forward price/performance or features at all vis-a-vis what was available two years ago from the competition. AMD's stress on extra VRAM and content creation is straight out of the marketing playbook, i.e. if your steak isn't amazing, talk about the fixins'. As always, I'm reserving judgement until the review, but I may pick one up if a) it edges out the RTX 2080 in games I play, b) power draw is reasonable, and c) I can get it at or below MSRP.
 
I also like how he explained that the extra VRAM wasn't for gaming per se, but to allow budget-minded people an option to game and have some compute options.
 
The one question I wanted to ask wasn't there:

Why should someone buy a Radeon VII instead of a Geforce RTX 2080?

Maybe because they don't want to WASTE $1,000 to play Space Invaders?

 
Tired of people complaining. It's just a vicious cycle. AMD makes no graphics card - people get mad. AMD makes a card that somewhat competes - people still get mad and say to buy Nvidia. If this card isn't for you, then don't even bother bitching about it.
 
So I was looking at RTX 2080s and the ones that I like are all over $800 lol. RTX isn't worth spending money on until it's mainstream, and DLSS I refuse to use; I want real resolution and I will never upscale. One can sugarcoat it all they like, but DLSS is upscaling in a nutshell, and I am not willing to sacrifice any quality whatsoever. It seems like the Radeon VII ends up being $150 or so cheaper than an RTX 2080 with a triple-fan cooler.

On top of the price I am terrified of the space invaders lol

If there is a $150 price difference once the cards are actually available, I agree with you. I don't think RTX/DLSS is ready enough yet to warrant the extra cost. In another generation or two, maybe. However, if the actual retail prices end up being the same, and rasterization performance is within 1-3%, then the RTX 2080 gives you more for your $$$. With Radeon VII only being released in low quantities (<5,000), it's not very likely that most people will be able to get one for $699. Most will have to buy it from a reseller that will most assuredly mark up the price. Ultimately we'll have to wait and see once they hit the shelves.
 
This was a great interview, very informative & great selection of questions asked! Thank you!
 
Thanks for the interview Kyle.

I like that it has 16GB of VRAM. It's good that they have a card with performance around a 2080 but more RAM. If I were in the market for a card in that price range or performance tier, the 16GB would be very compelling. I'm not yet convinced that HBM2 is actually faster than GDDRx; if it were, I think we would see it in more graphics cards by now.

A few questions about how the memory is accessed in the paragraph below, perhaps [H] can submit this to AMD/nVidia for clarification?

Questions: They give us a bandwidth spec, but that is all of the chips being accessed at once, isn't it? (For both GDDRx and HBM.) A 4096-bit-wide memory interface for HBM2 sounds amazing, but a texture in a game is maybe what, 32 bits wide? It would likely download out of a single HBM2 package, which has (I think) 256GBps per package, or 32GBps if that figure is actually in bits. That doesn't sound nearly as fast as 1TBps (which I believe is in bits, not bytes; converting to bytes, /8 = 128GBps, which is obviously a number counting multiple packages).

The same question could be put to the GDDR flavors as well. A 1080 Ti has 11GB of GDDR5X rated at 11Gbps. That already sounds slower than HBM2, even a single chip (if I understood the specs correctly from Wikipedia). But what about individual chip speed/throughput, and how are either of these technologies used to store individual textures? Is a texture spread out across all of the chips, or is it a complete item in an individual chip? If the latter were the case, then individual chip throughput would be a more important spec than "total bandwidth", wouldn't it? If the items in memory were spread out across all of the chips, then a total bandwidth measurement seems like it would be most useful.

The answer to those questions is something I haven't found asked or answered anywhere. We just listen to the marketing saying "Bajillions of GBs!", or how wide the memory bus is.

Hopefully the above questions, and my reasons for asking them, have been articulated well enough to get some kind of answer from someone. Reading the specs on Wikipedia for HBM2 and GDDR5X, the above questions weren't answered.

Competition is good; can't wait for the review on this one. [H], please add a 1080 Ti to the review, since it has more VRAM than a 2080 and about the same performance.

The texture is going to be some size, probably several megabytes, and all of that data will be striped across all the memory channels, just like RAID 0. It's a single interface, x bits wide.

That 1TB/sec is in bytes, not bits. Yes, an entire terabyte per second. Sounds insane, because it honestly is.
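To put rough numbers on it (nominal peak figures from the public specs, so back-of-the-envelope only):

```python
# Peak bandwidth = bus width (bits) * per-pin data rate (Gbit/s) / 8 -> GB/s

def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

# Radeon VII: 4 HBM2 stacks, 1024 bits each, at ~2 Gbit/s per pin
print(bandwidth_gbs(4096, 2.0))   # -> 1024.0 GB/s total, i.e. ~1 TB/s in bytes
print(bandwidth_gbs(1024, 2.0))   # -> 256.0 GB/s for a single stack

# GTX 1080 Ti: 352-bit GDDR5X at 11 Gbit/s per pin
print(bandwidth_gbs(352, 11.0))   # -> 484.0 GB/s

# The "11 Gbps" in GDDR5X marketing is the per-pin bit rate, not the card's
# total throughput -- the total comes from multiplying by the bus width.
```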
 
Might be an edge case scenario but, if you can swing it, could you do some VR testing as well? Maybe just general impression type testing? (nothing major if it impacts normal testing schedule)

I'm looking to replace a 1080Ti and a Pascal Titan to get all nVidia kit out of my builds and this may be the card that allows me to do that.
I second this. 4K and VR is the target market for a 16GB card.
 
The texture is going to be some size, probably several megabytes, and all of that data will be striped across all the memory channels, just like RAID 0. It's a single interface, x bits wide.

That 1TB/sec is in bytes, not bits. Yes, an entire terabyte per second. Sounds insane, because it honestly is.

What's the source for that information?
 
What's the source for that information?

That's how memory channels work: they stripe data across a single pool of multiple controllers. This is how the dual (or quad) channel memory on your CPU works; a single channel is 64 bits wide, a dual-channel interface is 128 bits.
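A toy illustration of that striping (the channel count and interleave granularity here are made-up example values, not Radeon VII specifics):

```python
# Consecutive chunks of an asset land on rotating channels, so one big read
# pulls from every channel in parallel -- same idea as RAID 0 striping.

INTERLEAVE_BYTES = 256   # assumed stripe granularity
NUM_CHANNELS = 4         # e.g. pretend one channel per HBM2 stack

def channel_for_address(addr):
    """Which channel a byte address maps to under simple interleaving."""
    return (addr // INTERLEAVE_BYTES) % NUM_CHANNELS

# A 4 KB slice of a texture touches every channel, not just one of them:
touched = {channel_for_address(a) for a in range(0, 4096, INTERLEAVE_BYTES)}
print(touched)   # -> {0, 1, 2, 3}
```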
 
^ Yup, no reason for him to overthink this. To keep it simple, what matters for performance is memory bandwidth. Memory bus width mainly determines how many memory chips a card will use, though of course the wider the bus, the more bandwidth.
 
Maybe because they don't want to WASTE $1,000 to play Space Invaders?


Hahaha. This is true. I ordered an open-box Zotac AMP RTX 2080 for $599, only because of the price from Amazon. I am terrified of them space invaders. So I will keep the return policy in mind and won't feel bad about returning it since it was already open box. But man, those space invaders are in the back of my mind. If it even freezes on me once or black screens without space invaders, it's going back. HAHA

I was thinking about getting the Radeon 7, though. But the $599 price was hard to pass up. I was browsing for a few days and I grabbed it as soon as it popped up. Also, if it has any FreeSync issues with my CGH70 it's going back. It's in for torture testing and I am going to make sure it satisfies me fully before I call it a keeper.
 
Faster, better memory and superior Vulkan performance. Seems better than slow ray tracing and supposed DLSS support. Also, if a title supports Rapid Packed Math, it does speed things up quite a bit on AMD. You also have to consider the large failure rate, so unless these Radeon cards start dying after they release, that alone would drive me to buy the Radeon over the 2080 at the same price. Just a matter of what is important to you.

DLSS is looking to be an even bigger marketing scam than RTX.

Just don't bash it if you plan on doing an Nvidia review
 
DLSS is looking to be an even bigger marketing scam than RTX.

Just don't bash it if you plan on doing an Nvidia review



This is already true. Read TechPowerUp's articles at times. I think in their last few Radeon 7 news posts they were already throwing jabs at it before even reviewing it. Like lousy insults. I have already seen a few people call them out. They are going out of their way to pander to Nvidia. Must be tough lol! Very few tech journalists with balls. I am so glad [H] didn't sign the NDA.
 
What I would like to know is: is anyone here going to get one of these? If Kyle gets one, maybe he could try out VR in his review to see how it goes. I have been thinking of getting an HTC Vive and I know my RX 480 won't cut the mustard. I can afford one of these cards, but it would have to be able to handle VR gaming. From what I have read, the Vega 64 also isn't much chop in VR. So I'm interested to see how the Radeon 7 goes.
 
What I would like to know is: is anyone here going to get one of these? If Kyle gets one, maybe he could try out VR in his review to see how it goes. I have been thinking of getting an HTC Vive and I know my RX 480 won't cut the mustard. I can afford one of these cards, but it would have to be able to handle VR gaming. From what I have read, the Vega 64 also isn't much chop in VR. So I'm interested to see how the Radeon 7 goes.
I am going to guess the RVII will be an excellent VR card
 
This is already true. Read TechPowerUp's articles at times. I think in their last few Radeon 7 news posts they were already throwing jabs at it before even reviewing it. Like lousy insults. I have already seen a few people call them out. They are going out of their way to pander to Nvidia. Must be tough lol! Very few tech journalists with balls. I am so glad [H] didn't sign the NDA.
I really hope Nvidia is stupid enough to put this practice into writing. Like really, really hope they are dumb enough to send a memo to NDA signees reminding them that it's praise the RTX or no more samples.

I'm pretty sure that if they were to do such a thing, it would be illegal enough to justify breaking the NDA and releasing it to the public.
 
What I would like to know is: is anyone here going to get one of these? If Kyle gets one, maybe he could try out VR in his review to see how it goes. I have been thinking of getting an HTC Vive and I know my RX 480 won't cut the mustard. I can afford one of these cards, but it would have to be able to handle VR gaming. From what I have read, the Vega 64 also isn't much chop in VR. So I'm interested to see how the Radeon 7 goes.

Considering one myself.
 
I missed this interview earlier and just caught up.

I hope they are able to get someone to make a DisplayPort 1.4 to HDMI 2.1 adapter; it would go a long way toward boosting these cards. But it mainly matters whether they can get VRR, and perhaps even the low latency mode, working through the adapter. And what kind of latency is introduced just by using an adapter in the first place? Trivial, or something people might notice?


It was also interesting reading an answer that actually gave the die size of 7nm Vega 2: 331mm², a nice midrange GPU size. For comparison:

Turing, 12nm:

2060/2070 - 445mm²
2080 - 545mm²
2080 Ti - 754mm²

These are all really large GPU dies. The 2080 Ti is insanely large to me, and I think it might be the single largest consumer GPU die ever. But imagine keeping the same die size and scaling up that much more horsepower on a 7nm die of their own? The performance would be insane even on the same architecture.
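To put rough numbers on that "scale it up on 7nm" thought, here's a quick back-of-the-envelope using the public transistor counts (approximate, and it ignores that cache and I/O don't shrink as well as logic):

```python
# Transistor density from published die sizes and transistor counts.
dies = {
    "Vega 20 (Radeon VII, 7nm)": (13.2e9, 331),   # transistors, mm^2
    "TU104 (RTX 2080, 12nm)":    (13.6e9, 545),
    "TU102 (RTX 2080 Ti, 12nm)": (18.6e9, 754),
}

for name, (transistors, area_mm2) in dies.items():
    density_mtr = transistors / area_mm2 / 1e6
    print(f"{name}: ~{density_mtr:.0f} MTr/mm^2")
# -> roughly 40 MTr/mm^2 for Vega 20 vs ~25 for the Turing parts

# Naive "what if": a TU102-sized die built at Vega 20's density
print(754 * (13.2e9 / 331) / 1e9, "billion transistors (vs 18.6B in TU102)")
```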


Whatever comes after Turing/Vega/Navi has the potential to cut through 4K content with ease, so I hope they start expanding into more new effects on the AMD side in the future, like more potent ray tracing.

Even if Nvidia's implementation is not perfect and is dependent on DLSS working, their efforts have made it clear that it's something I'd like to see in future cards and added to games for better effects.

I am interested to see how Navi does, but I'm worried it won't target higher-tier performance until much later. What if Navi is only a 200mm² card? Even if the architecture improvements are there, it might be too small to overcome the brute force of larger-die cards. If Navi were the same die size or larger than Vega 2 (which again, at only 331mm², is not that big), then it would have the potential to go after the 2080 Ti or higher.
 
I missed this interview earlier and just caught up.

I hope they are able to get someone to make a DisplayPort 1.4 to HDMI 2.1 adapter; it would go a long way toward boosting these cards. But it mainly matters whether they can get VRR, and perhaps even the low latency mode, working through the adapter. And what kind of latency is introduced just by using an adapter in the first place? Trivial, or something people might notice?


It was also interesting reading an answer that actually gave the die size of 7nm Vega 2: 331mm², a nice midrange GPU size. For comparison:

Turing, 12nm:

2060/2070 - 445mm²
2080 - 545mm²
2080 Ti - 754mm²

These are all really large GPU dies. The 2080 Ti is insanely large to me, and I think it might be the single largest consumer GPU die ever. But imagine keeping the same die size and scaling up that much more horsepower on a 7nm die of their own? The performance would be insane even on the same architecture.


Whatever comes after Turing/Vega/Navi has the potential to cut through 4K content with ease, so I hope they start expanding into more new effects on the AMD side in the future, like more potent ray tracing.

Even if Nvidia's implementation is not perfect and is dependent on DLSS working, their efforts have made it clear that it's something I'd like to see in future cards and added to games for better effects.

I am interested to see how Navi does, but I'm worried it won't target higher-tier performance until much later. What if Navi is only a 200mm² card? Even if the architecture improvements are there, it might be too small to overcome the brute force of larger-die cards. If Navi were the same die size or larger than Vega 2 (which again, at only 331mm², is not that big), then it would have the potential to go after the 2080 Ti or higher.

The problem is that only 1/3 of those Turing dies are ACTUAL graphics cores that can render games. The rest of the die goes to RTX (yes, the tensor cores are absolutely required for RTX to work at all), so really, if they were to spend that much die space on CUDA cores, we'd have at LEAST double the performance of what we have now, and that would allow for games looking MUCH better than what RTX adds to BFV.
 
Am I the only one wondering about the 331mm² @ 300W TDP @ 7nm?

For comparison, the 2080 it's being compared against is 545mm² @ 215W TDP @ 12nm.

I get that we are now in a phase where all companies calculate TDP differently. Is it the HBM2 with its 4096-bit memory interface?

I'm a layman, but I remember that on the same node, I think it was Nvidia's Maxwell that got a big performance boost because they started using internal hardware to compress the picture before processing it. I don't think AMD ever copied that tech. I think it's something like 75% image compression, and so far no one has complained about it. I'll see if I can find a link.

Edit: added a link below, see under color compression. I've just never heard about the AMD iteration.

https://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/3

Edit2: nvm, I guess they did add delta color compression with GCN 1.2, around the R9 285. With HBM, bandwidth is not so constrained, but I am always a fan of greater efficiency. With 4K gaming and Moore's law limits, these kinds of technologies will become very important.
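For anyone wondering what delta color compression actually does, here's a toy sketch of the idea (the tile size and bit widths are made up for illustration; the real hardware formats are more involved):

```python
# Store a tile as one base value plus small per-pixel deltas; smooth regions
# compress well, noisy ones fall back to raw. This saves memory *bandwidth*,
# not capacity -- the framebuffer still reserves its full footprint.

def compress_tile(pixels):
    """pixels: 8-bit values for one channel of a small tile."""
    base = pixels[0]
    deltas = [p - base for p in pixels[1:]]
    if all(-8 <= d <= 7 for d in deltas):      # deltas fit in 4 bits each
        return ("compressed", base, deltas)    # far fewer bits than raw
    return ("raw", pixels)

print(compress_tile([120, 121, 119, 122]))   # smooth gradient -> compressed
print(compress_tile([120, 10, 250, 80]))     # noisy tile -> stored raw
```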
 
But will it really be 300W? I mean, everyone here is going to tweak the core and memory voltages to lower power and increase clocks, especially on Vega-chip cards. That's the case with either brand. And some manufacturer TDP ratings are accurate and some are not, no? I guess I think too simply :)
 
Am I the only one wondering about the 331mm² @ 300W TDP @ 7nm?

For comparison, the 2080 it's being compared against is 545mm² @ 215W TDP @ 12nm.

I get that we are now in a phase where all companies calculate TDP differently. Is it the HBM2 with its 4096-bit memory interface?

I'm a layman, but I remember that on the same node, I think it was Nvidia's Maxwell that got a big performance boost because they started using internal hardware to compress the picture before processing it. I don't think AMD ever copied that tech. I think it's something like 75% image compression, and so far no one has complained about it. I'll see if I can find a link.

Edit: added a link below, see under color compression. I've just never heard about the AMD iteration.

https://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/3

Edit2: nvm, I guess they did add delta color compression with GCN 1.2, around the R9 285. With HBM, bandwidth is not so constrained, but I am always a fan of greater efficiency. With 4K gaming and Moore's law limits, these kinds of technologies will become very important.


The big boost to Maxwell was due to tile-based immediate mode rasterization. That was their secret sauce and gave them a great efficiency/performance boost.
 
Hah, oh boy... Gibbo at OCUK has confirmed only 100 parts for ALL of the UK.

That "only 5,000 units" rumor sounds like it may have been vastly optimistic.
 