Radeon 7 (Vega 2, 7nm, 16GB) - $699 available Feb 7th with 3 games

Buildzoid put up a video talking about the card. It's a pretty good rundown of the card and some speculation on why the price is where it is. As usual with Buildzoid, he rambles at times, but he still does a good job providing a bit of a different take on the card compared to a lot of the tech press.

He makes some great points here:
- I did not realize that a 512-bit config is even tougher to do with GDDR6 than it is with GDDR5.
- Instinct is already developed, so any sales of Vega 7 are just a bonus. Developing a GDDR6 version would cost a lot.

One thing he may have gotten wrong, unless it was a rumor, is the FP64 performance. It looks like this will not be the same as Instinct and will come in at less than 1 TFLOPS, which is lower than the HD 7970. This is a bummer, as strong FP64 gave the Tahiti cards great resale value and a lot of use for those who liked F@H and other compute programs.
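For a rough sense of where those numbers come from, here is some back-of-the-envelope math. It assumes the rumored 1:16 FP64 rate for the Radeon VII (unconfirmed at this point) and the announced ~1.8 GHz boost clock; the 7970 figures are its stock specs, and the helper function is just for illustration.

```python
# FP64 throughput estimate:
#   GFLOPS = shaders * 2 ops/clock (FMA) * clock in GHz * FP64 rate; /1000 -> TFLOPS
def fp64_tflops(shaders, clock_ghz, fp64_ratio):
    return shaders * 2 * clock_ghz * fp64_ratio / 1000.0

# HD 7970 (Tahiti): 2048 shaders @ 0.925 GHz, 1:4 FP64 rate
print(f"HD 7970   : {fp64_tflops(2048, 0.925, 1/4):.2f} TFLOPS")   # ~0.95
# Radeon VII, IF the rumored 1:16 rate holds: 3840 shaders @ ~1.8 GHz boost
print(f"Radeon VII: {fp64_tflops(3840, 1.8, 1/16):.2f} TFLOPS")    # ~0.86
```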
 

Tweaktown, who posted the 5000 unit rumor, is now the bottom of the barrel.

They have lost a lot of respect after their RTX cheerleading reviews.
RTX 2080ti: Seriously just buy it.
RTX 2070: the perfect 1440p card... For $600.
Somehow they avoided scrutiny while Tom's was chastised for their optimism on the RTX lineup.
 
- I did not realize that a 512-bit config is even tougher to do with GDDR6 than it is with GDDR5.
- Instinct is already developed, so any sales of Vega 7 are just a bonus. Developing a GDDR6 version would cost a lot.

Part of the point is that AMD should have been developing larger-than-Polaris GDDR-based GPUs alongside their professionally oriented Vega GPUs. I guess they save money by doing it, but they end up losing quite a bit of high-end sales as well as marketing leverage, and no doubt margins.

And to be very specific: if Nvidia can put out 1080Ti/2080 performance with 352/256-bit memory controllers using high-clocked GDDRx, so can AMD. A 512-bit double-Polaris would be a force to be reckoned with in the gaming market, if AMD ever bothered to make one.
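Quick napkin math on what those bus configurations imply; the 512-bit GDDR6 line is hypothetical, the rest are shipping specs, and the helper function is mine:

```python
# bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(352, 11))   # GTX 1080 Ti, GDDR5X @ 11 Gbps ->  484 GB/s
print(bandwidth_gbs(256, 14))   # RTX 2080,    GDDR6  @ 14 Gbps ->  448 GB/s
print(bandwidth_gbs(512, 14))   # hypothetical 512-bit GDDR6    ->  896 GB/s
print(bandwidth_gbs(4096, 2))   # Radeon VII,  HBM2   @ 2 Gbps  -> 1024 GB/s
```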
 
Part of the point is that AMD should have been developing larger-than-Polaris GDDR-based GPUs alongside their professionally oriented Vega GPUs. I guess they save money by doing it, but they end up losing quite a bit of high-end sales as well as marketing leverage, and no doubt margins.

And to be very specific: if Nvidia can put out 1080Ti/2080 performance with 352/256-bit memory controllers using high-clocked GDDRx, so can AMD. A 512-bit double-Polaris would be a force to be reckoned with in the gaming market, if AMD ever bothered to make one.

Except that Nvidia uses compression techniques that AMD does not. That is one of the reasons that side by side, AMD's final onscreen image looks better than Nvidia's. Therefore, AMD needs greater bandwidth.
 
Except that Nvidia uses compression techniques that AMD does not. That is one of the reasons that side by side, AMD's final onscreen image looks better than Nvidia's. Therefore, AMD needs greater bandwidth.

You have your evidence for that claim bookmarked and peer reviewed, right?

(we all know about your subjective misconceptions :ROFLMAO:)
 
Part of the point is that AMD should have been developing larger-than-Polaris GDDR-based GPUs alongside their professionally oriented Vega GPUs. I guess they save money by doing it, but they end up losing quite a bit of high-end sales as well as marketing leverage, and no doubt margins.

And to be very specific: if Nvidia can put out 1080Ti/2080 performance with 352/256-bit memory controllers using high-clocked GDDRx, so can AMD. A 512-bit double-Polaris would be a force to be reckoned with in the gaming market, if AMD ever bothered to make one.

I kind of thought that this is what Navi was going to be.
NAVI for gamers; Radeon 7 for gamers/creators; Instinct for creators/compute geeks?

Polaris is only 256-bit on GDDR5. If a newer version used GDDR6 or even GDDR5X, 256-bit would still be great. Add some higher clocks and more shaders via 7nm and AMD could still have a great RTX 2060/2070 fighter for cheap.

...we might just have to wait a while to see it.
 
You have your evidence for that claim bookmarked and peer reviewed, right?

(we all know about your subjective misconceptions :ROFLMAO:)

My own physical eyes as well as the eyes of thousands of other users. This is simply objective fact but hey, whether you agree with me or not does not concern me in the least, hard evidence is the way I go.

Edit: And this point is not for derailing but as a statement of why AMD runs better with more memory bandwidth.
 
Ignoring DXR, what I find frustrating is that they have the building blocks to compete with the RTX2080Ti in pure raster performance. Hell, they've had the building blocks for a while, and have simply chosen not to build, while their 'top end' Vega parts are expensive to produce and limited in raster performance due to professional focus and use of HBM.
 
My own physical eyes as well as the eyes of thousands of other users. This is simply objective fact but hey, whether you agree with me or not does not concern me in the least, hard evidence is the way I go.

You just used your subjectivity absent any supporting facts to declare a fact, as literally everyone in this thread could have anticipated.

Bravo, thanks for clarifying your complete lack of proof!
 
You just used your subjectivity absent any supporting facts to declare a fact, as literally everyone in this thread could have anticipated.

Bravo, thanks for clarifying your complete lack of proof!

:eek::rolleyes: Ok, whatever you say, you made a point about memory bandwidth and I gave you hard evidence of why AMD runs better with more. *Shrug*

Edit: As literally everyone in this thread who has used both can objectively attest to.
 
:eek::rolleyes: Ok, whatever you say, you made a point about memory bandwidth and I gave you hard evidence of why AMD runs better with more. *Shrug*

Edit: As literally everyone in this thread who has used both can objectively attest to.


What evidence are you talking about? Most GPUs run better with more bandwidth.

More importantly, "AMD runs better with more bandwidth" is overly simplified. How about "Polaris/Vega/Volta run better with more bandwidth"?

I will be cliché and use the car analogy, but it is like saying "Ford runs much better with a 10-speed transmission" instead of "the 5.0 Coyote runs much better with a 10-speed transmission."
 
Where? A link? A forum post?

What hard evidence do you have?

We all want to see!

With my eyes and the eyes of all those who have used both, it is a common consensus. Also, we know that Nvidia uses compression techniques to move data more efficiently across the narrower memory buses that they use. It is what it is; AMD does not use those compression techniques, from what I understand. In fact, overclocking memory on an AMD card provides a much larger boost.
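For context on what "compression techniques" means here, a toy sketch of the general idea behind lossless delta color compression: store one anchor color per tile plus small per-pixel deltas, so a uniform region needs far fewer bits of memory traffic. The compress_tile helper is made up for illustration and is not any vendor's actual hardware format.

```python
def compress_tile(pixels):
    anchor = pixels[0]
    deltas = [p - anchor for p in pixels[1:]]
    # width needed to store the largest delta, plus a sign bit
    bits_per_delta = max(abs(d).bit_length() for d in deltas) + 1
    compressed_bits = 32 + bits_per_delta * len(deltas)   # anchor + packed deltas
    raw_bits = 32 * len(pixels)
    return compressed_bits, raw_bits

# a fairly uniform 4x4 tile of 32-bit colors (think a patch of sky)
tile = [0x2040A0 + d for d in (0, 1, 2, 1, 3, 2, 1, 0, 2, 3, 1, 2, 0, 1, 2, 3)]
compressed, raw = compress_tile(tile)
print(f"{compressed} bits vs {raw} bits raw -> {raw / compressed:.1f}x less memory traffic")
```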
 
What evidence are you talking about? Most GPUs run better with more bandwidth.

More importantly, "AMD runs better with more bandwidth" is overly simplified. How about "Polaris/Vega/Volta run better with more bandwidth"?

I will be cliché and use the car analogy, but it is like saying "Ford runs much better with a 10-speed transmission" instead of "the 5.0 Coyote runs much better with a 10-speed transmission."

Not arguing it; it is objectively factual that AMD needs more memory bandwidth than Nvidia to run well. That is cool, and it is part of the reason for the increased performance of the Radeon VII.
 
They're just that: rumours. One site said 5,000 cards, another said 20k cards with another 40k in the works. I wouldn't pay much attention to it. Just the usual clickbait shite to get people talking.

Yeah, I heard a rumor they were only making 3 cards and you have to participate in an MMA competition to be eligible to win. Top 3 get the cards.
 
Yeah, I heard a rumor they were only making 3 cards and you have to participate in an MMA competition to be eligible to win. Top 3 get the cards.

You can get a free R7 if you deliver Jensen Huang's leather jacket to AMD headquarters.

 
For literally all the answers, watch the Buildzoid video on the previous page talking about the architecture and the architecture changes.

Alternatively, GamersNexus also broke it down while at CES.

So, I watched BOTH videos on your recommendation, but honestly, they didn't answer ANYTHING. Buildzoid basically said "This has lots of memory bandwidth that it probably can't use" or "This card really shouldn't exist".

So once again, how can 4 fewer CUs and 100 MHz more peak clock make a card 20% faster!?
 
So, I watched BOTH videos on your recommendation, but honestly, they didn't answer ANYTHING. Buildzoid basically said "This has lots of memory bandwidth that it probably can't use" or "This card really shouldn't exist".

So once again, how can 4 fewer CUs and 100 MHz more peak clock make a card 20% faster!?

Honestly, it's pretty much a fact that Vega was heavily bandwidth starved. As I mentioned previously, when I had the Vega 64, overclocking the HBM to its maximum stable frequency really gave me a decent boost, and that matched other user feedback. Almost everyone said that Vega really liked higher memory clocks and more bandwidth. I think the doubled memory bandwidth is giving it another 10-15% minimum, and the much higher sustained boost clock is likely providing the rest. Vega really was bandwidth starved, and I think being forced to use slower memory due to early silicon and inventory shortages crippled it a little.
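As a rough sanity check on that argument, here is the scaling math using the announced boost clocks (sustained clocks will differ, so treat it as ballpark only):

```python
# Specs: Vega 64   = 64 CUs @ ~1546 MHz boost,  484 GB/s HBM2
#        Radeon VII = 60 CUs @ ~1800 MHz boost, 1024 GB/s HBM2
cu_ratio        = 60 / 64                 # ~0.94x shader count
clock_ratio     = 1800 / 1546             # ~1.16x boost clock
compute_ratio   = cu_ratio * clock_ratio  # ~1.09x raw FP32 throughput
bandwidth_ratio = 1024 / 484              # ~2.12x memory bandwidth

print(f"compute  : {compute_ratio:.2f}x")
print(f"bandwidth: {bandwidth_ratio:.2f}x")
# If a game was limited by bandwidth rather than shader throughput on Vega 64,
# its gain can land well above the ~9% compute delta -- which is the
# bandwidth-starvation argument in a nutshell.
```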
 
Part of the point is that AMD should have been developing larger-than-Polaris GDDR-based GPUs alongside their professionally oriented Vega GPUs. I guess they save money by doing it, but they end up losing quite a bit of high-end sales as well as marketing leverage, and no doubt margins.

And to be very specific: if Nvidia can put out 1080Ti/2080 performance with 352/256-bit memory controllers using high-clocked GDDRx, so can AMD. A 512-bit double-Polaris would be a force to be reckoned with in the gaming market, if AMD ever bothered to make one.

Honestly, the report is that AMD would have preferred Raja to make a bigger Polaris instead of pushing Vega out the way it was. Once development is already under way, it takes a lot to change that, so it's one of those what's-done-is-done type of things. They will likely change things a bit going forward after their next-gen architecture is launched. Can they make a bigger Navi? I am sure they could. Honestly, if Navi is performing better than expected, as rumored, then they might make a bigger one with GDDR6 this year and cash in on it. It would make sense, and then they could release the higher-end next-gen product when it's ready.

On a side note, I scored a Zotac RTX 2080 AMP for $599 on Amazon. It was an open-box unit, or I would never have bought it at the $800 retail price, lol. Pretty much the price of a brand new RTX 2070.

Unless it gets the space invaders artifacts, I am good for this year (and honestly, the space invaders issue really scares me about Turing). But if Navi impresses and they release the bigger Navi, then I will offload it here for $499 and make someone else happy.
 
So, I watched BOTH videos on your recommendation, but honestly, they didn't answer ANYTHING. Buildzoid basically said "This has lots of memory bandwidth that it probably can't use" or "This card really shouldn't exist".

So once again, how can 4 fewer CUs and 100 MHz more peak clock make a card 20% faster!?

I have some of the fastest Vegas out there and have been tweaking them since launch week. They do indeed benefit from running super cool. All 6 of my WC'd 56s will do over 1700 MHz sustained and 1100 MHz HBM, with my golden sample doing 1800 MHz/1150 if pushed to its max. My golden 64 will do 1750/1100 but I have to feed it a lot of current (which I usually do). I accidentally said this card will do 1800 MHz in another thread, but I actually meant my 56 (had a brain fart and couldn't remember the thread to fix it).

If you give Vega 1700 MHz+ on the core with 1100 MHz HBM, you get over 563.2 GB/s of bandwidth (the arithmetic is spelled out below) and the card's performance reflects that in linear steps. At these speeds a 56 will run neck and neck with an AIB 1080, and a 64 is a fair bit quicker depending on the title. Giving the VII 1 TB/s of memory bandwidth will truly unshackle the uarch...

I would love to see what one of these will do with a full-cover block like my cards have. I would bet you can break 2~2.1 GHz if you are not afraid to give it enough current, which would give it an easy lead over the 2080 and come much closer to a stock 2080 Ti than Nvidia would like. The smaller number of shaders with the icy cold temps that WC'ing allows should let it fly.
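For anyone who wants to verify the 563.2 GB/s figure above, the HBM2 arithmetic works out like this (stock Vega 64 HBM clock included for comparison; the helper function is just a sketch):

```python
# HBM2 is double data rate, so effective per-pin rate = 2 * memory clock.
def hbm2_bandwidth_gbs(bus_bits, clock_mhz):
    return bus_bits / 8 * (2 * clock_mhz) / 1000  # GB/s

print(hbm2_bandwidth_gbs(2048, 1100))  # Vega 56/64 @ 1100 MHz HBM -> 563.2 GB/s
print(hbm2_bandwidth_gbs(2048,  945))  # Vega 64 stock (945 MHz)   -> 483.8 GB/s
print(hbm2_bandwidth_gbs(4096, 1000))  # Radeon VII (4 stacks)     -> 1024 GB/s
```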
 
My own physical eyes as well as the eyes of thousands of other users. This is simply objective fact but hey, whether you agree with me or not does not concern me in the least, hard evidence is the way I go.

Edit: And this point is not for derailing but as a statement of why AMD runs better with more memory bandwidth.



19789999.jpg
 
Funny how all the Vega owners in this thread, myself included, are telling you (the people who do not have much or any Vega experience) that memory OC gives the biggest gains of all on average on a Vega (this was known a long time ago), and that is why the VII is much faster without much of a clock-speed bump and with fewer CUs. The benchmarks and data are out there to prove it with Vega 1; why else would AMD claim 25% in a game with just 100 MHz more and almost double the bandwidth? Do you think they are going to lie that badly with Lisa in charge, after being pretty spot on with benchmarks post-Raja?

If I had the time I would benchmark this for you on my own rig. But unfortunately I have to go and do two months of R&D in a few days far, far away from here. So I'll leave it to those who are better positioned to do so.

I can't wait for someone to do a clock-for-clock comparison between V1 and V2, ~halving the bandwidth, and then you can see for yourself.
 
Still, a 180MHz OC on HBM2 gets it past VFE stock HBM2 speeds (945MHz), so not bad on that front. The performance uplift from an HBM-only approach is noteworthy, and more worth the effort. Core overclocking, in our still-limited and expanding experience, does not seem to impact tests quite as much as HBM+power offset.
https://www.gamersnexus.net/hwreviews/3020-amd-rx-vega-56-review-undervoltage-hbm-vs-core
I'm not a Tech Jesus fan, but was it that hard for you guys to Google?

And that heavily depends on the game/application.
Keep in mind the V64 has less bandwidth than Fury because it only has two HBM stacks, and they're clocked lower than the alleged production target. That difference will be exacerbated on the VII, as it's four stacks, like Fury again.
 
You have no proof.
I would also agree with ManofGod regarding image color quality, AMD vs Nvidia. They have subtle differences and sometimes stark differences, and HDR seems to exacerbate compression-type artifacts. Still, it's a valid point that some objective proof is needed. Nvidia's level-of-detail bias seems to be less negative, hence blurrier textures, since the mipmaps (lower-resolution versions of textures) are pushed closer to the camera view. It may just be how Nvidia and AMD set their default image settings. Some find AMD sharper but noisier while others find Nvidia softer and blurrier, making it just a preference; many just can't tell the difference to begin with, making it a moot point, or the differences are so small as to be insignificant.

With the new drivers and Windows update I will see if I can capture the HDR differences between Nvidia and AMD. The thing is, if the monitor being used to look at the differences is poor, it will prove nothing. This is a hard one to fully show: taking pictures of a 10-bit HDR image with an SDR 8-bit camera, for example, or a 10-bit image being shown on an 8-bit panel. Now, if one can show the differences even with those limitations, the actual differences will definitely be more pronounced on higher-quality monitors.
 
HDR is well supported on AMD. Far Cry 5 and Battlefield V look great, and there's the soon-to-be-released RE2 and Biohazard 7, just to name a few. They all play great in HDR + FreeSync 2. Right where I'm typing from. :D
 
HDR is well supported on AMD. Far Cry 5 and Battlefield V look great, and there's the soon-to-be-released RE2 and Biohazard 7, just to name a few. They all play great in HDR + FreeSync 2. Right where I'm typing from. :D
Same here for me (Vega 64/Ryzen 2700), but I'm not going to pretend that IdiotInCharge is wrong: HDR on Windows is a mess and issues are common. Far Cry 5 for me was fine, then broke, and then after a day or two (and no updates) worked flawlessly again after a restart.

Compared to consoles, HDR on PC doesn't have the "it just works" factor going for it.
 
Same here for me (Vega 64/Ryzen 2700), but I'm not going to pretend that IdiotInCharge is wrong: HDR on Windows is a mess and issues are common. Far Cry 5 for me was fine, then broke, and then after a day or two (and no updates) worked flawlessly again after a restart.

Compared to consoles, HDR on PC doesn't have the "it just works" factor going for it.

I'm sure it works in some games, but it's absolutely hideous on the desktop. I just wanted to watch some shows.
 
I have no idea what monitors you guys are using, but mine works as far back as BF1 and does it flawlessly. Why would anyone give a shit about it working on the Windows desktop, looking at static images?
I know the 10-series Nvidia cards have some massive performance hits with HDR, but I couldn't care less because none of my Nvidia cards are used for gaming.
 
There is some great info regarding Microsoft's Direct Machine Learning in DX12, from an interview that the Japanese website 4Gamer.net did with Adam Kozak (Senior Manager of GPU Product at AMD).
This clarifies the leather jacket's tantrum a bit more... lol :D. This new Vega is 1.62x faster than the 2080 in the Luxmark ray tracing benchmark. Details below.

https://wccftech.com/amd-radeon-vii-excellent-result-directml/

Yeah, I read this as well. It is surprising to see that AMD would go this route, but it is all still in the development stage (not sure if alpha or beta). It is DirectX's answer to DLSS. Who knows if it goes somewhere.
 

Yeah, it does mean exactly what I think it means. You have to actually use your own senses, like many have, and they can attest to what I am saying. You cannot take an image, screenshot, or video and post it online because that is not the same as the final result (compression, bit depth, and color depth just do not survive sharing something online). Hey, you do not have to agree with me; the results speak for themselves.
 
There is some great info regarding Microsoft's Direct Machine Learning in DX12, from an interview that the Japanese website 4Gamer.net did with Adam Kozak (Senior Manager of GPU Product at AMD).
This clarifies the leather jacket's tantrum a bit more... lol :D. This new Vega is 1.62x faster than the 2080 in the Luxmark ray tracing benchmark. Details below.

https://wccftech.com/amd-radeon-vii-excellent-result-directml/

You are kind of conflating two things that don't go together.

Direct Machine Learning in DX12 has NOTHING to do with the Luxmark ray tracing benchmark, which is just OpenCL GPGPU non-real-time rendering (you know, like Cinebench, the other favorite AMD benchmark).

Luxmark uses neither RT nor Tensor cores on the 2080. When ray tracing packages are actually updated to use them, they will blow past traditional GPGPU rendering.
 
Yeah, it does mean exactly what I think it means. You have to actually use your own senses, like many have, and they can attest to what I am saying. You cannot take an image, screenshot, or video and post it online because that is not the same as the final result (compression, bit depth, and color depth just do not survive sharing something online). Hey, you do not have to agree with me; the results speak for themselves.

PRETTY SURE you just posted the definition of subjective but IDK

tomato.jpg

or OBJECTIVE SUBJECTIVE I guess.
 
That card looks THICC. I was hoping for something I could fit in my Ncase that would also let me afford a FreeSync monitor so I don't have to pay the G-Sync tax.

The specs look awesome though and I am super unconvinced by RTX in real use after seeing the HUGE performance hit on my friend's system.

Well, now that you can use a FreeSync monitor with Nvidia GPUs pretty much just fine, there's no longer a 'G-Sync tax' on monitors.

At $699, the Radeon 7 and the 2080 cost the same and perform about the same. The question now is 16 GB of memory vs 8 GB + ray tracing + DLSS.

Seems to me that AMD was forced to go with HBM2 memory again because of the R&D costs already sunk into Vega. It was easier for them to release a Vega 2.0 than to redo the memory interface to support cheaper GDDR6 memory.

I still question the amount of memory though; 16 GB is only useful for professional applications, AI, data center usage, etc.

I feel a cut-down 12 GB version with roughly 768 GB/s of memory bandwidth (three HBM2 stacks instead of four), for say $150 or $200 less, would sell very well and see pretty much no performance dropoff compared to the 16 GB, 1 TB/s version.
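For reference, dropping from four HBM2 stacks to three scales capacity and bandwidth together, roughly like this (assuming the same 4 GB, 1024-bit, 2 Gbps-per-pin stacks as the announced card):

```python
# Radeon VII uses four HBM2 stacks: 4 GB and ~256 GB/s (1024-bit @ 2 Gbps) each.
for stacks in (4, 3):
    print(f"{stacks} stacks: {stacks * 4} GB, ~{stacks * 256} GB/s")
# 4 stacks: 16 GB, ~1024 GB/s  (the card as announced)
# 3 stacks: 12 GB, ~768 GB/s   (the hypothetical cut-down SKU above)
```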
 
Well, now that you can use a FreeSync monitor with Nvidia GPUs pretty much just fine, there's no longer a 'G-Sync tax' on monitors.

At $699, the Radeon 7 and the 2080 cost the same and perform about the same. The question now is 16 GB of memory vs 8 GB + ray tracing + DLSS.

Seems to me that AMD was forced to go with HBM2 memory again because of the R&D costs already sunk into Vega. It was easier for them to release a Vega 2.0 than to redo the memory interface to support cheaper GDDR6 memory.

I still question the amount of memory though; 16 GB is only useful for professional applications, AI, data center usage, etc.

I feel a cut-down 12 GB version with roughly 768 GB/s of memory bandwidth (three HBM2 stacks instead of four), for say $150 or $200 less, would sell very well and see pretty much no performance dropoff compared to the 16 GB, 1 TB/s version.


Vega 7 also has higher compute, if that factors into anyone's decision.
 