Radeon 7 (Vega 2, 7nm, 16GB) - $699 available Feb 7th with 3 games

I feel a cut-down 12 GB version with 756 GB/s memory bandwidth for, say, $150 or $200 less would sell very well and see pretty much no performance drop-off compared to the 16 GB 1000 GB/s version.

Shaving one stack of HBM might save AMD what, $25? Even less, because judging by past AMD products, it would probably still populate all 4 stacks and just disable one.

So cutting $150+ would probably make the card a money loser.
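
For what it's worth, here's a rough sketch of the bandwidth math behind a hypothetical 3-stack cut-down part, assuming the commonly quoted 4 × 1024-bit HBM2 stacks; the ~2.0 Gbps pin speed is an approximation on my part:

```python
# Rough HBM2 bandwidth math for a hypothetical 3-stack Radeon VII cut-down.
# Assumes 1024-bit-wide stacks at ~2.0 Gbps per pin (approximate figures).
def hbm2_bandwidth_gbs(stacks: int, pin_speed_gbps: float = 2.0,
                       bus_width_per_stack: int = 1024) -> float:
    """Peak bandwidth in GB/s: total bus width (bits) * pin speed (Gbps) / 8."""
    return stacks * bus_width_per_stack * pin_speed_gbps / 8

print(f"4 stacks (16 GB): {hbm2_bandwidth_gbs(4):.0f} GB/s")  # ~1024 GB/s, the '1 TB/s' figure
print(f"3 stacks (12 GB): {hbm2_bandwidth_gbs(3):.0f} GB/s")  # ~768 GB/s
```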
 
Part of the point is that AMD should have been developing larger-than-Polaris GDDR-based GPUs alongside their professionally oriented Vega GPUs. I guess they save money by doing it, but they end up losing quite a bit of high-end sales as well as marketing leverage, and no doubt margins.

And to be very specific: if Nvidia can put out 1080Ti/2080 performance with 352/256-bit memory controllers using high-clocked GDDRx, so can AMD. A 512-bit double-Polaris would be a force to be reckoned with in the gaming market, if AMD ever bothered to make one.
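
For reference, the same bus-width-times-data-rate math applies to GDDR; the per-pin data rates below are ballpark assumptions rather than spec-sheet quotes, and the 512-bit part is purely hypothetical:

```python
# Ballpark GDDR bandwidth: bus width (bits) * data rate (Gbps per pin) / 8.
def gddr_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(f"352-bit @ 11 Gbps (1080 Ti-class GDDR5X): {gddr_bandwidth_gbs(352, 11):.0f} GB/s")
print(f"256-bit @ 14 Gbps (2080-class GDDR6):     {gddr_bandwidth_gbs(256, 14):.0f} GB/s")
print(f"512-bit @ 14 Gbps (hypothetical 'double-Polaris'): {gddr_bandwidth_gbs(512, 14):.0f} GB/s")
```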
*Should have.* AMD makes so much less money than Nvidia and Intel, yet AMD is just now releasing Zen 2, which will take the crown in the server space and desktop too. AMD is also keeping up with Nvidia for discrete GPUs: not winning, but competitive. It's a damned miracle AMD is competing as high as they are.
 
*Should have.* AMD makes so much less money than Nvidia and Intel, yet AMD is just now releasing Zen 2, which will take the crown in the server space and desktop too. AMD is also keeping up with Nvidia for discrete GPUs: not winning, but competitive. It's a damned miracle AMD is competing as high as they are.

...and there's the expected 'poor AMD' defense :D
 
...and there's the expected 'poor AMD' defense :D
Yeah, because when you don't have money and have a good deal of debt, you can just produce GPUs out of thin air. Never mind the three-year development cycle for a GPU. So unless you have time travel available and are willing to lend AMD a hand by bringing them some money from the future, it is not going to change their development pace or status.
 
Yeah, because when you don't have money and have a good deal of debt, you can just produce GPUs out of thin air. Never mind the three-year development cycle for a GPU. So unless you have time travel available and are willing to lend AMD a hand by bringing them some money from the future, it is not going to change their development pace or status.

Their mistakes to make, I agree ;)
 
...and there's the expected 'poor AMD' defense :D

It is also a fact. Between Intel illegally fucking them over during the P4 era and a series of spectacularly stupid decisions under previous management, AMD really doesn’t have the kind of money to throw around that their competition does. They do some pretty outstanding stuff with their limited budget, but it is what it is.
 
Well. Now that you can use a Freesync monitor with Nvidia GPUs pretty much just fine, there's no longer a 'gsync' tax on monitors.

At $699, the Radeon 7 and the 2080 cost the same and perform about the same. The question now is 16 GB of memory vs. 8 GB + ray tracing + DLSS.

Seems to me that AMD was forced to go with HBM2 memory again because of the R&D costs already sunk into Vega. It was easier for them to release a Vega 2.0 than to redo the memory interface to support cheaper GDDR6 memory.

I still question the amount of memory though, 16 GB is only useful for professional applications, AI, data center usage, etc.

I feel a cut-down 12 GB version with 756 GB/s memory bandwidth for, say, $150 or $200 less would sell very well and see pretty much no performance drop-off compared to the 16 GB 1000 GB/s version.

Reference the part of your post I bolded and underlined. See https://www.guru3d.com/news-story/a...tive-with-radeon-vii-though-directml-api.html

I'm not sure if that means anything...yet.
 
Reference the part of your post I bolded and underlined. See https://www.guru3d.com/news-story/a...tive-with-radeon-vii-though-directml-api.html

I'm not sure if that means anything...yet.

I don’t see how it would be useful since it would take TFLOPS away from normal rendering, and I don’t think AMD has the expertise, manpower, or money to pull it off. It’s not like it has idle tensor cores sitting around like the RTX series.

For example, the 2080 Ti has 110 TFLOPS of INT8 just sitting around doing nothing except for RT and DLSS. Vega has ~60 TFLOPS (?) of INT8 if the card commits 100% of itself to INT8.
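
To make the trade-off concrete, here's a toy budget using the rough (possibly off) figures above; the per-frame upscale cost is a made-up number, the point is only that shader-based ML competes with rendering for the same units while idle tensor cores do not:

```python
# Toy throughput budget. The figures are assumptions taken from the post above
# plus a hypothetical upscale cost, not measured specs. The point: shader-based
# ML upscaling competes with rendering for the same execution units; dedicated
# (otherwise idle) tensor cores add the capacity on top.
VEGA_SHADER_INT8_TOPS = 60.0   # rough figure from the post, 100% of the card
UPSCALE_COST_TOPS = 15.0       # hypothetical cost of an ML upscale pass

shader_fraction_lost = UPSCALE_COST_TOPS / VEGA_SHADER_INT8_TOPS
print(f"Upscaling on shaders eats ~{shader_fraction_lost:.0%} of the card's throughput")
print("Upscaling on otherwise-idle tensor cores eats ~0% of shader throughput")
```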
 
So cutting $150+ would probably make the card a money loser.
Exactly. If it wasn't for Nvidia's ridiculous pricing and stupidly limited RAM for such expensive cards, AMD would have been forced to stand down until large Navi.
AMD looks to have a very competitive card for the price, especially for those who do more than game.
 
I don’t see how it would be useful since it would take TFLOPS away from normal rendering, and I don’t think AMD has the expertise, manpower, or money to pull it off. It’s not like it has idle tensor cores sitting around like the RTX series.

For example, the 2080 Ti has 110 TFLOPS of INT8 just sitting around doing nothing except for RT and DLSS. Vega has ~60 TFLOPS (?) of INT8 if the card commits 100% of itself to INT8.
I see this as very useful if it is supported across multiple cards, and not necessarily the same model of card. Not SLI/CFX or multi-GPU in the traditional sense, but using the second card for processing, ML, and maybe even RT. For example, the primary card renders the game at whatever resolution, the second card does the ML processing, and then it goes to your monitor. Lag would be my only concern with this method. You would not have to render at a lower resolution like Nvidia does, but could render at full resolution and reap the benefit of the processing power of your second card.
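
As a thought experiment (made-up frame times, nothing like a real DirectML implementation), here's a toy timing model of that split: the primary card renders frame N while the second card upscales frame N-1, so throughput is set by the slower stage but you pay roughly one extra stage of latency, which is exactly the lag concern:

```python
# Toy timing model of a two-GPU render + ML-upscale pipeline.
# Hypothetical numbers, not a real DirectML implementation.
RENDER_MS  = 12.0   # assumed time for the primary card to render a frame
UPSCALE_MS = 6.0    # assumed time for the second card to ML-upscale a frame

single_gpu_frame_ms  = RENDER_MS + UPSCALE_MS      # one card does both, serially
pipelined_frame_ms   = max(RENDER_MS, UPSCALE_MS)  # two cards overlapped
pipelined_latency_ms = RENDER_MS + UPSCALE_MS      # each frame still passes through both stages

print(f"One card, both jobs:  {1000 / single_gpu_frame_ms:.0f} fps")
print(f"Two cards, pipelined: {1000 / pipelined_frame_ms:.0f} fps, "
      f"but ~{pipelined_latency_ms:.0f} ms render-to-output latency per frame")
```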

The Microsoft demo shows some spectacular results going from 1080p to 4K. This one example looks like it is much better than DLSS, but it's too soon to tell. What is awesome about this is that anyone with a DX12 card can use it. 4K gaming may come to a large number of folks now without needing to upgrade.

https://www.overclock3d.net/news/gp..._supports_directml_-_an_alternative_to_dlss/1
 
I see this as very useful if it is supported across multiple cards, and not necessarily the same model of card. Not SLI/CFX or multi-GPU in the traditional sense, but using the second card for processing, ML, and maybe even RT. For example, the primary card renders the game at whatever resolution, the second card does the ML processing, and then it goes to your monitor. Lag would be my only concern with this method. You would not have to render at a lower resolution like Nvidia does, but could render at full resolution and reap the benefit of the processing power of your second card.

I agree it’d be great if a second card could be purposed just for RT and such (but also do xfire for older games). I think a lot of people think the same and it’s definitely a way for AMD to catch up without going down the massive die route.

I once read someone saying RT would be easy to split off but I’ve never gotten that deep into it.
 
I agree it’d be great if a second card could be purposed just for RT and such. I think a lot of people think the same and it’s definitely a way for AMD to catch up without going down the massive die route.

I once read someone saying RT would be easy to split off but I’ve never gotten that deep into it.
The other aspect is that you don't need to pay the Nvidia tax to get it; any DX12 card will do it. I just updated my last post, right before yours, with a snippet about the Microsoft demonstration.
 
The Microsoft demo shows some spectacular results going from 1080p to 4K. This one example looks like it is much better than DLSS, but it's too soon to tell. What is awesome about this is that anyone with a DX12 card can use it. 4K gaming may come to a large number of folks now without needing to upgrade.

https://www.overclock3d.net/news/gp..._supports_directml_-_an_alternative_to_dlss/1

Hadn't seen this yet, thank you for posting the link.

All of these (non-xfire, non-SLI) multi-GPU capabilities that have been / are being introduced in D3D 12 are pretty neat - I like the way this multi-GPU support works, and it's another neat use for secondary cards in a system.

Wondering if gaming systems in a year or two are going to look like the early PhysX days, with everyone buying / keeping older cards for dedicated ML.
 
So if one was going to buy a card (not necessarily a Vega 7) strictly for water cooling, who would you go through?
 
I would also agree with ManofGod on image color quality, AMD vs Nvidia. They have subtle differences and sometimes stark differences. HDR seems to exacerbate compression-type artifacts. Still a valid point on needing some objective proof. Nvidia's level of detail seems to be less negative, hence blurrier textures, since the mipmaps (lower-resolution versions of textures) are pushed closer to the camera view. It may just be how Nvidia and AMD set their default image settings. Some find AMD sharper but noisier while others find Nvidia softer and blurrier, making it just a preference - many just can't tell the difference to begin with, making it a moot point, or the differences are so small that they're insignificant.
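
A crude sketch of what an LOD bias does to mip selection (this is a simplified distance-based model of my own; real hardware derives the level from the screen-space texel footprint):

```python
# Simplified mip-level selection with an LOD bias. Real GPUs compute the level
# from the screen-space texel footprint; this distance-based model only shows
# the direction of the effect: a positive bias picks lower-res (blurrier) mips
# closer to the camera, a negative bias keeps sharper mips longer (at the cost
# of shimmer/noise).
import math

def mip_level(distance: float, lod_bias: float, max_level: int = 10) -> float:
    level = math.log2(max(distance, 1.0)) + lod_bias
    return min(max(level, 0.0), float(max_level))

for d in (1, 2, 4, 8, 16):
    print(f"distance {d:>2}: mip {mip_level(d, 0.0):.1f} neutral, "
          f"{mip_level(d, +0.5):.1f} with +0.5 bias, "
          f"{mip_level(d, -0.5):.1f} with -0.5 bias")
```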

With the new drivers and Windows update I will see if I can capture the HDR differences between Nvidia and AMD - the thing is, if the monitor being used to look at the differences is poor, it will prove nothing. This is a hard one to fully show: taking pictures of a 10-bit HDR image with an 8-bit SDR camera, for example, or showing a 10-bit image on an 8-bit panel. If one can show the differences despite those limitations, the actual differences will definitely be more pronounced on higher-quality monitors.

NO, NO, NO!!!

Bandwidth compression is LOSSLESS!
What you are describing is akin to "Loudness" in music... aka "vibrance".

Again, it has nothing to do with bandwidth or compression!!!

And the "AMD has better colors" or "AMD looks sharper" has been utterly debunked a looooooong time ago:
https://hardforum.com/threads/a-real-test-of-nvidia-vs-amd-2d-image-quality.1694755/
 
Vega 7 also has higher compute, if that factors into anyone's decision.
I'm actually curious about this. Some places say it has kept the same FP64 performance as the MI50, but TechPowerUp has updated their page showing otherwise.
 
I'm actually curious about this. Some places say it has kept the same FP64 performance as the MI50, but TechPowerUp has updated their page showing otherwise.

TechPowerUp is right: the FP64 rate is way less than the Instinct MI50's, otherwise people would buy the VII rather than the much more expensive MI50.
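
The uncertainty really comes down to which FP64 ratio the shipping card gets; the peak-throughput math itself is simple. A minimal sketch, treating the 1:2 vs 1:4 ratios as the two possibilities being discussed (not confirmed specs) and using the shader count and boost clock quoted elsewhere in the thread:

```python
# Peak FP64 throughput from shader count, clock, and the DP ratio the card
# ships with. The 1:2 vs 1:4 ratios are the two possibilities discussed above,
# not confirmed figures.
SHADERS   = 3840    # Radeon VII / MI50 shader count
BOOST_GHZ = 1.75    # approximate boost clock

fp32_tflops = 2 * SHADERS * BOOST_GHZ / 1000   # 2 FLOPs per shader per clock (FMA)
for ratio_name, ratio in (("1:2 (MI50-style)", 0.5), ("1:4 (cut down)", 0.25)):
    print(f"FP64 at {ratio_name}: {fp32_tflops * ratio:.2f} TFLOPS")
```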
 
NO, NO, NO!!!

Bandwidth compression is LOSSLESS!
What you are describing is akin to "Loudness" in music... aka "vibrance".

Again, it has nothing to do with bandwidth or compression!!!

And the "AMD has better colors" or "AMD looks sharper" has been utterly debunked a looooooong time ago:
https://hardforum.com/threads/a-real-test-of-nvidia-vs-amd-2d-image-quality.1694755/
but but but my eyes don't lie :)

With increased bit depth, HDR10 -> way more colors -> less chance of duplicate colors to compress -> compression ratio goes down -> memory bandwidth needed goes up. Pascal had more of an issue with losing performance under HDR than AMD did. I am tending to think it lies with this.
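
Here's a toy way to see the bit-depth effect (this is just generic lossless compression on a synthetic gradient, not how GPU delta color compression actually works, so take it as an illustration only):

```python
# Toy illustration only: generic lossless compression on a synthetic gradient,
# NOT real GPU delta color compression. The point: more bit depth means fewer
# identical neighbouring values, so the achievable lossless ratio gets worse
# and the bandwidth needed per frame goes up.
import zlib
import numpy as np

rng = np.random.default_rng(0)
h, w = 1080, 1920
# Smooth sky-like gradient with a touch of noise.
img = np.clip(np.tile(np.linspace(0.0, 1.0, w), (h, 1)) + rng.normal(0, 0.002, (h, w)), 0, 1)

def ratio(buf: bytes) -> float:
    """Compressed size / original size (lower = compresses better)."""
    return len(zlib.compress(buf)) / len(buf)

sdr_8bit  = (img * 255).astype(np.uint8)     # SDR, 256 levels per channel
hdr_10bit = (img * 1023).astype(np.uint16)   # HDR10, 1024 levels per channel

print(f"8-bit  lossless ratio: {ratio(sdr_8bit.tobytes()):.2f}")
print(f"10-bit lossless ratio: {ratio(hdr_10bit.tobytes()):.2f}")
```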

The thread listed deals with non-gaming graphics, but it was a good read. Pretty sure Microsoft WHQL mandates 2D text rendering quality, so that should be very close if not the same. For gaming graphics, Nvidia does some things differently. For example, on my 144 Hz HDR monitor, AMD maintains 10-bit HDR10 at all refresh rates. Nvidia only gives 10-bit depth at 144 Hz; at anything else it uses 8-bit color plus dithering. Unless what Windows is reporting is wrong, AMD's image is a much better quality HDR10 image at refresh rates below 144 Hz.

It has been a while since I went looking at mipmaps; I will have to get back to that. If Nvidia and AMD have maintained their relative LOD settings, then AMD will have a sharper image in general.
 
Real? Using my Google-fu:
[Info] ASRock PHANTOM GAMING RADEON VII leaked
https://www.pttweb.cc/bbs/PC_Shopping/M.1548154114.A.938
(attached image: zqaE64d.jpg)
 
Well, looks like Reddit, Tom's Hardware, VideoCardz, and Guru3D are all now running with the ASRock RADEON VII news.

You saw it here first folks ;) guess they don't give credit lol
 
I've become sort of an amateur/hobbyist vlogger.
So I'll be buying one for the 'work productivity' aspect.
Plus I have a 4k Freesync monitor, so there's that...
 
Vega VII:
1400MHz base. 1750MHz boost.

Vega 64 Sapphire Nitro:
1373MHz base. 1580MHz boost. (They offered a 1673MHz boost on their best liquid cooled card)
 
Even if they're limited to the reference PCB, there also haven't been any alterations to the cooling (at least there don't appear to be in either of these linked cards), which is disappointing. Perhaps there wasn't enough R&D-to-production time for AIBs to do it. Hopefully we'll see some of that in the future.
 
Even if they're limited to the reference PCB, there also haven't been any alterations to the cooling (at least there don't appear to be in either of these linked cards), which is disappointing. Perhaps there wasn't enough R&D-to-production time for AIBs to do it. Hopefully we'll see some of that in the future.
At least it's not a blower. It'll be interesting to see if 7nm Vega will be more or less power hungry than original Vega.
 
Even if they're limited to the reference PCB, there also haven't been any alterations to the cooling (at least there don't appear to be in either of these linked cards), which is disappointing. Perhaps there wasn't enough R&D-to-production time for AIBs to do it. Hopefully we'll see some of that in the future.
Aye, both of the AIB versions of the VII that I've seen images of look to be absolute reference versions, using the reference AMD triple-fan cooler. In fact, the only difference between the AIB and AMD cards seems to be the sticker on the card.
 