AMD to reveal next-gen ‘Nvidia killer’ graphics card at CES?

I expect it will be a fairly low-end part. Flagship parts will remain at TSMC, because they can likely wring more performance from TSMC. On the low end, where it doesn't matter, they can throw Samsung a bone.



I see no evidence that FudZ is right. It's just aimless click-baiting IMO.
Given that the Apple version is 24 CU, I am not sure the 22 CU 5500 XT is a cut version or an actual different skew.

https://www.pcgamesn.com/amd-rx-5500-xt-samsung-tsmc-performance
I am getting the impression that if Big Navi is announced at CES, it won't be shipping for quite a while. Pretty much every recent AMD release has been telegraphed by appearances in some of the online benchmark tools.

We have seen zilch on Big Navi, which leads me to believe its launch isn't anytime soon.

Long-lead pre-announcing works for AMD because they don't have a product competing in the upper tier to stall; they can attempt to stall sales of Nvidia's 2080-2080 Ti parts by talking up something coming in the future...
I am more inclined to suspect TSMC 7nm and 7nm+ are swamped. Rumor has Samsung's 7nm EUV at 30% yield on 100mm²-class chips; if that is true, all those customers will try to flock to TSMC as well. Qualcomm Snapdragon had yield issues at Samsung. The gist is that only TSMC has a viable 7nm process now and many want access. It does not help that Intel can't get their 10nm process working well enough, making even more people want AMD processors. Basically a perfect storm of shortages for everyone.
 
We have seen zilch on Big Navi, which leads me to believe its launch isn't anytime soon.
Also heard practically nothing about the 5600 and 5500... the 5700 had a little hype, but people expected a full-stack launch, which didn't happen.
AMD has been pretty good at keeping the rumour mill in check lately.
 
Also heard practically nothing about the 5600 and 5500... the 5700 had a little hype, but people expected a full-stack launch, which didn't happen.
AMD has been pretty good at keeping the rumour mill in check lately.

Benchmarks leaked quite a while before the 5500 launched, and 5600 benchmarks have leaked as well.
 
2017 - 2018 - mining craze - overpriced and hard-to-find graphics cards

2018 - 2019 - fake RT phase - overpriced cards, even higher than during the mining craze

2019 - 2020 - No new cards - everyone is wanting 7nm chips made at TSMC - Apple wins out, everyone else loses :D

2021 - Intel releases their first 14+++++++++ GPU
 
It all comes down to how big AMD is willing to go.

I think it's extremely likely Nvidia will launch a >500mm^2 7nm part for an Ampere Titan/3080 Ti.

But will AMD go >500mm^2? Then it would be a crown-taking beast, out in front until Nvidia launches an Ampere Titan/3080 Ti, which could be quite a ways in the future.

OTOH, if Big Navi is under 400mm^2, that is going to be disappointing, probably falling behind the 2080 Ti.

Between 400 and 500mm^2, there is lots of room for an interesting product.

Really looking forward to that CES announcement, hopefully there are some solid details.
After some thought, I disagree. Go fast and small first and only go big if you can't.

If you look at the performance delta from the 251mm^2 Navi 10 (57xx series, soon 56xx) to the 754mm^2 TU102 (Titan RTX, 2080 Ti...), it is not as far apart as one would think. 2560 shaders to 4608 is a huge difference; a chip 3x smaller or larger is a huge difference. So what is that difference in performance?
Consider a thought experiment:
  • Take the Navi 10 GPU at 251mm^2, increase the efficiency by using the 7nm+ node, with arch changes as needed, ending in a chip exactly the same size of 251mm^2
  • Use HBM2e (2 stacks, 2048-bit, 830GB/s-plus bandwidth) or GDDR6 on a 384-bit bus to support the faster chip
  • Successfully increase the gaming clock from 1755 to 2400MHz, 1.37x faster than before
  • Performance improvement could potentially be 37% or better (or less) over Navi 10, if all aspects were improved upon inside and outside of the GPU
  • While a totally imaginary part, it would be a 251mm^2 GPU equalling the performance of a 2080 Ti
Small die, higher yield, lower cost, but higher clocks. The magic would be very fast clock speeds. Of course AMD may not be able to do that, or Nvidia will; I don't know. A hybrid of that would be faster clocks and some more CUs, maybe like 50 CUs or 3200 shaders, while still being less than 320mm^2.

My rather good die 5700 XT AE, blower cooler, will do 2050MHz to 2100MHz gaming; adding another 400MHz or more may not be that insurmountable on a much-improved design using TSMC 7nm+.
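For what it's worth, here is that back-of-the-envelope scaling as a quick Python sketch; every input is this post's hypothetical assumption, not a measured or announced spec:

```python
# Scaling math for the imaginary 251mm^2 part above.
# All inputs are hypothetical assumptions from this post, not real specs.
navi10_game_clock_mhz = 1755   # RX 5700 XT rated game clock
target_clock_mhz = 2400        # assumed 7nm+ gaming clock

clock_scaling = target_clock_mhz / navi10_game_clock_mhz
print(f"clock scaling: {clock_scaling:.2f}x")        # ~1.37x

# Best case assumes performance tracks clocks 1:1, which only holds
# if memory bandwidth and the rest of the pipeline keep up.
print(f"best-case uplift over Navi 10: {clock_scaling - 1:.0%}")  # ~37%
```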
 

Why even bother posting comparative benchmarks that don't compare frametimes? If you're going to make an assertion in terms of relative performance, you're going to want some solid numbers to back that assertion up. You're also going to want to look at 4K, where the same review puts the 2080 Ti at 50% faster.

Yeah, we want lower-resolution numbers for CPU benchmarks, and we also want higher-resolution numbers for GPU benchmarks -- and we want frametime analysis for both!
 
Why even bother posting comparative benchmarks that don't compare frametimes? If you're going to make an assertion in terms of relative performance, you're going to want some solid numbers to back that assertion up. You're also going to want to look at 4K, where the same review puts the 2080 Ti at 50% faster.

Yeah, we want lower-resolution numbers for CPU benchmarks, and we also want higher-resolution numbers for GPU benchmarks -- and we want frametime analysis for both!
It is all hypothetical; performance may be won this coming round by whoever has the significantly faster clocks. Nvidia has not yet successfully launched a 7nm part; AMD, on the other hand, has had much success. I hope Nvidia is successful, but at this stage that does leave doubts about whether they can. AMD's experience should help them tremendously with the next round of 7nm GPUs. What is very unique about Navi 10 is the small size and the punch it really delivers; get the power down, and clock speed could go up a lot.
 
It is all hypothetical; performance may be won this coming round by whoever has the significantly faster clocks. Nvidia has not yet successfully launched a 7nm part; AMD, on the other hand, has had much success. I hope Nvidia is successful, but at this stage that does leave doubts about whether they can. AMD's experience should help them tremendously with the next round of 7nm GPUs. What is very unique about Navi 10 is the small size and the punch it really delivers; get the power down, and clock speed could go up a lot.

Here's the thing: TSMC's 7nm process works. Nvidia has been the leading GPU company basically since AMD bought ATi and spent years just trying to produce a GPU with a full feature set.

Nvidia has consistently moved to new nodes without drama, unlike AMD.

The only reason to introduce doubt is personal.
 
Here's the thing: TSMC's 7nm process works. Nvidia has been the leading GPU company basically since AMD bought ATi and spent years just trying to produce a GPU with a full feature set.

Nvidia has consistently moved to new nodes without drama, unlike AMD.

The only reason to introduce doubt is personal.
Well, I hope you are right about Nvidia getting it right with the first product out using it, and later ones. Nvidia did have drama going down a node in the past; the GeForce FX 5800 was one of those hiccups.
 
Well, I hope you are right about Nvidia getting it right with the first product out using it, and later ones. Nvidia did have drama going down a node in the past; the GeForce FX 5800 was one of those hiccups.

They had drama, but more akin to AMD's 2900 XT premiere -- they built a GPU that software wasn't targeting, and the results were pretty disastrous.


They haven't had real issues with process nodes, again in stark contrast to AMD. So if AMD was able to run with 7nm GPUs, there's no reason to expect Nvidia to have an issue. The opposite is literally 99% more likely, historically.
 
After some thought, I disagree. Go fast and small first and only go big if you can't.

If you look at the performance delta from the 251mm^2 Navi 10 (57xx series, soon 56xx) to the 754mm^2 TU102 (Titan RTX, 2080 Ti...), it is not as far apart as one would think. 2560 shaders to 4608 is a huge difference; a chip 3x smaller or larger is a huge difference. So what is that difference in performance?
Consider a thought experiment:
  • Take the Navi 10 GPU at 251mm^2, increase the efficiency by using the 7nm+ node, with arch changes as needed, ending in a chip exactly the same size of 251mm^2
  • Use HBM2e (2 stacks, 2048-bit, 830GB/s-plus bandwidth) or GDDR6 on a 384-bit bus to support the faster chip
  • Successfully increase the gaming clock from 1755 to 2400MHz, 1.37x faster than before
  • Performance improvement could potentially be 37% or better (or less) over Navi 10, if all aspects were improved upon inside and outside of the GPU
  • While a totally imaginary part, it would be a 251mm^2 GPU equalling the performance of a 2080 Ti
Small die, higher yield, lower cost, but higher clocks. The magic would be very fast clock speeds. Of course AMD may not be able to do that, or Nvidia will; I don't know. A hybrid of that would be faster clocks and some more CUs, maybe like 50 CUs or 3200 shaders, while still being less than 320mm^2.

My rather good die 5700 XT AE, blower cooler, will do 2050MHz to 2100MHz gaming; adding another 400MHz or more may not be that insurmountable on a much-improved design using TSMC 7nm+.

Stop, just stop.

Way too much conjecture. Let's just be clear: the 1080 Ti is a 16nm part at 471mm^2. It's slightly smaller than Vega 64, which used 12nm from GF.

Random fact - my GTX 1080 Ti Strix runs at 1984MHz with a fan curve and only a fan curve (yeah, I know, it literally doesn't pertain to this made-up conversation).

Now let's shrink it, and it will magically get 30 percent faster and be about 310mm^2 (100 PERCENT FICTIONAL, I just made up some numbers using the Vega 64 to Radeon VII die sizes). In addition, we use some Turing shaders on the 1080 Ti (which will make it bigger, so let's go with 340mm^2). Let's add a GDDR6 controller, and that will be faster than the 5800/5900 XT.

made up stuff

Or Nvidia will just shrink the RTX 2080 Super, which is already faster than the 5700 XT, and it will compete with the 5800/5900 XT.

more made up stuff


If the 5800/5900 XT is not faster than the RTX 2080 Ti, I'll be sad. IT WAS RELEASED IN SEPTEMBER OF 2018.

Goes back to playing games on his 2-year-3-month-old GTX 1080 Ti.
 
Stop, just stop.

Way too much conjecture. Let's just be clear: the 1080 Ti is a 16nm part at 471mm^2. It's slightly smaller than Vega 64, which used 12nm from GF.

Random fact - my GTX 1080 Ti Strix runs at 1984MHz with a fan curve and only a fan curve (yeah, I know, it literally doesn't pertain to this made-up conversation).

Now let's shrink it, and it will magically get 30 percent faster and be about 310mm^2 (100 PERCENT FICTIONAL, I just made up some numbers using the Vega 64 to Radeon VII die sizes). In addition, we use some Turing shaders on the 1080 Ti (which will make it bigger, so let's go with 340mm^2). Let's add a GDDR6 controller, and that will be faster than the 5800/5900 XT.

made up stuff

Or Nvidia will just shrink the RTX 2080 Super, which is already faster than the 5700 XT, and it will compete with the 5800/5900 XT.

more made up stuff


If the 5800/5900 XT is not faster than the RTX 2080 Ti, I'll be sad. IT WAS RELEASED IN SEPTEMBER OF 2018.

Goes back to playing games on his 2-year-3-month-old GTX 1080 Ti.
Now this is not conjecture: the RX 5700 XT kicks the 1080 Ti's ass in a pure DX12/Vulkan game -> Red Dead Redemption 2, and it is a 251mm^2 chip compared to a 471mm^2 chip. Just have to throw that in there for no good reason. Most of this thread is conjecture; surprised you made it this far. Minor point: Vega 64 was 14nm from GloFo.

Yes, Nvidia could do very much the same thing; why do you think they have to go large to make it work? The question is, can Nvidia do it before AMD strikes? CES (which may be just a letdown again).
 
Now this is not conjecture: the RX 5700 XT kicks the 1080 Ti's ass in a pure DX12/Vulkan game -> Red Dead Redemption 2, and it is a 251mm^2 chip compared to a 471mm^2 chip. Just have to throw that in there for no good reason. Most of this thread is conjecture; surprised you made it this far. Minor point: Vega 64 was 14nm from GloFo.

Yes, Nvidia could do very much the same thing; why do you think they have to go large to make it work? The question is, can Nvidia do it before AMD strikes? CES (which may be just a letdown again).

Correct, I made a mistake on the size.

As for RDR2, the 5700 XT kills the 1080 Ti.
 
Correct, I made a mistake on the size.

As for RDR2, the 5700 XT kills the 1080 Ti.
Very good game so far. I find these types of threads interesting because you can argue, throw stuff all over the place, and have hissy fits over nothing, and rarely does any of it come true :D. Except I predicted Vega die size and clock speeds relatively well against many naysayers previously; performance, not so well that go-around. I have no clue what is coming.
 
Given that the Apple version is 24 CU, I am not sure the 22 CU 5500 XT is a cut version or an actual different skew.

22 seems an odd number for a “full” die, but I guess it wouldn’t be the first time we didn’t get a multiple of 8 CUs somewhere. Vega 11 from the Ryzen 5 2400G is a good example.

I keep thinking of the Tonga GPU in the Radeon 285 or the little Polaris in the Radeon RX 460, where the next generation saw the full die used, even though I don’t think there was any mention of it during the initial release.

Also, the term you’re looking for is “SKU” - it’s a retail acronym for “stock keeping unit” and is basically a way to refer to “item A,” “item B,” etc., with different part numbers.
 
Given that the Apple version is 24 CU, I am not sure the 22 CU 5500 XT is a cut version or an actual different skew.

https://www.pcgamesn.com/amd-rx-5500-xt-samsung-tsmc-performance

People have been nonsense-posting stuff like this for years. It's been wrong every time. A couple of months ago they were harping on the same thing about some Nvidia RTX chips being produced by Samsung, because someone noticed they had "Korea" stamped on the chips. Searching the net will show that nearly every GPU generation has examples of dies stamped "Korea" and others stamped "Taiwan." Every once in a while someone notices this for the first time, and it's a news "story" to them.

In reality, they do final packaging and testing in several locations, and those location markings end up on the chips. That doesn't mean they changed foundries. All the Nvidia RTX chips are made by TSMC.

In this case all the AMD RX 5000 chips are also made by TSMC.

Finally, the Apple 24 CU part is the same die that is in the 22 CU 5500 XT. It makes ZERO sense to produce two nearly identical dies, unless you just like burning tens of millions of dollars for kicks.
 
22 seems an odd number for a “full” die

With computer stuff we usually like to think in powers of two, and in sums and products of those numbers, so 24 would make more sense -- but that only matters in certain cases. Realistically, a 'Compute Unit' or CU comprises some 'regular' number of cores.

Beyond that loose relationship with binary numbers, very little is actually fixed. A count of 22 CUs is likely what fit a number of factors and constraints best for this particular product.
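To make that concrete: RDNA puts 64 stream processors in each CU, so the CU count maps directly to the shader count. A trivial sketch, using the parts discussed in this thread:

```python
# RDNA: 64 stream processors per Compute Unit,
# so CU count maps directly to shader count.
SP_PER_CU = 64

for name, cus in (("5500 XT", 22), ("Apple 24 CU part", 24), ("Navi 10 full", 40)):
    print(f"{name}: {cus} CUs -> {cus * SP_PER_CU} shaders")
# 5500 XT: 22 CUs -> 1408 shaders
# Apple 24 CU part: 24 CUs -> 1536 shaders
# Navi 10 full: 40 CUs -> 2560 shaders
```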
 
In terms of performance, the Navi 21 GPU is said to be at least twice as fast as the Navi 10 GPU. The Radeon RX 5700 XT is the best case for the Navi 10 GPU and it comes close to the GeForce RTX 2070 SUPER, so the Navi 21 GPU could exceed the RTX 2080 SUPER's performance and even end up coming close to the RTX 2080 Ti. This might explain why we were hearing rumors of NVIDIA's RTX 2080 Ti SUPER in the works.

The rumor states that AMD's high-end Navi GPU, which is being referred to as the Navi 21 GPU, has been taped out. The chip has a die size of 505mm^2, which is twice as big as Navi 10's 251mm^2. This is even bigger than AMD's Vega 20 GPU, which had a die size of 331mm^2, so it could mean that we are looking at a powerhouse of a chip which should definitely be faster than anything AMD has released yet. The AMD Vega 20 GPU featured 13.2 billion transistors, so the Navi 21 GPU could exceed 15-16 billion transistors, which would make the chip far denser than anything else on the market.

https://wccftech.com/amd-radeon-rx-navi-21-gpu-2x-performance-5700-xt-die-size-rumor/
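Out of curiosity, a quick density check on those numbers (my own extrapolation in Python; the Vega 20 and Navi 10 figures are public specs, the 505mm^2 is only the rumor):

```python
# Density sanity check on the rumored Navi 21 die size.
# Known: Vega 20 = 13.2e9 transistors / 331 mm^2,
#        Navi 10 = 10.3e9 transistors / 251 mm^2 (public specs).
# Rumored: Navi 21 = 505 mm^2 (no transistor count announced).
vega20_density = 13.2e9 / 331
navi10_density = 10.3e9 / 251
print(f"Vega 20: {vega20_density / 1e6:.1f} MTr/mm^2")   # ~39.9
print(f"Navi 10: {navi10_density / 1e6:.1f} MTr/mm^2")   # ~41.0

# If Navi 21 merely matched Navi 10's density at 505 mm^2:
print(f"Navi 21 estimate: ~{navi10_density * 505 / 1e9:.1f}B transistors")
# ~20.7B -- above the article's 15-16B figure, so the rumor implies
# either lower density or a conservative transistor estimate.
```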
 
This might explain why we were hearing rumors of NVIDIA's RTX 2080 Ti SUPER in the works.

That should be expected; the 'Super' lineup was simply a way for Nvidia to rebrand GPUs at slightly lower prices -- likely reflecting better yields and slowing sales -- without lowering prices on existing SKUs. A 'Super' Ti wasn't really needed at the time, but if AMD approaches the 2080 Ti, why not?

This is even bigger than AMD's Vega 20 GPU, which had a die size of 331mm^2, so it could mean that we are looking at a powerhouse of a chip which should definitely be faster than anything AMD has released yet. The AMD Vega 20 GPU featured 13.2 billion transistors, so the Navi 21 GPU could exceed 15-16 billion transistors, which would make the chip far denser than anything else on the market.

If it's bigger than Vega 20, then HBM is likely off the list, going by the limits pushed by that previous GPU. So some iteration of GDDR6 it is. Beyond that, we should hope that AMD didn't concede too much raster performance to compute; in the past that has been a revenue-chasing choice that led to less-than-stellar gaming performance, as seen in Vega.

I wonder if they'll go the Nvidia route this time and produce an all-out gaming GPU on a large die with GDDR, then produce a compute-focused large die with HBM?
 
That should be expected; the 'Super' lineup was simply a way for Nvidia to rebrand GPUs at slightly lower prices -- likely reflecting better yields and slowing sales -- without lowering prices on existing SKUs. A 'Super' Ti wasn't really needed at the time, but if AMD approaches the 2080 Ti, why not?



If it's bigger than Vega 20, then HBM is likely off the list, going by the limits pushed by that previous GPU. So some iteration of GDDR6 it is. Beyond that, we should hope that AMD didn't concede too much raster performance to compute; in the past that has been a revenue-chasing choice that led to less-than-stellar gaming performance, as seen in Vega.

I wonder if they'll go the Nvidia route this time and produce an all-out gaming GPU on a large die with GDDR, then produce a compute-focused large die with HBM?
About the same size as Vega 10, so HBM is not off the table; HBM2e with 2 stacks can push the memory bandwidth pretty high now, and costs have come down as well. With GDDR6 heading toward shortages, HBM might be a viable option for the costlier cards.
 
About the same size as Vega 10, so HBM is not off the table; HBM2e with 2 stacks can push the memory bandwidth pretty high now, and costs have come down as well. With GDDR6 heading toward shortages, HBM might be a viable option for the costlier cards.

I hope not, as I want one. If Navi 21 uses HBM it’ll probably end up costing close to the price of the 2080 Ti. Whilst I might pay that, many an AMD person will not. I’m hoping for 16GB of GDDR6.
 
The only way I see HBM fitting in is on a high-end AI/compute card for the datacenter. Look for GDDR6 on consumer gaming cards to keep pricing down, as this has been AMD's playbook.
 
Yes, for AI/compute solutions HBM2E shows its superiority to GDDR6:
https://www.eeweb.com/profile/schar...-memory-ready-for-ai-primetime-hbm2e-vs-gddr6

Cost-wise, GDDR6 is more expensive than the preceding GDDR versions. HBM2E and interposer costs have come down -> the difference is not so far apart as to rule HBM out on a higher-spec card, since it has clear advantages:
  • Lower power requirements
  • Compact
  • Very High Bandwidth
You have to weigh circuit board design, traces, power, etc. as extra costs for GDDR6, and the interposer and assembly as extra costs for HBM. AMD has much experience with multiple designs using HBM.

I would think HBM2E would be used for an ultra-high-end AMD gaming GPU, but in the end it would not matter whether it is GDDR6 or HBM2E if the performance/quality/features are very good for the price.

Two stacks of HBM2E memory will give over 800GB/s; Hynix has HBM2E where two stacks give 920GB/s. It will be interesting to see whether this memory gets used on consumer-oriented products.
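The arithmetic behind those figures is simple to check (a minimal sketch; 3.2 and 3.6 Gbps are the per-pin rates Samsung and SK Hynix have announced for HBM2E):

```python
# HBM2E: each stack has a 1024-bit interface, so
# bandwidth (GB/s) = per-pin rate (Gbps) * 1024 * stacks / 8.
def hbm2e_bandwidth_gbs(pin_rate_gbps: float, stacks: int = 1) -> float:
    return pin_rate_gbps * 1024 * stacks / 8

print(hbm2e_bandwidth_gbs(3.2, stacks=2))  # 819.2 GB/s (Samsung-class)
print(hbm2e_bandwidth_gbs(3.6, stacks=2))  # 921.6 GB/s (Hynix-class)
```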
 
I hope not, as I want one. If Navi 21 uses HBM it’ll probably end up costing close to the price of the 2080 Ti. Whilst I might pay that, many an AMD person will not. I’m hoping for 16GB of GDDR6.
I don't think AMD is shy about pushing up prices to make more profit while also being very competitive, as in still giving more than the competition. If AMD decides to release a big Navi or Navi 2 chip at this time, I am not sure RT will be incorporated - that may come after the consoles are released later this year. The logic is more about developer/hardware maturity: getting what works best and is usable for customers. A feature that truly benefits the overall experience without compromises is my gist. Not sure if AMD/Nvidia/developers/Sony/Microsoft are all working together on RT, or if it's more Nvidia-led, maybe a power-grab attempt (usual Nvidia methods, but in a way that can push the industry forward, playing the bad cop so to speak). Since AMD has close ties in some respects to Microsoft/Sony, RT may be much different in the end than what we have now.

It will be interesting to see if AMD does more than just release the 5600 series of video cards at CES; that will be a very nice option for a number of folks but also boring for a few others. Of course, the Threadripper 5980x/5990x shock could occur as well.
 
Rambus, Samsung and Micron are all planning on pushing GDDR6 up to 20 Gbps, which would offer bandwidth of 880 GB/s on a 352-bit bus and 960 GB/s on a 384-bit bus. We may see 18 Gbps GDDR6 on video cards released this year.

https://www.guru3d.com/news-story/m...o-push-gddr6-performance-towards-20-gbps.html
https://www.guru3d.com/news-story/s...duction-that-offers-bandwidth-of-18gbits.html
https://www.anandtech.com/show/15054/rambus-demonstrates-gddr6-at-18gbps

Good news. Will be nice to see what AMD and NVIDIA can do with the extra bandwidth in next gen cards.
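Those bandwidth numbers follow straight from the standard formula, for anyone checking the math (a minimal sketch):

```python
# GDDR6: bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8.
def gddr6_bandwidth_gbs(rate_gbps: float, bus_bits: int) -> float:
    return rate_gbps * bus_bits / 8

for bus in (352, 384):
    print(f"{bus}-bit @ 20 Gbps: {gddr6_bandwidth_gbs(20, bus):.0f} GB/s")
# 352-bit @ 20 Gbps: 880 GB/s
# 384-bit @ 20 Gbps: 960 GB/s
```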
 
Rambus, Samsung and Micron are all planning on pushing GDDR6 up to 20 Gbps, which would offer bandwidth of 880 GB/s on a 352-bit bus and 960 GB/s on a 384-bit bus. We may see 18 Gbps GDDR6 on video cards released this year.

https://www.guru3d.com/news-story/m...o-push-gddr6-performance-towards-20-gbps.html
https://www.guru3d.com/news-story/s...duction-that-offers-bandwidth-of-18gbits.html
https://www.anandtech.com/show/15054/rambus-demonstrates-gddr6-at-18gbps
That was good news last year. At what manufacturing capacity? Cost? Rambus license? Design considerations for the board, since traces also have to support the rates to and from the GPU? May the best solution win; I think we could win either way. HBM2E with just two stacks can already achieve this and more; with 4 stacks the game is over, but that's not really a consumer product either. It will be interesting how this pans out for 2020. Hopefully AMD will have some good news shortly. AMD already pretty much designed a two-stack HBM2 package with a 500mm^2 chip, board, etc. The interposer may be updated as needed, meaning the engineering was mostly already done -> I can see AMD going the HBM2E route since it is pin-compatible, if it makes sense, and I would not be disappointed either way with HBM2E or very fast GDDR6. All I am saying is that either one could be used by AMD.
 
AMD learned their lesson on HBM and shouldn't be using it again anytime soon. Recent leaks point to GDDR6 for their new offering. They had to use HBM on the Fury because the performance just wasn't there with GDDR5 to compete with Nvidia, so they had to go all-in on HBM to get closer. Pricing-wise it left them out to dry, as the cards were at least 25% more expensive than their performance dictated at the time, leaving them expensive and not so alluring.
 
AMD learned their lesson on HBM and shouldn't be using it again anytime soon. Recent leaks point to GDDR6 for their new offering. They had to use HBM on the Fury because the performance just wasn't there with GDDR5 to compete with Nvidia, so they had to go all-in on HBM to get closer. Pricing-wise it left them out to dry, as the cards were at least 25% more expensive than their performance dictated at the time, leaving them expensive and not so alluring.

I think Nvidia might go HBM on the 3080 Ti, leaving AMD's GDDR6 solution in the dust.
 
AMD learned their lesson on HBM and shouldn't be using it again anytime soon. Recent leaks point to GDDR6 for their new offering. They had to use HBM on the Fury because the performance just wasn't there with GDDR5 to compete with Nvidia, so they had to go all-in on HBM to get closer. Pricing-wise it left them out to dry, as the cards were at least 25% more expensive than their performance dictated at the time, leaving them expensive and not so alluring.

The issue with HBM is that it requires large interposers that push silicon fabrication reticle limits, and when you push those, failure rates skyrocket. This is what killed Vega 64.

From a pure BoM perspective, HBM should actually be cheaper, and it might become the preferred option for smaller parts and mobile parts at some point.
 
Let me laugh at everyone who has hoped for some "massive" "new" GPU from anyone at this CES.


Hey, wait for GTC next month for the Arcturus MI compute card. Spread all the fan-fiction rumors; it's nice reading them.

(BTW, GTC is an NV show.)
 
The question was about discrete laptop graphics,

and I quote her response:
"The discrete graphics market, especially at the high end, is very important to us."

You'll get a Big Navi in your laptop. Hur dur.

This was Lisa Su's answer:
"I know those on Reddit want a high end Navi! You should expect that we will have a high-end Navi, and that it is important to have it. The discrete graphics market, especially at the high end, is very important to us. So you should expect that we will have a high-end Navi, although I don’t usually comment on unannounced products."
https://www.anandtech.com/show/15344/amd-at-ces-2020-qa-with-dr-lisa-su


More than just laptops, buddy: Threadripper, Arm, TSMC - it was an interview, not just about laptops. What I found interesting is what she says about ray tracing in 2020 from AMD. What product will that be in?
Dean Takahashi, VentureBeat: Is real time ray-tracing in graphics going to be as big as NVIDIA says it is?

LS: I’ve said in the past that ray tracing is important, and I still believe that, but if you look at where we are today it is still very early. We are investing heavily in ray tracing and investing heavily in the ecosystem around it – both of our console partners have also said that they are using ray tracing. You should expect that our discrete graphics as we go through 2020 will also have ray tracing. I do believe though it is still very early, and the ecosystem needs to develop. We need more games and more software and more applications to take advantage of it. At AMD, we feel very good about our position on ray tracing.
 