Nvidia Killer

Turing is not going to be great all of a sudden just because it is on 7nm. :confused:


Jensen knows this, and that is why he delayed moving to 7nm and shopped around: he needs time to re-develop Ampere into something that can compete with RDNA. Usually Nvidia knows what AMD is doing, but RDNA was top-secret and caught Nvidia off guard (hence SUPER).

Thus, Nvidia's new architecture will be 13 months late, while AMD's RDNA will be in half a dozen chips by that time!

Understand that Turing, no matter what size, cannot compete with RDNA. So Nvidia is essentially stalled for 13 months... until Jensen drops the 7nm RTX 3000 series (Ampere). But AMD will rain on that party too, with 5nm RDNA chips (given TSMC's latest announcements). I believe that 5900 series chip is aptly named "Nvidia killer", because it literally cuts Nvidia off at the legs. Nvidia's big GPUs serve the 4K crowd, but Nvidia's small GPUs are going to have to support 4K too. The only way they can do that without messing with the cost of their server hand-me-downs is to design a gaming-only chip, and Jensen is scrambling to figure out how to do that.


In one year's time, Nvidia has to release a brand new gaming architecture, on a brand new node, with brand new features, on their first try. Meanwhile, AMD will have a full year of RDNA driver optimization and console support.



That is how it is going to play out. But late 2020 is going to be the 4K GPU price wars! Until then, it is all AMD.

I just gave data points saying Navi and Turing are approximately on par per transistor, and you quoted me without addressing it at all. Instead you posted some kind of marketing rant. I even left out the part about nVidia being at roughly parity on an older, slower, cheaper node with a portion of the die being for RTX.

No one even cares if you’re hard-biased toward AMD and believe they are better. It’s more that you post things that are blatantly wrong that we have data for.

I hope RDNA2 knocks nVidia on their asses with chiplets or some innovative tech, but RDNA is only on par with Turing, and slightly behind if nVidia removed the RTX tech.
 
I don't understand the raytracing push either... it's been a year this month since the RTX release, and the best showcase for the product is a tech demo of a 20-year-old Quake 2. But I'm sure someone will tell me that all the cool games are coming "soon."

We can explain it to you, but we cannot understand it for you...keep clinging on to the fake graphics past...the world is moving on.
 
We can explain it to you, but we cannot understand it for you...keep clinging on to the fake graphics past...the world is moving on.

You keep telling yourself that an RTX investment in August 2018 was money well spent for Raytracing purposes... That extra performance in the "fake graphics past" is all you got for your $1200.

In August 2020/2021, that might be a different story AS I'VE SAID MULTIPLE TIMES.
 
You keep telling yourself that an RTX investment in August 2018 was money well spent for Raytracing purposes... That extra performance in the "fake graphics past" is all you got for your $1200.

In August 2020/2021, that might be a different story AS I'VE SAID MULTIPLE TIMES.

I think that’s fair.

IMO since I first heard about RT I thought nVidia was doing it for:

- Rasterized performance is hitting diminishing returns, so people are more likely to think their current card is “good enough”; an RT craze would bring back the “buy the best each gen” urge for more people.
- they had enough of a rasterization lead on AMD to sacrifice some of their die
- they can use their tensor core / AI expertise in a “synergistic” way
- and probably the one businesses love most, a market discriminator: a new technology AMD wasn’t even close to

Top level sounded awesome. I have a feeling the rollout didn’t go as well as hoped.

I said since the beginning this is a chance for AMD to get back market share if they could strike before RT takes off since nVidia basically gimped themselves.

Also... pretty sure I just killed a 2080ti with a water leak. I might be in the market for big Navi depending on price lol. Koolance clamp failed. The smartest among us mix water and electronics. ;)
 
I know Vega was god-awful at VR (relative to an equally priced nVidia card). Was this ever resolved with Navi? I can’t find any good reviews. Most reviews use VRMark, which is trash and not relevant to real games.

It’s basically only an issue with my main rig since that’s the VR rig. I just got curious since we played VR for a few hours this morning and it got me thinking.
 
I think that’s fair.

IMO since I first heard about RT I thought nVidia was doing it for:

- Rasterized performance is hitting diminishing returns, so people are more likely to think their current card is “good enough”; an RT craze would bring back the “buy the best each gen” urge for more people.
- they had enough of a rasterization lead on AMD to sacrifice some of their die
- they can use their tensor core / AI expertise in a “synergistic” way
- and probably the one businesses love most, a market discriminator: a new technology AMD wasn’t even close to

Top level sounded awesome. I have a feeling the rollout didn’t go as well as hoped.

I said since the beginning this is a chance for AMD to get back market share if they could strike before RT takes off since nVidia basically gimped themselves.

Also... pretty sure I just killed a 2080ti with a water leak. I might be in the market for big Navi depending on price lol. Koolance clamp failed. The smartest among us mix water and electronics. ;)

Well here is the more likely reason they produced RTX.

NV does not design gaming chips anymore. Fabricating chips is so expensive that even a company like NV does not want to produce multiple insanely expensive designs. So when they got to Volta, they sandwiched Google's tensor cores onto the same package. At the time, with little competition from AMD or anyone else, they didn't really need to update their consumer cards, so they chose not to. Volta is no doubt a much more expensive chip to manufacture than Pascal. When they got to Turing, they again designed one chip to handle both the consumer and AI parts. So to make that work from a marketing standpoint, they put their software engineers to work finding ways to use those extra hardware bits in games.

They came up with DLSS... which in theory is interesting, but as we have all seen it is really destroyed by much simpler sharpening methods. Still, it was an interesting way to use the bits that were there anyway.
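To make "much simpler sharpening methods" concrete, here is a minimal sketch of a classic unsharp-mask pass on a grayscale buffer. This is not RIS/CAS or whatever the reviewers actually used, just an illustration of how little work a basic sharpening filter is compared to a neural upscaler; the function name and parameters are made up for the example.

```cpp
// Minimal unsharp-mask sharpening sketch (illustrative only, not RIS/CAS itself).
// Sharpened = original + amount * (original - blurred).
#include <vector>
#include <algorithm>
#include <cstddef>

std::vector<float> sharpen(const std::vector<float>& img, int w, int h, float amount = 0.5f) {
    std::vector<float> out(img.size());
    auto at = [&](int x, int y) {
        x = std::clamp(x, 0, w - 1);           // clamp-to-edge addressing
        y = std::clamp(y, 0, h - 1);
        return img[static_cast<size_t>(y) * w + x];
    };
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // 3x3 box blur as the "low frequency" estimate
            float blur = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    blur += at(x + dx, y + dy);
            blur /= 9.0f;
            float center = at(x, y);
            // Add back the high-frequency detail, scaled by 'amount'
            out[static_cast<size_t>(y) * w + x] =
                std::clamp(center + amount * (center - blur), 0.0f, 1.0f);
        }
    }
    return out;
}
```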

They also looked at those bits and said, "I think there is a way we can do this ray tracing stuff that is coming in a couple of years." MS did not build DXR for NV... it is an industry standard. MS and Sony are both planning to add ray-traced lighting to their next-gen consoles. That isn't something they decided after seeing NV's tech; that has been the plan for years, and the NV engineers have known it for a long time. So they cleverly found a way to make it work ahead of schedule... it may not work very well, but it works well enough for the marketing dept to run with.

There is zero chance MS was adding something to DXR without a plan to bring it to their next-gen consoles. AMD's plans for ray tracing are likely much older than NV's... I have no idea if that was a chiplet idea, which makes sense especially seeing as Dr. Su was the project lead on the Cell design, or if some form of shader-improved tracing with RDNA was the long-term plan. Yes, the PS5/Xboxnextnextone chip will be x86, but it is also for sure going to be a chiplet design, being Ryzen 2 / Navi, and there is a good chance it will have a third chiplet handling potentially a bunch of cool stuff. The Cell was in many ways a proto-chiplet... this next gaming console chip will be its spiritual successor.

NV's Turing is a great chip, don't get me wrong... and their software people managing to get things like DXR up and running is impressive. They one-upped AMD by pulling in a feature THEY were working on with the next-gen console players and made it sound like their feature. lol I feel for all the early-adopter RTX people; I have a feeling they are going to be hurt in a year when the PS5 is doing ray tracing better than their $1000+ GPUs.
 
Well here is the more likely reason they produced RTX.

NV does not design gaming chips anymore. Fabricating chips is so expensive that even a company like NV does not want to produce multiple insanely expensive designs. So when they got to Volta, they sandwiched Google's tensor cores onto the same package. At the time, with little competition from AMD or anyone else, they didn't really need to update their consumer cards, so they chose not to. Volta is no doubt a much more expensive chip to manufacture than Pascal. When they got to Turing, they again designed one chip to handle both the consumer and AI parts. So to make that work from a marketing standpoint, they put their software engineers to work finding ways to use those extra hardware bits in games.

They came up with DLSS... which in theory is interesting, but as we have all seen it is really destroyed by much simpler sharpening methods. Still, it was an interesting way to use the bits that were there anyway.

They also looked at those bits and said, "I think there is a way we can do this ray tracing stuff that is coming in a couple of years." MS did not build DXR for NV... it is an industry standard. MS and Sony are both planning to add ray-traced lighting to their next-gen consoles. That isn't something they decided after seeing NV's tech; that has been the plan for years, and the NV engineers have known it for a long time. So they cleverly found a way to make it work ahead of schedule... it may not work very well, but it works well enough for the marketing dept to run with.

There is zero chance MS was adding something to DXR without a plan to bring it to their next-gen consoles. AMD's plans for ray tracing are likely much older than NV's... I have no idea if that was a chiplet idea, which makes sense especially seeing as Dr. Su was the project lead on the Cell design, or if some form of shader-improved tracing with RDNA was the long-term plan. Yes, the PS5/Xboxnextnextone chip will be x86, but it is also for sure going to be a chiplet design, being Ryzen 2 / Navi, and there is a good chance it will have a third chiplet handling potentially a bunch of cool stuff. The Cell was in many ways a proto-chiplet... this next gaming console chip will be its spiritual successor.

NV's Turing is a great chip, don't get me wrong... and their software people managing to get things like DXR up and running is impressive. They one-upped AMD by pulling in a feature THEY were working on with the next-gen console players and made it sound like their feature. lol I feel for all the early-adopter RTX people; I have a feeling they are going to be hurt in a year when the PS5 is doing ray tracing better than their $1000+ GPUs.

I don’t think anyone that bought $1,000+ GPUs factored ray tracing into their choice; at least I didn’t. I upgraded for the 45% boost in performance.

I also have a hard time fathoming how a GPU with less than a 5700’s (?) amount of compute will do better than today’s cards. It’s simply a massive amount of math that isn’t going to magically go away. It’ll be a lot easier for devs to optimize for one piece of hardware, so it gets a small bonus, but DXR is DXR. There’s no non-impact way of calculating all these ray intersections, etc.

I don’t know how RT with no performance impact became a narrative or expectation... especially since lighting and shadows have traditionally been the most computationally intense graphics work, and that hasn’t changed with RT.
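For a sense of the per-ray work that isn't going to magically go away, here is a minimal sketch of the ray vs. axis-aligned bounding box "slab" test that a BVH traversal repeats over and over for every ray; the struct names are placeholders chosen for illustration.

```cpp
// Ray vs. axis-aligned bounding box (AABB) slab test: the basic operation a
// BVH traversal repeats millions of times per frame. Illustrative sketch only.
#include <algorithm>

struct Vec3 { float x, y, z; };

struct Ray {
    Vec3 origin;
    Vec3 invDir;   // 1 / direction, precomputed so the test is mul/min/max only
};

struct AABB { Vec3 min, max; };

bool intersects(const Ray& r, const AABB& box, float tMax) {
    float t0 = 0.0f, t1 = tMax;
    const float o[3]   = { r.origin.x, r.origin.y, r.origin.z };
    const float inv[3] = { r.invDir.x, r.invDir.y, r.invDir.z };
    const float lo[3]  = { box.min.x, box.min.y, box.min.z };
    const float hi[3]  = { box.max.x, box.max.y, box.max.z };
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (lo[axis] - o[axis]) * inv[axis];
        float tFar  = (hi[axis] - o[axis]) * inv[axis];
        if (tNear > tFar) std::swap(tNear, tFar);   // handle negative ray directions
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;                  // slabs don't overlap: miss
    }
    return true;                                    // ray enters the box
}
```

And that is only the bounding-box part; every surviving leaf still needs ray-triangle tests, which is why it takes either dedicated hardware or a lot of spare shader throughput to keep it real-time.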
 
I don’t think anyone that bought $1,000+ GPUs factored ray tracing into their choice; at least I didn’t. I upgraded for the 45% boost in performance.

I also have a hard time fathoming how a GPU with less than a 5700’s (?) amount of compute will do better than today’s cards. It’s simply a massive amount of math that isn’t going to magically go away. It’ll be a lot easier for devs to optimize for one piece of hardware, so it gets a small bonus, but DXR is DXR. There’s no non-impact way of calculating all these ray intersections, etc.

I don’t know how RT with no performance impact became a narrative or expectation... especially since lighting and shadows have traditionally been the most computationally intense graphics work, and that hasn’t changed with RT.

Because only one vendor has hardware out supporting DXR...the "explanation" is THAT simple.
 
I don’t think anyone that bought $1,000+ GPUs factored ray tracing into their choice; at least I didn’t. I upgraded for the 45% boost in performance.

I also have a hard time fathoming how a GPU with less than a 5700’s (?) amount of compute will do better than today’s cards. It’s simply a massive amount of math that isn’t going to magically go away. It’ll be a lot easier for devs to optimize for one piece of hardware, so it gets a small bonus, but DXR is DXR. There’s no non-impact way of calculating all these ray intersections, etc.

I don’t know how RT with no performance impact became a narrative or expectation... especially since lighting and shadows have traditionally been the most computationally intense graphics work, and that hasn’t changed with RT.

RT math is not general compute math or traditional raster. I'm not suggesting AMD is going to destroy ray tracing with Navi alone. They are either going to use lessons learned as far back as Cell to do things like software hyper-threading, or use shaders at ultra-low resolution to increase bandwidth and raw numbers. OR more likely their chiplet design is going to include a third chip designed for RT and other physics. Imagine it as a general-use physics/matrix-math co-processor for developers, with the same type of Infinity Fabric connection AMD will be using to connect Navi and Ryzen.

https://www.semanticscholar.org/pap...Wald/665ce72346061ffa081986f811d8a36b4a537b5b

13 years ago the Cell processor was capable of doing basic ray tracing with simple shading at 30fps, and with shading and 3-factor shadow throws at 20fps (x86 of the day could do the same at around 1fps). Check out figure 5.

I can guarantee you that when Sony, who knew Dr. Su from her Cell design, asked her "what can we do to make the PS5 a must-have," her answer was... let me add a Cell-like chiplet for you. I mean, perhaps the next-gen part will just be an 8-core Ryzen + Navi... but I doubt it, because everyone is right that RT is probably not happening even at 30fps on that alone. Throw in a 7nm Cell-like chiplet, though, and it becomes very possible that a 60fps target is no big deal.
 
*Shrug* We don't know what is coming but rest assured, it is not the same RTG that we were once dealing with. I see no hype going on, no "poor whatever" silly campaign, and the people who were in charge before are now gone, thankfully. Navi and RDNA have turned out quite well; it is just the beginning, and the future for RTG is looking bright. Eventually, they will have an "Nvidia Killer," insofar as what they end up producing, but that does not mean it will kill Nvidia and put them out of business.

Where things go from here will be fun to watch and all we have to do is wait. :)
 
This reminds me of the good ol' days when ATI was always going to release an NVIDIA KILLER CARD, none more hyped than the 2900 series, and then we were waiting for the mysterious NVIDIA KILLER DRIVERS that would unlock untold performance in a GPU that was already overheating and massively underperforming.
 
This reminds me of the good ol' days when ATI was always going to release an NVIDIA KILLER CARD, none more hyped than the 2900 series, and then we were waiting for the mysterious NVIDIA KILLER DRIVERS that would unlock untold performance in a GPU that was already overheating and massively underperforming.

Sounds a lot like the Vega rumors thread and now the Big Navi thread.

At least this time it’s in the realm of possibility....
 
Maybe with all these rumors we will see a true follow-up to the 1080ti from Nvidia. That was part of the reason the 1080ti was so good: they believed Vega was going to be a monster.
 
This reminds me of the good ol' days when ATI was always going to release an NVIDIA KILLER CARD, none more hyped than the 2900 series, and then we were waiting for the mysterious NVIDIA KILLER DRIVERS that would unlock untold performance in a GPU that was already overheating and massively underperforming.

Does not remind me of that, considering that the 2900 was a good card, just not what users hyped it to be. Of course, old AMD does not equal new AMD / RTG. We know that is a fact because, otherwise, Nvidia would have been producing GTX 400 series heat-mizer cards right up until today.
 
Maybe with all these rumors we will see a true follow-up to the 1080ti from Nvidia. That was part of the reason the 1080ti was so good: they believed Vega was going to be a monster.

I would not count on it...next SKU from NVIDIA will be "Ampere" on 7nm....like it or not.
 
While I think it's entirely possible that AMD can come up with another 9700/9800-style card (which was the last time AMD had a clear performance advantage vs. Nvidia), you have to take into account that AMD is at least a generation behind. I mean, Nvidia even skipped Volta for consumer.

It seemed like forever, and now Navi finally has a card that competes with Pascal's top of the line at much lower prices, but the story is different with Turing, which is also faster than Pascal and has RTX on top.

Contrary to Intel, who remained stagnant for so many years (I mean, SB CPUs are still popular and good performers today), Nvidia has seen jumps in performance/efficiency for several generations. As great as big Navi can be, it's anyone's guess if it can go head to head with the RTX 2080 Ti. But even if it does, Nvidia would have to seriously drop the ball with Ampere for AMD to regain the performance crown.
 
The 9700 series was a good series, but it wasn't an unbelievable jump in performance; it was just better than Nvidia's leaf blower, which was itself only a meager jump in performance over their 4000 series GPUs. That's what happens when companies (i.e., Intel currently) get too complacent and forget they could have competition.

AMD's 4700 GPU release was one such release where it really stuck it to Nvidia: 95% of the performance for two-thirds the price.
 
I know we get hyped up like this for every upcoming AMD release, but this time it seems like they might pull it off.

Remember, there is probably only a small market for $1,200 video cards. As good as the 2080 Ti is, most people don't want to spend that kind of dough on a GPU.

So if AMD can even slightly beat that 2080 Ti for a more reasonable price (not sure what is reasonable these days, but let's say $700-800) then that would be a big win.

Sure, Nvidia will release a 3080 Ti in time, but at what price? Without competition that card from Nvidia might be $1,500 the way things have been going. AMD has a shot.
 
re: AMD's hardware DXR acceleration solution, this may provide some clues

no idea how it compares architecturally to Nvidia's "RT Cores", as I've yet to see any substantial information on precisely how the RT cores fit into the rest of the Turing GPU, but the patent suggests that the end result is substantially similar: dedicated hardware does the Bounding Volume Hierarchy (BVH) calculations that tell the rest of the core where the rays will hit (apparently the most time-consuming part of the PT pipeline), while the work of figuring out what happens when they hit is passed off to the shader cores. I'll reiterate that, given the geometry/texture/shader performance of Navi seems to be on par with Turing, the differentiation is going to be in how quickly those BVH numbers can be crunched and how efficiently the results can be passed back to the shader cores for... erm... shading.
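As a rough mental model of that split, here is a hedged sketch: a "fixed-function" unit walks the BVH and reports the closest hit, and ordinary shader code then decides what that hit means. The names and structure below are invented for illustration and are not taken from the patent or from Turing.

```cpp
// Conceptual sketch of the split described above: a fixed-function unit walks
// the BVH (box tests, traversal), and the shader cores only see the hits.
// All names here are invented for illustration, not vendor terminology.
#include <vector>
#include <optional>
#include <functional>

struct Hit { int triangleId = -1; float t = 0.0f; };

struct BvhNode {
    int left = -1, right = -1;          // child indices; a leaf has none
    std::vector<int> triangles;         // triangle ids stored at a leaf
    bool isLeaf() const { return left < 0 && right < 0; }
};

// "BVH engine" part: prune subtrees whose bounding box the ray misses, then run
// triangle tests only at the leaves that survive. The box and triangle tests are
// passed in as callables so the sketch stays self-contained.
std::optional<Hit> traverse(
    const std::vector<BvhNode>& nodes, int root,
    const std::function<bool(int)>& rayHitsBox,                      // node index -> box hit?
    const std::function<std::optional<Hit>(int)>& rayHitsTriangle)   // triangle id -> hit?
{
    std::optional<Hit> best;
    std::vector<int> stack{root};
    while (!stack.empty()) {
        const int idx = stack.back();
        stack.pop_back();
        if (!rayHitsBox(idx)) continue;                 // whole subtree skipped on a miss
        const BvhNode& n = nodes[idx];
        if (n.isLeaf()) {
            for (int tri : n.triangles)
                if (auto h = rayHitsTriangle(tri))
                    if (!best || h->t < best->t) best = *h;
        } else {
            if (n.left  >= 0) stack.push_back(n.left);
            if (n.right >= 0) stack.push_back(n.right);
        }
    }
    return best;                                        // closest hit, if any
}

// "Shader core" part: plain shading code decides what the hit means.
float shade(const std::optional<Hit>& hit) {
    return hit ? 1.0f : 0.0f;                           // placeholder material / miss colour
}
```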
 
re: AMD's hardware DXR acceleration solution, this may provide some clues

no idea how it compares architecturally to Nvidia's "RT Cores", as I've yet to see any substantial information on precisely how the RT cores fit into the rest of the Turing GPU, but the patent suggests that the end result is substantially similar: dedicated hardware does the Bounding Volume Hierarchy (BVH) calculations that tell the rest of the core where the rays will hit (apparently the most time-consuming part of the PT pipeline), while the work of figuring out what happens when they hit is passed off to the shader cores. I'll reiterate that, given the geometry/texture/shader performance of Navi seems to be on par with Turing, the differentiation is going to be in how quickly those BVH numbers can be crunched and how efficiently the results can be passed back to the shader cores for... erm... shading.

I would imagine we’ll see the same performance hit Turing has if it is similar in process.
 
I know we get hyped up like this for every upcoming AMD release, but this time it seems like they might pull it off.

Remember, there is probably only a small market for $1,200 video cards. As good as the 2080 Ti is, most people don't want to spend that kind of dough on a GPU.

So if AMD can even slightly beat that 2080 Ti for a more reasonable price (not sure what is reasonable these days, but let's say $700-800) then that would be a big win.

Sure, Nvidia will release a 3080 Ti in time, but at what price? Without competition that card from Nvidia might be $1,500 the way things have been going. AMD has a shot.

I agree that AMD seems to be in a better position GPU-wise than they have been in a very long time, but let's not get ahead of ourselves. Small Navi was hyped to beat the RTX 2070 for 1/2 the price; instead it matched it for 4/5 the price. For Big Navi I'm gonna guess evenly matched with the 2080 Ti for $999 (actual price, not a $999 MSRP that retails at $1200 like the 2080 Ti), with a cut-down model at $749 that slightly beats the 2080S. That would leave a big gap in the middle, but I don't see how to avoid that since the 5700 XT is already close to the vanilla 2080. Maybe they also release something that's like the 5700 XT + <10-15% but with hardware DXR in the $499-$599 range?
 
I would imagine we’ll see the same performance hit Turing has if it is similar in process.

in the same ballpark, yeah... but the implementations look different enough that it's hard to compare apples to apples. Turing has one BVH engine (“RT core”) for every 64 shader cores, and the RT cores are thus presumably linked to the “streaming multiprocessor” shader clusters. This AMD patent, on the other hand, indicates BVH engines linked to the TMUs, which at face value suggests WAY more width than NV's implementation, but surely there's more to the story.
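Some back-of-the-envelope numbers for that width difference, assuming one BVH unit per 64 shader cores on Turing and, purely hypothetically, one per TMU on the patent's design; the unit counts are public specs, the per-TMU mapping is speculation:

```cpp
// Back-of-the-envelope width comparison: one BVH unit per Turing SM versus one
// per TMU on the speculated AMD design. The per-TMU mapping is speculation.
#include <cstdio>

int main() {
    // RTX 2080 Ti (TU102, 68 SMs enabled): 64 shader cores per SM.
    const int turingShaders  = 4352;
    const int turingBvhUnits = turingShaders / 64;   // = 68, matches its RT core count

    // Navi 10 (RX 5700 XT): 40 CUs, 4 TMUs per CU.
    const int naviTmus       = 40 * 4;               // = 160
    const int naviBvhUnits   = naviTmus;             // IF one BVH unit rode on each TMU

    std::printf("Turing-style: %d BVH units\n", turingBvhUnits);
    std::printf("Per-TMU style: %d BVH units (hypothetical)\n", naviBvhUnits);
    // Raw unit count says nothing about per-unit throughput, clocks, or how the
    // results feed back into shading, which is why a direct comparison is hard.
    return 0;
}
```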

of course this is all speculation based on a single patent from two years ago, so who knows... just having some fun thought experiments on a lazy Saturday.
 
re: AMD's hardware DXR acceleration solution, this may provide some clues

no idea how it compares architecturally to Nvidia's "RT Cores", as I've yet to see any substantial information on precisely how the RT cores fit into the rest of the Turing GPU, but the patent suggests that the end result is substantially similar: dedicated hardware does the Bounding Volume Hierarchy (BVH) calculations that tell the rest of the core where the rays will hit (apparently the most time-consuming part of the PT pipeline), while the work of figuring out what happens when they hit is passed off to the shader cores. I'll reiterate that, given the geometry/texture/shader performance of Navi seems to be on par with Turing, the differentiation is going to be in how quickly those BVH numbers can be crunched and how efficiently the results can be passed back to the shader cores for... erm... shading.

In the same thread I linked some NVIDIA patents:
NVIDIA patents for Raytracing for comparison:
2014: http://www.freepatentsonline.com/20160070820.pdf
2015: http://www.freepatentsonline.com/9582607.pdf

Looks like they got a 2-3 year head start on AMD.

 
In the same thread I linked some NVIDIA patents:

thanks for pointing that out! I admittedly didn't read the thread I linked to; I had seen the AMD patent elsewhere when it first surfaced and just grabbed the first link that Bing found me, which happened to be that one
 
It's also possible that NV's statement that they are going to use both fabs going forward could just be damage control after a bunch of news reports claiming NV is switching to Samsung in a year (they still have to do business with TSMC).

Another possibility: Samsung is a stop-gap. With TSMC filled up with Apple/AMD, perhaps they contracted TSMC for a 7nm+ Ampere refresh part, with Samsung doing the first gen to get them out the door quicker.

If Nvidia tries to squeeze 20 billion transistors into their first 7nm part... chances are it will either need a jet-powered fan to keep it under 90 degrees, and/or their yields will be so bad every wafer will have 3 working chips.
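To put rough numbers on the yield worry, here is a sketch using the simple Poisson yield model (yield ≈ e^(-defect density × area)); the defect density and die sizes below are placeholders picked for illustration, not TSMC or Nvidia figures.

```cpp
// Rough die-yield comparison with the simple Poisson model: yield = exp(-D0 * A).
// D0 and the die areas are made-up placeholders, not actual TSMC/Nvidia numbers.
#include <cmath>
#include <cstdio>

int main() {
    const double d0          = 0.2;    // assumed defects per cm^2 on a young node
    const double bigDie_cm2  = 7.5;    // ~750 mm^2 monolithic "big Turing"-class die
    const double chiplet_cm2 = 0.8;    // ~80 mm^2 chiplet for comparison

    const double bigYield     = std::exp(-d0 * bigDie_cm2);
    const double chipletYield = std::exp(-d0 * chiplet_cm2);

    std::printf("~750 mm^2 die:   %.0f%% of dies defect-free\n", bigYield * 100.0);
    std::printf("~80 mm^2 chiplet: %.0f%% of dies defect-free\n", chipletYield * 100.0);
    // exp(-1.5) ~= 22% vs exp(-0.16) ~= 85%: the same defect density hurts a huge
    // monolithic die far more than a pile of small chiplets.
    return 0;
}
```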

OR more likely their chiplet design is going to include a third chip designed for RT and other physics. Imagine it as a general-use physics/matrix-math co-processor for developers, with the same type of Infinity Fabric connection AMD will be using to connect Navi and Ryzen.

I think you've nailed it here. Connecting the dots, it may be that Nvidia is going with an AMD-style chiplet design with traditional cores on a large 7nm chiplet from one CM along with a large standalone 7nm RT chiplet from a different CM. It would be a pretty radical architectural departure for Nvidia, but if a massive increase in RT cores is what it's going to take to do 60fps 4K RT, then this might be the only viable route.
 
Boy am I glad I am not the one to make the tough trade-off calls on the next gen from either team.

RT is undoubtedly the next thing. But how much effort / silicon do you put into it? This is doubly frustrating since it requires developers to really use it well - ask NV. If nobody uses it, it's largely wasted die space. You could then be burned by the competition if they put all their chips in traditional shaders. But if Killer App A shows up and makes RT truly awesome, then you blew it without that support.

It will all sort out in time, but the next gen will be interesting to see where each vendor puts their (poker) chips in their (doped silicon) chips.
 
I think you've nailed it here. Connecting the dots, it may be that Nvidia is going with an AMD-style chiplet design with traditional cores on a large 7nm chiplet from one CM along with a large standalone 7nm RT chiplet from a different CM. It would be a pretty radical architectural departure for Nvidia, but if a massive increase in RT cores is what it's going to take to do 60fps 4K RT, then this might be the only viable route.

Nvidia has a working inference research part that consists of up to 36 16nm chiplets; I think those are only something like 6 mm² each. The point is, they are for sure working on making the tech work. So I don't expect Ampere is a chiplet design... much like the first version of Navi isn't. Still, it wouldn't be out of left field if it is... and they could well have a monolithic first version planned with a pretty quick chiplet version follow-up.

The Navi 10 parts are all monolithic. But I have a feeling the rumors of Navi 20 coming sooner than expected are about the Navi 2 chiplet stuff that will be the actual stuff going into the console parts. NV may well do the same: contract Samsung to get a 7nm monolithic Ampere out, especially if they need an answer to Navi 2, all while having TSMC working on taping out a 7nm+ Ampere chiplet design to drop when Intel gets Xe out.

Seems like they are all jockeying for position right now trying to time things to hit at the exact right time. NV hasn't really had to worry about timing for a long time now. lol
 
The details are "Infinity Fabric"; there is a reason why Dr. Lisa Su is the CEO of AMD: ecosystem and custom designs. All of the consoles are locked in, and the next 5 years will be the real show. Trust me, it may sound a little crazy at the moment, but ARM is onto something. The beginnings of the auto industry were pretty much Daimler and Ford; GPUs and CPUs are heading in the same direction. The future is bright and there will be more choices as time passes.
 
Yes, Intel might have something cooking, but I seriously doubt they could come out of the gate with a 2080 Ti-class product. You never know, though.

We can all hope. They do have at least some GPU experience.

I have to imagine all players are working on GPU chiplet designs. It’ll be interesting though since GPUs require way more bandwidth to seamlessly interconnect than CPUs. I don’t think we’ve ever seen anything like it for GPUs.
 
I would not count on it...next SKU from NVIDIA will be "Ampere" on 7nm....like it or not.
I'm talking about the performance improvement, because they thought AMD was releasing a better-performing card. If they knew what Vega really was, we wouldn't have gotten a 1080ti that was as good as it was; they would have shaved it down and charged the same.
 
We can all hope. They do have at least some GPU experience.
Yes. People often forget that Intel commands around 70% of GPU market overall with their integrated graphics.

Granted they are not high powered, but they do have tons of driver experience and the top-end iGPUs can play okay already.

I'm sure they will be competitive, hopefully a surprise too; I guess a new player might have some luck.
 
I've seen some people praise chiplets on Big Navi as if it was the second coming of Christ. But I don't think it's even feasible on a GPU.

If I understand correctly, on a CPU each chiplet is a separate die with a cluster of CPU cores, which in essence work as a multi-core CPU. That's a non-issue for CPUs, as multi-core/multi-threaded applications don't really see the difference.

But on a GPU, each chiplet is effectively a separate GPU, so that would mean it would behave like CrossFireX on a chip. And that would mean game developers would need dedicated support for it. Given that developers have basically forgotten about SLI/CrossFire on DX12, I don't think there would be any support, at least initially.

So unless AMD has figured out how to make separate GPUs work as one on DX12/Vulkan (the holy grail for SLI/CrossFire), I don't think Big Navi could use chiplets.

And let's not forget DX11/DX9 legacy multi-GPU support.
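For context on why DX12/Vulkan multi-GPU puts the burden on the game rather than the driver, here is a minimal Windows-only sketch of enumerating adapters and checking a device's node count; error handling is stripped, and it is only meant to show that every GPU (or linked node) is visible to, and must be managed by, the application.

```cpp
// Minimal DX12 sketch: multiple GPUs (or linked GPU nodes) are visible to the
// application, which is why "make two chips look like one" is not free under
// DX12/Vulkan. Error handling omitted; link with d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cwchar>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            // Each adapter is its own device; linked-node setups still expose every
            // node, and the engine must split work and resources explicitly.
            std::wprintf(L"Adapter %u: %s, nodes: %u\n",
                         i, desc.Description, device->GetNodeCount());
        }
    }
    return 0;
}
```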
 
But on a GPU, each chiplet is effectively a separate GPU, so that would mean it would behave like CrossFireX on a chip. And that would mean game developers would need dedicated support for it. Given that developers have basically forgotten about SLI/CrossFire on DX12, I don't think there would be any support, at least initially.
You are not correct about this.
 