Join us on November 3rd as we unveil AMD RDNA™ 3 to the world!

but I'd like NVIDIA knocked off their pedestal for once. Jensen really needs some humbling.
Not going to happen. AMD would have to perfectly replicate every single feature of an Nvidia card and then make it better. Even if they made a card that beat the 4090 in raw gaming performance, the lists of "yeah, but it doesn't have A, B, C... all the way to Z" would start. Then throw in "AMD drivers kicked my puppy."

Best to just ignore the pedestal and enjoy whatever RDNA 3 turns out to be, as long as it's good.
 
Power/specs level. Yes anything AMD throws out there can play everything, but I'd like NVIDIA knocked off their pedestal for once. Jensen really needs some humbling.

Rumors maintain that AMD could do it this time, but may choose not to (or leave it to the AIBs), as ray tracing is still going to trade blows even if AMD pushes it. They're going to try to look like the good guys with RDNA 3. With RDNA 4, on the other hand, they're going for the jugular.

They're already a couple of generations ahead of Nvidia in terms of cache designs and chiplets. It's not AMD fanboyism to recognize that AMD has a low-cost, small-die (and low-power) advantage over Nvidia. They don't have market share, and they don't have mind share. That's why it makes sense for them to play it safe. Just be the good guys.
 
Not going to happen. AMD would have to perfectly replicate every single feature of an Nvidia card and then make it better. Even if they made a card that beat the 4090 in raw gaming performance, the lists of "yeah, but it doesn't have A, B, C... all the way to Z" would start. Then throw in "AMD drivers kicked my puppy."

Best to just ignore the pedestal and enjoy whatever RDNA 3 turns out to be, as long as it's good.

AMD has been doing well on the features and software side recently. They're a bit late, but they typically have stuff comparable to DLSS, G-Sync, etc. The big downside at the moment is ray tracing performance.
 
Rumors maintain that AMD could do it this time, but may choose not to (or leave it to the AIBs), as ray tracing is still going to trade blows even if AMD pushes it. They're going to try to look like the good guys with RDNA 3. With RDNA 4, on the other hand, they're going for the jugular.

They're already a couple of generations ahead of Nvidia in terms of cache designs and chiplets. It's not AMD fanboyism to recognize that AMD has a low-cost, small-die (and low-power) advantage over Nvidia. They don't have market share, and they don't have mind share. That's why it makes sense for them to play it safe. Just be the good guys.
While I agree, we all have to consider that many of those advantages, like the chiplets, aren't so much AMD's as they are TSMC's, same with the interposers that make them work. While it is very true that AMD currently holds an advantage with their practice in designing with those concepts in mind, many of their existing concepts are made redundant or obsolete by the UCIe standards, which will more or less level the playing field in that they both get to start fresh. You also have to consider that Nvidia has a significant investment in their NVLink technology, which can probably be scaled down to work across modern interposer technology and could greatly help them when they do go multi-chip. One of the more difficult parts of the design is resource allocation and sharing, where Nvidia's algorithms are very advanced and refined; the scaling you get across their workstation NVLinks is nothing to scoff at.
 
You also have to consider that Nvidia has a significant investment in their NVLink technology, which can probably be scaled down to work across modern interposer technology and could greatly help them when they do go multi-chip. One of the more difficult parts of the design is resource allocation and sharing, where Nvidia's algorithms are very advanced and refined; the scaling you get across their workstation NVLinks is nothing to scoff at.

Nvidia is far from toothless in this. That's why I mostly expect AMD to just target the bulk of the PC mindshare with this. Yeah, they can let Sapphire or whoever do something crazy. Nvidia will bank on Team Green, and they'll sell more stock. It'll be years before AMD can really integrate Xilinx.

What AMD can do is sell to gamers who don't want to buy new PSUs, OEMs who don't want to supply new PSUs, and enterprise who can't even deal with the wattages. They have an A+A advantage this gen for mobile, and a real chance of coming out on top as the good guys.

They're not the good guys, make no mistake about it. This generation is about changing their branding. Make Nvidia look like a-holes, then apply the marketing gains and whatever industry/driver improvements to the next gen.

I won't discount both companies' fear of Intel at all. Arc might as well have failed, but that's no reason to brush them off. AMD and Nvidia need mobile, and they need OEM.
 
Nvidia is far from toothless in this. That's why I mostly expect AMD to just target the bulk of the PC mindshare with this. Yeah, they can let Sapphire or whoever do something crazy. Nvidia will bank on Team Green, and they'll sell more stock. It'll be years before AMD can really integrate Xilinx.

What AMD can do is sell to gamers who don't want to buy new PSUs, OEMs who don't want to supply new PSUs, and enterprise who can't even deal with the wattages. They have an A+A advantage this gen for mobile, and a real chance of coming out on top as the good guys.

They're not the good guys, make no mistake about it. This generation is about changing their branding. Make Nvidia look like a-holes, then apply the marketing gains and whatever industry/driver improvements to the next gen.

I won't discount both companies' fear of Intel at all. Arc might as well have failed, but that's no reason to brush them off. AMD and Nvidia need mobile, and they need OEM.
Arc failed gamers for sure, but Arc Pro holds promise and the Intel server stack is looking better than it has in a LONG time. Intel just needs to deliver something that can stand up to the A2000 6GB and they will rake it in.

AMD can't go after OEMs in a heavy way, and we don't want them to... AMD can't supply that demand; Dell will sell more 13600Ks than AMD will make CPUs, let alone the rest of the OEMs out there. Let's let AMD stick most of their OEM demand with Microsoft and Sony until they can get more time with TSMC.

Power draw realistically isn't a problem here. For the normal lineup the power difference is some 30W between this year and last year, not exactly a significant amount on the desktop side. For mobile it won't have changed much at all; power envelopes there are very strict. When Intel announces the 13th-gen mobile lineup I don't expect much variation there, maybe a 5W increase on the CPU offset by a decrease of about that amount due to DDR5 improvements and better screens. Battery life and the need to stick to a 65W USB-C type charger for the bulk of the mobile world severely limit the ability to make too many changes there.
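To put some purely hypothetical numbers on that 65W USB-C constraint, here's a quick back-of-envelope in Python; every split below is an assumption for illustration, not a measured figure from any laptop.

[CODE]
# Purely illustrative budget for a 65 W USB-PD thin-and-light.
# Every split below is a hypothetical assumption, not a measured figure.
charger_w  = 65   # the USB-PD limit the post is talking about
display_w  = 8    # screen at typical brightness (assumed)
platform_w = 7    # SSD, RAM, Wi-Fi, VRM losses (assumed)
battery_w  = 15   # headroom to actually charge while in use (assumed)

cpu_gpu_budget = charger_w - display_w - platform_w - battery_w
print(f"Leaves roughly {cpu_gpu_budget} W for CPU + GPU combined")  # ~35 W
[/CODE]

Shuffle those assumed numbers however you like; the point is that the ceiling barely moves from one generation to the next.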

AMD and Intel both need to be a little cautious of the Nvidia MX550. They are cheap, easily manufactured, and on a "mature" and low-demand process, and they also share the same pinout, package footprint, and power draw as the old MX450 chips. It's more than enough to counter the existing AMD 680M and Intel A350 while offering more features than both. The little things that Nvidia has been up to with camera filters, panning/tracking, and audio cleanup for video conferencing are kinda neat if you live in that sort of environment, so for any laptop purchases for management, it's the chip I look for at the moment.
 
I expect strong raster and weak RT from RDNA 3, based on a few leaks from a long time ago, but soon we will know :)

Not with ray tracing.
Have you tried playing on a 200Hz+ monitor with RT? Do you think DLSS will help you see everything crystal clear in fast titles?

With Nvidia 5000 / Radeon 8000 we will probably have nicely playable RT games (maybe).
 
Lies. Most big new games come with raytracing, and your 6800xt has no DLSS plus suffers at 4k. Nice try.
People act like FSR 2 doesn't exist.
I myself am still on an RX 5700 and it can handle my UE5 projects that use Lumen fine at 1080p, although Lumen can be accelerated by DXR hardware, which I don't have.
I even have room for a 20% overclock using MPT and haven't really needed it yet.
 
Make Nvidia look like a-holes, ...
Nvidia has been doing this to themselves for generations now. They don't need AMD to do it for them. So many levels of shenanigans, from GPP (and before) to 4090 pricing to "unlaunching" to EVGA terminating a toxic relationship.
Unfortunately, consumers are fickle, forget or don't care, ONLY see FPS, and keep feeding the beast.
 
I said most new games, nothing about your library. Nice straw man you have there!
You forgot you said "Lies" when talking about his game library...
It's inferior by a noticeable amount when gaming. Same thing with NVENC/ShadowPlay vs AMD's half-baked solution.
FSR 1.0, yes.
FSR 2.1? Have you even seen in-depth comparisons???
In general, upscaling tech is detrimental to gameplay graphics and I hate using it.
I agree with you on NVENC, but I don't have a use for it.
 
You forgot you said "Lies" when talking about his game library...

FSR 1.0, yes.
FSR 2.1? Have you even seen in-depth comparisons???
In general, upscaling tech is detrimental to gameplay graphics and I hate using it.
I agree with you on NVENC, but I don't have a use for it.
I said "lies" to his claim that most new games don't come with ray tracing yet.

I have tried FSR 2 vs my own card's DLSS 2 and found it lacking at my 4K native resolution. Different strokes, I guess... I see more artifacting with FSR 2 and lower performance gains.
 
I said "lies" to his claim that most new games don't come with ray tracing yet.

I have tried FSR 2 vs my own card's DLSS 2 and found it lacking at my 4K native resolution. Different strokes, I guess... I see more artifacting with FSR 2 and lower performance gains.
Well, no surprise that DLSS runs better than FSR on your Nvidia GPU.
Just like FSR runs better on RDNA 2 than on RDNA 1 and Nvidia cards.

Also, you should check your own posts...

Back on topic,
I hope AMD doesn't waste resources on frame generation tech like DLSS 3. To me that just seems like a huge waste of time. It gives you the input lag of roughly half your displayed frame rate, so it is only "useful" when you are above 120 FPS, and at that point you likely want lower input lag, OR you're playing a game that isn't sensitive to input lag at all, in which case it won't be sensitive to FPS either. It seems Nvidia only did that so they could say "2-4x faster than 3090 Ti" and be at the top of the charts with a "new feature" that AMD doesn't have.
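Rough sketch of why the latency math works out that way, in Python. This is a simplified model I'm assuming (one generated frame per rendered frame, and roughly one extra rendered frame of buffering for the interpolation), not Nvidia's actual pipeline numbers:

[CODE]
# Simplified model (my assumption, not measured data): frame generation
# doubles the displayed frame rate, but input is only sampled per *rendered*
# frame, and interpolation has to hold the last real frame until the next
# one is ready before anything is shown.
def frame_gen_estimate(rendered_fps):
    displayed_fps = rendered_fps * 2                 # one generated frame per real frame
    rendered_frametime_ms = 1000 / rendered_fps
    approx_latency_ms = rendered_frametime_ms * 2    # render + hold for interpolation
    return displayed_fps, approx_latency_ms

for fps in (30, 60, 120):
    shown, lag = frame_gen_estimate(fps)
    print(f"rendered {fps:>3} fps -> displayed {shown:>3} fps, ~{lag:.0f} ms latency floor")
[/CODE]

So a "240 fps" number driven by a 120 fps base still feels like 120 fps at best, which is the whole complaint.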
What do you guys think?
 
While I agree, we all have to consider that many of those advantages, like the chiplets, aren't so much AMD's as they are TSMC's, same with the interposers that make them work. While it is very true that AMD currently holds an advantage with their practice in designing with those concepts in mind, many of their existing concepts are made redundant or obsolete by the UCIe standards, which will more or less level the playing field in that they both get to start fresh. You also have to consider that Nvidia has a significant investment in their NVLink technology, which can probably be scaled down to work across modern interposer technology and could greatly help them when they do go multi-chip. One of the more difficult parts of the design is resource allocation and sharing, where Nvidia's algorithms are very advanced and refined; the scaling you get across their workstation NVLinks is nothing to scoff at.
If it wasn't AMD tech making it work, everyone would be doing it. If the only factor at play was TSMC-owned tech, they would be selling it. Every company has tweaks to the fab process... that doesn't make those tweaks the fab company's doing, or its property. AMD, Apple, and Nvidia are fabless and have massive teams of engineers doing nothing but working with their fab partners... TSMC does remarkably little for any of them; there is a reason tooling up for a chip on any process is stupidly expensive for the company designing the chip. You can't just sketch some electrical paths and say "here, make this."

I hope everyone moves to chiplets; it's good for consumers to help bring down costs. I'm not looking forward to $3k flagship GPUs. AMD has 5 generations of CPU chiplet experience, across 4 different fab processes, under their belt. No one else has shipped a mass-market design at this point.

Perhaps Nvidia's next design won't be monolithic. That is the rumor. However, they are also trying to push for an industry standard... which is the simple way of saying THEY don't want to put in the work themselves, because it's stupidly expensive and they are probably going to screw up and need to redo it for gen 2 (as AMD had to improve Zen's interfaces each gen... Zen 1 chiplets, then Zen 2, then Zen 3). They are hoping they can jump to fully functional and benefit from AMD's work. As much as AMD is an open-source-embracing company on the software side, I very much hope they say HA HA HA to the idea of contributing any of their patents to an open standard. TSMC does NOT own the patents on the tech AMD is using in their foundry. Nvidia is going to have a MUCH harder time making the transition (unless AMD makes the boneheaded business decision of sharing their decade of work). AMD with the 7000 series has a massive advantage, being able to move over a team of hundreds of engineers with experience. Those materials engineers don't care whether the package is a GPU instead of a CPU; it's just another chiplet problem for them to solve.
 
If it wasn't AMD tech making it work, everyone would be doing it. If the only factor at play was TSMC-owned tech, they would be selling it. Every company has tweaks to the fab process... that doesn't make those tweaks the fab company's doing, or its property. AMD, Apple, and Nvidia are fabless and have massive teams of engineers doing nothing but working with their fab partners... TSMC does remarkably little for any of them; there is a reason tooling up for a chip on any process is stupidly expensive for the company designing the chip. You can't just sketch some electrical paths and say "here, make this."

I hope everyone moves to chiplets; it's good for consumers to help bring down costs. I'm not looking forward to $3k flagship GPUs. AMD has 5 generations of CPU chiplet experience, across 4 different fab processes, under their belt. No one else has shipped a mass-market design at this point.

Perhaps Nvidia's next design won't be monolithic. That is the rumor. However, they are also trying to push for an industry standard... which is the simple way of saying THEY don't want to put in the work themselves, because it's stupidly expensive and they are probably going to screw up and need to redo it for gen 2 (as AMD had to improve Zen's interfaces each gen... Zen 1 chiplets, then Zen 2, then Zen 3). They are hoping they can jump to fully functional and benefit from AMD's work. As much as AMD is an open-source-embracing company on the software side, I very much hope they say HA HA HA to the idea of contributing any of their patents to an open standard. TSMC does NOT own the patents on the tech AMD is using in their foundry. Nvidia is going to have a MUCH harder time making the transition (unless AMD makes the boneheaded business decision of sharing their decade of work). AMD with the 7000 series has a massive advantage, being able to move over a team of hundreds of engineers with experience. Those materials engineers don't care whether the package is a GPU instead of a CPU; it's just another chiplet problem for them to solve.
This makes sense as to why no one else has chiplet CPUs or GPUs. If TSMC were selling chiplet fabbing options, why wouldn't others be jumping on it, since it's obviously cheaper to produce chiplet-based CPUs with how great the yields are...
 
Nvidia has been doing this to themselves for generations now. They don't need AMD to do it for them. So many levels of shenanigans, from GPP (and before) to 4090 pricing to "unlaunching" to EVGA terminating a toxic relationship.
Unfortunately, consumers are fickle, forget or don't care, ONLY see FPS, and keep feeding the beast.

Only if you’re in the know. Most people just think they charge more because they do more.

If AMD wants to gain mindshare, they have to embarrass Nvidia.
 
I said most new games, nothing about your library. Nice straw man you have there!
The only straw man here is coming from you. You invented an argument out of nowhere and then proceeded to claim that he lied about something that you pretended he said. In fact, your post is one of the cleanest examples of the straw man fallacy I've seen in a while. There was never any mention of new or upcoming titles anywhere. He was solely talking about his library and his uses. You created an upcoming game argument and tried to say he lied about something that was never said.
 
The only straw man here is coming from you. You invented an argument out of nowhere and then proceeded to claim that he lied about something that you pretended he said. In fact, your post is one of the cleanest examples of the straw man fallacy I've seen in a while. There was never any mention of new or upcoming titles anywhere. He was solely talking about his library and his uses. You created an upcoming game argument and tried to say he lied about something that was never said.
Well, his library will only gain more ray-traced titles over time, even if he has few that use it now. That was part of my point.
 
If it wasn't AMD tech making it work, everyone would be doing it. If the only factor at play was TSMC-owned tech, they would be selling it. Every company has tweaks to the fab process... that doesn't make those tweaks the fab company's doing, or its property. AMD, Apple, and Nvidia are fabless and have massive teams of engineers doing nothing but working with their fab partners... TSMC does remarkably little for any of them; there is a reason tooling up for a chip on any process is stupidly expensive for the company designing the chip. You can't just sketch some electrical paths and say "here, make this."

I hope everyone moves to chiplets; it's good for consumers to help bring down costs. I'm not looking forward to $3k flagship GPUs. AMD has 5 generations of CPU chiplet experience, across 4 different fab processes, under their belt. No one else has shipped a mass-market design at this point.

Perhaps Nvidia's next design won't be monolithic. That is the rumor. However, they are also trying to push for an industry standard... which is the simple way of saying THEY don't want to put in the work themselves, because it's stupidly expensive and they are probably going to screw up and need to redo it for gen 2 (as AMD had to improve Zen's interfaces each gen... Zen 1 chiplets, then Zen 2, then Zen 3). They are hoping they can jump to fully functional and benefit from AMD's work. As much as AMD is an open-source-embracing company on the software side, I very much hope they say HA HA HA to the idea of contributing any of their patents to an open standard. TSMC does NOT own the patents on the tech AMD is using in their foundry. Nvidia is going to have a MUCH harder time making the transition (unless AMD makes the boneheaded business decision of sharing their decade of work). AMD with the 7000 series has a massive advantage, being able to move over a team of hundreds of engineers with experience. Those materials engineers don't care whether the package is a GPU instead of a CPU; it's just another chiplet problem for them to solve.
It is very true that AMD, or more specifically Jim Keller's team, invented chiplets and their concepts, though how much of the nitty-gritty was theirs is up for some debate, as both GloFo and Samsung were involved in getting specific parts up and running.

Other groups have used chiplets, but for many things they just don't work or aren't needed. ARM showcased a chiplet-based A72 back in 2019 that they developed in partnership with TSMC, but they determined that it was more effective to use TSMC's chip-on-wafer-on-substrate (CoWoS) packaging and essentially glue the "chiplets" together internally using an on-die interconnect mesh bus, which makes it an MCM-based design.

Intel has been working on its version of "chiplets" for a while, and on paper Intel's Foveros packaging technology should be the superior approach. That is the concept the UCIe group (of which AMD and Nvidia are members) is working with, because they agree there are still some fun nitty-gritty parts to work out.

Nvidia's next design is unlikely to be monolithic. TSMC and Apple jointly produced some new and exciting methods for interconnecting packages that get some insane speeds, which is what made the M1 Ultra a viable product, as it is an MCM-based processor. Nvidia has been publishing papers on MCM-based GPUs since 2017, the first being "MCM-GPU: Multi-Chip-Module GPUs for Continued Performance Scalability."

AMD has already launched their MCM-based GPU, the MI250, and by all accounts it is pretty problematic. The upcoming consumer GPU series simplifies the design by only moving the IO off the GPU die, but it still doesn't incorporate multiple GPU dies; that is probably still a few years out. The internal logistics of resource allocation and sharing between chips while maintaining backward compatibility is still beyond them, I understand. Apple is technically the first to get an MCM GPU to the consumer market in the M1 Ultra, and that is thanks to their custom interconnect (2.5 TB/s is just insane)... but Apple didn't have to worry about any backward compatibility.
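For scale, a rough comparison in Python using published figures (approximate, just for context):

[CODE]
# Rough scale check with published figures (approximate, for context only).
ultrafusion_gb_s = 2500  # Apple's quoted M1 Ultra die-to-die bandwidth (2.5 TB/s)
pcie4_x16_gb_s   = 32    # what two discrete GPUs would get over PCIe 4.0 x16
rtx3090_mem_gb_s = 936   # a single 3090's total GDDR6X bandwidth

print(f"~{ultrafusion_gb_s / pcie4_x16_gb_s:.0f}x a PCIe 4.0 x16 link")
print(f"~{ultrafusion_gb_s / rtx3090_mem_gb_s:.1f}x a 3090's entire memory bus")
[/CODE]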

I won't deny that AMD currently has the advantage of years of experience, but things are changing pretty quickly and AMD is working with everybody including Intel and Nvidia to sort it out.
 
Arc failed gamers for sure,
No, it hasn't. Arc failed gamers who want the fastest graphics, but for those who buy mid-range, Arc is not a bad choice. That's like saying Chevy failed because their Corvette is slow, even though the majority aren't able to afford one anyway.
but Arc Pro holds promise and the Intel server stack is looking better than it has in a LONG time. Intel just needs to deliver something that can stand up to the A2000 6GB and they will rake it in.
Intel needs much better drivers. Once they have properly functioning DX11 drivers, Arc becomes a very good choice.
AMD and Intel both need to be a little cautious of the Nvidia MX550. They are cheap, easily manufactured, and on a "mature" and low-demand process, and they also share the same pinout, package footprint, and power draw as the old MX450 chips. It's more than enough to counter the existing AMD 680M and Intel A350 while offering more features than both. The little things that Nvidia has been up to with camera filters, panning/tracking, and audio cleanup for video conferencing are kinda neat if you live in that sort of environment, so for any laptop purchases for management, it's the chip I look for at the moment.
The MX550 is trash. Half the performance of a GTX 1650 is not something AMD and Intel should worry about. It doesn't have ray tracing and is based on Turing. AMD now includes RDNA 2 graphics in all their new CPUs, so unless the MX550 is more feature-rich and faster, the MX550 is just more silicon added to laptops for no reason.
 
Well, his library will only gain more ray-traced titles over time, even if he has few that use it now. That was part of my point.
You could just own up to falsely saying I'm a liar instead of moving the goalposts.

Whatever, man. I'm sorry I offended the honor of Nvidia and Raytracing.
 
Intel needs much better drivers. Once they have properly functioning DX11 drivers, Arc becomes a very good choice.
But they have made it clear they are not putting out DX11 drivers; that is off the table. They would be offering limited support for specific titles, not general DX11.
The MX550 is trash. Half the performance of a GTX 1650 is not something AMD and Intel should worry about. It doesn't have ray tracing and is based on Turing. AMD now includes RDNA 2 graphics in all their new CPUs, so unless the MX550 is more feature-rich and faster, the MX550 is just more silicon added to laptops for no reason.
The MX550 is a godsend for office work. We need the Nvidia product stack there for their work with camera focusing, auto-sense zoom, audio cleanup, and the various streaming aspects, where the 1650 is overkill. AMD did launch the RDNA 2 680M for their mobile lineup and it is slightly better performing (~10%), but it consumes 2x the power while having none of the features for Zoom, Teams, etc.
 
It is very true that AMD, or more specifically Jim Keller's team, invented chiplets and their concepts, though how much of the nitty-gritty was theirs is up for some debate, as both GloFo and Samsung were involved in getting specific parts up and running.

Other groups have used chiplets, but for many things they just don't work or aren't needed. ARM showcased a chiplet-based A72 back in 2019 that they developed in partnership with TSMC, but they determined that it was more effective to use TSMC's chip-on-wafer-on-substrate (CoWoS) packaging and essentially glue the "chiplets" together internally using an on-die interconnect mesh bus, which makes it an MCM-based design.

Intel has been working on its version of "chiplets" for a while, and on paper Intel's Foveros packaging technology should be the superior approach. That is the concept the UCIe group (of which AMD and Nvidia are members) is working with, because they agree there are still some fun nitty-gritty parts to work out.

Nvidia's next design is unlikely to be monolithic. TSMC and Apple jointly produced some new and exciting methods for interconnecting packages that get some insane speeds, which is what made the M1 Ultra a viable product, as it is an MCM-based processor. Nvidia has been publishing papers on MCM-based GPUs since 2017, the first being "MCM-GPU: Multi-Chip-Module GPUs for Continued Performance Scalability."

AMD has already launched their MCM-based GPU, the MI250, and by all accounts it is pretty problematic. The upcoming consumer GPU series simplifies the design by only moving the IO off the GPU die, but it still doesn't incorporate multiple GPU dies; that is probably still a few years out. The internal logistics of resource allocation and sharing between chips while maintaining backward compatibility is still beyond them, I understand. Apple is technically the first to get an MCM GPU to the consumer market in the M1 Ultra, and that is thanks to their custom interconnect (2.5 TB/s is just insane)... but Apple didn't have to worry about any backward compatibility.

I won't deny that AMD currently has the advantage of years of experience, but things are changing pretty quickly and AMD is working with everybody including Intel and Nvidia to sort it out.

It wasn't Jim alone, but yes, he was working for AMD and heading that team.

Nvidia is pushing hard for a standard, yes.
Intel's tech is a bust... it's been in R&D mode for over a decade. Foveros will never see mass production. Just look at the insane amounts of wattage getting pushed through chips at the die sizes we are at... and you can see how stacking is going to be a bust. Stacking cache like AMD's 3D cache is one thing... having logic stacked on top of logic is another. Intel doesn't talk about it, but they have had tons of issues with crosstalk. It's just not going to work for logic, or at the very least they are going to have to be extremely picky in the design about what gets stacked where. The tolerances will exclude a lot of things from being stacked.

AMD is a part of the standards talks... however, as you point out, basically everyone is going their own way. I doubt Apple comes on board with a "standard"... they will do their own thing. At that point, what does AMD gain? Nvidia offers nothing AMD doesn't already have... and sharing only benefits Nvidia, which doesn't help AMD. As much as Nvidia has patented ideas for interconnects, what I have read is very theoretical and not materials-science based... they talk about methods on the design side to make the computation more amenable to splitting, making it easier on controllers. Unless I have missed some filings that are more process related.

I expect Nvidia's next chips will be chiplet-based... they have a lot of money and smart people; I'm sure what they come up with will work, and work well. First gen is going to cost them. Long term it will save them a ton of money. AMD is already there... the 7000 launch should be interesting. I think it will also be interesting to see how hard Nvidia goes into the chiplet space with their first gen. They may be pretty conservative, which might be smart.
 
Nvidia is pushing hard for a standard, yes.
Nvidia was one of the last to sign up; it's mostly been the big three, TSMC, Samsung, and Intel, who are pushing for the standard. TSMC and Samsung don't want to have to change up tooling for each different client when they do their chiplet designs, as it adds cost and complexity, so they want a single method for doing it. Intel is playing ball because they want in on fabbing other people's parts; they've already partnered with Qualcomm at 16nm for god only knows what.

Intel's tech is a bust... it's been in R&D mode for over a decade. Foveros will never see mass production. Just look at the insane amounts of wattage getting pushed through chips at the die sizes we are at... and you can see how stacking is going to be a bust. Stacking cache like AMD's 3D cache is one thing... having logic stacked on top of logic is another. Intel doesn't talk about it, but they have had tons of issues with crosstalk. It's just not going to work for logic, or at the very least they are going to have to be extremely picky in the design about what gets stacked where. The tolerances will exclude a lot of things from being stacked.
The leaks of the Sapphire Rapids stuff on the E3 stepping were pretty good, and they have moved up to the E5 stepping now.
Intel has been shipping Ponte Vecchio and Sapphire Rapids to the Aurora supercomputer since September, so it must have passed Cray's validation testing between the April leaks and then. Both of those use Foveros; Ponte Vecchio is packing 63 tiles, and Sapphire Rapids up to 60.

I expect Nvidia's next chips will be chiplet-based.
I don't know if they go chiplet or they go MCM. MCM has much lower latency than chiplets, and I would think that added latency is the opposite of what you want on a GPU; it would also fall more in line with the research papers they have been publishing.
 
Nvidia was one of the last to sign up; it's mostly been the big three, TSMC, Samsung, and Intel, who are pushing for the standard. TSMC and Samsung don't want to have to change up tooling for each different client when they do their chiplet designs, as it adds cost and complexity, so they want a single method for doing it. Intel is playing ball because they want in on fabbing other people's parts; they've already partnered with Qualcomm at 16nm for god only knows what.


The leaks of the Sapphire Rapids stuff on the E3 stepping were pretty good, and they have moved up to the E5 stepping now.
Intel has been shipping Ponte Vecchio and Sapphire Rapids to the Aurora supercomputer since September, so it must have passed Cray's validation testing between the April leaks and then. Both of those use Foveros; Ponte Vecchio is packing 63 tiles, and Sapphire Rapids up to 60.


I don't know if they go chiplet or they go MCM. MCM has much lower latency than chiplets, and I would think that added latency is the opposite of what you want on a GPU; it would also fall more in line with the research papers they have been publishing.
I can see why Samsung wants a standard for sure... even TSMC, I guess; they wouldn't want anyone tooled for Samsung and not them. I still don't see the upside for AMD... I suspect, as you say, Nvidia's solution will be very different. Which is good; let the best tech win between AMD, Nvidia, and Intel.

With Intel's stacking tech... yeah, I don't see them doing anything consumer-facing with it still. They might manage enough working bits to power their supercomputer commitments. I just really, really doubt it's cost-effective enough to be an option for anything in the consumer sphere. Fantastic if I'm wrong; Intel is going to have to do something to keep up going forward... kudos to them for mostly keeping up with what they have working, though.
 
Intel needs much better drivers. Once they have properly functioning DX11 drivers, Arc becomes a very good choice.

I really want Intel to shape up and turn Arc into a great product, but unless they actually invest heavily in their driver team, the question isn't when they will fix their DX11 (and other) issues but if they will. Intel's current driver team has been dogshit for a couple of decades now, and without upper management taking the GPU market seriously, that will never change.
 
The current chapter of Fortnite has a chrome infection taking over the island. It's UE5, it has ray tracing, and it's so goddamn pretty. It flies with RT on, on my 6900 XT.
 
But they have made it clear they are not putting out DX11 drivers; that is off the table. They would be offering limited support for specific titles, not general DX11.
That's stupid of Intel. Valve literally made a Vulkan driver for Intel on Linux, and even made a Vulkan driver for AMD on Linux. Then on top of that Valve also made DXVK and VKD3D-Proton so DX11 and DX12 games ran. I get that DX11 is old but when someone else is making your drivers, you know you have problems.
The MX550 is a godsend for office work. We need the Nvidia product stack there for their work with camera focusing, auto-sense zoom, audio cleanup, and the various streaming aspects, where the 1650 is overkill. AMD did launch the RDNA 2 680M for their mobile lineup and it is slightly better performing (~10%), but it consumes 2x the power while having none of the features for Zoom, Teams, etc.
How exactly does Nvidia help in regard to those productivity applications? What you listed is a lot of generic stuff that any GPU can do. Also, the 680M doesn't matter since, again, AMD is including RDNA 2 graphics in all their CPUs, and next year they'll include RDNA 3 in their laptops.
The only difference is that it did pay off in the end when AMD did get better drivers. Also, weren't AMD's DX10 cards superior to Nvidia's due to DX10.1? Intel just needs to hire Valve to make drivers for them because Valve seems to be doing a great job at it.


I really want Intel to shape up and turn Arc into a great product, but unless they actually invest heavily in their driver team, the question isn't when they will fix their DX11 (and other) issues but if they will. Intel's current driver team has been dogshit for a couple of decades now, and without upper management taking the GPU market seriously, that will never change.
Intel is cutting costs by using a Microsoft-made DX11-to-DX12 wrapper, which is definitely not optimized enough to make it work. Even a good wrapper should only lose 1% to 3% performance compared to native. Valve (which I mention a lot) has made many changes on Linux to optimize performance. Again, for other companies' hardware. Valve made Vulkan drivers for Intel on Linux, and then did the same for AMD. On Linux you have AMD's AMDVLK and Valve's RADV, and everyone uses RADV. That wasn't enough, so Valve commissioned ACO, which replaces LLVM because it was too slow. Then Valve decided Vulkan is the future, so they helped create DXVK and VKD3D-Proton. Then Valve said the Linux kernel was too slow for Wine gaming, and then they helped create futex2, which speeds up Windows games.

Valve isn't selling the hardware, but it is pumping a lot of resources into it so this hardware works well on Linux, and in some cases these games run faster on Linux than on Windows. So either Intel needs to make DX11 drivers, or Intel needs to hire Valve to make a DX11->DX12 wrapper, since Valve is so good at it on Linux.
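To put that 1%-3% wrapper claim in perspective, here's a quick Python sketch; the 0.1 ms per-frame translation cost is a number I'm assuming for illustration, not a measurement of any particular wrapper.

[CODE]
# Illustrative only: how a fixed per-frame translation cost becomes a
# percentage hit. The 0.1 ms overhead is an assumed figure.
overhead_ms = 0.1

for native_fps in (60, 144, 240):
    native_frametime_ms = 1000 / native_fps
    wrapped_fps = 1000 / (native_frametime_ms + overhead_ms)
    loss_pct = (native_fps - wrapped_fps) / native_fps * 100
    print(f"{native_fps} fps native -> {wrapped_fps:.1f} fps wrapped ({loss_pct:.1f}% loss)")
[/CODE]

A fixed per-frame cost hurts more the higher your frame rate, which is exactly where a badly optimized wrapper falls apart.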
 
One thing that is nice on Arc GPUs is the I/O: the A770 has 1x HDMI 2.1 and 3x DP 2.0, which is quite the amount of pixels. Combined with a motherboard that supports 4 DP 1.4 monitors, it gives a regular Dell/HP-type machine the ability to support 6-8 monitors/VR/TVs, many at 8K 120 fps.
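Quick back-of-envelope in Python on what 8K 120 actually costs in bandwidth, assuming 10-bit color and ignoring blanking, versus a DP 2.0 link (UHBR20 assumed as the spec's upper bound; actual cards may run lower link rates):

[CODE]
# Uncompressed bandwidth for one 8K 120 Hz stream
# (assumes 10 bits per color channel, ignores blanking).
width, height, refresh_hz, bits_per_pixel = 7680, 4320, 120, 30

gbps_needed = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"~{gbps_needed:.0f} Gbit/s uncompressed")  # ~119 Gbit/s

# DP 2.0 UHBR20 carries roughly 77 Gbit/s of payload, so a single 8K 120
# stream still leans on Display Stream Compression (DSC).
uhbr20_payload_gbps = 77.4
print(f"needs a DSC ratio of about {gbps_needed / uhbr20_payload_gbps:.1f}:1")
[/CODE]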
 
How exactly does Nvidia help in regard to those productivity applications? What you listed is a lot of generic stuff that any GPU can do. Also, the 680M doesn't matter since, again, AMD is including RDNA 2 graphics in all their CPUs, and next year they'll include RDNA 3 in their laptops.
The 680M is the RDNA 2 GPU that is included in the 6000-series mobile chips; the 610M is the version built into the 7000-series desktop chips.
Nvidia has a lot of stuff with automatic camera tracking and other features intended for streamers that actually translate really well to the new remote workplace. Is it needed? No, but does it make management feel important? YES. Does it work better than Apple's Center Stage? No, but it does at least make the staff using PCs feel like they aren't getting the "inferior" device. Yeah, it sort of does; they still eyeball the Macs like they are nicer, but they understand that parts of their workflow just don't translate, and they don't want to change their workflows to adapt to a new platform.
 
People act like FSR 2 doesn't exist.
I myself am still on an RX 5700 and it can handle my UE5 projects that use Lumen fine at 1080p, although Lumen can be accelerated by DXR hardware, which I don't have.
I even have room for a 20% overclock using MPT and haven't really needed it yet.

The software version of Lumen doesn't trace against triangles. It's just a straight-up different rendering technique.

The hardware version of Lumen can be much slower, though it's usually higher quality and has more features.
 
The 680M is the RDNA 2 GPU that is included in the 6000-series mobile chips; the 610M is the version built into the 7000-series desktop chips.
Nvidia has a lot of stuff with automatic camera tracking and other features intended for streamers that actually translate really well to the new remote workplace. Is it needed? No, but does it make management feel important? YES. Does it work better than Apple's Center Stage? No, but it does at least make the staff using PCs feel like they aren't getting the "inferior" device. Yeah, it sort of does; they still eyeball the Macs like they are nicer, but they understand that parts of their workflow just don't translate, and they don't want to change their workflows to adapt to a new platform.
What software makes use of this camera tracking feature?
 
What software makes use of this camera tracking feature?
Pretty much all software that uses a camera (Nvidia Broadcast creates a virtual camera that is treated as a normal camera by other software), and the same goes for the mic and speakers.

When Broadcast is running, you can choose Broadcast Speakers as your speakers, Broadcast as your mic, etc. Tracking is something the Kinect on Xbox/Skype was doing, and I imagine many others do too; same for noise reduction.

Not sure how much better it is than the built-in solutions from Zoom, Skype, Webex, and others, but it is getting quite impressive how much background noise can disappear, and the quality of the green-screen-without-a-green-screen background replacement effect.
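That's the nice part about it being a virtual camera: anything that enumerates capture devices just sees it. A minimal sketch, assuming Python with OpenCV installed and that the Broadcast camera happens to be device index 1 (indices vary per machine):

[CODE]
# Minimal sketch: the Broadcast virtual camera registers as a normal capture
# device, so any app that opens a webcam by index picks it up unchanged.
# Device index 1 is an assumption; it varies per machine.
import cv2

cap = cv2.VideoCapture(1)   # 0 is usually the physical webcam
ok, frame = cap.read()
if ok:
    print("Got a frame from the virtual camera:", frame.shape)
else:
    print("No device at that index; check your camera list.")
cap.release()
[/CODE]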
 
Pretty much all software that uses a camera (Nvidia Broadcast creates a virtual camera that is treated as a normal camera by other software), and the same goes for the mic and speakers.

When Broadcast is running, you can choose Broadcast Speakers as your speakers, Broadcast as your mic, etc. Tracking is something the Kinect on Xbox/Skype was doing, and I imagine many others do too; same for noise reduction.

Not sure how much better it is than the built-in solutions from Zoom, Skype, Webex, and others, but it is getting quite impressive how much background noise can disappear, and the quality of the green-screen-without-a-green-screen background replacement effect.
And if your conference rooms have been equipped with cameras like the Logitech Meetup, it absolutely crushes it! Management loves it when the other side messages them after a meeting asking about their setup because it looks and sounds better than theirs.
 