Lovelace PCIe 4.0

There is no point in adding cost and complexity when 4.0 is more than fast enough for next-gen cards. I don't believe it will be a rebrand, but just the next evolution of their current tech. Don't expect a huge change until Hopper at least.
 
Incidentally, how much of the PCIe 4.0 x16 bandwidth do even the highest current-gen cards take up while gaming? Is it even half?
Because some games can see a small difference between PCIe 3.0 and 4.0, I would imagine it must be a little more than half at minimum (if nothing besides bandwidth is going on). But considering how small that difference is compared with the 3.0 x16 vs 3.0 x8 gap, one can imagine there is a lot of room, especially if more and more asset decompression is done on the card via the various DirectStorage solutions, which I would imagine means the same bandwidth will carry far more game data.
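For rough context, here is a minimal back-of-envelope sketch of the link speeds being discussed, using theoretical peak rates with 128b/130b encoding rather than real-world throughput:

```python
# Approximate peak PCIe link bandwidth in GB/s, ignoring protocol overhead
# beyond the 128b/130b line encoding. Real-world throughput is lower.
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * (128 / 130) / 8 * lanes  # GT/s -> GB/s per lane, times lane count

for name, (gt, lanes) in {
    "PCIe 3.0 x8":  (8.0, 8),
    "PCIe 3.0 x16": (8.0, 16),
    "PCIe 4.0 x16": (16.0, 16),
    "PCIe 5.0 x16": (32.0, 16),
}.items():
    print(f"{name}: ~{pcie_bandwidth_gbps(gt, lanes):.1f} GB/s")
# ~7.9, ~15.8, ~31.5 and ~63.0 GB/s respectively
```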
 
Might make a tangible difference once game titles start picking up DirectStorage.
 
Might make a tangible difference once game titles start picking up DirectStorage.

I just thought the exact opposite... which is a bit funny.

Texture compression ratios achieved by some modern algorithms can go as high as 2:1 to 9:1, lossless:
http://www.radgametools.com/oodlekraken.htm

Meaning that if it is decompressed on the GPU, instead of being decompressed on the CPU and then passed to the GPU, you need 2 to 9 times less PCIe bandwidth for it.
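To put hypothetical numbers on that (a minimal sketch; the 4 GB/s stream is an invented figure and the ratios are just the 2:1 to 9:1 range quoted above):

```python
# PCIe traffic needed for the same texture stream, depending on where
# decompression happens. The stream rate is a made-up example figure.
raw_stream_gbps = 4.0  # hypothetical rate of decompressed texture data a game wants

for ratio in (2, 9):
    cpu_decode_bus_gbps = raw_stream_gbps          # decompressed on CPU, raw data crosses PCIe
    gpu_decode_bus_gbps = raw_stream_gbps / ratio  # decompressed on GPU, compressed data crosses PCIe
    print(f"{ratio}:1 -> CPU-side decode: {cpu_decode_bus_gbps:.1f} GB/s over the bus, "
          f"GPU-side decode: {gpu_decode_bus_gbps:.2f} GB/s over the bus")
```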

If titles start to use DirectStorage but continue to do the decompression on the CPU side (quite possible, considering the sleeping cores), you could be right.
 
I just thought the exact opposite... which is a bit funny.

Texture compression ratios achieved by some modern algorithms can go as high as 2:1 to 9:1, lossless:
http://www.radgametools.com/oodlekraken.htm

Meaning that if it is decompressed on the GPU, instead of being decompressed on the CPU and then passed to the GPU, you need 2 to 9 times less PCIe bandwidth for it.

If titles start to use DirectStorage but continue to do the decompression on the CPU side (quite possible, considering the sleeping cores), you could be right.
There's more to DirectStorage than just compression ratios.
 
There's more to DirectStorage than just compression ratios.
Considering that the first games using it do not even use GPU decompression, you are certainly right.

With how little I know about any of this and how little I have played with DirectStorage, I could obviously be really wrong, but would it not make sense that if decompression starts to happen in good part on the GPU side, bandwidth becomes less relevant, not more, which counterbalances the fact that more gets read from the NVMe drive?
 
Does it matter? If it's not going to saturate a PCIe 4.0 bus, then who cares? This would only be an issue if they were somehow limiting the potential of the card by making it PCIe 4.0 instead of 5.0.
 
Because some games can see a small difference between PCIe 3.0 and 4.0, I would imagine it must be a little more than half at minimum (if nothing besides bandwidth is going on). But considering how small that difference is compared with the 3.0 x16 vs 3.0 x8 gap, one can imagine there is a lot of room, especially if more and more asset decompression is done on the card via the various DirectStorage solutions, which I would imagine means the same bandwidth will carry far more game data.
Unless those titles also take a massive hit running on a 3.0 x8 link, a few percent gain is more likely from reduced latency and/or faster completion of burst requests than from saturating more than 50% of the bus.
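A quick illustration of that point (the burst size is hypothetical and the bandwidths are theoretical peaks, not measured throughput):

```python
# A streaming burst can complete noticeably faster on a faster link even
# when average bus utilization over a whole second stays in the single digits.
burst_mb = 256                 # hypothetical asset burst during a scene transition
bw_mb_s = {"PCIe 3.0 x16": 15_800, "PCIe 4.0 x16": 31_500}  # approx peak MB/s

for link, bw in bw_mb_s.items():
    print(f"{link}: burst completes in ~{burst_mb / bw * 1000:.1f} ms")
# ~16 ms vs ~8 ms: frames waiting on that burst arrive sooner on 4.0, which
# can show up as a small average-FPS gain without high overall bus saturation.
```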
 
Considering that the first games using it do not even use GPU decompression, you are certainly right.

With how little I know about any of this and how little I have played with DirectStorage, I could obviously be really wrong, but would it not make sense that if decompression starts to happen in good part on the GPU side, bandwidth becomes less relevant, not more, which counterbalances the fact that more gets read from the NVMe drive?
The flaw with your argument is that it assumes the dataset in question is fixed in size. For an older title having this lazily patched in, that might be the case. However, for a newer title the developer is given more freedom to utilize larger datasets while leveraging the newer technologies. In contrast to the old ways, your maximum streamable dataset now becomes larger and arrives more frequently and in larger chunks, meaning fewer on-the-fly compromises such as LOD placeholders, asset pop-in, etc.

Worst-case scenario, the title in question does not take advantage of any of this and the net result is the same as before.
 
The flaw with your argument is that it assumes the dataset in question is fixed in size. For an older title having this lazily patched in, that might be the case. However, for a newer title the developer is given more freedom to utilize larger datasets while leveraging the newer technologies. In contrast to the old ways, your maximum streamable dataset now becomes larger and arrives more frequently and in larger chunks, meaning fewer on-the-fly compromises such as LOD placeholders, asset pop-in, etc.

Sure, it is an all-else-being-equal argument; a much larger game could obviously require more bandwidth than a smaller one. Depending on the compression ratio, though, it would need to be quite a lot larger than before to actually end up larger on the bus (again, assuming they ever go with GPU decompression, which still seems quite uncertain to me).
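For instance (purely invented numbers, just to illustrate the trade-off):

```python
# How much larger a game's streamed data can get before it actually uses
# more bus bandwidth than before, if compressed data now crosses the bus.
old_bus_gbps = 4.0        # hypothetical: raw assets sent over PCIe today
growth = 3.0              # hypothetical: new title streams 3x more raw asset data
ratio = 4.0               # hypothetical GPU-side decompression ratio

new_bus_gbps = old_bus_gbps * growth / ratio
print(f"old: {old_bus_gbps:.1f} GB/s, new: {new_bus_gbps:.1f} GB/s on the bus")
# 3x the data at 4:1 compression still means less PCIe traffic than before;
# the dataset has to grow by more than the compression ratio to exceed it.
```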
 
My point was, if Lovelace is so similar to Ampere, is it really a new chip or just a rebrand?
 
My point was, if Lovelace is so similar to Ampere, is it really a new chip or just a rebrand?

Your basis for "so similar" is that it uses PCIe 4.0 (which, as others have pointed out, is a complete non-issue for now and for quite a while, possibly even after PCIe 5.0 native cards come out) and that it's pin-compatible. I'm not sure that's a significant basis to claim it's a rebrand, but maybe someone else can weigh in.
 
My point was, if Lovelace is so similar to Ampere, is it really a new chip or just a rebrand?
What do you mean by so similar, the fact that they would both be using PCIe 4.0?
The 5700 XT is PCIe 4.0 and the 6800 XT is PCIe 4.0; was that just rebranding? (And I imagine we had a long list of PCIe 3.0 cards in the past; the Maxwell-to-Pascal jump could serve as an example.)

Lovelace will probably be on a 4 or 5 nm TSMC process instead of Samsung 8, with so many more units that it could be more than 50% more performance; that alone makes it more than a branding exercise.

Is your question something like: is the announced Lovelace a very similar architecture, just with many more of the same units (possible because of the smaller process), more cache, etc., to achieve more performance?
That seems a bit unlinked; you can have quite different architectures running on the same PCI Express version and quite similar ones running on different PCI Express versions, no?
 
If you're going to mock an argument by reductio ad absurdum, no half measures, please.

Lovelace will be PCIe 4, the 5700 XT is PCIe 4. They're basically the same thing, right? NVIDIA is just slapping its logo on an old AMD part and calling it good.

🤣🤣🤣🤣🤣🤣🤣
 
Your basis for "so similar" is that it uses PCIe 4.0 (which, as others have pointed out, is a complete non-issue for now and for quite a while, possibly even after PCIe 5.0 native cards come out) and that it's pin-compatible. I'm not sure that's a significant basis to claim it's a rebrand, but maybe someone else can weigh in.

It's not. If you need 8 lanes of traffic and you build a new highway with 10, it isn't a bad design that's not up to current standards just because you didn't build 12. Designing for PCIe 4.0 doesn't make it a rebrand unless the architecture is otherwise identical. We're at the point right now where this comes down to marketing, essentially: "number is bigger, card is better." If you're not saturating PCIe 4.0, then there is no benefit to designing a card for, and marketing it as, PCIe 5.0, and it certainly doesn't make it a rebrand.
 
Does this, and it being "pin compatible" with Ampere, mean it is just a rebrand?
Breaking News: AMD admits that they haven't launched a single new CPU since the AM4 socket was released in 2016. All they have done is rebrand their original AM4-compatible CPU.

Makes sense, right? I mean, if being "pin compatible" means "rebrand"....
 
Just making an observation based on the details released so far.

- Pin compatible (what other major GPU releases were pin-compatible with a previous gen?)
- Same PCIe as previous
- Absurd power requirements even with a node shrink

Wonder if something went wrong and they had to pivot late in the development cycle.

Seems strange. And yes, AMD has laid many turds. I'm not asking this because I am an AMD fanboy/fartboy -- it just seems odd.
 
Just making an observation based on the details released so far.

- Pin compatible (what other major GPU releases were pin-compatible with a previous gen?)
- Same PCIe as previous
- Absurd power requirements even with a node shrink

Wonder if something went wrong and they had to pivot late in the development cycle.

Seems strange. And yes, AMD has laid many turds. I'm not asking this because I am an AMD fanboy/fartboy -- it just seems odd.

A node shrink can be used in different ways to your advantage. You can produce the same chip that uses less power, or you can add a whole bunch of extra CUDA/tensor cores etc., boost clocks (silicon willing), and eat up all the power savings from the node shrink, or some combination in between. Node change + more power = likely a huge performance boost. I will take more performance, please and thanks. If anything, I figure they were hoping to get a performance boost and keep the power levels the same or lower, but speculation about an upcoming AMD refresh is forcing their hand to go balls to the wall with power. As others have said, the PCIe bus / pin compatibility says very little about the silicon on top of it, as long as they don't present a bottleneck in some way, which they don't.
 
Just making an observation based on the details released so far.

- Pin compatible (what other major GPU releases were pin-compatible with a previous gen?)
- Same PCIe as previous
- Absurd power requirements even with a node shrink

Wonder if something went wrong and they had to pivot late in the development cycle.

Seems strange. And yes, AMD has laid many turds. I'm not asking this because I am an AMD fanboy/fartboy -- it just seems odd.

PCIe 3.0 came out in 2010. Many generations of GPUs were built on that interface before PCIe 4.0 came out. It's not unusual; they don't have a new PCIe standard for every GPU release.
 
Just making an observation based on the details released so far.

- Pin compatible (what other major GPU releases were pin-compatible with a previous gen?)
This doesn't mean a whole lot. It's normal in the CPU world. It may have been done as a cost-saving measure for AIBs.

- Same PCIe as previous
This is normal and, at the very most and only in a very limited set of titles, has nearly zero performance impact.

- Absurd power requirements even with a node shrink
This originates with claims made by a guy with more than one AMD tattoo on his body and a portrait of Dr Su in his wallet.
 
There is no problem here. I'll eat a (hopefully tasty) shoe if that is proven wrong.
 