GPU - diminishing returns reached

Given how massively parallel GPUs are I don't get why they can't just make them bigger. It's not like CPUs where additional cores don't necessarily get you anything. GPUs are parallel by nature. Who gives a shit if you can't keep packing in transistors at the same form factor. Just make them bigger and slap water cooling blocks on them.

The cost is in the wafer. If you’re building a supercomputer you don’t really care. But if the only way to make a faster GPU is to put two of them on a PCB, you’ve effectively doubled the cost as well as doubled the difficulty with binning.

The Vega II, as an example, is a specialized part that costs $5,000 for a single card.

Although the 3080 is expected to be faster than the previous gen, if the only way to go faster past that is multi-GPU, we can more or less expect GPU costs at the top end to shoot through the roof. As appealing as SLI was (it’s effectively no longer supported), fewer than 0.01% of buyers could really afford multiple top-end GPUs. Putting multiple GPUs on the same PCB just gives you the same cost and complexity problems as SLI.
 
But aren't the new interconnects fast enough to finally pool all the memory from multiple cards? I thought duplicating the memory was the biggest weakness of SLI/CrossFire.
 
The cost is in the wafer. If you’re building a supercomputer you don’t really care. But if the only way to make a faster GPU is to put two of them on a PCB, you’ve effectively doubled the cost as well as doubled the difficulty with binning.

The Vega II, as an example, is a specialized part that costs $5,000 for a single card.

Although the 3080 is expected to be faster than the previous gen, if the only way to go faster past that is multi-GPU, we can more or less expect GPU costs at the top end to shoot through the roof. As appealing as SLI was (it’s effectively no longer supported), fewer than 0.01% of buyers could really afford multiple top-end GPUs. Putting multiple GPUs on the same PCB just gives you the same cost and complexity problems as SLI.
Exactly. We can just look to the past for gaming card examples. The price of the dual-GPU GTX 690 was exactly twice that of the single-GPU GTX 680. On the AMD side of things the R9 295X2 was nearly thrice the price of the R9 290X.
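Quick back-of-the-napkin check of those ratios (launch MSRPs from memory, so treat the exact figures as approximate):

```python
# Rough launch-MSRP ratios for dual-GPU cards vs. their single-GPU siblings.
# Prices are assumed launch MSRPs in USD; double-check before quoting them.
launch_msrp = {
    "GTX 680": 499, "GTX 690": 999,       # NVIDIA, Kepler era
    "R9 290X": 549, "R9 295X2": 1499,     # AMD, Hawaii era
}

for single, dual in [("GTX 680", "GTX 690"), ("R9 290X", "R9 295X2")]:
    ratio = launch_msrp[dual] / launch_msrp[single]
    print(f"{dual} cost {ratio:.2f}x the {single}")
# GTX 690 cost 2.00x the GTX 680
# R9 295X2 cost 2.73x the R9 290X
```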
 
Another problem is the constant pressure to buy the latest and greatest. For example, we built a custom 2080 Super rig for a customer on an X299 platform.
Out of curiosity I plugged his system into my 1440p 144 Hz LG VA monitor and gave it a spin, and found that the gaming experience was not worth replacing my 5-year-old 390X. With RTX, DLSS and the FPS counter turned off, you could not tell which card/PC was running. The difference between ultra and high is negligible to most casual eyes. Now, if the 2080 were 100 bucks, yeah, I would go for it, but not at 800 dollars.
 
Another problem is the constant pressure to buy the latest and greatest. For example, we built a custom 2080 Super rig for a customer on an X299 platform.
Out of curiosity I plugged his system into my 1440p 144 Hz LG VA monitor and gave it a spin, and found that the gaming experience was not worth replacing my 5-year-old 390X. With RTX, DLSS and the FPS counter turned off, you could not tell which card/PC was running. The difference between ultra and high is negligible to most casual eyes. Now, if the 2080 were 100 bucks, yeah, I would go for it, but not at 800 dollars.
This is a little bit of a different issue. It has more to do with how much is "enough". For a lot of folks, having a fast computer is a hobby, and it's worth pursuing to them because they like it.
It's like the often used car analogy: some people just want to have 1000HP engines even though 99% of the time they are "forced" to drive street legal speed limits. Honestly for most folks they'd have more gains by getting good tires, better performing brakes, and a street tuned suspension (which is where the analogy breaks down). Is a 1000HP engine required to enjoy racing on a track? Not at all. But some folks like that sort of thing and that's okay.

On the other end of the spectrum, performance gains at the top of the GPU stack benefit all of us, because the less expensive cards (which nVidia and AMD really sell more of anyway) also end up more performant than last gen (barring the cases where they just take a previous-gen card and rename it).
Don't feel the pressure to upgrade unless that's just what you want to do. There are plenty of ways to game at all pricing levels. I would be lying if I didn't say that it's nice to have a top end card at launch every now and again though.
 
Yeah, but it's not worth a mortgage payment, and unless you are doing computational science, GPU horsepower does not benefit you much. As an undergraduate, our lab computers were connected to the Cray, which at the time was the fastest supercomputer in the world; we did molecular determination. When I ran an analytical XRD/electron microscope research center, my PC at home was faster than 95% of the engineering campus PCs.
We are lucky to be able to afford, through credit or cash, a system that most users do not desire or need. For most people, Facebook and the internet are their only use, and that is the catch: no one is spending $1,200 on a Ti to game for 10 minutes a day.
 
But aren't the new interconnects fast enough to finally pool all the memory from multiple cards? I thought duplicating the memory was the biggest weakness of SLI/CrossFire.
It's not just that; it's also coordinating all those accesses. What if one board is writing to memory while the other is trying to read? You need some sort of locking mechanism and coordination between them.

Also, just look at the difference between the 3300X and the 3100. The only real difference is that the 3100's cores are split across two CCXs and have to communicate over a fast internal bus, yet there is a large performance difference. GPUs are even more sensitive to memory access (hence they are on GDDR6 rather than DDR4).
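A toy illustration of that coordination cost, with ordinary CPU threads standing in for two boards sharing one pool (nothing here is GPU-specific, it's just the locking pattern):

```python
# Toy sketch: two "boards" (threads) sharing one memory pool.
# Without the lock the reader could observe a half-finished write;
# with it, every access serializes, which is exactly the overhead
# a multi-GPU shared-memory scheme has to pay somewhere.
import threading

shared_pool = [0] * 1024          # stand-in for pooled VRAM
pool_lock = threading.Lock()      # the coordination mechanism

def writer(value):
    with pool_lock:               # board A writes
        for i in range(len(shared_pool)):
            shared_pool[i] = value

def reader(results):
    with pool_lock:               # board B must wait for the write to finish
        results.append(sum(shared_pool))

results = []
t1 = threading.Thread(target=writer, args=(7,))
t2 = threading.Thread(target=reader, args=(results,))
t1.start(); t2.start(); t1.join(); t2.join()
print(results[0])                 # either 0 or 7*1024, never a torn mix
```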
 
Another problem is the constant pressure to buy the latest and greatest. For example, we built a custom 2080 Super rig for a customer on an X299 platform.
Out of curiosity I plugged his system into my 1440p 144 Hz LG VA monitor and gave it a spin, and found that the gaming experience was not worth replacing my 5-year-old 390X. With RTX, DLSS and the FPS counter turned off, you could not tell which card/PC was running. The difference between ultra and high is negligible to most casual eyes. Now, if the 2080 were 100 bucks, yeah, I would go for it, but not at 800 dollars.
I highly doubt that. The 2080 Super is 175-200% faster than the 390X. Put it this way: The 390X was a 1080p 60 FPS card when it came out, while the 2080 Super is a 4K 60 FPS card right now. At 1440p, if you were getting 60 FPS with the 390X you're getting at least 165 FPS with the 2080 Super.
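Spelled out, taking that 175-200% figure at face value (it's an estimate, not a benchmark):

```python
# "175-200% faster" means 2.75x-3.00x the frame rate.
base_fps = 60                      # hypothetical 390X result at 1440p
for uplift in (1.75, 2.00):        # claimed speedup range
    print(f"{base_fps * (1 + uplift):.0f} FPS")
# 165 FPS and 180 FPS
```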
 
Yeah, but it's not worth a mortgage payment, and unless you are doing computational science, GPU horsepower does not benefit you much. As an undergraduate, our lab computers were connected to the Cray, which at the time was the fastest supercomputer in the world; we did molecular determination. When I ran an analytical XRD/electron microscope research center, my PC at home was faster than 95% of the engineering campus PCs.
We are lucky to be able to afford, through credit or cash, a system that most users do not desire or need. For most people, Facebook and the internet are their only use, and that is the catch: no one is spending $1,200 on a Ti to game for 10 minutes a day.
The value of something is determined by its importance to the buyer.
There are cars that cost $3 million. That outstrips what a blue-collar worker will likely make in his/her lifetime. The manufacturers of said cars will still sell every one of them they make.
It's not worth it to you to invest $20k into an engine to make 1000HP. But it's clearly worth it to these folks to invest $3k+ into a gaming PC that you argue they'll only use for 10 minutes a day (which I doubt; they'll probably game every weekend and all night). People who only use social media, Facebook, and YouTube aren't buying 2080 Tis. Yes, casual computer users are the majority. And to that point, most people aren't getting 1000HP engines either.
It's not your money anyway. You don't have to worry about how other people spend theirs.
 
The cost is in the wafer. If you’re building a supercomputer you don’t really care. But if the only way to make a faster GPU is to put two of them on a PCB, you’ve effectively doubled the cost as well as doubled the difficulty with binning.

The Vega II, as an example, is a specialized part that costs $5,000 for a single card.

Although the 3080 is expected to be faster than the previous gen, if the only way to go faster past that is multi-GPU, we can more or less expect GPU costs at the top end to shoot through the roof. As appealing as SLI was (it’s effectively no longer supported), fewer than 0.01% of buyers could really afford multiple top-end GPUs. Putting multiple GPUs on the same PCB just gives you the same cost and complexity problems as SLI.

The only reason they shot through the roof in the first place is that AMD completely dropped the ball. I highly suspect there are price-fixing agreements between AMD and Nvidia, too, because AMD tends to price even complete pieces of shit in the $600+ range knowing that no one will buy them anyway.
 
The only reason they shot through the roof in the first place is that AMD completely dropped the ball. I highly suspect there are price-fixing agreements between AMD and Nvidia, too, because AMD tends to price even complete pieces of shit in the $600+ range knowing that no one will buy them anyway.
Go ahead and keep telling yourself that.

[attached chart images]
 
All of you Pascal owners who feel underwhelmed by Turing, imagine what it feels like still being on Maxwell.

I'm still using a GTX 980 I got at the tail end of 2015, new and sealed, for the price of a mere 970. Pascal wouldn't be out for several more months, and I was planning on just skipping that generation and going straight to Volta (Turing wasn't on the roadmap yet). After all, my GTX 760 needed to be replaced if I wanted to be sure I could handle the forthcoming Oculus Rift CV1 on day one.

Big mistake. Pascal turned out to be an immense improvement per dollar spent at release, at least before the alt-coin cryptocurrency craze kicked in and made every GPU stupid expensive and perpetually out of stock. (Everyone forgets this happened years ago with AMD GPUs specifically, but then ASICs for Bitcoin mining got released and made GPU mining worthless. Took a while for the same thing to happen to Ethereum and some of the other big alternatives.)

Then Turing hit the market at stupid inflated prices, $1,200 RTX 2080 Ti and all, and it's just priced me out of an upgrade for the most part, even though temporarily swapping my GTX 980 for an RTX 2080 (non-Super) doubled my DCS framerates in VR. I know I'm due for it.

But that's the thing - I don't care about raytracing performance at the moment, I want maximum VR performance, especially now that I'm now on a Valve Index. You need as much single-GPU performance as possible for VR, and there currently is no alternative to the 2080 Ti, or even the 2080 Super at a less-ridiculous price point. AMD simply isn't competing at the high end, and while Intel's teasing Xe as their return to dedicated graphics, we all know what happened to i740 and Larrabee decades ago. (The short version: i740 pushed AGP but got spanked by PCI 3dfx cards, and Larrabee was effectively vaporware warped into the Xeon Phi coprocessors.)

This is why I'm waiting it out for Ampere later this year. I'm not so much expecting dramatic performance increases over the prior generation as much as being able to get what was once $1,200+ GPU performance for just $700 or so this time around, and still have NVIDIA support all the new software features they tend to reserve for their current GPU architecture. (As someone pointed out, they abandon their older architectures immediately in terms of driver-enabled new features, as I learned the hard way on Maxwell.) Oh, and HDMI 2.1 support for those new LG CX displays capable of 4K 120 Hz G-SYNC would be pretty nice.

However, the speculated bump in performance with this coming generation is all predicated on NVIDIA actually choosing to bump up rasterization performance enough. They're trying to sell ray-tracing really hard, and will probably dedicate more of their increased transistor budget to improving ray-tracing performance, but I still think we're a long way off from getting real-time ray-tracing in VR smoothly when it's hard enough to get a rasterized game to run without framedrops.
 
I think you misunderstand the point of this thread. They're talking about diminishing returns in terms of how much further they can push transistor counts in silicon, meaning they're reaching a point where it's becoming increasingly difficult to make a graphics card faster than it is now.
There's no such thing as a fast enough graphics card. That much is obvious. I can't drive triple 8K monitors at 240 Hz/240 fps, so obviously I haven't hit my performance target yet.
In __ years that will seem like old-school tech.
 
Would be a much better chart had it been accurate...
3700x = $329. It also has 2 CCX's EACH with 3.9 billion transistors... so x2 = 7.8 billion. This is a difficult one, because the IOD is on 12nm... but it's another 2.09B transistors.

"Each 7nm CCD measures ~74mm2 and has 3.9 billion transistors, while the 12nm IOD is ~125mm2 and has 2.09 billion transistors. That means the 3900X comes with ~273mm2 of silicon that sports ~9.89 billion transistors."
https://www.tomshardware.com/reviews/ryzen-9-3900x-7-3700x-review,6214.html

"3.9 billion per CCD and 2.09 billion IOD"
https://www.overclockers.com/amd-ryzen-9-3900x-and-ryzen-7-3700x-cpu-review/

"3.9 Billion transistor CCD chiplets – one in the case of eight-cores and below and two for the 12 and 16 core parts – connect to the 125mm2, 2.09 billion transistor IO chiplet via AMD’s Infinity Fabric."
https://www.kitguru.net/components/cpu/luke-hill/amd-ryzen-9-3900x-ryzen-7-3700x-zen-2-cpu-review/

So, let's just ignore the 2.09B transistors for the IO die... and say 7.8 billion. Using the actual MSRP of $329, we end up with $0.042 per million transistors, plus a free 2 billion @ 12nm on top of that. So in essence, the cost from 14nm to 7nm went down by close to half, plus a free IO die that had half the transistor count of the original we're comparing against. I haven't double-checked the rest of the chart, but this was a glaring error, as I knew the transistor count from the 2700X to the 3700X didn't go down like shown.

3700X was $329 at launch. Otherwise, great table!

No, that wasn't the only issue. Please see my response above. He reduced the number of transistors in the 3700X by over half, which put the cost at more than double what it should have been. As unpopular as it may seem, the cost per transistor from AMD (on the 1700X line) went down by about half from 14nm to 7nm.

edit: when I said cost per transistor, I meant the cost to the end user, not AMD's cost at TSMC.
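To make the units behind that $0.042 figure explicit, here's the same arithmetic spelled out (using the post's own 7.8-billion figure as given, not independently re-checked):

```python
# Reproducing the arithmetic above (the post's own transistor figure;
# the 2.09B-transistor 12nm IOD is deliberately left out, as in the post).
price_usd = 329                  # 3700X launch MSRP
transistors_millions = 7800      # 2 x 3.9B, per the post
print(f"${price_usd / transistors_millions:.3f} per million transistors")
# $0.042 per million transistors
```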
 
There is a compelling reason to upgrade for people on 1440p ultrawide or 4K 144 Hz+ monitors. With that said, I think we are approaching a limit on the human eye's ability to appreciate higher resolutions in graphics, so hopefully ray tracing gets some strong development behind it.

You still need speed. A 4K 27-inch monitor is great, but until it hits 240 Hz we can still see great improvement. That is still a lot of pixels to push.
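To put a number on "a lot of pixels":

```python
# How many pixels per second "4K at 240 Hz" actually is,
# compared with plain 1080p at 60 Hz.
def pixel_rate(w, h, hz):
    return w * h * hz

rate_4k240 = pixel_rate(3840, 2160, 240)
rate_1080p60 = pixel_rate(1920, 1080, 60)
print(f"{rate_4k240 / 1e9:.2f} Gpixels/s, {rate_4k240 / rate_1080p60:.0f}x 1080p60")
# 1.99 Gpixels/s, 16x 1080p60
```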
 
The really big change now to me is how long top end cards last. And that is creating a different market pressure. For the first time ever, I am actually considering buying the 3080ti or whatever they are going to call it. This is because I wish I had just bought a launch 1080ti and had a great performer for a very long time. It feels like in the past you could get maybe 18 months out of a top end card, whereas you can get something like 3 years out of them now.
 
The really big change now to me is how long top end cards last. And that is creating a different market pressure. For the first time ever, I am actually considering buying the 3080ti or whatever they are going to call it. This is because I wish I had just bought a launch 1080ti and had a great performer for a very long time. It feels like in the past you could get maybe 18 months out of a top end card, whereas you can get something like 3 years out of them now.
But do you wish you bought a 2080ti at launch? Do you know which one the 3080ti will be?
 
But do you wish you bought a 2080ti at launch? Do you know which one the 3080ti will be?
If I was in the market I would have taken a 2080ti, but due to life circumstances I went with a full laptop which had a 2070, and with laptops I don't feel the same value in high-end graphics, as the whole rest of the package is disposable.
 
If I was in the market I would have taken a 2080ti, but due to life circumstances I went with a full laptop which had a 2070, and with laptops I don't feel the same value in high-end graphics, as the whole rest of the package is disposable.
I was just curious, because the performance jump between the 1080ti and 2080ti was just OK (raw performance). Feature-wise it had some stuff going for it, but I don't think it will be as long-lasting as the 1080ti was. It's always easy to figure out after the fact, but it's hard to guess ahead of time. I do agree, though: GPUs seem to be lasting longer nowadays, so it's probably a fair point on just buying higher end and keeping it longer. I still can't convince myself to spend $1200 on a GPU though. Maybe after I get my next raise, lol.
 
I was just curious, because the performance jump between the 1080ti and 2080ti was just OK (raw performance). Feature-wise it had some stuff going for it, but I don't think it will be as long-lasting as the 1080ti was. It's always easy to figure out after the fact, but it's hard to guess ahead of time. I do agree, though: GPUs seem to be lasting longer nowadays, so it's probably a fair point on just buying higher end and keeping it longer. I still can't convince myself to spend $1200 on a GPU though. Maybe after I get my next raise, lol.

Well, you can also figure in the ability to sell the one you have. $1200 seems like a lot for a GPU, but $1200 over 3 years becomes $400 a year. Then sell it for, let's say, $300 and you are down to $300 a year. Now that's not too bad. And the top end cards in the line always hold a little extra value. And I am factoring in that CPUs last longer now too, so I don't have to upgrade those either. I literally had a 2600k up until my laptop with a 9700k.
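The math, spelled out (the $300 resale value is just a guess):

```python
# Amortizing a top-end card over its useful life.
purchase = 1200      # top-end card at launch
years_kept = 3
resale = 300         # assumed selling price after 3 years

print(purchase / years_kept)              # 400.0 per year, ignoring resale
print((purchase - resale) / years_kept)   # 300.0 per year, after selling it on
```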
 
Well, you can also figure in the ability to sell the one you have. $1200 seems like a lot for a GPU, but $1200 over 3 years becomes $400 a year. Then sell it for, let's say, $300 and you are down to $300 a year. Now that's not too bad. And the top end cards in the line always hold a little extra value. And I am factoring in that CPUs last longer now too, so I don't have to upgrade those either. I literally had a 2600k up until my laptop with a 9700k.
I understand, but I keep 5-6 desktops usable at any given time... so if I upgrade each one every 5 years and put a $1200 GPU in, that's $1200 per year just on GPUs, not counting the rest of the PCs! Normally I upgrade one or two at a time and pass down GPUs. The logic is sound, and it'd probably still make sense, as passing down a 1080ti/2080ti would get more time out of each box (aka, the lowest common denominator would be higher). It's so much easier to convince my wife to upgrade a $200 video card than a $1200 card though :).
 
I understand, but I keep 5-6 desktops usable at any given time... so if I upgrade each one every 5 years and put a $1200 GPU in, that's $1200 per year just on GPUs, not counting the rest of the PCs! Normally I upgrade one or two at a time and pass down GPUs. The logic is sound, and it'd probably still make sense, as passing down a 1080ti/2080ti would get more time out of each box (aka, the lowest common denominator would be higher). It's so much easier to convince my wife to upgrade a $200 video card than a $1200 card though :).
You're looking at it all wrong.

I view my upgrades actually as gifts for the kids because they get my old stuff. I'm being generous, not selfish!
 
There is a compelling reason to upgrade for people on 1440p ultrawide or 4K 144 Hz+ monitors. With that said, I think we are approaching a limit on the human eye's ability to appreciate higher resolutions in graphics, so hopefully ray tracing gets some strong development behind it.

Blasphemy!
 
You're looking at it all wrong.

I view my upgrades actually as gifts for the kids because they get my old stuff. I'm being generous, not selfish!

My video card flows to the wife, then flows to the Plex server or a friend.
Workstation cards get sold to fund the replacement, or go to a friend depending on what it is (they tend to be worth a LOT less).
 
The main problem is that most gamers are at 1080p. Not enough competition at higher resolutions to push prices down. At most of the high end, nvidia basically competes with itself and has no reason to push prices lower.
 
My video card flows to the wife, then flows to the Plex server or a friend.
Workstation cards get sold to fund the replacement, or go to a friend depending on what it is (they tend to be worth a LOT less).
I just moved around 5 GPUs and 3 processors due to an upgrade, lol. I feel your pain. Wasn't thinking and put a G4560 into a B150... the BIOS was an older version (fine for a 6600) and it wouldn't boot. Swapped the CPUs and heatsinks back around to flash the BIOS, then swapped back again, lol. Now I can't get Steam to work: the library or store shows for 2-3 seconds and then disappears. Uninstalled the GPU drivers, still happening. Reinstalled, disabled GPU acceleration in Steam. Still no go. Fun day.
 
I just moved around 5 GPUs and 3 processors due to an upgrade, lol. I feel your pain. Wasn't thinking and put a G4560 into a B150... the BIOS was an older version (fine for a 6600) and it wouldn't boot. Swapped the CPUs and heatsinks back around to flash the BIOS, then swapped back again, lol. Now I can't get Steam to work: the library or store shows for 2-3 seconds and then disappears. Uninstalled the GPU drivers, still happening. Reinstalled, disabled GPU acceleration in Steam. Still no go. Fun day.

Reinstall time!
 
The main problem is that most gamers are at 1080p. Not enough competition at higher resolutions to push prices down. At most of the high end, nvidia basically competes with itself and has no reason to push prices lower.

I could see 1080p on a gaming-only system, but that'd be miserable on anything for real work. I'm going to be at 2x 1440p here on the workstation, if not 3x (gotta see if it fits on the damned desk, or I'm going to have to get an arm), and I've considered 4K a few times (except any gaming at that resolution takes a high-end card).
 
I could see 1080p on a gaming-only system, but that'd be miserable on anything for real work. I'm going to be at 2x 1440p here on the workstation, if not 3x (gotta see if it fits on the damned desk, or I'm going to have to get an arm), and I've considered 4K a few times (except any gaming at that resolution takes a high-end card).
? I run dual 1080p monitors at home and at work. I do computer programming for a living; no issues with 1080p. Matter of fact, I had way more issues with 4K displays and real work, due to other people's crappy programming (or lack thereof) for high-DPI scaling. I'd take a 27-32" 1080p for real work over a 2160p any time. 1440p @ 32" is a good compromise where the DPI isn't so high that it causes real pain in crappy software. Can't believe how long it is taking M$ to implement this properly, let alone other companies who have much less control or fewer resources. 4K is much better, but only when it works properly.
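For reference, the pixel densities being compared (panel sizes are the ones mentioned above):

```python
# Pixel density for the screen sizes discussed above.
from math import hypot

def ppi(w_px, h_px, diagonal_in):
    return hypot(w_px, h_px) / diagonal_in

for name, w, h, d in [
    ("27in 1080p", 1920, 1080, 27),
    ("32in 1440p", 2560, 1440, 32),
    ("27in 4K",    3840, 2160, 27),
]:
    print(f"{name}: {ppi(w, h, d):.0f} PPI")
# ~82, ~92 and ~163 PPI, which is roughly where Windows scaling starts to matter.
```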
 
? I run dual 1080p monitors at home and at work. I do computer programming for a living; no issues with 1080p. Matter of fact, I had way more issues with 4K displays and real work, due to other people's crappy programming (or lack thereof) for high-DPI scaling. I'd take a 27-32" 1080p for real work over a 2160p any time. 1440p @ 32" is a good compromise where the DPI isn't so high that it causes real pain in crappy software. Can't believe how long it is taking M$ to implement this properly, let alone other companies who have much less control or fewer resources. 4K is much better, but only when it works properly.

Yeah, that's the other part that stops me from doing 4K. 1440p seems to be pretty good: more real estate (I do programming, data center work, video work, etc., a mixed bag) with no scaling issues (or they're so minor you don't notice them), but I definitely notice my bigger 1080p screens (I use 27") not having enough real estate.

The one that always got me was switching: I have a 4K laptop, and plugging it into my docking station switched to 3x 1200p screens... and you had to reboot to make it usable.
 
Yeah, that's the other part that stops me from doing 4K. 1440p seems to be pretty good: more real estate (I do programming, data center work, video work, etc., a mixed bag) with no scaling issues (or they're so minor you don't notice them), but I definitely notice my bigger 1080p screens (I use 27") not having enough real estate.

The one that always got me was switching: I have a 4K laptop, and plugging it into my docking station switched to 3x 1200p screens... and you had to reboot to make it usable.
I had a Microsoft Surface Book with that weird 3000x2000 display... even Microsoft's own software had no clue how to scale. It was great that it was portable, but with that high a DPI it was difficult to use without a lot of messing around. I would have just preferred it to be a 1440p display, or even 1080p, and it would have been more usable. The worst part was that since they used a non-standard-resolution screen, I couldn't even turn the resolution down without getting black bars! I mean, as if the display wasn't small enough, I didn't want to give up more real estate because they couldn't get their software to be compatible with their own hardware. There are still lots of issues with screens and resolutions, whether it's one or multiple, it seems. I would much prefer 4K if everything just worked. As it is, I run two 27" 1080p curved FreeSync monitors at home. I was almost going to put a third one up to get some more screen real estate while working, but right now I'm working from home and have to share my desk with my work laptop, so I'm quickly running out of real estate there too (2 monitors, my desktop, 2 keyboards/mice and a laptop).
 
I had a Microsoft Surface Book with that weird 3000x2000 display... even Microsoft's own software had no clue how to scale. It was great that it was portable, but with that high a DPI it was difficult to use without a lot of messing around. I would have just preferred it to be a 1440p display, or even 1080p, and it would have been more usable. The worst part was that since they used a non-standard-resolution screen, I couldn't even turn the resolution down without getting black bars! I mean, as if the display wasn't small enough, I didn't want to give up more real estate because they couldn't get their software to be compatible with their own hardware. There are still lots of issues with screens and resolutions, whether it's one or multiple, it seems. I would much prefer 4K if everything just worked. As it is, I run two 27" 1080p curved FreeSync monitors at home. I was almost going to put a third one up to get some more screen real estate while working, but right now I'm working from home and have to share my desk with my work laptop, so I'm quickly running out of real estate there too (2 monitors, my desktop, 2 keyboards/mice and a laptop).
I love the 3440x1440 for working with programs open side by side, as well as for gaming, and I'm thinking of an even wider variation at 5120x1440. I've come to the same conclusions about 4K: great for videos, games, etc., not so good with programs and UIs. To relieve the high-DPI problems with Windows, a bigger monitor (as in 48") is one solution, but that creates a new problem: the information isn't all in view unless you constantly look around the screen, even turning your head over and over. Then again, a 3840x1600 may be even better for a much wider range of uses. Getting HDR 1000+, 144 Hz and ultrawide aspect ratios all at once leaves very limited options; the Samsung CRG9 is the closest monitor for that, or the Samsung 49" Odyssey G9 gaming monitor, which exceeds it: 240 Hz, HDR 1000, G-Sync and FreeSync Premium, 5120x1440:
https://www.samsung.com/us/computing/monitors/gaming/49--odyssey-g9-gaming-monitor-lc49g95tssnxza/

Why oh why didn't the Odyssey G9 use HDMI 2.1 and/or DP 2.0? It's still on HDMI 2.0 and DP 1.4. My understanding is that it must be using DSC (Display Stream Compression) for that resolution and refresh rate.
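Rough bandwidth math backs that up (uncompressed data rate only, ignoring blanking, and the link payload figures are approximate):

```python
# Rough check: can 5120x1440 @ 240 Hz, 10-bit color fit down DP 1.4 uncompressed?
# (Ignores blanking intervals, so the real requirement is a bit higher.)
w, h, hz, bits_per_pixel = 5120, 1440, 240, 30   # 10 bits per RGB channel

needed_gbps = w * h * hz * bits_per_pixel / 1e9
dp14_payload_gbps = 25.92        # HBR3 after 8b/10b encoding (approx.)
hdmi20_payload_gbps = 14.4       # 18 Gbps raw after 8b/10b (approx.)

print(f"needed: ~{needed_gbps:.1f} Gbps")
print(f"DP 1.4: {dp14_payload_gbps} Gbps, HDMI 2.0: {hdmi20_payload_gbps} Gbps")
# ~53 Gbps needed vs ~26 Gbps available, hence DSC.
```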
 
Why oh why didn't the Odyssey G9 use HDMI 2.1 and/or DP 2.0? It's still on HDMI 2.0 and DP 1.4. My understanding is that it must be using DSC (Display Stream Compression) for that resolution and refresh rate.

To sell you the new version next year. Can't sell a holy grail..
 