I paid $1200 for a Titan X Pascal in 2016.
Today my only upgrade path is a 2080 Ti, which is maybe 30% faster, for another $1200+.
GPUs the last 3 years have been a total joke.
Are GPUs really hitting the limits of Moore's Law yet though? GPUs are highly parallel vs CPUs and there doesn't really seem to be an upper bound with just making them larger and more powerful so long as the relative components of the GPU aren't bottlenecking each other (e.g. AMD's lack of ROPs on some older products, or cards that are starved for memory bandwidth).

Welcome to Moore's Law slowing down and, simultaneously, the industry looking for a way to stem the continually falling number of discrete GPUs sold.
Coin mining wasn't sustainable, so the only option remaining is RTX... instead of getting another 40% performance increase (like Maxwell), you only get 20%!
You'll just have to wait for Ampere for your upgrade...but it will be a pretty respectable RTX upgrade to go with it!
Are GPUs really hitting the limits of Moore's Law yet though? GPUs are highly parallel vs CPUs and there doesn't really seem to be an upper bound with just making them larger and more powerful so long as the relative components of the GPU aren't bottlenecking each other (e.g. AMD's lack of ROPs on some older products, or cards that are starved for memory bandwidth).
Yes, they are.
Performance increases are harder than they have ever been because we have a power density problem in silicon. This means if you make things more dense (like Navi), you have to turn off more of the chip to keep up with cooling (see Dark Silicon). Or you can go larger like Turing, with the understanding that there is an affordable upper limit on die sizes (think TU106 at 445 mm²). This value will increase in the future, but it's not going to grow with leaps and bounds like it has in the past.
And improving your architecture's efficiency is a lot harder than it used to be. Even without RTX, Turing was only about 30% more efficient than Pascal.
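The power-density point above can be sketched with some quick arithmetic. The die sizes and TDPs below are approximate public figures used only for illustration; the takeaway is that Turing spent its extra area partly on keeping watts per square millimetre in check:

```python
# Rough power-density comparison: why going denser without cutting power
# per transistor leads to dark silicon. Die areas and TDPs are approximate
# public figures, used here only for illustration.
chips = {
    "GP102 (Pascal, 16nm)": {"die_mm2": 471, "tdp_w": 250},
    "TU102 (Turing, 12nm)": {"die_mm2": 754, "tdp_w": 260},
}

for name, c in chips.items():
    density = c["tdp_w"] / c["die_mm2"]  # watts per square millimetre
    print(f"{name}: {density:.2f} W/mm^2")
```

By these numbers the bigger Turing die actually runs at a lower power density than Pascal, which is one way around the cooling wall, at the cost of die area (and price).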
Okay so why does it seem everyone wants smaller dies then?
"Intel needs to get on that 7nm game! " Hurry up Nvidia!"
If smaller equals less power, how can everyone expect more ferocity (brute power) from the unit at the same time?
From a consumer standpoint I don't think smaller dies matter. Some people care about power consumption, especially in the mobile space, but I don't think most end users (certainly not the ones on this forum) are concerned with this so much as long as the technology works.
On the manufacturing side, smaller die sizes used to lead to better cost efficiency per wafer; however, that ship has long since sailed.
The reason people bag on Intel is because their 14nm process is completely tapped out and they have hit a wall with their silicon. The Core i9-10900K is rumored to pull over 300W TDP and the Core i9-10990XE is reported to have a listed TDP of 380W. It's starting to become unmanageable.
You can still increase performance with a smaller die size - you just have to understand that *some* of that performance improvement you would normally see from a process node reduction will disappear into dark silicon.
Smaller dies are cheaper to make once you have gotten yields up. And by the second generation on a node (process node +), you have enough combined power reduction and die size headroom that you can finally put impressive performance into a massive chip.
Just look at the RTX 2080 Ti and ray-tracing performance: it shits all over the GeForce 3 in any game using programmable shaders, and that is all because the limits of reticle size have gone way up. A large die is also what produced the 8800 GTX.
These limits are moving a lot slower than they used to, but they are still moving. Hence, the three years without a new process node for Nvidia.
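The yield-versus-die-size trade-off above is easy to see with a toy Poisson yield model (yield = exp(-defect_density x die_area)). The wafer cost and defect density below are made-up illustrative values, not foundry data:

```python
import math

# Toy die-size economics: a simple Poisson yield model. Wafer cost and
# defect density are assumed illustrative values, not real foundry data.
WAFER_DIAMETER_MM = 300
WAFER_COST = 10000          # assumed cost per wafer, USD
DEFECTS_PER_MM2 = 0.001     # assumed defect density (0.1 per cm^2)

def dies_per_wafer(die_area_mm2):
    # Standard approximation: usable wafer area minus an edge-loss term.
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2):
    yield_rate = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)
    good_dies = dies_per_wafer(die_area_mm2) * yield_rate
    return WAFER_COST / good_dies

for area in (150, 445, 754):   # small die, TU106-class, TU102-class
    print(f"{area} mm^2 -> ${cost_per_good_die(area):.0f} per good die")
```

Even with made-up constants, the shape of the curve shows the point: a big die gets hit twice, fewer candidates per wafer and a lower fraction of them defect-free, which is why huge dies only make sense on a mature node.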
So with today's tech vs back in the GeForce 3's day, would using bigger dies to fit more tech on them ultimately be better? Higher power consumption but far more performance, no?
Hardware Unboxed compares the RX 470/570 and GTX 950/1050 Ti vs the new RX 5500 XT and GTX 1650 cards.
TL;DR/Watch: the 2016 $200-and-under video cards are still the better value.
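The value comparison in the video boils down to cost-per-frame arithmetic. The prices and average fps below are placeholder assumptions for illustration, not Hardware Unboxed's measured data:

```python
# Cost-per-frame comparison in the spirit of the video above.
# Prices (USD) and average fps are illustrative placeholders only.
cards = {
    "RX 570 (2016-class)": {"price": 140, "avg_fps": 60},
    "GTX 1050 Ti (2016)":  {"price": 150, "avg_fps": 50},
    "RX 5500 XT (2019)":   {"price": 199, "avg_fps": 70},
    "GTX 1650 (2019)":     {"price": 160, "avg_fps": 55},
}

# Sort from best to worst value (lowest dollars per average frame).
ranked = sorted(cards.items(), key=lambda kv: kv[1]["price"] / kv[1]["avg_fps"])
for name, c in ranked:
    print(f"{name}: ${c['price'] / c['avg_fps']:.2f} per fps")
```

With numbers anywhere near these, the discounted 2016-era cards come out ahead on dollars per frame, which matches the video's conclusion.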