2016 GPUs vs 2019 GPUs

Ranulfo

Hardware Unboxed compares the RX 470/570 and GTX 950/1050 Ti vs the new 5500 XT and 1650 cards.

TLDR/Watch: The 2016 $200 and under video cards are still the better value.
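If you want to sanity-check the "better value" claim with your own numbers, here's a minimal cost-per-frame sketch in Python. The prices and average-FPS figures are placeholders, not the video's benchmark data; swap in current street prices and whichever averages you trust.

[code]
# Minimal cost-per-frame calculator. Prices and FPS figures below are
# placeholders -- replace them with real street prices and benchmark averages.
cards = {
    # name: (price_usd, avg_fps_at_1080p)
    "RX 570 4GB":     (130, 60),
    "GTX 1650":       (150, 58),
    "RX 5500 XT 4GB": (170, 66),
}

# Sort by cost per frame (lower is better value).
for name, (price, fps) in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:16s} ${price:>4} {fps:>3} fps -> ${price / fps:.2f} per frame")
[/code]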

 
Yeah, at least when the GTX 960 was released ($200 for 128-bit), it was 40% faster than the GTX 660.

The 4GB RX 5500 is barely 20% faster than the RX 470. That's a pathetic price for a card with half the memory bus width of Polaris.

If AMD really wanted to shake things up they would have launched the RX 5500 at $150 and the RX 5600 at $220. But they're just treading water at those launch prices.
 
Anyone count how many times he says "cheeeper"?

I would support them, but those hoodies just don't give me the wear per dollar I deserve.
 
My basic PowerColor Red Dragon RX 570 has a base clock of 1250MHz / 2000MHz memory, so his looks low; mine really has no boost, just 1250MHz.
 
It makes sense why three years later we're seeing an abysmal upgrade for this price range.


Keep feeding the 1080p market crumb-sized upgrades each year and you'll eventually see 1080p usage go down and 1440p/4K usage go up.
 
I paid $1200 for a Titan X Pascal in 2016. To upgrade today, my only upgrade path is a 2080 Ti, which is maybe 30% faster for another $1200+. GPUs the last 3 years have been a total joke.

Although conversely I absolutely got my money's worth out of this Titan card.
 
I paid $1200 for a Titan X Pascal in 2016.

To upgrade today, my only upgrade path is a 2080 Ti, which is maybe 30% faster for another $1200+.

GPUs the last 3 years have been a total joke.


Welcome to Moore's Law slowing down and, simultaneously, the industry looking for a way to stem the continually falling number of discrete GPUs sold.

Coin mining wasn't sustainable, so the only option remaining is RTX... instead of getting another 40% performance increase (like Maxwell), you only get 20%!

You'll just have to wait for Ampere for your upgrade...but it will be a pretty respectable RTX upgrade to go with it!
 
Welcome to Moore's Law slowing down and, simultaneously, the industry looking for a way to stem the continually falling number of discrete GPUs sold.

Coin mining wasn't sustainable, so the only option remaining is RTX... instead of getting another 40% performance increase (like Maxwell), you only get 20%!

You'll just have to wait for Ampere for your upgrade...but it will be a pretty respectable RTX upgrade to go with it!
Are GPUs really hitting the limits of Moore's Law yet though? GPUs are highly parallel vs CPUs and there doesn't really seem to be an upper bound with just making them larger and more powerful so long as the relative components of the GPU aren't bottlenecking each other (e.g. AMD's lack of ROPs on some older products, or cards that are starved for memory bandwidth).

I just think that Turing wasn't a great design, and AMD shifted all of their R&D into Ryzen vs trying to compete with NVIDIA (which objectively has been the right call).

I'm hoping that Ampere boosts both rasterization and RTX performance further.
 
Are GPUs really hitting the limits of Moore's Law yet though? GPUs are highly parallel vs CPUs and there doesn't really seem to be an upper bound with just making them larger and more powerful so long as the relative components of the GPU aren't bottlenecking each other (e.g. AMD's lack of ROPs on some older products, or cards that are starved for memory bandwidth).

Yes, they are.

Performance increases are harder than they have ever been because we have a power density problem in silicon. This means if you make things more dense (like Navi), you have to turn off more of the chip to keep up with cooling (see [url=https://en.wikipedia.org/wiki/Dark_silicon]Dark Silicon[/URL]). Or you can go larger like Turing, with the understanding that there is an affordable upper limit on die sizes (think TU106 at 445 mm²). This value will increase in the future, but it's not going to grow with leaps and bounds like it has in the past.

And improving your architecture's efficiency is a lot harder than it used to be. Even without RTX, Turing was only 30% more efficient than Pascal.

Because the power reductions from a new process node are lower than the increase in power density, new tricks are required to keep things going.
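To put rough numbers on that, here's a back-of-the-envelope sketch of the dark-silicon squeeze. The scaling factors are illustrative assumptions, not foundry figures: suppose each node doubles transistor density but only cuts power per transistor by ~35%.

[code]
# Back-of-the-envelope dark-silicon arithmetic. The scaling factors are
# illustrative assumptions, not real foundry numbers.
density_gain_per_node = 2.0    # assume: 2x transistors per mm^2 each node
power_drop_per_node   = 0.65   # assume: each transistor uses 65% of the old power

for node in range(1, 5):
    # Relative power density if you lit up the entire die at the old clocks.
    power_density = (density_gain_per_node * power_drop_per_node) ** node
    # Fraction of the die you can keep active at the same power/cooling budget.
    active_fraction = 1.0 / power_density
    print(f"node shrink #{node}: power density x{power_density:.2f}, "
          f"usable (non-dark) fraction ~{active_fraction:.0%}")
[/code]

Under those assumptions, by the third or fourth shrink you can only light up around a third to half of the die at once unless you drop clocks or improve cooling.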
 
I watched the video and was like: *Shrug*. Things change and stuff costs more than before. Still, there are a lot of folks that could use an upgrade from something earlier than the 470, in my opinion. There is costing more and then there is gouging........
 
Yes, they are.

Performance increases are harder than they have ever been because we have a power density problem in silicon. This means if you make things more dense (like Navi), you have to turn off more of the chip to keep up with cooling (see Dark Silicon). Or you can go larger like Turing, with the understanding that there is an affordable upper limit on die sizes (think TU106 at 445 mm²). This value will increase in the future, but it's not going to grow with leaps and bounds like it has in the past.

And improving your architecture's efficiency is a lot harder than it used to be. Even without RTX, Turing was only 30% more efficient than Pascal.


Okay so why does it seem everyone wants smaller dies then?
"Intel needs to get on that 7nm game! " Hurry up Nvidia!"
If smaller equals less power, how can everyone expect more ferocity (brute power) from the unit at the same time?
 
Okay so why does it seem everyone wants smaller dies then?
"Intel needs to get on that 7nm game! " Hurry up Nvidia!"
If smaller equals less power, how can everyone expect more ferocity (brute power) from the unit at the same time?
From a consumer standpoint I don't think smaller dies matter. Some people care about power consumption, especially in the mobile space, but I don't think most end users (certainly not the ones on this forum) are concerned with this so much as long as the technology works.

On the manufacturing side, smaller die sizes used to lead to better cost efficiency per wafer, however that ship has long since sailed.
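For a rough feel for how die size ties into cost, here's a quick dies-per-wafer / cost-per-good-die estimate using the usual textbook approximation and a simple Poisson yield model. The wafer cost and defect density are made-up round numbers for illustration only.

[code]
import math

# Rough die-cost estimate. Wafer cost and defect density are made-up round
# numbers; the dies-per-wafer formula is the common textbook approximation.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost_usd, defects_per_mm2):
    yield_frac = math.exp(-defects_per_mm2 * die_area_mm2)  # simple Poisson yield
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_frac)

for area in (150, 250, 445):  # mm^2; TU106 is about 445 mm^2
    print(f"{area:>3} mm^2: ~{dies_per_wafer(area)} candidate dies per wafer, "
          f"~${cost_per_good_die(area, wafer_cost_usd=8000, defects_per_mm2=0.001):.0f} per good die")
[/code]

Bigger dies still cost disproportionately more per good chip; the catch is that wafer prices on newer nodes have climbed enough to eat a lot of the old shrink savings.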

The reason people bag on Intel is because their 14nm process is completely tapped out and they have hit a wall with their silicon. The Core i9-10900K is rumored to pull over 300W TDP and the Core i9-10990XE is reported to have a listed TDP of 380W. It's starting to become unmanageable.
 
Okay so why does it seem everyone wants smaller dies then?
"Intel needs to get on that 7nm game! " Hurry up Nvidia!"
If smaller equals less power, how can everyone expect more ferocity (brute power) from the unit at the same time?

You can still increase performance with a smaller die size - you just have to understand that *some* of that performance improvement you would normally see from a process node reduction will disappear into dark silicon.

Smaller dies are cheaper to make once you have gotten yields up. And by the second generation on a node (process node +), you have enough combined power reduction and die size increase that you can finally put impressive chip performance in a massive package.

Just look at the RTX 2080 Ti and its raytracing performance: it shits all over the GeForce 3 in any game using programmable shaders, and that is all because the limits of reticle size have gone way up. A large die is also what produced the 8800 GTX.

These limits are moving a lot slower than they used to, but they are still moving. Hence, the three years without a new process node for Nvidia.
 
From a consumer standpoint I don't think smaller dies matter. Some people care about power consumption, especially in the mobile space, but I don't think most end users (certainly not the ones on this forum) are concerned with this so much as long as the technology works.

On the manufacturing side, smaller die sizes used to lead to better cost efficiency per wafer, however that ship has long since sailed.

The reason people bag on Intel is because their 14nm process is completely tapped out and they have hit a wall with their silicon. The Core i9-10900K is rumored to pull over 300W TDP and the Core i9-10990XE is reported to have a listed TDP of 380W. It's starting to become unmanageable.

So when do we start expecting and accepting such high power consumption in exchange for better performance?
 
You can still increase performance with a smaller die size - you just have to understand that *some* of that performance improvement you would normally see from a process node reduction will disappear into dark silicon.

Smaller dies are cheaper to make once you have gotten yields up. And by the second generation on a node (process node +), you have enough combined power reduction and die size increase that you can finally put impressive chip performance in a massive package.

Just look at the RTX 2080 Ti and its raytracing performance: it shits all over the GeForce 3 in any game using programmable shaders, and that is all because the limits of reticle size have gone way up. A large die is also what produced the 8800 GTX.

These limits are moving a lot slower than they used to, but they are still moving. Hence, the three years without a new process node for Nvidia.


So with today's tech vs. back in the GeForce 3's day, would it ultimately be better to use bigger dies and fit more tech on them? Higher power consumption, but far more performance. No?
 
So with today's tech vs. back in the GeForce 3's day, would it ultimately be better to use bigger dies and fit more tech on them? Higher power consumption, but far more performance. No?

You can increase efficiency with a larger die size: you can get the same performance at lower power consumption (lower clocks + lower voltage).
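As a quick illustration of that trade-off (the numbers are assumptions, picked only so both configs have the same throughput): dynamic power scales roughly with units × frequency × voltage², so a wider chip run slower and at lower voltage can match a smaller chip's throughput while burning noticeably less power.

[code]
# Wide-and-slow vs. small-and-fast at equal throughput. Dynamic power scales
# roughly as units * frequency * voltage^2; the configs below are assumed
# numbers chosen only so both deliver the same units*frequency product.

def dynamic_power(units, freq_ghz, volts):
    return units * freq_ghz * volts ** 2   # relative units

small_fast = dict(units=36, freq_ghz=1.9,   volts=1.05)
big_slow   = dict(units=48, freq_ghz=1.425, volts=0.90)

for name, cfg in (("small & fast", small_fast), ("big & slow", big_slow)):
    throughput = cfg["units"] * cfg["freq_ghz"]
    print(f"{name:12s}: throughput {throughput:.1f}, relative power {dynamic_power(**cfg):.1f}")
[/code]

Same throughput, roughly a quarter less power for the bigger, slower configuration in this toy example.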

You can also go balls-out and just max out your performance, efficiency be damned (that's what the GeForce 8800 GTX did, with 175W of power!)

The GeForce 3 would have required an external power connector if you went with the same "don't care" philosophy and made it a huge die. Assuming you could manage a 256-bit bus, performance would have been at least on par with a 9700 Pro, but it would have been an even more expensive chip to make on that brand-new 150nm process.

ATI did it right by waiting another 18 months to build the first Goliath on the very same 150nm process, with the 9700 Pro. Giant dies on ancient process nodes are often the best path, if you're pushing state-of-the-art...and over time, those become the new "entry-level" die size.
 
Hardware Unboxed compares the RX 470/570 and GTX 950/1050 Ti vs the new 5500 XT and 1650 cards.

TLDR/Watch: The 2016 $200 and under video cards are still the better value.


In my opinion, there are two main reasons for this. And it's nothing to do with super technical details about GPU design.

1. While, in my opinion, AMD has been doing a solid job of keeping up in the low end and midrange, there has not been enough competition in the market to drive a fundamental shift from the top down. If AMD had had something two years ago to at least mostly meet the 1080 Ti and everything in between, we'd probably have RTX 2060-like performance for $200 right now, or would have had it for a little while. Instead, Nvidia has been able to bring out higher-end cards and price them to the sky, with very little downward shift, until Navi finally forced them to do a little bit. But it's not enough yet to shift the entire stack, as we see in this video.

2. Also, games just don't require better video cards (see: you can play brand-new games at 1080p High with a 60fps average on an RX 470). So GPU makers (Nvidia) haven't had any reason to be more aggressive, whether from competition or from demand from software.
 
The RX 5500 XT seems meh: almost no upgrade from an RX 570/470, and it costs almost the same.
 