AdoredTV: NVIDIA’s Performance Improvement Has Dropped from 70% to 30%

Megalith

Gaming technology channel AdoredTV has uploaded two videos detailing the history of NVIDIA’s GeForce graphics cards: a graph detailing the performance of the company’s GPUs per generation suggests that improvement has been significantly reduced, dropping from 70% (pre-2010) to 30% (post-2010). DSOGaming claims that this is all due to AMD failing to compete.

Prior to 2010, and with the exception of the NVIDIA GeForce 9800 GTX, PC gamers could expect a new high-end graphics card to be, on average, around 70% faster than its predecessor. Not only that, but in some cases PC gamers got graphics cards that were more than 100% faster than their previous-generation offerings; two such cards were the NVIDIA GeForce 6800 Ultra and the NVIDIA GeForce 8800 GTX. These days, however, the performance gap has been severely reduced.
 
It's the same from the AMD side as well. Might have something to do with both companies focusing on other avenues with their cards, and it also might have to do with the death of Moore's Law. Lack of solid competition might be a factor for Nvidia, but I doubt it is the largest one.
 
Maybe it's just to keep people upgrading sooner. If your next card is 70-100% better than your last, maybe it hangs in there too long while games catch up? ;)

Or maybe it's lack of competition?

Or maybe it's just because we all want to push millions more pixels now? Games are full of lazy code and unoptimized garbage since we all have waaaay more grunt at our disposal these days?

Who knows but it seems fairly reasonable that as we approach the limits of certain tech the generational improvement will slow down. Just some thoughts.
 
The guy is wrong on one point in his video, though. The Ti branding started with the GF3 Ti 200 and Ti 500, not the GF4 line of cards.
Yep, I got a Ti 500 for free due to a cockup :D
I believe it was £440 at the time, which was stacks!
 
Could just be that it's getting harder to squeeze performance out of each process, especially when they don't shrink them as often as they used to, you know.

That's what I was thinking as well.
 
No stats on the improvement per generation from AMD over the same periods.
Rofl I wonder why. Even if he puts a video out eventually, it will be spun and rationalized in every way to put AMD in a good light and antagonize nVidia.
 
One of the biggest factors that nobody has mentioned yet:

Consoles precede the PC.

This wasn't the case in the past, but it's the reality now, unfortunately.
 
It's really not due to lack of competition, at least that is not a major driver.

There are three major ways to increase throughput (GPU centric, but applies to CPUs to a large degree as well):
1) Faster clock rate
2) More execution units
3) Find ways of avoiding work in the first place

Early on we were able to hit all three pretty easily.
Now we're running into power and thermal issues limiting the first two. We can squeeze a bit more in with a node size reduction, but that too is getting much harder.
The third is completely non-trivial. It is very hard to find new ways of avoiding work, and sadly, that too requires work - which requires power, makes heat, etc.

I have zero doubt if NV could make a part which was twice as fast, they would do so. Same with Intel.
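
Just to put rough numbers on those three levers (everything below is made up for illustration, not real GPU specs): raw rate is roughly clocks times units, and "avoiding work" multiplies how far that raw rate goes.

```python
def effective_speed(clock_ghz, units, fraction_of_work_avoided):
    # Levers 1 and 2: raw throughput scales with clock rate and unit count.
    raw = clock_ghz * units
    # Lever 3: if a fraction of the work never has to be done at all
    # (better culling, compression, caching), effective speed goes up by 1/(1-f).
    return raw / (1.0 - fraction_of_work_avoided)

# Hypothetical "old" and "new" generations, purely for illustration.
old = effective_speed(clock_ghz=0.5, units=128, fraction_of_work_avoided=0.0)
new = effective_speed(clock_ghz=0.6, units=160, fraction_of_work_avoided=0.2)
print(f"gen-over-gen gain: {new / old - 1:.0%}")  # ~88% in this toy example
```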
 
One of the biggest factors that nobody has mentioned yet:

Consoles precede the PC.

This wasn't the case in the past, but it's the reality now, unfortunately.

What does that have to do with GPU power? Consoles were a massive force before 2010. Consoles do not explain the post-2010 drop. Nvidia, and AMD, have changed priorities since 2010 and very little of that has been influenced by consoles.
 
No stats on the improvement per generation from AMD over the same periods.

It's roughly the same as NVIDIA's, percentage-wise. The problem isn't specific to the lack of competition; like derangel mentioned, it probably has more to do with the fact that GPUs post-2010 are doing a hell of a lot more than just displaying graphics textures.
 
What does that have to do with GPU power? Consoles were a massive force before 2010. Consoles do not explain the post-2010 drop. Nvidia, and AMD, have changed priorities since 2010 and very little of that has been influenced by consoles.

$$$
 
Simple: they switched their priorities from performance to sneaky data gathering!

Long live the free telemetry!

On a more serious note, it seems like all chip sectors in the industry are reaching a plateau.

It seems like Intel hit it first and the rest are getting there.
 

That isn't an answer. You're trying to blame consoles for everything, which is just dumb. Consoles have little to no effect on high-end GPU development. Other industries do, however. Both companies have spent the last seven-plus years expanding their tech into non-gaming ventures.
 
Gaming technology channel AdoredTV has uploaded two videos detailing the history of NVIDIA’s GeForce graphics cards: a graph detailing the performance of the company’s GPUs per generation suggests that improvement has been significantly reduced, dropping from 70% (pre-2010) to 30% (post-2010). DSOGaming claims that this is all due to AMD failing to compete.

So, it's more AMD's responsibility to keep Nvidia fair and honest with its consumer dealings than it is Nvidia's own responsibility? Competition can be what gets a company to put in effort, but the bigger reason for this slowdown is that Nvidia doesn't care about moving things ahead or about consumer value, as can be seen in things such as the Nvidia G-Sync premium and refusal to support FreeSync, and the jacking up of GPU prices when AMD isn't there to compete. Those things aren't just a lack of action to move forward when competition isn't nearby. They're actively negative actions that exploit consumers while not giving them fair returns for their money.

Imagine how people might react if car manufacturers had a deal with gas stations or oil companies to share profits from gas sales, and then the car manufacturers put filters on fuel lines that made their vehicles 50% less fuel efficient, resulting in car owners having to buy twice as much fuel. The same kind of ploy is what Nvidia is doing with its hardware G-Sync, which is an artificial restriction meant to make consumers pay more and get locked into Nvidia's ecosystem, all while none of it is necessary. And it's the same thing with lowering generational performance increases while jacking up GPU prices - causing consumers to pay more for less despite Nvidia's profits, at previous pricing, already being at massive all-time highs. It's all fleecing consumers because they can get away with it. That malicious intention is not AMD's doing.


No stats on the improvement per generation from AMD over the same periods.

Rofl I wonder why. Even if he puts a video out eventually, it will be spun and rationalized in every way to put AMD in a good light and antagonize nVidia.

Well, up to the GTX 900 series, AMD was pretty competitive, and at times was the clear leader in GPU performance and value. So, I would generally expect AMD's overall performance progression to be very similar - except when it comes to the GTX 10XX generation. AdoredTV has criticized AMD thoroughly over the outcome of Vega, so I don't think this is a case of bias. I think it's more a case that we already know, and AdoredTV has already said, that AMD didn't deliver with Vega and that AMD isn't the GPU leader right now.
 
It's really not due to lack of competition, at least that is not a major driver.

This guy gets it.
 
I have zero doubt if NV could make a part which was twice as fast, they would do so. Same with Intel.
They do make those cards - they're called the Tis - but they milk the non-Ti cards since there's no competition. It's business 101; everyone would do this if they were in Nvidia's position.
 
I have zero doubt if NV could make a part which was twice as fast, they would do so. Same with Intel.

The histories of these companies, especially their recent histories, prove that they wouldn't.

After the hugely overclockable Sandy Bridge saw people not upgrading their CPUs for many years (I'm still running an overclocked 2600K with some breathing room in graphically-intensive games), Intel greatly slowed down its generational performance gains to prevent a recurrence of the same thing. Intel also locked consumer CPUs at 4 cores for years, despite making server CPUs with 22 cores, and has only recently budged on consumer core counts due to AMD upping the ante to 8-core consumer CPUs - and Intel is still exercising restraint in only going for 6 cores with their upcoming Coffee Lake, while relying on their better IPC to give them a marketing edge against AMD.

And Intel's upcoming 6-core Coffee Lake is still a refresh of Kaby Lake, which is a refresh of Skylake. And despite relying on refreshes, which means that Intel's internal costs to develop and produce these CPUs are going down with each generation, Intel has still been marginally increasing the price of new CPUs with each generation. So, Intel's profit margins are increasing with each CPU refresh that they do.

Since Intel's new 6-core CPUs are Skylake refreshes, and Intel was offering 22-core server CPUs in early 2016, Intel could have easily made a consumer CPU with a higher core count when Skylake first launched - but they didn't. And right now, Intel could make Coffee Lake 8 cores instead of 6 cores - but they aren't. They're not doing more than they have to to remain in the lead and to generate new sales while not getting so far ahead of the market that they make less money in the long-term.


And Nvidia could drop a far more powerful GPU at any moment they wished - which is exactly what they did with the GTX 1080 Ti when Vega was approaching and rumours were going around concerning its potential performance. If not for Vega, Nvidia might have withheld the 1080 Ti. But to steal AMD's thunder (though Vega turned out to be disappointing), Nvidia dropped the 1080 Ti just when the PR focus was starting to shift to what AMD's upcoming GPU lineup was going to offer, and turned the focus back to Nvidia, reinforcing its image as the undisputed GPU leader and premium brand. And Nvidia could surely drop a 1080 Ti Ti right now, if they wanted to.

AMD couldn't do similar because their GPU development department was effectively shut down for the three years prior to Polaris' release. AMD's Raja said in an interview that, at the time he left Apple in 2013 and returned to AMD, AMD expected high-performance GPU gaming to be a phase that had passed, and was not actively investing in GPU development. Raja said that he had to convince AMD's execs that GPU development was important and to restart AMD's GPU research department. Raja also said that when research on Polaris began, AMD was a few years behind Nvidia and wouldn't be able to close the gap within one or two generations, and that it'd be a slow process to get back to being fully competitive.

So, Nvidia has been resting comfortably while AMD struggles to catch up.

And it shows that Nvidia is not going to release a vastly more powerful GPU even if they could, when Nvidia marks up the prices of existing GPU generations to capitalize on market demand. Part of launching a more powerful GPU is having people buy it - and if Nvidia sets the market price higher for existing GPUs, that pushes new and more powerful GPUs further out of affordability for people. Raising prices on older hardware to milk that generation further comes from the opposite mindset to releasing a 2x more powerful GPU just because they're able to do so.


And Microsoft could release a Windows 10 without any data-theft and spying features built into it - but they haven't.


In 2016, Nvidia's share value more than tripled, without releasing a GPU lineup that was 2x more powerful than their previous generation. So, what's going to make Nvidia's shareholders sign off on releasing a 2x more powerful GPU at affordable prices that results in people who drove that 3x share value increase not buying a new GPU for many years to come? Those shareholders largely want frequent GPU purchases, not a single large one-time purchase. So, they find the sweet spot in generational performance increase and price that is just enough to get people to buy up producible stock at frequent intervals.
 
So, I once had a problem with software I was developing. It was taking too long to perform some of its tasks. I used Quantify to examine the running code and find out why it was taking so long. I found that one part of the code was taking 90% of the time. It was something that didn't need to run that frequently. My first test fix was to call it every other time (it was kind of like a garbage collector in our software - long, boring story I won't go into). That simple fix improved performance by 1200%. The managers of our software were happy - but they then asked if we could make it at least another 500-1000% faster, since it had only taken us about a week to figure this out.
Running Quantify again showed that calls into stdlib.c were taking the CPU time - which basically means the easy optimizations are done. I think we spent another 2-3 weeks and eked out about a 1-2% improvement (after the massive 1200%). Management wasn't necessarily happy, but it was also not possible to squeeze any more performance out of the code. It had work to do and was doing it. It just took a while.
The point of my story is that I don't think AMD caused Nvidia to not innovate - innovation is difficult and expensive. After a while, you reach ceilings that are hard to climb past. Once you get past the "easy" stuff, it becomes much harder to increase performance. Back to my story: the next step would have been to spend 6 months completely rewriting the software and perhaps trying out some different algorithms, and even then I'm not sure how much faster it might have been. My optimization had a side effect as well - since the garbage collector we wrote wasn't being called as often, the software used more memory, as it wasn't releasing objects from its tree.
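
For what it's worth, here's a tiny Python sketch of the same principle (the function names and timings are invented, not the original code or the Quantify output): profile, find the one call that dominates, and cut how often it runs, accepting the memory trade-off.

```python
import time

def cleanup(tree):
    # Stand-in for the expensive "garbage collector" pass from the story above.
    time.sleep(0.01)
    tree.clear()

def run(items, cleanup_every=1):
    tree, start = [], time.perf_counter()
    for i, item in enumerate(items):
        tree.append(item * 2)              # the actual useful work
        if (i + 1) % cleanup_every == 0:   # the call profiling flagged as dominant
            cleanup(tree)
    return time.perf_counter() - start

data = list(range(200))
print(f"cleanup every pass:  {run(data, 1):.2f}s")
print(f"cleanup every other: {run(data, 2):.2f}s")
# Halving how often the dominant call runs roughly halves the runtime here,
# but the tree holds more objects between cleanups - the same trade-off as above.
```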
 
If each release was 100% faster I would be buying cards more often. Nvidia's milking is only shooting themselves in their own foot with people like me.
 
I generally upgrade when these are true.

1. My current card is not fast enough
2. New cards that are not astronomically priced are at least double the speed of my current card

OR when my current card dies.

The pricing of current cards alongside the fact that everything I play still gets at least 60fps with all the detail turned to max leaves me with 0 desire to buy a new video card now.

Maybe next year, or if I get a really, really, really good deal on a card before then. With the lame mining craze sucking up all the cards, I don't see that happening unless it crashes.
 
If each release was 100% faster I would be buying cards more often. Nvidia's milking is only shooting themselves in their own foot with people like me.
Not really; they make more profit this way. Say you lose the 10% of people who, like you, won't upgrade unless it's 100% faster, but continue to sell cards to everyone else. Then you can stretch your development cycle from 3 years to 10 and make money all along the way. If you're in the lead, holding back performance is far more profitable in the long term, as long as each release is faster than the previous generation. Technology aside, they have a financial incentive not to make their new cards TOO much faster than the old. This is basic capitalism; it operates on maximum profitability.
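
A crude back-of-the-envelope version of that argument (every number below is invented, just to show the shape of it):

```python
def total_profit(years, release_interval, buyers_per_launch, rd_cost_per_launch, price=700):
    launches = years // release_interval
    return launches * (buyers_per_launch * price - rd_cost_per_launch)

# Hypothetical: 100%-faster cards every 4 years vs. ~30% bumps every 2 years,
# where the smaller bumps lose the 10% of buyers who hold out for a big jump.
big_jumps   = total_profit(12, 4, buyers_per_launch=1_000_000, rd_cost_per_launch=400_000_000)
small_bumps = total_profit(12, 2, buyers_per_launch=900_000,   rd_cost_per_launch=250_000_000)
print(big_jumps < small_bumps)  # True: more frequent, smaller steps earn more over the same window
```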
 
If each release was 100% faster I would be buying cards more often. Nvidia's milking is only shooting themselves in their own foot with people like me.
You are a minority. Most people don't buy the latest and greatest $700+ video card every time one is released. Nvidia's and AMD's bread and butter is low- to mid-range cards, plus the OEM market.
 
The histories of these companies, especially their recent histories, prove that they wouldn't.

And Nvidia could drop a far more powerful GPU at any moment they wished - which is exactly what they did with the GTX 1080 Ti when Vega was approaching and rumours were going around concerning its potential performance.

The lack of competition has allowed Nvidia to focus on avenues other than gaming, areas where there is real money to be made. I rather doubt they can "drop a far more powerful GPU at any moment they wished". You make it sound like they sit on new cards for months at a time, which would be insanely stupid. Yes, the Ti line is something they can toss out whenever they feel the need (after a new generation has launched), but that isn't the same as tossing out something anytime they want. It takes a lot of time to make a new generation of GPU. We're not even seeing Volta this year - not because AMD isn't competitive, but because it isn't ready to launch yet.

AMD's failure to compete gives Nvidia room to breathe and take their time on each new generation, but it doesn't mean they're just choosing to sit on something far more powerful and not make money on it. The lack of competition has also given Nvidia room to explore other avenues away from the gaming and enthusiast markets, pushing more into areas that have the potential to provide far, far greater revenue streams. They could spend billions developing a massively more powerful gaming GPU, but there really isn't a point; it would be a waste of money. Focusing more on VR and non-gaming markets is more valuable for Nvidia. This has nothing to do with the lack of competition from AMD, as AMD is focusing their GPU efforts on non-gaming markets as well. The time and cost that would be required to even get to that 2x gen-to-gen gaming performance jump is simply far too much these days.
 
The histories of these companies, especially their recent histories, prove that they wouldn't.

Since Intel's new 6-core CPUs are Skylake refreshes, and Intel was offering 22-core server CPUs in early 2016, Intel could have easily made a consumer CPU with a higher core count when Skylake first launched - but they didn't.

And Nvidia could surely drop a 1080 Ti Ti right now, if they wanted to.

My point is that the actual development of speed and features isn't driven by the competition nearly as much as people think. You go out of business if you take your foot off the throttle. What does change, and what many of your points allude to, is pricing.

Why? You can change list price at any time. If you get caught with your pants down in R&D, it will be years before you recover, if at all. They cannot risk that. So yes, they may price things higher if they think they can, but don't think for a second R&D isn't going balls-out to make the best architectures they can.

The consumer line of CPUs may not have as many cores as you want. But chips with that many cores clearly do exist - you cited some. You're just not willing to pay for them. And as it turns out, most people aren't, and beyond that, most people do not need them either. We see benchmarks daily showing that scaling beyond 4 cores in consumer programs (games, general computing) is diminishing, to put it mildly. Look no further than how many people still run Sandy Bridge processors with great success. That number of cores still does quite well for most things.

So why would Intel/AMD increase the cost of making the product by adding things most people don't need?

As to the 1080 Ti Ti - that level of performance clearly exists: buy two 1080 Tis. Unless you're saying they should be able to get all that performance in one die, which is pushing very deep into "deathstar-sized" dies. Yields would be poor, cost would be huge, and they'd sell a tiny handful. I can't imagine this would be a good product for them to make.
 
Another one of his videos reminds us of the first mining GPU bubble a few years back where they couldn't give cards away. Guess I should be patient and not look a gift horse in the mouth since I've been waiting for a good price to get an upgrade.
 
nVidia and Intel don't want you to sit on your current hardware for 5+ years. They want you to upgrade frequently, it's in their best interest to continue to develop and evolve products, even without competition driving advancement.

Competition will give you more aggressive advancement and cost placement, for certain, but lack of competition won't cause a halt in advancement.

I do agree that what we have seen from Intel for the past ... 6+ years is what a company does with no competition: continue to advance, but do so at the pace of profitability. The minimum needed advancement to keep people interested in upgrading, for the minimum amount of cost. But that's still been advancement nonetheless, just not as much as a lot of us would have liked to have seen, nor nearly as much as Intel would have been capable of if they were truly pushed.

The fact that GPUs aren't doubling in speed every generation is because all the easy advancements have been made already. Rock's Law plays a big part of that as well. PhaseNoise said it well. GPU and CPU companies really do try to push for annual release cycles, because they want to get all those upgrades sold.
 
nVidia and Intel don't want you to sit on your current hardware for 5+ years. They want you to upgrade frequently, it's in their best interest to continue to develop and evolve products, even without competition driving advancement.
For me, that method works against them though. I'm not going to drop $400 or $500 for a 30% increase. If anything it means I can skip a generation or two, which is exactly what I have done.
 
Of course the 6800 Ultra was a massive step up in performance compared to the FX 5800.
The FX 5800 was a hot, loud, poorly performing turd of a card.
 
So, it's more AMD's responsibility to keep Nvidia fair and honest with its consumer dealings than it is Nvidia's own responsibility? Competition can be what gets a company to put in effort, but the bigger reason for this slowdown is that Nvidia doesn't care about moving things ahead or about consumer value, as can be seen in things such as the Nvidia G-Sync premium and refusal to support FreeSync, and the jacking up of GPU prices when AMD isn't there to compete. Those things aren't just a lack of action to move forward when competition isn't nearby. They're actively negative actions that exploit consumers while not giving them fair returns for their money.

Call me a fanboy if you want, but nVidia supporting FreeSync will be a VERY bad thing for AMD.

Let's say nVidia announced that their latest drivers now support FreeSync just weeks before AMD releases their Vega - would you believe there is ANY reason to go for AMD's Vega?

AMD's market share would thus tank even faster, because there is literally no reason for anyone to choose AMD over nVidia this generation, at all, in any way.

I want nVidia to support FreeSync as much as the next guy, but as long as AMD performs this poorly in the GPU segment, and nVidia still has some eggs in the G-Sync basket, it's probably in the best interest of everyone (including nVidia) for them not to support FreeSync.
 
The histories of these companies, especially their recent histories, prove that they wouldn't.

Since Intel's new 6-core CPUs are Skylake refreshes, and Intel was offering 22-core server CPUs in early 2016, Intel could have easily made a consumer CPU with a higher core count when Skylake first launched - but they didn't.

So, what's going to make Nvidia's shareholders sign off on releasing a 2x more powerful GPU at affordable prices that results in people who drove that 3x share value increase not buying a new GPU for many years to come?
 
If we look at simply how large these GPUs are getting, I don't think that this is realistic. Node shrinks aren't keeping up these days, and the cost of a GPU scales with the number of chips you get off each wafer; making huge monolithic chips means more wafer space is lost each time a defect lands in a section you can't fuse off for a lower-performance variant. The idea that Intel would sell 22-core chips to consumers (which, ironically, spend a huge amount of die space on memory) at 4-core prices is a bit silly if they can produce 5x as many consumer chips at a gradual rate of improvement (since CPUs aren't bottlenecking that much anyway).

On the other hand, for GPU manufacturers the chips are already huge, much bigger than CPUs. If NVIDIA could make a GPU 2x as fast in the same area, they would have done it. Instead they take an iterative improvement approach rather than throwing tons of R&D at it. It's efficient, good business. Throwing tons of R&D into one generation is a sign of desperation and a need to catch up; if all companies did that all the time, they'd be like game companies, laying off staff after each project. It's not healthy.

Since everyone's living a lot longer on each node, and RAM isn't scaling anywhere close to as fast as we want it to, frankly 30% per generation of improvement through mostly efficiency gains is pretty impressive. Ryzen, on the other hand, is the result of throwing a lot of resources at a new architecture to catch up. They'll be optimizing off that for a couple of generations for sure; they can't do that every 18 months.
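
To put rough numbers on the die-size point (the defect density and die areas here are illustrative, not actual foundry figures), a simple Poisson yield model shows why a "deathstar-sized" die costs far more than twice as much per chip:

```python
import math

def good_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300, defects_per_cm2=0.1):
    # Gross dies per wafer: crude estimate ignoring edge loss and scribe lines.
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    gross = wafer_area / die_area_mm2
    # Poisson yield: probability a die catches zero defects.
    yield_fraction = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)
    return gross * yield_fraction

for area in (300, 600):  # a mid-size die vs. a doubled, monolithic "2x" die
    print(area, "mm^2 ->", round(good_dies_per_wafer(area)), "good dies per wafer")
# Doubling the die area halves the gross die count AND tanks the yield, so
# sellable chips per wafer fall by well over half; cost per chip rises faster
# than linearly with area.
```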
 