Confirmed: AMD's Big Navi launch to disrupt 4K gaming

Marees

Fingers crossed that all the disrupting will affect only Nvidia and not AMD's board partners ...


https://www.pcgamesn.com/amd/navi-4k-graphics-card


https://www.thefpsreview.com/2020/0...-disrupt-4k-as-ryzen-did-with-cpu-processors/

https://www.techradar.com/news/amd-...d-dethrone-nvidia-just-like-it-did-with-intel

https://hardforum.com/threads/amds-...s-have-hit-north-america-for-testing.1992057/

Hook it up!!!

"And with those 80 CUs you’re looking at 5,120 RDNA 2.0 cores. Quite how AMD is managing the hardware-accelerated ray tracing we still don’t know. It will surely be designed to support Microsoft’s DirectX Raytracing API, given that is what will likely power the Xbox Series X’s approach to the latest lighting magic, but how that is manifested in the GPU itself is still up in the air."

View attachment 217429

https://translate.google.com/transl...phell.com/thread-2181303-1-1.html&prev=search

EDIT:

Get ready for lots of "disruption" this year ...

https://www.extremetech.com/gaming/...nce-and-slash-power-consumption-by-50-percent


The question of how effectively Navi 20 can match Ampere will almost certainly come down to power efficiency improvements and how well AMD can scale its new silicon.

It’ll also be interesting to see how Nvidia prices Ampere, given the hostile reception to its price increases with Turing, and how much it can improve ray tracing performance.

All of these are likely to be significant factors in how the two GPUs compare with each other.
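To put that rumored 50% performance-per-watt improvement in concrete terms: at an unchanged board power of, say, 225W (an illustrative figure, not a leaked spec), 1.5× perf/W works out to 1.5× the performance; alternatively, matching today's performance would take only 225W / 1.5 = 150W.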
 
Funny title on the first link. AMD hasn't "disrupted" anything on the CPU side. They simply released a set of products which are somewhat faster than what was already on the market. If Big Navi is going to be "similarly disruptive," then that just means it's going to offer not quite enough performance for AAA 4K - just like the 2080 Ti that came out two years before it.

With AMD aiming to match two-year-old Nvidia performance, it sounds like they're still stuck in the rut of being the low-cost GPU maker rather than the high-performance one.
 
Funny title on the first link. AMD hasn't "disrupted" anything on the CPU side. They simply released a set of products which are somewhat faster than what was already on the market. […]
I mean, they did disrupt it. It forced Intel to cut prices and give everyone more cores. If forcing another manufacturer to change their pricing and product lines isn't a disruption, then maybe your definition is not the same. It's not as if it says it completely obliterated the landscape... It disrupted it, aka not business as usual for Intel, which is a true statement. Now, how much of a disruption... That's debatable, but it was enough to be noticeable. Anyways, I'll wait for benchmarks before I get too excited; here's to hoping though!
 
Forgive me for feeling like Humble Bellows, but...


corsaire_rouge-05.jpg


My humble eyes will believe it, when my humble eyes see it.

It's been way too long with nothing but half-assed disappointments from AMD on the GPU front. They haven't been relevant to me in GPUs since at least the 290x in 2013, if not longer.

Anything can happen, but I've been disappointed too many times, so I'm not counting on anything until I see competent 3rd party WRITTEN reviews. (None of that YouTube garbage.)


At this point with the 3080ti presumably around the corner, it's not going to be sufficient for them to tango with the 2080ti. They'll need to beat it by a significant margin.
 
Forgive me for feeling like Humble Bellows, but... My humble eyes will believe it, when my humble eyes see it. […]

At this point with the 3080ti presumably around the corner, it's not going to be sufficient for them to tango with the 2080ti. They'll need to beat it by a significant margin.

I cannot agree with your opinion, since the 2080 Ti has been out at $1,200 for a long time and most folks who want the performance are not willing to pay that price. Also, if the 2080 Ti suddenly drops by about $500 once AMD's card comes out around $700 or so, it will be clear that Nvidia was milking its customers all along, because it could.
 
Anything can happen, but I've been disappointed too many times, so I'm not counting on anything until I see competent 3rd party WRITTEN reviews. (None of that YouTube garbage.) […]

Yeah, I'd go a step further and say I'd like to see a few months of initial user impressions and time to uncover issues. Neither manufacturer has a great history of flawless launches...
 
Nothing confirmed until the final hardware is released to reviewers.
I cannot agree with your opinion, since the 2080 Ti has been out at $1,200 for a long time […] it will be clear that Nvidia was milking its customers all along, because it could.
Everyone knows Nvidia is milking people. So what if AMD releases a card that's slightly better than a 2080ti? Nvidia has the 3080ti ready to go. Since that will probably be another 30%+ faster than the 2080ti, it will still command a premium price.
 
Literally for almost ten years (2008-2017) the high end of the consumer CPU market was quad cores with SMT. Each generation slightly faster than the last, which is why so many held on to those Sandy Bridge chips for so long. There was no significant performance to be gained from upgrading even 5 generations later.

First gen Ryzen doubled that. Two and a half years later 3rd gen did it again, now to 16 cores with SMT. Intel's high end finally increased core counts, and they refined 14nm to such a sharp edge that the latest chips are an actual improvement on the first ones. Consumers can buy high-end laptops with 8 cores and 64GB of RAM.

It is the very definition of disruptive.

As for whether they can do it in the GPU space: I'd love for all these rumors to be true, but aside from complaints about pricing, I don't think anyone could really argue that a Fermi-era GPU is still "good enough" the way an old quad-core was. Nvidia has been pushing noticeable performance improvements each generation, and AMD has been keeping up in the mid-range at least (though a year or two behind).

At this point all they'd need to do to disrupt the market is release a part competitive with high-end Nvidia hardware within a few months of Nvidia's release, instead of two years later.
 
Literally for almost ten years (2008-2017) the high end of the consumer CPU market was quad cores with SMT. […] It is the very definition of disruptive.
^^^^^^
Well said.
I want a waaaay faster GPU for cheap lol
 
It feels as though there's a selective choice of words (from AMD) on Big Navi... At a time when we are seeing many new monitors (including IPS) at 240Hz and even higher, what about 240Hz (and better) gaming at 1080p and 1440p? Is AMD really saying they are focused on a gaming card centered around 4K (which can be a different product than a card capable of pushing 150+ frames at 1440p, or even 1080p)? Let's see where Nvidia takes us with Ampere... disrupting the 2080 Ti may not prove to be such a big thing come the Ampere Titan RTX and 3080 Ti. Don't get me wrong, I'm hoping like all the other AMD fans, but it's pretty hard to have much faith in that happening...
 
Funny title on the first link. AMD hasn't "disrupted" anything on the CPU side. They simply released a set of products which are somewhat faster than what was already on the market
8 months after Zen 2's launch and you still can't get more than 8 Intel cores on a mainstream consumer platform. AMD has a 12 and a 16, and the performance of those cores scales very well. It's not just a bigger-number smokescreen. And the 12 launched for the same money as Intel's best mainstream chip, the 8-core 9900K.

AMD's 6-core Zen 2 3600 gives you SMT for 12 threads, unlocked for overclocking. And at launch it was $62 cheaper than Intel's unlocked 6-core 8600K, which doesn't have SMT.

AMD's 8-core gives you SMT for 16 threads, unlocked for overclocking. At launch it was $40 cheaper than Intel's unlocked 8-core 9700K, which doesn't have SMT.

And AMD gives you a usable heatsink with all of those, too. The 9900K was actually about $15 cheaper than the 12-core Ryzen 9 3900X, but you'd have to buy something to cool it, making it, at bare minimum, cost as much as the 12-core.

And that was all just last year. As pointed out earlier, the first release of Zen started a chain reaction that has forced Intel to finally increase core counts and attempt a new architecture. Even in the mobile sector, where AMD is only just getting going.
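To pull those pricing comparisons together, here's a quick per-thread tally using the launch MSRPs referenced above (approximate list prices; street prices at launch varied):

[CODE=python]
# Launch price per thread, using commonly cited MSRPs (approximate).
chips = {
    "Ryzen 5 3600 (6C/12T)":   (199, 12),
    "Ryzen 7 3700X (8C/16T)":  (329, 16),
    "Ryzen 9 3900X (12C/24T)": (499, 24),
    "Core i9-9900K (8C/16T)":  (488, 16),
}
for name, (price, threads) in chips.items():
    print(f"{name}: ${price / threads:.2f} per thread")
# Ryzen 5 3600: ~$16.58/thread vs. Core i9-9900K: ~$30.50/thread --
# and the Ryzen prices include a usable stock cooler.
[/CODE]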

If Big Navi is going to be "similarly disruptive," then that just means it's going to offer not quite enough performance for AAA 4K - just like the 2080 Ti that came out two years before it.

If they become similarly disruptive, I'm guessing it means a whole new RDNA2 product line that essentially shifts pricing downward in every segment, with meaningful performance. They have already done that a little bit. Next time round, it should be even more shifting.

Indeed, Nvidia's Ampere could and probably will keep the tip-top crown. But I think AMD will be on a much higher rung, and with the neeeexxxxt release will finally fully catch up or even start a battle. And in the meantime, those of us who don't care about the highest of the high will be very happy about the downward shift in price/performance. I mean, we already are. But even more.
 
I mean, they did disrupt it. It forced Intel to cut prices and give everyone more cores. […]

Considering they make a CPU that's way faster than the 9980XE that you have, I'd say that's pretty disruptive.

That really isn't disruptive. It is an iterative upgrade. If I upgraded, then I would simply be doing the same computing I've always done, but a bit faster than before. There's no disruption there at all. (btw, it's an upgrade I'm looking at making this year, but thinking about it bores me).

If we want to call ordinary business practice "disruption," then we can talk about how Nvidia "disrupted" the low end of the GPU market by forcing AMD to release new firmware after reviews were done but before the cards had shipped. (and before you get confused, I don't feel that is disruption either)

For me, a disruptive release would be a 4K Ultra 100fps card at $900. Anything shy of that is just business as usual.

Literally for almost ten years (2008-2017) the high end of the consumer CPU market was quad cores with SMT. Each generation slightly faster than the last, which is why so many held on to those Sandy Bridge chips for so long. […]

Desktop CPU performance has been pretty linearly increasing since ~2004, and that's including the latest generations from both makers. People have kept their hardware for a long time because 1) that's actually normal and has been for decades; and 2) for the most part, people's computational needs have remained flat despite the increase in computational capability. Web pages, email, and Word docs cover 99% of the needs of 99% of users, and those can be done tremendously well on fairly basic hardware (such as a 5-year-old iPhone).

Consumers can buy high end laptops with 8 cores and 64gb of RAM.

Yeah, you're absolutely right about this. They can buy Intel ones. AMD must have pulled a reverse double bait on this one. They SO played Intel on this!


AMD has been keeping up in the mid-range at least (though a year or two behind).

at this point all they'd need to do to disrupt the market is release a part competitive with high-end Nvidia hardware within a few months of Nvidia's release, instead of two years later.

This is literally the opposite of the meaning of "keeping up."

Nothing confirmed til the final hardware is released to reviewers.

* Except in the case of AMD
 
We have heard nothing on a 3000 series and its possible performance, so we're still a ways out before it shows up. We're getting leaks on Big Navi and possible performance numbers, and AMD is starting to talk about it. So at best I think Nvidia might be launching very late this year with a 3000 series, which means AMD will have several months before having to deal with it. If Big Navi can beat the 2080ti by 25%, then I expect they don't have much to worry about from Nvidia. It will also be interesting to see how ray tracing performs on Navi; it should be an interesting year in graphics.
 
8 months after Zen 2's launch and you still can't get more than 8 Intel cores on a mainstream consumer platform. […]
Also, Intel just announced they didn't get PCIE 4.0 working on their next platform. But AMD has had it for 8 months.
 
It feels as though there's a selective choice of words (from AMD) on Big Navi... what about 240Hz (and better) gaming at 1080p and 1440p? […]

1080p gaming is in a very difficult place for a GPU maker to target because of the current state of GPU performance. It is a segment of the market which is much more heavily constrained on cost. Releasing a super hot 1080p GPU to this segment means that many (most?) of the gamers buying it will be CPU bottlenecked and won't get the full performance of the card - these probably won't be very happy or very motivated customers. The ones who aren't CPU bottlenecked will be display-limited. The few who are bound by neither are spending enough that 1440p gaming is the target. Image quality could be improved (i.e., RT or some other tech), but that's typically a difficult benefit to market effectively.
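The bottleneck argument is easy to see with a toy model: delivered frame rate is capped by whichever of the CPU or GPU takes longer per frame. A minimal sketch, with all fps numbers made up purely for illustration:

[CODE=python]
# Toy bottleneck model: the delivered frame rate is limited by the
# slower of the CPU's and GPU's per-frame times.
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    cpu_time = 1.0 / cpu_fps   # seconds the CPU needs per frame
    gpu_time = 1.0 / gpu_fps   # seconds the GPU needs per frame
    return 1.0 / max(cpu_time, gpu_time)

# Hypothetical 1080p case: the GPU could render ~300 fps, but the CPU
# only prepares ~180 frames/sec, so the GPU sits partly idle.
print(delivered_fps(cpu_fps=180, gpu_fps=300))   # ~180 (CPU-bound)

# Same GPU at 4K: GPU work roughly quadruples, CPU work barely changes.
print(delivered_fps(cpu_fps=180, gpu_fps=80))    # ~80 (GPU-bound)
[/CODE]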
 
1080p gaming is in a very difficult place for a GPU maker to target because of the current state of GPU performance. […] The few who are bound by neither are spending enough that 1440p gaming is the target.

I agree, though there is also a market (by no means mainstream volume) that buys Titan RTXs and Tis for gaming, paired with whatever the top frame-rate-delivering CPU on the market is and a high-refresh monitor. It's a (slowly) growing market where we are seeing many new IPS 240Hz (or higher) monitors coming out, some even at 1440p. (We see something similar right now on the CPU side with Intel vs. AMD, with Intel edging out AMD when the goal is to turn the most frames possible at 1080p and even 1440p.)
 
nvidia will release a 3080ti that is 35% faster than the 2080ti, with vastly improved RT thru refined RTX and DLSS.

AMD will release a 5900XT that is 10% faster than the 2080ti, with some form of RT. It will be 20% cheaper than the 3080ti.

HUB will make a 30 minute review saying "CHEEPA" 148 times, r/AMD will declare a resounding victory.

Launch will be the stuff of legends.

Herkelman will leave AMD, open a Hot Dog cart at Times Square.
 
nvidia will release a 3080ti that is 35% faster than the 2080ti […] Herkelman will leave AMD, open a Hot Dog cart at Times Square.

You nailed it right there... I'd even bet on it.
 
I think Auer has got it right.

That really isn't disruptive. It is an iterative upgrade. If I upgraded, then I would simply be doing the same computing I've always done, but a bit faster than before. […] For me, a disruptive release would be a 4K Ultra 100fps card at $900. Anything shy of that is just business as usual.

By your own definition that wouldn't be disruptive though. It'd be the same graphics you've always done, but a bit faster. So you're saying the ability to play at a high frame rate and resolution for a lower price is disruptive, but the ability to render or do video encodes 3-4x faster isn't?

Desktop CPU performance has been pretty linearly increasing since ~2004, and that's including the latest generations from both makers. […]

Yep, and there has been no compelling reason to upgrade for most users, even power users, because for the longest time the top end was just 4C/8T.

Yeah, you're absolutely right about this. They can buy Intel ones. AMD must have pulled a reverse double bait on this one. They SO played Intel on this!

/shrug, Intel wasn’t doing anything for 8-core laptop options before they had to do it on the desktop, too. If you think they would have introduced those so rapidly instead of telling everyone to wait for 7nm, go on with yourself.

keeping up

that’s a fair point. I should have said “catching up.”
 
I cannot agree with your opinion, since the 2080 Ti has been out at $1,200 for a long time […] it will be clear that Nvidia was milking its customers all along, because it could.

Any product which has no alternative can command a premium price. AMD will and has done the same during times when they were in the lead.

As soon as they launch a new product the competitive landscape changes. AMD will be forced to launch at a lower price as they are now launching into a competitive market, and Nvidia will in all likelihood be forced to drop prices.

My point is simply that if AMD is going to celebrate merely catching up to where the 2080ti is today, that only buys us a few months of competitive market until Nvidia leapfrogs them again with their next gen.

If we are to have any hope of a longer-term competitive market, they need to show significant performance advantages over the 2080ti, so that it can be competitive with whatever Nvidia launches in 4-5 months' time.
 
That really isn't disruptive. It is an iterative upgrade. […] For me, a disruptive release would be a 4K Ultra 100fps card at $900. Anything shy of that is just business as usual.
Wow, such nonsense....
How much faster would a video card have to be than the 2080Ti to meet the 4K Ultra 100fps criteria? 50%-70%? And at $900 you call that disruptive?
AMD on the mainstream platform has almost quadrupled CPU performance since the 4770k/6700k, and you call it business as usual, just a bit faster than before?

What an epic failure of trying to get your point across.
You have a HEDT product, so single-core speed is definitely not your thing, which would be the only saving grace of your argument. As far as single-core speed goes, your argument would make sense.

With the 64-core Threadripper, the video editing industry is going to be able to do much more, much faster.
 
Wow, such nonsense.... How much faster would a video card have to be than the 2080Ti to meet the 4K Ultra 100fps criteria? 50%-70%? And at $900 you call that disruptive? […]
This is why I didn't even respond: it's only disruptive if Nvidia increases performance 30%, lol. If AMD tripled performance and forced Intel to increase its performance to match, it's meh, just a little faster, no point. Not even worth arguing with that kind of reasoning.
 
How much faster would a video card have to be than the 2080Ti to meet the 4K Ultra 100fps criteria? 50%-70%? And at $900 you call that disruptive?

That would be about double the performance of the 2080 Ti, so probably about 50% faster than what Ampere will be. This would be significant disruption because it would bring new features and capabilities to new sections of the market, whereas Ampere is likely to fall into "business as usual" iterative updates. Additionally, AMD offering a 50% performance edge over the top-of-the-line model at a 25% lower price would shake up Nvidia.
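The arithmetic behind that estimate: a card at 2.0× a 2080 Ti, against an Ampere part at a hypothetical 1.35× (the figure floated earlier in the thread), works out to 2.0 / 1.35 ≈ 1.48, i.e. roughly 50% faster.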

AMD on the mainstream platform has almost quadrupled CPU performance since the 4770k-6700k and you call it business as usual only just a bit faster than before?

That's impressive performance..... for AMD. That amazing quadrupling allowed them to keep pace with Intel.... who was just iteratively upgrading as per normal business practice. Catching up to competition isn't disruptive. If Intel caught up to AMD this year, that would be a big jump for Intel - but simply catching up to the leader (in this case, AMD) isn't disruptive.

With the 64 core threadripper, the video editing industry is going to be able to do much more much faster.

No, this still won't change a thing. GN is still going to have basic table overlays in their videos. Jayz2cents is still not going to use overlays. LTT is still going to work with source footage beyond anything they're capable of distributing to their audience. Hollywood is still going to put out half-assed remakes of films which weren't very good to begin with.
 
This is why I didn't even respond: it's only disruptive if Nvidia increases performance 30%, lol. […]

To be fair, you probably also shouldn't respond because you don't seem to be reading anything, and what you have "read" isn't something that I actually posted. I never said that Nvidia offering a 30% performance boost would be disruptive. In fact, I've said the literal opposite.
 
Wow, such nonsense.... AMD on the mainstream platform has almost quadrupled CPU performance since the 4770k/6700k, and you call it business as usual, just a bit faster than before? […]

The notion of AMD having quadrupled CPU performance is quite the mischaracterization.

Sure, on a narrow few highly threaded loads they have. But for the overwhelming majority of things CPUs are used for, there will be little or no difference at all.

What they have succeeded in doing is getting to a tie with Intel on per-core performance (they have higher IPC but lower max clocks, resulting in an approximate tie in some Ryzen models), while at the same time offering more cores for when you need them. But this is hardly revolutionary.

I just upgraded from my 2011 hexacore 3930k, which I had overclocked to 4.8GHz, to a Threadripper 3960x.

Is it an upgrade? Yes. Lightly threaded performance improved slightly. Heavily threaded performance increased massively, but honestly, outside of benchmarks like Cinebench I'm rarely able to load it up. Even when transcoding videos, which is a textbook load for many-core CPUs, I sit at ~20% load.

It has not transformed how I use my computer. It has allowed me to do the same things slightly faster, that's it. So no, it is not revolutionary.

And let's not forget why AMD is able to throw so many cores at their chips. It has absolutely nothing to do with anything they have done themselves. It is because they are using TSMC's excellent 7nm process. That is what allows all those cores without a thermal meltdown.

Throwing cores at the problem is not revolutionary. Sure, it helps in some loads, but it is kind of an easy copy and paste approach to things.

AMD is benefitting from Intel screwing the pooch on their process. That is all. If it hadn't been for Intel's 10nm process failure, Zen would have been another Phenom II. Sure, a nice improvement for AMD, but still playing second fiddle to Intel's offerings.

We do have to give Lisa Su and team credit for digging AMD out of the complete hole they were in with the FX/Bulldozer designs and turning things around to the point where they are in the game again, but still, without Intel stumbling, Ryzen would have been an OK chip, but 10-20% behind Intel.

In short, AMD's resurgent position is largely due to luck that Intel messed up. In a way, the last time they were successful in 1999 to 2005 the same thing happened. Intel messed up with P4/Netburst/RAMBUS allowing AMD an opening.

Don't get me wrong. I am very excited we have a competitive market again, and I'm loving being able to buy AMD products for more than just sentimental nostalgic reasons, but there is nothing revolutionary here at all.

We have more cores. Big whoop.
 
To be fair, you probably also shouldn't respond because you don't seem to be reading anything, and what you have "read" isn't something that I actually posted. I never said that Nvidia offering a 30% performance boost would be disruptive. In fact, I've said the literal opposite.

Considering how many others have pointed out how illogical your statement is, I just don't feel the need. You can keep dressing up your ignorant comments, but it doesn't change their meaning. Again, all you did was draw your own imaginary line where disruptive is disruptive enough for you. That doesn't make it non-disruptive, just not disruptive enough for your taste. Others can feel free to disagree with where you put that benchmark. If you don't feel like others are entitled to opinions that differ from yours, well, keep on keeping on. Heck, someone might come along and say a disruptive part won't arrive until we have quantum computers doing real-time raytracing at 240Hz for $100, and tell you that a mere 50% increase in performance and 10% drop in price is just business as usual.
 
"disrupt" 4k gaming?? LOL. Sounds like a headline designed to influence investors...

'Disrupting' the 4K gaming market would be coming out with 8K gaming at 144Hz for $500... or a streaming service for 4K gaming with no latency for $9.99 a month, all games included... stuff like this would shake up the market.

Not holding my breath that an AMD GPU is going to accomplish those things anytime soon.
 
The notion of AMD having quadrupled CPU performance is quite the mischaracterization. […] AMD is benefitting from Intel screwing the pooch on their process. That is all. […] We have more cores. Big whoop.
Intel shrunk their roadmaps due to Zen. That's disruptive.

A failure is a failure. The biggest, baddest CPU company failed 10nm, and still isn't offering PCIe 4.0.

Vega has RT tech in it. But AMD failed to get it working with confidence, so it's not enabled. So right now, Nvidia gets to have that as another thing over AMD GPUs.
 
Disruption means as little as web 2.0 or synergy at this point.

Just a buzzword.

Go AMD and Nvidia! I need a new video card (mine blew last night), and I'm waiting for this release before I pull the trigger on either team.
 
[…] Vega has RT tech in it. But AMD failed to get it working with confidence, so it's not enabled. So right now, Nvidia gets to have that as another thing over AMD GPUs.

And DLSS is finally moving along in the right direction.
 
A failure is a failure. The biggest baddest CPU company failed 7nm and just announced failure for PCIE 4.0. […]

10nm.

Also, there was no announcement on PCIe 4. It was only a rumor, and AnandTech's Ian Cutress basically put the kibosh on it a couple days ago. So it's entirely possible that there was never the kind of plan or failure the initial rumor planted.
 
Also, there was no announcement on PCIe 4. It was only a rumor […]
ah ha

Well.....they don't have it yet. And AMD has had it for 8 months. So, still useful for this dumb argument! haha
 
The notion of AMD having quadrupled CPU performance is quite the mischaracterization. […] Even when transcoding videos, which is a textbook load for many-core CPUs, I sit at ~20% load. […]

You missed my point about single-threaded tasks.
Also, about your point on encoding and sitting at 20% load: I feel bad for saying this, but you're doing it wrong. The easiest way to use more of your CPU during encoding is to add another encode process. Keep adding until you hit 100%; it's not that hard (rough sketch below). I understand that most people won't be able to have that many simultaneous encodes running all the time, so 64 cores may not be for you. I would love to have 64 cores, as I can have batch encodes that reach into the hundreds, but that's waaay out of my price range and my PC isn't my money maker.
OSes are multithreaded, and anything that makes Windows run faster is welcome.
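A rough sketch of that batch approach in Python, assuming ffmpeg with libx264 is on the PATH (directory names and encode settings here are hypothetical):

[CODE=python]
# One x264 job on a low-res source won't saturate a big chip, but
# several independent jobs running side by side will.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path("clips")      # source clips to encode
DST = Path("encoded")    # output directory
JOBS = 8                 # concurrent encodes; raise until the CPU is saturated

def encode(clip: Path) -> int:
    out = DST / (clip.stem + ".mp4")
    # Each ffmpeg process gets its own x264 encoder instance, so eight
    # jobs use roughly eight times the threads of one job.
    return subprocess.call([
        "ffmpeg", "-y", "-i", str(clip),
        "-c:v", "libx264", "-preset", "slow", "-crf", "18",
        str(out),
    ])

DST.mkdir(exist_ok=True)
with ThreadPoolExecutor(max_workers=JOBS) as pool:
    results = list(pool.map(encode, sorted(SRC.glob("*.avi"))))
[/CODE]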

That's impressive performance..... for AMD. That amazing quadrupling allowed them to keep pace with Intel.... who was just iteratively upgrading as per normal business practice. Catching up to competition isn't disruptive. If Intel caught up to AMD this year, that would be a big jump for Intel - but simply catching up to the leader (in this case, AMD) isn't disruptive.

I would say that the 3950x is just a bit more than parity compared to the 9900K/KS.

No, this still won't change a thing. GN is still going to have basic table overlays in their videos. Jayz2cents is still not going to use overlays. LTT is still going to work with source footage beyond anything they're capable of distributing to their audience. Hollywood is still going to put out half-assed remakes of films which weren't very good to begin with.

Well, what else would they do? What you are talking about is purely software related.
The benefits would be for them and not you. They could edit, encode, and upload their videos to YouTube much faster, but I don't think they would be the ones that would benefit from Threadripper very much.

What you (and I) want is a disruptive breakthrough that would allow single-core speed to jump drastically, or somehow make single-threaded-only software multithreaded (probably impossible).
 
The notion of AMD having quadrupled CPU performance is quite the mischaracterization. […] Even when transcoding videos, which is a textbook load for many-core CPUs, I sit at ~20% load. […]
I don't know what platform you are using to do your transcoding, but many of them require you to change settings or modify an .ini to utilize more than 8 cores; they will do so happily, but most don't by default.
 
Also, there was no announcement on PCIe 4. It was only a rumor, and AnandTech's Ian Cutress basically put the kibosh on it a couple days ago. […]


That is one repeating rumor-monger pattern I see: claim X will happen, with ZERO evidence. Later, claim X failed and will not happen after all, again with no evidence.

Externally, nothing happens. The rumor monger then claims his rumors were correct: "See, nothing happened, we were right about the failure of X." :rolleyes:
 
The easiest way to use more of your CPU during encoding is to add another encode process. Keep adding until you hit 100%; it's not that hard. […]

Fair.

A lot of this likely has to do with my workload right now. I haven't exactly been transcoding feature films. I've been working on a project to digitize analog camcorder tapes from the 80's and 90's, in PAL format, so we are talking 720x576 resolution at 25Hz.

My workload has been to first record the entire length of a tape in raw YUV format, then go through and slice it into clips as they were recorded. The clips range in length from 3 minutes up to an hour. Once clipped up, I then encode the clips with h264.

The low resolution seems to be what keeps the core utilization pretty low. And because the clips are pretty small, they don't run for long periods of time, so it becomes challenging to run many instances.

I have noticed that when I up the resolution to 1080p samples, I do get better utilization. Still not fantastic, but better. Maybe 4K encodes will be better yet? I don't know. I've never had any reason to encode anything in 4K.

OSes are multithreaded, and anything that makes Windows run faster is welcome.

Yeah, you can always utilize more cores by running more separate programs at the same time, but in this scenario you wind up being user limited long before you wind up being core limited on most systems. There are only so many things you can do at the same time as a human being.


I would say that the 3950x is just a bit more than parity compared to the 9900K/KS.

Really depends on what you are doing. For highly threaded loads, certainly, but for the things most people do with their computers, it is going to be a tie, or slightly favor the 9900ks.


What you (and I) want is a disruptive breakthrough that would allow single-core speed to jump drastically, or somehow make single-threaded-only software multithreaded (probably impossible)

That would be the holy grail, but it is definitely not possible.

In the end, per-core performance increases (a combination of clock speed and IPC) will always trump added cores, for the simple reason that they improve ALL software that runs on the system, while added cores only benefit the relatively limited types of code that can be multithreaded.
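That trade-off is just Amdahl's law: if a fraction p of a program's work can be spread across n cores, the best-case speedup is 1 / ((1 - p) + p / n). A quick illustration, with made-up p values:

[CODE=python]
# Amdahl's law: overall speedup from n cores when only a fraction p
# of the work parallelizes. The p values below are illustrative.
def amdahl(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl(p=0.50, n=32))   # ~1.94x: 32 cores barely double a 50%-parallel job
print(amdahl(p=0.95, n=32))   # ~12.6x: near-fully-parallel code scales well
# By contrast, a 30% per-core (IPC/clock) gain speeds up ALL work by 1.3x.
[/CODE]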

All is always better than some. :p
 