Intel challenges AMD: “Come beat us in Real World gaming”

I'd love to see AMD say: "Alright, we've agreed to work with Intel to select a third party to conduct real-world gaming performance testing with each of our flagship CPUs, using the SAME video cards and the same settings for everything outside of the CPU. So, Intel, we are calling you on your statement. Let's find an independent third party to do this gaming comparison."

Because I think Intel would crap themselves. Both sides are so damn close today in real-world performance that I don't see them ever ironing out how to do this.
 
Being coy about what? I didn't state it because I didn't intend to state it. Also, please take your personal issues elsewhere. That is trolling.

You might want to work on your wording if you don't want people to take your comments to imply something you did not intend to say. It's all in the interpretation. I took your comment as snarky, negative, and as if you were saying AMD was the loser here. But the way you got all butthurt about it, immediately called me a troll (which I am not), and somehow decided it's some personal issue (not sure where you pulled that from) raises questions about your motives. If that is not what you meant, then why the personal attacks on me? All you needed to do was simply state you did not mean it that way.
 
Yeah... sorry, I only posted one place that day. Obviously there are other retailers, but hey, we all skew the results how we want to.

But keep your head buried in the sand... THERE IS NO INTEL CHIP SHORTAGE... /s

I have to agree with Armenius... the 9xxx series is almost all green (in stock) now.

 
I'd love to see AMD say: "Alright, we've agreed to work with Intel to select a third party to conduct real-world gaming performance testing with each of our flagship CPUs, using the SAME video cards and the same settings for everything outside of the CPU. So, Intel, we are calling you on your statement. Let's find an independent third party to do this gaming comparison."

I'd give it to more than one house - maybe Gamers Nexus for the YouTube crowd and Tom's Hardware, to balance it out?

Let them run their tests independently, then collaborate on their results?
 
They need to establish and publish the testing methodology, agree on security patching, and so on.

Today, unless I upgrade to ESXi 6.7 U2, I lose all of my hyperthreaded Intel logical cores. That can be a massive impact. AMD owns that market in my mind.
 
They need to establish and publish the testing methodology, agree on security patching, and so on.

No question.

Today, unless I upgrade to ESXi 6.7 U2, I lose all of my hyperthreaded Intel logical cores. That can be a massive impact. AMD owns that market in my mind.

They've owned it for a while, in my mind too - if you have the purchasing flexibility, in the enterprise at least. It's frustrating being an enthusiast in IT and seeing the myriad performance considerations that get dropped below platform, supply, and support considerations in the enterprise.

Also, this thread is about gaming ;).
 
The bigger question is why Bulldozer is at #14 :confused:

There's... a lot of questions there. The only real answer I can see is that 'popular processors are popular'. I definitely didn't expect to see the 9900K that high given its price.
 
The bigger question is why Bulldozer is at #14 :confused:

Tons of people out there still on AM3 and AM3+ boards. Check fleabay - you can get new B350 and B450 boards for the price of a used AM3 board. The FX-8350 gets you two more cores (and a lot more space heating) for not a lot of coin as a drop-in replacement for an old Athlon X-whatever.
 
I doubt the popularity index only takes the number of sales into consideration. They probably throw in some weird metrics to skew it up a little, like customer account value, number of page views, number of times compared to other products... etc.
They have to market the high-profit-margin parts somehow ;)
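Purely as an illustration of what a blended ranking like that might look like (every field and weight below is made up, not anything a retailer has published):

Code:
// Hypothetical "popularity" score: a weighted blend of signals rather than
// raw unit sales. All fields and weights are invented for illustration, and
// each signal is assumed to already be normalized to a comparable scale.
struct ProductSignals {
    double units_sold;
    double page_views;
    double times_compared;   // times the product shows up in comparisons
    double margin;           // high-margin parts get a marketing nudge
};

double popularity_score(const ProductSignals& p)
{
    return 0.5 * p.units_sold
         + 0.2 * p.page_views
         + 0.1 * p.times_compared
         + 0.2 * p.margin;
}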
 
Exactly - why would Intel want their competitor beating them with their own compiler? Can't enable certain instructions for them ;P

Intel's had the 'edge' for the longest time, so it does make sense for them to have been putting out a compiler. At the same time, it would make sense for AMD to target code released for the Intel compiler, because that's what's out there. Intel's compiler has become the reference. It helps that Intel contributes massively to Linux, to the point of having their own bleeding-edge Linux distro (Clear Linux), which is pretty cool in and of itself.

I do expect AMD to start doing something similar. They're contributing on the graphics side and making inroads into the community that Nvidia seems uninterested in pursuing, so perhaps we'll see more commits to open-source compilers to balance things out, and perhaps a distro optimized for the quirks of their architecture.


As for not enabling instructions on non-Intel architectures, that needs to be called out with proof.
 
Intel's had the 'edge' for the longest time, so it does make sense for them to have been putting out a compiler. At the same time, it would make sense for AMD to target code released for the Intel compiler, because that's what's out there. Intel's compiler has become the reference. It helps that Intel contributes massively to Linux, to the point of having their own bleeding-edge Linux distro (Clear Linux), which is pretty cool in and of itself.

I do expect AMD to start doing something similar. They're contributing on the graphics side and making inroads into the community that Nvidia seems uninterested in pursuing, so perhaps we'll see more commits to open-source compilers to balance things out, and perhaps a distro optimized for the quirks of their architecture.

As for not enabling instructions on non-Intel architectures, that needs to be called out with proof.

The actual instruction differences, in my experience, are not a huge deal, as those are mostly the SIMD instructions. Left strictly to the compiler, no compiler does a good job of auto-vectorization, even Intel's. You must explicitly handle it, and then you're just targeting instructions, not companies.

Intel had a world-beating last-mile optimizer for a while, but that lead has diminished dramatically over the years. It's at the point where I don't bother using it, and casual inspection indicates very few other dev houses do either. The advantage is doubly diminished by the fact that Zen is really quite good at handling "Intel-optimized" binaries. I expect developers will not bother to target processors over time, just instruction sets.
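To make that concrete, here's a rough sketch of what "targeting instructions, not companies" looks like in practice: dispatch on a CPUID feature flag instead of a vendor string. This assumes GCC or Clang on x86 (for __attribute__((target)) and __builtin_cpu_supports); the add() function and its names are made up for the example.

Code:
#include <immintrin.h>
#include <cstddef>

// AVX2 path: written against the instruction set, not against "Intel" or "AMD".
__attribute__((target("avx2")))
static void add_avx2(const float* a, const float* b, float* out, std::size_t n)
{
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; ++i)          // scalar tail for the leftover elements
        out[i] = a[i] + b[i];
}

// Plain scalar fallback for anything without AVX2.
static void add_scalar(const float* a, const float* b, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

void add(const float* a, const float* b, float* out, std::size_t n)
{
    // Feature check, not a vendor check: any CPU reporting AVX2 takes the
    // vectorized path, whoever made it.
    if (__builtin_cpu_supports("avx2"))
        add_avx2(a, b, out, n);
    else
        add_scalar(a, b, out, n);
}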
 
I expect developers will not bother to target processors over time, just instruction sets.

That's basically where we need to get, and I do understand addressing vectors directly.

I'll also add (without intending to correct) that those SIMD instructions do tend to make up a pretty large part of CPU 'grunt' these days, at least for work that cannot reasonably be done on GPUs instead.
 
I think you missed the point that PhaseNoise was trying to make. The difference now between a generic compiler and Intel's compiler is very small - a few percent.

Sure, more compiler development is always a good thing. But you generally won't get a magical 20% performance gain from a compiler that already does a very solid job.

Good threading conventions are what's needed.
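For what that looks like in practice, here's a rough, made-up sketch (not from any real engine): splitting independent per-element work across std::thread workers, which is where the big wins tend to come from rather than another few percent out of the compiler.

Code:
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical per-frame update: each element is independent, so the work
// splits cleanly across cores with no locking.
void update_positions(std::vector<float>& pos, const std::vector<float>& vel, float dt)
{
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (pos.size() + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = std::min(pos.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                pos[i] += vel[i] * dt;   // disjoint slices, no shared writes
        });
    }
    for (auto& t : pool) t.join();
}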

I actually tried to look for some compiler benchmarks but couldn't find anything new (I only searched briefly).
As always, I'm curious when people bring up new information, but I'm also very skeptical (too many people throwing BS around), so I like seeing it confirmed from multiple sources.
So I didn't respond to that aspect of the post yet.
 
I actually tried to look for some compiler benchmarks but couldn't find anything new (I only searched briefly).
As always, I'm curious when people bring up new information, but I'm also very skeptical (too many people throwing BS around), so I like seeing it confirmed from multiple sources.
So I didn't respond to that aspect of the post yet.

https://www.phoronix.com/scan.php?page=article&item=amd-aocc-13&num=1

https://colfaxresearch.com/compiler-comparison/ (note: this is on Intel)
 
I have a question for you. When is Intel going to catch up to Intel's own six-year-old architecture?
IdiotInCharge, as your attorney I advise you not to reply to this guy - he's got you in a box.

Not really - it's an entirely different question about future products vs. the thread topic about current products. Well, as 'current' as Ryzen 3000 is, since we don't have comprehensive independent reviews.

But to answer the question: Intel has stated that they'll be using 14nm for their next round of desktop parts in late 2019 and/or 2020, which presumably means Skylake, and then in the 2020-2021 timeframe move the desktop parts to Ice Lake at 10nm or its 7nm successor (which may just be a shrunk Ice Lake).

At this point we're fairly certain that they have a solid IPC gain with Ice Lake over Skylake, mitigations included, but again we do need comprehensive independent reviews, and that's going to be difficult as the first Ice Lake parts, built on Sunny Cove cores, are coming to laptops. We should at least get an idea.
 
There's... a lot of questions there. The only real answer I can see is that 'popular processors are popular'. I definitely didn't expect to see the 9900K that high given its price.
I suspect it is because the calculations are based on 30 to 90 days of sales (it might be as little as 7 days for all we know), and since the 9900K is the newest processor released from Intel, or from any manufacturer, its sales appear high in that small window of 30 to 90 days versus being calculated over a year or more.
 
I suspect it is because the calculations are based on 30 to 90 days of sales (it might be as little as 7 days for all we know), and since the 9900K is the newest processor released from Intel, or from any manufacturer, its sales appear high in that small window of 30 to 90 days versus being calculated over a year or more.

That's as reasonable as anything really.
 
I have to agree with Armenius... the 9xxx series is almost all green (in stock) now.


Now they are, because almost no one is buying them ;)
Multiple e-tailers are showing Zen stuff on top, not just Mindfactory now... seems AMD has won a lot of mindshare recently.
 
As I see it, the CPU in 2019 is mostly irrelevant to gaming performance.

Something like 97% of all systems are going to be GPU-limited long before they ever see slowdowns due to their CPU.

So, in real-world gaming? The experience will be identical.


Amen x10.

Talk of Cyberpunk maxed out behind closed doors last week on an 8700K + TITAN RTX, 2x16GB + Samsung 960 Pro + Z370, and the framerate dipping a bit with RTX global illumination and other eye-candy goodies turned up to 11/10... Yeah, bring on that 16-core monster and Ampere on 7nm, please and thank you.
 
Amen x10.

Talk of Cyberpunk maxed out behind closed doors last week on an 8700K + TITAN RTX, 2x16GB + Samsung 960 Pro + Z370, and the framerate dipping a bit with RTX global illumination and other eye-candy goodies turned up to 11/10... Yeah, bring on that 16-core monster and Ampere on 7nm, please and thank you.

Unoptimized code is still unoptimized.
 