Intel challenges AMD: “Come beat us in Real World gaming”

I think AMD could easily win this. They said real world. Real world = chrome with 2 dozen tabs open, teamspeak/discord, outlook open, either a movie on a second screen or a streaming service going + the actual game. That is real world, or just realistic use world...
 
Lol, it's all marketing. Games should be optimised for all sorts of hardware, not only CPU-wise but GPU-wise as well.

Developers target what's available. Expecting anything to be optimized for Ryzen over Skylake is disconnected from reality. Developers are still trying to catch up to the deficiencies of Ryzen, and it's a moving target.

AMD has always been a game changer to the world.

They really haven't. They occasionally innovate, and they occasionally offer a product that stands out, but have generally been an uninspired 'me too' company. Most of their "wins" have been contingent on their competition stumbling.

And I like it that way. If AMD wasn't squeezing Intel, Intel would still be supplying us with 45nm in 2020, lol.

Intel has been competing with themselves for near a decade, so I don't buy this. Not only did they increase performance, but they kept mainstream prices sane, despite AMD abdicating from the CPU market.

Intel is not even looking at the gaming side.

Yeah... nope.

All they care is how to capture AI graphics business from Nvidia and otherwise grow in Data centers.

Intel is big enough to do this and many, many other things. Nvidia is rightly concerned, but they're not nearly as worried as AMD should be. Number three is a crappy place to compete from, and that's where AMD is headed in the near term if they don't ratchet up their GPU innovation and maintain a rapid pace of CPU development.

Games being optimised for Intel comes in handy here for Intel

Why would they be optimized for anything else?

But I like your idea: if games were AMD-optimised, where would it lead us?

Well, they'd run slower.
 
I think AMD could easily win this. They said real world. Real world = chrome with 2 dozen tabs open, teamspeak/discord, outlook open, either a movie on a second screen or a streaming service going + the actual game. That is real world, or just realistic use world...

Neither company's top CPUs are going to have a problem with this. Intel's probably less, as they have video decode hardware on CPU that AMD refuses to ship.
 
Anyone else see the new Intel ads on Twitch in the past day or so? Tag line is "Intel i7 processors for gaming" and showing a bunch of streamers playing games, saying how great the i7 is for streaming and recording your game highlights etc..
 
You do realise it's not about games being optimised for AMD/intel.

No programmer is going to spend time hand optimising game code unless there is a bottleneck that desperately needs to be fixed.

It's about compilers being aware. That takes time and engineering effort. Typically code queries the cpu to work out what extensions (such as sse/avx) it has, and runs a path that is suitable.

In most cases, it'll just go "oh you've got AVX2, great, I'll use that" rather than "oh you have an AMD Ryzen 9000 chip, let's use code for that".

The difference is if you use the Intel compiler, which, by and large, unless you have a good reason and the money to do it, you don't these days. The reason is that Intel's compiler looks at the manufacturer string, and if it doesn't find GenuineIntel, it falls back to a basic code path. The upside of the compiler is that it is the best compiler for Intel chips.

The vast majority of games use Microsoft's Visual Studio compiler or Clang/LLVM (or GCC), so this is a bit of a moot point.

Regarding IPC, since Ryzen, if you look at the architecture, AMD can do more IPC.

The AMD core can execute six micro-ops per clock while Intel's can do only four. But there is a problem with doing so many operations per clock cycle: it is not possible to execute two instructions simultaneously if the second instruction depends on the result of the first, of course. The high throughput of the processor puts an increased burden on the programmer and the compiler to avoid long dependency chains. The maximum throughput can only be obtained if there are many independent instructions that can be executed simultaneously.
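The dependency-chain point can be shown with a small sketch (illustrative only, not from any cited source): summing an array with one accumulator forms a single serial chain of adds, while splitting the work into independent accumulators exposes instruction-level parallelism that a wide core can actually use.

```c
/* Two ways to sum an array. sum_chained has one long dependency
   chain; sum_parallel keeps four independent chains in flight. */
#include <stddef.h>

double sum_chained(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];               /* each add must wait for the previous one */
    return s;
}

double sum_parallel(const double *a, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) { /* four independent adds per iteration */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)           /* leftover tail elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```

Both functions compute the same sum; the second simply gives the scheduler independent work, which is what a six-wide core needs to hit its peak throughput.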

This is where simultaneous multithreading comes in. You can run two threads in the same CPU core (this is what Intel calls hyperthreading). Each thread will then get half of the resources. If the CPU core has a higher capacity than a single thread can utilize then it makes sense to run two threads in the same core. The gain in total performance that you get from running two threads per core is much higher in the Ryzen than in Intel processors because of the higher throughput of the AMD core (except for 256-bit vector code).

The optimising that Agner is talking about there will benefit intel as well as AMD, quite frankly. It's platform agnostic, as it just means proper threading conventions.

The hardware can do more with one cycle than Intel's existing architecture; the exception is AVX2 (and it looks like they may have fixed that with Zen 2, but we'll see). The issue is care and feeding of the beast. You need to make sure all of those pipelines are fed, as much as possible, all of the time. This means multithreading, and it's why SMT delivers so much better results for AMD than it does for Intel.

As an aside, it's also why AMD worked so hard on cache in Zen+, and Zen 2, and why they have incremental increases in performance with the same underlying architecture.
 
Regarding IPC, since Ryzen, if you look at the architecture, AMD can do more IPC.

IPC on paper does not translate to real-world IPC, see: Bulldozer.

and why they have incremental increases in performance with the same underlying architecture.

Getting the rest of the core out of the way has helped. There were a lot of in-built inefficiencies with Zen / Zen+ that AMD appears to be addressing with Zen2.
 
This isn't necessarily accurate. On the OS side, the patches for vulnerability mitigation are likely in place, but not necessarily on the firmware side.


Yep, a fresh install or a stale install of Windows 10 kept off the network so it can't update. They need to specifically list the updates now. Great point, Dan.
 
ryzen said:
I think AMD could easily win this. They said real world. Real world = chrome with 2 dozen tabs open, teamspeak/discord, outlook open, either a movie on a second screen or a streaming service going + the actual game. That is real world, or just realistic use world...
They had 3200 RAM when they had it. Delidding.
 
I remember the days when the pencil trick was cutting edge. Now we are delidding.

In fairness, all our CPUs had bare dies at one point. Case in point, the Pentium III 1.0GHz and the two Athlons on the far right. The top is a Thunderbird 1.33GHz and the bottom one is an Athlon XP 2600+.

KClbhOih.jpg


I remember the days when changing the crystal was the way to overclock.... does that make me older?

I started in the mid-1990's. These were already dated, but I've actually done a few memory upgrades with SIPP modules. Back in those days, you could sometimes upgrade the cache on the motherboard. However, that required a chip puller.

Was that the AMDs? I don't remember. Jumpers were fun. Oh man, I woke up again.

Back when I started, we did everything with either jumpers or DIP switches. Overclocking was done simply by setting the motherboard for a CPU frequency that was higher than what you had. If you had the fastest CPU, you often couldn't go any higher. Of course, we had tricks for that too. We had control over the multipliers, so we could sometimes get increases on those chips as well. You could also go slightly above whatever your bus spec was, but that threw everything off. I've overclocked everything from the 286 onward. Of course, those were dated when I got started, but they were still widely used.
 

Intel is so unprepared for Gen Z. Gen Z thinks gaming and streaming are the same word. And every day another Gen Z gets old enough for their own computer.

Intel lost an entire generation.
 
In fairness, all our CPU's had bare dies at one point. Case in point, the Pentium III 1.0GHz and the two Athlon's on the far right. The top is a Thunderbird 1.33GHz and the bottom one is an Athlon XP 2600+.

View attachment 167327



I started in the mid-1990's. These were already dated, but I've actually done a few memory upgrades with SIPP modules. Back in those days, you could sometimes upgrade the cache on the motherboard. However, that required a chip puller.



Back when I started, we did everything with either jumpers or dip switches. Overclocking was done simply by setting the motherboard for a CPU frequency that was higher than what you had. If you had the fastest CPU, you often couldn't go any higher. Of course, we had tricks for that too. We had control over the multipliers so we could sometimes get increases on those chips as well. You could also go slightly above what ever your bus spec was, but that threw everything off. I've overclocked everything from the 286 onward. Of course, those were dated when I got started, but they were still widely used.

I had one of the Athlon XP 2600+ processors in your picture. I vaguely remember this as the processor where I jumped a couple of pins on the CPU socket with a wire to do some overclocking of some sort.
 
You do realise it's not about games being optimised for AMD/intel.

No programmer is going to spend time hand optimising game code unless there is a bottleneck that desperately needs to be fixed.

It's about compilers being aware. That takes time and engineering effort. Typically code queries the cpu to work out what extensions (such as sse/avx) it has, and runs a path that is suitable.

In most cases, it'll just go "oh you've got AVX2, great, I'll use that" rather than "oh you have an AMD Ryzen 9000 chip, let's use code for that".

The difference is if you use the intel compiler, which by and large unless you have a good reason and the money to do it, you don't these days. The reason for this is that intel's compiler looks at the manufacturer string then if it doesn't find GenuineIntel it uses a basic code path. The upshot of the compiler is it is the best compiler for intel chips.

The vast majority of games use Microsoft's/Visual studio's compiler or GCC (Through Clang/LLVM) so this is a bit of a moot point.

Regarding IPC, since Ryzen, if you look at the architecture, AMD can do more IPC.



The optimising that Agner is talking about there will benefit intel as well as AMD, quite frankly. It's platform agnostic, as it just means proper threading conventions.

The hardware can do more with one cycle than Intel's existing architecture, the exception is AVX2 (and it looks like they may have fixed that with Zen 2.. but we'll see). The issue is care and feeding of the beast. You need to make sure all of those pipelines are fed, as much as possible, all of the time. This means multithreading and is why SMT results in so much better results for AMD than it does intel.

As an aside, it's also why AMD worked so hard on cache in Zen+, and Zen 2, and why they have incremental increases in performance with the same underlying architecture.


All your points are correct.

I feel it is at least as much AMD's fault on this matter, for not providing an alternative or working more closely with the more generic compilers to optimize for their architecture.
When developers can get a good compiler with tons of tweaks in it, and it gives better speed than the generic ones, they are going to pick that compiler,
especially when the speed boost hits, what, 80% of the market and only sacrifices a bit of speed on the other 20%.

I would make that choice anyway. AMD is just not offering an alternative, and it's going to be a long uphill battle on this one due to market share :(

We can hope that some day GCC or another brand-neutral compiler delivers speed near ICC on most software/hardware, and then we would get a more "correct" benchmark of the CPUs.


I did try to disable the ICC CPU check "live" on Project Mercury, but I could not get it to work.
 

GamersNexus (@GamersNexus)
AMD's game streaming "benchmarks" with the 9900K were bogus and misleading. We did those tests ages ago and the 9900K is nowhere near as bad as it was painted. You can force it to be bad, but it's very forced. https://t.co/F3AJKWIM6J

How many vulnerabilities and fixes have been put in place since then that possibly affect streaming? How many did GamersNexus have in place when they tested? What streaming conditions did they use, what resolution, etc.? There are so many variables at play here.

I'm also of the opinion, just from various pieces GamersNexus has produced, that they are very biased towards Intel. So it might as well be Intel making that claim; it means about as much to me.
 
So much drama. I was told to expect nothing before the 20th of next month :/
 