"Intel's "Real World" Benchmarketing 101" by AdoredTV

Meeho · Supreme [H]ardness · Joined: Aug 16, 2010 · Messages: 5,914
Further analysis of Intel's "real world" marketing direction. Spoiler alert: it's about as real as their concern for honesty.

 
I saw the slide deck, so I'm not listening to someone who has their own issues with honesty. I didn't see anything that was dishonest in the PPT.
 
TLDW (too long, did watch)
If benchmarks don't give results in time, they're bullshit.
 
I saw the slide deck, so I'm not listening to someone who has their own issues with honesty. I didn't see anything that was dishonest in the PPT.

Comparing a 3400G to a 9100F paired with a discrete GTX 1050 isn't dishonest? It's definitely misleading, if you don't want to cross into outright dishonest territory.
 
Having done many benchmarks between Intel's CPUs and AMD's, I can tell you that Intel's stance that "benchmarks don't matter" tells you all you need to know. Intel only competes well on the gaming front, outside of a few select applications which feature specific Intel optimizations that don't scale well to higher thread count CPUs. At best, any benchmark data released by Intel is going to show Intel in the nicest possible light. At worst, those benchmarks are likely to mislead anyone who sees them.

It's marketing 101, and while most of the companies that dominate this industry have done it for as long as I can recall, Intel sets the standard for it. AMD, NVIDIA, Intel, and others have all done it in the past. This isn't exactly shocking.

Now, playing devil's advocate, Intel's stance that "benchmarks don't matter" isn't entirely wrong. Most of the time, Intel and AMD are equal across tasks that don't require the extra thread count AMD brings to the table. Conversely, there are applications where single-threaded performance makes a huge difference and Intel's clock speed advantage certainly helps; however, those applications are almost equally few. For the vast majority of use cases, you could buy either AMD or Intel and never notice the difference.

Of course, the bulk of us here are enthusiasts. It's those niche use cases and specific scenarios that make the decision for us. Sometimes it comes down to price/performance; either way, we gravitate towards the solutions we're biased towards or that work better for our intended uses.
 
I saw the slide deck, so I'm not listening to someone who has their own issues with honesty. I didn't see anything that was dishonest in the PPT.
He raised good points in this one and there is much to criticize in the slides and Intel's marketing in general lately.
 
Having done many benchmarks between Intel's CPUs and AMD's, I can tell you that Intel's stance that "benchmarks don't matter" tells you all you need to know. Intel only competes well on the gaming front, outside of a few select applications which feature specific Intel optimizations that don't scale well to higher thread count CPUs. At best, any benchmark data released by Intel is going to show Intel in the nicest possible light. At worst, those benchmarks are likely to mislead anyone who sees them.

It's marketing 101, and while most of the companies that dominate this industry have done it for as long as I can recall, Intel sets the standard for it. AMD, NVIDIA, Intel, and others have all done it in the past. This isn't exactly shocking.

Now, playing devil's advocate, Intel's stance that "benchmarks don't matter" isn't entirely wrong. Most of the time, Intel and AMD are equal across tasks that don't require the extra thread count AMD brings to the table. Conversely, there are applications where single-threaded performance makes a huge difference and Intel's clock speed advantage certainly helps; however, those applications are almost equally few. For the vast majority of use cases, you could buy either AMD or Intel and never notice the difference.

Of course, the bulk of us here are enthusiasts. It's those niche use cases and specific scenarios that make the decision for us. Sometimes it comes down to price/performance; either way, we gravitate towards the solutions we're biased towards or that work better for our intended uses.

I agree with your "benchmarks don't matter" assessment, as I've noted the same thing for a while. I've swapped between AMD and Intel several times in the past year, and in day-to-day use I don't "feel" a difference between the two in normal tasks.

What is comical about this Intel scenario is how Intel's marketing relied on benchmarks for so long, only to see the message tweaked to "real world benchmarks" and now to "benchmarks don't matter." Then with Rocket Lake, maybe benchmarks will matter again?
 
Now, playing devil's advocate, Intel's stance that "benchmarks don't matter" isn't entirely wrong. Most of the time, Intel and AMD are equal across tasks that don't require the extra thread count AMD brings to the table. Conversely, there are applications where single-threaded performance makes a huge difference and Intel's clock speed advantage certainly helps; however, those applications are almost equally few. For the vast majority of use cases, you could buy either AMD or Intel and never notice the difference.
I would even support them if they actually meant it, even if it came from blatant hypocrisy, but it's scummy and dishonest to bash benchmarks like Cinebench as not real world while promoting the Intel-backed BAPCo SYSmark as the reference real world benchmark at the same time.
 
I can tell you honestly that more than 75% of the time it doesn't matter. Out of the 600+ CPUs I have across my 6 buildings, I doubt any of them averages above 50% usage on a busy day. Outside of a half dozen that are pinned the entire day rendering student projects, the majority are just running a few Chrome tabs and an Excel workbook or two, maybe Adobe DC and Outlook along with that, but nothing that is going to push anything i3 or better above 20%. That said, my home 3900X gets used heavily, and I should have saved up a bit more for a 3950X and a better motherboard.
 
I would even support them if they actually meant it, even if it came from blatant hypocrisy, but it's scummy and dishonest to bash benchmarks like Cinebench as not real world while promoting the Intel-backed BAPCo SYSmark as the reference real world benchmark at the same time.

Agreed. Intel and others have obviously always used benchmarks to their advantage, showcasing their products' strengths and downplaying their weaknesses. However, in this particular case Intel has to resort to far more deceptive tactics to downplay those differences. Basically, it's easy to say "we are winning" when you are in the lead. It's much harder to claim it doesn't matter when you've spent most of the last three decades crying about how winning was all that mattered.
 
After watching the video, I need a hit of smack and to pass out on the streets of Edinburgh.
 
[Attached image: photo of a large 4-tile Intel Xe GPU package, held with a thumb for scale]
 
I agree with your "benchmarks don't matter" assessment, as I've noted the same thing for a while. I've swapped between AMD and Intel several times in the past year, and in day-to-day use I don't "feel" a difference between the two in normal tasks.

What is comical about this Intel scenario is how Intel's marketing relied on benchmarks for so long, only to see the message tweaked to "real world benchmarks" and now to "benchmarks don't matter." Then with Rocket Lake, maybe benchmarks will matter again?

If they are pushing this hard for "benchmarks don't matter" (can we just call it BDM?), then I don't think they're that confident they'll be overtaking AMD anytime soon.
 
Image snipped

That's a huge 4 tile GPU chip. I would be surprised if that's for us mere mortals. I'm guessing their single and possibly dual tile chips would be more consumer oriented. Not really sure what it's got to do with this thread though. Would be funny if all the outlets decided not to benchmark it since benchmarks don't matter to Intel ;). (see what I did there, I brought it back into the thread topic).
 
That's a huge 4 tile GPU chip. I would be surprised if that's for us mere mortals. I'm guessing their single and possibly dual tile chips would be more consumer oriented. Not really sure what it's got to do with this thread though. Would be funny if all the outlets decided not to benchmark it since benchmarks don't matter to Intel ;). (see what I did there, I brought it back into the thread topic).
Look at the size of his thumb and then look at the area for the die, with all the pin space and caps on each side... the die is even smaller than I thought.
 
An average thumb tip is close to 1". That thing is covered in a heat spreader, but each tile is supposedly close to 25x25mm (1x1"), which lines up with the size of his thumb. That's 2x2" total, or about 50x50mm = 2,500mm^2. Probably a little under that in reality, but likely still close to 2,000mm^2. For reference, a 2080 Ti is 31x25mm, or about 775mm^2, so roughly 1/3 the size. The 5700 XT is about 250mm^2, so around 1/8 to 1/10 the size. To me, that's pretty friggin' big. If this is on 10nm and the 2080 Ti was on a larger node, it would have a massive transistor count. Since Intel 10nm is close to TSMC 7nm, it's more comparable to the 5700 XT (in transistor density, not performance).
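For what it's worth, here's a quick back-of-the-envelope sketch of that comparison in Python. The ~25x25mm per-tile figure is just an eyeball estimate from the photo, not a confirmed spec, and the reference die sizes are approximate:

[CODE]
# Back-of-the-envelope die area comparison (estimates only).
# The 25x25mm per-tile size is the eyeball guess above, not a confirmed spec.

def area_mm2(width_mm, height_mm):
    """Die area in square millimetres."""
    return width_mm * height_mm

xe_4_tile = 4 * area_mm2(25, 25)   # ~2500 mm^2 upper bound for the 4-tile package
tu102 = area_mm2(31, 25)           # RTX 2080 Ti (TU102), ~775 mm^2
navi10 = 250                       # RX 5700 XT (Navi 10), ~250 mm^2

print(f"4-tile Xe estimate: {xe_4_tile:.0f} mm^2")
print(f"vs 2080 Ti: {xe_4_tile / tu102:.1f}x the area")   # ~3.2x
print(f"vs 5700 XT: {xe_4_tile / navi10:.1f}x the area")   # ~10x
[/CODE]

So even if the real package comes in a fair bit under the 2,500mm^2 upper bound, it's still an enormous amount of silicon next to any consumer GPU.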
 