Intel Crosses an Unacceptable Ethical Line

OK, wait. You're quoting SemiAccurate? That's the site that isn't :p. Charlie Demerjian is a nutball and lets his biases dictate a whole lot of what he writes. If he claimed the sky was blue, I'd have to go check for myself just to make sure it hadn't changed colour :p.

That aside, a tech company lied about benchmarks? Hmmm, I've never seen that happen, except for always :D.
 
Reading isn't your strong suit.
Probably a good thing when you want to follow AMD.
Obviously ethics aren't your strong suit, either. Probably a good thing when you want to follow Nvidia.
 
Slides that state benefits without backing data are usually forbidden because they invite lawsuits. Here Intel made three distinct claims for the P4800X and didn't back them up. Pulling the slides was probably right, but presenting them to the press, making those claims, and then pulling them was probably intended to make the press write the claims 'in their own words'. This is what we call the sleaziest of PR tactics: trying to get the press to write things you know you can't legally say or claim. Unethical in the extreme.

I am pretty sure there is no legal requirement about presenting claims without backing data. There might be some misleading marketing going on, but misleading and marketing are almost synonymous. So, par for the course while we wait for real independent comparisons/tests.
 
Obviously ethics aren't your strong suit, either. Probably a good thing when you want to follow Nvidia.
Haha, pot kettle.
I did knock Nvidia for the 970, and I knocked Intel for this.
You on the other hand...
Reading skillz ftw.
 
The technology itself has very interesting capabilities but isn't well suited to the chipset/driver ecosystem that Intel is pushing it into. It will get better with improved controller chips, different access patterns, higher parallelism, tighter integration with the CPU and so on. Cell phones with their shorter design time and tighter integration might work much better than the PC CPU model. Putting a controller (for Micron QuantX) into a cell phone ARM chip could be a very attractive solution. How long would it take to put a controller into an Intel CPU? Currently, the XPoint chip has to communicate with PCIe chips and drivers that treat it like a flash chip. It will be 2-4 years of optimization before we'll know how to make best use of the technology, but I think we'll see rapid improvement in the first 12-18 months.

Too bad marketing tried to shoe-horn the technology into places it doesn't quite fit and then tried to remove data that shows the lack of that good fit. Very common. Still disappointing. Being quiet, on the other hand, lets the louder mouths bury you in the market.
 
Color me shocked! Intel has been "promising" insane Optane performance going on a couple years now.

...Intel not releasing anything worth upgrading to since Sandy Bridge. Going from Core 2 Duo to Nehalem was huge, and again to Sandy Bridge. Then Ivy, Haswell, etc. *yawn*

Sandy Bridge 2600K provided around a 20% performance increase over its 4c/8t predecessor. The 7700K provides around a 25% increase over the 2600K. That's a pretty stout jump for current SB owners, IMO.
 
Sandy Bridge 2600K provided around a 20% performance increase over its 4c/8t predecessor. The 7700K provides around a 25% increase over the 2600K. That's a pretty stout jump for current SB owners, IMO.
It does in a lot of circumstances, but it also took 5 generations of processors to do it with nothing particularly compelling on the horizon.

Consider that jump from C2D to Nehalem: we're talking mid 2006 to late 2008. Even the jump from that to Sandy Bridge was early 2011. So, 2-2.5 years for major jumps in performance; definitely nothing to complain about. But a bunch of middling increases over the course of 6 years to get a similar overall increase, while doing effectively nothing for mainstream core counts since the Q6600 in 2007? Come on.
 
It does in a lot of circumstances, but it also took 5 generations of processors to do it with nothing particularly compelling on the horizon.

Consider that jump from C2D to Nehalem: we're talking mid 2006 to late 2008. Even the jump from that to Sandy Bridge was early 2011. So, 2-2.5 years for major jumps in performance; definitely nothing to complain about. But a bunch of middling increases over the course of 6 years to get a similar overall increase, while doing effectively nothing for mainstream core counts since the Q6600 in 2007? Come on.

I'll agree with that... however, SB is so efficient (especially in the power consumption and IMC departments) compared to its predecessors that it's still a viable CPU today, 6 years later. ...Especially with GPU-centric gaming (more GPU-bound than CPU-bound). So, getting a 25% performance jump on a six-year-old processor which is still plenty strong enough for 90% of the world's PC users (including gamers) is pretty damn impressive, IMO.
 
I will agree that AMD hype on the RX 480 and Bulldozer was pretty bad -- but I can see the OP's point about Intel not releasing anything worth upgrading to since Sandy Bridge. Going from Core 2 Duo to Nehalem was huge, and again to Sandy Bridge. Then Ivy, Haswell, etc. *yawn*
The RX 480, in my opinion, is a very impressive graphics card. It's less impressive in older games. The problem is that everyone insists on pitting the 8GB RX 480 against the 6GB 1060. Obviously the 1060 is faster, but the 4GB 480 is just as fast and only $200. Also, for some reason a lot of people benchmark with games well over 2 years old, and the ones that aren't use GameWorks, like Tomb Raider. Plus some websites refuse to use DX12/Vulkan, which makes the RX 480 look worse. But AMD did say the RX 480 would be like an R9 390, and so far they aren't wrong.

Bulldozer was hyped, but again, AMD wasn't wrong here either. Much like with the RyZen chips, they just selected which benchmarks to show. Obviously many people were worried about RyZen's performance in games, since AMD wasn't showing much of that before it was released. AMD showed that Bulldozer was very fast in multithreaded workloads, which it is, but neglected to mention how bad its IPC was. In fact, in games the Phenom X6 was faster.

What AMD did was selective benchmarking, which nearly everyone does. What Intel did here was flat-out lie about the performance, and Intel has been known to do this for a while. The nearest thing I can think of with AMD was the question of how many cores are in their Bulldozer CPUs. That's still up for debate, because a 'core' has technically never been formally defined by anyone. God knows Nvidia lied about the memory in their 970s, and they even lost the lawsuit over that one.
 
I'll agree with that... however, SB is so efficient (especially in the power consumption and IMC departments) compared to its predecessors that it's still a viable CPU today, 6 years later. ...Especially with GPU-centric gaming (more GPU-bound than CPU-bound). So, getting a 25% performance jump on a six-year-old processor which is still plenty strong enough for 90% of the world's PC users (including gamers) is pretty damn impressive, IMO.
Sure, but wouldn't you have rather seen a competitive industry and had 2 or 3 jumps by now? 6 or 8 cores should have been mainstream a couple of years ago (let's just ignore that useless Bulldozer mess, because having a dozen garbage cores isn't any better). Setting the standard as being sufficient for 90% of the world's PC users honestly doesn't amount to much, because the fact is that most computer users around the globe would probably still do just fine with a Q6600, plus an SSD for their main drive simply to improve general use by eliminating HDD latency.
 
Obviously ethics aren't your strong suit, either. Probably a good thing when you want to follow Nvidia.

Intel has a moral and ethical obligation to its shareholders... it isn't to you.

Problem is, people somehow assume that what they personally consider moral and ethical is universal, and fail to realize it's just their point of view.
 
Sandy Bridge 2600K provided around a 20% performance increase over its 4c/8t predecessor. The 7700K provides around a 25% increase over the 2600K. That's a pretty stout jump for current SB owners, IMO.
Well, technically the IPC difference is like 30-40%, and the default clock speed is much higher on the 7700K: a good 1.4GHz higher without an overclock. With both overclocked, though, the clocks are nearly equal, which would put the 7700K about 40% faster.

But the 2600K is how old? 2013? Haswell, Skylake, and Broadwell chips are all very close in performance. So effectively, over the past 4 years Intel has gone up by, let's say, 100%, if we don't overclock the 2600K. That Moore's Law ain't working out too well for them.

If I were a Sandy Bridge owner I would continue to use it, or buy a RyZen chip. Because whatever benefits the 7700K has, they're not worth the price difference compared to RyZen.
 
Sure, but wouldn't you have rather seen a competitive industry and had 2 or 3 jumps by now? 6 or 8 cores should have been mainstream a couple of years ago (let's just ignore that useless Bulldozer mess, because having a dozen garbage cores isn't any better). Setting the standard as being sufficient for 90% of the world's PC users honestly doesn't amount to much, because the fact is that most computer users around the globe would probably still do just fine with a Q6600, plus an SSD for their main drive simply to improve general use by eliminating HDD latency.

Well, of course I would have loved to see 2 or 3 jumps due to a competitive industry... but reality-wise, the jump from SB to KL is still pretty stout. Hell, if I weren't holding off for about a year before I plan to upgrade, I'd likely be very happy with a 7700K or 1700/1700X over my current 3770K.



Well, technically the IPC difference is like 30-40%, and the default clock speed is much higher on the 7700K: a good 1.4GHz higher without an overclock. With both overclocked, though, the clocks are nearly equal, which would put the 7700K about 40% faster.

But the 2600K is how old? 2013? Haswell, Skylake, and Broadwell chips are all very close in performance. So effectively, over the past 4 years Intel has gone up by, let's say, 100%, if we don't overclock the 2600K. That Moore's Law ain't working out too well for them.

If I were a Sandy Bridge owner I would continue to use it, or buy a RyZen chip. Because whatever benefits the 7700K has, they're not worth the price difference compared to RyZen.

40% faster theoretically... real-world testing (here's another bad-ass comparison review by [H] itself) shows around a 25% gain on average, unless I'm interpreting the results incorrectly. SB came out in 2011 ;)
 
Optane is yet another stillborn project from Intel. Anyone else remember the optical interconnect technology they wanted to roll out? Lol.

You mean this?

As far as Optane Memory goes, you guys keep locking in on the bandwidth and on why the first iteration of the M.2 part is only PCIe x2. You are all missing the key point, and that is the latency. Low latency is what makes a PC respond to input quickly. SSDs are a good gain over HDDs, and Optane is another good gain over that. The issue is that software and system architecture were not built around the possibility of having something non-volatile with such low latency, so seeing the full benefit of the tech is going to take some time. For now we have what looks to be a decent caching layer that will help HDD-only systems behave much more like SSD systems (for typical use). Power users will still go full SSD (or full Optane once that's out) anyway.

Side note: at least one of the 'removed' slides in Charlie's article is in all revisions of the press deck they sent out. I went back and confirmed it myself. Yes, Intel removed slides from the deck compared to what was briefed, but those who were on campus also observed live examples of the tech operating with real-time telemetry. I didn't bust out a calculator to confirm the results matched the slides exactly, but it was close enough for me to believe their claims were based in reality. Either way, we'll know shortly, as review embargoes will expire soon enough.
 
As much as I would like to join in ragging on AMD for using highly, highly selective performance numbers in the past, I honestly can't recall them ever flat-out lying or pulling data slides the way Intel has just done.

"premium VR experience"
 
Well, technically, those were not lies... but they lacked a lot of variables, i.e. settings used / FPS / does it make you sick? But still, you had a premium 4K experience in VR (yes, you threw up 5 times, but still...).
What Intel did was work the numbers... very different, from my point of view...

I would say Intel is VW and AMD is Apple if I had to make a comparison... VW cheated and lied, while Apple says it's magical ;)
 
Yay Intel! I got an email from Newegg a while back and made this:

[attached image]
 
Bah! I'm still rocking a 2700K @ 4.8GHz with no need to upgrade, as the performance benefits of doing so are negligible at best.

It's great. Unlike the old days, when I was upgrading every 12 months to keep up with the latest and greatest, which used to cost me a fortune, I'm now saving a fortune, as the benefit just isn't there for me to constantly upgrade anymore! Then you have the PC I use daily, which is a Dell T5500 with 24GB of RAM and dual X5675s. Gawd damn tank, that machine!

We're reaching the limits of silicon technology; I wouldn't be surprised if outright lying about performance figures increases exponentially as the limitations become more prevalent.
 
How is this news? I don't think there's a PC component company out there that hasn't lied to us yet. This is why sites like the [H] exist: to run the numbers and see. No one is holding a gun to your head to buy an Optane. If it sucks, we don't buy it and the tech dies a sad, lonely death. I've been quite happy with NVMe M.2 and current-generation SSDs, and I'm looking forward to larger-capacity SSDs and improved lifespan in the future.
 
Bulldozer? Or was that just some random reviews spouting a bunch of BS?

What are you talking about? I had a Bulldozer chip. It still works fine. Performs as expected from the reviews.

There was lots of 'hype' prior to the Bulldozer release, but I don't recall AMD ever lying about its actual performance. Their marketing people made a big deal about its overclocking ability and value, while the press overwhelmingly focused on the comparatively low IPC.

Bulldozer/Piledriver really wasn't that bad. For less than the price of an i5 you got i7-level multithreaded performance and 'good enough' single-threaded IPC for 1080p gaming, even today. My GF still runs my FX-8320 rig. AMD gambled on the world going more multi-threaded, and it hasn't panned out quite as well as they hoped.

Honestly, this sounded meh from the start, at least for consumers. I've noticed that the more hype there is, the shittier the product usually ends up being, which is why AMD keeps getting dragged into these discussions. They are the kings of hype at this point.

Either Intel really thought they were going to get better performance early on and did a terrible job of quietly dialing it back, or they lied to drum up interest.
 
Or SSD development jumped too fast and overtook Intel's plans.
 
Am I the only one who doesn't see an issue here? Someone at Intel went over the top and Intel's own people redacted it. It would be a different story if there was a commercial on TV saying all this stuff, but c'mon people.
 
Am I the only one who doesn't see an issue here? Someone at Intel went over the top and Intel's own people redacted it. It would be a different story if there was a commercial on TV saying all this stuff, but c'mon people.
They bullshitted a press release, then silently withheld major parts of it, hoping for favourable follow-on comments.
In case the only summary you read didn't include enough detail, hopefully this is a step forward.

If you can point to a press release where Intel redacted it, I will consider your post.
 
At this point with Optane, I will believe it when I see it on store shelves. Tired of waiting around for vaporware.
 
What? Are you saying that they misled the public by removing the outlandish slides from what they gave the press? Why would that be misleading? Obviously they knew the claims were outlandish, and that's why they removed them from the presentation...

So you want to see lies?

No, you either didn't read the original article or failed to comprehend what it said.

Intel held a press briefing under NDA for journalists/tech reviewers etc. to preview the product before launch, so they could write articles about the fancy new Intel product and have them ready once the blackout period ended. When journalists/reviewers requested copies of the slides for inclusion in their articles, they were given an edited slide deck missing many of the performance figures and claims presented in the original press briefing, and even when the embargo was lifted, no complete "as-presented" slide deck was provided.

So, responsible journalists/tech reviewers would have to go and re-edit their articles for accuracy, since Intel changed the information and it would be irresponsible for them to publish views based on information that even Intel would no longer back up... Or less ethical press/reviewers would just push out their articles when the embargo lifted and get published first, but with inaccurate information.

The point Charlie was trying to make this time was that Intel had never before stooped this low: omitting information from the slide decks handed out to the press that had been in the original presentation.

That's definitely not to say that other companies haven't done the same or worse, nor that Intel hasn't done worse things before. It's just another example of how some reps will go to great lengths to 'shine the turd' when their company releases an underperforming product.
 
Can somebody double-check my math, please?

Start here:
http://www.intel.com/content/www/us...orage/optane-memory/optane-32gb-m-2-80mm.html

Intel's specs for 32GB Optane M.2 80mm:
Sequential Read: 1,350 MB/second
Random Read: 240,000 IOPS (input/output operations per second)

If we assume each I/O operation transfers 4,096 bytes, then:

240,000 IOPS x 4,096 bytes = 983.0 MB/second

Max bandwidth: 8 GT/s / 8.125 bits per byte x 2 PCIe 3.0 lanes = 1,969.2 MB/second (with 128b/130b encoding)

Aggregate controller overhead in sequential READ mode: 1.0 - (1,350 / 1,969.2) = 31.4%

Aggregate controller overhead in random READ mode: 1.0 - (983.0 / 1,969.2) = 50.0%

Based on the latter 2 calculations, it appears that the on-board controller still needs some work, particularly in random mode.

I haven't bothered with any calculations for WRITEs because the stated performance is so inferior, imho:

Sequential Write: 290 MB/second
Random Write: 65,000 IOPS
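
For anyone who wants to re-run the arithmetic, here is the same calculation as a minimal Python sketch. The only inputs are the published spec numbers and the 4,096-byte transfer size assumed above; everything else is straight division:

```python
# Sanity check of the figures above: raw PCIe 3.0 x2 ceiling vs. Intel's
# published specs for the 32GB Optane M.2 part.

LANES = 2
TRANSFERS_PER_SEC = 8e9            # PCIe 3.0: 8 GT/s per lane
BITS_PER_BYTE = 8 * 130 / 128      # 8.125 effective bits/byte (128b/130b encoding)

link_mb_s = TRANSFERS_PER_SEC / BITS_PER_BYTE * LANES / 1e6   # ~1,969.2 MB/s

seq_read_mb_s = 1350.0                       # spec: sequential read
rand_read_mb_s = 240_000 * 4096 / 1e6        # spec: 240k IOPS x 4 KiB ~= 983.0 MB/s

print(f"link ceiling:        {link_mb_s:.1f} MB/s")
print(f"sequential overhead: {1 - seq_read_mb_s / link_mb_s:.1%}")   # ~31.4%
print(f"random overhead:     {1 - rand_read_mb_s / link_mb_s:.1%}")  # ~50.1%
```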

1. You can't just subtract only encoding overhead from PCIe. You also need to subtract TLP headers (~30 bytes per packet) and flow control, and account for packet size, which is variable; the overhead may be intentionally higher if Intel chose a smaller packet size to reduce latency at a slight cost to total IOPS. Real-world throughput for PCIe 3.0 x2 is likely closer to 1.6-1.7 GB/s (see the sketch after this list).
2. Any given die of memory must be addressed. There is addressing and data transfer overhead associated with each random request. Sequential transfers can benefit from streaming modes where random accesses cannot. All of this overhead is outside the bounds of the controller. Normally this is not as big an issue at higher QDs (queue depths), but this part has only two dies, meaning a more significant percentage of the overall time is spent lining up the data transfers.
3. From what I've seen so far, the Optane M.2 part is likely host-managed, meaning the controller is nothing but a bridge, so the actual 'controller' overhead would be extremely small, and the differences you see between sequential and random performance come more from overhead associated with addressing the dies (similar to how DDR memory timings work), except in this case you are only talking to one or two dies across a far more limiting (less parallel) bus.
4. Intel has likely dialed back some of the timings / bus speeds (between controller and XPoint dies) to reduce power consumption of the final part. Note the difference between the final specs and those that were previously leaked.
5. Writes are slower because they are more power-limited, and Intel likely wanted to keep this as a part that can be installed without any consideration for cooling. The P4800X (with better cooling) has very similar read/write specs.
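
To put rough numbers on point 1, here's a small sketch folding per-TLP framing into the x2 ceiling. The ~30-byte-per-packet overhead is the figure from the list above; the payload sizes are hypothetical examples, since the actual Max_Payload_Size setting depends on the platform:

```python
# Rough illustration of point 1: per-TLP framing cuts into the ~1,969 MB/s
# raw ceiling. 30 bytes/packet is the overhead figure quoted above; the
# payload sizes below are hypothetical examples, not measured values.

LINK_MB_S = 1969.2     # raw x2 ceiling after 128b/130b encoding (see above)
TLP_OVERHEAD = 30      # approx. header + CRC + framing bytes per TLP

for payload in (128, 256, 512):
    efficiency = payload / (payload + TLP_OVERHEAD)
    print(f"{payload:3d}B payload: ~{LINK_MB_S * efficiency:.0f} MB/s usable")
```

With 128-256 byte payloads (common on desktop platforms), that lands right around the 1.6-1.7 GB/s figure above, before flow-control traffic takes its own small cut.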
 
This is called marketing. Kind of like how light bulbs all still claim you will save money compared to some old incandescent bulb; you aren't even allowed to buy those incandescent bulbs anymore, so why is anyone comparing to them?
 
I think it was Light Peak I originally meant. Back when I read about it, I thought it was intended for internal connections; perhaps I misread or misunderstood.

Latency is already extremely low with storage tech, though. I haven't checked how Optane stacks up in this regard, however. But Optane is being touted as an alternative to RAM, so bandwidth sure as hell is a real concern!

Also, ZFS can seriously offset the performance limitations of spinning disks. Solid state isn't necessarily the only answer for performance.


 