Intel Bug - Did this actually tip the balance for any AMD CPUs?

GotNoRice

[H]F Junkie
Joined
Jul 11, 2001
Messages
12,006
AMD fans are pretty much going crazy over this Intel bug, like the 2nd coming of Jesus. I really would like to know, reputation aside, has this actually tipped the balance in any benchmarks?

Are there any cases where AMD CPUs that were largely shunned due to bad performance are now competitive?

Even with a worst-case 30-40% slowdown for Intel CPUs, would that result in even a single AMD CPU able to beat the top-end Intel CPUs in games?

Is there ANYTHING about this, aside from company reputation, that has the potential to make AMD competitive in the CPU market again?
 
I see this will clearly help EPYC gain some ground in the server market. That is, unless Intel finds a way to patch this with a microcode update.
 
AMD fans are pretty much going crazy over this Intel bug, like the 2nd coming of Jesus. I really would like to know, reputation aside, has this actually tipped the balance in any benchmarks?

Are there any cases where AMD CPUs that were largely shunned due to bad performance are now competitive?

Even with a worst-case 30-40% slowdown for Intel CPUs, would that result in even a single AMD CPU able to beat the top-end Intel CPUs in games?

Is there ANYTHING about this, aside from company reputation, that has the potential to make AMD competitive in the CPU market again?

Before this, AMD was already competitive. You should look at the Ryzen/Threadripper/Epyc line of chips to see what kind of improvement was already made in the latest AMD CPUs before the Intel bug. We will have to wait and see if the bug fix has a genuine impact on gaming on either side of the CPU coin.
 
AMD fans are pretty much going crazy over this Intel bug, like the 2nd coming of Jesus. I really would like to know, reputation aside, has this actually tipped the balance in any benchmarks?

Are there any cases where AMD CPUs that were largely shunned due to bad performance are now competitive?

Even with a worst-case 30-40% slowdown for Intel CPUs, would that result in even a single AMD CPU able to beat the top-end Intel CPUs in games?

Is there ANYTHING about this, aside from company reputation, that has the potential to make AMD competitive in the CPU market again?

Absolutely. If you had used Google for all of 5 minutes or less you wouldn't even need to ask this question.
 
Absolutely. If you had used Google for all of 5 minutes or less you wouldn't even need to ask this question.

So far what Google has shown me is that gaming performance is minimally impacted, which would seem to indicate that the huge lead Intel has maintained will continue to be maintained. I'm not running a database.

Just to be clear, I'm not against AMD, or inherently pro-Intel. I would love to see actual competition bring down prices on all CPUs. I'm wondering if that is right around the corner yet, or not.
 
The Intel patch severely hurts I/O performance, which is exactly the kind of task Epyc excels at. It basically gives AMD a huge opening to make serious inroads into the server market if the penalties on Intel are over 10%, which seems likely.
 
Reading the threads yesterday, you would find many users foaming at the mouth, and that's one way to react to it. IMO, AMD has demonstrated in the past that even when handed a gift horse, smooth execution is hardly an assumed success. I want to be optimistic, but there is still a long way to go.

I would love it if AMD could recover their old enterprise install base.
https://www.theregister.co.uk/2006/08/01/amd_x86_server_market_share_q2/
If these numbers are accurate, AMD once commanded 25.9% server market share in Q2 2006 - which is insane. Of course we know what happened in the 2nd half of 2006 - Core2/Conroe...
 
I wonder what the patch will do to my X58 Xeon - it'll be interesting to see how performance compares, Ryzen vs i7, after the dust settles...
 
Shouldn’t, if a business has layered its security properly.

Well, any new purchases maybe, but again, if security is implemented properly I don’t see a mass exodus.

Even before this bug hit, mass data leaks, customer data loss in the cloud, data pulled unencrypted, and tons more issues were already happening...
 
One, this stuff just got revealed publicly and a lot of testing hasn't been done (or revealed; January 9th is supposed to bring another disclosure of info on this stuff), and two, it will mostly impact specific workloads and scenarios, so no general tsunami of AMD growth. Maybe some specific company doing some specific workload may switch to AMD, but that can't be determined this early, and I doubt we'll know until AMD issues new guidance on future growth beyond their current predictions.

Maybe AMD can capitalize on this to tout the value of supporting a 2nd x86 CPU company, but they were attempting to do that already.
 
A lot of our virtual machines are I/O heavy, so our next clusters will definitely be AMD; the larger number of cores and threads already had us leaning toward AMD, and this makes it an easy decision. All I read is how the major cloud players and gaming servers are being crippled by these patches on Intel servers. I just hope we can get some AMD servers; the Supermicro ones seem tempting, but the support isn't the 4-hour support we can get from the major vendors.

I've heard it drastically slows down SSD performance.
 
I will be honest, I would love to sell my 6800K/motherboard and 7700/motherboard and just go with a Threadripper build.
 
AMD fans are pretty much going crazy over this Intel bug, like the 2nd coming of Jesus. I really would like to know, reputation aside, has this actually tipped the balance in any benchmarks?

Are there any cases where AMD CPUs that were largely shunned due to bad performance are now competitive?

Even with a worst-case 30-40% slowdown for Intel CPUs, would that result in even a single AMD CPU able to beat the top-end Intel CPUs in games?

Is there ANYTHING about this, aside from company reputation, that has the potential to make AMD competitive in the CPU market again?

For starters, there are current AMD CPUs that beat many Intel CPUs in gaming results, despite not really being able to compete against the very high-clocked parts, which is understandable: gaming is going to benefit from the 20-30% clock advantage more than it will favour cores. Yesterday, for instance, an Intel fan made the assumption that Sandy beats AMD in gaming. That may only be true if you run a 4.8GHz+ Sandy against stock Ryzen, and even then it only gets the 2600K largely on par with the Ryzen 5 1500X, which is a mainstream part while the 2600K is high end (yes, even in 2017 it remains high end, though its performance is now very old). AMD offers exceptionally good performance, seems more diverse in performance, and is far less shady than Intel's cheap tricks with Coffee Lake, especially the 8400's rather dubious boost states, often lasting only seconds and rarely ever hitting the advertised 3.8GHz under hard loading.

Another overlooked factor is the 12V results in heavy loads like Cinebench and Blender, where the 8700K uses the same wattage as a 1800X to do less work despite a 22% clock advantage (or 33% if MCE is on), meaning the 8-core AMD does more work than a massively clocked Intel part while using the same power as a 6-core Intel part. The 1950X basically used the same power as the i7 6900K in the same workload to produce the i9 7960X's result; the parallel-workload performance on AMD is unreal and efficient, while Intel is running over 500W in those loads to the 1950X's 250W.

As to the main question, I think it is too moot to tell. I think a door has opened for AMD now, though I don't think it will alter much short term; long term, with this opening, AMD may earn a lot of market in the server domain, which is a very lucrative space. For the general PC market, AMD has been selling Ryzen at a similar rate to Intel, so DT users will see a more 50-50 split for some time now that AMD is very competitive; it is, however, not a very big cash cow.

All this does, essentially, is give AMD a win on architectural design. Intel maybe got too complacent too fast and underestimated the Ryzen threat for too long.
 
Short answer: no. The bug does jack shit for gaming, except for maybe that one game you can find out of thousands.
And they are already competitive, just not at the high end of things. I expect Zen 2 will put up a more competitive fight against Icelake.
 
Short answer: no. The bug does jack shit for gaming, except for maybe that one game you can find out of thousands.
And they are already competitive, just not at the high end of things. I expect Zen 2 will put up a more competitive fight against Icelake.

Try Fallout 4 or Skyrim SE, which are heavily I/O-based and require a LOT of SSD speed (yes, SSD..) for optimal performance. Those are terribly affected.

On the topic: Ryzen is already competitive with Intel. My [email protected] has better performance than one of my [email protected] in all ways; the 5960X needs to be at 4.3GHz to offer similar performance in single-thread and multi-thread. With a 10% performance hit on Haswell (according to Intel), the difference will only increase in favor of Ryzen, and with the Ryzen refresh getting higher clocks I can see Intel in a bad position.
 
I think the Threadripper refresh this year is going to be my upgrade path. The 1800X has been great except for the early platform and chipset issues. I think this makes the Intel 16- and 18-core HEDT options look even sillier with their price and lack of ECC support.
 
I think the Threadripper refresh this year is going to be my upgrade path. The 1800X has been great except for the early platform and chipset issues. I think this makes the Intel 16- and 18-core HEDT options look even sillier with their price and lack of ECC support.

If not for the price of DDR4, I'd be looking at TR. I think I'll go 2800X or whatever the top AM4 part is unless DDR4 prices drop. If they do, maybe I'll replatform when TR2 comes out.
 
Yeah, RAM prices are really out of control. I'm hoping they come down this year.
 
This actually may help for anyone wondering (I KNOW! AMD subforum, but it's an Intel thread, so whatever). 4790K surprising results; Ryzen also tested.
 
So as long as you’re playing Destiny or watching porn on your 7900X you won’t notice a difference.

Meanwhile people like me who bought into HEDT for doing real work are getting fucked.

I use VMs (ouch) to compile code (double ouch) on a Samsung 960 Pro (triple ouch). The Meltdown and Spectre fixes are stacking to completely castrate my real world performance that I use to earn my income.

The saving grace is I bought my 7820X last month from MicroCenter and I'm still in the holiday return period. Returning it tomorrow for a Threadripper 1920X setup, even though it's a major pain to recompile all my shit for AMD.

Fuck Intel.
 
According to Google, they've only seen minimal impact from the fixes they applied.
https://www.blog.google/topics/goog...ulnerabilities-without-impacting-performance/

In September, we began deploying solutions for both Variants 1 and 3 to the production infrastructure that underpins all Google products (snipped) Thanks to extensive performance tuning work, these protections caused no perceptible impact in our cloud

Retpoline fully protects against Variant 2 without impacting customer performance on all of our platforms.


Edit: Though I've heard that Retpoline is only fully protective on Broadwell and earlier, and that you instead need to use IBRS on Skylake and newer (which also has minimal impact).
 
I have been carefully watching these updates, and the real question is how much it will hurt businesses running VDI!
 
AMD fans are pretty much going crazy over this Intel bug, like the 2nd coming of Jesus. I really would like to know, reputation aside, has this actually tipped the balance in any benchmarks?

Are there any cases where AMD CPUs that were largely shunned due to bad performance are now competitive?

Even with a worst-case 30-40% slowdown for Intel CPUs, would that result in even a single AMD CPU able to beat the top-end Intel CPUs in games?

Is there ANYTHING about this, aside from company reputation, that has the potential to make AMD competitive in the CPU market again?

First, recall that this is not an "Intel bug", but a collection of security flaws that affect virtually any modern CPU: AMD, Apple, ARM, Fujitsu, IBM, Intel, NVIDIA, ...

https://www.hardocp.com/article/2018/01/04/quick_facts_about_meltdown_spectre

Games are affected by much less than 30-40%. Usually the impact is negligible. The 7700K (fully patched) loses 1-2% in applications, and 2% in average FPS and 8% in 99th-percentile frametimes in games (results at 720p).

Expect the performance impact to be reduced over time as the patches are optimized for speed. These first patches are emergency patches to secure processors.
 
I see this will clearly help EPYC gain some ground in the server market. That is, unless Intel finds a way to patch this with a microcode update.

It is not black and white with Intel Xeon doomed and AMD EPYC safe.

There are workloads where EPYC is affected and Xeon isn't

[benchmark chart]


There are workloads where Xeon is affected and EPYC isn't

[benchmark chart]


and there are workloads where both EPYC and Xeon are unaffected

[benchmark chart]
 
For starters there are AMD current CPU's that beat many Intel CPU's in gaming results despite not really being able to compete against the very high clocked parts

Sure, you can cherry-pick and show that the R5 1600X beats a cheap Pentium G4560 in games. But this is not the kind of comparison people usually make. We compare, for instance, flagship Ryzen vs flagship Intel, or models in a similar price bracket.

Yesterday for instance an intel fan made the assumption that Sandy beats AMD in gaming, this may only be true if you run a 4.8+ Sandy against stock ryzen, however that only gets the 2600K largely on par with the Ryzen 5 1500X which is a mainstream part while the 2600K is high end (yes even in 2017 it still remains high end even though its performance is now very old)

No idea what he told you, but stock vs stock, 4C/4T Ryzen is like 4C/4T Sandy ( R3 1300X ~ i5 2500K ~ R3 1200 ) and 4C/8T Ryzen is like 4C/8T Sandy ( R5 1500X ~ i7 2700K ). Now, Sandy overclocks much higher than Ryzen (5.1GHz vs 3.9GHz). So "overclocked Sandy beats overclocked Ryzen" is probably what he told you.

AMD offer exceptionally good performance and seem more diverse in performance and far less shady than intels cheap tricks with Coffee lake, especially the 8400's rather dubious boost states often being seconds long and rarely every hitting the advertised 3.8 under hard loading.

So you insist on your old idea that running Coffee Lake with MCE enabled is "cheating benchmarks", but you continuously provide links to reviews like Guru3D that run Ryzen only with the interconnect overclocked by 20% or more. So, running Coffee Lake chips with an automated BIOS enhancement is "cheating", but running Ryzen chips with a manual overclock is not cheating, right? :rolleyes:

Another overlooked factor is the 12v results in high loads like Cinebench and Blender where the 8700K uses the same wattage as a 1800X to do less work despite a 22% clock boost (or 33% if MCE is on), meaning the 8 core AMD does more work than a massively clocked intel part while using the same power as a 6 core intel part.

Power consumption scales almost linearly with the number of cores, but roughly quadratically (cubic scaling in extreme cases) with frequency. So a higher-clocked six-core will consume more power than a lower-clocked eight-core that gives the same throughput if, and only if, everything else in the chips is the same. Note that I mention throughput because this doesn't apply to latency (which you always ignore by continuously referring to Cinebench and Blender as if those were the only workloads that exist or matter).

Moar-cores-at-lower-clocks consuming less power isn't a mystery; it is a basic law of chip design. Now, this law is valid only when "everything else in the chips is the same", which is not the case when comparing Intel to AMD.

You talk about power consumed on the 12V rail by the 8700K vs the 1800X to do a task, and by the 7960X vs the 1950X. This has a name: efficiency. The i7 is 18% more efficient than the R7 on total throughput as measured in an x264 task.
The TR is 7% more efficient than the i9 on the same task, but the 1950X and the 7980X are on par (2% gap).

As to the main question, I think it is too moot to tell. I think a door has opened for AMD now, though I don't think it will alter much short term; long term, with this opening, AMD may earn a lot of market in the server domain, which is a very lucrative space. For the general PC market, AMD has been selling Ryzen at a similar rate to Intel, so DT users will see a more 50-50 split for some time now that AMD is very competitive; it is, however, not a very big cash cow.

All this does, essentially, is give AMD a win on architectural design. Intel maybe got too complacent too fast and underestimated the Ryzen threat for too long.

Spectre and Meltdown exploit microarchitectural elements and so both attacks affect CPUs from different vendors. EPYC is affected as well.

The idea that AMD Ryzen has been selling as well as Intel is refuted by data. Ryzen got momentum at launch, but it faded away. Also, Q3 data from Intel and AMD show that most Ryzen purchases were by former AMD fans; Intel's finances were almost unaffected by Ryzen sales. Ryzen gave AMD about a 3% gain in desktop market share, but that was before the Coffee Lake launch.

Once Intel launched Coffee Lake, the only advantage of Ryzen (the 8-core being faster than 4-core Kaby Lake in throughput workloads such as Blender) vanished like hot air, and Intel recovered the #1 spot in sales and revenue (check the latest Mindfactory or Amazon sales). Current market share for AMD is back to pre-Zen levels.
 
Sure, you can cherry-pick and show that the R5 1600X beats a cheap Pentium G4560 in games. But this is not the kind of comparison people usually make. We compare, for instance, flagship Ryzen vs flagship Intel, or models in a similar price bracket.



No idea what he told you, but stock vs stock, 4C/4T Ryzen is like 4C/4T Sandy ( R3 1300X ~ i5 2500K ~ R3 1200 ) and 4C/8T Ryzen is like 4C/8T Sandy ( R5 1500X ~ i7 2700K ). Now, Sandy overclocks much higher than Ryzen (5.1GHz vs 3.9GHz). So "overclocked Sandy beats overclocked Ryzen" is probably what he told you.



So you insist on your old idea that running Coffee Lake with MCE enabled is "cheating benchmarks", but you continuously provide links to reviews like Guru3D that run Ryzen only with the interconnect overclocked by 20% or more. So, running Coffee Lake chips with an automated BIOS enhancement is "cheating", but running Ryzen chips with a manual overclock is not cheating, right? :rolleyes:



Power consumption scales almost linearly with the number of cores, but roughly quadratically (cubic scaling in extreme cases) with frequency. So a higher-clocked six-core will consume more power than a lower-clocked eight-core that gives the same throughput if, and only if, everything else in the chips is the same. Note that I mention throughput because this doesn't apply to latency (which you always ignore by continuously referring to Cinebench and Blender as if those were the only workloads that exist or matter).

Moar-cores-at-lower-clocks consuming less power isn't a mystery; it is a basic law of chip design. Now, this law is valid only when "everything else in the chips is the same", which is not the case when comparing Intel to AMD.

You talk about power consumed on the 12V rail by the 8700K vs the 1800X to do a task, and by the 7960X vs the 1950X. This has a name: efficiency. The i7 is 18% more efficient than the R7 on total throughput as measured in an x264 task.
The TR is 7% more efficient than the i9 on the same task, but the 1950X and the 7980X are on par (2% gap).



Spectre and Meltdown exploit microarchitectural elements and so both attacks affect CPUs from different vendors. EPYC is affected as well.

The idea that AMD Ryzen has been selling as well as Intel is refuted by data. Ryzen got momentum at launch, but it faded away. Also, Q3 data from Intel and AMD show that most Ryzen purchases were by former AMD fans; Intel's finances were almost unaffected by Ryzen sales. Ryzen gave AMD about a 3% gain in desktop market share, but that was before the Coffee Lake launch.

Once Intel launched Coffee Lake, the only advantage of Ryzen (the 8-core being faster than 4-core Kaby Lake in throughput workloads such as Blender) vanished like hot air, and Intel recovered the #1 spot in sales and revenue (check the latest Mindfactory or Amazon sales). Current market share for AMD is back to pre-Zen levels.

The issue of power is quite easily dealt with by Steve Burke's 8700K review

https://www.gamersnexus.net/hwrevie...vs-ryzen-streaming-gaming-overclocking/page-3

The R7 1700 shows favourable results, being the most popular of the R7 family due to its bundled cooler and high performance despite the conservative clocks. In heavily threaded stress tests it consumes less power than the 8700K, and rather surprisingly AMD's 1C power usage is impressive, especially Threadripper compared to the unmitigated disaster of the 7960X/7980XE parts that just fall off a cliff.

I will explain the concept slowly: the R7 1700 produces roughly on-par performance with the 8700K despite 23% less clock (real 8700K all-core) or 33-34% less (MCE active); even running the 1700 up to 3.8/3.9GHz to match the MCE result on the 8700K, it is doing more work per watt, at a considerably lower total system cost and on the stock cooler. In the power segment of reviews, AMD's use of the LP node has shown impressive results, and Ryzen as a whole offers high performance at low clocks. This has been verified independently but not published anywhere yet, but the 3GHz showdown shows that across the board Ryzen is not starved at low clocks, while offering more cores for less consumption, which is a massive draw for the many who are now into streaming.

The comment was that Sandy performance depends on the overclock; that is only half true. You need a 4.7GHz 2600K to roughly keep up with a stock 1500X, consuming a monstrous amount more power in the process; the 1500X also shows superior SMT performance in taxing loads like streaming, with the 2600K almost blowing out on dropped frames. I will also add that the 1500X is an entry-level part while the 2600K is high-end, top-binned silicon.

Paul Alcorn did a pretty strong round-up on streaming from entry level to enthusiast, and AMD showed up impressively in that result too. Performance-wise, AMD continues to be too close for the fanboys to be comfortable with.

Ultimately, AMD can use less clock to get their output while Intel spends power on clock speed; more work, more parallelism, and less power is more efficient work. This is also shown by how a 3GHz 8700K is about on par with a 3GHz 1600; given the price gap between the two platforms, that is a clear value/perf win. The only win is the one where more clock pulls through, and even that will have no place to hide when AMD lifts clocks. AMD has its flaws, but pretending Intel doesn't is just naive at best.

The MCE issue is a topic that I'll just let pass. Despite the evidence of how much it boosts scores, at the cost of up to 20-25% more power, and despite XMP being run up to the absolute max, it is a topic you don't understand, even though every reviewer with credibility has re-run benches or acknowledged its deceptive nature. Move along, nothing to see here; you can live on that island alone.

The RAM issue seems to be a regular thing with the 2Bit brigade: no objections to any Intel bench running DDR4-3600 or 4000, but slap a 2800 kit on Ryzen and it's outrage. Pot, kettle, black is all I have to say on this matter. Faster memory helps Intel as much as it does AMD; making the IMC faster, boo hoo.

As to market share: again, Tom's Hardware's Paul Alcorn did real investigative journalism and went into an actual study, and again you are proven wrong and your sources are either misquoted or misread; e.g. Mindfactory showed AMD outpacing Intel on units sold. Anyway, you can have a read. It seems the professionals put the number at around 10-12% share gained in consumer PC, and that excludes Black Friday and Christmas.

http://www.tomshardware.com/news/amd-ryzen-intel-desktop-pc-market-share,36152.html

There are some very interesting caveats there. While Coffee is more the in thing now, that is due to Ryzen having been available since March 2017; it is the new flavour of the month, like Zen+ will be 2-3 months from now.
 
Wow, deluded much... AMD is cheaper and less affected than my 8700K, that is all.

8700k = $379.00
1800X = $379.99

https://www.amazon.com/Best-Sellers-Computers-Accessories-Computer-CPU-Processors/zgbs/pc/229189#1

In #24 you can find three Linux-patched benches. They are server workloads because I was discussing Xeon vs EPYC, but the 8700K and the 1800X are benched as well. In some workloads the 1800X is more affected by the patches; in some workloads the 8700K is more affected; in some workloads neither of them is affected.
 
As an Amazon Associate, HardForum may earn from qualifying purchases.
The R7 1700 shows favourable results, being the most popular of the R7 family due to its bundled cooler and high performance despite the conservative clocks. In heavily threaded stress tests it consumes less power than the 8700K, and rather surprisingly AMD's 1C power usage is impressive, especially Threadripper compared to the unmitigated disaster of the 7960X/7980XE parts that just fall off a cliff.

As explained before, power consumption scales non-linearly with clocks, so reducing clocks is a well-known way to reduce power consumption and increase efficiency. The R7 1700 is 47% more efficient than the R7 1800X. Obviously the same physical law applies to Intel processors, and the i5 8400 is more efficient than the i5 8600K. No mystery here.

In fact the i5 8400 is the most efficient chip here, and it beats the R7 1700 in both 1C and all-core load. In 1C the i5 is 99% more efficient than the R7.

I will explain the concept slowly: the R7 1700 produces roughly on-par performance with the 8700K

The i7 8700K runs circles around the R7 1800X, and the 1800X is ~12% faster than the R7 1700. I will leave you to connect the dots...

The comment was that Sandy performance depends on the overclock

Answered above.

Ultimately, AMD can use less clock to get their output while Intel spends power on clock speed; more work, more parallelism, and less power is more efficient work.

The problem with the moar-cores approach is those workloads that don't scale efficiently to all cores, which is the immense majority of workloads. If everything were Cinebench and Blender, then Zen wouldn't be needed. A 16-core Jaguar @2GHz on 14nm would be much more efficient than 8 Zen cores @3.5GHz. ;)

This is also shown by how a 3GHz 8700K is about on par with a 3GHz 1600; given the price gap between the two platforms, that is a clear value/perf win.

If "about on par" means 15-20% behind, I agree. I don't know any user who purchases an i7 8700K or R5 1600 to run it at 3GHz. Maybe that is the reason the 8700K is #1 in sales on Amazon whereas the 1600 is only #8.

The MCE issue is a topic that I'll just let pass. Despite the evidence of how much it boosts scores, at the cost of up to 20-25% more power, and despite XMP being run up to the absolute max, it is a topic you don't understand, even though every reviewer with credibility has re-run benches or acknowledged its deceptive nature.

The same reviews that test Ryzen overclocked but label it as stock in graphs? The same reviews that test engineering samples of Intel chips but forget to add an "ES" label to graphs? I understand perfectly why they only want to test stock Intel vs overclocked Ryzen.

The RAM issue seems to be a regular thing with the 2Bit brigade: no objections to any Intel bench running DDR4-3600 or 4000, but slap a 2800 kit on Ryzen and it's outrage.

Keep repeating that falsity, and ignoring that the problem is not RAM but the interconnect.

As to market share: again, Tom's Hardware's Paul Alcorn did real investigative journalism and went into an actual study, and again you are proven wrong and your sources are either misquoted or misread; e.g. Mindfactory showed AMD outpacing Intel on units sold. Anyway, you can have a read. It seems the professionals put the number at around 10-12% share gained in consumer PC, and that excludes Black Friday and Christmas.

http://www.tomshardware.com/news/amd-ryzen-intel-desktop-pc-market-share,36152.html

Before the Zen launch, desktop market share was 9.9%. It was projected to increase to 12.5-12.9% for 4Q17. This is the "3% gain" provided by Zen that I quoted here in the forums, for instance in an older post from 2017.

But those projections used the trend from before the Coffee Lake launch. After the Coffee Lake launch, Intel has stolen market share back from AMD, and AMD's current market share is back to pre-Zen levels.

Try again. ;)
 
As explained before, power consumption scales non-linearly with clocks, so reducing clocks is a well-known way to reduce power consumption and increase efficiency. The R7 1700 is 47% more efficient than the R7 1800X. Obviously the same physical law applies to Intel processors, and the i5 8400 is more efficient than the i5 8600K. No mystery here.
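The non-linear scaling claim can be sketched with the standard dynamic-power relation P ≈ C·V²·f, since voltage typically has to rise with frequency. The operating points below are purely illustrative assumptions, not measured values for these chips:

```python
# Sketch of why power scales non-linearly with clocks: dynamic power
# P = C * V^2 * f, and voltage must usually rise with frequency, so
# power grows much faster than clock speed. Numbers are hypothetical.

def dynamic_power(freq_ghz, volts, cap=1.0):
    """Relative dynamic power: P = C * V^2 * f."""
    return cap * volts ** 2 * freq_ghz

# Hypothetical operating points: a 27% clock bump needing 17% more voltage.
p_low = dynamic_power(3.0, 1.20)   # ~4.32 (relative units)
p_high = dynamic_power(3.8, 1.40)  # ~7.45 (relative units)

print(f"clock +{3.8 / 3.0 - 1:.0%}, power +{p_high / p_low - 1:.0%}")
```

With these assumed points, a 27% clock increase costs roughly 72% more dynamic power, which is the same shape of trade-off the 1700 vs 1800X comparison illustrates.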

In fact the i5 8400 is the more efficient chip, and it beats the R7 1700 in both 1C and all-core loads. In 1C the i5 is 99% more efficient than the R7.



The i7 8700K runs circles around the R7 1800X, and the 1800X is ~12% faster than the R7 1700. I will leave you to connect the dots...



Answered above.



The problem with the moar-cores approach is on those workloads that don't scale efficiently to all cores, which is the immense majority of workloads. If everything were Cinebench and Blender then Zen wouldn't be needed; a 16-core Jaguar @2GHz on 14nm would be much more efficient than 8 Zen cores @3.5GHz. ;)



If "about on par" means 15-20% behind, I agree. I don't know any user who purchases an i7 8700K or R5 1600 to run it at 3GHz. Maybe that is the reason why the 8700K is #1 in sales on Amazon whereas the 1600 is only #8.



The same reviews that test RyZen overclocked but label it as stock in graphs? The same reviews that test engineering samples of Intel chips but forget to add an "ES" label to graphs? I understand perfectly why they only want to test stock Intel vs overclocked RyZen.



Keep repeating that falsity, and ignoring that the problem is not RAM but the interconnect.



Before the Zen launch, desktop marketshare was 9.9%. It was projected to have increased to 12.5-12.9% for 4Q17. This is the "3% gain" provided by Zen that I quoted here in the forums, for instance in an older post from 2017.

But those projections used the trend from before the Coffee Lake launch. After the Coffee Lake launch, Intel has taken marketshare back from AMD, and AMD's current marketshare is back to pre-Zen levels.

Try again. ;)


[Image: 8600k-blender-monkeys.png — Blender render time chart]

[Image: 8600k-blender-power.png — Blender power draw chart]


The 8400 runs a higher all-core turbo than the 1700, but the 1700 delivers 41.12% more performance at the cost of 34.23% more power used. It is also an 8C/16T part, so in doing that it adds even more positive value on efficiency over Intel, as Steve Burke concludes:

For Blender, we’re measuring the i5-8600K stock CPU at 66W down the rails, which is actually pretty competitive. The R5 1600X was running 81W, a 22% increase over the 8600K – but it also had an inversely proportional decrease in render time, so that’s not necessarily a negative. Overclocking power consumption will entirely depend on what voltage you need to keep it stable; for this AVX workload, we needed 1.4VCore with some LLC tuning, so our 8600K ended up at 144W power draw, equivalent to a stock 1950X – and the rendering performance disparity is several times improved on the 1950X. Intel’s Coffee Lake architecture doesn’t necessarily scale well in power consumption once you start pushing 5GHz. It’s sort of a psychologically nice number to hit, 5.0, but dropping to 4.9 or 4.8 would reduce power consumption significantly in cases where a Vcore below 1.4 can be sustained. The overclocked R5 1600X, for comparison, is at 106W draw, making the overclocked 8600K consume 36% more power than the R5, while performing slightly behind.

AMD beating Intel at their own game, in an Intel-favourable environment, with far superior SMT that doesn't require high clocks to offset limitations, and producing better results than Intel. Again, this is lower power and more performance, which equates to more efficiency.

Zen cores are faster than Jaguar cores; a 16C 2GHz Zen-powered unit will outperform a 16C 2GHz Jaguar across the board by a sizeable margin. It should be well known to you that architecture changes account for the biggest performance increases in clock-for-clock comparisons with former-generation parts. This is probably why Ryzen was 40-50% faster at lower clocks compared to Bulldozer, and clock-for-clock more like 70% faster.

I agree the Sandy issue was answered: it needs a massive overclock to compete with a stock entry-level Zen part. The 2600K and 2500K, $340 and $250 parts, struggle to deal with a stock 1300 and 1500X respectively; needless to say the latter work on B350 boards, while to get the best of Sandy you needed a Z68/P67 or Z77 board. This also excludes efficiency, which goes out the door for Sandy at 4.7GHz, ramping up to 200+ watts versus a 3.9GHz Zen running around 116W. Sandy is completely overwhelmed, despite the ludicrous claims otherwise.

The reason why we tested a bunch of CPUs at 3GHz is because all the parts concerned are able to achieve that, including some older Lynnfield and Nehalem parts. What 3GHz highlights is how Intel's uArch is maxed out, why no gains have been seen from Devil's Canyon to Coffee Lake, and how reliant on clockspeed Intel is. What the above numbers show is that Intel is now hitting the maximum frequency it can achieve without leakage, as shown by how badly the numbers spiral for Coffee Lake at 5GHz. This is kind of why I don't want AMD to change philosophy and go from a 1.6L hybrid turbo to a V8 just to get performance at the cost of power and efficiency.

The same reviews that test RyZen overclocked but label it as stock in graphs? The same reviews that test engineering samples of Intel chips but forget to add an "ES" label to graphs? I understand perfectly why they only want to test stock Intel vs overclocked RyZen.

Oh, you mean the ones on that graph that indicate stock or overclocked? Stop being an idiot. Just for the record, this graph was made with redone numbers after Steve Burke called out Intel's BS.

RAM improves every CPU's performance, AMD or Intel, by improving interconnect bandwidth; you just seem to only have an issue with AMD benches done on 2800-3000MHz kits while saying nothing about Intel tests done on 3400-4000 kits. If one is prepared to pay for 3200 kits then they can pay the premium for 2-5% gains in best-case scenarios, but a 2600 kit is perfectly fine for every man and his dog. Subtimings actually affect performance more than outright MHz; making a 2600 kit beat a 3000MHz kit is a trick the best in the business at memory tweaking know, and I was fortunate to be taught this by someone who was #1 on HWBOT for about 7 years.

Herp derp, the 3% gain was Q4 alone, taking the annual growth to 12.9%. Further, the article does not just apply to the target market we need the data from; in niche markets, i.e. gaming, it is expressly stated that the number is significantly higher.
 
Personally I’m planning on buying a Ryzen refresh just to annoy Juanrga and Shintai :D

Purchasing decisions have an element of subjectivity. That is the reason why I don't make hardware recommendations. I only discuss tech stuff such as efficiency, performance, and only occasionally pricing, of products, and then leave people to make their own decisions and purchase what they want.
 
[Image: 8600k-blender-monkeys.png — Blender render time chart]

[Image: 8600k-blender-power.png — Blender power draw chart]


The 8400 runs a higher all-core turbo than the 1700, but the 1700 delivers 41.12% more performance at the cost of 34.23% more power used. It is also an 8C/16T part, so in doing that it adds even more positive value on efficiency over Intel, as Steve Burke concludes.

Your own graphs show that the 1700 is only 32.7% faster than the 8400 while consuming 41.3% more power (79.95W vs 56.58W). So your own graphs show that the i5 is more efficient on a throughput workload despite having fewer cores. In other workloads that aren't Blender, the i5 is much more efficient than the 1700.

I won't even bother replying to the rest of your post. It is a waste of time.
 
Competition is always good, and this is an excellent opportunity for AMD to gain any ground that it can. This is both good and possibly very bad for AMD. Good in the sense that some people will switch to an AMD-based build for at least one generation. AMD will need to invest as much as it can into R&D so it can continue to compete with whatever Intel dishes out over the next few years. The potentially very bad side for AMD is that this whole Spectre/Meltdown situation is going to force Intel back to the drawing board. While I do not expect to see any major changes in the next few generations of Intel CPUs, whatever comes after that is what AMD will need to brace for. The only thing we know about it is the code name, Sapphire Rapids, but I am expecting that architecture to be the big jump, similar to what Conroe was in 2006.
 
Your own graphs show that the 1700 is only 32.7% faster than the 8400 while consuming 41.3% more power (79.95W vs 56.58W). So your own graphs show that the i5 is more efficient on a throughput workload despite having fewer cores. In other workloads that aren't Blender, the i5 is much more efficient than the 1700.

I won't even bother replying to the rest of your post. It is a waste of time.

Your math needs some work. The 8400 is 49% slower than the R7 1700.
 
So the 1700 is 32.7% faster than the 8400, just as stated.

You're mixing denominators to make Intel look better. Come on, man. If you're going to use the R7 as the denominator for power, you have to use it for speed as well, and vice versa.

So the R7 is 32.7% faster while the i5 uses 30% less power. Or, the i5 is 49% slower while the R7 uses 41% more power. Either way, the R7 is more efficient.

The real way is to measure watt·seconds: 80W for 28.8 seconds vs 56W for 42.8 seconds gives ~2304 Ws for the R7 and ~2397 Ws for the i5 8400, making the R7 about 4% more efficient than the i5 8400.

Not only is it 32.7% faster, it's also 4% more efficient while doing so.
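The watt·second arithmetic above is easy to check. A minimal sketch using the wattages and Blender render times quoted in the thread:

```python
# Energy = power * time, so a faster finish can offset a higher draw.
# Wattages and render times are the figures quoted in the posts above.

def energy_ws(watts, seconds):
    """Energy in watt-seconds (joules) for a run of given draw and length."""
    return watts * seconds

r7_1700 = energy_ws(80, 28.8)   # ~2304 Ws
i5_8400 = energy_ws(56, 42.8)   # ~2397 Ws

print(f"R7 1700: {r7_1700:.0f} Ws, i5-8400: {i5_8400:.0f} Ws")
print(f"R7 1700 uses {1 - r7_1700 / i5_8400:.1%} less energy per render")
```

This is why comparing power draw alone is misleading for a fixed-size job: the chip that draws more can still consume less total energy if it finishes sufficiently sooner.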
 
You... you know he's not going to stop obfuscating, right? It's just the pattern, but I'll say he's very good at talking circles around almost anything.

Yes, AMD is more than competitive now. When I do my new build in April / May, it'll be an AMD system, personally. First time in more than a decade.
 