Intel's 8th Generation Core Family - Coffee Lake (LGA 1151, 6C/12T)

Where do you expect Core i7-8700K's Turbo to land?

  • 3.8/3.9 GHz: 0 votes (0.0%)
  • 4.0/4.1 GHz: 3 votes (23.1%)
  • 4.2/4.3 GHz: 6 votes (46.2%)
  • 4.4/4.5 GHz: 3 votes (23.1%)
  • 4.6/4.7 GHz: 1 vote (7.7%)
  • Total voters: 13
  • Poll closed.
Surprise, surprise. You make the 1800X argument: a CPU for those who want to use an A320 board or simply do not like to overclock. If you haven't been paying attention lately, the 1700X/1700 are about $100 cheaper than the i7 competitors... with an affordable motherboard. Those of course match or beat the 8700K in Cinebench, Blender, Corona and others. ;)
But they suck because of bad 1080p gaming.

The R5 easily beats the 8400 in several multi-threaded benchmarks and is overclockable on a cheaper motherboard (no B360, what a joke). ;)
But they suck because of bad 1080p gaming.

Your measure of market share: Steam users and HWBot submissions, am I right?

What i7 competitors? They are not even in the same ballpark performance-wise.

And as I predicted, you made the "FX" sales pitch ;)
 
Feel free to supply the arguments.
You have no argument. Anything you say is null since you can't seem to be impartial. Your arguments have zero merit unless they can actually be verified, which is very sad and pathetic.
I have all Intel in the house, though I do have a few systems that are just AMD motherboard/CPU combos. But you know what? I can still see it from both sides.
 

So you gave no argument.

On the previous page I already gave an example:
[attached benchmark chart]


And you can get some more, even from a sponsored site.

[Benchmark charts from the sponsored site: Dishonored 2 at 1920x1080 (minimum FPS); Battlefield 1, Deus Ex, Far Cry Primal, Hitman, Rise of the Tomb Raider, and Watch Dogs 2, all at 1280x720]

Almost there, right?
 
Uh oh. Now you've done it, Hagrid.

Our friend here is whipping out the Soviet benchmark showing SB-like performance. He got ya!

Meanwhile, $250 R7 chips are able to get higher CB scores than a $400+ 8700K will ever get.
 

Let's hope all you do is run CB, just like FX users did what, encode movies and compress files? ;)

I am sure there is always another excuse why the forum hype never materializes in sales for AMD. Guess they should spend more money on engineering/QA and less on viral marketing and sponsorships.
 
Woohoo, them 1080 Ti FPS charts running at 720p. Too bad TPU was too dopey to realize the 8400 was not running at 2.8 GHz for those benches.

Now we are discrediting Cinebench?

There is a reason the majority of reviewers use it. It is very hard to measure CPU performance while decoding, running several tabs across two browser windows, plus a VM with a VPN running, with Netflix or a game playing on a second screen. Cinebench shows that the R7 can do all of this just as well or better.
 
"Reviewers", aka sponsored advertising, use Cinebench because they are paid to use Cinebench.

If they didn't use Cinebench, they'd risk not getting "review samples", aka free stuff, from the manufacturer that demands Cinebench usage.

Cinebench is a "benchmark" of an obsolete 2010 version of a GPU benchmark, running on the CPU with zero proper instruction-set support.

It "indicates" absolutely jack squat, and has always "indicated" absolutely jack squat.

Reviewers use it mostly out of sheer incompetence.

The main reason the company that made "Cinebench" never made a newer version is the obvious fact that it's a 100% GPU workload, and therefore pointless to run on a CPU once they got GPU rendering working.
 
I know that either system will run games just fine. On price/performance, AMD is kicking some booty.
 
"Reviewers", aka sponsored advertising, use Cinebench because they are paid to use Cinebench.

If they didn't use Cinebench they risk not getting "review samples", aka free stuff, from the manufacturer that demands Cinebench usage.

Cinebench is a "benchmark" of an obsolete 2010 version of a GPU benchmark running on CPU with zero proper instruction support.

It "indicated" absolutely jack squat. and has always "indicated" absolutely jack squat.

Reviewers use it mostly out of sheer incompetence.

The main reason why the company that made "Cinebench" never made a newer version is due to the obvious fact that it's a 100% GPU workload, and therefore is pointless to run on CPU once they got GPU rendering working.


Cinebenchist Pigs trying to impose their will on the people again, eh comrade!
All of these paid incompetent reviewers will go down in the Blender revolution.
 

Cinebench is truly a bad single benchmark to use as the holy grail of general CPU performance. Shintai is correct: it's really just a best-case SSE throughput benchmark. That does provide some information, but it is far too incomplete to extrapolate how a CPU does all those other things you mention. Twisty, branchy code is all over the place and stresses processors very differently, as do integer-driven workloads, things with more data dependencies, etc.

A NetBurst CPU with more cores and SSE2 would also do very well on CB, for example. I am not saying Ryzen is NetBurst, but rather that a simple benchmark like this may lead you to believe something does better overall than it actually does.
 
Lol at posting 720p benchmarks and claiming they are relevant.
You can rig any comparison if you do the classic AMD shill tactic of GPU-bottlenecking all your "CPU testing."

There's a reason why 90+% of YouTube "reviewers" are pure clickbait shill trash.

My favorite part is where the "reviewer" starts using 8x SSAA to GPU-bottleneck the tests that probably weren't giving the results they were trying to engineer.

====================================================================================

Reminds me of gems like this from a supposedly "respectable" review site:

https://www.anandtech.com/show/6985/choosing-a-gaming-cpu-at-1440p-adding-in-haswell-

Read this page for the full AMD-sponsored shilling:

https://www.anandtech.com/show/6985/choosing-a-gaming-cpu-at-1440p-adding-in-haswell-/9

AnandTech literally recommended that you purchase a faildozer when Haswell came out.
 
By your logic, consoles prove that a 2 GHz Jaguar core is "good enough" for gaming and that more CPU performance is "lol irrelevant".

But what does one expect from AMD's shill army that reposts the same talking points from AMD Shill HQ every time a thread reaches a new page.

I can't believe I just clicked a four-year-old tech page for an R7 vs. i7 debate. Fak me.

Exact same narrative AMD shills have been trying to spin since Phenom came out.
 
If a 2 GHz Jaguar core does the trick for the PS4 Pro / X1X, I would feel good about a 4 GHz Ryzen core for any gaming scenario.
 
Oh and you might want to look at a page 86 post by Kyle. And then delete your post before getting banned. I would miss you too much. :(
 

If I get banned, it simply shows exactly who pays him to keep you shills around.
 
Noted. Is there a better alternative to such scenarios? Asking sincerely.

The best case is of course to use the application(s) you will actually be using: the specific games and resolutions you play, for example, or if you do encoding/rendering, your program of choice running representative workloads. You can fall back to similar things when those aren't readily available.

To speak for myself, I build huge software projects, so when eyeballing parts for a new machine I look for benchmarks on related things. I found that folks had compiled Chromium build times broken down by processor. While I'm not building Chromium, this is still very close to my personal workload and mirrors my usage closely. Much closer than a benchmarked software renderer, of course.
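If you want to go a step further, you can simply time your own workload on the candidate machine. A minimal sketch of that idea (the "make -j16" command is just a placeholder I made up; substitute whatever build, encode, or render job is representative for you):

```python
# Minimal timing harness: run a representative command a few times, report the median.
import statistics
import subprocess
import time

def time_workload(cmd, runs=3):
    """Run `cmd` several times and return the median wall-clock time in seconds."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True)
        durations.append(time.perf_counter() - start)
    return statistics.median(durations)

if __name__ == "__main__":
    # Placeholder command; replace with your own representative workload.
    print(f"median: {time_workload('make -j16'):.1f} s")
```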
 
There is no absolute side anymore. Intel used to be the default for everything but budget builds; things have changed. It's OK to root for both sides. :)
I don't need a powerful CPU anymore since I run 4K or use the Rift. My 5960X barely breaks a sweat doing either. Hoping for some good stuff next year in 6/8-core, since that seems to be the sweet spot, though some games might be fine on 4.
 
...Also because Intel's chips are clocked a lot higher. I thought the IPC gap was closer to 10%.

The post just above yours has all chips clocked at 3.5 GHz. The performance gap in Audacity is exclusively due to IPC.

|182 − 241| / 241 => 24.48% IPC gap.

Now check this:

[Blender Gooseberry render-time chart]


i7-7800X

IPC = (1/3032.11) / (6C * 4.3GHz) = 1.278 * 10^(-5)

R7-1800X

IPC = (1/3297.1) / (8C * 3.7GHz) = 1.025 * 10^(-5)

So the IPC gap is 25% in Blender-Gooseberry. In Cinebench R15 the IPC gap is reduced to something around 8%, but Cinebench is the exception and not the rule. The average IPC gap is above 15%.
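For anyone who wants to check the arithmetic, here is a quick sketch. The scores, clocks, and core counts are the ones quoted above; the "per core per GHz" figure is only a rough proxy for IPC, not a literal instructions-per-clock count:

```python
# Sketch of the IPC-gap arithmetic above; inputs are the numbers quoted in the post.

def relative_gap(baseline, other):
    """Percentage gap of `other` relative to `baseline`."""
    return abs(baseline - other) / baseline * 100

# Audacity scores at a fixed 3.5 GHz, so the score gap is a pure IPC gap.
print(f"Audacity IPC gap: {relative_gap(241, 182):.2f}%")   # ~24.48%

# Blender Gooseberry render times (seconds), normalized per core and per GHz.
def per_core_per_ghz(render_time_s, cores, ghz):
    return (1.0 / render_time_s) / (cores * ghz)

intel = per_core_per_ghz(3032.11, 6, 4.3)   # i7-7800X
amd = per_core_per_ghz(3297.10, 8, 3.7)     # R7 1800X
print(f"Blender IPC gap: {(intel / amd - 1) * 100:.0f}%")   # ~25%
```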
 
Well, according to Shintai and juanrga, the IPC gap is greater when the software is modern enough to take advantage of the more advanced architecture and CPU extensions, such as Skylake-X's cache rework (unfortunately ruined by the low-clocked uncore). However, IMHO what they didn't explain, or perhaps disagree with, is that Ryzen actually has a better and more advanced architecture in certain areas, like caching, and its SMT implementation is superior to Intel's, while being crippled by Infinity Fabric. In a way that is plausible, because Intel has been on the same Core design forever.

The evidence shows Ryzen scoring lower than Intel in CB15, but it does really well when multi-threaded, which suggests one of two things: either Ryzen's multi-threaded workloads take full advantage of well-optimized (SSE, but no newer extensions) software, or the software isn't shuffling work between the CCXs, so the fabric penalty doesn't show. Otherwise, juanrga's example of using other software to show a larger IPC gap could just mean that the specific software is better optimized for Intel than for AMD, or that it's not as well optimized for multi-threaded workloads as CB is.

Where is it demonstrated that Ryzen has better and more advanced caching?

About SMT, I have explained plenty of times that Ryzen doesn't have a better implementation. Ryzen has lower IPC because its out-of-order logic cannot extract as much ILP as the latest Intel microarchitectures do. So more execution units on the Zen core are idle and ready to execute instructions from a second thread. That is the reason why the Zen core shows higher SMT gains when a second thread is executed. Here is a simplified illustrative diagram: core A is 33% faster when a single thread is executed, but core B matches it when two threads are executed.

[Illustrative diagram: SMT vs ILP]
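A toy version of that diagram in code, purely for illustration (the numbers are invented: six execution units per core, with core A able to extract 4 instructions per cycle of ILP from one thread versus 3 for core B):

```python
# Toy model of the SMT-vs-ILP argument above; all numbers are illustrative only.
EXEC_UNITS = 6  # execution units per core

def throughput(ilp_per_thread, threads):
    # Total issue per cycle is capped by the number of execution units.
    return min(ilp_per_thread * threads, EXEC_UNITS)

core_a_1t, core_b_1t = throughput(4, 1), throughput(3, 1)
print(f"1 thread:  A={core_a_1t}, B={core_b_1t} "
      f"(A is {core_a_1t / core_b_1t - 1:.0%} faster)")   # A is 33% faster

core_a_2t, core_b_2t = throughput(4, 2), throughput(3, 2)
print(f"2 threads: A={core_a_2t}, B={core_b_2t} "
      f"(B gains more from SMT and matches A)")            # both hit 6
```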
 
It would be nice if Intel went back to solder... They charge enough to still make plenty of $$$.
 
The R7 competes performance-wise with the upper-end i5s and lower-end i7s, and 1700 and 1700X prices have dropped below those.
Both the R5 and the i5-8400 are competitive. Only a small price drop for the R5, which was already a great deal.
The R3 destroys the i3 and still had a price drop.

Which i3 is being destroyed? The Kaby Lake i3 is able to match the R3 even in non-gaming.

[Benchmark charts]


The Coffee Lake i3 (~ KBL i5) runs circles around those R3s.


Why? Because AMD has an opportunity to pick up market share. And perhaps they are less dickish.

Despite all the hype, media manipulation (TechSpot, AnandTech, Guru3D), and false marketing, AMD got what? 2% market share? 4%?

https://www.cpubenchmark.net/market_share.html
 

Lower performance at 720p/1080p means lower performance at higher resolutions once the chip is paired with a faster GPU.

Also, the 1700 is slower than the 8700K in Cinebench, Blender, Handbrake, x264, and POV-Ray. And the 1700X depends: slower in Blender but faster in Cinebench, for instance.
 