AMD Cinebench Benchmark Demo at CES 2019 Buries the Current Intel Lineup

Hey, AdoredTV does know that extra space can be used for a GPU, right? I think he might, but I'm not totally sure he does.
 
And Intel's 10nm was due [according to Intel] to drop in 2018, and in 2017, and in 2016, and originally in what 2015?

At this point, what credence can be given to Intel's claims about when credible 10nm products, or 10nm products that can combat Ryzen, will release?

That and the 10nm they are finally coming out with is reportedly gimped in order to get any yields at all...
 
At this point, what credence can be given to Intel's claims about when credible 10nm products, or 10nm products that can combat Ryzen, will release?

They've been up front about the issues that they've had, and now they're ready to start releasing. So the same credence we've always given them.
 
They've been up front about the issues that they've had, and now they're ready to start releasing. So the same credence we've always given them.

Fall of '19 is a long ways away yet. It's even further away than a Ryzen 2 release.
 
And Intel's 10nm was due [according to Intel] to drop in 2018, and in 2017, and in 2016, and originally in what 2015?

At this point, what credence can be given to Intel's claims about when credible 10nm products, or 10nm products that can combat Ryzen, will release?

It was a huge fuck-up but it looks like it's been fixed now.
There's been a lot of speculation about why (some people think it's to do with integrating cobalt into the process, but that strongly looks like a red herring); one intriguing nugget I saw on Twitter said it was due to contact patches just not working at all. Whatever it was, it must have always looked to Intel like one small-ish fix and it's good to go after six months or so, but every time that rolled around they were never any closer, and I'm sure a bunch of people were shown the door over it.
 
AMD [is] using a new node while Intel is on the old. AMD isn't being criticized unfairly; it's the 'beats Intel' proclamation that is being criticized, given the unequal comparison.

??? AMD has credibly demo'd a chip that does exactly as described, beating Intel's current best at much less power. Which actually means thrashing Intel's best. You say this conclusion is "unfair" because... umm... Intel doesn't have a better chip to demo.

Got it.
 
??? AMD has credibly demo'd a chip that does exactly as described, beating Intel's current best at much less power. Which actually means thrashing Intel's best. You say this conclusion is "unfair" because... umm... Intel doesn't have a better chip to demo.

Got it.

They're both coming out with new chips on new nodes. The comparison is AMD's new product to Intel's old. I'm also not saying that the comparison is 'unfair', but that criticism of the comparison is fair.

What you 'got' was making stuff up.
 
We're all waiting for the day that Intel releases their 16C/32T desktop chip, and we will be waiting for quite a while; meanwhile, this year, in this dimension/universe, we will get one from AMD.
 
Hey, AdoredTV does know that extra space can be used for a GPU, right? I think he might, but I'm not totally sure he does.

No point in having a GPU there unless you can somehow manage to put a stack of HBM on there, or even eDRAM to use as a frame buffer. AMD's APUs for desktop will continue to be bandwidth-starved until they figure out how to address that issue.

Better to stick to their APU design as it is right now.
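The bandwidth-starvation point is easy to put numbers on. A rough sketch of peak DRAM bandwidth, using illustrative (assumed) configurations for dual-channel DDR4-3200 and a single HBM2 stack:

```python
def peak_bandwidth_gbs(mt_per_s, bus_bits, channels):
    """Peak bandwidth in GB/s: transfers/s * bytes per transfer * channels."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

# Dual-channel DDR4-3200, shared between the CPU cores and the iGPU.
ddr4 = peak_bandwidth_gbs(3200, 64, 2)    # ~51.2 GB/s

# One HBM2 stack at 2000 MT/s on a 1024-bit bus, dedicated to the GPU.
hbm2 = peak_bandwidth_gbs(2000, 1024, 1)  # ~256 GB/s
```

Even a single stack gives roughly five times the bandwidth of the entire shared system memory, which is the argument for putting HBM (or a large eDRAM frame buffer) next to any serious iGPU.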
 
No point in having a GPU there unless you can somehow manage to put a stack of HBM on there, or even eDRAM to use as a frame buffer. AMD's APUs for desktop will continue to be bandwidth-starved until they figure out how to address that issue.

It's not like they couldn't: they could even split the current eight-core die into a four-core or six-core die and a GPU, then put the memory on the other 'pad'. Dunno if it would be economical.
 
It's not like they couldn't: they could even split the current eight-core die into a four-core or six-core die and a GPU, then put the memory on the other 'pad'. Dunno if it would be economical.

I just thought about this. It's also possible that they won't use a chiplet at all in the APU design. They could possibly shrink everything (CPU/GPU) down to 7nm and still have space left over for eDRAM or HBM. It will be interesting to see a naked APU, to see what route AMD actually takes, when they become available for purchase.
 
I just thought about this. It's also possible that they won't use a chiplet at all in the APU design. They could possibly shrink everything (CPU/GPU) down to 7nm and still have space left over for eDRAM or HBM. It will be interesting to see a naked APU, to see what route AMD actually takes, when they become available for purchase.

Their current APU design, perhaps with a few more cores for CPU and GPU thanks to the shrink, would be fine if they could put some HBM on there. This is where HBM actually makes quite a bit of sense, versus a large GPU die, where yields of the larger interposer system may be problematic.
 
The comparison is AMD's new product to Intel's old. I'm also not saying that the comparison is 'unfair', but that criticism of the comparison is fair.
Slight correction: it's AMD's new to Intel's current. Criticizing that comparison is a bit silly because, first, there's nothing else yet to compare to. And second, yes, Intel will be releasing new chips that may, and likely will, outperform the CES demo... but so will AMD. Remember that this is an engineering sample running just 8 cores; there's absolutely no chance that what we saw was the best AMD will have to offer.
 
Interconnects are a barrier that IF helps bypass to an extent.

There will be times, at the edge of NVMe-type performance, where it will suffer, but nowhere near as much as that Intel chip did in terms of memory latencies.
 
... I watched both the video in advance of CES and the one above, ...
There were actually a couple of videos posted leading up to CES.
It really starts with the one about chiplets, posted half a year ago (2018-06-30).

Maybe. I know one thing for sure: Nvidia is still king when it comes to GPUs, so there sure as hell won't be a mere 10% difference in that arena.
Will be interesting to see how Navi compares in terms of price/performance and power/performance.
Nvidia might not be able to keep the crown for long.

I still take anything with AMD with a grain of salt because I believed them when they said Bulldozer would be great, ...
Before the Ryzen release I took AMD statements with a spoonful of salt, but now they have an architecture that will keep delivering for a while.

When AMD released those 220W Piledriver chips and power consumption shot to the moon, the argument was that power was irrelevant for high-performance desktop.
Same as Intel's 140W (TDP) Prescott and 140W (de facto) Core i9...

... that package is more expensive to manufacture ...
Is it?
Mounting two (or three) dies on the interposer is probably marginally more costly than mounting only one.
The aggregated cost of the dies should be lower or same compared to a single die.
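The claim that the aggregated chiplet cost can undercut a single big die follows from defect-limited yield. A minimal sketch using the classic Poisson yield model; the defect density and cost per mm² here are made-up figures for illustration (real numbers for the node are not public):

```python
import math

def poisson_yield(area_mm2, d0_per_cm2):
    """Fraction of defect-free dies: Y = exp(-D0 * A)."""
    return math.exp(-d0_per_cm2 * area_mm2 / 100.0)  # /100 converts mm^2 -> cm^2

def silicon_cost_per_good_die(area_mm2, d0_per_cm2, usd_per_mm2=0.10):
    """Wafer cost of one *good* die, ignoring packaging, test, and wafer-edge loss."""
    return area_mm2 * usd_per_mm2 / poisson_yield(area_mm2, d0_per_cm2)

D0 = 0.5  # defects per cm^2 -- an assumed early-node figure
monolithic = silicon_cost_per_good_die(200, D0)      # one 200 mm^2 die
chiplets   = 2 * silicon_cost_per_good_die(100, D0)  # two 100 mm^2 dies
```

Under these assumptions the two small dies come out noticeably cheaper than the one big die, and the gap widens as defect density or die area grows; the extra interposer and packaging steps eat some of that margin back, which is exactly the trade-off being debated here.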
 
I was primarily referring to the video regarding the 3000 series specs speculated to be announced at CES. But I did watch the chiplet stuff too.

There were actually a couple of videos posted leading up to CES.
It really starts with the one about chiplets, posted half a year ago (2018-06-30).

Will be interesting to see how Navi compares in terms of price/performance and power/performance.
Nvidia might not be able to keep the crown for long.

Before the Ryzen release I took AMD statements with a spoonful of salt, but now they have an architecture that will keep delivering for a while.

Same as Intel's 140W (TDP) Prescott and 140W (de facto) Core i9...

Is it?
Mounting two (or three) dies on the interposer is probably marginally more costly than mounting only one.
The aggregated cost of the dies should be lower or same compared to a single die.
 
They're both coming out with new chips on new nodes. The comparison is AMD's new product to Intel's old. I'm also not saying that the comparison is 'unfair', but that criticism of the comparison is fair.

What you 'got' was making stuff up.

Aren't all comparisons point-in-time? I mean, Intel will release something, and it'll be better... but when? AMD is much closer; things are leaking and we even have a demo of Cinebench. For a time AMD will have the upper hand on speed and core count. After that, Intel will probably claim it.

Since manufacturers rarely release things the same day, there will always be a leapfrog effect.
 
Aren't all comparisons point-in-time? I mean, Intel will release something, and it'll be better... but when? AMD is much closer; things are leaking and we even have a demo of Cinebench. For a time AMD will have the upper hand on speed and core count. After that, Intel will probably claim it.

Since manufacturers rarely release things the same day, there will always be a leapfrog effect.

It's less that this is an industry constant, and more that one side is mentioned to the exclusion of the other, either way, and without historical consideration. That is, we can be more certain that Intel will improve overall IPC, because they've been working on it longer and because they have a history of doing so, than we can be that AMD's Cinebench results are indicative of overall performance gain. And really, it's a comparison of current Skylake cores, which have been due for replacement for years, to something AMD hasn't yet released.

So it's the specifics, not the overall point, really. It's just Cinebench on an unreleased part, as tested by AMD. Once the part is released and is tested by independent houses, then the context changes and Intel's presence or absence comes into focus. And do keep in mind that this goes both ways.
 
You realize Intel has paid shills all over the net, right?

That DEFINITELY explains some of the "salt" and "naysaying".
They often are called shareholders ;)

The hilarious derailing from the fact of AMD demoing something that beats the 9999k flagship at half the power is quite telling. Some people are fucking terrified of what is coming.
Get ready to hear muh latency every few posts going forward, without any evidence of it being an issue. No one has the actual part in their hands; maybe wait and see what the tests are like from people such as [H] first before making huge assumptions about something that didn't really make any difference with Zen 1 for most workloads.
The latencies between the monolithic ring bus (many-core Intel) and inter-CCX AMD were basically the same even with average RAM; it's already been tested. In fact, intra-CCX latency in this case is quite a lot lower than on Intel's monolithic ring bus topology, but they hate discussing that ;)
 
They often are called shareholders ;)

The hilarious derailing from the fact of AMD demoing something that beats the 9999k flagship at half the power is quite telling. Some people are fucking terrified of what is coming.
Get ready to hear muh latency every few posts going forward, without any evidence of it being an issue. No one has the actual part in their hands; maybe wait and see what the tests are like from people such as [H] first before making huge assumptions about something that didn't really make any difference with Zen 1 for most workloads.
The latencies between the monolithic ring bus (many-core Intel) and inter-CCX AMD were basically the same even with average RAM; it's already been tested. In fact, intra-CCX latency in this case is quite a lot lower than on Intel's monolithic ring bus topology, but they hate discussing that ;)

SSSHHHH with all those facts in here... We have to appease the IDF! All kidding aside, I am planning on grabbing one of the 12c/24t models (the highest-performing model) unless the 16c/32t comes in under $500. If I needed the 16c model for sure, I would gladly pay even a bit more than $500, but I really don't "need" the cores, aside from mining CryptoNight-based coins. The nice rally we just had last week has already paid for a $400 SKU, so whatever I can get with that is most likely where my money goes.

I know this is the CPU thread, but I really wish that AMD had been able to get a sustained 1900~2000 MHz out of the Vega VIIs. As it stands, the fact that I have a 1750/1100 MHz V64 and my highest-performing V56 does 1800/1150 means the gap is too small for me to spend $700 on a new one, plus another $130 for a new block. But this is me we are talking about, so I may just sell a pair of my 56s and grab one anyway :-D.
 
They often are called shareholders ;)

The hilarious derailing from the fact of AMD demoing something that beats the 9999k flagship at half the power is quite telling. Some people are fucking terrified of what is coming.
Get ready to hear muh latency every few posts going forward, without any evidence of it being an issue. No one has the actual part in their hands; maybe wait and see what the tests are like from people such as [H] first before making huge assumptions about something that didn't really make any difference with Zen 1 for most workloads.
The latencies between the monolithic ring bus (many-core Intel) and inter-CCX AMD were basically the same even with average RAM; it's already been tested. In fact, intra-CCX latency in this case is quite a lot lower than on Intel's monolithic ring bus topology, but they hate discussing that ;)

About latency: there is a nice comparison that equalizes ring bus and IF frequencies to allow an academic, better apples-to-apples inspection of IPC. It also clears up some misconceptions about the SMT implementation versus HT.



 
I am excited to see 7nm, especially the power-savings-to-performance ratio. I've been running my Threadripper 2950X so hard doing video work that I've actually seen a wave in my power bill. I'm not comparing this to mining with 50 GPUs, of course, lol.

But nonetheless I am excited to see this process shrink, and I am completely ignoring the political commentary about it.

I can't wait to see what a 3950X or higher will do for me and many others down the road. Hell, even an 8-core 7nm part will probably run circles around my 16-core Threadripper, core for core.

Exciting year.
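The "wave in the power bill" mentioned above is easy to ballpark. A sketch with assumed numbers; the system draw, daily render hours, and electricity rate below are all hypothetical:

```python
def monthly_energy_cost_usd(watts, hours_per_day, usd_per_kwh=0.13, days=30):
    """Electricity cost of a sustained load over a month."""
    kwh = watts / 1000.0 * hours_per_day * days
    return kwh * usd_per_kwh

current = monthly_energy_cost_usd(250, 8)  # 2950X system under render load (assumed draw)
shrunk  = monthly_energy_cost_usd(120, 8)  # hypothetical 7nm part doing the same work
savings = current - shrunk
```

At these assumed figures, the shrink saves a few dollars a month per machine; for a render farm or a mining rack, that difference multiplies quickly.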
 
... we can be more certain that Intel will improve overall IPC because they've been working on it longer and because they have a history of doing so, ...
I'd rather think that since Intel's been increasing their IPC for so long they're now at the point of diminishing returns.
I don't expect anything significant there until they take a leap away from the Core architecture.
 
I'd rather think that since Intel's been increasing their IPC for so long they're now at the point of diminishing returns.
I don't expect anything significant there until they take a leap away from the Core architecture.

I agree; Intel is going to need a whole new architecture to make leaps and bounds again. (The Pentium III architecture is getting long in the tooth, more or less.)
 
I'd rather think that since Intel's been increasing their IPC for so long they're now at the point of diminishing returns.
I don't expect anything significant there until they take a leap away from the Core architecture.

That's what they have coming. Meanwhile, AMD just caught up to Skylake...
 
We don't have any more evidence of Ryzen 3 either ;)

Really? Of course we have evidence of Ryzen 2, but 3 does not yet exist. (Ryzen 3000 series or Ryzen 2, same thing.) On the other hand, at this time, we have zero evidence of anything that Intel would be offering. Perhaps being smacked around by AMD will be their true wake-up call, but we will see.
 