Der8auer Delids 9900k and Investigates TIM

I wouldn't be too happy sanding 2mm from the die; I'd be worried about going right through to the silicon.
 
Just kidding.... But it's fun to kick sand at Intel and Ngreedia, both companies have proven to me time and time again they don't give a flying fuck about the end user. Intel cares about DELL, Lenovo, etc (bulk sales). And if you pre-ordered anything from Intel or NVIDIA, you get what you deserve. Intel should rename it the 9900k FX.

:ROFLMAO::ROFLMAO::ROFLMAO:

If you think a company cares about you, then you have been personally and absolutely swindled. You dislike their corporate tactics, but they still lead in innovation, without respect to your feelings.
 
Yup, shorter frametimes at 1080P, with what is most likely a 1080Ti.

Switch that to 1440P or 4K and that frametime difference gets thrown out the window.

Not really?

And when you upgrade GPUs and new games come out, there it is again!

My God people have zero understanding of this stuff. Go read what Kyle has written about testing at 1080p. And if you're still confused, read it again.
 
so can these be easily delidded using the same procedure, or is the solder going to stick to the die without using that special tool der8auer had?
 
Not really?

And when you upgrade GPUs and new games come out, there it is again!

My God people have zero understanding of this stuff. Go read what Kyle has written about testing at 1080p. And if you're still confused, read it again.

You mean the part where he says that 1080p is typically CPU bottlenecked? That's nothing new.

It's also not news that at 1440p, the usual bottleneck is the GPU.

The frametime review at TechReport is purely 1080p performance, where the cards are stacked heavily against Ryzen.

Is that fair? Well, it depends. From a purely technical perspective, it's absolutely fair. From a real game experience, it hardly matters.

If you're dropping $530 on a CPU and $600-700 on a GPU and care about every bit of performance at 1080p, you'd be playing those esports games where a 144Hz monitor isn't a baseline but a requirement.

For all others, 1440p is probably where such a setup is going to be running. And while the 9900K certainly has Ryzen beat, it barely beats out other high-end Intel CPUs, or even loses to them (8700K, 9600K, 9700K), at both 1080p and 1440p.

At 1440p, the frametime difference between Ryzen and Intel drastically shrinks as most games become GPU bound.



That being said, you are correct in saying you could just upgrade your GPU until you hit a CPU bottleneck. However, you (and I) are already running 1080 Tis. The only way to push that harder is to go with an RTX 2080 Ti, and for a person in that kind of performance bracket the 9900K is an absolute no-brainer. But if you're even slightly worried about the value proposition and you're only interested in gaming, then the $100-200 premium over other CPUs (2700X included) is not going to magically make a $600 GPU give you a much better experience.
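A toy way to see why the gap closes at higher resolution (illustrative numbers only, not benchmark data): treat each frame as limited by whichever of the CPU or GPU takes longer to do its part.

```python
# Toy bottleneck model: frame time ~= max(CPU frame time, GPU frame time).
# All per-frame times below are made-up placeholders, not measurements.
def frame_time_ms(cpu_ms: float, gpu_ms: float) -> float:
    return max(cpu_ms, gpu_ms)

cpus = {"faster CPU": 5.5, "slower CPU": 7.0}   # hypothetical ms per frame
gpus = {"1080p": 6.0, "1440p": 11.0}            # hypothetical ms per frame on one card

for res, gpu_ms in gpus.items():
    for cpu, cpu_ms in cpus.items():
        ft = frame_time_ms(cpu_ms, gpu_ms)
        print(f"{res} / {cpu}: {ft:.1f} ms (~{1000 / ft:.0f} fps)")
# At 1080p the CPU difference shows up; at 1440p both CPUs land on the same ~11 ms.
```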
 
From a real game experience, it hardly matters.

I'm going to agree with the majority of your post, I just want to clarify something a bit: I'm not looking at the 2080Ti, but what comes after that. People keep CPUs for longer than ever, and that's part of my point. You're getting two extra cores with class-leading per-core performance, and yeah, it costs a few bucks extra- but you also don't need to upgrade it as soon.
 
Was the silicon die thicker prior to the move to TIM? Not sure if that can be measured - TIM came a long time ago and CPU silicon has changed - but it would be interesting.
 
I'm going to agree with the majority of your post, I just want to clarify something a bit: I'm not looking at the 2080Ti, but what comes after that. People keep CPUs for longer than ever, and that's part of my point. You're getting two extra cores with class-leading per-core performance, and yeah, it costs a few bucks extra- but you also don't need to upgrade it as soon.

Very true, especially since Sandy Bridge.

From the future proofing perspective, I agree with the logic behind your reasoning.

My own experience has taught me not to future-proof beyond what I see as the life of the system (CPU+GPU, ~4 years). Any major upgrade planned has to be done within ~2 years, because after that it's hard to predict; this is why I went with a 1080 Ti instead of a 1060.
 
My own experience has taught me not to future-proof beyond what I see as the life of the system (CPU+GPU, ~4 years).

I don't inherently disagree- this is why I made the mistake of getting a 2500k instead of a 2600k! Damn thing ran a GTX570 at the start, and ended with a pair of GTX970s...

The main reason the 9900K makes sense is that it really represents a pretty hard upper limit. At best, we might get a few more MHz, hopefully with a few more cores, but it's simply fast enough today that it should be good for multiple GPU upgrades. At the least, if you had a 2080 Ti, I'd expect it to still be good for that card's successor. That's why I see it not necessarily as a great immediate value, but perhaps not nearly as bad as a straight price/performance analysis would suggest.

Beyond that, we are certainly looking at unpredictability, and pointedly, the potential of new computing paradigms.
 
Get access to a surface grinder
Build a CPU mounting fixture
Sacrifice a CPU to find the correct depth of cut
Offer service to Silicon Lottery
Profit
I could do this. I can dial in .001" increments. hmm......
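For a sense of scale (purely illustrative numbers, not figures from the video): one 0.001" dial increment is roughly 25 µm, so removing a hypothetical 0.15 mm of material would only take a handful of passes.

```python
# Rough arithmetic for grinder passes; the 0.15 mm target is an assumed
# figure for illustration, not der8auer's measured removal depth.
MM_PER_INCH = 25.4

pass_depth_mm = 0.001 * MM_PER_INCH   # one dial increment ~= 0.0254 mm
target_removal_mm = 0.15              # hypothetical material to remove

passes = target_removal_mm / pass_depth_mm
print(f"{pass_depth_mm:.4f} mm per pass -> about {passes:.1f} passes")
# 0.0254 mm per pass -> about 5.9 passes
```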
 
Intel isn't going to waste silicon or engineering hours on something you don't need. The 9900K gets hot as hell under the lid, especially overclocked, as the reviews show. Perhaps the material is needed to limit the damage to the CPU over time.

Adding insulation to 'limit the damage' from heat? What a load of bullshit.

This is Intel, they made it thicker so they can artificially squeeze out some additional performance from the next refresh.
 
I don't inherently disagree- this is why I made the mistake of getting a 2500k instead of a 2600k! Damn thing ran a GTX570 at the start, and ended with a pair of GTX970s...

It's an easy mistake to make, especially if you're trying to get the most bang for your buck. I had a friend build an E8400 over a Q6600. He eventually moved up to a 3570K and played tons of BF, where the newer titles like more threads; had he gone with a 3770K instead, he would have been in a better place and probably would not have upgraded.

In the end he threw that out and bought a 7700K, literally 3 months before the 8700K was released, by which point Ryzen had already been around for several months.
It was pretty clear the writing was on the wall that Intel had to release more cores, and that they had the capability to do so...

I currently have "2 bets" rolling with 1 year remaining.

1. Two years after my 1080 Ti purchase, THERE WILL NOT be a $300 card that performs at GTX 1080 level
2. Two years after my 1700 purchase, THERE WILL BE an AM4 processor that fits my X370 and is "40% faster" than first-gen Ryzen @ 3.7GHz for $300

I already failed one bet, that was it would take Intel ~1 year to respond to Ryzen, they managed it in <6 months.
 
I already failed one bet, that was it would take Intel ~1 year to respond to Ryzen, they managed it in <6 months.

I failed this one too- mostly because Intel failed themselves, and I didn't expect them to tack more Skylake cores together on 14nm to make the 8- and then 9-series. They've honestly exceeded what I expected would be possible, but of course so has AMD with their first Zen arch.
 
So Intel's new chip runs hot, and even though they went back to solder, it still needs a delid, liquid metal, and now sanding of the die and IHS?

Well, ok. I guess if you are running 1080p you can squeeze out a few more fps going Intel, if you can get it to overclock and not throttle. At 1440p and above you might as well save money and get an AMD.
I will wait for AMD's new CPU to come out and probably switch over, even though this 5960X is probably good enough.
 
So Intel's new chip runs hot, and even though they went back to solder, it still needs a delid, liquid metal, and now sanding of the die and IHS?

According to GN, their results are LM > Solder > HQ TIM. Their TG Cryonaut is worse than the solder.
That being said, it appears they didn't sand down the die like der8auer, which could improve their results.
But if you're going to sand down the die you might as well use LM anyways.

So basically unless you're going to move to LM, there's no point delidding.
 
You'd be very likely to get 5.0GHz on eight hyper-threaded cores, perhaps with a small AVX offset. I can't support the claim that it needs anything, given that it's at the 'edge' already.
At the edge of its thermal capacity?
 
To the best of my knowledge, the topmost silicon layer of these reverse-package chips is usually known as the de-stress layer. It is there to mechanically cope with differential thermal expansion under load. If this layer isn't specified correctly in thickness, the silicon die can eventually crack, detach, or delaminate internally. I remember this being an issue with some earlier generations of GPUs, like the 9000-series from ATI, where the GPUs would eventually delaminate internally if bigger-than-original coolers with higher clamping forces were fitted by users.

I wouldn't be too hasty to sand off the top layers, unless it is for experimental purposes, as Der8auer is doing. I suspect Intel put that thicker PCB and de-stress layer on top because the elongated eight-core die is particularly prone to thermal stresses.
 
To the best of my knowledge, the topmost silicon layer of these reverse-package chips is usually known as the de-stress layer. It is there to mechanically cope with differential thermal expansion under load. If this layer isn't specified correctly in thickness, the silicon die can eventually crack, detach, or delaminate internally. I remember this being an issue with some earlier generations of GPUs, like the 9000-series from ATI, where the GPUs would eventually delaminate internally if bigger-than-original coolers with higher clamping forces were fitted by users.

I wouldn't be too hasty to sand off the top layers, unless it is for experimental purposes, as Der8auer is doing. I suspect Intel put that thicker PCB and de-stress layer on top because the elongated eight-core die is particularly prone to thermal stresses.


Elongated. Lol
 
With the thicker silicon and die, I wonder if direct-die cooling will rear its head again. If the delidding and sanding is as straightforward as it looks in the video, I could see practicing on an i3 first before breaking apart the i9.
 
That seems like a viable explanation, too. I don't think this iteration had a die shrink, though.

Right, but since the last soldered IHS (7 years ago?) it's shrunk considerably. The heat needed to solder the IHS to the silicon may damage the current CPU due to the smaller connections, so they may have added a thicker layer of silicon to insulate against that.
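If the extra thickness really is just more silicon, a crude one-dimensional conduction estimate suggests it only costs a few degrees on its own (all numbers below are assumptions for illustration, not measured 9900K figures):

```python
# 1-D conduction estimate: delta_T = power * thickness / (k * area).
# k, area and power are assumed values, not official 9900K specifications.
k_si = 150.0        # W/(m*K), rough thermal conductivity of bulk silicon
area_m2 = 178e-6    # assumed ~178 mm^2 die area
power_w = 200.0     # assumed package power under a heavy overclock

for extra_um in (100, 200, 400):   # hypothetical extra die thickness in micrometres
    delta_t = power_w * (extra_um * 1e-6) / (k_si * area_m2)
    print(f"+{extra_um} um of silicon -> ~{delta_t:.1f} K extra temperature rise")
```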
 
I've only ever lapped an IHS; don't think I would have the balls to lap the chip itself. Fascinating video. Waiting for Zen 2 in Q2 of 2019.
 
Was the silicon die thicker prior to the move to TIM? Not sure if that can be measured - TIM came a long time ago and CPU silicon has changed - but it would be interesting.

Usually there's a step in there where they thin the die from the back, to improve thermal transfer or to add more circuitry.
They obviously skipped that step; I'd say this is a cost-saving measure, more than anything.

I would guarantee the 28 core parts would have to be thinned, or they would probably melt.

https://en.wikipedia.org/wiki/Wafer_backgrinding

250W is getting close to the edge for that amount of surface area; look at high power SMT transistors; those are about the highest power density you see, and 300W ones are about the size of that die.

There's a limit to how much power you can move at a given temperature difference; to do better, the water needs to be chilled, or you need a freon-based cooling solution.

-20 degree salt water is probably a bad idea for PC based cooling, lol.
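As a sanity check on that 250W figure, here's the average areal power density, assuming a die area of roughly 178 mm² (an estimate, not an official number):

```python
# Average areal power density; the die area is an assumed estimate.
power_w = 250.0
die_area_mm2 = 178.0

print(f"~{power_w / die_area_mm2:.2f} W/mm^2 averaged over the die")
# Hot spots in the cores will run well above this average.
```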


With the thicker silicon and die, I wonder if direct-die cooling will rear its head again. If the delidding and sanding is as straightforward as it looks in the video, I could see practicing on an i3 first before breaking apart the i9.

I don't understand why you'd put the IHS back on at all. I'd put the block on the bare die, if I'm going to the trouble of removing it.
You do have to make sure there's even pressure on the die, and sometimes shims are used.
Silver shims aren't that expensive. :)

It never caused problems with the Barton-core processors... (lol. You could crack the corners off the die by being careless.)
 
So instead of re-tooling the IHS equipment to account for the difference in thickness between their toothpaste TIM and solder, they increased the thickness of the die?

Seems legit.

More likely, they did this on purpose to limit overclockability so they can release an "improved" version down the road without really having to change anything at all.

Just Intel being Intel.
 
Adding insulation to 'limit the damage' from heat? What a load of bullshit.

This is Intel, they made it thicker so they can artificially squeeze out some additional performance from the next refresh.
This architecture is at its end. Intel has pulled out all the stops to give it one last hurrah (the hurrah that should have been the 8086K, in my view), but really it's done. You can see how poorly it's scaling now and how much power it needs to get there. It has vulnerabilities they're never going to fix in hardware. Ringbus is awesome, but it's showing cracks, too. They're looking forward to XYZ-Lake in H2 2019 and beyond with major changes. This thing simply can't be refreshed any more.
 
Indeed, it is the CPU die itself that he sanded. He insists that the circuits are on the bottom of the silicon, so taking some off the top won't kill it.

Yeah, no thanks. Exposing the die on my GPU and cleaning it (VERY carefully) with an alcohol-soaked Q-tip to install a waterblock is about as [H]ard as I get. Delidding I may have considered, if my chip of choice needed it (Ryzen 2700X, it doesn't), but I will be damned if I am going to take sandpaper to a CPU die to try and fix something Intel should have fixed in the factory! Besides, in another 6 months or so, you will have chips that soundly beat Intel's best, even with extreme idiocy like this.
 
I like reading about the extreme community doing this. Kudos to Der8auer and that community for going this far to investigate it.


For me, I'm much too lazy to delid. But I sure like reading about it. Same thing with race cars - I sure like seeing horsepower in action, but it's nothing I'll ever personally care to afford.
 