AnandTech and Gamers Nexus 11700K Reviews

Also, let's be clear: admittedly it required refinement, but Jim Keller helped them develop the Zen core, which has been their saving grace. Without Zen, AMD would be in serious trouble.
No argument from me on Jim Keller's impact.
However, it was much more than Jim.
The Zen team is well run.
Dr. Su has instilled a discipline of excellence and really dialed down the level of arrogance.

I have no doubt Intel can do the same but it's going to take some work.
 
I'm not saying Quicksync is niche. I'm saying getting an 11700K just for that feature is, if it isn't improved that much over the current generation, or if the lower-end lineup can match it.
My point is that Quicksync can be a reason to buy Intel, full stop. No matter which level of CPU you otherwise want. There are reasons to have a high end CPU and also have Quicksync.
 
Almost anything handles hardware decoding now; unless you're building SFF with just an iGPU, decoding is easy. Encoding is hard, and yes, Quicksync rocks there (AMD has a good H.265 encoder, but their H.264 one blows, while NVENC is the opposite: great at H.264, lousy at H.265, at least it was, I haven't checked recently) and is big for streaming and the like. It all depends on what you want to accomplish. Rocket Lake isn't bad on its own, but compared to the competition it has limited compelling features, unless you already have Z490 or need Intel for some reason.
H.265 from the NVENC hardware on Turing and Ampere (they both have the exact same NVENC) is quite good, if you are using software which exposes certain options (the same can be said for their H.264 encoding). It could still be better, but I don't really have any complaints, and it does give better results than their H.264 encoder.

To get the best from NVENC, turn on spatial AQ and set its strength to 15 (the maximum), and temporal AQ needs to be turned on (it has no additional options). For streaming, OBS exposes these options with the StreamFX plugin. For encoding, use StaxRip for these options. Or FFmpeg, if you are savvy with manually setting all of that stuff.
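For the FFmpeg route, this is roughly what those settings look like (a hedged sketch driving FFmpeg from Python; the file names are placeholders, and the exact NVENC flag spellings can vary between FFmpeg builds):

```python
# Sketch: FFmpeg's hevc_nvenc encoder with spatial + temporal AQ, as above.
# Assumes an FFmpeg build with NVENC support and an NVIDIA Turing/Ampere GPU.
# Flag spellings may vary slightly by FFmpeg version; file names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "input.mkv",        # source file (placeholder)
    "-c:v", "hevc_nvenc",     # NVIDIA H.265 hardware encoder
    "-rc", "vbr",             # variable-bitrate rate control
    "-cq", "24",              # constant-quality target (lower = better)
    "-spatial_aq", "1",       # enable spatial adaptive quantization
    "-aq-strength", "15",     # spatial AQ strength, 1-15 (15 = max)
    "-temporal_aq", "1",      # enable temporal AQ (no further options)
    "-c:a", "copy",           # pass the audio through untouched
    "output.mkv",             # destination (placeholder)
], check=True)
```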
 
Well, right now the leaked price of the 11700KF is $5 more than the 5800X. Based on this review, and assuming this review represents what we'll see with more finalized BIOSes, drivers, etc., it seems to make little sense to build a new system around any i7 (and even less the i9) part without an IGP: you get essentially equal or better performance at a lower power draw by going AMD. If you want the IGP, you're looking at $484, so essentially $30 for a cheap video card's worth of graphics (not unreasonable). But the pricing... you're getting to the point where you have to be a die-hard Intel fan to purposely go out and buy Intel this generation at the prices they are asking.

If the 11700 (non-K) at $389 does what the last generation did with Turbo Boost, MCE, and BCLK trickery to get near 5 GHz all-core, that might be the best deal for an 8-core RKL part. At that point, it's at least a $60 cost savings over AMD.

Also, below $300, AMD has nothing this generation (yet). So the 6C/12T variants might very well be good buys in that space, especially closer to the $200 mark. The previous 10th generation might also end up being the biggest competitor until the supply dries up. If I had to buy an Intel CPU right now, I'd get the i9-10850K on sale.

It's sad that we're talking about Intel the way we used to talk about AMD as the budget alternative. Then again, this is based on the leaked pricing I saw here.
Definitely a good point about the 6-core Rocket Lakes coming in under the current price of a 5600X. I mean, AMD still has Zen 2, but even a locked non-K Rocket Lake under $250 should be a compelling buy vs. a $200 Zen 2 Ryzen 3600 or whatever a 3600X costs.

Basically, Zen 3 performs like Comet Lake with two more cores. So a 5600X (6 cores) performs more or less like a 10700K (8 cores), and a 5800X (8 cores) performs like a 10900K (10 cores).

Rocket Lake gains back about 1 core's worth of multi-threading performance. So a 6 core Rocket Lake should perform like a theoretical 7 core Comet Lake. And an 8 core Rocket Lake should be like a theoretical 9 core Comet Lake.
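If it helps, here's that back-of-the-envelope logic as a toy calculation (the per-core ratios are assumptions pulled from the core-equivalence argument above, not measured numbers):

```python
# Toy model of the "cores x per-core speed" equivalence argued above.
# The per-core ratios are assumptions, not benchmark results.

def cml_core_equivalents(cores: int, per_core_vs_cml: float) -> float:
    """Multi-threaded throughput expressed in Comet Lake core equivalents."""
    return cores * per_core_vs_cml

ZEN3 = 8 / 6   # ~1.33x per core: 6 Zen 3 cores ~ 8 Comet Lake cores
RKL  = 7 / 6   # ~1.17x per core: 6 Rocket Lake cores ~ 7 Comet Lake cores

print(cml_core_equivalents(6, ZEN3))  # 5600X  -> ~8,   like a 10700K
print(cml_core_equivalents(8, ZEN3))  # 5800X  -> ~10.7, roughly a 10900K
print(cml_core_equivalents(6, RKL))   # 6C RKL -> ~7,   a "7-core Comet Lake"
print(cml_core_equivalents(8, RKL))   # 8C RKL -> ~9.3, a "9-core Comet Lake"
```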
 
H.265 from the NVENC hardware on Turing and Ampere (they both have the exact same NVENC) is quite good, if you are using software which exposes certain options (the same can be said for their H.264 encoding). It could still be better, but I don't really have any complaints, and it does give better results than their H.264 encoder.

To get the best from NVENC, turn on spatial AQ and set its strength to 15 (the maximum), and temporal AQ needs to be turned on (it has no additional options). For streaming, OBS exposes these options with the StreamFX plugin. For encoding, use StaxRip for these options. Or FFmpeg, if you are savvy with manually setting all of that stuff.
I don't stream; I just transcode for various things at high speed :) And run virtual desktops with the GRID versions of the cards.
 
Also, let's be clear: admittedly it required refinement, but Jim Keller helped them develop the Zen core, which has been their saving grace. Without Zen, AMD would be in serious trouble.

We don't have access to a timeline where Jim Keller or Lisa Su did not work for AMD, so it's tough to say what would have been released without their leadership. I'm glad they did achieve success. Now Intel has to pull their head out so we can return to a more competitive environment, particularly in desktop.
 
Why not just choose a 10700K and get identical performance for a much, much lower price?
The 10700K is not a modern CPU. It lacks features like PCIe 4.0. Also it performs worse in applications than the 11700K (CB20 1T: ~500 vs ~600, CB20 nT: ~5000 vs ~5700).
If you're willing to spend that much to get the 11700k for encoding....why not get a 5800X instead?
That is what I meant: you now have the option of Intel again at a roughly comparable price (at Mindfactory it is 439 EUR for the 5800X and 459 EUR for the 11700K). Those who prefer Intel can now buy without major sacrifices.
Vega's turned out to be a brilliant development that's contributed hugely to AMD. It's enabled a bunch of APUs and was a genuine success on its own.

Unless this goes somewhere, it's not comparable to an AMD development; it's just a rehash of Prescott.
I'm not talking down on Vega. I am well aware that Vega is what brought AMD back into play. But I feel that AMD fails to iterate quickly enough and uses old technology for too long: Zen 3 + RDNA2 is not coming to APUs before 2022, so how are they going to compete against the 3 nm Apple M1 successor with then two-year-old technology?
 
The 10700K is not a modern CPU. It lacks features like PCIe 4.0
PCIe 4.0 defines what a "modern" CPU is? Oh, come on.
Those who prefer Intel can now buy without major sacrifices.
Just looking across the board (and not just at a single benchmark score), I'd say the 5800X beats the 11700K by *at least* as much of a margin as the 11700K beats the 10700K. So yes, you are making a "major"* sacrifice by going Intel, unless you specifically need the features that Intel provides. The 11700K costs more, burns way more power, and performs significantly* worse in some applications (worse in general in almost every single score) than the 5800X.

At the other end of the spectrum, the 10700K performs significantly* worse in a handful of applications and isn't a "modern" (lol) CPU, but it is also much cheaper (I guess this depends on where you are) and doesn't require the higher-end chipset. Shoot, in the US I could probably get a 10900K for around the same price and negate any multicore advantage.

*Based on CB20, if that's literally the only score you care about. If not, I'm sure the averages are closer together, especially between 10th and 11th gen.

 
We don't have access to a timeline where Jim Keller or Lisa Su did not work for AMD, so it's tough to say what would have been released without their leadership. I'm glad they did achieve success. Now Intel has to pull their head out so we can return to a more competitive environment, particularly in desktop.
I don't think I fully understand what you're saying. What I can say is that I'm 40 years old and have used AMD since my 386 DX at 40 MHz. I've used them for a long time, including in the datacenter. Bulldozer was a turd. Piledriver was a less stinky turd. I don't believe Zen would have happened without Jim Keller.
 
Why is Intel releasing things if they're not really any better than the previous lineup?
It's at least their first new architecture in a while, so it shouldn't be susceptible to the previous litany of exploits. I'm sure new ones will pop up, because people are going to be on the hunt for them like mad, but at least they would be new.
 
Why is Intel releasing things if they're not really any better than the previous lineup?
I only have to buy a motherboard and CPU to complete my build (with all new parts) so here is my reasoning for why I'd choose Rocket Lake:
#1. Ryzen 3600 is too expensive for me.
#2. PCIe 4.0: I expect the RTX 3050 Ti to be PCIe 4.0.
#3. Faster iGPU: this build will likely be my backup later on, at which point I'll take out the video card and install it in my future main computer.
So, obviously, I want to buy a motherboard with the video ports in the back. Besides, I don't expect the 11400 to use too much power when idle.
According to HotHardware: "Rocket Lake will blast off later this month, on March 30 at precisely 6:00 am PT (9:00 am ET)."
 
Faster iGPU

To be fair, you could just say "has an integrated GPU".

The only regularly available AMD parts with an iGPU are... the first generation "Zen+" APUs (and even they are super overpriced right now). So if you want something newer than Zen1 arch, or you just want more than a 4C/8T config, and you want an iGPU for any reason, Intel is still the only option.

But then, this advantage also applied to 9th and 10th gen processors.
 
Why is Intel releasing things if they're not really any better than the previous lineup?
The big changes are platform improvements. It's Intel's first platform with PCIe 4.0, as an example, and it also has significantly more PCIe lanes. As I'm sure you're aware, this comes at the cost of some top-end absolute performance. If none of the platform improvements interest you, then getting a 9-series or 10-series setup for lower cost and slightly better performance probably makes sense.

EDIT: Honestly, for most people it probably just makes sense to get a previous-generation setup. PCIe 4.0 NVMe drives aren't "necessary" and add a lot of cost to builds if you want them in any appreciable size (2 TB+). Additionally, most people don't use 2-3 NVMe drives and multiple graphics cards necessitating more PCIe lanes. Most would probably benefit far more from spending far less on a 9- or 10-series system and putting the difference toward their graphics card or a monitor upgrade.
 
PCIe 4.0 defines what a "modern" CPU is?
Not entirely; I wrote "like PCIe 4.0". PCIe 4.0 is part of the modern feature set. Also, both new consoles use PCIe 4.0 SSDs, and it is normal for console hardware to become the baseline once game developers stop making cross-generation games. On PC, fast SSDs will leverage the DirectStorage API for asset streaming, and if you have a slow SSD, you will have to live with reduced detail or increased object pop-in.

Another example: The USB Key-A header (for front USB-C) is a modern feature of mobos. You can still buy mobos without USB Key-A header, but these are either old or budget models. Likewise, a CPU/mobo that doesn't support PCIe 4.0 is either old or a budget model.

Just looking across the board (and not just at a single benchmark score), I'd say the 5800X beats the 11700K by *at least* as much of a margin as the 11700K beats the 10700K.
I don't see this. From the benchmarks published so far, the 11700K is in a number of situations closer to the 5800X than it is to the 10700K, especially in highly parallel tasks like rendering (CB20, Blender, V-Ray) and code compilation.

*Based on CB20, if that's literally the only score you care about. If not, I'm sure the averages are closer together, especially between 10th and 11th gen.
3DCenter did an analysis (in German) of the HardwareLuxx 11700K review (which used a new BIOS released after the AnandTech review), but the charts speak for themselves:
https://www.3dcenter.org/news/hardware-und-nachrichten-links-des-10-maerz-2021
Application performance is right between the 10700K and 5800X, and gaming performance with the new BIOS is also almost there. Note that gaming performance is not comparable between AnandTech and HardwareLuxx due to different memory speed.
 
Not entirely; I wrote "like PCIe 4.0". PCIe 4.0 is part of the modern feature set. Also, both new consoles use PCIe 4.0 SSDs, and it is normal for console hardware to become the baseline once game developers stop making cross-generation games. On PC, fast SSDs will leverage the DirectStorage API for asset streaming, and if you have a slow SSD, you will have to live with reduced detail or increased object pop-in.
I sincerely doubt this will remotely be an issue on the PC side. DirectStorage is an API that definitely optimizes things for the consoles' limited feature set, but PCs have long been able to essentially brute-force this. Long story short, if all of your textures can be loaded into RAM, this is a non-issue (in other words, just get more RAM). At that point it's an issue with how the engines are written, and the bottleneck isn't storage. For a long time, storage on PCs has really only affected loading times; once in game it does very little to change the performance of the game, and it has zero to do with pop-in (again, that's engine design, related to things like viewing distance and how assets are scaled).

Even if a game is designed to "never have a loading screen" and show massive viewing distances, like Cyberpunk 2077, frankly a SATA SSD is more than fast enough to keep up, provided the game engine intelligently loads and caches when necessary (that is, loading assets dynamically before they are needed). In fact, CP2077 is an excellent example of why the DS API isn't really necessary on PC, as it's a very modern engine that was programmed properly. On a console with limited RAM and limited vRAM, the DS API will make a much bigger difference, as the drive can also be depended on as a dynamic loading source. Outside of the initial load it's unlikely to do much on the PC side, other than hopefully pushing major players like Unreal Engine to adopt modern APIs in general, modernizing the baseline of engines people use so that systems can better stream assets with any "faster" SSD (even one as "slow" as a standard SATA SSD) compared to older rotating drives, PCIe 4.0 or not.
 
Not entirely; I wrote "like PCIe 4.0". PCIe 4.0 is part of the modern feature set. Also, both new consoles use PCIe 4.0 SSDs, and it is normal for console hardware to become the baseline once game developers stop making cross-generation games. On PC, fast SSDs will leverage the DirectStorage API for asset streaming, and if you have a slow SSD, you will have to live with reduced detail or increased object pop-in.

Another example: The USB Key-A header (for front USB-C) is a modern feature of mobos. You can still buy mobos without USB Key-A header, but these are either old or budget models. Likewise, a CPU/mobo that doesn't support PCIe 4.0 is either old or a budget model.

I don't see this. From the benchmarks published so far, the 11700K is in a number of situations closer to the 5800X than it is to the 10700K, especially in highly parallel tasks like rendering (CB20, Blender, V-Ray) and code compilation.

3DCenter did an analysis (in German) of the HardwareLuxx 11700K review (which used a new BIOS released after the AnandTech review), but the charts speak for themselves:
https://www.3dcenter.org/news/hardware-und-nachrichten-links-des-10-maerz-2021
Application performance is right between the 10700K and 5800X, and gaming performance with the new BIOS is also almost there. Note that gaming performance is not comparable between AnandTech and HardwareLuxx due to different memory speed.
Yeah, BIOS tuning is important. Performance can vary pretty widely between motherboards, even with the same chipset. There are Z490 boards which score up to 10 fps lower than some others (at least with the BIOS versions at the time of review). Likewise, some of the Z590 boards are doing a few fps better than the best Z490 boards.
 
I sincerely doubt this will remotely be an issue on the PC side. DirectStorage is an API that definitely optimizes things for the consoles' limited feature set, but PCs have long been able to essentially brute-force this. Long story short, if all of your textures can be loaded into RAM, this is a non-issue (in other words, just get more RAM). At that point it's an issue with how the engines are written, and the bottleneck isn't storage. For a long time, storage on PCs has really only affected loading times; once in game it does very little to change the performance of the game, and it has zero to do with pop-in (again, that's engine design, related to things like viewing distance and how assets are scaled).

Even if a game is designed to "never have a loading screen" and show massive viewing distances, like Cyberpunk 2077, frankly a SATA SSD is more than fast enough to keep up, provided the game engine intelligently loads and caches when necessary (that is, loading assets dynamically before they are needed). In fact, CP2077 is an excellent example of why the DS API isn't really necessary on PC, as it's a very modern engine that was programmed properly. On a console with limited RAM and limited vRAM, the DS API will make a much bigger difference, as the drive can also be depended on as a dynamic loading source. Outside of the initial load it's unlikely to do much on the PC side, other than hopefully pushing major players like Unreal Engine to adopt modern APIs in general, modernizing the baseline of engines people use so that systems can better stream assets with any "faster" SSD (even one as "slow" as a standard SATA SSD) compared to older rotating drives, PCIe 4.0 or not.
What? Cyberpunk is absolutely riddled with pop-in and in-your-face LoD transitions.

DirectStorage ought to work pretty well with the mid/high-tier PCIe 3.0 drives. I'm willing to bet that in most games you won't be able to notice a difference compared to a PCIe 4.0 drive, and that's because it's not just about throughput. DirectStorage will decrease the amount of time it takes for the first bits of data to become usable by the game engine. That should clean up stutters and hitches from loading sections of an area. But there could be a few games here or there which leverage that extra throughput as well.
The PlayStation 5 isn't a great example to leverage as evidence that PCIe 4.0 is needed over 3.0, because the comparison point is the PS4, which used SATA II platter drives. Certainly, the sheer speed of the PS5's SSD is driving some incredibly short loading times and area transitions, but we aren't really able to say whether the PS5's SSD speed is exactly needed or just gravy. Time will tell.
 
3DCenter did an analysis (in German) of the HardwareLuxx 11700K review (which used a new BIOS released after the AnandTech review), but the charts speak for themselves:
https://www.3dcenter.org/news/hardware-und-nachrichten-links-des-10-maerz-2021
Application performance is right between the 10700K and 5800X, and gaming performance with the new BIOS is also almost there. Note that gaming performance is not comparable between AnandTech and HardwareLuxx due to different memory speed.
Thanks for the link... interesting.
 
Why is Intel releasing things if they're not really any better than the previous lineup?

Desperation and $$$ because people conclude 11>10.

*Edit* Or on the off chance that it's actually pretty okay at release.
 
Gen 2x2 20 Gbps USB-C is part of the chipset now, so more boards will probably include it on the back panel, including mid- and low-end boards. I have an H570 board with it.

Z490/H470 boards would have to include a third-party controller to offer Gen 2x2 USB-C.

H570 and B560 also allow memory overclocking, even for 10-series chips.
 
To be fair, you could just say "has an integrated GPU".

The only regularly available AMD parts with an iGPU are... the first generation "Zen+" APUs (and even they are super overpriced right now). So if you want something newer than Zen1 arch, or you just want more than a 4C/8T config, and you want an iGPU for any reason, Intel is still the only option.

But then, this advantage also applied to 9th and 10th gen processors.
If I look at the AMD web site, it shows there are processors like the Ryzen 5 4600G and Ryzen 3 4300G. I looked at some retailers like Newegg and didn't see them. Like where are those?
 
If I look at the AMD web site, it shows there are processors like the Ryzen 5 4600G and Ryzen 3 4300G. I looked at some retailers like Newegg and didn't see them. Like where are those?

They are OEM-only processors. While you can get them, because they are in the channel, it's more economical to just get an Intel processor. I just went through this myself; I ended up purchasing a 10600K (I was going to get a 10400/11400, but at $175 I just bit the bullet).
 
If I look at the AMD web site, it shows there are processors like the Ryzen 5 4600G and Ryzen 3 4300G. I looked at some retailers like Newegg and didn't see them. Like where are those?
Those are OEM only processors, although you can find them online at some places.
 
wandplus
You can buy the Ryzen 4300G, 4600G, and 4800G at OEM parts resellers, but the prices are quite high.

The Ryzen Pro 4350G, 4650G, and 4750G are OEM+SI and you can buy them in many shops here like normal tray CPUs. If they are not sold in your region, AliExpress has them for reasonable prices. Intel is still cheaper though.
Gen 2x2 20 Gbps USB-C is part of the chipset now, so more boards will probably include it on the back panel
USB-C on the back panel is also popular with cheap/old mobos, even first generation entry-level AM4 mobos like the ASRock AB350M Pro4 have it. But Key-A headers for front USB-C you generally don't see in the budget range yet, with the exception of one MSI mobo (B550M Pro-VDH).
 
wandplus
You can buy the Ryzen 4300G, 4600G, and 4800G at OEM parts resellers, but the prices are quite high.

The Ryzen Pro 4350G, 4650G, and 4750G are OEM+SI and you can buy them in many shops here like normal tray CPUs. If they are not sold in your region, AliExpress has them for reasonable prices. Intel is still cheaper though.

USB-C on the back panel is also popular with cheap/old mobos, even first generation entry-level AM4 mobos like the ASRock AB350M Pro4 have it. But Key-A headers for front USB-C you generally don't see in the budget range yet, with the exception of one MSI mobo (B550M Pro-VDH).
It's Gen 2x2 on the back panel; that's 10-15 Gbps faster than what you find on many motherboards, especially older ones.
 
What? Cyberpunk is absolutely riddled with pop-in and in-your-face LoD transitions.
Even if you don't feel my example was a good one, you failed to show how DS is the important factor for loading in games. It's not. Again, everything can be loaded into RAM on PCs, so the drive isn't accessed at all. If LoD transitions and pop-in occur in an engine, that's a coding and optimization issue.

To drive the point home, on PC it is literally possible to install your entire OS and your game library onto a RAM disk and not have a hard drive installed at all. That's not practical, of course, but you'd face all of the same game-engine issues, showing it isn't an issue of drive access speed.

If DS fixes anything, it will simply be better optimization in engines to better utilize RAM and the rest of the system's resources. Streaming assets isn't the issue.
 
Even if you don't feel my example was a good one, you failed to show how DS is the important factor for loading in games. It's not. Again, everything can be loaded into RAM on PCs, so the drive isn't accessed at all. If LoD transitions and pop-in occur in an engine, that's a coding and optimization issue.

To drive the point home, on PC it is literally possible to install your entire OS and your game library onto a RAM disk and not have a hard drive installed at all. That's not practical, of course, but you'd face all of the same game-engine issues, showing it isn't an issue of drive access speed.

If DS fixes anything, it will simply be better optimization in engines to better utilize RAM and the rest of the system's resources. Streaming assets isn't the issue.
That's all still limited by the current storage APIs.

DirectStorage is a new set of storage APIs designed to reduce I/O overhead. Additionally, RTX cards will be able to handle the compression and decompression rather than doing that on the CPU, which will help to maximize the potential of the APIs (supposedly by a lot, but tests need to be done when this stuff is someday available).

The PS5 runs on APIs with similar goals.
 
If I look at the AMD web site, it shows there are processors like the Ryzen 5 4600G and Ryzen 3 4300G. I looked at some retailers like Newegg and didn't see them. Like where are those?
Most of the 4000 series is OEM-only, and the OEMs are angry because AMD isn't making good on its deliveries of them.
 
High IPC at a lower frequency can beat lower IPC at a high frequency; frequency and IPC are independent variables, and performance is the product of the two. Outside of Intel-generated tests, it is likely that Rocket Lake is around 3-5% better than Comet Lake in the 4 GHz challenge; removing frequency from the comparison shows the true IPC capability.
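Put as arithmetic (made-up numbers, purely to illustrate why a fixed-clock comparison matters):

```python
# Throughput ~= IPC x frequency, so IPC alone doesn't decide a matchup.
# All numbers below are made up for illustration.

def perf(ipc: float, ghz: float) -> float:
    return ipc * ghz

# At stock-ish clocks, a small IPC lead can be eaten by a clock deficit:
print(perf(1.04, 4.8) / perf(1.00, 5.0))   # ~1.00 -> a wash

# The "4 GHz challenge": pin both chips to the same clock to isolate IPC.
print(perf(1.04, 4.0) / perf(1.00, 4.0))   # 1.04 -> the true IPC gap
```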

Watching a friend render a project almost instantaneously at less than 25% CPU usage on his new 5950X, it is a beast with no equal.
 
To drive the point home, on PC it is literally possible to install your entire OS and your game library onto a RAM disk and not have a hard drive installed at all.
That would mean you would need to limit the total size of your game assets to the minimum RAM spec, i.e., have pretty low-quality assets.

In any other case you would need a strategy for preloading assets and/or discarding unused ones, like today. The fast SSDs in the consoles and the DirectStorage API make this unnecessary; game developers can simply use any asset instantaneously, so I would not expect them to give much thought to preloading strategies any more.

This is why I expect that if you do not have a fast enough SSD, in future games there will be either reduced quality or more object pop-in, as a slower SSD completes the I/O requests only after the frame has been rendered.
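To put rough numbers on that argument (a back-of-the-envelope sketch; the throughput figures are typical sequential speeds, and real asset streaming is random-access, so actual budgets are lower):

```python
# Per-frame streaming budget at 60 fps for typical drive throughputs.
# Sequential figures only; real game streaming is random-access and slower.
FRAME_TIME_S = 1 / 60   # ~16.7 ms per frame at 60 fps

drives_mb_per_s = {
    "SATA HDD":      150,
    "SATA SSD":      550,
    "PCIe 3.0 NVMe": 3500,
    "PCIe 4.0 NVMe": 7000,
}

for name, mb_per_s in drives_mb_per_s.items():
    budget = mb_per_s * FRAME_TIME_S
    print(f"{name:>14}: ~{budget:6.1f} MB of assets per frame")
# SATA SSD -> ~9 MB/frame vs. PCIe 4.0 NVMe -> ~117 MB/frame: a slow drive
# can miss a frame's budget, forcing lower detail or pop-in.
```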
I was a week-one adopter of the tech in 2019, with a Corsair MP600 2 TB drive on an Asus X570 mobo.
The early PCIe 4.0 SSDs weren't particularly good outside of linear transfer speed. Also, current games aren't able to use fast SSDs due to other bottlenecks, so today there is almost no advantage even to an NVMe drive over a SATA drive.
 
kirbyrj
Reviewers are aware of Gear 1 vs. Gear 2 mode. It is configurable. But they cannot yet talk about it due to NDA.

I get that, but what did AnandTech use? The notebookcheck.net article seems to imply that it is a "feature" of i9s and that i7s can't run 1:1 above 2933 MHz, to artificially differentiate the product stack. AnandTech ran at 3200 MHz, which might have tripped the Gear 2 (2:1) divider, so actually underclocking the memory to 2933 MHz would have given better results.
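For anyone unfamiliar with the gearing, a quick sketch of the trade-off (the 2933 MHz cutoff is from the notebookcheck.net report; the numbers below are just the divider arithmetic):

```python
# Gear 1 runs the memory controller 1:1 with the memory clock;
# Gear 2 runs it at half speed (2:1), which adds latency.
def imc_mhz(ddr_rating: int, gear: int) -> float:
    mem_clock = ddr_rating / 2   # DDR4-3200 -> 1600 MHz memory clock
    return mem_clock / gear      # divide by the gear ratio

print(imc_mhz(3200, 2))  # DDR4-3200 in Gear 2: IMC at 800 MHz
print(imc_mhz(2933, 1))  # DDR4-2933 in Gear 1: IMC at ~1467 MHz
# "Slower" RAM in Gear 1 keeps the controller faster (lower latency),
# which is why underclocking to 2933 MHz could give better results.
```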
 