No excitement about Intel's latest offering??

I was making the best of a bad analogy. While Prescott did shrink the process node, it was a step backwards in performance clock for clock, and had increased power consumption. Your Thuban analogy is more accurate, though.

For the record, I am no fanboy. I have owned both Intel and AMD over the last 20+ years. I simply buy the best price/performance and go for the platform feature set I want.
Intel is at the point where AMD was with Thuban. They set themselves up for this when they decided to sit on their laurels starting with Sandy Bridge, just like AMD did with Athlon 64. Both companies made the same mistake when their rival was down and they were on top.

At least with Prescott (or PresHOT), Intel managed to shrink the process. If they can’t manage to do that soon, it will be Athlon 64 vs Pentium 4 again.

Also funny to see fanboys and fanboyism haven’t changed much in 10 years, lol.
 
I was making the best of a bad analogy. While Prescott did shrink the process node, it was a step backwards in performance clock for clock, and had increased power consumption. Your Thuban analogy is more accurate, though.

For the record, I am no fanboy. I have owned both Intel and AMD over the last 20+ years. I simply buy the best price/performance and go for the platform feature set I want.
I was referring to Snowdog and the others who are going to defend Intel to the death.

Totally agreed with you on buying whatever is best at the time of purchase.
 
I was referring to Snowdog and the others who are going to defend Intel to the death.

Attempting to correct your misunderstanding of gaming, general, and embarrassingly parallel performance differences is not defending Intel. It's just trying to correct ignorance.
 
What I cannot understand is why the hell anyone would accept anything above 65C for their CPU temps. It is so weird to see so many users, both Intel and AMD, reporting being fine with anything above 65C. My 3800X (recent upgrade in an MSI Tomahawk B350) is cooled by a 120mm-rad Corsair H60. Max temp is 65C (only pegged at that in the 3DMark Time Spy demo). I have not overclocked it myself, but I have Game Boost on and it hits 4.25GHz (standard AMD) at that temp. What are people doing, frying eggs, boiling water? There is a cheaper way to cook.

It's like the community forgot how to build properly cooled cases and set up proper airflow. If a 10 series is for you, fine, but my 3800X at 4.2GHz and 65C will beat the snot out of that 10 series in heat and power draw alone. The trick to cooling a high-clocked, high-end CPU is a case with large fans that can push air around the components and exhaust out the top, since heat rises. People building these high-end PCs in tiny cases, or even larger ones with inadequate cooling... holy shit, better keep the fire department on speed dial, man! 90C is 194F! That's really not that far from boiling water (unless you're at 10k ft, where it is boiling), but sure enough to cook some eggs. Sure, you will get higher fps, but the margin is too minimal to justify the cost of a new CPU, mobo, and cooling, plus the overall power draw.

For reference, I use an old Cooler Master HAF 932 Advanced. It has 3x 230mm fans which push air very well: one front and one side pull air in, and the top one exhausts out. The 120mm rad is mounted in the back pulling air from the rear inward, so the whole thing exhausts through the top. My GPU (MSI 2070 Super Ventus OC) will also only top out at 60C, and it is OCed +910MHz on the mem and +112 on the core. System temp is 26C to 30C under load, so the rest of the system is nice and cool.

Y'all have got to learn to make room for air to move, otherwise it is not going to do you any good. High end in smaller than a full tower is ridiculous! Amazingly, this setup is very quiet, even cranked; the loudest thing is the GPU fans, but that is variable and not constant. If I ever crank this CPU manually higher and it pushes past 65C, then I will move up to a 240 AIO, because anything above 65 is borderline dumb.
 
it was in stock when I posted it for you for $409
8 core for $409, isn't that above MSRP? 3900x would be a much better deal or even a 3800x.

Pretty meaningless launch from Intel; more reasonable pricing would at least be an incentive. If one already has a Z390 or even Z370 board with an 8700K or better, upgrading would not really buy much in the way of gaming performance or I/O... unless one absolutely needs the two extra cores, in which case AMD would just make 10x more sense: if your workload needs or benefits from 10 cores, then 12, 16, or 24 would probably be way more beneficial. The bottom line is this really sucks.
 
Conversely, no doubt you run renders all day. Perhaps show us some of your great 3D render work.
Attempting to correct your misunderstanding of gaming, general, and embarrassingly parallel performance differences is not defending Intel. It's just trying to correct ignorance.
I compile most of the time I'm on the PC, plus transcode. For MY particular workload, it makes sense. If yours doesn't, then get what makes sense for you. Stop trying to pretend like you know what everyone does and that everyone has to use their computer just like you. There are other things to do with a PC besides game, and even people who game sometimes use their PC for other things.

You really like Intel, EVERYONE can tell. If it was just one person noticing, maybe they're wrong; when you're arguing with 10 other people and they're all saying the same thing, maybe it's time to look in the mirror? I tell people to buy what makes sense for what they do, and since Zen 2 it's been more AMD than it was prior to that. With the 10 series, it doesn't seem like much will change. If you play games that run better on Intel (e.g. most of them) and that's the most important metric to YOU, then get Intel. If you use other applications that work better on AMD and that's your priority, then you buy AMD. Stop trying to make this into a contest where gaming is the only metric anyone should go by and any program that is "embarrassingly parallel" is stupid to use. Get over the fact that others use their computers differently than you, may run apps that work better on one manufacturer or the other, or may simply prioritize what they want from their computer differently.
 
8 core for $409, isn't that above MSRP? 3900x would be a much better deal or even a 3800x.

Pretty meaningless launch from Intel; more reasonable pricing would at least be an incentive. If one already has a Z390 or even Z370 board with an 8700K or better, upgrading would not really buy much in the way of gaming performance or I/O... unless one absolutely needs the two extra cores, in which case AMD would just make 10x more sense: if your workload needs or benefits from 10 cores, then 12, 16, or 24 would probably be way more beneficial. The bottom line is this really sucks.
Agreed, their pricing really isn't in line with what you get. I built 3 systems when Skylake first came out (6 series). My current desktop is Ryzen... my next 2-3 builds (2-3 because my desktop will probably just be an upgrade) will be Ryzen unless Zen 3 somehow runs worse than Zen 2. Price/performance is just a lot better with AMD. I don't even mean at the top end; I upgrade multiple PCs at my house, so low/mid range is typically where I buy, and you can get a lot more for less going with AMD.
 
Years ago. NVenc is now considered superior to software x264 for streaming.
For streaming, yes, because it offloads the work to your GPU. It's not because of quality; software encoding still has better quality at the same bitrate in general. "Better" is a subjective term. It USED to be a very noticeable difference, but the gap has closed tremendously, with Nvidia out front on quality, AMD lacking, and Intel QSV in the middle but very close to Nvidia. At least that's from the latest data I've seen. The efficiency of the GPU and similar quality make it a no-brainer to prefer for streaming, especially when you can easily bump the bitrate up a little without much effect on speed, something lesser CPUs would be more affected by.
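If anyone wants to check the quality-per-bitrate gap on their own footage rather than take anyone's word for it, here is a minimal sketch (assuming ffmpeg is on your PATH and built with NVENC support; the file names are just placeholders) that produces a CPU encode and a GPU encode at the same bitrate so the two can be compared side by side:

```python
# Minimal sketch: encode the same clip with software x264 and with NVENC at the
# same target bitrate, so output quality can be compared apples-to-apples.
# Assumes ffmpeg is on PATH and was built with NVENC support; "input.mp4" is a
# placeholder for whatever clip you want to test.
import subprocess

def encode(src: str, dst: str, encoder: str, bitrate: str = "6000k") -> None:
    """Transcode src to dst with the chosen H.264 encoder at a fixed bitrate."""
    cmd = [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", encoder,   # "libx264" = CPU encode, "h264_nvenc" = NVIDIA GPU encode
        "-b:v", bitrate,   # identical bitrate for both runs
        "-c:a", "copy",    # leave the audio stream untouched
        dst,
    ]
    subprocess.run(cmd, check=True)

encode("input.mp4", "out_x264.mp4", "libx264")      # software encode (ties up CPU cores)
encode("input.mp4", "out_nvenc.mp4", "h264_nvenc")  # hardware encode (offloaded to the GPU)
```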
 
What I cannot understand is why the hell anyone would accept anything above 65C for their CPU temps. ..... If I ever crank this CPU manually higher and it pushes past 65C, then I will move up to a 240 AIO, because anything above 65 is borderline dumb.
Because it's within the manufacturer's specs? I mean, sure it will run great at 70C, it'll run great at 75 as well... and run fine at -40C on liquid nitrogen. If you slap a stock 3900X with the stock cooler and OK airflow into your case, it's "normal" to have 80C or higher temperatures. Is it ideal? You're possibly losing a percent or two of performance on long-running, high-power loads, but for day-to-day stuff, no difference. If you are constantly hitting 90C on minimal loads, you should probably check things out. If you are constantly pushing all cores to 100%, you should probably have better cooling to maintain boost speeds. If you like tinkering and have extra $, it's not a bad thing to get a better cooler, but the cost of a decent AIO is the difference between a 3600 and a 3700X for a lot of people. I'd take a 3700X at 80C vs a 3600 at 70C in just about any workload; both are still perfectly reasonable and within tolerance. That said, I too like to run stuff cooler, to the point I'll undervolt to achieve my goals of less power/quieter/cooler.

When someone says "My 3800X is hitting 85C when stress testing, is this normal?", if you're on a stock cooler and purposefully stressing it, yes, it's normal; just use it and be happy, or you can spend some $ to bring the temps down for benchmarking/stress testing if you desire. If someone is running 24/7 video editing and says their CPU is throttling when they're trying to get projects completed, then they need a better cooling solution. At that point it's not about "normal" or "safe", since it's still normal, but it's actually affecting something and probably making a noticeable difference in performance.
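For the "check things out" part, a quick way to see what the CPU is actually doing is to log temperatures while a workload runs. A minimal sketch, assuming a Linux box with the psutil package installed (sensor and label names such as k10temp or coretemp vary by system, and the 10-sample loop is arbitrary):

```python
# Minimal sketch: sample CPU temperature sensors once a second for ~10 seconds
# while a workload runs. Assumes Linux and the psutil package; sensor names
# (k10temp, coretemp, ...) and labels (Tdie, Package id 0, ...) vary by system.
import time
import psutil

for _ in range(10):
    for chip, entries in psutil.sensors_temperatures().items():
        for entry in entries:
            # entry.label can be empty, so fall back to the chip name
            print(f"{chip}/{entry.label or 'temp'}: {entry.current:.1f} C")
    time.sleep(1)
```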
 
What I cannot understand is why the hell anyone would accept anything above 65C for their CPU temps. ..... If I ever crank this CPU manually higher and it pushes past 65C, then I will move up to a 240 AIO, because anything above 65 is borderline dumb.

It's not 2001 anymore, and CPUs and GPUs don't just up and die due to what would have been considered high temps at that time. Is the 10 series a furnace? Yes, but that's the trade-off you have to deal with to get the performance they're brute-forcing out of the architecture. Either way, the CPUs and motherboards will protect themselves long before temps ever become a concern. About the only risk you have is killing through-hole capacitors earlier due to their proximity to the CPU.
 
What I cannot understand is why the hell anyone would accept anything above 65C for their CPU temps. ..... If I ever crank this CPU manually higher and it pushes past 65C, then I will move up to a 240 AIO, because anything above 65 is borderline dumb.

I don't like liquid cooling, for starters. Also, 65C is not some magic number.

What matters is what the manufacturer says is safe, and at what point it will throttle and impact performance.

Heck my 12 year old C2Q has been overclocked to the point that it hits nearly 80C in stress tests, and has been running like that for 12 years!
 
Intel is at the point where AMD was with Thuban. They set themselves up for this when they decided to sit on their laurels starting with Sandy Bridge, just like AMD did with Athlon 64. Both companies made the same mistake when their rival was down and they were on top.
I think this is a better comparison than Bulldozer, at least from an outside perspective :)

Also funny to see fanboys and fanboyism haven’t changed much in 10 years, lol.
Nope; anyone that has preferred performance over the years has been accused of being every color of fanboy at one point or another...
 
I just compared one of the oldest Skylake chips to the new Comet Lake:
6700K vs 10300K
Everything is the same except the base frequency, which is also the reason the 10300K's TDP is lower. Also, the iGPU is now called UHD 630 instead of HD 530, but otherwise it has identical specs.
It is the same damn core and I bet it is made out of the same damn wafers without a single transistor changed.

I bet i9 10900K could run on Z170 with DDR3 just like i9 9900K can.

One wonders then: what the hell were people at Intel doing for the last five years?
If they have issues with 10nm then they should release a new architecture on 14nm.
Maybe this is what Rocket Lake will be, but I would not be surprised if they release another Skylake with another software turbo and theoretical clocks no one will ever see in any monitoring software...
 
I just compared one of the oldest Skylake chips to the new Comet Lake:
6700K vs 10300K
Everything is the same except the base frequency, which is also the reason the 10300K's TDP is lower. Also, the iGPU is now called UHD 630 instead of HD 530, but otherwise it has identical specs.
It is the same damn core and I bet it is made out of the same damn wafers without a single transistor changed.

I bet i9 10900K could run on Z170 with DDR3 just like i9 9900K can.

One wonders then: what the hell were people at Intel doing for the last five years?
If they have issues with 10nm then they should release a new architecture on 14nm.
Maybe this is what Rocket Lake will be, but I would not be surprised if they release another Skylake with another software turbo and theoretical clocks no one will ever see in any monitoring software...

I'm sure there's a stepping change in there for hardware mitigations of Spectre/Meltdown, etc., and a slightly more refined 14nm, but it's essentially the same. It probably performs exactly the same in MT, I'd guess, but slightly better in ST since it has a slightly higher boost.
 
It is the same damn core and I bet it is made out of the same damn wafers without a single transistor changed.
Same core, yes -- same wafers / transistors, not so much.

Intel has made many smaller changes to allow for higher clockspeeds, faster clockspeed transitions, lower power usage, increased memory speeds, increased board connectivity, and of course security patching.

Also, Intel did iterate the GPU after the 6000-series.

I bet i9 10900K could run on Z170 with DDR3 just like i9 9900K can.
If the board has a massive power delivery budget far exceeding anything that the 6000 and 7000 series could have used... and then has its firmware custom hacked and so on....
 
If the board has a massive power delivery budget far exceeding anything that the 6000 and 7000 series could have used... and then has its firmware custom hacked and so on....

Or maybe Intel could just keep the "unlimited power turbo boost" in check on lower-end (non-Z) boards, and then they wouldn't have to design a new socket just for supposed power delivery, etc.

And if everything I've read in these threads is correct, it draws less than 100W while gaming anyway. No need for a robust power delivery system ;).
 
I just compared one of the oldest Skylake chips to the new Comet Lake:
6700K vs 10300K
Everything is the same except the base frequency, which is also the reason the 10300K's TDP is lower. Also, the iGPU is now called UHD 630 instead of HD 530, but otherwise it has identical specs.
It is the same damn core and I bet it is made out of the same damn wafers without a single transistor changed.

I bet i9 10900K could run on Z170 with DDR3 just like i9 9900K can.

One wonders then: what the hell were people at Intel doing for the last five years?
If they have issues with 10nm then they should release a new architecture on 14nm.
Maybe this is what Rocket Lake will be, but I would not be surprised if they release another Skylake with another software turbo and theoretical clocks no one will ever see in any monitoring software...
Designing the silicon patterns for a chip is not easy. Going from 14nm to 10nm is not just taking the patterns and scaling the size. The same is true if they were to back-port a new design: they designed it with 10nm in mind, meaning a certain number of transistors, specific power requirements, speeds, etc. All of these are not equal, so it's not as easy as taking an already-designed architecture and back-porting it to 14nm in a week.

The 10nm transition failure created all sorts of issues. They were confident (falsely, obviously) that they would be able to work out the issues and didn't want to spend time backporting and taking away from R&D to move forward. I'm sure if they had realized then what they know now, backporting would have made more sense, but they made a business decision that didn't work out as well as they hoped. It takes a lot of time to go from design -> silicon; this is why AMD has leapfrogging teams that were working on Zen 4 before Zen 3 even started silicon design. If Intel had to modify the design to work on 14nm it would (just guessing on this one) probably have taken 2 years to get the timing re-worked, cache sizes probably changed (the transistor budget on 14nm is smaller than on 10nm), silicon designed, tested, verified, patterns made, then hardware testing, etc. By the time they realized 10nm was going to be pushed back much further than they ever anticipated, they were already so far behind that backporting would only be a stopgap for a short period.

Rocket Lake very well may be 14nm because they finally realized even a 2-year delay is better than waiting any longer. Their 10nm fiasco really did put them in a damned-if-you-do, damned-if-you-don't situation. They really tried to take a big step forward and managed to take two backwards instead. I'm amazed how much they've been able to pull out of 14nm honestly, but Skylake + 14nm is showing its age and needs replacing sooner rather than later. I think over 5GHz on 14nm is also hurting their transition to 10nm, because they aren't able to get higher clocks on 10nm, so it would have been a really hard sell to come out with a slower CPU for a new generation.
 
Designing the silicon patterns for a chip is not easy. Going from 14nm to 10nm is not just taking the patterns and scaling the size. ..... it would have been a really hard sell to come out with a slower CPU for a new generation.

Is Comet Lake a 10nm back port though? I don't remember off the top of my head. It might be that it was supposed to be the die shrink of the old architecture. It's clearly Skylake based though.

I'm pretty sure I read that Rocket Lake was definitely a back port to 14nm.
 
Designing the silicon patterns for a chip is not easy. Going from 14nm to 10nm is not just taking the patterns and scaling the size. ..... it would have been a really hard sell to come out with a slower CPU for a new generation.
I understand it, but keep in mind that during those five years they made pretty much no performance improvements to Skylake, and at the very least they should have tried to improve it somehow...

Take for example Broadwell, which is a die shrink of Haswell. The only other thing they changed besides the process node was giving it an L4 cache, and the result is that at 4.2GHz Broadwell can often outperform Skylake even clocked at 5GHz, most notably in games and other memory-intensive applications. The performance improvement in productivity is also pretty significant.
https://www.purepc.pl/broadwell-niszczyciel-test-intel-core-i5-5675c-i-core-i7-5775c?page=0,22

Now if the i9 10900K had L4 it would not need to be clocked as high (which would lower TDP, and the extra die area would help with thermals), and it would easily outperform 12c/24t Ryzen parts in productivity and just wipe the floor with any Ryzen in gaming. Also, all 9900K owners would now be putting theirs on eBay to get rid of their outdated "fastest gaming CPUs" and Intel would be busy counting sweet, sweet money.

What did Intel do instead? Absolutely nothing.

Of course there are reasons not to do that, but as it is AMD will probably beat them to the punch and use L4 to boost performance... and then any reason Intel had not to do it will be invalidated...
 
I'd take a 3700X at 80C vs a 3600 at 70C

You don't have to. Just with a decent air cooler my 3700x doesn't go above 70C and that's in a rackmount that doesn't have the best ventilation. It's one of the coolest running CPUs you can get right now (in its class). Though the paradigm has shifted a bit on what's considered hot and cool. Manufacturers have raised thermal limits over the years so what's considered cool is also higher than it used to be.

I'm not loyal to either brand; in fact my recent build was the first with AMD since Core 2. I think Intel has simply got to get power consumption and heat down to compete at this point. Typically that's done with a smaller process, but they could probably do well with 10nm at this point. Right now heat and power consumption look like the biggest complaints, along with cost for the core/thread count. Unfortunately for Intel, Zen 3 is coming soon, which will give AMD even more of an edge. Intel needs to do something fast, and the 10k series was not it.
 
Well, at this time and with my stable 5GHz all-core 9900K setup, I think I am going to wait for next gen to see which way I may go.
Going to the 10900K is not going to get me much; it's basically an enhanced 9900K, even though it keeps scoring less in Photoshop, which is my MAIN thing. Likely some issue that will be solved, but still. And going to AMD is not going to happen until they can hit 5GHz or beat Intel in apps like Photoshop and competitive gaming. So at this point, it will either be Ryzen 4000 or 11th gen before I upgrade.
 
Can't get too excited about Intel; we have used AMD for a few customer builds, a nice alternative to Intel. Still on a 5820K and I see no need to upgrade, it handles anything I throw at it. For the $279 (new, not used) I paid, it is hard to complain. Not totally sold on the 7nm platform yet.
Performance is both quantifiable ("measurebators" who measure and analyze data and only go with faster, read newer, HW) and subjective, nonscientific comparison (i.e. "feels faster").
For example, my 5820K rig OC clocks lower than my son's 8700K rig, but his system is not as snappy as mine (X99 versus 1151).
I buy solely on actual computational needs and price. Hard to compare HW unless everything in your test setups is the same. Impossible to do across AMD's and Intel's HW.
I buy based on my preference, not some measurbator's review. I still read and respect the reviewers' efforts and do use them as a baseline when purchasing.
Hard to verify reviewers' true expertise and scientific competence; not peer reviewed like my publications are.
 
Well, at this time and with my stable 5GHz all-core 9900K setup, I think I am going to wait for next gen to see which way I may go.
Going to the 10900K is not going to get me much; it's basically an enhanced 9900K, even though it keeps scoring less in Photoshop, which is my MAIN thing. Likely some issue that will be solved, but still. And going to AMD is not going to happen until they can hit 5GHz or beat Intel in apps like Photoshop and competitive gaming. So at this point, it will either be Ryzen 4000 or 11th gen before I upgrade.
As of Zen 2 they beat the Intel 9900K more than they lose to it in Photoshop... that's against the 3800X. Not to mention the 3900X is faster, and the 3950X faster than that.
https://www.pugetsystems.com/labs/a...adripper-2-Intel-9th-Gen-Intel-X-series-1529/

Gaming, well, this is the one place AMD still hasn't quite caught up. I don't care if they ever hit 5GHz, it's just a number. If they can run at 4GHz and outrun Intel @ 5.5GHz, I'd still buy it. If you want to hold off for an imaginary line in the sand, you have that luxury (especially with a 9900K, which will easily keep up with today's best in just about everything). I just care about how much it costs and how fast the stuff I do gets done.


Edit: P.S. Just pointing out that AMD isn't really slow in Photoshop in general; obviously there are specific things you could be doing that favor Intel. I just wanted to make sure people aren't stuck in the mindset that Photoshop is always better on Intel.
 
As of Zen 2 they beat the Intel 9900K more than they lose to it in Photoshop... that's against the 3800X. Not to mention the 3900X is faster, and the 3950X faster than that.
https://www.pugetsystems.com/labs/a...adripper-2-Intel-9th-Gen-Intel-X-series-1529/

Gaming, well, this is the one place AMD still hasn't quite caught up. I don't care if they ever hit 5GHz, it's just a number. If they can run at 4GHz and outrun Intel @ 5.5GHz, I'd still buy it. If you want to hold off for an imaginary line in the sand, you have that luxury (especially with a 9900K, which will easily keep up with today's best in just about everything). I just care about how much it costs and how fast the stuff I do gets done.


Edit: P.S. Just pointing out that AMD isn't really slow in Photoshop in general; obviously there are specific things you could be doing that favor Intel. I just wanted to make sure people aren't stuck in the mindset that Photoshop is always better on Intel.


Zen 3 may fix the major performance issues in games - remember, the single CCX is what makes the 3300X such an impressive performer over the 3100. There's also 5% higher clocks, but the single CCX makes the majority of the difference!

Zen 3 will be 8-core single-CCX now! They should have other I/O performance improvements under the architecture as well, so it should be about 10-15% faster!

[Attached chart: relative gaming performance, 1280x720]
 
Zen 3 may fix the major performance issues in games - remember, the single CCX is what makes the 3300X such an impressive performer over the 3100.
That, and other improved I/O performance...
Does Zen2 even have any major performance issues with games?
I was under the impression that the 9900K/10900K advantage is splitting hairs over a few frames per second, at resolutions no one uses anymore, well past the capabilities of a typical 144Hz gaming monitor 🤣

Notably, K10 was slower than Nehalem when the latter came out, and Bulldozer, which had major performance issues in games, was even slower than both. Zen and Zen+ were kind of underwhelming but sufficient, and nothing to worry about except maybe for some pro gamers (meaning people who spend more time playing online games than anything else...) with 240Hz monitors, where even if those additional frames do not really matter, it is reasonable to get the fastest processor for the task. But Zen 2? It is actually pretty good, especially if someone goes all in for the 3950X, which is a terrific processor for anything you throw at it. Slightly slower than highly clocked Skylake, but not to the point where it even matters anymore.

But yeah, I get it: AMD has some optimizations left to do which could make it not only the productivity king but simply the best at pretty much everything.
And it will be very interesting to see that happen, because it would be pretty much Athlon 64/X2 all over again. Even at that time, arguably the Pentium 4s and then the Pentium D, especially the Extreme Edition variants, were often better at productivity, and AMD screwed up multithreading on the X2 to the point that imho it was actually better to get a Pentium D just to avoid dealing with those issues.

Of course, even if Intel appears to be in a deep S5 sleep state, just adding pluses to 14nm and polishing Skylake (kinda literally, even...), they might be cooking up something interesting, and AMD cannot do the same and rest assured that their sales will continue to be great indefinitely even if they do not improve Zen.
 
Zen 3 may fix the major performance issues in games - remember, the single CCX is what makes the 3300X such an impressive performer over the 3100. There's also 5% higher clocks, but the single CCX makes the majority of the difference!

Zen 3 will be 8-core single-CCX now! They should have other I/O performance improvements under the architecture as well, so it should be about 10-15% faster!

[Attached chart: relative gaming performance, 1280x720]
I agree, Zen 3 MAY fix it, but without more information I'm not going to state it like it's a fact; I will hope that it's true, though!
Does Zen2 even have any major performance issues with games? ..... AMD cannot do the same and rest assured that their sales will continue to be great indefinitely even if they do not improve Zen.
It's not a "major" issue, and 1440p and higher it's pretty relative (although, noticeable on .1% minimums). IF (note, big if) they can close this last gap, Intel will have to start dropping prices to stay competitive as their name/brand will only get them so far. If the gap close some but still not all the way, Intel will cling on the "Fastest gaming CPU" title like it's going out of style. Yes, if it was to close this gap and even overtake Intel, it would be Athlon64 again, except this time they are in a better position than they were back then due to the other zen releases (and Intels failures to release). I'm much more excited to see if Zen3 will finally close the last gap (zen2 closed it a lot over zen+, hoping the IPC increase and small freqeuncy increase for zen3 will finish closing it). I agree, the 3300x is a good gaming CPU, I am contemplating picking one up for my sons desktop because of this.
 
Does Zen2 even have any major performance issues with games? ..... AMD cannot do the same and rest assured that their sales will continue to be great indefinitely even if they do not improve Zen.


It depends on the game. Some differences are as large as 15-20%, but most are 10% or less. That's why the average performance difference between the 9700K and the 3600X in the graph above is around 10%. That's already small enough for most buyers (except for competitive gamers!).

Zen 3 should make that difference disappear, or go in AMD's favor. That is why I've been waiting to replace my 4790K with Zen 3! I can't wait to pick up a single-CCX 4700X!
 
I understand it, but keep in mind that during those five years they made pretty much no performance improvements to Skylake, and at the very least they should have tried to improve it somehow...

Take for example Broadwell, which is a die shrink of Haswell. The only other thing they changed besides the process node was giving it an L4 cache, and the result is that at 4.2GHz Broadwell can often outperform Skylake even clocked at 5GHz, most notably in games and other memory-intensive applications. The performance improvement in productivity is also pretty significant.
https://www.purepc.pl/broadwell-niszczyciel-test-intel-core-i5-5675c-i-core-i7-5775c?page=0,22
No
https://techreport.com/review/34205/checking-in-on-intels-core-i7-5775c-for-gaming-in-2018/

Also, getting Broadwell to 4.2GHz was not easy - I have first-hand experience trying under custom water. The best I managed was 3.85GHz.
 
Not totally sold on the 7nm platform yet.
Performance is both quantifiable ("measurebators" who measure and analyze data and only go with faster, read newer, HW) and subjective, nonscientific comparison (i.e. "feels faster").
For example, my 5820K rig OC clocks lower than my son's 8700K rig, but his system is not as snappy as mine (X99 versus 1151).

Please enable speedshift on your son’s machine and report back
 
What "no"?
You posted a URL to a test where they compared a stock 4c/8t part running on DDR3 to newer processors with more cores and vastly higher base clocks running on DDR4.

This kind of test is good for showing that a stock i7 5775C is not really a good gaming CPU today. Those CPUs are pretty expensive, and some people on the LGA1150 platform might get the idea that it is an alternative to newer platforms; this test debunks that.

The point I was trying to make was that the inclusion of a large 128MB L4 cache let Broadwell compete with higher-clocked Skylake (4.2GHz Broadwell vs 5GHz Skylake, and in some games it won or was very close), and this despite Broadwell being on slower DDR3 while Skylake was on DDR4, and Broadwell, being Haswell-based, also having lower IPC. So if this worked for Haswell, it is not unimaginable that if Intel put a similar L4 cache on Skylake it would also get some performance improvement, especially in memory-intensive applications like games. Maybe 128MB would not be enough to provide a similar boost for today's larger games running on more threads, but even that amount would help Intel keep its gaming performance advantage and make the existence of the LGA1200 platform make much more sense than it otherwise does.

Maybe Intel keeps this as a last-resort option because it makes the processor die much larger, and they do not want that because it decreases yields and thus increases cost. I hope Rocket Lake will have it, though.
 
As of Zen 2 they beat the Intel 9900K more than they lose to it in Photoshop... that's against the 3800X. Not to mention the 3900X is faster, and the 3950X faster than that.
https://www.pugetsystems.com/labs/a...adripper-2-Intel-9th-Gen-Intel-X-series-1529/

Gaming, well, this is the one place AMD still hasn't quite caught up. I don't care if they ever hit 5GHz, it's just a number. If they can run at 4GHz and outrun Intel @ 5.5GHz, I'd still buy it. If you want to hold off for an imaginary line in the sand, you have that luxury (especially with a 9900K, which will easily keep up with today's best in just about everything). I just care about how much it costs and how fast the stuff I do gets done.


Edit: P.S. Just pointing out that AMD isn't really slow in Photoshop in general; obviously there are specific things you could be doing that favor Intel. I just wanted to make sure people aren't stuck in the mindset that Photoshop is always better on Intel.
Except my 5GHz all-core 9900K easily beats the 3950X score in Photoshop; in fact, it beats the 10900K for some reason at this point. So for my needs, Photoshop and even Premiere (using the iGPU) plus competitive gaming, Intel delivers the best performance for my money.
Indeed I have the luxury to wait.
Well duh, if they could run the CPU at 1MHz and still perform better than Intel at 10GHz, of course I would go with AMD in that case. The point was not just having a number as a goal, but as they keep struggling to go faster, the next gen may determine which one finds the best configuration to beat the other in all aspects... no compromise.
 
I just compared one of the oldest Skylake chips to the new Comet Lake:
6700K vs 10300K
Everything is the same except the base frequency, which is also the reason the 10300K's TDP is lower. Also, the iGPU is now called UHD 630 instead of HD 530, but otherwise it has identical specs.
It is the same damn core and I bet it is made out of the same damn wafers without a single transistor changed.

I bet i9 10900K could run on Z170 with DDR3 just like i9 9900K can.

One wonders then: what the hell were people at Intel doing for the last five years?
If they have issues with 10nm then they should release a new architecture on 14nm.
Maybe this is what Rocket Lake will be, but I would not be surprised if they release another Skylake with another software turbo and theoretical clocks no one will ever see in any monitoring software...

Rocket Lake will indeed be Sunny Cove... or Willow Cove... or maybe it was Golden Cove. Anyway it's on the Cove architecture.

It will be interesting to see how it handles high clocks, as the architecture plays as much of a role in clock speed as the node does. RKL will need at least 15% better IPC while reaching 5GHz+ in order to compete with Zen 3, imho.
 
Why would anyone be excited about another CPU with basically the same performance that runs hotter than the sun?

It's about as boring as it gets.
 
Except my 5GHz all-core 9900K easily beats the 3950X score in Photoshop; in fact, it beats the 10900K for some reason at this point. So for my needs, Photoshop and even Premiere (using the iGPU) plus competitive gaming, Intel delivers the best performance for my money.
Indeed I have the luxury to wait.
Well duh, if they could run the CPU at 1MHz and still perform better than Intel at 10GHz, of course I would go with AMD in that case. The point was not just having a number as a goal, but as they keep struggling to go faster, the next gen may determine which one finds the best configuration to beat the other in all aspects... no compromise.
Well, you said duh to that, but your previous comment said you aren't going to go AMD until they can hit 5GHz... so you contradict yourself now.
5GHz all-core will outrun a lot of things, and for what you use it for, it sounds like you have zero reason to upgrade (Intel or AMD). I was simply pointing out that stock for stock, AMD is no longer crap in Photoshop like in the past; I didn't mean to say it would keep up with your overclocked rig. Also, Premiere can use a regular GPU as well, so unless you are using your iGPU to game, this is kind of a moot point, as you could just use the GPU whether you had Intel or AMD. For competitive gaming there is still no choice: Intel wins and can overclock better. This is why I try to ask someone what their primary and secondary uses are and which they consider more important. No longer does a single chip win in all situations.
 
Why would anyone be excited about another CPU with basically the same performance that runs hotter than the sun?

It's about as boring as it gets.
Hard to see how anyone might be excited about these releases. Two more cores, potentially higher max overclocks, what's new on Z490 (I don't actually know or care myself).

However, if you do need to purchase now, an unfortunate time with Zen 3 around the corner and Intel hopefully releasing an actually new architecture soon, this current Intel release is still worth considering based on your workload.
 
Hard to see how anyone might be excited about these releases. Two more cores, potentially higher max overclocks, what's new on Z490 (I don't actually know or care myself).

I wouldn't say excited, but I am glad to see HT turned on across the lineup, like it should have been. Intel should have done this in 9th gen, but I guess then they would have had almost nothing for 10th gen. It makes the i5 a great budget gaming CPU.

The extra cores really only apply to people doing a lot of rendering (an extremely tiny niche) or similar activity, so I don't give a rat's ass about that.
 
Well, you said duh to that, but your previous comment said you aren't going to go AMD until they can hit 5GHz... so you contradict yourself now.
5GHz all-core will outrun a lot of things, and for what you use it for, it sounds like you have zero reason to upgrade (Intel or AMD). I was simply pointing out that stock for stock, AMD is no longer crap in Photoshop like in the past; I didn't mean to say it would keep up with your overclocked rig. Also, Premiere can use a regular GPU as well, so unless you are using your iGPU to game, this is kind of a moot point, as you could just use the GPU whether you had Intel or AMD. For competitive gaming there is still no choice: Intel wins and can overclock better. This is why I try to ask someone what their primary and secondary uses are and which they consider more important. No longer does a single chip win in all situations.
Let me spell it out for you... I said until they can hit 5GHz based on TODAY'S performance, where their current speed cannot match what Intel is doing in the apps I need. If tomorrow they can achieve more performance than Intel in those programs with less clock speed, then that is fine as well. Got it now????
No, Premiere's next release will indeed use the GPU better, but the current utilization is not nearly as fast as what the iGPU can do. This is why it easily beats what AMD offers. Gamers Nexus used to test with the iGPU and showed the big differences, but even that channel decided to skip that these days... guess it pays off to pander to loud AMD fans. Anyways, get your FACTS together, buddy, and don't let fanboyism cloud your judgement.
 