4770K with 1080 Ti vs 3080 Ti - Time to retire? :(

Pasic

Weaksauce
Joined
Dec 22, 2013
Messages
101
What a beast! 9 years... of crunching at 4600 MHz all day long. I've done a test to see how the old trusty CPU goes with a 3080 Ti.

To make the 336 mm 3080 Ti fit I had to remove the custom cooling loop and buy a Noctua NH-D15; I'm amazed at what a brilliant cooler it is.

Comparison below: 4770K @ 4.6 GHz on air with the 3080 Ti, vs an MSI laptop (pile of expensive crap that I returned 3 days later), vs the 4770K @ 4.7 GHz on the custom water loop + Strix 1080 Ti OC.
https://www.3dmark.com/compare/spy/25920255/spy/24994341/spy/1932821#

I've got a new build in the works, SFF with the NR200P Max case. Just waiting for the Z690-I Strix to arrive, and I need to decide: 12900K, 12700K, or the KF.
 
Pic of the new SFF build, this card is huge.
 

Attachments

  • 20220128_170927.jpg
12700K all day long. The extra E-cores are a joke. In fact, many of us have had to disable them for certain programs and games to run correctly.
 
Wow, I didn't know that. Thanks.
I have an app from Gigabyte that can disable the E-cores on the fly, which helps with compatibility, and I run W11 on the ADL box. Some games and programs just do not know how to handle the E-cores. The E-cores are also slower and are used for much less intensive background tasks; they will make no difference if single-threaded performance and gaming is your thing. It's not that MS was deceiving here, it's more that they changed up how the CPU operates and did not explicitly point out to consumers that the E-cores are basically useless for intensive applications.
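For anyone who doesn't want to toggle cores system-wide with a vendor utility, a rougher per-process workaround is to pin just the misbehaving app to chosen cores. A minimal Linux sketch using the standard `os.sched_setaffinity` call — note that which CPU IDs are P-cores vs E-cores varies by system, so the `{0}` below is only a placeholder, not a real P-core map:

```python
import os

def pin_to_cores(cores):
    """Restrict the current process to the given CPU IDs (Linux-only call).

    A per-process alternative to disabling the E-cores outright:
    pin a misbehaving game or app to the P-cores only. Which IDs
    are P-cores varies by CPU -- check `lscpu` or /sys first.
    """
    target = set(cores) & os.sched_getaffinity(0)  # keep only cores we may use
    if not target:
        raise ValueError("none of the requested cores are available")
    os.sched_setaffinity(0, target)                # apply the restriction
    return os.sched_getaffinity(0)                 # confirm what actually stuck

# Placeholder: CPU 0 stands in for "the P-cores" on this example box.
print(pin_to_cores({0}))
```

On Windows the equivalent per-process trick is Task Manager's "Set affinity" or the `SetProcessAffinityMask` API; the idea is the same either way.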
 
Aren't the E-cores for general-purpose use, like just running Windows and browsing and such?
 
The way the W11 scheduler uses them is still… kind of a mystery. There have been lots of anecdotal videos and info posted so far, and I claim to be no expert. I will say I've had to disable the E-cores when I had games that hung (that never did before) or apps that refused to load; disabling the E-cores immediately solved the issue. I'm guessing compatibility will improve with time. It's a new architecture and new software running it. I do believe it is the way of the future, though, and I honestly think AMD will follow suit at some point.
 

It'll take a few years for the software and OS support to get there, just like it did with AMD's awful Bulldozer. Microsoft and the Linux community had to rewrite the thread dispatcher to load only one thread per module and backfill the second thread with low-utilization background tasks. Something similar will be done with Alder Lake. It may get a bit weird at the low end, though: there are some really low-power parts like the Celeron 7300 with just one P-core and four E-cores, or the i5-1230U with two P-cores and eight E-cores.
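The backfill policy described above can be sketched as a toy scheduler. This is purely illustrative — the module count, two-slots-per-module limit, and thread names are made up, and real dispatchers use live utilization data rather than fixed heavy/light lists:

```python
# Toy model of the "one demanding thread per module" dispatch policy:
# place heavy threads on distinct modules first, so they never share a
# module's FPU, then backfill the spare slots with background work.

def schedule(heavy, light, n_modules):
    """Return {module_index: [thread names]}, at most 2 slots per module."""
    modules = {m: [] for m in range(n_modules)}
    # Pass 1: spread demanding threads, one per module.
    for m, t in zip(range(n_modules), heavy):
        modules[m].append(t)
    # Pass 2: backfill remaining slots with low-priority threads
    # (plus any heavy overflow, if there were more than n_modules).
    leftovers = list(heavy[n_modules:]) + list(light)
    for m in range(n_modules):
        while len(modules[m]) < 2 and leftovers:
            modules[m].append(leftovers.pop(0))
    return modules

# Two demanding threads land on separate modules; background tasks
# fill in behind them instead of contending with each other.
print(schedule(["game", "audio"], ["indexer", "updater"], 4))
```

The payoff is in pass 1: the two heavy threads never end up contending for one module's shared FPU, which was exactly the Bulldozer failure mode.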

Those low-end parts are going to be a marketing wet dream: they'll make systems with the crap CPUs and try to say they're 5- or 10-core CPUs, without differentiating the heterogeneous core config, to make them look better than they actually are.
 

Marketing will catch up. AMD is moving to big+little, too.

And FWIW, compare Bulldozer and its successors against Intel with the security patches applied — that's much closer to parity. Yeah, at the time, Intel was clocking AMD. Now we know, in hindsight, it's because they compromised on security. It's not fair, it's never fair, but Intel super-cheated.
 

It was less that Intel cut corners than that Bulldozer was a stupid architecture. Clustered multithreading was a terrible idea that resulted in horrendous amounts of module contention, where both integer units and the shared FPU were constantly fighting each other for execution time. Bulldozer required operating systems to rewrite their thread handlers so you didn't end up with two resource-hungry threads within the same module fighting over the single FPU and killing their efficiency.

Bulldozer in a way became a "big.little" architecture by way of thread handlers spreading resource-demanding threads among all available modules, then backfilling the second thread with low-priority background threads to avoid degrading performance of the first thread too much.

I'm not really a fan of the "big.little" approach, because x86 was never really designed to work like that. It has rarely been done before; back in the 386/486 era, some AMP servers with custom operating systems would have a 486 as the main CPU and a 386 slave CPU for offloading tasks like bus I/O, but those weren't very common. Because it's so uncommon, very little software has ever been written to take advantage of such configurations, which is why we're seeing the problem rear its ugly head with 12th-gen CPUs.
 