AMD Ryzen 7000 Series Reviews

The sad thing is, for an average user / gamer such as myself, there is no point. If anything was PCIe 5.0, if DDR5 made any sense, if the CPUs didn't run at 95°C BY DESIGN…
I think people misunderstand this and what it truly means. A performance uplift isn't free; it has to come from somewhere. You can't magically make a new architecture more efficient while also offering faster performance. Every once in a while some breakthrough leads to a major change where you do see both, and then that headroom gets eaten up over time by performance improvements to that architecture. Also keep in mind that this is going to happen in heavily multi-threaded workloads, not gaming or general tasks. You aren't going to see those temperatures very often unless you use the machine for certain specific applications and that's all you do with it.

People that really push their existing systems or overclock are already used to seeing CPU temperatures in that range. It's not that big of a deal.

This is an ultra-competitive space now. AMD and Intel have been running their highest-end CPUs at the edge of what the silicon is capable of; that's why overclocking is virtually pointless on those chips. Any gains in efficiency will give way to making the CPUs more powerful, and any headroom they have will be used toward outperforming the competition. That's all AMD has done here. Get used to it, because this is the new norm unless some serious breakthrough in semiconductor engineering happens soon that is practical to implement for desktop and mobile systems.

Well, Raja has been long gone and is buggering up Arc nicely. The RX 6000 series were on target, and I think it will remain consistent.
AMD stated during the Ryzen 5000 series announcement that their GPUs would be the best on the planet. They were far from delivering on that statement.
 
What most of the reviews are telling me is that I kind of want a 5800X3D.

Or that I should wait a few months and see if 3D V-Cache versions of these 7000-series parts show up.
I keep thinking they lowered the price on the 7950X so they can slot a 7950X3D over it at some point in the next year.
 
Worth it? A 32 GB DDR5 kit alone would cost more than a good DDR4 one, I would have thought, no?
Well, I was comparing 16 GB kits, as I feel that's more than enough for a gaming rig (who knows, maybe I'm wrong on that), and that's about a $50 difference in cost just to get into the game without worrying too much about timings, memory speed, and all that rot. At 32 GB that difference seems to grow to about $60-80. Just looking at Corsair Vengeance, which is one of the cheaper brands out there, you've got $140 vs $80, so that's a $60 difference. The difference absolutely can be a lot more if you so choose or are that picky. But I'm just talking about getting a system up and running, not necessarily optimizing for "the best stuff I can throw in there".
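For anyone doing the same math, the price gap above boils down to a one-line calculation. The kit prices are just the rough Corsair Vengeance figures quoted in this post; they're assumptions and will drift over time:

```python
# Rough memory-side premium for a DDR5 build vs a DDR4 one, using the
# ballpark 32 GB kit prices from the post above (assumed, will drift).
ddr4_32gb_usd = 80
ddr5_32gb_usd = 140

def memory_premium(ddr4_price, ddr5_price):
    """Extra cost just to get into an AM5/DDR5 build on the memory side."""
    return ddr5_price - ddr4_price

print(memory_premium(ddr4_32gb_usd, ddr5_32gb_usd))  # 60
```

And that's before the motherboard premium mentioned elsewhere in the thread.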
 
https://www.techpowerup.com/review/amd-ryzen-7-7700x/20.html

Decent jump at 1440p vs previous generations, to be fair. Gains of >10% at 1440p are nothing to sneeze at, especially considering that resolution has historically been far more GPU-limited. Whether or not it's worth an upgrade is up to you.

I'll personally be far more interested if motherboard prices come down. AM5 motherboard prices are insane right now. DDR5 is moving in the right direction, but I have no idea how the jump in cost from a comparable AM4 board could be justified.

Was about to post this, for those interested in gaming results at playable resolutions. Of course, more than frame rate matters, but I may consider an upgrade from a 3700X:

far-cry-6-2560-1440.png
 
What most of the reviews are telling me is that I kind of want a 5800X3D.

Or that I should wait a few months and see if a 3D cache versions of these 7000 series parts.
It is very likely going to be more than a few months. TSMC is still struggling with the manufacturing process for its TSVs.

The TSMC paper showcased yields for SoIC, which was quite interesting. This was done using daisy-chain test structures on test dies measuring 6 mm by 6 mm, which is conveniently the same die size as AMD's V-Cache. One of the slowest steps in chip-on-wafer hybrid bonding is when a BESI tool physically picks up the die and places it on the bottom wafer. This bond step suffers heavily from accuracy requirements, and throughput versus accuracy is a very big battle. TSMC, with a 3-micron TSV pitch, showed that yields do not differ and resistance does not meaningfully change at less than 0.5-micron misalignment, with 98% bond yield. From 0.5-micron to 1-micron misalignment, their structure did yield, but there was a sharp increase in resistance for the last 10% of their daisy-chain structure. With greater than 1-micron misalignment, yield was 60% and all measured structures exceeded their resistance specifications. 0.5 microns is a very important level because BESI's claimed accuracy on their 8800 Ultra tool is <200 nm, although we have heard it is more like 0.5 microns with a wide variance, even with throughput at half the tool's rated spec.
TSMC also showcased that contact resistance was better across the stack due to their thinner barrier layer. In addition, TSMC believes SoIC is more reliable, including across a wider range of operating temperatures. Many were disappointed when AMD locked down overclocking and power modification entirely on their 5800X3D desktop chips, but this is likely only a hiccup with the 1st generation. As TSMC's Cu alloy is modified and the pitches decrease with SoIC gen 4, it seems they are improving both reliability and yields.
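Just to restate the reported bands in one place, here's a tiny sketch mapping die-placement misalignment to the yield/resistance behaviour described above. The values are as reported in the paper summary, nothing measured here, and the function name is mine:

```python
# Illustrative lookup of TSMC's reported SoIC daisy-chain results at a
# 3-micron TSV pitch, keyed by die-placement misalignment in microns.
# Bands and values come from the summary above; this is not measured data.
def soic_bond_outcome(misalignment_um):
    """Return (bond_yield, resistance_note) for a given misalignment."""
    if misalignment_um < 0.5:
        return (0.98, "resistance nominal")
    elif misalignment_um <= 1.0:
        # yield rate for this band was not stated, only that it yielded
        return (None, "sharp resistance rise on last ~10% of chain")
    else:
        return (0.60, "all structures out of resistance spec")

print(soic_bond_outcome(0.3))
print(soic_bond_outcome(1.5))
```

The takeaway is that the BESI tool's real-world ~0.5-micron placement accuracy sits right on the cliff edge of the first band.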

https://semianalysis.com/packaging-...-1-micron-pitch-hybrid-bonding-mediatek-netw/
 
If people are interested in snatching up a 5800X3D, it might be worth checking out the Microcenter "daily deals" if you live near one. They tend to do specials on older items whenever new products roll out. They're also good about having bundles where you buy a processor/mobo and they take $100 off the top of existing discounts.
 
What I'm most curious about is AMD's design decision not to allow OC on the 5800X3D due to heat and degradation issues. Wondering how an X3D version of Zen 4 will fare, considering it's designed around a 95°C target at 100% CPU utilization at the highest clock speed possible. Guess we're still months away from that though.
 
Are you talking about performance or something else? Because performance wise even a cheap DDR5 has measurable gains.

There are some titles at 1080p where the extra speed helps, but at 1440p and up the latency really hurts and the speed means nothing. On average, though, even at 1080p it's only 3%. Game engines don't currently batch their commands in a way that can take advantage of DDR5's split memory channels. That will change, so in time DDR5 will be the better choice, but currently it's not there.
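The latency point is easy to sanity-check with the usual first-word latency formula (CAS cycles divided by the memory clock). The two kits below are illustrative examples, not necessarily the ones from the charts:

```python
# First-word (CAS) latency in nanoseconds: cycles / clock_MHz * 1000.
# For DDR memory the data rate in MT/s is twice the actual clock.
def cas_latency_ns(transfer_mt_s, cas_cycles):
    clock_mhz = transfer_mt_s / 2
    return cas_cycles / clock_mhz * 1000

# Example kits (assumed, for illustration):
print(round(cas_latency_ns(3600, 16), 2))  # DDR4-3600 CL16 -> 8.89 ns
print(round(cas_latency_ns(6000, 36), 2))  # DDR5-6000 CL36 -> 12.0 ns
```

So despite the much higher bandwidth, a typical early DDR5 kit can actually be slower on raw first-word latency, which lines up with the 1440p-and-up results above.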

 
What I'm most curious about is AMD's design decision not to allow OC on the 5800X3D due to heat and degradation issues. Wondering how an X3D version of Zen 4 will fare, considering it's designed around a 95°C target at 100% CPU utilization at the highest clock speed possible. Guess we're still months away from that though.
For the SoIC process used with the 5800X3D, TSMC used a copper-based adhesive to bond the two layers together; they have since made improvements to the bonding layer to increase the thermal range it can handle. The new issue is the physical placement of the upper layer. Apparently the yield rate at 7nm was already bad, and it's only gotten worse at 5nm. The issue seems to be mostly that the tool BESI builds, which TSMC is using for this, doesn't have the operational accuracy to pull it off. The 8800 Ultra is rated to work at sub-200nm, but in practice it's closer to 0.5 microns, and even then it has some unacceptable variances. TSMC is physically struggling to get the top and bottom layers lined up. Intel has a really good paper on the topic, using the capillary action of water to essentially pull the layers together, but that doesn't do TSMC and AMD any good, since Intel uses a different bonding technology for their stacking.
 
I'll just do what I do.

Wait.

Wait until the platform is more mature and prices aren't idiotic.

Wait until these processors aren't the kings of the hill.

Refusing to be an early adopter of new platforms was really hard when Intel launched a whole new socket every twelve hours, but AMD has made my cheapness easier. They're trying to bend me, but my cheapness is the cheapness of a father of five teenage girls. My cheapness could crush a neutron star.
 
I think people misunderstand this and what it truly means. A performance uplift isn't free; it has to come from somewhere. You can't magically make a new architecture more efficient while also offering faster performance. Every once in a while some breakthrough leads to a major change where you do see both, and then that headroom gets eaten up over time by performance improvements to that architecture. Also keep in mind that this is going to happen in heavily multi-threaded workloads, not gaming or general tasks. You aren't going to see those temperatures very often unless you use the machine for certain specific applications and that's all you do with it.

People that really push their existing systems or overclock are already used to seeing CPU temperatures in that range. It's not that big of a deal.

This is an ultra-competitive space now. AMD and Intel have been running their highest-end CPUs at the edge of what the silicon is capable of; that's why overclocking is virtually pointless on those chips. Any gains in efficiency will give way to making the CPUs more powerful, and any headroom they have will be used toward outperforming the competition. That's all AMD has done here. Get used to it, because this is the new norm unless some serious breakthrough in semiconductor engineering happens soon that is practical to implement for desktop and mobile systems.


AMD stated during the Ryzen 5000 series announcement that their GPUs would be the best on the planet. They were far from delivering on that statement.

They were damn good. Best on the planet is an opinionated statement.
 
"Nowhere near" is a heck of a claim (1.5% (206 vs 203 fps) against the i7).
I believe they were talking about power usage


---------------------------------------------
I think the multi-core performance is essentially the main benefit of the smaller process: you can squeeze in 16 P-cores and make it a somewhat realistic home/office CPU.
Intel's current 12th-gen P-cores are amazing, but they are on a larger process. If Intel could do 16 12th-gen P-cores, they would be looking at performance as good as or better than the 7950X's. So AMD has the process/silicon advantage.

However, Intel's cores seem to be a bit better in gaming, and I expect 13th gen, with its cache and clock increases, to reaffirm that. Even the 12700K has a good showing against both the 5800X3D and the 7700X.
 
Until somebody puts something out forcing them to lower their prices they will take the largest margin that they can.
Larger margins mean more R&D money. Leading edge is expensive and getting more so every year.
 
Except the 5800X3D is no clean sweep. I get it uses more power, but it's nowhere near the 12000 series. So not seeing the problem on this one.
The main issue with the 5800X3D for AMD is pretty easy to see, though.

A lot of gamers willing to buy AMD over Intel already have... most of us are on AM4 already. DDR4 hasn't changed in years, PCIe 3.0 is fine for most people, and as gamers we tend to buy higher-end SSDs.
As such, most of us can just pop a 5800X3D in and we are probably good to skip this first AM5 gen.
 
I was hoping to downgrade my core count to 8 with the 7700 from my 3900X. It seems like I should wait for the V-Cache parts.
 
The main issue with the 5800X3D for AMD is pretty easy to see, though.

A lot of gamers willing to buy AMD over Intel already have... most of us are on AM4 already. DDR4 hasn't changed in years, PCIe 3.0 is fine for most people, and as gamers we tend to buy higher-end SSDs.
As such, most of us can just pop a 5800X3D in and we are probably good to skip this first AM5 gen.
Quite the statement to make considering the 3D version of Zen 4 hasn't even dropped yet. That's like writing off the 12000 series before the 12900K had even dropped.
 
Quite the statement to make considering the 3D version of Zen 4 hasn't even dropped yet. That's like writing off the 12000 series before the 12900K had even dropped.
I'm not suggesting a 3D Zen 4 isn't going to be insane. Just that for gaming... let's all be a little realistic as well: Zen 3 to Zen 4 doesn't make all that much difference, really. If you're gaming on the high end you are probably gaming at 4K, or at the very least 1440p. At those resolutions the difference between any of the top 20 or so CPU SKUs on the market will be basically undetectable. For production workloads, the 5800X3D already proved the cache isn't a big deal very often.

So yes, IMO Zen 4 is a tough sell to anyone who has a decent AM4 mainly-gaming rig. If you really want Zen 3+ gaming performance you can drop in a 3D Zen 3 for a fraction of the price and just roll with what you've got.

Of course, unless AMD's 7000-series cards are very different, going next-gen GPU probably means a new power supply, and the more parts you swap, the more likely it is you just go all in.
 
My current home office can't dissipate this. I mean, a 7950X and a 4090, FFS man. Maybe the tower lives in the basement and I just run some longer cables up through the floor; I have a spare 240V circuit down there I can power this off of.
Pick up a portable unit; a window makes it viable, plus it can be used in other rooms as needed. I find it quite useful.
 
No, there is no reason to upgrade at all. NONE. There is literally no reason to go out and spend a shitload of money to upgrade to this. The 7600X looks good if you had some old rig or were on a budget. But even then, you will likely be able to pick up a 5800X3D in a month's time for nearly the same price, and everything else in that system will be much cheaper.

So, YAWN right back at you. lol

I will be nice... There are a number of benchmarks where my 5900X is faster than the 7900X at 4K and other resolutions, at fewer watts and lower frequency. Furthermore, these 7000-series processors don't seem to really be beating Intel at much in the gaming spectrum, so I don't see how this is even a win for AMD. The 5800X3D is an incredible value compared to these chips for gaming. The platform price is stupid because it's all new and PCIe 5.0. Even the motherboards without PCIe 5.0 are expensive, and then you have to go out and buy DDR5.

Nothing here gets me really excited. Dammit! I need to find a hooker again? DAMN YOU AMD!!! You are making me spend all my money on hookers and blow!
No doubt, I would wait for the sub-$200 B650 boards. It then makes sense to get a 7600X, B650, and DDR5 instead of the 5800X3D and B550 if you are sitting on an old platform, as you would have a much longer upgrade path and better I/O.

For now, the $350+ motherboards are what really screws things up. Also, we need to see more 7600X undervolting results to see if it can be a bit more efficient and less demanding on power and cooling.
 
Y'all are too gaming focused. I read these reviews, like the one on Anandtech, and my jaw is on the floor that AMD managed to take this big of a generational leap.

I think many have a problem with this:
https://www.guru3d.com/articles_pages/amd_ryzen_7_7700x_review,9.html

We are seeing only a 3% IPC boost over Vermeer.
What would the efficiency / clocks be of a 5800x3d on 5nm? Some don't mind the 200+ watts but not everyone wants the extra heat and power consumption.

Also, what can we expect on the mobile front when power is limited to 35w or so?
 
Yeah, that's not great. I mean look at the jump from 3000 to 5000. Nobody is denying the performance of the 7000 series, but it appears as if all they did was crank the dials. It's the sort of stuff people would poke fun at Intel for.
Truth. We have made fun of Intel for doing the same thing in the past. More than once.
 
After my second read-through, I find the eco mode 65 Cinebench run outperforming the 12900K at 170 watts very interesting. That seems manageable and not terrible for power levels; 170 watts outperforming 300+ watts is definitely still a win, with the eco 105 option as well once BIOSes are more mature. See 24:30 of the video in post #45. Ryzen 7000 comes with a 3-speed transmission, apparently. 😂
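As a quick sanity check on the perf/W claim: if one chip matches or beats the other's score at lower power, its perf-per-watt advantage is at least the ratio of the two power draws. Only the wattages come from this post; the actual scores are left out on purpose:

```python
# Lower bound on the perf/W advantage when the lower-power chip scores
# at least as high (wattages from the post above; scores unknown).
def min_perf_per_watt_gain(watts_low, watts_high):
    """If score_low >= score_high, then perf/W ratio >= watts_high / watts_low."""
    return watts_high / watts_low

print(round(min_perf_per_watt_gain(170, 300), 2))  # 1.76
```

So even before BIOS maturity, that's at least a ~1.8x efficiency edge in that particular comparison.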
 
After my second read-through, I find the eco mode 65 Cinebench run outperforming the 12900K at 170 watts very interesting. That seems manageable and not terrible for power levels; 170 watts outperforming 300+ watts is definitely still a win, with the eco 105 option as well once BIOSes are more mature. See 24:30 of the video in post #45. Ryzen 7000 comes with a 3-speed transmission, apparently. 😂
They needed something to meet California and EU power requirements for desktop OEM devices?
 
I think many have a problem with this:
https://www.guru3d.com/articles_pages/amd_ryzen_7_7700x_review,9.html

We are seeing only a 3% IPC boost over Vermeer.
What would the efficiency / clocks be of a 5800x3d on 5nm? Some don't mind the 200+ watts but not everyone wants the extra heat and power consumption.

Also, what can we expect on the mobile front when power is limited to 35w or so?
Almost all of the mobile lineup they have announced consists of Zen 2+ and Zen 3+ designs; it doesn't look like they have anything coming out with Zen 4 in mobile until 2023. It's also rumored that those Zen 4 chips will be sold as SoC packages and not the traditional socketed setups for OEMs.
 
It will be interesting to see if there isn't some difference at higher resolutions left on the table in some titles, with the 4090/RDNA3 benchmarks coming up, like Moore's Law Is Dead pointed out.
 
Might finally toss in the 5800X3D I have had sitting on my desk, as it looks to be the best bet for gaming until the V-Cache Zen 4 CPUs come out, probably sometime in the first half of next year.
 
The exact reason I put one in my X570 build: I won't have to deal with the early adopter tax or immaturity annoyances.
 

Come on guys, let's not poison the well with misrepresented facts. These are GPU-bottlenecked titles, not CPU-limited...



Here's a title that is for sure CPU-limited... The architecture has improvements, but just because you don't see a massive performance bump in 4K gaming doesn't mean "this CPU is a shit release!", especially when we KNOW 4K titles are almost always GPU-restricted. 1440p IMO would be a more realistic test of whether the title is CPU-performant; that's probably why CS:GO at 1080p always pops up in most benchmarks.
 
basically 0% gain from 5950X to 7950X @ 4K imho
Using current graphics cards.

Having more CPU horsepower would greatly help when the 5090 Ti comes out and game engines get more complex.

Remember, the 3700X wasn't much different from the 1800X at 4K when it first came out. It sure as hell is now with modern games and video cards, and the 3700X is considered slow compared to the 5800X in the same benches.
 