AMD at CES 2023: 7950X3D, 7900X3D, 7800X3D - VCache Party In Here

Not the guy you're responding to, but don't diss my 2.1 Cambridge Soundworks setup! It's 25 years old and Will. Not. Die.
I have and use that same speaker system! I think it sounds great, and I haven't wanted to get another system because I like their sound, and the only speakers I personally compared them to (newer Creative Labs ones) didn't sound as good. I have the 4.1 system, but I only use 2 of the speakers; the other 2 have been sitting unused in storage pretty much the whole time.

I noted the capacitors the subwoofer uses, and I think I bought the caps to recap the unit a couple of years ago, maybe longer, but I haven't gotten around to installing them yet.

Edit: I've now recapped my Cambridge Soundworks subwoofer. It's helped the sound quality, gotten rid of a bit of low-frequency hum it had before, and solved an issue it had with making a loud buzzing noise (until I powered off and on the subwoofer again) if I powered on or off another device from the same outlet.

This is the capacitor list:

3x 100 µF 16 V
3x 10 µF 35 V
4x 1 µF 50 V
4x 0.22 µF 50 V
1x 0.1 µF 50 V
6x 470 µF 16 V
1x 4700 µF 25 V


Four of the 470 µF 16 V capacitors are under the metal shield that's shown around 4:00 in the first video here. To access them, the subwoofer volume knob needs to be pulled off, the nut underneath it needs to be removed, and the screw between the RCA connectors on the subwoofer also needs to be removed.



 
That's nice. I personally have no need for that, as most things I need to leave running are on dedicated VMs or containers on my KVM/LXC server box, but I bet that is a nice system.



I haven't been able to make myself do this yet. I have a Threadripper 3960X (24C/48T) (which I bought for the PCIe lanes, not for the cores; I would have been more than happy with 8C/16T), but even so, I still have that old hangup from the 90's that you always always always close everything else before starting a game. You don't want to risk the slightest chance of anything else messing with your game threads and reducing performance.

I even clean out most things running in my system tray before launching a game. Games are great! Multitasking is great! but never the twain shall meet! :p

I guess I just have trust issues :p
ahaha!! That’s why I built it. I needed a server level system. I also needed a workstation. And I needed a secondary gaming box. So build them as one machine! Threadripper makes that easy. So yeah my workloads are in VMs (the local ones are nested hosts or something that needs high interaction), it just runs on a workstation I can do two things at once on.

And since I don’t do competitive gaming - something like Ixion or the like won’t care about background tasks.
 
No thanks. They were a pain to setup then. Something always conflicting with something else.

That was because of having to manually manage IRQs, something we haven't had to do since the introduction of PCIe. PCI Express doesn't have physical interrupt lines; it uses Message Signaled Interrupts (MSI) to deliver interrupts to the operating system when supported.
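For anyone curious, on a Linux box you can actually see which devices are getting their interrupts delivered as MSI; a quick sketch (Linux only, purely illustrative):

```python
# List the interrupt lines delivered as MSI/MSI-X on a Linux system.
# Purely illustrative; /proc/interrupts only exists on Linux.
with open("/proc/interrupts") as f:
    for line in f:
        if "PCI-MSI" in line:  # MSI/MSI-X vectors are labeled PCI-MSI here
            print(line.rstrip())
```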

Nothing wrong with on board NICs. Hell, you can get 10GbE ones on sub-$500 boards now. Pretty much every decent board is 2.5GbE. Almost no average use needs more than 1GbE. I use a DAC now, but I never shuddered at using on board sound.

That's exactly it. Every component on a board is usually fine for typical use. If I were building a system for my mom I'd probably use on board audio and NIC. She'd never know the difference.
But back when you could customize things, you had the choice of what was important to you, and you could prioritize those things with higher quality components, or components with specs that perfectly meet your needs. And the things that were not important to you, you could either use lower quality parts for, or omit altogether.

Audio:

Personally I have never used on board audio except in backup builds from spare parts. I've never been happy with onboard sound. When it first started being integrated in the 90's the quality was really bad. It got better over time, but at that point I already had a dedicated sound card. By the time I transitioned away from the sound card, on board audio was much better, but still not devoid of noise from the electrically noisy environment inside a PC case. The sound card had this same noise issue, as it too was inside the case. That's when I transitioned to external USB DACs that are electrically isolated from the noisy environment in the PC.

At this point the only time on board audio would get used is when I upgrade my main machine and use the old motherboard in a backup machine, and even then, I have old USB DACs I'm no longer using that I can put on the backup machine too...


NIC:

For typical users most on board NICs will be just fine. (I have on occasion encountered really terrible ones, but that was on $30 motherboards, and at that price there are no expectations of quality.) But again, PCs are supposed to be modular and customizable to the user's needs. That's what makes them so great compared to Apple's one-size-fits-all-whether-it-does-or-not approach.

I rely heavily on NAS in my house, and because of this, most non-Intel Ethernet chips just won't do the trick. (Broadcom NetXtreme are pretty good too, but anything Realtek or Atheros, or anything else, I simply do not trust.) I have had nothing but problems when using Realtek and other non-Intel or non-NetXtreme chips in the past, and I simply consider them unusable: randomly dropping connections, or maintaining very disappointing max transfer speeds.

A typical user just views their local network as a way to access the Internet rather than a local networking tool, and because of this it does not matter to them as long as they can hit their WAN speeds. That and sustained connections are often not a problem, because they don't have several open files on their NAS that can go haywire if connections are dropped.

I haven't used the on board NIC on any of my workstations since maybe 2014? Mostly I've popped in Intel server NICs. Right now they all use X520 10 gig SFP+ adapters, which require an x8 Gen 2 slot, and I am considering moving towards 25GbE Intel XXV710 adapters, which require x8 Gen 3 slots.
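The slot requirements basically come down to raw PCIe bandwidth; a quick back-of-the-envelope (generic PCIe spec numbers, nothing vendor-specific, and it ignores protocol overhead beyond line coding):

```python
# Napkin math on why those NICs need the PCIe lanes they do -- rough, not exact.
gen2_lane_gbps = 5 * 8 / 10      # 5 GT/s with 8b/10b encoding  -> 4 Gb/s per lane
gen3_lane_gbps = 8 * 128 / 130   # 8 GT/s with 128b/130b coding -> ~7.9 Gb/s per lane

print(f"x8 Gen 2: ~{8 * gen2_lane_gbps:.0f} Gb/s")   # ~32 Gb/s, enough for a dual-port 10GbE X520
print(f"x8 Gen 3: ~{8 * gen3_lane_gbps:.0f} Gb/s")   # ~63 Gb/s, enough for a dual-port 25GbE XXV710
```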

Again, PCs are supposed to be customizable to the user's needs. A typical network user can buy a typical NIC, and a user with greater needs should be able to buy and use a better one.

I have never used WiFi on a stationary device. Not even once, and I never will, so I consider the inclusion of wifi on motherboards to be a complete waste. Wifi is for phones, tablets and laptops, IMHO.


So again, it comes back to the fact that PCs are supposed to be customizable to the needs of the user. There are certain things I care a great deal about; there are others I don't want or care about at all. In the current state of things you're forced to buy a super duper premium $1200 motherboard if you care about high quality integrated components, and then a whole host of shit I either hate or don't care about at all comes along for the ride (flashy lights, fancy heatsinks, wifi chips, on board sound, shitty Realtek networking chips, LED controllers, more USB ports than I'll ever need in a lifetime, etc.).

It is utterly infuriating.

The PC is over time becoming more and more like a Mac, and it annoys the hell out of me.

"One size fits everyone whether it does or not" is not supposed to be the PC way.
 
Eric Raymond wrote a tool to convert source code repositories from older stuff like RCS to git. He wound up converting GCC, which has a repo that's decades old, with lots and lots of branches and cycles. He wound up with a Xeon with I want to say 128GB because you more or less need to keep the entire representation of the tree in RAM--nothing like having a 13-hour job crash with an out of memory error. Definitely a special case, but sometimes nothing else will do.

Certainly. I'm not saying there are NO use cases for it, but they are somewhat unusual use cases on the desktop, which is why I was curious what he did! :)
 
That may be the ideal, but whether the reality holds up to that ideal is another question entirely.

The thing about bottlenecks is that they shift around based on workload.

Maybe the same app or game is usually cache-limited, but it hits a point that's clock-limited - will the scheduler suddenly try to switch it between CCDs, potentially incurring latency penalties (and thus high/sluggish 0.1% frame times) for that switch each time? Might be better to whip out Process Lasso and limit the program to the consistently better CCD in such cases.
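(For anyone who'd rather script that than click around in Process Lasso, here's a rough sketch with psutil; the assumption that the V-Cache CCD is logical CPUs 0-15 is mine, so check your own core topology first.)

```python
# Rough sketch of the Process Lasso idea in script form: pin a game to one CCD.
# Assumes psutil is installed, and that the cache CCD is logical CPUs 0-15 --
# that mapping is an assumption, verify it on your own system.
import psutil

GAME_EXE = "mygame.exe"          # hypothetical process name
CACHE_CCD = list(range(16))      # assumed logical CPUs of the V-Cache CCD

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == GAME_EXE:
        proc.cpu_affinity(CACHE_CCD)   # restrict the game to that CCD
        print(f"Pinned PID {proc.pid} to CPUs {CACHE_CCD}")
```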

This isn't even getting into race condition bugs that get magnified with such heterogeneous architectures, which may be serious enough to bring the whole program crashing down. I'd hope there are no more such cases with Alder Lake having been out for so long, but you never know how much bad code remains out there.
I have to imagine that AMD will simply be benchmarking games and then making a profile in their chipset drivers, to "lasso" that game's processes onto whichever CCX they deem better for it. Including a possible situation where the faster, regular CCX gets the processes, but is still allowed to call back to the V-cache on the other CCX, for that game.
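If that's the route they take, conceptually it's not much more than a lookup table shipped with the driver; a toy illustration (all the game names and CCD assignments here are made up, not anything from AMD):

```python
# Toy illustration of a per-game CCD profile table -- entirely hypothetical,
# not AMD's actual chipset-driver logic. "ccd0" = V-Cache die, "ccd1" = frequency die.
GAME_PROFILES = {
    "cache_heavy_sim.exe": "ccd0",   # hypothetical cache-bound title
    "esports_shooter.exe": "ccd1",   # hypothetical clock-bound title
}

def preferred_ccd(exe_name: str, default: str = "ccd0") -> str:
    """Look up which CCD a game should be steered to, falling back to a default."""
    return GAME_PROFILES.get(exe_name.lower(), default)

print(preferred_ccd("Cache_Heavy_Sim.exe"))   # -> ccd0
```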


There is no better WiFi than no WiFi at all.

Be it copper or fiber, for me it is wired or bust.

I'll save WiFi for devices that need to be mobile. (phones, tablets, laptops, etc.)
Hey some of us renters aren't able to run cables :(
 
I have to imagine that AMD will simply be benchmarking games and then making a profile in their chipset drivers, to "lasso" that game's processes onto whichever CCX they deem better for it. Including a possible situation where the faster, regular CCX gets the processes, but is still allowed to call back to the V-cache on the other CCX, for that game.



Hey some of us renters aren't able to run cables :(

Up until last year I was a renter as well. Now I have a little bit more flexibility, but I have been running cables in my apartments and rented homes for 20 years.

You just have to be creative. In most cases you can't drill holes, but there are usually existing crevices and holes you can use around radiators and air ducts and plumbing and the like. You can also use cable raceways to hide the cables elsewhere; just don't use the included double sided tape, as that will literally rip the paint off the walls when you move out (ask me how I know). I found that peeling off the included tape and using Command strips instead worked for me at my last few apartments.

It's a little extra work, but it is well worth it.
 
There is no better WiFi than no WiFi at all.

Be it copper or fiber, for me it is wired or bust.

I'll save WiFi for devices that need to be mobile. (phones, tablets, laptops, etc.)
So, I work in this huge international company known as BP. They are firm believers in all WiFi for the "Site of the Future". I am looking at lightly populated offices with only a handful of access points available with Gig connections... I'm just waiting for the day that everyone comes back in from working at home and the entire network grinds to a halt because everyone is accessing their data simultaneously. I have watched the transfer rates bounce all over the place here, even zeroing out, with only a modicum of users in the office. Should be interesting a little later in the year when more people are back.

We can't even reliably image systems over the WiFi. Wired is often the only way to get certs to populate on machines here, and yet it's being eliminated enterprise wide - something that's worked for decades?

Wired is the only way to go. You want security, wired is a shitload more secure.

WiFi should never be more than a convenience or luxury.
 
I have to imagine that AMD will simply be benchmarking games and then making a profile in their chipset drivers, to "lasso" that game's processes onto whichever CCX they deem better for it. Including a possible situation where the faster, regular CCX gets the processes, but is still allowed to call back to the V-cache on the other CCX, for that game.



Hey some of us renters aren't able to run cables :(
Yeah.... Powerline Ethernet still sucks. But it is an option.
 
I have to imagine that AMD will simply be benchmarking games and then making a profile in their chipset drivers, to "lasso" that game's processes onto whichever CCX they deem better for it. Including a possible situation where the faster, regular CCX gets the processes, but is still allowed to call back to the V-cache on the other CCX, for that game.
If they have the time to do that, that would be nice, but I only realistically see it happening with newer, high-profile titles - the ones every major reviewer already covers in their benchmarks.

The niche stuff like ArmA, DCS, IL-2 Great Battles, BeamNG, later-gen console emulators like RPCS3 and such will likely get overlooked, and it's up to the community to do their own testing and figure out which works best.

I should note, the last time I touched an AMD platform in working order, it was an old Phenom II build for a client, so I have no idea how big of an influence chipset drivers have on modern Zen platforms.
 
Hey peeps!!! I love the new Ryzen 9 7950X3D with half the cores having 3D V-Cache, and half of them not!

We should discuss this because, you know, it's the topic!!!

That is really interesting, I didn't see it originally. I wonder if it will have some of the same issues Intel has with having apps keep track of E and P cores.
 
Hey peeps!!! I love the new Ryzen 9 7950X3D with half the cores having 3D V-Cache, and half of them not!

We should discuss this because, you know, it's the topic!!!
With slightly lower power draw these should go well with the new cheaper boards that were mentioned.
 
That is really interesting, I didn't see it originally. I wonder if it will have some of the same issues Intel has with having apps keep track of E and P cores.
Supposedly AMD is working with Microsoft to schedule tasks to the appropriate CCX. Certain games like more frequency, while some games like the added v-cache. Then of course, normal tasks would appreciate the frequency cores.

It's going to be a very cool chip if they get it right.
 
Supposedly AMD is working with Microsoft to schedule tasks to the appropriate CCX. Certain games like more frequency, while some games like the added v-cache. Then of course, normal tasks would appreciate the frequency cores.

It's going to be a very cool chip if they get it right.
Big IF. It's going to rely heavily on AMD's chipset drivers to determine what is and isn't a game, which could be a mess. I'm going to wait for reviews before I jump to any conclusions, but the 7800X3D seems like the safer choice...
 
What software are you using?

Because in general, Intel CPUs are killing it in productivity. A lot of those apps are not nearly as multicore focused as people think, and Intel's single-threaded grunt tends to put them in the lead. (And for the multicore loads it obviously doesn't matter, because all the cores are needed, and the 13900K is as good as the 7950X in those cases.)

If you do have something which is lightly threaded and is incorrectly scheduling to the E-cores, you can press Scroll Lock and disable the E-cores in real time (the feature must be enabled in the BIOS).

Use it for Hyper-V or ESXi and running VMs, depending on what I'm working on; in all reality I run out of RAM before CPU. Already have a 12700K/A770 mITX system for Plex, no reason to go Intel again, time for something new for this year's build.

As long as the scheduler sees the identifier it really doesn't matter how different they are, as long as it's assigning the best core for each job. The problem with something like the 5800X3D was simple... not every job needs more cache. Plenty of tasks don't max out a normal amount of cache... where others would use 3 or 4x the cache if it was there.

As for just upping the power and getting the clocks... if that was possible that is what AMD would do. The issue with Vcache seems to be an issue of thermal design. With a normal chip setup cache is not directly situated in logic hot spots... with vcache it is. Vcache may well always have a lower max thermal limit vs a single layer on the same process.

The advantage vs P and E core scheduling is probably simple in this case... worst case, if a task is given to the less than optimal core vs the high cache or high freq core, it won't result in drastic performance hits. As long as the scheduler gets it right most of the time it's going to be much faster than other chips. (I mean it won't be the same massive performance hit you would see on an Intel chip where it accidentally gives a bunch of performance tasks to E cores.) I believe the 7000X3D dual-chiplet parts are going to help AMD figure out what they need to figure out for the rumored Zen 5 with Zen P and E cores. AMD is also pushing out mashup parts like their new Instinct datacenter APUs. Zen 5 is going to be interesting; at this point it may well be possible AMD has Zen 5 SKUs with potentially 4 different types of x86 cores as well as AI and RDNA bits.

They've obviously done something to address the thermal issues and managed to get the TDP 50 W lower than it is on the non-3D chips. If they can't up the clocks, just keep them low and give us the choice of a processor with lower clocks and all the V-cache, and they'll have no problem charging more for it.

No thanks. They were a pain to setup then. Something always conflicting with something else. Nothing wrong with on board NICs. Hell, you can get 10GbE ones on sub-$500 boards now. Pretty much every decent board is 2.5GbE. Almost no average use needs more than 1GbE. I use a DAC now, but I never shuddered at using on board sound.

Where are you finding sub $500 10GbE AM5 boards? The only decent one I've found is the Asus ProArt X670E-Creator and it's never in stock.
 
Use it for Hyper-V or ESXi and running VMs, depending on what I'm working on; in all reality I run out of RAM before CPU. Already have a 12700K/A770 mITX system for Plex, no reason to go Intel again, time for something new for this year's build.



They've obviously done something to address the thermal issues and managed to get the TDP 50 W lower than it is on the non-3D chips. If they can't up the clocks, just keep them low and give us the choice of a processor with lower clocks and all the V-cache, and they'll have no problem charging more for it.



Where are you finding sub $500 10GbE AM5 boards? The only decent one I've found is the Asus ProArt X670E-Creator and it's never in stock.
All the new creator boards from Intel and AMD have 10GbE. There are some Gigabyte boards also.
 
Where are you finding sub $500 10GbE AM5 boards? The only decent one I've found is the Asus ProArt X670E-Creator and it's never in stock.

If you have room inside the chassis, just get a single-port SFP+ card and an SFP+ to 10GbE transceiver. If you can't fit that in the chassis, StarTech makes a nice Thunderbolt to 10GbE adapter which also works well.
I was looking at that same board, but Reddit is filled with tales of people having nothing but problems with them, and reading through I was unable to find any of the usual "but it works for me" comments, so I dodged that. I feel there is a reason they are not in stock, and it's not because people are buying them in large quantities.
 
I’ll be honest, I was hoping for more uplift over the 5800x3d so I’d have a reason to upgrade. Maybe this will make them always better or equal to the 5800x3d? Currently zen4 outpaces the x3d in a lot of games and trails quite a bit in others. I wonder if this will finally be enough to make them officially better in everything.
 
I’ll be honest, I was hoping for more uplift over the 5800x3d so I’d have a reason to upgrade. Maybe this will make them always better or equal to the 5800x3d? Currently zen4 outpaces the x3d in a lot of games and trails quite a bit in others. I wonder if this will finally be enough to make them officially better in everything.
IDK

AMD had said in interviews that the X3D parts for the 7000 series wouldn't be limited like the 5000 series. And there is certainly a frequency uplift, yet I still see these pesky limitations. The TDP has been reduced 50 watts, which sounds like there are problems with the stacked cache. The real thing to look for here is whether you can overclock them. If the 7000 series cannot be overclocked and is also taking a frequency hit (which it is), I would suspect that you will see a smallish bump in performance in accordance with the frequency uplift (5 GHz max boost for the 7800X3D). The rumors put it about 15-20% ahead of the 5800X3D, and that just sounds like a nothing burger to me. We will have to wait for the actual benchmarks.
 
IDK

AMD had said in interviews that the X3D parts for 7000 Series wouldn't be limited like the 5000 series. And there is certainly a frequency uplift, yet I still see these pesky limitations. The TDP has been reduced 50 Watts, which sounds like there are problems with the stacked cache. The real thing to look for here is if you can overclock them. If the 7000 series cannot be overclocked and is also taking a frequency hit (which it is) I would suspect that you will see a smallish bump in performance in accordance with the frequency uplift (5Ghz Max Boost for the 7800X3D). The rumors put it about 15-20% ahead of the 5800X3D and that just sounds like a nothing burger to me. We will have to wait for the actual benchmarks
The copper alloy that TSMC uses for the adhesion layer between the stacks has too high a resistance value; it is significantly improved over the mixture they used for the 5800X3D, but still higher than they want/need. The layer has a very low melting point and needs to melt at a low enough temperature that applying it doesn't damage the chip, but since the chip is capable of running itself into a state where it can damage itself, they need to severely limit the voltage that passes through it, otherwise they risk melting it and shifting the layer.
 
The copper alloy that TSMC uses for the adhesion layer between the stacks has too high a resistance value; it is significantly improved over the mixture they used for the 5800X3D, but still higher than they want/need. The layer has a very low melting point and needs to melt at a low enough temperature that applying it doesn't damage the chip, but since the chip is capable of running itself into a state where it can damage itself, they need to severely limit the voltage that passes through it, otherwise they risk melting it and shifting the layer.
I recall reading something about that. So, since they've alluded to the other CCD (on the 7900X3D and 7950X3D) being able to access the cache of the stacked unit, I wonder if this is a dry run to see what the penalty would be if they just tossed the cache on the chip as a separate module and used the fabric to access the cache memory. That would avoid all the problems of the stacking process, and the cache would be accessible to all CCDs? Likely way slower, but I wonder if that's a possibility.
 
I recall reading something about that. So, since they've alluded to the other CCD (on the 7900X3D and 7950X3D) being able to access the cache of the stacked unit, I wonder if this is a dry run to see what the penalty would be if they just tossed the cache on the chip as a separate module and used the fabric to access the cache memory. That would avoid all the problems of the stacking process, and the cache would be accessible to all CCDs? Likely way slower, but I wonder if that's a possibility.
Like they do for the Infinity Cache on RDNA3. That is something I never thought of, that would be sort of dope.
 
I like that the 7900X3D is 120w. I am eye balling that for my upcoming build.
 
I’d expect it to run a lot higher than that full load, but nowhere near Intel levels of easy bake oven.

I'd expect the 120w part to run hotter than an easy bake oven... what with those being traditionally heated via a 100w bulb. The 3D line is still probably a lot more voltage sensitive than the regular chips, but the extra 50w of TDP is something that was not giving AMD too much benefit while being called out by reviewers and the enthusiast community. Otherwise, I do agree as AMD might as well make the best of the situation and run at 120 (more like 150-170 actual power draw) to really embarrass the 350-400w that a 13900KS running flat out will take.
 
I'd expect the 120w part to run hotter than an easy bake oven... what with those being traditionally heated via a 100w bulb. The 3D line is still probably a lot more voltage sensitive than the regular chips, but the extra 50w of TDP is something that was not giving AMD too much benefit while being called out by reviewers and the enthusiast community. Otherwise, I do agree as AMD might as well make the best of the situation and run at 120 (more like 150-170 actual power draw) to really embarrass the 350-400w that a 13900KS running flat out will take.
Techpowerup calculated the 7950x at 235w in Blender and 13900k at 283w, with Intel stock power limits.

It's not until you unlock the power limits that they got 373 watts. And when they did that, it won by as much as it was losing at the stock power limit. So, you have some flexibility there, I suppose.

90 watts is still a large difference. But, Raptor Lake is known to scale well, with lower power limits. I don't think AMD will end up with much more of an efficiency advantage than they already have, at least for all cores loaded productivity.
Intel is actually more efficient in stuff like photo and video editing. The X3D probably won't change that, as the single thread/light threading for those tasks, will still use the same power from the non-cache core. The cache should give AMD a pretty big gaming efficiency advantage.
 
Techpowerup calculated the 7950x at 235w in Blender and 13900k at 283w, with Intel stock power limits.

Its not until you unlock the power limits, that they got 373 watts. And when they did that, it won by as much as it was losing, at stock power limit. So, you have some flexibility there, I suppose.

90 watts is still a large difference. But, Raptor Lake is known to scale well, with lower power limits. I don't think AMD will end up with much more of an efficiency advantage than they already have, at least for all cores loaded productivity.
Intel is actually more efficient in stuff like photo and video editing. The X3D probably won't change that, as the single thread/light threading for those tasks, will still use the same power from the non-cache core. The cache should give AMD a pretty big gaming efficiency advantage.
If AMD's slides are accurate and the 7800X3D is ~25% faster than the 5800X3D, and the 13900K is ~5% faster than the 5800X3D, then in gaming at low resolutions the 7800X3D would be ~20% faster than the 13900K.
But since it is going to be GPU bound for the majority of people actually using it, in reality it won't run the games faster, but it should run them cooler at a significantly lower power draw, which frees up juice for that 4090 it should be paired with.
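Quick sanity check on that math (both percentages are straight from AMD's slides, not independent benchmarks, so grain of salt):

```python
# Back-of-the-envelope check of the implied gaming gap from the slide figures.
faster_7800x3d = 1.25   # ~25% over a 5800X3D (slide claim)
faster_13900k = 1.05    # ~5% over a 5800X3D
print(f"Implied gap: ~{faster_7800x3d / faster_13900k - 1:.0%}")   # ~19%
```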
 
Supposedly AMD is working with Microsoft to schedule tasks to the appropriate CCX. Certain games like more frequency, while some games like the added v-cache. Then of course, normal tasks would appreciate the frequency cores.

It's going to be a very cool chip if they get it right.
(attached image: 1672987817968.png)

Found this, hopefully both MS and AMD can find the solution and if it could work, wow.
 
I was hoping for both dies to have extra cache as it would suit my needs best.

Looks like I'll end up with the 7800X3D to not worry about scheduler weirdness on Linux.

Though with the slow memory speeds when going to 128 GB of RAM, I may limp my 6800k another generation. Zen 1 also had a poor memory controller.
 
The copper alloy that TSMC uses for the adhesion layer between the stacks has too high a resistance value; it is significantly improved over the mixture they used for the 5800X3D, but still higher than they want/need. The layer has a very low melting point and needs to melt at a low enough temperature that applying it doesn't damage the chip, but since the chip is capable of running itself into a state where it can damage itself, they need to severely limit the voltage that passes through it, otherwise they risk melting it and shifting the layer.
This is what bothered me. Ever since I read about this limitation with the 5800X3D (probably more than once, from you discussing it in previous threads), the idea was that they wouldn't launch the 7000 series 3D until they had a fix that didn't require the compromises of the previous generation. I'd hoped that they would have been able to find a compound stable enough to make it equivalent to the standard 7000 series in terms of wattage, allow the same OC and frequency, and I was hoping for V-cache on both CCX units. I guess the questions here are A) how much of a compromise is this, really, and B) is this something that could have been solved in another few months of development, and/or an admission that this is the best we get for this generation, try again in the fall? Will there be a chance that in a few months there will be some sort of 7990X3D that is a 170 W, 4.5 > 5.7 GHz standard boost, overclockable chip with 3D V-Cache on both CCXs?

Of course it's likely that the 7950X3D is going to be an absolute monster even as it is, but I have to wonder what problems this will create and if it isn't something that could have been resolved with a bit more time. What will the overclock potential be, and how hard will things be locked down? What will the frequencies of the 3D CCX vs the standard one be? Will users have the ability to pick which one they want applications to run on, and by default how good will it be at estimating what to do? Will users be able to easily override any 'default' decisions without having to use proprietary manual thread-pinning applications? Thinking of overclocking, Asus anyway was noteworthy for bringing the "dynamic OC" feature implemented on the 5000s-era Dark Hero into all their X670E boards (or at least all their ROG or named boards), which allows users to both get the best of PBO2-style auto-to-threshold single/few core performance and, when needed, swap over to manually set many/all core turbo OCs. I wonder how this will be affected by a chip with both CCXs populated but heterogeneous V-cache and potentially different max frequencies? Given the quote above that Riev90 cites, that the "bare chiplet can access stacked cache in the adjacent chiplet but this isn't optional and will be rare", more info on that situation will be interesting.

Ultimately we'll get more info in the lead up to and after launch, but it will be nice to see how it behaves in edge cases and whether this design decision was a best-of-all-worlds or a necessitated compromise.
 
The knock on waiting for reviews is that by the time they hit - it's often too late to actually buy something. You're forced to buy something based on paper specs and hoping it doesn't suck. Otherwise the item sells out and you can't get one for months. Either that or the reviews tell you something is a rip off after you've already committed and you're left holding the bag.
I hope that whatever review embargo is in place is actually over prior to these things actually being on sale. Well, that and a plentiful supply if they're worthy.
 
I'd expect the 120w part to run hotter than an easy bake oven... what with those being traditionally heated via a 100w bulb. The 3D line is still probably a lot more voltage sensitive than the regular chips, but the extra 50w of TDP is something that was not giving AMD too much benefit while being called out by reviewers and the enthusiast community. Otherwise, I do agree as AMD might as well make the best of the situation and run at 120 (more like 150-170 actual power draw) to really embarrass the 350-400w that a 13900KS running flat out will take.
I once thought like this about power efficiency on AMD CPUs. It had kind of been a selling point, especially vs 11th Gen Intel. The thing is, I gave the power hungry Intel 13900K a whirl and was totally sold. Once I saw what the thing could do, I stopped giving a shit about its power draw in any meaningful way. Unless we are in a power constrained area, we're enthusiasts and we want our damn performance! The Intel processor performs and somehow manages to be pretty efficient IIRC clock for clock vs AMD.

I also don't see those stupid power draws because I didn't open my (non KS) up all the way yet.
 
I really feel that with the 7000 series AMD pulled up a bit too short. They didn't really fight hard enough to get the highest performance and they underestimated Intel's ability to squeeze performance out of an inferior node. How you do that, I have no idea. Intel has proven they were capable of extracting every damn iota of performance they could out of 14nm.

I kinda figured that the 7000X3D chips would be the Gaming Shiznit. But if they're only good at extracting maximum 1080P FPS they're yesterday's news. They must compete with the top of Intel's Product Stack at resolutions beyond that. Which is something that the 13900K has done, pulling some incredible FPS gains out of 4K compared to every other processor out there.

So, when the reviews drop I will be impressed if these chips just murder Intel at all resolutions.

We need real competition and it needs to be a gawdamn war every generation, delivering amazing performance and forcing the competition to be competitive, forever.
 
I really feel that with the 7000 series AMD pulled up a bit too short. They didn't really fight hard enough to get the highest performance and they underestimated Intel's ability to squeeze performance out of an inferior node. How you do that, I have no idea. Intel has proven they were capable of extracting every damn iota of performance they could out of 14nm.

I kinda figured that the 7000X3D chips would be the Gaming Shiznit. But if they're only good at extracting maximum 1080P FPS they're yesterday's news. They must compete with the top of Intel's Product Stack at resolutions beyond that. Which is something that the 13900K has done, pulling some incredible FPS gains out of 4K compared to every other processor out there.

So, when the reviews drop I will be impressed if these chips just murder Intel at all resolutions.

We need real competition and it needs to be a gawdamn war every generation, delivering amazing performance and forcing the competition to be competitive, forever.
The 13th gen didn't exactly blow away Zen 4. It's a different approach - lower wattage, high performance versus high wattage, high performance. X3D is the cherry on top to push far past what Intel can do.
 