AMD Ryzen 7 7700X Pictured, Installed On AM5 Motherboard (specs/prices)

There have also been rumors of DDR4 AM5 boards.
That basically is my question, so the short answer is maybe, but nothing beyond rumors.
Not so much that AM5 itself is DDR4, but rather that, say, the "X640" boards might be DDR4 while the rest are DDR5 boards.
 
and most importantly.. no need to worry about bent pins on your shiny new CPU... just have to worry about all those bent/smashed pins in the motherboard socket

🤣
I only touched those pins once... the very first time I saw that configuration "what are all those shiny bumps... *touch* fuuuuuuuuuuu..."
 
The 12900K is $540 right now. The Ryzen 7950X is rumored to be over $1k, and the 5950X is currently $546. I'm not going by cores, but by price. I don't care if the 12900K is a little faster or a little slower compared to AMD. The 7700X is $60 cheaper than the 12900K but does have half the cores. The 7950X's rumored price is more than double.

In some games, the FX 8350 that I own will be faster in some areas because of this. The FX CPUs weren't bad; the motherboards were. It wasn't uncommon for them to warp or to melt some plastic bits because of the power draw.

That was a thing. AMD depended on Microsoft to fix their OS to get the most out of AMD's CPUs, but Microsoft really took their time at it. Nowadays, with Linux getting so much support from AMD, Microsoft is in a position where Windows is often compared to Linux and shows major performance issues.



Yes and no, because most games today don't use more than 6 cores. Today an FX 8370 has aged better than, say, an i5-3570K, which was released around the same time. An i7-3770K, though, would hold up much better, and might even beat an FX 8370 in multi-threaded games. The problem is that something like Handbrake will max out every core and thread, while games won't. Games have code that must run in order, and therefore can't simply max out cores. Developers will split parts out, like sound and AI, to take better advantage of the hardware, but a game still won't max out all the cores it uses. This is why IPC is still king for gaming, and will likely always be king. The only time core count matters is when you start to not have enough, which last I checked was 6.
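To put rough numbers on the "code that runs in order" point, here is a quick Amdahl's-law sketch; the serial fractions are made-up assumptions for illustration, not measurements of any real game or encoder.

```python
# Amdahl's law: speedup is capped by the fraction of work that must run in order.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Best-case speedup when `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical numbers: a game loop that is 40% inherently ordered
# (game logic, draw-call submission) vs. an encoder that is only 5% serial.
for cores in (2, 4, 6, 8, 16):
    game = amdahl_speedup(0.40, cores)
    encoder = amdahl_speedup(0.05, cores)
    print(f"{cores:2d} cores: game {game:4.2f}x, encoder {encoder:5.2f}x")

# The game plateaus near 1 / 0.40 = 2.5x no matter how many cores you add,
# while the encoder keeps scaling -- which is why extra cores stop helping
# games long before they stop helping Handbrake.
```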


IPC will not always be king; in my opinion, things must eventually change. However, for now and for the near future, it will remain as such. x86 will not always be around, programming will eventually change, and we will be better for it.
 
This got me wondering, and from Noctua's website:

“In short, all Noctua coolers and mounting kits that support AM4 are upwards compatible with socket AM5, except the NH-L9a-AM4 and the NM-AM4-L9aL9i.

All Noctua AM4 mountings except the ones of the NH-L9a-AM4 and the NM-AM4-L9aL9i attach to the threads of the standard AM4 stock backplate. Since these backplate threads and their pattern are identical on AM4 and AM5, our AM4 mountings that attach to the standard AMD backplate also support AM5.

This means that all SE-AM4 models as well as all Noctua multi-socket coolers purchased since 01/2019 already support socket AM5. Multi-socket coolers purchased before this date that have already been upgraded to AM4 using the NM-AM4 or NM-AM4-UxS kits also require no further upgrades. Older multi-socket coolers that have been purchased before 2019 and have not yet been upgraded to AM4 can be made compatible with AM5 using the NM-AM4 or NM-AM4-UxS upgrade kits.”

This is fantastic, really.
Coolers that require their own backplates, i.e. waterblocks, are possibly going to need an adapter kit to match the fixed backplate on AM5... at least according to EK. If that's true, I hope TechN makes such a kit available.
 
and most importantly.. no need to worry about bent pins on your shiny new CPU... just have to worry about all those bent/smashed pins in the motherboard socket

🤣

Not much tech-wise makes me nervous these days, but can I tell you, inserting my $1,400 Threadripper into my $950 motherboard, with its 4,094 hair-thin pins, was something else.

Let me tell you. I took it slow.
 
Sure, everything is going up in price, but got dang! I know, it was Micro Center, but I bought a 12700K and a Z690 board for less than $600 out the door. Looking forward to the reviews regardless.
You're going to buy a Ryzen 7000 rig... and you know it.
 
The 12900K is $540 right now. The Ryzen 7950X is rumored to be over $1k, and the 5950X is currently $546. I'm not going by cores, but by price. I don't care if the 12900K is a little faster or a little slower compared to AMD. The 7700X is $60 cheaper than the 12900K but does have half the cores. The 7950X's rumored price is more than double.

The 7950X is $699.00.
 
That basically is my question, so the short answer is maybe, but nothing beyond rumors.
Not so much that AM5 itself is DDR4, but rather that, say, the "X640" boards might be DDR4 while the rest are DDR5 boards.
If you love DDR4 prices and find that your random access memories are far sexier in high heels, ASUS made this for you. Not sure they ever released it, however. I can't remember if you had to have one of their new mobos, too.

[Images: ASUS ROG DDR5-to-DDR4 adapter board on a Z690 motherboard]
 
If you love DDR4 prices and find that your random access memories are far sexier in high heels, ASUS made this for you. Not sure they ever released it, however. I can't remember if you had to have one of their new mobos, too.

[Images: ASUS ROG DDR5-to-DDR4 adapter board]
But I would think you would need DDR4-to-DDR5; this seems to be the opposite. That said, I see ROG plastered on it, so it'll cost a bundle.
 
Yeah, the wording is confusing, but the point was that if you had DDR4 RAM and a DDR5 mobo, this would make them work together.
 
I've never been a fan of the big-core/little-core design, but I know why Intel is doing it. I think Intel is trying to mimic a lot of the features we see in Apple's silicon, which does have the big/little core design. Intel's Arc now has really good AV1 encoding, which is something we see from Apple but that neither AMD nor Nvidia cares much about. Intel even went so far as to buy a bunch of TSMC's 3nm capacity, to the point where they overpaid for it compared to Apple. Clearly Intel is very interested in competing directly with Apple on power efficiency, and the E-cores do seem to work, even though I still think they're a waste of silicon. AMD's Rembrandt is already proving that the whole big/little design is unnecessary, and I'm sure once Zen 4 is released and eventually makes its way to laptop parts, we'll see the same.
Those E-cores give Intel a boost in meeting the EU and California low-power requirements for OEM office equipment. There are lots of other benefits as well, IF the OS is able to take advantage of the architecture. The ability to relegate background processes that must run regardless of what you are doing, and that use very few actual resources, greatly helps system feel. It's not one of those things that comes across in benchmarks or the like, but it goes a long way toward making a system feel faster regardless of how it actually performs. That feeling of performance is what matters for 90% of general office computers, which would struggle to max out a basic i3 if it weren't for AV scans, background updates, and backups.
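As a concrete illustration of "relegating background processes", here is a minimal Linux-only sketch that pins a background job to assumed E-core CPU ids. In practice Thread Director and the OS scheduler handle this automatically; the core numbering below is a hypothetical 12600K-style layout (check `lscpu --extended` on your own machine).

```python
import os
import subprocess

# Assumed layout (hypothetical): logical CPUs 0-11 are P-cores with
# hyper-threading, 12-15 are E-cores. Adjust for your actual CPU.
E_CORES = {12, 13, 14, 15}

# Start a background job (a backup here) and confine it to the E-cores,
# leaving the P-cores free for whatever the user is actually doing.
proc = subprocess.Popen(
    ["tar", "czf", "/tmp/backup.tgz", os.path.expanduser("~/Documents")]
)
os.sched_setaffinity(proc.pid, E_CORES)  # Linux-only system call wrapper
print(f"pid {proc.pid} restricted to CPUs {sorted(os.sched_getaffinity(proc.pid))}")
proc.wait()
```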
 
IPC will not always be king; in my opinion, things must eventually change. However, for now and for the near future, it will remain as such.
The only time IPC isn't king is when you have too few cores. One of the things I realized many years after buying the FX 8350 is that the potential that CPU had wasn't worth it. Sure, it had many cores and today's games do make very good use of them, but at the time Intel's CPUs were better for gaming.
x86 will not always be around, programming will eventually change, and we will be better for it.
Gonna give a little history lesson about CPU architectures and why x86 will probably always be king. The 80s were all about the Motorola 6800 and its cut-down derivative, the 6502. I would include the Z80, but that's essentially an enhanced Intel 8080 clone. Those were displaced by MIPS, as the 90s were, for the most part, the MIPS decade. The 2000s were the PowerPC decade, and the 2010s were the ARM decade. But every decade was also the x86 decade, and there's a very good reason for this. It's been said many times that x86 is terribly old and RISC is the future, yet nearly every rival CPU architecture has faded out of existence. The reason is the dynamic between AMD and Intel, and how IBM made sure those two fought each other for market share in the IBM-compatible market. The problem with other CPU architectures is that the company responsible for the design usually ends up doing little to improve it, because the companies using it won't just go out and rewrite their software to move to a competitor. This is why PowerPC died: IBM knew Apple would have to make a bold move to leave. IBM had no competitor on PowerPC, so they took their time. Intel made a better product despite using the then-ancient x86 architecture, and Apple had no choice but to move to the better CPUs.

You can already see similar problems with ARM and Intel. For nearly a decade or more, Intel was beating AMD, even before Bulldozer was out. For too long we were stuck with dual-core or quad-core x86 CPUs from Intel, until Ryzen was released and pushed Intel to go beyond quad core. Even after Ryzen was released, we saw that Intel hadn't kept advancing their manufacturing technology, which had given them a huge edge in the past. One of the reasons Apple left Intel was that Intel fell so far behind that Apple could actually make a better CPU. The thing is, ARM is running into the same problems, and has been for many years; so much so that nobody makes a competitive ARM SoC without redesigning much of the CPU to gain a performance advantage. We forget that x86 didn't always mean just AMD and Intel; VIA, IBM, and even Texas Instruments made x86 CPUs. The problem was that none of these companies put in the R&D to compete with AMD and Intel. ARM, the company, got into poor enough shape that SoftBank put them up for sale and Nvidia almost bought them. Safe to say we won't be seeing ARM make any major updates to their CPU designs for a while.

The point I'm making is that x86 will probably never go away, so long as AMD and Intel fight each other for market share. Also, because nobody likes losing compatibility with older software, there will always be demand for their products. We would never be better off losing x86, because x86 is the most open platform. ARM is extremely fragmented in terms of boot loaders, which means we're not free to install whatever OS or software we want like we can on x86, and that openness is thanks to IBM's efforts decades ago. Thanks to AMD, who keeps pushing for better CPUs, we have Intel actually making a GPU now, and actually looking to other companies to manufacture products for them, not just themselves. Apple has no real competition, and any benchmark they lose will be dismissed as some act of x86 spooky action. Nobody is allowed to make computers that run macOS, or CPUs to be installed in Apple computers. Qualcomm will certainly improve their designs, but who's their direct competitor? It certainly isn't Apple, as Apple's market is walled off. Nvidia, who can't find anyone to sell their chips to but Nintendo? Samsung, who makes ARM-based SoCs but ships more Qualcomm chips than their own?

Trust me when I say that x86 will improve its performance far beyond whatever ARM has, and will become more efficient too. Not because x86 is better, but because AMD and Intel will continue to fight each other for the 85% market share they have access to. They have the incentive to pour a lot of R&D into their products, not just in the desktop/laptop market but in the server market as well. Why do you think AMD is including AVX-512 in the 7000-series chips? In a couple of years, Apple's move to ARM will look like a huge mistake, because by then AMD and Intel will have fantastically faster chips with equal or better battery life. I would argue AMD's Rembrandt is nearly there but still falls short on power consumption. The reason I can say these things is that it's happened before, as every decade has been the x86 decade.
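For what it's worth, you can check whether a given chip actually exposes AVX-512 without trusting the spec sheet. A minimal Linux-only sketch (Windows would need a different mechanism):

```python
# Look for the AVX-512 Foundation flag in /proc/cpuinfo (Linux only).
def has_cpu_flag(flag: str) -> bool:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return flag in line.split()
    return False

print("AVX-512F supported:", has_cpu_flag("avx512f"))
```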
 
The only time IPC isn't king is when you have too few cores. One of the things I realized many years after buying the FX 8350 is that the potential that CPU had wasn't worth it. Sure, it had many cores and today's games do make very good use of them, but at the time Intel's CPUs were better for gaming.

Gonna give a little history lesson about CPU architectures and why x86 will probably always be king.

Past history does not predict future needs, and therefore x86 will not always be around. Treating single-core IPC as all-important is an impediment to advancement in the long run. Eventually, things will change; it is not a matter of if but when.
 
I know that is the INTENT of the E-cores, but at least the 12xxx-series chips seem to run rather hot compared to equivalent AMD chips.

Something is still amiss, but I haven't kept up with what node they are on now. Maybe that is just a node disadvantage.
Raptor Lake (13th gen) is another Optimization step on the 10 nm-class node (now branded Intel 7) in Intel's Process-Architecture-Optimization model. Meteor Lake next year will be the first CPU on Intel's 7 nm-class process (Intel 4).
Yeah, the wording is confusing, but the point was that if you had DDR4 RAM and a DDR5 mobo, this would make them work together.
Wouldn't that massively increase the latency, though, since you're increasing the distance to the CPU? I really don't see a benefit to this at all.
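For scale, the extra trace length itself is a tiny part of DRAM latency; the real penalty of such an adapter would more likely come from signal-integrity limits forcing lower clocks. A back-of-envelope sketch, where every number is a rough assumption:

```python
# Signal propagation on FR4 PCB is roughly 15 cm/ns (about half of c).
extra_trace_cm = 5.0          # assumed extra path through the riser
propagation_cm_per_ns = 15.0  # typical stripline figure, approximate
cas_latency_ns = 13.75        # DDR4-3200 CL22 as a reference point

added_ns = extra_trace_cm / propagation_cm_per_ns
print(f"added flight time: {added_ns:.2f} ns, "
      f"{100 * added_ns / cas_latency_ns:.1f}% of a {cas_latency_ns:.2f} ns CAS latency")
# ~0.33 ns, i.e. a couple percent of CAS latency.
```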
 
Those E-cores give Intel a boost in meeting the EU and California low-power requirements for OEM office equipment. There are lots of other benefits as well, IF the OS is able to take advantage of the architecture. The ability to relegate background processes that must run regardless of what you are doing, and that use very few actual resources, greatly helps system feel. It's not one of those things that comes across in benchmarks or the like, but it goes a long way toward making a system feel faster regardless of how it actually performs. That feeling of performance is what matters for 90% of general office computers, which would struggle to max out a basic i3 if it weren't for AV scans, background updates, and backups.
Thus it makes total sense that the lower-power processors, which are what the vast majority of OEM desktop systems are probably going to be, almost exclusively don't have E-cores in Alder Lake, right? (although, admittedly, the mobile chips seem to be packed with E cores.)
 
Wouldn't that massively increase the latency, though, since you're increasing the distance to the CPU? I really don't see a benefit to this at all.
Might've made more sense in a world with even moar semiconductor shortages. Latency vs. no PC at all? I will take 1 latency, please. Make that 4, one for each RAM slot. Then all you have to do is lose the platform shoes when DDR5 comes home.
 
Thus it makes total sense that the lower-power processors, which are what the vast majority of OEM desktop systems are probably going to be, almost exclusively don't have E-cores in Alder Lake, right? (although, admittedly, the mobile chips seem to be packed with E cores.)
When I take a look at the Dell offerings, all the 12th-gen systems they offer me have the E-cores. But I'd have to cross-reference with Intel's ARK to know if they are mobile or desktop class.
 
Past history does not predict future needs, and therefore x86 will not always be around.
You always use history to make future predictions. The other method is a crystal ball.
Treating single-core IPC as all-important is an impediment to advancement in the long run. Eventually, things will change; it is not a matter of if but when.
Unless ARM slaps their dick on the table, tells everyone they must include a UEFI boot loader, and forces Apple to open up, I can't see things changing. BTW, the Zen 4 architecture isn't going to be amazing in performance, but it will be in power efficiency. Enough to scare Apple; Rembrandt was enough to scare Apple. The real performance upgrade comes when AMD releases the V-Cache version, which I hear is going to be much faster than regular Zen 4.
 
You always use history to make future predictions. The other method is a crystal ball.

Unless ARM slaps their dick on the table, tells everyone they must include a UEFI boot loader, and forces Apple to open up, I can't see things changing. BTW, the Zen 4 architecture isn't going to be amazing in performance, but it will be in power efficiency. Enough to scare Apple; Rembrandt was enough to scare Apple. The real performance upgrade comes when AMD releases the V-Cache version, which I hear is going to be much faster than regular Zen 4.

I find that you are thinking strictly in the short term, while I am thinking in the long term, as in beyond my lifetime. Although it was science fiction, try to imagine running any of the Starship Enterprise ships on what we have today; it would never happen or work. Not even an advanced version of what we have would be sufficient.
 
I find that you are thinking strictly in the short term, while I am thinking in the long term, as in beyond my lifetime. Although it was science fiction, try to imagine running any of the Starship Enterprise ships on what we have today; it would never happen or work. Not even an advanced version of what we have would be sufficient.
Long term, it will probably continue to be x86 until something dramatic takes its place out of necessity, quantum computing for example. For now, x86 will dominate the desktop while ARM dominates the mobile market, due to legacy. This isn't game consoles, where you can just force people to switch CPU architectures because they don't have a choice. You also can't get developers to rewrite their code for a new architecture just because. This is why x86 still exists: nobody likes rewriting millions of lines of code.
 
That basically is my question, so the short answer is maybe, but nothing beyond rumors.
Not so much that AM5 itself is DDR4, but rather that, say, the "X640" boards might be DDR4 while the rest are DDR5 boards.
Well, nowadays the memory controller is integrated into the CPU, so unless they put both DDR4 and DDR5 memory controllers on there, it will be DDR5 only.

Also guys, don't forget USB4 :) That doesn't really change anything hardware-configuration-wise, just mentioning it as part of the technology upgrade with the new chipset.
 
Well, nowadays the memory controller is integrated into the CPU, so unless they put both DDR4 and DDR5 memory controllers on there, it will be DDR5 only.

You can use adapters to go from DDR5 to DDR4. The question is whether the cost of the adapter offsets the savings on the memory. If someone can do it competitively, I'm sure we'll see it.
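A toy break-even check of that trade-off; every price here is a made-up placeholder, not a real quote:

```python
DDR5_KIT = 180.0  # hypothetical 32 GB DDR5 kit price
DDR4_KIT = 90.0   # hypothetical 32 GB DDR4 kit price
ADAPTERS = 2      # one adapter per DIMM in a dual-channel build

for adapter_price in (10, 25, 45):
    ddr4_route = DDR4_KIT + ADAPTERS * adapter_price
    verdict = "saves" if ddr4_route < DDR5_KIT else "loses"
    print(f"${adapter_price} adapter: DDR4 route ${ddr4_route:.0f} vs DDR5 ${DDR5_KIT:.0f} "
          f"-> {verdict} ${abs(DDR5_KIT - ddr4_route):.0f}")
```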

And why do I think that if it happens, it'll be ASRock?
 
You can use adapters to go from DDR5 to DDR4. The question is whether the cost of the adapter offsets the savings on the memory. If someone can do it competitively, I'm sure we'll see it.

And why do I think that if it happens, it'll be ASRock?
Because Intel has both memory controllers on the chip.

Besides, are you really gonna be rockin' one of these on your rig? Give me a break.

[Image: DDR memory slot adapter]
 
It's obvious why someone would want to use DDR4 with their next CPU purchase, but DDR5-to-DDR4 seems like something different, and I'm not sure it's relevant here?
 
Because Intel has both memory controllers on the chip.

But AFAIK AMD doesn't, so you'd have to have an, er, "adventurous" manufacturer build a board that natively supports DDR4.

ASRock has a history of making crazy parts that cater to a fraction of a fraction of the market; it's why there are so many ASRock fans.
 
I really don't think these adapters are going to happen. If DDR5 were MIA or still priced like it was at launch, people would buy these.
 
But AFAIK AMD doesn't, so you'd have to have an, er, "adventurous" manufacturer build a board that natively supports DDR4.

ASRock has a history of making crazy parts that cater to a fraction of a fraction of the market; it's why there are so many ASRock fans.
The Zen 4 memory controller is integrated into the chip's I/O die, which is in the CPU package itself but not inside the core chiplets themselves.

Now, the I/O chip design is a very heavily modified version of their existing I/O die, with the obvious die shrink and GPU cores added. That said, it does appear to retain the old DDR4 support, so it is a very real possibility that some budget boards come out with DDR4 support, but there is also the possibility that they release some low-power Zen 4 chips for the AM4 socket at some point as well.
 
Well, nowadays the memory controller is integrated into the CPU, so unless they put both DDR4 and DDR5 memory controllers on there, it will be DDR5 only.

Also guys, don't forget USB4 :) That doesn't really change anything hardware-configuration-wise, just mentioning it as part of the technology upgrade with the new chipset.
It could be possible; the Phenom II supported both DDR2 and DDR3 on its memory controller, and that was back in 2009.
 
I did find out on AMD's website that the 7600X is on average⁷ 21% faster than the 5600X at 1080p High; the small 7 after "average" means seven games tested, I think:
(up to) 34% in Middle-earth: Shadow of War / 36% in F1 2021 / 6% in GTA V / 5% in CS:GO / 40% in Rainbow Six Siege

It did not say which video card or driver was used; it's just some of AMD's in-house testing results.
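As a sanity check on that "average" figure, here is the math over the five listed games; two of the seven tested titles aren't named, so this can only bracket AMD's number:

```python
from math import prod

gains_pct = {
    "Middle-earth: Shadow of War": 34,
    "F1 2021": 36,
    "GTA V": 6,
    "CS:GO": 5,
    "Rainbow Six Siege": 40,
}
speedups = [1 + g / 100 for g in gains_pct.values()]
geo_mean = prod(speedups) ** (1 / len(speedups))
print(f"geometric mean of the 5 listed games: +{100 * (geo_mean - 1):.1f}%")
# ~+23%, so a 21% average over seven games implies the two unlisted
# titles saw smaller gains.
```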
 
I did find out on AMD's website that the 7600X is on average⁷ 21% faster than the 5600X at 1080p High; the small 7 after "average" means seven games tested, I think:
(up to) 34% in Middle-earth: Shadow of War / 36% in F1 2021 / 6% in GTA V / 5% in CS:GO / 40% in Rainbow Six Siege

It did not say which video card or driver was used; it's just some of AMD's in-house testing results.
How many of those games are driven mostly by the CPU, though? Or maybe there's a CPU bottleneck coming from the graphics card?
 
How many of those games are driven mostly by the CPU, though? Or maybe there's a CPU bottleneck coming from the graphics card?
I don't play any of those games other than GTA V. Let's see, Ghost Recon Breakpoint: it seems to load a GPU up for all the RAM it can offer!
 
I don't play any of those games other than GTA V. Let's see, Ghost Recon Breakpoint: it seems to load a GPU up for all the RAM it can offer!
Depending on how they've implemented physics, destructible environments, and persistent objects, many titles can still easily become CPU-bound. There's also a lot of memory bottlenecking going on for RAM access; two channels are getting to the point where they aren't enough, as the CPU and GPU fight over access to the memory, not necessarily the amount of memory. It's one of the things DDR5 can potentially fix, because of how it can essentially subdivide the channels.*

* I know I'm oversimplifying what it does, but seriously, that's an article unto itself.
 