Who is looking to upgrade to Zen 2?

That was kind of the situation for the 2700 vs. the 2700X. The boost clock algorithms reacted very differently on each processor, even though both were pretty regularly able to hit a 4.1-4.2GHz all-core OC. The 2700's natural behavior constrained it a lot more to keep power draw much lower, but you could force it to draw as much as a 2700X. I think that's what the TDP difference is here: how aggressively the chip automatically pursues that auto-OC behavior.

Yeah. The 2700X does this so well that it's basically pointless to manually overclock.
 
I built a Ryzen 5 2600X system last year, with DDR4, so I figure that should be good for at least 5 years, and more likely 7.

The way I see it, my old FX4100 lasted me for 7 good years at home, before I made the system into my work PC, where it's still doing fine. To be honest, it can still game at 1920x1080 just fine.

With the Ryzen 5 being a much more robust architecture, it should be quite usable for the future.
 
...if you let it exceed the TDP in the BIOS, which you're going to do with the AMD parts too.

I think his point is more that what used to be a pretty good indicator of what the chip was actually pulling is now just some random irrelevant number. Case in point: if you run a 9900K and don't let it exceed its stated 95W, and run a 2700X at its stated 105W, they perform almost identically, even in gaming...
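If you want to reproduce that kind of test on Linux, here's a minimal sketch of capping package power through the standard powercap (RAPL) sysfs interface. It assumes an Intel CPU, root privileges, and the usual intel-rapl paths; on Ryzen, the equivalent socket limit (PPT) is normally set in the BIOS rather than from the OS.

```python
#!/usr/bin/env python3
"""Cap sustained CPU package power via the Linux powercap (RAPL) interface.

A minimal sketch, assuming an Intel CPU, root privileges, and the standard
intel-rapl sysfs layout.
"""
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 power domain

def set_long_term_limit(watts: float) -> None:
    # constraint_0 is the long-term ("PL1") limit, expressed in microwatts.
    (RAPL / "constraint_0_power_limit_uw").write_text(str(int(watts * 1e6)))

def read_limits() -> dict:
    return {
        "domain": (RAPL / "name").read_text().strip(),
        "pl1_w": int((RAPL / "constraint_0_power_limit_uw").read_text()) / 1e6,
    }

if __name__ == "__main__":
    set_long_term_limit(95.0)  # hold the package to its rated 95W TDP
    print(read_limits())
```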
 
Debating between the 3600X and 3700X myself, but I can't seem to understand why the 3700X, with a higher core count and only a 200MHz lower base clock, is a 65W TDP part vs. the 95W TDP on the 3600X. Can anyone shed some light on this for me?
 
Debating between the 3600X and 3700X myself, but I can't seem to understand why the 3700X, with a higher core count and only a 200MHz lower base clock, is a 65W TDP part vs. the 95W TDP on the 3600X. Can anyone shed some light on this for me?

This is what we're talking about. Right now, we're speculating, because we don't know what AMD actually means by those TDP numbers, especially as they relate to overclocking.

I think his point is more that what used to be a pretty good indicator of what the chip was actually pulling is now just some random irrelevant number. Case in point: if you run a 9900K and don't let it exceed its stated 95W, and run a 2700X at its stated 105W, they perform almost identically, even in gaming...

And if you let the 9900K go to 105W? This basically shows that the 9900K is more efficient, which we know. What we don't know without independent results is how the Ryzen 3000 CPUs will fare.
 
I might get the 12-core, but it's doubtful.

More than likely the 8-core; it's looking really good right now. I have 16 cores already, so no need for two 16-cores.
 
My X370 Taichi has just been begging for a more power-hungry chip than my 1600, and that 12-core looks like just the ticket.
$499 is a bit rich for me ATM, so I'll give it a few months to save up some dough, and hope that prices drop a bit.
Also want to see what memory speeds are like; what I have now is pretty cheap, and it should be interesting to see how much speed I get from nicer RAM, on the 1600 as well as the 3900X.
 
It does ;) Check out the [H] AMD motherboard section; the video there is where Buildzoid explains all of the ins and outs of motherboards with the Ryzen 3000 series, including compatibility and memory frequency.
Ah, I see it supports 128GB via the new 32GB single sticks. I wonder what the old boards will do, since the 3000 series can run on X370 and X470: whether the new memory controller, with BIOS updates, will suddenly change their RAM support.
 
I built the system in my signature last September with the intention of getting a Ryzen 3000 series on launch. I was on a 3770K before. I'll be jumping on the 3900X if the independent reviews don't show any serious flaws. Most likely not upgrading the mobo, but I might consider new RAM if DDR4-3600 holds the chip back, though that seems unlikely.
 
AMD has one year to make up significant ground on Intel before Intel releases its desktop 10nm chips... they need to take advantage of this opportunity.
 
AMD has one year to make up significant ground on Intel before Intel releases its desktop 10nm chips... they need to take advantage of this opportunity.


Intel has slipped numerous times, and while none of us should have a "favorite" (since without a serious race we ALL lose), I would like to see AMD sell every single Zen 3 unit they can make to fill those R&D coffers (and mine, via stock boosts!)...

I can say the same for Intel since I own their stock as well, but AMD should have Zen 2+ ready for release on 7nm+, for which TSMC claims a ~20% uplift in frequency.


I think AMD will be doing a ton of benchmarks on Whiskey Lake to extrapolate as much data as they can for a "best/worst/most likely" scenario for Sunny Cove. I would also hazard a guess that AMD has a backup plan for 6nm to squeeze even more out of Zen 2+ or Zen 2++ (see what I did there?) if SC is another Sandy Bridge leap! (Which we should ALL want!)
 
I'm wondering if we should look at this not as power draw directly. Aside from AMD's traditionally wacky TDP numbers, the extra TDP rating may be there to allow for higher sustained boost clocks.

What we don't know is whether the lower-TDP part will be TDP-limited, or whether this SKU separation is mostly for OEMs. It may be that, given the same enthusiast-grade power delivery and cooling, the parts could produce very similar results.
That is what I am thinking. Either it maintains full boost or it allows for OC headroom. It has to be something; they wouldn't raise the TDP by nearly 50% for nothing. So far, a mystery for sure.
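For what it's worth, a rough sanity check on that theory: on the Ryzen 1000/2000 parts, the actual socket power limit (PPT) has tracked roughly 1.35x the rated TDP. If the 3000 series follows the same rule (an assumption; AMD hadn't confirmed these numbers at this point), the TDP gap maps directly to sustained-boost headroom:

```python
# Speculative back-of-envelope: map rated TDP to the socket power limit (PPT)
# that actually governs sustained boost, using the ~1.35x PPT/TDP ratio seen
# on shipping Ryzen parts. Whether Ryzen 3000 keeps this ratio is an
# assumption, not a confirmed spec.
PPT_FACTOR = 1.35

for name, tdp_w in [("3700X", 65), ("3600X", 95), ("3900X", 105)]:
    print(f"{name}: {tdp_w}W TDP -> ~{tdp_w * PPT_FACTOR:.0f}W socket power (PPT)")

# 3700X:  65W TDP -> ~88W PPT
# 3600X:  95W TDP -> ~128W PPT (about 40W more to spend holding boost clocks)
# 3900X: 105W TDP -> ~142W PPT
```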
 
AMD has one year to make up significant ground on Intel before Intel releases its desktop 10nm chips... they need to take advantage of this opportunity.
It's more like a 1.5-2 year window. Intel will be going straight to 7nm for desktop, and it will be formidable, to say the least.
AMD has some strategizing to do ;)
 
AMD has one year to make up significant ground on Intel before Intel releases its desktop 10nm chips... they need to take advantage of this opportunity.

What do you think 10nm is going to do for Intel on desktop? A shrink will likely benefit power, but it is unlikely to benefit clock speed for Intel. I really don't think AMD has to fear Intel's 10nm at all, at least not on desktop.

It's really more about core design going forward than process, as TSMC vs. Intel looks to be at best equal for Intel, and more likely worse for Intel until they demonstrate something significant; let's assume equal for the sake of argument. A top clock speed of 4.6GHz is a step forward for AMD, and a step backwards for Intel.

It also seems VERY unlikely that Intel's core design is going to have another jump like Sandy Bridge.

They have shown their design cards, and it's Sunny Cove, which will likely hold for a couple of years. Taking AMD (Zen 2) and Intel (Sunny Cove) at their word, they have similar jumps in IPC, leaving them close to parity in IPC.

As long as AMD can keep their core count and pricing advantage, they will own the DIY desktop market (they already did with the Ryzen 2000 series).

What AMD needs to do: Make a great APU to steal the laptop business that Intel still dominates.

What Intel needs to do: Cut prices on desktop to win back the DIY market. Get its 10nm/7nm to at least competitive levels with TSMC, and competitive with 14nm++ on desktop; the latter may be more challenging. Fix their security issues.

IMO Intel has the tougher road ahead and won't be a threat to AMD anytime soon.
 
I think Intel is done slipping on 10nm. The Dells and HPs of the world have announced real products.

One thing that is still unknown is the performance of Intel's 10nm+. The boost and base clocks of these 25-watt parts are the same as or worse than 14nm++++.

If SC has an 18% IPC gain but runs at a 20% lower clock speed, then the net change is very little.
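That arithmetic is worth making explicit, because the two changes multiply rather than cancel; plugging in the claimed figures (both assumptions for the sake of argument):

```python
# Single-thread performance scales roughly as IPC x frequency.
ipc_gain = 1.18     # +18% IPC, Intel's claimed Sunny Cove uplift
clock_ratio = 0.80  # -20% clock speed, assumed for the sake of argument

net = ipc_gain * clock_ratio
print(f"Net single-thread change: {(net - 1) * 100:+.1f}%")  # -> -5.6%
```

So under those assumptions the net result is a ~6% regression, small but not zero.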
 
Waiting for the 16-core Zen 2. AMD has tested one. If Intel works out an excellent 10-core at 5GHz, AMD may launch their 16-core at the same price as the 12-core, but not yet. AMD needs to keep the market it took from Intel. The 16-core will probably be announced this year if Intel announces its ten-core.
 
What do you think 10nm is going to do for Intel on desktop? A shrink will likely benefit power, but it is unlikely to benefit clock speed for Intel. I really don't think AMD has to fear Intel's 10nm at all, at least not on desktop.

It's really more about core design going forward than process, as TSMC vs. Intel looks to be at best equal for Intel, and more likely worse for Intel until they demonstrate something significant; let's assume equal for the sake of argument. A top clock speed of 4.6GHz is a step forward for AMD, and a step backwards for Intel.

It also seems VERY unlikely that Intel's core design is going to have another jump like Sandy Bridge.

They have shown their design cards, and it's Sunny Cove, which will likely hold for a couple of years. Taking AMD (Zen 2) and Intel (Sunny Cove) at their word, they have similar jumps in IPC, leaving them close to parity in IPC.

As long as AMD can keep their core count and pricing advantage, they will own the DIY desktop market (they already did with the Ryzen 2000 series).

What AMD needs to do: Make a great APU to steal the laptop business that Intel still dominates.

What Intel needs to do: Cut prices on desktop to win back the DIY market. Get its 10nm/7nm to at least competitive levels with TSMC, and competitive with 14nm++ on desktop; the latter may be more challenging. Fix their security issues.

IMO Intel has the tougher road ahead and won't be a threat to AMD anytime soon.

I think Intel is going to need to do something about AMD's chiplets. Right now Intel is stuck on monolithic chip designs, and those are expensive, lower-yield parts; that's why Intel charges more for their higher-core-count parts. Greed is not the main driver of the cost of a ~$1,200 12c/24t Intel CPU; the chip is just that damn expensive to make, and core design alone won't fix that (the quick yield sketch below makes this concrete).
Intel needs to make their own chiplet competitor, and I think they are working on 3D-stacking their chips. That could work well too, but I see heat being the main issue, since CPU cooling is done in a 2D fashion and the bottom chips will get HOT. Heat aside, 3D stacking would in theory be better than Infinity Fabric, since the connection lengths would be extremely short, letting Intel build a competitive 8-core part out of two quad-core dies for much less than they can now; with 3D stacking there would be no latency issues, just thermal ones.

I do agree that every day AMD has no Zen 2 APU is a day they lose a sale.
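To put a number on the yield argument, here's a toy Poisson die-yield model; the defect density and die areas are illustrative assumptions, not anyone's real process numbers:

```python
from math import exp

DEFECTS_PER_CM2 = 0.2  # hypothetical defect density (defects per cm^2)

def die_yield(area_cm2: float) -> float:
    # Poisson yield model: probability a die lands with zero defects.
    return exp(-area_cm2 * DEFECTS_PER_CM2)

monolithic = 4.0  # one big 12-core die, in cm^2 (illustrative)
chiplet = 2.0     # one of two 6-core chiplets (illustrative)

y_mono = die_yield(monolithic)
y_chip = die_yield(chiplet)  # chiplets are tested and binned individually
print(f"Monolithic die yield: {y_mono:.1%}")  # ~44.9%
print(f"Per-chiplet yield:    {y_chip:.1%}")  # ~67.0%
# One bad spot scraps a whole monolithic die but only one of the two
# chiplets, so usable silicon per wafer goes up:
print(f"Usable-silicon advantage for chiplets: {y_chip / y_mono:.2f}x")  # ~1.49x
```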
 
As a 'bang for the buck' fan currently running an AMD X470 + 2700X, I'm glad AMD is making great strides and creating some competition. However, I'm still impressed that Intel is able to pull such great IPC numbers despite being on more antiquated lithography.

It makes me wonder: if and when they reach parity (there's only so much die shrinkage that is physically possible; whether that turns out to be 5nm or 3nm I don't know, but it's not so far off now that we're hitting 7nm), is AMD still going to remain a strong Intel competitor on anything other than a value-proposition basis?

Personally, I'm waiting to see how all the mitigations will affect new Intel products moving forward, but if the AMD 12-core (or, if it turns out to be real down the road, the 16-core) comes out with strong single-core performance, I will be staying with AMD.
 
I think Intel is done slipping on 10nm. The Dells and HPs of the world have announced real products.

One thing that is still unknown is the performance of Intel's 10nm+. The boost and base clocks of these 25-watt parts are the same as or worse than 14nm++++.

If SC has an 18% IPC gain but runs at a 20% lower clock speed, then the net change is very little.
Yeah sure...
In fact Intel just announced a 4c/8t with Gen 11 graphics on 10nm for thin laptops, but nothing else. That's just a little step better than the 2c/4t with the iGPU disabled from three years ago. Everybody is holding their breath for when Intel will announce a 6c/12t on 10nm next year, and that will be without a GPU, and still for bigger laptops. No 10nm for desktop in sight. I kind of think 10nm is just art for art's sake by Intel. Like, "see, we can do it." In their mythology they'll be able to say: "we were there." In fact Intel is putting all its effort into 7nm EUV and into building 14nm fabs until they can deliver 7nm EUV chips. Intel is clearly far from that now.
 
I think Intel is going to need to do something about AMD's chiplets. Right now Intel is stuck on monolithic chip designs, and those are expensive, lower-yield parts; that's why Intel charges more for their higher-core-count parts. Greed is not the main driver of the cost of a ~$1,200 12c/24t Intel CPU; the chip is just that damn expensive to make, and core design alone won't fix that.
Intel needs to make their own chiplet competitor, and I think they are working on 3D-stacking their chips. That could work well too, but I see heat being the main issue, since CPU cooling is done in a 2D fashion and the bottom chips will get HOT. Heat aside, 3D stacking would in theory be better than Infinity Fabric, since the connection lengths would be extremely short, letting Intel build a competitive 8-core part out of two quad-core dies for much less than they can now; with 3D stacking there would be no latency issues, just thermal ones.
Yes. Clearly Intel should look into the chiplet solution. Even on its trustworthy 14nm+++, when Intel puts the same number of cores in line with Threadripper or Epyc, even if it doesn't care about TDP, Intel gets lower yields. On desktop, the problem is TDP.
 
In fact Intel just announced a 4c/8t with Gen 11 graphics on 10nm for thin laptops, but nothing else.

It's a sector they already lead in, significantly; I wish AMD would come up with a part that addresses that market with a more 'balanced' solution, but it'd have to be as focused as Sunny Cove is. I expect their reasoning here is that of all the segments that can benefit from a more efficient quad-core, ultrabooks benefit the most. Further, Ice Lake is more than just a CPU core upgrade (the core itself being Sunny Cove); it's also a major platform upgrade. They've moved the Thunderbolt controller on board, added the appropriate platform support for WiFi 6, which is also starting to show up in enterprise access points (despite still being a 'draft' standard), and they've upgraded their IGP.

The boost and base clocks of these 25-watt parts are the same as or worse than 14nm++++.

If SC has an 18% IPC gain but runs at a 20% lower clock speed, then the net change is very little.

The fun part is that at 15W, 'base and boost clocks' really mean very little. They're all thermally limited, and Intel's advances in that envelope have been aimed at those limitations from multiple directions. IPC gains mean a little, but they'll mostly be used to get the same amount of work done with lower power draw, both at idle and under load, and to increase the amount of time that the CPU can maintain boost clocks under load.
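To illustrate, here's a toy model of how those limits gate boost duration: short bursts run at the high limit (PL2) while the long-term limit (PL1) is enforced over an exponentially weighted moving average with time constant tau. The numbers are made up for illustration, not any SKU's real configuration; a larger tau, or cooling good enough that the firmware allows one, is what stretches the boost window.

```python
# Toy PL1/PL2/tau model: boost at PL2 until the running average power
# budget (EWMA with time constant TAU) catches up to PL1.
PL1, PL2, TAU = 15.0, 44.0, 28.0  # watts, watts, seconds (hypothetical)
DT = 0.1                          # simulation timestep, seconds

avg_power, t = 0.0, 0.0
while True:
    draw = PL2 if avg_power < PL1 else PL1  # boost while budget remains
    avg_power += (draw - avg_power) * (DT / TAU)  # EWMA update
    t += DT
    if draw == PL1:
        break

print(f"Boost (PL2) sustained for ~{t:.1f}s before dropping to PL1")  # ~11.7s
```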

Yes. Clearly Intel should look into the chiplet solution. Even on its trustworthy 14nm+++, when Intel puts the same number of cores in line with Threadripper or Epyc, even if it doesn't care about TDP, Intel gets lower yields. On desktop, the problem is TDP.

They have already announced parts outside of the consumer market that are using 'chiplets'. While AMD is rightly making headlines by increasing the number of cores available in a consumer socket using chiplets, the technology (implementation, really) is hardly new or proprietary. We can assume that Intel will go down that road if it makes sense for their solutions.
 
...Mac or Intel build...

So, I've spent about the last hour catching up on posts, etc., and I am sorry, but this struck a chord with me. Before I get into the quote, I will say I am highly interested in these new CPUs. I currently rock a 1600 OCed to 4.2 and it works very well compared to my old FX6300. I game, edit, do some math crunches, and more cores = better for my uses. It is astounding that we, in less than 20 years, are speaking in terms like this. Thought it was going to be single-core forever! About this little quote, though, I got beef. In case y'all didn't know, what runs under the hoods of those overpriced, low-performance pieces of crap, Macs, is low-grade Intel CPUs. I remember when they would allow dual-booting Win XP or the Mac OS, and XP ran better than that garbage, overrated OS. A Mac is now what amounts to a fraud. If they were still using PowerPC CPUs and RISC style, then they would still have my respect. (Mostly rant, YEAH!) Question about post: to those who have researched this a bit deeper than I have, would an 8c Zen 2 work with a B350 (which I own), or should I up the ante to the X370 (needed?)?
 
Question about post: to those who have researched this a bit deeper than I have, would an 8c Zen 2 work with a B350 (which I own), or should I up the ante to the X370 (needed?)?
Should be fine, assuming BIOS support.
 
Would a 3900X be a substantial upgrade from a 7700K? If so, I might switch to AMD again. Maybe I can wait until they announce their 16-core. Although I have had some... difficulties with AMD in the past, I'm willing to give it another shot :) That price is looking mighty fine.
 
Thank you, Nobo, for the reply. I do hope a new BIOS comes for my board; not a big deal if it doesn't, I'll just purchase an X370 on the cheap. Also, I am excited about the Navi cards coming out. Sure, nVidia is the best (blah, blah, BLAH!), but the price-to-performance and the bad 'juju' in how that company operates really don't make me want to turn green. I sport the trusty RX 480 8GB version, and I think I might upgrade, as Navi shows clear promise of surpassing my aging GCN card and nCrapia's 2070 (creative license used). Now, another thing: supposing the BIOS update works, should I swap out my current Geil EVO Spear AMD RAM (2x4GB DDR4) for faster RAM? EDIT: It is DDR4-2400.
 
I'm seriously considering upgrading to the 3900X from a 4790K. Not going to upgrade until November, though, so I've got lots of time to research everything and contemplate what I want to do. Right now it looks like Intel will only have the 9900KS by then, but maybe their 14nm 10-core CPU will show up by then? Have to wait and see.
 
I'm sceptical until I see actual independent benchmarks, but god I hope this is everything it looks to be. If so, I may go Red for the first time in a very long time.
 
Thank you, Nobo, for the reply. I do hope a new BIOS comes for my board; not a big deal if it doesn't, I'll just purchase an X370 on the cheap. Also, I am excited about the Navi cards coming out. Sure, nVidia is the best (blah, blah, BLAH!), but the price-to-performance and the bad 'juju' in how that company operates really don't make me want to turn green. I sport the trusty RX 480 8GB version, and I think I might upgrade, as Navi shows clear promise of surpassing my aging GCN card and nCrapia's 2070 (creative license used). Now, another thing: supposing the BIOS update works, should I swap out my current Geil EVO Spear AMD RAM (2x4GB DDR4) for faster RAM? EDIT: It is DDR4-2400.

It would be worthwhile to swap out that RAM for at least 2x8GB of DDR4-3200. Unless something changes radically with the 3000-series Ryzens, Ryzens clock their Infinity Fabric off of RAM frequency, so faster RAM makes a much larger difference in overall performance with Ryzen CPUs than it does with Intel CPUs.
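For context on why it scales: on Zen/Zen+ the Infinity Fabric clock runs 1:1 with the memory clock, which is half the DDR transfer rate. A quick sketch of what the swap buys (Zen 2's exact divider behavior at high memory speeds wasn't confirmed at this point, so treat the 1:1 coupling as the Zen/Zen+ case):

```python
# Fabric clock on Zen/Zen+ is tied 1:1 to MEMCLK (half the DDR data rate),
# so faster RAM directly speeds up inter-CCX communication.
def fabric_clock_mhz(ddr_rate: int) -> float:
    memclk = ddr_rate / 2  # DDR transfers twice per memory clock
    return memclk          # 1:1 coupling, per Zen/Zen+ behavior

for ddr in (2400, 3200, 3600):
    print(f"DDR4-{ddr}: MEMCLK {ddr // 2}MHz -> fabric ~{fabric_clock_mhz(ddr):.0f}MHz")
```

Going from that DDR4-2400 kit to DDR4-3200 takes the fabric from 1200MHz to 1600MHz, a 33% bump in interconnect speed on top of the bandwidth gain.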
 
So, I've spent about the last hour catching up on posts, etc., and I am sorry, but this struck a chord with me. Before I get into the quote, I will say I am highly interested in these new CPUs. I currently rock a 1600 OCed to 4.2 and it works very well compared to my old FX6300. I game, edit, do some math crunches, and more cores = better for my uses. It is astounding that we, in less than 20 years, are speaking in terms like this. Thought it was going to be single-core forever! About this little quote, though, I got beef. In case y'all didn't know, what runs under the hoods of those overpriced, low-performance pieces of crap, Macs, is low-grade Intel CPUs. I remember when they would allow dual-booting Win XP or the Mac OS, and XP ran better than that garbage, overrated OS. A Mac is now what amounts to a fraud. If they were still using PowerPC CPUs and RISC style, then they would still have my respect. (Mostly rant, YEAH!) Question about post: to those who have researched this a bit deeper than I have, would an 8c Zen 2 work with a B350 (which I own), or should I up the ante to the X370 (needed?)?

What do you mean, struck a chord?

Don't you have buddies who are content guys, working in an environment that makes white-boxing difficult?

Either their tools are locked in, or the file formats are; something that makes white-boxing a build to work from home a pain?

My work issues Macs or ThinkPads with Ubuntu 16.04 as dev gear.
Don't you think I'd want to use my more powerful white-box gear, except that work-specific guidelines preclude me from using anything unapproved?

Ever worked at a name-brand tech company where you walk in your X299 rig to ingest an Oracle application that you needed to lift and shift? Because I have.

I have work badges from the Oscars and E3 where I've had to let the approved hardware fail before swapping in racks of white-box gear I happened to have waiting in the wings, which I'd snuck into last quarter's budget.

I've even run things on Dell try-and-buy gear, then had to tell my Dell reps they weren't getting their gear back in time.
 
People who don't work with rigid IT rules really don't understand. It's hell trying to convince them these places exist and it's hell trying to convince IT that there are better options out there.
 
When they announce TR 3rd gen I'll upgrade; in the meantime my 1920X is just fine. Prolly go with a 16-core if they do one, or a 24-core if that's the lowest.
 
Yeah, I'm holding out for TR 3rd gen too. As far as gaming FPS at 4K goes, it's all GPU-bound right now anyway, so I may as well rip all the threads when doing media work.
 
I am waiting on ITX boards, but I am seriously interested in a 3600 ITX setup to replace my 5960X.
 
What do you mean, struck a chord?

Don't you have buddies who are content guys, working in an environment that makes white-boxing difficult?

Either their tools are locked in, or the file formats are; something that makes white-boxing a build to work from home a pain?

My work issues Macs or ThinkPads with Ubuntu 16.04 as dev gear.
Don't you think I'd want to use my more powerful white-box gear, except that work-specific guidelines preclude me from using anything unapproved?

Ever worked at a name-brand tech company where you walk in your X299 rig to ingest an Oracle application that you needed to lift and shift? Because I have.

I have work badges from the Oscars and E3 where I've had to let the approved hardware fail before swapping in racks of white-box gear I happened to have waiting in the wings, which I'd snuck into last quarter's budget.

I've even run things on Dell try-and-buy gear, then had to tell my Dell reps they weren't getting their gear back in time.

Cool story, bro. I have also worked in a restrictive IT field: a hospital (the Macs at the reception desk were running Windows 7; LOL side-note, installed for looks). I am merely saying that Macs aren't unique anymore. They're just overpriced, sub-par machines. Honestly, I am looking for an old G5 with the PowerPC 970FX so I can have fun with a real Mac.
 
I'm finally upgrading from my 2011 2500K. It's still plenty fast in most scenarios I use it for, but it's definitely starting to show signs of age. I don't game above 75Hz, so it's mostly a non-issue; in some games it can be a problem, but not often.
 