drescherjm
[H]F Junkie
Joined: Nov 19, 2008
Messages: 14,941
It's likely a combination of lower cache latency and higher clock speeds. Zen3 should have improvements in both.
What's more important to look at are the CPU framerates, and here you can see the Ryzen 7 5800X completely crushing the Intel Core i9-10900K in terms of max framerate. In the Normal Batch run, the AMD Ryzen 7 5800X delivers up to 22% higher framerate than the Intel Core i9-10900K, and 16% higher on average across all batch results. We do not know the final clocks for the Ryzen 7 5800X yet, but the Intel Core i9-10900K does feature more cores and threads and even runs at clock speeds up to 5.3 GHz.
I really hope October 8th is not just a paper launch... rumors are that the Oct 28th Big Navi announcement will also be a soft launch.
150w is now considered high for a top end CPU? Lol, when did this happen? Someone should notify Intel about their 10600k, 10700k and 10900k.

I just hope the 5xxx APU series is still compatible with my B450 ASRock board; I couldn't care less if they aren't compatible with the higher tier CPUs.
But yeah, Zen 3 sounds promising and exciting. That high TDP on the higher tier CPUs is a bit unexpected for a mainstream CPU, but as long as it can deliver performance nobody really cares.
AMD has had limited quantity in recent launches, so I assume this will be similar. Not as bad as the 3080 launch, but still not a chip available for everyone that wants one. I would expect limited quantities though, but it also depends on which CPU, as some may sell out faster than others.

I will be really disappointed if the Oct 8th launch is like NVIDIA's. I really, really want a new system. Did the 3950X come out on launch or was it after the rest? If the new Zen 3 chips are as rumored, I want the 5950X.
Like I said, if the past is any indication, NVIDIA should have had more quantity available as well (based on their past launches). But if AMD is true to form, people WILL be able to buy stuff and not be sold out in 3 seconds. Of course, these aren't normal times either, so nobody really knows. Rumor is their release was pushed back from what was originally planned just to make sure they had some quantity (which may be why NVIDIA was in such a rush; they thought AMD would release earlier). Hard to say for sure, as they don't really share a lot of this information.

We'll never know how low NVIDIA's quantities were this launch compared to last. They gave out stats like "4 times more unique visitors to the site", but bots are known to use VPNs with unique IPs to keep from being caught, so the 4 times more visitors could have simply been the bots. It also could have been that the FE was cheaper and built better than most other AIB cards at MSRP. In the past, the FE cost more... so maybe more people were interested in the FE this time around for those reasons. AMD will also not disclose how much supply they'll have, so we won't really find out how short supply is until release happens.

Well that sucks, it has been a while since I bothered buying any computer hardware on launch. With the 3080 launch being so disappointing I don't think I'll be able to get a new system before Cyberpunk comes out.
It was mostly the 3900X and 3950X that were limited IIRC, the 3600 and 3700X seemed to be OK around launch.
Yeah, that's why I said it also depends on which CPU. The 3800X was not the most sought after, since the 3700X had the same cores and was not far off in frequency. It's been a while, so I don't recall which sold out quickest, but I remember them being in stock here and there and a few having decent stock from the get go. I wouldn't swear to which ones though, lol.
If it's a single-CCX die, it may be a bit more difficult to come by. Keep in mind, the 5600X can have up to 2 borked cores in a CCX and still make the cut. A 5700X/5800X single die would require all cores to work at whatever frequency they end up at, so higher binning for sure. I'm not positive both will be single CCX; it's possible the 5700X could be dual CCX and the 5800X a single. I really haven't heard too much, but I'm really hoping they're both single-CCX full 8 cores; that would REALLY put some pressure on Intel if they can close that last gap. Leaked benchmarks are showing well, but I'll wait for real benchmarks before I put too much stake in them.

If the 5700X or 5800X, whichever is the 8-core single-CCX CPU, really shows massive gains in gaming then I would expect that to be in short supply. It depends on yields too I guess, but those should be "OK" at this point. I really hope it's not a paper launch, that would really suck.
The new consoles might necessitate this, with 8 cores including 2-way SMT for 16 threads.

If one is primarily using this new CPU for gaming and plans to use it for 4 years, does it make any sense to go with 12 cores vs 8? In other words, is there any evidence of games being able to leverage more than 8 cores within the next 4 years?
Also, there's a chance it was a typo and was supposed to say 105w.
Yea, I feel that too. AMD jumping up to a 150w TDP seems like a serious regression.
I could live with it too; it's still less draw than a 10700k pulls, but I would think ~43% more power to get 100-200 MHz is possibly a typo. A new process should allow slightly better clocks at the same voltage, so a 40% increase in TDP for a ~4% increase in clocks seems a little off, but who really knows until it's revealed. Rumors are just that, rumors; until they are in hand and benchmarked by independent reviewers, I won't be making a decision whether or not to upgrade anyway. At that point I'll take everything into account (performance where I need it, power draw, price) and decide if I want to upgrade or not.

Might have been needed for the clock speed they targeted. However, 150 watts is not horrible either; I can live with 45 more watts if it makes it substantially better in performance.
I guess that is a possibility, but it still doesn't really make sense... the 3950X is a 105w part and is 16/32 (and clocks higher than a 3900X)... so if they bump the 5900X to 16/32 it still doesn't make sense that they'd need that much more power for minimal increases in frequency. Not saying it may not be true (maybe it runs much lower most of the time but can run up to 150w when pushed hard); it just seems like a very large jump in power for almost no change in frequencies.

I think gaining 4 cores that can clock that high might be why the bump in TDP.
It's a promising start, but Ashes... we really need something else to better gauge the gains.
Nah, picking a game that has always favored AMD would be best case. The 3950X at min quality was around 130fps; the 10900K at min quality was around 155, so it was about 20% faster than Zen 2. Not exactly showing something in its best light when it was getting thoroughly beaten. Anyways, it's a single data point, and we can't make too much of it, but I certainly wouldn't call it best case; it was losing by 20% and is now shown ahead by ~15%... that's a pretty drastic jump in a single game (and that was top-end Zen 2 against not-top-end Zen 3). The 3800X by comparison (since the new benchmark is showing the 5800X) was hitting 120fps, which makes it an even larger increase. I don't expect all games to gain >20% performance, but if it's ~10% on average, that pretty much closes the gap to Intel for the most part. I'm sure there will still be games that favor one vs the other, so finding benchmarks for things you use/play is always recommended. For sure gives me some hope though.

I've been saying the same thing... it's a nice start, but it's also a benchmark that would showcase Zen 3 under the best light and best-case scenario. I'd like to see some 1080p/1440p gameplay benchmarks in games like RDR2 etc...
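Just to sanity-check the percentages above (the framerates are the rough figures quoted in the post, not exact benchmark data):

```python
def pct_gain(new: float, old: float) -> float:
    """Percent change going from old to new."""
    return (new / old - 1) * 100

fps_3950x = 130    # Zen 2 flagship, min quality (rough figure from the post)
fps_10900k = 155   # 10900K, min quality (rough figure from the post)

# The "about 20% faster" gap the 10900K had over Zen 2:
print(round(pct_gain(fps_10900k, fps_3950x)))  # 19

# If the leaked 5800X result really is ~15% ahead of the 10900K,
# the jump over the 3800X (which was hitting ~120fps) is even bigger:
fps_5800x = fps_10900k * 1.15
fps_3800x = 120
print(round(pct_gain(fps_5800x, fps_3800x)))   # 49
```

So the rough numbers do back up the post: a ~20% deficit flipping to a ~15% lead is close to a 50% generational jump against the same-tier Zen 2 part in this one game.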
Fairly successful!?! They are absolutely the most dominant fab in the world and have been for years. The reports on 5nm and beyond paint a stunning picture for the next few years at least. Maybe Samsung can catch up? Intel's roadmap for next year looks tough; another 14nm generation is ridiculous.

AMD has a bigger long-term question: will TSMC continue to be able to deliver?
What's the plan for when they inevitably do not?
What's the plan when TSMC decides that someone else is more deserving of their fab capacity?
This is why Intel stayed in the fab business; it's biting them in the ass today, but it has delivered long-term and will likely continue to deliver long-term.
At the very least, TSMC's current process family has been fairly successful, and thankfully, while Intel is experiencing their rare stumble, AMD has something actually worth producing!
And quoting myself, it was a typo and it was announced @ 105w, so you are safe to not even have to come close to Intel-level power draws.
10600k = 125w, PL2 = 182w
10700k = 125w, PL2 = 229w
10900k = 125w, PL2 = 250w
AMD 3700x = 65w, PPT = 88w
AMD 3900x = 105w, PPT = 142w
AMD 150w part: PPT would end up right around ~202w.
So, a 5900x 12/24 core/thread @ 150w is ~30w less than a 10700k and ~50w less than a 10900k. I don't think this is excessive or unexpected for a "mainstream" CPU (it's not really mainstream; the 5600x will be mainstream).
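For anyone wondering where the ~202w figure comes from: AMD's sustained package power limit (PPT) on socket AM4 is 1.35x the rated TDP, which reproduces the PPT numbers in the list above. A quick sketch (the 1.35 multiplier is AMD's published AM4 ratio; the wattages are the ones quoted above):

```python
# AMD's package power tracking (PPT) limit on socket AM4 is 1.35x the rated TDP.
AMD_PPT_RATIO = 1.35

def ppt(tdp_watts: float) -> float:
    """Sustained package power (PPT) for an AMD TDP rating on AM4."""
    return tdp_watts * AMD_PPT_RATIO

print(ppt(65))   # 3700x: ~88w PPT, matching the list above
print(ppt(105))  # 3900x: ~142w PPT
print(ppt(150))  # a rumored 150w part: ~202w, the figure above

# Gap to the Intel PL2 limits quoted above:
print(229 - ppt(150))  # vs 10700k: ~27w less ("~30w less")
print(250 - ppt(150))  # vs 10900k: ~48w less ("~50w less")
```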
Yup, the 5950X with 16c/32t will draw 40W less than the 6c/12t 10600k. That's mind boggling to me. Intel has to figure out how to get more than 4c from their 10nm process in large volumes or AMD will go unchallenged for the foreseeable future, which is as bad as the opposite. AMD may hit volume 5nm before Intel even ramps 10nm in volume... And I don't buy them jumping right to 7nm. Having worked in the industry, going to a smaller node never solves yield issues; it makes them much worse. The only way that's possible is if solving whatever issue they're having at 10nm also fixes it for 7nm, and they'd have to solve it completely and not incrementally.
Yeah, I mean, I'm not overly concerned with power draw, but this is a pretty large gap. Now AMD just needs to figure out how to get the heat out of the chip faster, lol. Even my 65w chip stays pretty warm.
That's great, but I've bought my last stick of DDR4.

AMD Ryzen 5000 CPUs may run best with faster DDR4-4000 memory
A leaked slide allegedly from AMD suggests 4,000MHz RAM is the sweet spot for low latency and high performance...
https://www.pcgamer.com/amd-ryzen-5000-cpus-may-run-best-with-faster-ddr4-4000-memory/