ZeroBarrier (Gawd; joined Mar 19, 2011; 1,011 messages)
Its nothing to that degree.
Zendozer nano?
Itanic? Both companies have had their share of flops; Zen is not one of them.
That's just as much HP's fuck-up as it was Intel's; it was a joint project, after all. But good try.
Oh, don't forget NetBurst; I suppose that was some other company's fault as well? Presshot, or wait, I meant Prescott. Please, it's not hard to find failures at both companies.
Itanic 1 was crap; Itanic 2 was much better, but software was a problem and no company wanted to rewrite code for Itanic's ISA when AMD already had an x64 part that did better out of the box (MS also supported AMD's architecture over Intel's Itanic). If software had been written for Itanic 2 it would have gotten better performance than AMD's parts, but that was not easy to do; a lot of work had to be done to get the most out of Itanic 2.
So Itanic was the Titanic. LOL.

Itanic was dead once AMD made x64. But to be honest, Itanic was already dying; it was far more work than it was worth to make programs for, and it died. It was better for us that it died anyway, otherwise it would have allowed Intel to lock up the market and force all competition out of it, the true goal of Itanic.
With less cores per CCX you would need less bandwidth across the Fabric, so in essence it would be like the 8-core configuration with double the speed on the fabric for a 4-core configuration. There are other factors, such as each CCX having its own L3 (which can be shared but probably mostly works with its own CCX) that speeds up the data needed. So there are other factors in the design that help it achieve better results.

There is a problem, and it's on AMD's end and how THEIR Infinity Fabric works. Just imagine how their cut-down 8-core parts are going to perform. Are they going to be worse than Intel's 2-core performance in games? Possibly.
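To put rough numbers on the "double the speed on the fabric per core" point, here's a quick sketch. The link bandwidth figure is a made-up placeholder, not AMD's spec; it only shows the per-core share doubling when active cores per CCX are halved:

```python
# Illustrative sketch: with a fixed cross-CCX link, halving the active
# cores per CCX doubles each core's share of that link's bandwidth.
# The 42.6 GB/s figure is hypothetical, not a measured spec.

def per_core_share(link_gbps: float, cores_per_ccx: int) -> float:
    """Bandwidth share per core when one CCX's cores split one link."""
    return link_gbps / cores_per_ccx

link = 42.6  # GB/s, hypothetical fabric link bandwidth
print(per_core_share(link, 4))  # 8-core (4+4) part: 10.65 GB/s per core
print(per_core_share(link, 2))  # 4-core (2+2) part: 21.3 GB/s per core
```

Same link, half the cores contending for it, so each core's slice doubles; that's the whole argument in one division.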
Seems like Zendozer to me....
Actually he is closer to the mark than you here. His is based on factual inference; yours is just probability and instance, which is not factual at all. Actually, given your inane probability comment, it still works in favor of the 4-core as being more consistent across more games, with the 8-core having a greater chance at hitting a worse configuration of threads.

It just goes in the opposite direction. The more cores per CCX, the higher the probability that all the running threads are in a single CCX and its associated L3 slice.
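That probability argument can actually be counted. A minimal sketch, under the (unrealistic) assumption that the scheduler places busy threads on distinct cores uniformly at random and that each chip has two CCXs:

```python
from math import comb

def p_same_ccx(total_cores: int, cores_per_ccx: int, threads: int) -> float:
    """Probability that `threads` randomly chosen distinct cores all sit
    in one of the two CCXs (uniform random placement assumed)."""
    if threads > cores_per_ccx:
        return 0.0
    return 2 * comb(cores_per_ccx, threads) / comb(total_cores, threads)

# Two busy threads:
print(p_same_ccx(8, 4, 2))  # 8-core, 4+4: 12/28, about 0.43
print(p_same_ccx(4, 2, 2))  # 4-core, 2+2: 2/6, about 0.33
```

Under that crude model, more cores per CCX does raise the chance that all busy threads share a CCX and its L3 slice, which is the direction juanrga is arguing; the real answer depends on what the scheduler actually does.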
Higher-speed memory speeds up the fabric, which will help. If AMD allows greater than 3200MHz without needing to up the BCLK, that will also help.
Plus, the so-called issue with games is no issue, or a minimal one, because in most cases the frame rate is way faster than what the monitor can display, or is limited by the GPU. The whole premise of calling the gaming performance poor, or let's just say not capable, is asinine. So once performance deviations are noted, the next question should be how they affect the outcome, or in this case the game experience; the reality is they don't.
too funny, that last part from you. Anyway, my point is that at this juncture 2+2 is likely to be far more constrained in results, as noko alluded. Juanrga's point is more of a random guess depending on the circumstance of CCX usage. Now I am sure that when they are able to restrict threads to a 4+0 configuration, then yes, 2+2 may take a hit in comparison. However, when mobos finally get higher RAM support, this CCX issue will likely be less of an issue.

Actually they are both correct, and it's application-specific on top of this too, so it's hard to know exactly what the bandwidth needs will be; it will also change based on the core configuration (disabled parts). I love how the people that cry "soothsayers" presume to know everything themselves.
Buy both.

Holy shit, why are you people arguing so much over this? Flawed or not, Ryzen is a great CPU for the price. If the performance difference is that important to you, then go ahead and buy Intel.
Well, with a four core you are much more likely to have thread communication/dependencies between the two CCXs (maybe not so good) due to the more limited number of cores. I guess this just has to be tested out. Anyway, the Ryzen 4-cores are going up against the i3s with basically double the cores. Which makes me think: how will the Ryzen APUs be configured? 1 CCX or 2 CCXs? Looks like 1 CCX, with the other replaced by the GPU; there you would have all the eggs in the same basket and maybe a faster 4-core than what is coming out.
Thought that myself when I saw one of the latency graphs. Think it was against a 6800K. The 6800K had a constant 80ns (or was it microseconds?) across all cache levels. Ryzen had 40 or 60 through L2 and then jumped to 140 at L3. Granted, that is a huge percentage jump, but the question is to what degree that affects real-world performance. It just didn't seem as catastrophic as some were alluding to. Also, I don't remember the tested RAM speed and would love to see how that L3 cache latency changed with RAM speed.

The latency the CCX adds is pretty overblown; yeah, it would be nice if it was lower, but it's not a huge deal.
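For what it's worth, those cache-latency graphs come from pointer-chasing microbenchmarks. A crude Python sketch of the method (interpreter overhead swamps the absolute numbers here, so a real probe would be this same loop in C, but the structure is identical):

```python
import random
import time

def pointer_chase_ns(size: int, iters: int = 200_000) -> float:
    """Average time per dependent load: follow one random cycle through
    an array so each access depends on the previous one (defeating
    prefetch). Returns ns per access; in Python this is dominated by
    interpreter overhead, not the memory hierarchy."""
    order = list(range(size))
    random.shuffle(order)
    chain = [0] * size
    for i in range(size):
        chain[order[i]] = order[(i + 1) % size]  # build a single cycle
    p = 0
    t0 = time.perf_counter()
    for _ in range(iters):
        p = chain[p]
    t1 = time.perf_counter()
    return (t1 - t0) / iters * 1e9

# Sweep working-set sizes; in a C version, the jump in ns per access
# marks each level of the hierarchy (L1 -> L2 -> L3 -> DRAM).
for size in (1 << 10, 1 << 16, 1 << 20):
    print(size, round(pointer_chase_ns(size), 1))
```

Rerunning the C equivalent at different RAM speeds, with working sets sized to land in L3 (and spanning both CCXs), is exactly the test being asked for in this thread.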
Come on, how do you get here from what I stated? I wasn't talking about frame rates or whatnot. I was talking about L3 cache changes and any REAL findings using different-speed RAM and its effect on the Infinity Fabric, aka the CCX link. Frame rates and whatnot matter little to me, as the current performance reported by actual users seems more than sufficient. So, in conclusion, have you seen any benches on the L3 cache using different RAM speeds? That is what I am interested in.

RAM speed increases performance for both AMD and Intel; I linked a few recent review benchmarks showing that, and one showed it in a comparison between Ryzen and Intel.
The latency sensitivity depends upon the game engine/thread and data dependency structure, affecting some games more than others; the easiest way to tell is to look at the 1800X/7600K/7700K/6900K.
There are some games with a notable jump from the 7600K to the 7700K (which shows which games respond well to SMT) and on to the 6900K, while the 1800X does not follow the same trend.
But then you need at least a GTX 1080 at 1080p resolution to really pick this up, even for the 7700K to see the SMT benefit in some of the games (Hardware Unboxed did a vid showing that a GTX 1070 bottlenecked certain games when testing for SMT gains at 1080p compared to the Pascal Titan).
Cheers
As I said, the trend is comparable between Ryzen and Intel with the increase of RAM speed. If it also improved latency or the inter-CCX penalty, the gain would be notably higher for Ryzen, but those doing these comparable tests showed it was more in line with 'game sensitivity' to RAM speed increase, as there was a trend correlation between Ryzen and Intel.
Eurogamer is one that did this test and came to that conclusion.
Changing RAM speed is not enough to change inter-CCX latency; you get greater bandwidth, but it does not change the underlying protocols/controls/data-transmission structure (this is how AMD improves on PCIe latency: by using their own protocols/controls/data packaging over the actual PCIe physical connections).
Cheers
True enough, the difference is AMD is consistent at it.
It's not going to get through to him, man. He thinks the latency of the fabric is linked to the RAM speed... which it's not; they are two separate issues.
Clock speed of the fabric is linked to that of the RAM, and time latency is just the inverse of frequency.
Cycle latency of the fabric itself won't change, of course, but from a core's perspective access will be faster.

However, Eurogamer and a few others have shown that Ryzen access being faster from a core's perspective does not improve performance when they compare Intel to R7 Ryzen at 2133MHz versus around 3000MHz in the games impacted (relative gains are comparable on both), or if it does, it is very marginal.
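The arithmetic behind "latency is just the inverse of frequency": if the fabric clock equals MEMCLK, i.e. half the DDR transfer rate (that is the claim being made here, not a verified spec), then a fixed cycle count costs fewer nanoseconds at higher RAM speeds. The 100-cycle figure below is illustrative only:

```python
def fabric_ns(ddr_rate_mt: float, cycles: int) -> float:
    """Wall-clock time for `cycles` fabric cycles, assuming the fabric
    clock is MEMCLK = DDR rate / 2 (the assumption under debate here).
    `cycles` is an illustrative figure, not a measured one."""
    fclk_mhz = ddr_rate_mt / 2
    return cycles / fclk_mhz * 1e3  # cycles / MHz -> microseconds -> ns

for ddr in (2133, 2666, 3200):
    print(ddr, round(fabric_ns(ddr, 100), 1))  # same cycles, fewer ns
```

So under that assumption the cycle latency is unchanged but the time latency shrinks with RAM speed; whether game results actually track that is what the Eurogamer-style tests are checking.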
Can't be that hard to understand? I am interested, out of purely scientific curiosity, in how RAM speed affects L3 latency. Not at all in game benchmarks, as they reflect the total system, not the individual L3 I am interested in. I have seen the latency graphs of the caches, but not at different RAM speeds, so as to ascertain how much of an impact it has. Now do you get it?
Well, what can you not understand?
Again, that is an assumption. Where are the latency tests of the L3 on different RAM speeds?
If both Intel and Ryzen see comparable relative gains going from 2133MHz to around 3000MHz in games, then obviously RAM speed is NOT affecting L3 latency/inter-CCX....
If the Infinity Fabric is linked to RAM speed, and that is the speed at which the L3 runs, then based on the graphs that peg the CCX issue to the latency, yes, I would like to see if RAM speed makes any discernible difference. I am not making any claims here, just interested in testing with these variables.