pendragon1
Extremely [H]
- Joined
- Oct 7, 2000
- Messages
- 52,426
No, that was just the last straw. The 3.5 GB vs. 4 GB was the issue.
It is not really that difficult; per AMD these are the same chips with cores disabled and different frequencies, so all you would have to do is disable cores on a 1800X and lock the clock speed. It is difficult to speculate at this point on how a phantom chip will and will not perform, but the R5 strikes the best value for money of the entire SKU stack and lets you pocket money towards a better graphics card, which makes most of the difference.
The marketing team did lie on that one, and it also had fewer ROPs than advertised, so it was technically not frivolous. I think this idea that there will be a lawsuit over Ryzen's performance is a bit far-fetched, mate. And I would know; I got my $60 settlement lol, just like the 970 memory lawsuit.
There was no merit to that lawsuit on the original grounds it was filed for (the partitioning of the slow 512 MB of RAM). It was a 4 GB card and functioned like one too, albeit with a portion of the RAM slower due to, guess what, latency increases lol. But during discovery, the ROP counts were found not to be accurate, and that is why nVidia settled: they knew they couldn't win that part of it.
There are plenty of other conversations going on here, so it does not need to be closed. If you don't like the conversations, go someplace else.

No point in arguing with a know-it-all. You "win", I'm done.
now can we please close this stupid pointless thread...
There won't be a lawsuit over Ryzen. Let's be honest here: the majority of the people purchasing Ryzen are likely AMD fans, who wouldn't dream of suing AMD. In that respect they're similar to Apple fans. How else can anyone explain reviewers getting death threats over Ryzen reviews reporting lackluster gaming on this chip?
Then let us piss all over ourselves. If you say something, then I say something back, then you reply, and we continue this little dance back and forth; then yes, it is a conversation. Maybe not a productive conversation, but still a conversation.

It's not a conversation; it's turned into an internet knowledge pissing contest.
Super cool generalisation there. If you were aiming to call out other people's fanboyism but in turn implicitly painted your own, then you nailed it. What do they say: trying to take the speck out of another's eye when you have a log in your own.
Sorry, I wasn't aware you were online. Figured you were benchmarking CB and Handbrake ad nauseam; it seems to be what Ryzen fans tend to do. I mean, why wouldn't they just use GPU acceleration (in which case Ryzen doesn't seem to do any better than Intel's offerings) to help render faster?
I'm kidding. But in all seriousness, if you don't like my generalizations, which are rooted in truth, or they somehow hurt your feelings, by all means ignore my posts and continue on your merry way.
Intel doesn't pay us to go into those forums.

There is one 7700K thread, about 2 pages long. Go parade with the rest of the Power Rangers over there, marvel at the collective circlejerk, maybe argue about who has the best Intel.
A lot of CRTs refresh much faster than an LCD and have effectively zero latency compared to an LCD, so that could very well be why. Also, on a side note, a lot of pro or hardcore CS:GO players play at low resolutions because the larger the pixels are on your screen, the larger the hitbox appears.
Depends. Obviously technology does not grant skill, but it certainly can augment it. If you are already good, a high refresh rate can make it easier to be better, plus there are benefits in overall smoothness and experience.

I know, I realize all of that. My point was that people who think they game better just because they have a refresh rate above 60 Hz are wrong.
This is exactly how the BD lawsuit started, and the entire suit is a fishing expedition; they are looking for ways to find problems through the phases of a civil case.
There is no up-front burden of proof in a civil case. Once the case is under way, there is a discovery and fact-finding phase where everyone involved shares information; at that point AMD would have to fully disclose what is going on. Guys, you watch too much TV: this is not a criminal case, it doesn't work on proof up front, it works completely differently.
Granted the outcome is up in the air, but it still can be done.
Do you even know what the Bulldozer lawsuit is supposed to cover? Probably not, since you think it is performance. No, the challenge is that they advertised it as an 8-core processor. Sure, if it had performed well, its CMT setup wouldn't have raised any questions. But the point is that the civil lawsuit is about false advertising, and in the end the courts will end up setting a precedent for what a core is. It has nothing to do with being slower than Intel, legally. That would be like Chevy owners suing GM because the Camaro Z/28 isn't as fast as the Mustang GT350R. What they are getting sued for is closer to the Tesla lawsuit over advertising an HP rating based on both motors' maximum output, rather than the maximum HP at maximum draw to both motors at the same time (which was significantly less).
I am still dumbfounded by the idea that the IF, while a bottleneck, was such a terrible decision worthy of your scorn, when the CPU, even with this supposedly inept implementation, does so well in so many tasks (including gaming). It might prevent it taking the top spot here and there and probably drags it down a bit more than people would like in games, but it is not an anchor around the CPU; it is more like an ankle weight. I have also stated why AMD developed it and why it was implemented the way it was: it's a seamless interconnect that will ease the mating of the GPU to the CCX in the APUs, allows easy scaling for server CPUs, and, more importantly, it is the actual interconnect that will be used to mate CUs on AMD's next GPU. The implementation on the desktop CPUs is probably a three-way compromise between "enough performance" on desktop, how it could scale up to the GPU requirements and still be compatible, and, most importantly, transistor count, which would make a bigger difference on Naples and Vega.
PCPer has done an extensive vid that covers all their earlier testing and the behaviour of latency with threads local to and across a CCX. Worth watching, as they cover it well, including why it can impact games more than applications/benchmark tools; IMO it is well done and well explained.
Only 20 mins in myself, but they do explain possible partial workarounds (some simple, some not) in the latter half, after the 20-minute mark.
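On the simple end, one of the workarounds people have floated is just pinning a game's process to the cores of a single CCX so its threads never migrate across the fabric. A rough Linux-only sketch of the idea (the CCX-to-logical-CPU mapping below is an assumption; check `lscpu` on your own box for the real topology):

```python
# Rough sketch: restrict the current process to "one CCX's worth" of logical
# CPUs so its threads can't be scheduled across the fabric. The split below
# just takes the first half of whatever CPUs are currently available; on a
# real 1800X you'd use the actual CCX0 set (e.g. 0-3 plus SMT siblings 8-11).
import os

avail = sorted(os.sched_getaffinity(0))          # CPUs we may run on now
ccx0 = set(avail[: max(1, len(avail) // 2)])     # assumed "CCX 0" half
os.sched_setaffinity(0, ccx0)                    # 0 = the current process
print(sorted(os.sched_getaffinity(0)))
```

On Windows the equivalent is setting processor affinity in Task Manager or launching with `start /affinity`.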
AMD also just released a statement saying the scheduler in Win10 is fine https://community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update?sf62107357=1
My takeaway is that future games will probably be better optimized but don't expect miraculous improvements for already released games.
Worth watching the PCPer vid. They explain that NUMA may not actually work as intended because the design of the Ryzen core/CCX/cache/DRAM hierarchy is not right for that type of approach; not only that, but it would also require a lot of coding by developers, not just in gaming but also in office/rendering/etc. apps. Even some well-known apps they use for PCPer office-related work (rendering and other stuff) do not support NUMA, as they have noticed with their 2S Xeon server.

That would have been my probable takeaway. There is certainly a performance penalty when spanning CCXs, and maybe a tailored NUMA scheduler would have helped, but there is a chance something like that could have adverse effects on other application and job types, whereas this performance penalty was already accounted for in development. It'll probably be on the enthusiasts to figure out what tools and solutions they want to use to maximize performance in whatever application type they are using.
Worth watching the PCPer vid. They explain that NUMA may not actually work as intended because the design of the Ryzen core/CCX/cache/DRAM hierarchy is not right for that type of approach; not only that, but it would also require a lot of coding by developers, not just in gaming but also in office/rendering/etc. apps. Some apps they use for PCPer work do not support NUMA, as they noticed with their 2S Xeon.
They raise it in the last part of the vid, after 20mins.
I know, that is what I am saying. It won't work: NUMA has to be programmed for, and games are not NUMA-aware; hell, most applications are not NUMA-aware.
Some of the well-known large apps they use for their own internal work at PCPer are not even NUMA-aware, let alone games; and on top of that, the architecture is not right for NUMA anyway.
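For anyone curious what their own box actually exposes, here's a quick sanity check of how many NUMA nodes the OS reports (Linux sysfs path; availability can vary in VMs/containers, so treat this as a sketch). A single-socket Ryzen shows up as one node, which is exactly why NUMA-aware code paths never treat the two CCXes differently:

```python
# Count the NUMA nodes the OS exposes via sysfs (Linux). A single-socket
# Ryzen reports one node, so even apps with NUMA-aware code paths see the
# two CCXes as one uniform pool of cores.
import glob

nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
print(f"NUMA nodes visible to the OS: {len(nodes) or 'none reported'}")
```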
Thread Scheduling
We have investigated reports alleging incorrect thread scheduling on the AMD Ryzen™ processor. Based on our findings, AMD believes that the Windows® 10 thread scheduler is operating properly for “Zen,” and we do not presently believe there is an issue with the scheduler adversely utilizing the logical and physical configurations of the architecture.
Yes, we've been covering that. In terms of the scheduling question and other questions, AMD has responded on that.
In terms of the CCX issue, it just raises more questions about the lower parts. How are they binned? Is a 6-core 3+3 or 4+2? What about a quad: can it end up as 4+0, 2+2, or 3+1?
I think this comes down to how much money and assistance AMD throws at developers. If AMD is willing to hold devs' hands over the next few years to ensure performance parity with Intel, then we could see this work out in the end; but if AMD is not willing, or simply can't afford to do this, it could be an issue.

Just coming back to this: oh man, it could be a right can of worms going forward for gaming devs because of the nature of DX12 (highly threaded behaviour), and also because devs not only need to optimise for AMD/Nvidia GPUs but now also need to optimise highly threaded next-gen games for both AMD and Intel, as each will require a different thread approach to get the most out of it.
I wonder if AMD has just impacted the PC gaming market for 6-core and 8-core CPUs going forward, with devs either just coding for 4 cores + SMT to keep things simple for a while longer, or simply ignoring the difference in thread structure now required by the CCX.
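The "just code for 4 cores + SMT" approach above can be sketched very roughly: size the worker pool from the logical CPU count at runtime, capped at 8 threads, so the same code behaves the same on a 4c/8t 7700K and an 8c/16t 1800X without any CCX-specific logic. The cap value is my assumption for illustration, not anything a dev has published:

```python
# Hypothetical sketch of the "code for 4 cores + SMT" compromise: derive a
# worker count from the logical CPU count instead of hard-coding it, then
# cap it at 8 (4 cores + SMT) so CCX-spanning thread counts never happen.
import os
from concurrent.futures import ThreadPoolExecutor

logical = os.cpu_count() or 4      # fall back to 4 if undetectable
workers = min(8, logical)          # cap at "4 cores + SMT"

with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(lambda n: n * n, range(16)))
print(workers, results[:4])
```

The trade-off is obvious: this leaves half of an 8-core Ryzen idle, which is exactly the repercussion for 6-core and 8-core parts being worried about here.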
That last point is not meant as criticism of AMD, but I am wondering if it will have repercussions beyond Ryzen in devs' approach to games and multi-threading: potentially another level of complexity and cost in game development.
It would be better if AMD could get Microsoft to redesign the scheduler to help as much as possible (it could be more 'intelligent' in how it weights assigned threads, though to emphasise: it was fine before Ryzen), but that would require extensive development and testing/QA with a lot of CPUs, and discussions not just with AMD but also with Intel.
Sounds like AMD did not talk to Microsoft about its needs around the CCX.
Cheers