http://www.phoronix.com/scan.php?page=article&item=nvidia-1080ti-ryzen&num=1
Hey folks: do you like 4k gaming benches? Welcome to Linux gaming, where even 4k on ultra settings is CPU limited!
Yeah, plus the ridiculous variation of results compared to the 7700K. Someone with TR, Linux and Ryzen could consider trying to run these benches himself, though; it uses a run of the built-in benchmark.
Interesting results, but something is incredibly fishy with those Tomb Raider results. The 7700K hitting FPS in the 60s at 4K seems reasonable, but Ryzen hitting FPS in the 160s is just not possible at 4K on Ultra settings. As a matter of fact, it's hitting the same FPS numbers as it did with Normal settings; something definitely fishy going on there.
Did any other sites test IOPS with an NVMe storage device and an ever-increasing queue?
Eh, there was more merit to the BD lawsuit. An 8-core chip does function like an 8-core chip in games, i.e. faster than a 6- or 4-core chip.
What is funny is that Ryzen has some serious issues in 3DMark's combined test, in spite of having decent graphics and physics results. At this point it looks like the fabric simply can't handle too much stuff that well, leading to all the issues, exaggerated by the Win10 scheduler being non-CCX-aware (the Linux scheduler is).
Not that I remember, but I believe TweakTown ran some tests with the 960 EVO 1TB: http://www.tweaktown.com/articles/8073/amd-ryzen-ssd-storage-performance-preview/index3.html (warning: I use adblock, so sorry if the ads go wrong).
If this happens also for Naples it could be a concern there, but there's not enough info to date to say whether it's a bug somewhere or a fabric limitation.
Cheers
I don't even know what to make of this thread anymore.
Now there are claims we can't see the diff between 60 and 120 on a variable refresh monitor...
That's just a new breed of insanity. It's fine if you say you're okay with 60. Cool. Or that most people play at 60. Fine. But saying there's no difference is just nuts.
I truly have no idea what is going on.
I have not seen Intel drop prices as of yet, nor do I predict they will.
People can't accept the very fact that Ryzen is flawed. It's as simple as that. Imagine buying a car with that many issues. I see absolutely no reason to upgrade at this point. Did anyone see Intel drop prices?
I mean, that's literally the definition of being flawed, though.
Not literally. It's not damaged in any way. Only imperfect. lol
Imperfection is a flaw by definition, that's my point.
Just because it isn't as fast in games for now doesn't mean it's flawed, bro. It is what it is; if one doesn't like it, they have choices. That's why the market exists.
The brute-force option is to treat each CCX as a distinct processor to eliminate cache thrash.
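As a sketch of that brute-force idea, assuming a Linux box and assuming cores 0-3 map to one CCX (the real numbering depends on the chip and on SMT, so check your topology first), you could restrict a process to a single CCX with the scheduler's affinity API:

```python
import os

# Hypothetical CCX0 = cores 0-3; verify against the actual topology
# (e.g. lscpu or /sys/devices/system/cpu) before relying on this.
available = os.sched_getaffinity(0)      # cores this process may use
ccx0 = {c for c in available if c < 4}   # assumed first-CCX cores
if ccx0:
    os.sched_setaffinity(0, ccx0)        # pin this process to one CCX
print(sorted(os.sched_getaffinity(0)))
```

Launching a game this way (or with `taskset -c 0-3`) means its threads never pay the cross-CCX cache penalty, at the cost of only ever using four cores.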
That's why I am convinced it is not scheduling. Sure, it may account for 5-10% of it, but that clearly does not cover the distance.
Within one CCX it works just fine. It all falls apart when it accesses either a different CCX (involves the fabric), memory (involves the fabric), or I/O (involves the fabric).
Gosh... I'm going to say an unpopular thing. But it needs to be said...
No one in this thread, no reviewer, and not even AMD knows what the real performance potential of Zen is. This is all preliminary BS.
In order to really know what the performance is, the following things need to happen:
1. Compiler optimizations for the Zen architecture. That's not happened.
2. Applications need to be compiled with those optimizations.
3. New processor drivers.
4. Changes to thread scheduling.
5. Optimization of the cache usage.
6. Video card driver optimizations that take advantage of the new Zen architecture.
If all of that is done, and done properly, maybe in a best-case scenario you could get a 15-20% performance increase if the developer was really invested in getting the most out of a processor. But that's unlikely, since software bloat is driven by higher-level languages that produce bloated, slow code.
So maybe the best case scenario is a 10% increase in the next 12 months for software.
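For a sense of what item 1 even looks like in practice: GCC has had a Zen target since GCC 6, so a hypothetical build line (flags only; `app.c` is a placeholder, not a real project) would be something like:

```sh
# -march=znver1 lets the compiler schedule for Zen's pipeline and use
# its ISA extensions; -mtune=znver1 alone would tune code generation
# without changing the instruction set.
gcc -O2 -march=znver1 -o app app.c
```

Until distros and ISVs actually ship binaries built this way, Zen runs code tuned for older microarchitectures, which is the point of the list above.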
But what really bothers me about the discussion are a couple of ideas which are pretty incorrect:
1. The idea that frame rates above 60fps are meaningful: they're not. It's a placebo effect. In fact gaming is smoother when the frame rate does not vary. V-Sync is an absolute godsend for these purposes. I'd not enter a competitive situation without it. The variable sync technologies are simply hyped technologies which get customers excited about a new (but meaningless) feature. In a sense the entire gaming "industry" is rooted in ideas pushed out by marketing departments rather than real improvements in image quality or lower latency.
2. Running at a full-out frame rate, where either the proc or the card is pushing 100%, is hard on your hardware. It also might increase latency in a way that lags the game, but you don't notice because the game appears "smooth" under variable sync.
That being said, my perspective is that of a gamer, but also a workstation user and systems engineer. If you are a person who only uses your rig for gaming- I would consider that a waste of money honestly.
It's not unusual to hide a massive problem like the CCX problem that is UNFIXABLE?
Are you kidding me.
That is BS dude, you know what it is, another excuse.
Why don't you try not to rationalize what AMD did, instead tell us exactly what they did.
If they get sued for this, they will probably lose, because the 8-core chip does not and cannot function like an 8-core chip in games.
If we are honest, i think they knew the problem could/would come up but it really did not matter [for them] with what they tried to do with Zen. I mean, hell, the only situations
Anyways, i think his point is that nobody would be able to sue AMD for anything in regards to CCX interconnect because it.. does not actually violate any law to have shitty interconnect. That would be like suing Intel for the sort of RAM they supported on Skulltrail.
There is nothing illegal in having an anemic interconnect. Pretty much all "many cores" ARM chips are the same. Consoles couldn't be called 8-core either in your view. But they are.
Exactly, there are so many examples today. Even 8-10 core devices that can't use all the cores at the same time.
Don't worry about it. I'm a systems engineer and they won't take my word on anything either.
Your brain can only perceive changes at 13 to 16 hz. Very few people can see the flicker of an incandescent bulb at 60hz.
It's placebo. Above about 20fps you do not get any better as a gamer.
You keep deflecting from the real subject as to what any of these reviews mean in real-world applications. Again, I am not saying that 720p has no merit, but rather that you need CONTEXT (for the love of all that is holy, LEARN that word; it will go a long way to making your posts more appealing to the masses) for what they do in fact mean.
How about speaking to the owners who so far have yet to complain of the performance.
If the assumptions are correct, the design is server-based and scaled down to desktop, hence having the issue, and in all likelihood an unavoidable one at that.
It's not placebo on high refresh monitors. I'm 100% confident I could pass a blind test between 60fps and 144fps on a 144hz monitor. The difference is that stark. I feel like every person saying it's placebo has never actually tried a high refresh monitor and compared the difference with high and low fps.
After owning two 144 hz displays, I sold them and moved to a 40 inch 4k monitor (not a tv) and will never look back.
https://us.hardware.info/reviews/4592/vast-majority-of-gamers-prefers-120-hz-monitors
There is your test. And 80+ percent could tell the difference. So unless he comes up with something else that I couldn't find (and I put effort in), I'd conclude he's wrong.
That said I've been on 60hz for years and it's never been something I cared to upgrade. 4K is worth it to me by far over a much smaller expensive 120+ hz display.
A lot of CRTs refresh much faster than an LCD and have zero latency compared to an LCD, so that could very well be why. Also, on a side note, a lot of pro or hardcore CSGO players play at low resolutions because the larger the pixels are on your screen, the larger the hit box.
After owning two 144 hz displays, I sold them and moved to a 40 inch 4k monitor (not a tv) and will never look back.
Could I tell the difference? Yea on some games. Did 144 hz make a difference? Not in my opinion.
I've seen gamers whoop ass on an old Dell box with a CRT and a free mouse at low resolution at CSGO, kicking the snot out of guys with 144hz panels. They don't make anyone a better gamer at all.
I love the 4k real estate. And games at 4k 60fps are much more immersive imo.
Time to bring this back on topic; can you humanoids stop bickering about things which don't belong in this thread.
Hardware.FR's 2+2 vs 4+0 test with games: a clear penalty when accessing data from a different CCX, at least with games. Here is another: PCGH.de
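That 2+2 vs 4+0 penalty can be probed crudely at home. A minimal sketch, assuming Linux and at least two visible cores: pin two processes to chosen cores and time how fast they can bounce a shared flag between them. Which core pairs actually straddle a CCX is an assumption you'd have to check against your chip's topology, and Python's IPC overhead swamps the absolute numbers, so only the same-CCX vs cross-CCX ratio is meaningful.

```python
import os
import time
from multiprocessing import Process, Value

def bouncer(flag, my_turn, cpu, rounds):
    # Pin this process to one core, then alternate turns on the flag.
    orig = os.sched_getaffinity(0)
    os.sched_setaffinity(0, {cpu})
    try:
        for _ in range(rounds):
            while flag.value != my_turn:   # spin until the other side writes
                pass
            flag.value = 1 - my_turn       # bounce the cache line back
    finally:
        os.sched_setaffinity(0, orig)      # restore original affinity

def ping_pong(cpu_a, cpu_b, rounds=2000):
    # Average round-trip time for a shared int bounced between two cores.
    flag = Value('i', 0, lock=False)
    peer = Process(target=bouncer, args=(flag, 1, cpu_b, rounds))
    peer.start()
    t0 = time.perf_counter()
    bouncer(flag, 0, cpu_a, rounds)
    elapsed = time.perf_counter() - t0
    peer.join()
    return elapsed / rounds

if __name__ == "__main__":
    cpus = sorted(os.sched_getaffinity(0))
    if len(cpus) >= 2:
        # Compare e.g. (0, 1) against a pair you believe crosses CCXs.
        rt = ping_pong(cpus[0], cpus[1])
        print(f"round trip {cpus[0]}<->{cpus[1]}: {rt * 1e6:.2f} us")
```

On a Ryzen, a pair inside one CCX should come back noticeably faster than a pair that crosses the fabric, which is exactly the effect the Hardware.FR numbers show.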
So when are we going to get a proper English deep dive into the issue at hand? Because we now have a German review site and a French site showing some hard data.
So you guys gonna sue Ford because the Mustangs don't corner as well as a BMW? I agree with Gideon, close this stupid thread.
riiiight....
Is it due to a fault in the car's design, or is it because it was designed that way?
AMD's latency issue in its L3 cache is due to a fault not because it was designed that way. And this is why the BD lawsuit didn't make any sense, BD was designed in a certain way and that is why its performance was what it was.
Some crappy law firm can pick it up and go full steam ahead.
Just like the 970 memory lawsuit: there was no merit to that lawsuit on the original grounds it was filed for (it was filed over the partitioning of the RAM, the slow 512 MB). It was a 4 GB card and functioned like one too, albeit a portion of the RAM was slower due to, guess what, latency increases, lol. But during discovery, the ROP counts were found not to be accurate, and that is why nV settled, because they knew they couldn't win that part of it.
NV got sued because they misrepresented facts: they touted specs the card didn't have. Ryzen in the odd game doesn't do as well, but that is not because AMD misrepresented it as having specs it doesn't have.