Discussion in 'Video Cards' started by Kyle_Bennett, Jan 7, 2019.
Your links are comparing mesh CPUs to ring bus. Please try again.
Seems a little hard on nVidia. They are trying to push forward and sell something besides resolution and framerate. Ray tracing is the first real image quality improvement technology we've seen since shaders debuted. I know running Destiny 2 at 165fps locked on my 1440p monitor wouldn't do anything for me over running it at the 90-120fps I get off a 1080ti.
I don't even know how out of line their pricing is given the cost of memory and the size of the chips they are putting on these boards.
Of course the follow-on to that is: if nVidia is selling them at a fair cost given the BoM for the card, does its performance constitute a value for people to upgrade to? I think for most people here the answer to that is a resounding no. I'm sure as shit not going to buy it, but it exists as a stepping-stone product between pre-ray-tracing cards and the die-shrunk version of this tech that will actually provide enough performance, and hopefully enough software support, to offer real value. Rollouts for hardware T&L and shaders were similar in the immediate lack of support and extra cost for hardware you couldn't yet use.
Is RTX a thing that has value today? No, not yet.
I do firmly believe Ray Tracing will be the future, it was just birthed a little early.
My 1080 Ti that I bought 2 years ago has probably proven to be the best investment in computer hardware that I've made pretty much ever. With this current crop I have no reason to upgrade for another 2 years....
Oh boy, and now we start moving the goal posts.
Intel stated that the ring bus was designed for up to 8 physical core CPUs, and mesh interconnect was introduced on 10+ core CPUs. Please provide some actual proof that the ring bus is introducing significant performance penalties in 4/6/8C desktop CPUs like what are being tested here. I'll wait.
Well that didn't take long did it? Here, the R7 1700 is matching the 7820x clock for clock in gaming.
Are you going to tell me that Ryzen will match Skylake clock for clock when it is running ring bus as well?
If so, there will be a lot of pissed-off 9900K owners out there.
They just forgot to mention it was a slideshow gaming experience.
How is that relevant? It's a completely different architecture than Intel, it's not apples to apples.
Again, see above.
The CPUs in the links for Tom's are actually all ring bus CPUs. Both Broadwell-E and consumer Skylake (6600/6700) used a ring bus, only Skylake-X introduced the mesh interconnect, so I am not even sure what your original point was.
To be fair, the flagship GeForce256 DDR card - which introduced hardware T&L - cost $279 at release, and was considered expensive, which is about $408 in today's dollar. The cheapest entry level RTX card is $349, and if you want usable RTX performance you're spending over $800. And the performance of the 2080, which trades blows with the 1080 Ti, was available for less money before they phased out the 10 series. This is why people are angry.
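For anyone who wants to sanity-check that inflation math, here's a quick sketch. The CPI multiplier below is my assumption for roughly 1999-to-2019 cumulative inflation; plug in the exact figure from the BLS CPI calculator if you want precision.

```python
# Rough inflation adjustment for the GeForce256 DDR launch price.
# CPI_MULTIPLIER (~1999 -> 2019) is an assumption; check the BLS CPI
# calculator for the precise figure.
CPI_MULTIPLIER = 1.463

launch_price_1999 = 279
adjusted = launch_price_1999 * CPI_MULTIPLIER
print(f"${launch_price_1999} in 1999 is roughly ${adjusted:.0f} today")
```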
People are angry because they are angry, who cares. Rage is the new normal, I couldn't care less.
The 2060 outperforms the 1070 Ti for $100 less, so there is that. The 2070/2080/Ti don't render a value, and if you were in the market for an upgrade you should've bought a 10x0 card when they were available. Hopefully AMD will announce a viable alternative to the 1080 Ti tomorrow morning. If we have to wait until 2020 for Intel to put pressure on nVidia, then so be it. I don't think nVidia is attempting to gouge with these cards; I just think all the AI hardware they've repurposed for ray tracing is expensive. With a die shrink it'll be less expensive. Conventional rendering performance is becoming less and less meaningful as low-end cards can easily push 1440p in many titles. Ray tracing performance will be a feature that moves graphics cards after the technology hits critical mass. But being out in front of that critical mass is a painful place to be.
IT'S ALL SKYLAKE.
Well now you are comparing different architectures to show core scaling. That is even worse. Did you think your garbage links would not get checked or something?
My experience is a bit different. With everything on Ultra in DX12 (TAA, HBAO, DXR Ultra) at 1440p, I'm well over a solid 100 FPS: 120-130 most (75%) of the time, dropping to the mid-90s under heavy stress, but nowhere near the 58 you are getting. I'm on a 2080 Ti and a 9900K.
Is there a demo or something I can run easily to share/compare? Seems odd that I get 100% better performance than your numbers.
Yeah, I was going to wait and see the 9900k data but I am at 75fps 3440x1440 ultra/dxr low Rotterdam. Since I run 60Hz it’s zero negative impact to turn on ray tracing. But compensating for megapixel difference, assuming scaling linearly, that’s 50% more pixels being pushed per second. 2700x/2080ti.
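If anyone wants to check that pixel-throughput comparison, here's a rough sketch of the linear-scaling assumption. The 67 FPS baseline figure is hypothetical, purely to illustrate how a ~1.5x ratio falls out:

```python
def pixels_per_second(width, height, fps):
    """Raw pixel throughput under the (rough) linear-scaling assumption."""
    return width * height * fps

# 3440x1440 ultrawide at 75 FPS (the result quoted above).
mine = pixels_per_second(3440, 1440, 75)

# Hypothetical 2560x1440 baseline at 67 FPS, purely for illustration.
baseline = pixels_per_second(2560, 1440, 67)

print(f"{mine / baseline:.2f}x the pixels per second")  # about 1.5x
```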
I don’t know what the difference is caused by. I was going to post a picture of my settings later and if Brent is gracious enough maybe he can compare. Armenius made it sound like Brent turns on features higher than the “ultra” default which could be the difference too, if true.
Boy do they need a non twitch shooter to showcase this.
Fingers crossed for a new Tokyo Extreme Racer.
For what it's worth, it's quite easy to test whether the game is CPU bound by simply lowering the rendering resolution. BFV can be made to run at just 320x180 by setting the display to 720p and resolution scale to 25%.
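The effective render resolution from that trick is easy to verify; my understanding of BFV's slider is that the resolution scale applies to each axis independently:

```python
def effective_resolution(width, height, scale):
    """Resolution scale applied per axis (assumed behavior of BFV's slider)."""
    return int(width * scale), int(height * scale)

# 720p display with resolution scale at 25%
print(effective_resolution(1280, 720, 0.25))  # (320, 180)
```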
I won't speak for the 7700k but my own i7-3770 is entirely thread bound while using DXR in this game. I've made a quick video to demonstrate this and you'll note how low GPU utilization always stays even at 1080p. Rendering at 25% will even cause the GPU to start downclocking itself and yet performance remains very similar to Kyle's numbers.
Glad I got my new 1080 Ti when I did, best of both worlds.
They're all clocked the same with hyperthreading disabled and Skylake has, at best, a 5-10% IPC increase over Broadwell. This is as close to an "ideal" test as you could get at the time the article was written short of disabling cores.
If you for whatever reason still want to take an issue with those links then Google it yourself. There are tons of other articles comparing CPU speed between generations of CPUs that don't show huge increases in performance with additional cores. As I said originally, the vast majority of game engines don't scale beyond 6 cores. Some might do 8. That's about it.
If I'm wrong then prove it, don't throw out meaningless objections that you didn't even bother to research in the first place.
You guys arguing, take it to pm.
I know this is important to everyone. 9900K testing has begun.
Both 1080p and 1440p saw a 75% increase in vram usage going from dx11 to dx12.
I feel like nVidia drivers are doing this on purpose so the penalty going from dx12 to dxr won't look as bad.
In DX11, medium difficulty was using around 5 to 6 GB. Played it on Hardcore, same single-player level: 11.9 GB of VRAM! I am going to see if I can duplicate this; if so, I will do another video on that. This was at 1080p max settings.
Comparing DX11 to DX12: DX12 seems to have more consistent frame rates, while DX11 has higher highs but also deeper lows, with way more fluctuation, it seems.
Vulkan is good though.
AMD did make some use of it, but given how far behind they were, the results were "meh".
Just to add to the debate surrounding the frame rate results in this testing: my rig, a 1080 Ti (2050 MHz) with a 5930K (4.5 GHz), seems to be getting higher frame rates than the 2080 Ti at 1440p, and I'm using DX12. Perhaps there is something to this >8 cores conjecture; looking forward to seeing the 9900K results.
It uses most of the ram available.
So whether you have a 6 GB, 8 GB, or 11 GB card, it will use the RAM that's available. Now there is a limit, as you can't use a 4 GB card for 4K, and the 2060 is not fast enough for 4K, so there is no need for excessive RAM amounts; you're not going to use more than 6 GB at 1440p.
Here is a simple solution that no one wants to talk about.
If you want to play at 4K with a 2080 Ti at 60 FPS, just play on high settings, not ultra, and use ray tracing on low, because unless you use still pictures that are magnified, you will not be able to tell the difference.
1440p? High settings, RT on low.
Looks great, 99% comparable to ultra settings and high RT, with a 2080 at 60 FPS.
1080p: same for the 2070; high settings, RT on low, 60 FPS.
At 1080p with high settings and low RT, the RTX 2060 is 19% slower than a 2070.
So put a few choice settings on medium. Barely a difference in visual quality, and you're back to 60 FPS.
All with no stuttering.
For me, those VRAM numbers put into question the viability of the RTX 2080 in future titles; it only has 8 GB of RAM, and at 1440p DX12 it uses nearly the whole 8 GB.
As someone who owns a 1060, I was looking forward to the releases this year; however, I'm going to hold onto my cash for now. I don't believe any current cards, with the exception of the 2080 Ti and the announced R7, have enough VRAM for future titles, and the 2080 Ti is just too expensive. I have two kids and a mortgage, goddamn it.
My 1060 plays all games at 1440p at high settings fine; I turn off AA as I just don't care when gaming at 1440p. But I was looking at VR next, and therefore the higher-end cards.
But the question remains, is that memory usage an actual working set - or is the DX12 path just caching what it thinks it can, more explicitly than the DX11 path. Unlike system memory, there's really little downside to absolutely filling up VRAM, even aggressively.
Overall - this is part of the problem with low level APIs - the game developer now has to do the work of tight management of the GPU. If the IHV has incredible drivers, it's a pretty tall order to best what those drivers can do.
"Low level" has many pitfalls, especially if "high level" has relatively few (per hardware platform). If you haven't worked at that low level - I assure you, it's a confusing morass of experimentation and misleading documentation. I say this with love. Kinda.
Utilization and RAM needed are very different.
You won't need 8 GB of VRAM to run this game on ultra settings.
If you have 8 GB it uses 7 or 8; if you have 6 GB, it uses 6 GB.
BIG DIFFERENCE between used and needed.
It is a very interesting game/engine. It will indeed use 8 threads at the same time on a CPU. For me in DX11 on a Vega FE I've seen over 12 GB of RAM used, while in DX12 it stays between 6-7 GB. Same level; maybe some driver stuff going on.
For sure, if the next set of assets is already loaded into available VRAM, then the game will most likely not stutter loading them in: smoother gameplay, if you have the additional memory available.
I ran the game's campaign at 1440p with everything on ultra except AA, and DXR on low, and got these benchmarks:
13-01-2019, 10:21:06 bfv.exe benchmark completed, 287210 frames rendered in 3109.062 s
Average framerate : 92.3 FPS
Minimum framerate : 1.9 FPS
Maximum framerate : 145.8 FPS
1% low framerate : 67.2 FPS
0.1% low framerate : 2.5 FPS
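For anyone unfamiliar with how those 1%/0.1% low figures are derived, here's one common way to compute them from per-frame data. This is a sketch of the usual method (average of the slowest N% of frames), not necessarily exactly what the benchmark tool above does:

```python
def percentile_low(frame_fps, fraction):
    """Average of the slowest `fraction` of frames (e.g. 0.01 for 1% low)."""
    worst = sorted(frame_fps)  # slowest frames first
    n = max(1, int(len(worst) * fraction))
    return sum(worst[:n]) / n

# Synthetic example: 990 smooth frames plus 10 bad hitches.
fps = [100.0] * 990 + [20.0] * 10
print(percentile_low(fps, 0.01))   # 20.0  (average of the worst 10 frames)
print(percentile_low(fps, 0.001))  # 20.0  (the single worst frame)
```

This is also why a single hard hitch (like the 1.9 FPS minimum above) drags the 0.1% low way down while barely moving the average.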
You answered your own question. Exactly: there is one game to test after 5 months since release! lol. So yeah, what's the point of buying these cards based on ray tracing when you barely have any games, and the one that exists is no bueno? At this rate it's a glorified marketing feature.
Of course they did. Otherwise he wouldn't have repeated the entire launch presentation at CES, reassuring us how awesome ray tracing is, like it hasn't been 5 months already, and then kept hammering DLSS, which is still a no-show, lol. Gotta admit he is milking people on hope 5 months after the RTX launch.
Don't you find it strange that you get similar averages at 1440p ultrawide with DXR medium as the article claims to get at 1080p with low DXR?
If you aim at a 55 FPS minimum in the worst parts of Rotterdam with 64 players, yes, you definitely can... not sure why the numbers in this article are SO RIDICULOUSLY LOW.
My favorite line in the article:
I just got BFV and will say for the record that the PC in my sig has no issues running this game in the single player campaign at 4K with DXR set to Ultra and using the Ultra graphics preset with no resolution scaling. I have yet to dive into the multiplayer which, frankly, I'm not interested in. Full disclosure: I got the game for free to test if the 2080 Ti is causing issues with the PG278Q, and I've been splitting my play time between it and my PG27UQ.
Let us know your thoughts after playing on Rotterdam with a full 64 players. Would love to hear about your experiences.
I'm definitely going to give it a try at least to experience the multiplayer myself. I wasn't crazy about BF1, but I'd be eager to have a legitimate opinion on the multiplayer in this game while at the same time seeing the performance disparity between it and the single player with DXR.
Had a little time last night and did a little more playing/testing in single player. Using the rig in sig, DX12/DXR-Medium at 4096x2160. Any blur/distortion effects turned off, everything else maxed. The GPU was kept at 60C max, using the factory OC, which hovered at 1900-1985 MHz core / 14,000 MHz memory. CPU averaged 40-55% usage.
I've been playing the Tirailleur campaign since visually it seems to be one of the more stunning environments to me and a bit more to offer in terms of reflections. Made it to the Egalite section. There are puddles and water in the roads and paths everywhere at this point so there were many opportunities to see the effects of RT vs performance. Suffice it to say Vram was ~7-9GB and FPS had lows ~50-52 and mostly held 55-60fps. You could tell when it was loading or rendering something RT intensive as it might briefly(3-4 seconds) drop to 48fps and vram spiked to 9GB but then re-stabilized to 50-52fps in those moments.
edit: Slowly getting used to the game. May eventually test with 64 players as [H]ard has. If you see someone running around spastically shooting everything who then suddenly stops to stare at a reflection like an idiot and gets killed as I check my metrics, that'll probably be me.