Discussion in 'Video Cards' started by Stoly, Aug 9, 2019.
Care to explain?
Unfortunately the [H] was one of the only deep-dive VR sites. I miss the content. VR gets covered by many who are excited by the tech but know nothing of the actual tech, or how to analyze it and break it down into any useful data.
Don't argue with the nVZealots... next you will be told that you only think DXR is crap until AMD supports it, then you will change your tune.
And they do not bother about your future predictions... only the fact that you haven't drunk the Kool-Aid yet.
And I have tried Quake 2 and found it meh... tried Metro Exodus and that was a bit better, but ended up turning DXR off for the sweet fps. Imho, Turing is a Jensen mirage: it looked good initially but turned out to be really disappointing.
DXR IS JUST NOT READY YET!!! It does not bring any meaningful upgrade to the games in question.
So tl;dr: bring on NAVI!!!!!
Nvidia's patent closely matches their marketing description of RTX so there's a good chance it actually explains Turing's implementation.
AMD's patent is similar with one key difference.
In Nvidia's approach the custom RT hardware does a full iterative BVH traversal and passes the final hit point (or no hit) result back to the shader core for shading.
In AMD's patent the RT hardware passes control back to the shader core multiple times during traversal. This could potentially be slower but more flexible. Depends on how fast the data paths are between the RT hardware and shaders. For both AMD and Nvidia the RT hardware is in the SM/CU.
If anything Nvidia's black box approach is more feasible for a chiplet design but there's no way a chiplet interconnect can match on-chip bandwidth and latency.
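To make the difference concrete, here is a toy sketch of the two control-flow models described above. This is not either vendor's actual design — the tree, the names, and the 1-D "intersection" test are all invented for illustration; the point is only where control returns to the shader.

```python
# Toy BVH: a binary tree of 1-D intervals; "hit" means a query point
# falls inside a leaf's interval.  Purely illustrative.

class Node:
    def __init__(self, lo, hi, left=None, right=None, prim=None):
        self.lo, self.hi = lo, hi          # bounding interval
        self.left, self.right = left, right
        self.prim = prim                   # leaf payload, if any

def traverse_full(node, x):
    """Nvidia-style model: the fixed-function unit walks the whole BVH
    and hands only the final hit (or None) back to the shader."""
    stack = [node]
    while stack:
        n = stack.pop()
        if n is None or not (n.lo <= x <= n.hi):
            continue
        if n.prim is not None:
            return n.prim                  # single final result
        stack.extend((n.left, n.right))
    return None

def traverse_stepwise(node, x):
    """AMD-patent-style model: the unit tests one node per request and
    yields control back to the shader after every step."""
    stack = [node]
    while stack:
        n = stack.pop()
        if n is None or not (n.lo <= x <= n.hi):
            continue
        yield n                            # shader regains control here
        if n.prim is None:
            stack.extend((n.left, n.right))
```

In the stepwise version the shader could reorder, cull, or replace nodes between yields — that is the flexibility (and the extra round-trip cost) the patent comparison is about.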
/cue: Too much raytracing...
Just like with AMD's tessellation deficit: even though tessellation is dynamically adaptive by design and set by the DEVELOPER, AMD introduced a "performance" slider.
AMD was all "It is all about tessellation!"...then got hammered by NVIDIA...and changed their stance to whining about too much of it being used.
It became this when they lost badly in performance:
Gone was the "We have had tessellation in our hardware for years, it is a BEAST!"...and now it was "Wooooaaa...TOO much tessellation"...the beast became a jellyfish.
But it is to be expected...looking at the R&D budgets for each company.
And a little trip down memory lane:
Yet some people expect new features to run at MAX fidelity...with no performance loss at all.
In a few years they will be expecting new features to run FASTER than without...I kid you not.
Go mammoth air next time around.
But RT is literally Jesus! You must be an AMD fan who will only congratulate RT when AMD supports it (there already was an RT Vega demo, lol).
I'm 100% in the same camp. RT as a whole is great; even SGI did it, and it's a big jump. Partial-scene RT is mediocre, and I don't find it impressive enough to be worth it yet. It looks cool in some shots, but in others it's way overblown/unrealistic, like a 3D movie tech demo. Worth an upgrade alone? No.
In a generation or two it will become more playable, more fully RT (not partial), and look even better with faster hardware to support it. Count me in then.
Right now, 3 hashed-out FPS games and a demo benchmark? Barely 1440p 80fps on a $1,200 card? Hard pass.
From what I understand, AMD's advantage is that it does not require dedicated RT hardware/repurposed Tensor cores from workstation cards to sit idle when not used. What it means is that the same parts can be used for shading when needed, and also used for RT as needed, instead of dedicated (to the point of being inflexible) hardware for each task. That makes it more scalable according to the scene, and also means potentially more silicon to push RT when it is really heavily needed. So it might end up being faster due to ease of scaling and flexibility.
Re: bandwidth, the Vega IF links are 500GB/s. Plenty for multiple chips to appear to the OS as one monolithic GPU.
I don't think we'll see that until after Navi though.
CyberPunk 2077 being a DXR game is worth the price just for me...The Witcher 3 was the best game in years...CD Projekt Red knows how to build high-I.Q. games with SUPERB story lines.
But no one is forcing you to use DXR.
If you are fine with playing at lesser graphics fidelity...by all means.
But your anger towards DXR, despite those facts can only lead me to conclude one of two things.
Either you want to play with DXR, but since your favourite vendor does not support it...it has to be BAD.
Or you got touched by DXR in a bad place...so which is it?
Because you had no problem supporting "async compute"...how did that go?
But going by the reasoning of some people in here (with AMD and Nvidia having decades of a head start on them), it will be a long time before they can compete.
Personally I don't care all that much about a head start in this kind of ballgame. If we were talking about something where highly skilled people make things with their hands, then sure, because it does take a while to train people to do some things.
Okay, you can't just throw 5 billion transistors at silicon and have an awesome GPU come out at the end, but I do think AMD actually knows what it takes to do it right; it is only now that they have bothered, instead of just coasting along as they have for years (and if I were a shareholder, I would have been pissed about that).
Let's address some of the shortcomings from your post:
1). CyberPunk 2077 is due for release April 16, 2020, a full 20 months after the release of RTX cards, and that's if it drops on time. You (and the rest of the RT apologists) can't keep using CyberPunk as the savior of RTX cards.
2). Lesser graphics fidelity at higher frame rates. Not everyone is going to want a pretty image at the expense of frame rate depending on their situations and needs, and realistically, the hardware doesn't exist for you to drive a 1440p monitor at 144Hz with DXR unless you tone down the ray tracing. And then what do you really have? A hybrid with your "graphics past."
3). Based on the current implementation of RT in the sparse examples that are available, it is completely reasonable for N4CR to conclude what he did without "being touched by DXR in a bad place." He clearly stated that it was a performance issue after praising the idea of RT.
4). You always seem to miss the most obvious reason for anger toward DXR...the price Nvidia charges for its high-end card (the only one that will actually run RT well enough to make a difference).
Not exactly. According to the patent there is dedicated hardware that will be idle when not used. The reusable part is the memory pipeline that also loads textures but the traversal and intersection hardware will be idle. AMD is clearly stating here that those operations are too slow to run on general purpose shaders. Reusing the texture memory pipeline is an interesting idea but that means raytracing will compete with texturing for memory accesses. Nothing is free I guess.
"A fixed function BVH intersection testing and traversal (a common and expensive operation in ray tracers) logic is implemented on texture processors. This enables the performance and power efficiency of the ray tracing to be substantially improved without expanding high area and effort costs. High bandwidth paths within the texture processor and shader units that are used for texture processing are reused for BVH intersection testing and traversal. In general, a texture processor receives an instruction from the shader unit that includes ray data and BVH node pointer information. The texture processor fetches the BVH node data from memory using, for example, 16 double word (DW) block loads. The texture processor performs four ray-box intersections and children sorting for box nodes and 1 ray-triangle intersection for triangle nodes. The intersection results are returned to the shader unit."
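The ray-box step that quote assigns to the texture processor is the classic slab test. Here is a minimal plain-Python sketch of it for readers unfamiliar with the operation — the function name and the scalar form are mine; per the quote, the real unit runs four of these per BVH node in fixed-function hardware.

```python
# Slab-method ray/axis-aligned-box intersection.  Illustrative only.

def ray_box_hit(origin, inv_dir, box_min, box_max):
    """Return True if the ray origin + t*dir (t >= 0) enters the box.
    inv_dir holds 1/dir per axis, precomputed as real tracers do."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv
        t1 = (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0                # ray points the negative way
        t_near = max(t_near, t0)           # latest slab entry so far
        t_far = min(t_far, t1)             # earliest slab exit so far
        if t_near > t_far:
            return False                   # slab intervals don't overlap: miss
    return True
```

The appeal of doing this in hardware is clear from the shape of the code: it is a tight loop of multiplies, compares, and min/max — cheap per unit, but executed millions of times per frame during traversal.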
Speed and flexibility are tradeoffs. You don't usually get to maximize both for the same transistor budget.
On-chip bandwidth is at least 10x that (for L2 cache).
Actually, he is correct.
It will be a major GPU breakthrough the first time anyone gets multiple gaming GPU chips/chiplets working without crossfire/SLI software and/or duplicated memory pools.
So far there is ZERO indication that anyone has this kind of multiple chip gaming GPU capability on the horizon.
Some people keep pointing to the multi-chip GPU research paper that was done by NVidia, but can't seem to recognize that it was a compute design, not a gaming GPU design. Compute designs don't have the same kind of real-time synchronization/latency issues as gaming. Heck, you can run big compute problems across multiple computers, or even across a geographically diverse regional network. Compute is not latency sensitive like real-time graphics.
It will be an enormous coup if/when AMD, NVidia or Intel first makes a workable multi-chip design that doesn't suffer from driver/memory issues.
But again, no real sign of this breakthrough coming soon.
It's worth it for me....funny that it gives you anger...we are talking about a LUXURY item here...and a cheap one at that.
Yes, lower graphics fidelity, as I stated....fake graphics at higher FPS is still fake graphics.
He can turn it off...he seems very angry about something he does not want....and so do you.
Again....a LUXURY item here...and a cheap one at that.
How do you think we could possibly start with full scene RT? It only makes sense to start with some portion of RT effects where they can be done and grow from there.
Who is suggesting upgrading solely for RT?? That looks like a Straw Man argument to me.
Fully RT scenes are more than a generation or two away. The biggest change in a generation or two is that AMD will be supporting it.
Whenever I see a headline like the OP's I always take it with a huge grain of salt; whether it be for AMD or nVidia.
I'll always wait for 3rd party reviews and hardly ever look upon a company's promotional material as 'fact'.
That said, competition is great and I hope the rumors are true. AMD needs the boost.
aaaaand another Navi speculation thread has fully devolved into an RTX debate thread. le sigh. to think I was hoping for an informative reasonable discussion about future hardware... what a noob I am
Really, no surprise there. However, they do not speak for the majority and therefore, there are other places to have good tech discussions.
I will live with the better overall image quality that AMD has, anyways.
Edit: Which is better looking, a compressed image or an uncompressed image?
The title of this thread is "Nvidia Killer". If that isn't an invitation to compare to NVidia's product (AKA RTX cards), I don't know what is.
Even if this new card *is* an "Nvidia Killer", it will probably take several generations of AMD being ahead in order to climb back on top.
What I am saying is that Ryzen came out in 2017, and while reviews were good, it took them 2 years to get to the position they're at today.
So I think it can't just be one killer card, it has to be several cards over a year or two that are all great before people will see the light.
They can however, make a stellar price/performance beast - even if not absolute top dog.
My wager would be on something with 2080ti-ish levels of performance (sans RT) at a noticeably lower price. That's still a market spoiler - as we've seen in this thread many buyers today are far more interested in traditional rendering performance. NV would be put in an awkward position, and they'd have to triple down on RT as a selling point - which again, doesn't sway everyone.
Intel is getting into the mid-tier mix, where AMD is already mixing it up. They won't bring anything to the high end for a bit.
And where did that hype come from exactly? It surely was not part of AMD marketing.
I'm honestly very skeptical of us ever achieving full-scene RT in games. When we get it in movies, maybe we can start hoping for it in other products. From what I gather, movies are still being made with localized RT.
You will be hard pressed to match the dreams of this guy, but I can't even hype that, because I am often on another level to the degree that people think I am on crack.
No doubt there must have been some AMD fans with high hopes, and rightly so; what AMD had before was not much to talk about.
You can also compare my dream vehicle, which is a 10-15 ton armored personnel carrier, to your dream vehicle, which might be a LaFerrari or something silly like that. They both cost about the same, they both have 4 wheels, and they are both cars, but beyond that you are out of what the two share and into personal preferences.
So my dream car has the largest wheels, weighs the most, and can flatten your car by driving over it, all after you have been shooting at me with an AK-47 for a few hours with unlimited ammo, while I sit laughing like a madman in my car because you forgot to move your Ferrari out of flattening distance.
Yeah, but "ever" is a long time. I'll tend to stick with not in the foreseeable future.
I had only read recently that Pixar didn't do ray tracing on the original movies and was a bit surprised. I assumed they just did RT and threw a render farm at it.
A bit of googling reveals that "Monsters University" was Pixar's first use of Ray Tracing (for Global Illumination).
I... have no idea how to parse this.
People misunderstand what we in general are talking about when we talk about ray tracing in games. Ray tracing in games is a hybrid thing. The engine may calculate a few rays with very tightly defined boundaries for a much more realistic simulation, which may look slightly better than the faked light-map approach. Of course that isn't always going to be the case, and light maps allow for very specific artistic outputs. Sure, some shafts of light in a game done with a light map may not be as "realistic," but what the artist wants to achieve may not be very realistic to begin with.
For cartoony-style games... ray tracing is not even something game developers want. The control of a light map is preferable. In the 3D rendering world, IES (Illuminating Engineering Society) files are basically light maps for ray-traced lighting, used to better emulate throw patterns from light bulbs. I think I'm saying light maps are not evil, and games with good texture artists are not simply inferior to plugging in a bunch of ray-style light-bounce calculations.
What we are getting with DXR is NOT ray tracing for real... it's a hybrid that MAY look better if implemented properly in the right type of game. Of course some games are going for hyper-realism, and the final product with global illumination can no doubt look incredible. I am not really sure we should even be calling it ray-traced gaming though... last I checked, it still takes Radeon Pro and Quadro cards multiple minutes to perform a single-frame Blender render of even a basic scene at 1080p.
Ever is a long time. I'd like to think it is possible to render real-time lighting IN REAL TIME (lol), but I remain skeptical. I didn't read about the Monsters Inc. RT; honestly that's a shocker to me, as I was only aware of the algorithm they made to do Sulley's hair. The RT in Cars is more noteworthy; I think they took a week to render a frame for that movie.
Ever? I'm sure it will happen; I'm pretty sure it'll be in my lifetime. It will be similar to anything else: slowly introduced, as it is in the RTX series, and slowly built up. It'll be the same then as it is now: a high-end product will be able to do raytracing without first rendering the scene with polygons, but at a slower speed for higher fidelity. Some people will pay the $ for this have-to-have, and others will wait for it to become more mainstream. A few generations later they'll be arguing about something else.
When we talk about real-time ray tracing, it's more about the complexity of the tracing being done.
I wouldn't say we will never ever be able to do real ray tracing in real time. However, if we are talking about the current Pixar level of complexity in real time... yeah, that may never be realistic. At this point it would take something like 1,512,000x the current performance of Pixar's render farm to achieve 60fps, as it's reported it took 7 hours per frame for Toy Story 3.
The question really becomes how much we really need to do in a video game scene. Who knows, perhaps 10 years from now, through a mix of raster and ray tracing, we may have a very convincing facsimile of a playable Pixar-type game. The right game designers can already do a pretty good job of faking that with light maps anyway. I'm not sure RT is the killer feature that gives us hyper-realism.
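For anyone who wants to check the 1,512,000x figure, the arithmetic is short: 7 hours per frame versus a 1/60-second frame budget.

```python
# Sanity-check the speedup figure: reported 7 h/frame vs a 60 fps budget.
hours_per_frame = 7                          # reported Toy Story 3 render time
fps_target = 60                              # real-time frame budget
farm_seconds = hours_per_frame * 3600        # 25,200 s per frame today
speedup_needed = farm_seconds * fps_target   # want 1/60 s per frame instead
print(f"{speedup_needed:,}x")                # 1,512,000x
```

So the quoted multiplier checks out exactly, given the 7-hour premise.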
A week to render a frame sounds absurd.
The Verge has a decent write-up, from around Monsters University, about how much simpler ray tracing makes things for designers:
How exactly do you discuss next generation hardware and not talk about raytracing?
Think of the era when Monsters or Cars were created. Frame render planning was a thing; I'm probably wrong about it taking 7 days, but it was a looooong time either way.
The most recent Terminator had a write-up about this: due to the time it took to render frames with RT, most scenes had no chance for editing.
Bruh, Gamer X says the new Xbox and PS will do “real raytracing” via RDNA2, not this fake stuff we have now. Gotta keep up broski. That 1,512,000x increase is coming soon, to a console near you.
If it isn't 'the end', then it is a massive, mandatory next step.
Also- people don't necessarily want 'hyper-realism' in games- they're there to escape!
I do miss how games looked ... like games.
Pixar's quoted frame render times make no sense, unless that is per CPU, per core, or something like that:
Unless something is wrong with my math (please check), 17 hours/frame is hundreds of years of render time for a two-hour movie:
It's a ~2 hour movie. (7200 seconds x 24 frames/sec x 17 hours)/(24 hours x 365 days) = 335 years...
So it sounds like a nonsense number to me. In reality the real time to render a frame on their render farm is likely less than an hour.
I believe when they quote those numbers, it is per computer.
So one frame does take 17 hours, but if you have 17 computers, it averages out to 1 hour/frame, etc.
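Both readings side by side make that interpretation easy to see. The serial figure below matches the post's 335-year calculation; the machine count in the second half is a made-up number purely to show the scale at which the schedule becomes plausible.

```python
# "17 hours/frame" read two ways: serially on one machine, and spread
# across a farm (the 2,000-machine count is a hypothetical, not a Pixar figure).
frames = 2 * 3600 * 24                   # two-hour movie at 24 fps = 172,800 frames
total_hours = frames * 17                # if each frame takes 17 machine-hours
serial_years = total_hours / (24 * 365)
print(round(serial_years))               # 335 - matches the post's math

machines = 2000                          # hypothetical farm size
farm_days = total_hours / machines / 24
print(round(farm_days))                  # 61 days: a plausible production render
```

So the per-machine reading turns an absurd 335 years into a couple of months of farm time, which is consistent with how these numbers are usually quoted.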