You are not correct about this.
Care to explain?
You are not correct about this.
I know Vega was god awful at VR (relative to an equally priced nVidia). Was this ever resolved with Navi? I can't find any good reviews. Most reviews use VRMark, which is trash and not relevant to real games.
You keep telling yourself that an RTX investment in August 2018 was money well spent for Raytracing purposes... That extra performance in the "fake graphics past" is all you got for your $1200.
In August 2020/2021, that might be a different story AS I'VE SAID MULTIPLE TIMES.
I would imagine we’ll see the same performance hit Turing has if it is similar in process.
A beast called the tessellator has been added which enables games developers to create smoother, less blocky and more organic looking objects in games. This is the change users will probably be most aware of. And it'll show up when someone is looking at the silhouettes of hills and mountains or the profiles of characters in games. Where artists previously had to trade off quality for performance, now artists will have the freedom to create naturalistic scenery. The tessellator represents a natural next step in gaming hardware (in fact the Xbox 360 graphics chip that AMD designed already has a tessellator, and AMD graphics hardware has featured tessellator technology starting with the ATI Radeon HD 2000 series right up to the latest ATI Radeon HD 4000 series cards today).
To overcome these challenges, our developer relations engineers make sure games can realize the full image quality benefits of tessellation while still making good use of GPU resources. This is done by using a variety of adaptive techniques that use high tessellation levels only for parts of a scene that are close to the viewer, on silhouette edges, or in areas requiring fine detail. Our goal is to keep polygon size at or above 16 pixels as much as possible. This allows for a fairly high polygon density, making scenes look great while also running well on all recent GPUs. We have also developed techniques that can help balance tessellation workloads by doing a limited amount of pre-tessellation in vertex shaders, which can help to reduce the impact of bottlenecks in the rendering pipeline.
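To make that "polygons at or above 16 pixels" guideline concrete, here is a minimal sketch of the kind of distance-adaptive factor selection being described. This is my own illustration, not AMD's developer-relations code; the pinhole-projection math, the 8-pixel target edge length, and all function names are assumptions.

```cpp
// Illustrative sketch only: pick a per-edge tessellation factor from the
// edge's projected size on screen, so generated triangles stay near a
// fixed pixel budget. A real implementation would also boost silhouette
// edges and can pre-tessellate in the vertex shader to balance the load.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Screen-space length, in pixels, of a patch edge of given world-space
// length at a given view distance (simple pinhole camera, vertical FOV).
float projected_edge_px(float edge_world_len, float view_dist,
                        float fov_y_rad, float screen_h_px) {
    float px_per_world_unit =
        screen_h_px / (2.0f * view_dist * std::tan(fov_y_rad * 0.5f));
    return edge_world_len * px_per_world_unit;
}

// Tessellation factor for one patch edge: subdivide until each generated
// edge is roughly target_edge_px long, clamped to the hardware range.
float edge_tess_factor(float edge_world_len, float view_dist,
                       float fov_y_rad, float screen_h_px,
                       float target_edge_px) {
    float px = projected_edge_px(edge_world_len, view_dist,
                                 fov_y_rad, screen_h_px);
    return std::clamp(px / target_edge_px, 1.0f, 64.0f);
}

int main() {
    const float fov_y     = 1.0472f;  // ~60 degrees
    const float screen_h  = 1080.0f;
    const float target_px = 8.0f;     // edge length giving roughly 16+ pixel triangles
    const float dists[]   = {2.0f, 20.0f, 200.0f};
    // Same 2-unit patch edge, near vs. far: nearby geometry gets dense
    // tessellation, distant geometry stays coarse.
    for (float d : dists)
        std::printf("distance %6.1f -> tess factor %5.1f\n",
                    d, edge_tess_factor(2.0f, d, fov_y, screen_h, target_px));
    return 0;
}
```

Run at distances of 2, 20, and 200 units, the same patch edge goes from the 64x hardware cap down to a factor of about 1, which is the "high tessellation only where it matters" behaviour described above; a real implementation would add a silhouette test on top of the distance term.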
Go mammoth air next time arou
Also... pretty sure I just killed a 2080ti with a water leak. I might be in the market for big Navi depending on price lol. Koolance clamp failed. The smartest among us mix water and electronics.
But RT is literally Jesus! You must be an AMD fan who only will congratulate RT when AMD supports it (there already was an RT vega demo lol).
Don't argue with the nVZealots... next you will be told that you only think DXR is crap until AMD supports it, then you will change your tune...
And they do not care about your future predictions... only the fact that you haven't drunk the Kool-Aid yet.
And I have tried Quake 2 and found it meh... tried Exodus and that was a bit better, but ended up turning DXR off for the sweet fps. IMHO, Turing is a Jensen mirage: it looked good initially but turned out to be really disappointing.
DXR IS JUST NOT READY YET!!! It doesn't bring any meaningful upgrade to the games in question.
So tl;dr: bring on NAVI!!!!!
Nvidia's patent closely matches their marketing description of RTX so there's a good chance it actually explains Turing's implementation.
AMD's patent is similar with one key difference.
In Nvidia's approach the custom RT hardware does a full iterative BVH traversal and passes the final hit point (or no hit) result back to the shader core for shading.
In AMD's patent the RT hardware passes control back to the shader core multiple times during traversal. This could potentially be slower but more flexible. Depends on how fast the data paths are between the RT hardware and shaders. For both AMD and Nvidia the RT hardware is in the SM/CU.
If anything Nvidia's black box approach is more feasible for a chiplet design but there's no way a chiplet interconnect can match on-chip bandwidth and latency.
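For what it's worth, here is a rough sketch of that control-flow difference as I read it from the descriptions above. This is not vendor code; the toy 1-D "BVH", the NodeResult struct, and every function name are made up purely for illustration.

```cpp
// Toy 1-D "BVH" used only to show where the traversal loop lives in each design.
#include <cstdio>
#include <optional>
#include <vector>

struct Node {
    float lo, hi;      // bounds of this node
    int left, right;   // child indices, -1 for a leaf
    float prim;        // leaf "primitive" (the value returned on a hit)
};

// Pattern A (the Nvidia-patent style described above): from the shader's point
// of view this whole function is one black-box RT-core operation. The loop and
// the traversal stack live inside the fixed-function unit, and only the final
// hit (or a miss) comes back to the shader for shading.
std::optional<float> rt_core_trace(const std::vector<Node>& bvh, float ray) {
    std::vector<int> stack{0};
    while (!stack.empty()) {
        int i = stack.back(); stack.pop_back();
        if (ray < bvh[i].lo || ray > bvh[i].hi) continue;  // missed the bounds
        if (bvh[i].left < 0) return bvh[i].prim;           // leaf: report the hit
        stack.push_back(bvh[i].left);
        stack.push_back(bvh[i].right);
    }
    return std::nullopt;
}

// Pattern B (the AMD-patent style described above): the fixed-function unit
// only answers one node query at a time...
struct NodeResult {
    bool hit;
    bool is_leaf;
    float prim;        // valid if is_leaf
    int left, right;   // valid if !is_leaf
};

NodeResult rt_unit_intersect_node(const Node& n, float ray) {
    return { ray >= n.lo && ray <= n.hi, n.left < 0, n.prim, n.left, n.right };
}

// ...while the shader owns the loop and the stack, so it can reorder, cull, or
// bail out between steps, at the cost of a round trip per node.
std::optional<float> shader_driven_trace(const std::vector<Node>& bvh, float ray) {
    std::vector<int> stack{0};                               // lives in registers/LDS
    while (!stack.empty()) {
        int i = stack.back(); stack.pop_back();
        NodeResult r = rt_unit_intersect_node(bvh[i], ray);  // one hardware query
        if (!r.hit) continue;                                // shader decides the next step
        if (r.is_leaf) return r.prim;
        stack.push_back(r.left);
        stack.push_back(r.right);
    }
    return std::nullopt;
}

int main() {
    std::vector<Node> bvh = {
        {0.f, 10.f, 1, 2, 0.f},    // root
        {0.f, 5.f, -1, -1, 2.f},   // left leaf
        {5.f, 10.f, -1, -1, 7.f},  // right leaf
    };
    std::printf("black box: %.1f, shader driven: %.1f\n",
                rt_core_trace(bvh, 7.f).value_or(-1.f),
                shader_driven_trace(bvh, 7.f).value_or(-1.f));
    return 0;
}
```

In real hardware the interesting question is how cheap those per-node round trips are, which is exactly the bandwidth/latency point made above.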
Go mammoth air next time arou
But RT is literally Jesus! You must be an AMD fan who only will congratulate RT when AMD supports it (there already was an RT vega demo lol).
I'm 100% in the same camp. RT as a whole is great (even SGI did it); it's a big jump. Partial scene RT is mediocre and I don't find it to be impressive enough to be worth it yet. Looks cool in some shots, but in others it's way overblown/unrealistic, like a 3D movie tech demo. Worth an upgrade alone? No.
In a generation or two it will become more playable, more fully RT (not partial), and look even better with faster hardware to support it. Count me in then.
Right now, 3 hashed-out FPS games and a demo benchmark? Barely 1440p 80 fps on a $1,200 card? Hard pass.
Let's not forget Intel is getting into the mix. Nvidia will have something better by then.
Cyberpunk 2077 being a DXR game is, just by itself, worth the price for me... The Witcher 3 was the best game in years... CD Projekt Red know how to build high-I.Q. games with SUPERB storylines.
But no one is forcing you to use DXR.
If you are fine with playing at lesser graphics fidelity... by all means.
But your anger towards DXR, despite those facts, can only lead me to conclude one of two things.
Either you want to play with DXR, but since your favourite vendor does not support it... it has to be BAD.
Or you got touched by DXR in a bad place...so which is it?
Because you had no problems supporting "async compute"... how did that go?
From what I understand, AMD's advantage is that it does not require dedicated RT hardware (or repurposed Tensor cores from workstation cards) that sits idle when not used. What it means is that the same parts can be used for shading when needed and also for RT as needed, instead of dedicated (to the point of being inflexible) hardware for each task. That makes it more scalable according to the scene and also means potentially more silicon available to push RT when it is really heavily needed.
So might end up being faster due to ease of scaling and flexibility.
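A toy way to see that scaling argument (purely illustrative; the unit counts and per-frame workload numbers are made up, and this is nothing like real GPU scheduling):

```cpp
// Compare a design with dedicated RT units (which idle when a frame has little
// RT work) against a unified pool of the same total size that can run either
// shading or RT work. Numbers are arbitrary "unit-cycles", not real workloads.
#include <algorithm>
#include <cstdio>

int main() {
    const double shade_work = 900.0;  // hypothetical shading work this frame
    const double rt_work    = 100.0;  // hypothetical RT work this frame

    // Design A: 48 shading units + 16 dedicated RT units.
    {
        const double shade_units = 48.0, rt_units = 16.0;
        const double frame_time  = std::max(shade_work / shade_units,
                                            rt_work / rt_units);
        const double busy        = shade_work + rt_work;
        const double capacity    = (shade_units + rt_units) * frame_time;
        std::printf("dedicated: frame time %.2f, utilization %.0f%%\n",
                    frame_time, 100.0 * busy / capacity);
    }

    // Design B: 64 flexible units that can do either kind of work.
    {
        const double units      = 64.0;
        const double frame_time = (shade_work + rt_work) / units;
        std::printf("unified:   frame time %.2f, utilization 100%%\n", frame_time);
    }
    return 0;
}
```

With the same 64 units, the unified pool finishes this made-up frame in roughly 15.6 "cycles" versus 18.75 for the split design, because nothing sits idle; whether real workloads behave that way is exactly the open question.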
Re: bandwidth, the Vega IF links are 500 GB/s. Plenty for a CrossFire-style multi-die setup that presents to the OS as a single monolithic GPU.
I don't think we'll see that until after Navi though.
You are not correct about this.
Let's address some of the shortcomings from your post:
1). Cyberpunk 2077 is due for release April 16, 2020, a full 20 months after the release of RTX cards, and that's if it drops on time. You (and the rest of the RT apologists) can't keep using Cyberpunk as the savior of RTX cards.
2). Lesser graphics fidelity at higher frame rates. Not everyone is going to want a pretty image at the expense of frame rate depending on their situations and needs, and realistically, the hardware doesn't exist for you to drive a 1440p monitor at 144Hz with DXR unless you tone down the ray tracing. And then what do you really have? A hybrid with your "graphics past."
3). Based on the current implementation of RT in the sparse examples that are available, it is completely reasonable for N4CR to conclude what he did without "being touched by DXR in a bad place." He clearly stated that it was a performance issue after praising the idea of RT.
4). You always seem to miss the most obvious reason for anger toward DXR... the price at which Nvidia sells its high-end card (the only one that will actually run RT well enough to make a difference).
Partial scene RT is mediocre and I don't find it to be impressive enough to be worth it yet.
Worth an upgrade alone? No.
In a generation or two it will become more playable, more fully RT (not partial), and look even better with faster hardware to support it. Count me in then.
aaaaand another Navi speculation thread has fully devolved into an RTX debate thread. le sigh. to think I was hoping for an informative reasonable discussion about future hardware... what a noob I am
Let's not forget Intel is getting into the mix. Nvidia will have something better by then.
And where did that hype come from exactly? It surely was not part of AMD marketing. Small Navi was hyped to beat RTX 2070 for 1/2 the price; instead it matched it for 4/5 the price.
I’m honestly very skeptical of us ever achieving full screen RT in games. When we get it in movies maybe we can start hoping for it in other products. From what I gather movies are still being made with localized RT.
How do you think we could possibly start with full scene RT? It only makes sense to start with some portion of RT effects where they can be done and grow from there.
Who is suggesting upgrading solely for RT?? That looks like a Straw Man argument to me.
Fully RT scenes are more than a generation or two away. The biggest change in a generation or two is that AMD will be supporting it.
I’m honestly very skeptical of us ever achieving full screen RT in games. When we get it in movies maybe we can start hoping for it in other products. From what I gather movies are still being made with localized RT.
You will be hard pressed to match the dreams of this guy, but I can't even hype that, because I am often on another level, to the degree that people think I am on crack.
No doubt there must have been some AMD fans with high hopes, and so they should have; what AMD had before was not much to talk about.
You can also compare my dream vehicle, which is a 10-15 ton armored personnel carrier, to your dream vehicle, which might be a LaFerrari or something silly like that. They both cost about the same, they both have 4 wheels, and they are both cars, but beyond that you are out of what the two share and into personal preferences.
So my dream car has the largest wheels, weighs the most, and can flatten your car by driving over it, all that after you have been shooting at me with an AK-47 for a few hours with unlimited ammo, while I sit laughing like a madman in my car because you forgot to move your Ferrari out of flattening distance.
Yeah, but "ever" is a long time. I'll tend to stick with not in the foreseeable future.
I had only read recently that Pixar didn't do ray tracing on the original movies and was a bit surprised. I assumed they just did RT and threw a render farm at it.
A bit of googling reveals that "Monsters University" was Pixar's first use of Ray Tracing (for Global Illumination).
Ever? I'm sure it will happen; I'm pretty sure it'll be in my lifetime. It will be similar to anything else: slowly get introduced, as it is in the RTX series, and slowly be built up. It'll be the same then as it is now: a high-end product will be able to do ray tracing without first rendering the scene with polygons, but at a slower speed for higher fidelity. Some people will pay the $ for this must-have and others will wait for it to become more mainstream. A few generations later they'll be arguing about something else.
I’m honestly very skeptical of us ever achieving full screen RT in games. When we get it in movies maybe we can start hoping for it in other products. From what I gather movies are still being made with localized RT.
Ever is a long time. I’d like to think it is possible to render real-time lighting IN REAL TIME (lol), but I remain skeptical. I didn’t read about the Monsters Inc RT; honestly that’s a shocker to me, as I was only aware of the algorithm they made to do Sulley’s hair. The RT in Cars is more worthy; I think they took a week to render a frame for that movie.
"What if we made these lights just work?" Kalache told The Verge, in a Jobsian turn of phrase. Instead of building reflections and shadows manually, why not do it automatically every time an artist placed a light source? "It was as if every time you took a photograph, you built a new camera," Kalache says. "It takes away from the art of taking a picture. We wanted to stop being engineers and be artists." The result was Pixar's new Global Illumination lighting system, which gets its first public debut on Thursday with Monsters University.
aaaaand another Navi speculation thread has fully devolved into an RTX debate thread. le sigh. to think I was hoping for an informative reasonable discussion about future hardware... what a noob I am
A week to render a frame sounds absurd.
Verge has a decent write up, around Monsters U, about how much simpler it is for designers to use Ray Tracing:
https://www.theverge.com/2013/6/21/...d-the-way-light-works-for-monsters-university
When we talk about real-time ray tracing, it's more about the complexity of the tracing being done.
I wouldn't say we will never, ever be able to do real ray tracing in real time. However, if we are talking about current Pixar levels of complexity in real time... yeah, that may never be realistic. At this point it would take something like 1,512,000x the current performance of Pixar's render farm to achieve 60 fps, as it's reported it took 7 hours per frame for Toy Story 3.
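(Sanity-checking that figure: 7 hours per frame is 25,200 seconds, real time at 60 fps allows 1/60 of a second per frame, and 25,200 × 60 = 1,512,000, so the math holds.)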
The question really becomes how much we really need to do in a video game scene. Who knows, perhaps 10 years from now, through a mix of raster and ray tracing, we may have a very convincing facsimile of a playable Pixar-type game. The right game designers can already do a pretty good job of faking that with light maps anyway. I'm not sure RT is the killer feature that gives us hyper realism.
I'm not sure RT is the killer feature that gives us hyper realism.
If it isn't 'the end', then it is a massive, mandatory next step.
Also- people don't necessarily want 'hyper-realism' in games- they're there to escape!
Think of the era when Monsters or Cars were created. Frame render planning was a thing; I'm probably wrong about it taking 7 days, but it was a looooong time.
The most recent Terminator had a write-up about this: due to the time it took to render each frame with RT, most scenes had no chance for editing.
- On average, it took 17 hours to render each frame of the film. In addition, it was the first Pixar film to use ray-tracing to accurately create the reflections on the cars.
"Nvidia killer" is the RDNA2 chip that takes the top Gaming crown from Nvidia in 2020. While "Big Navi" is the chip that is bigger than Navi10, that is being released within the next few months @ above 2080 Super's Performance.
So:
Radeon 5800 Series = big navi ($499)
Radeon 5900 Series = nvidia killer ($799)
So many Nvidia trolls here are being jebaited right now... because most of you will be buying AMD GPUs within the next year. Ampere won't be out for 13 months, and that is just about the time that AMD will release its 4th 7nm GPU... (before Nvidia releases its first).
In all honesty, Nvidia really doesn't factor into anything for gamers... they have nothing to compete with RDNA. RDNA was a surprise to the gaming industry, as Dr. Su kept the new GPU architecture hidden/secret really well. Now Nvidia is following (as in behind) AMD architecturally and has no answer for 13 months.
Meanwhile, AMD will just keep pumping out cheap 7nm GPUs while Jensen tries to figure out how to make a 225 mm² chip that can compete with mainstream Navi 10, because a shrunken-down Turing is still much, much bigger than RDNA.
In fact, Nvidia killed themselves by thinking the 2080 Super was the best gaming GPU a company could provide to gamers... at $800.