RTX 2060 Super or RX 5700 XT or RX 5700?

You need balanced improvement in all unit counts, not just RT cores.

Without being specific with respect to unit counts, I am generally talking about end results, not the cores themselves. I do understand that Nvidia will have to scale different parts of the GPU to emphasize RT performance more, and that as a result rasterized performance will naturally increase as well.
 
If you are talking about a 10X performance increase in RT, then you need 10X more RT cores and 10X more shader cores, which means 10X more transistors. Which is totally impossible.

The max plausible transistor increase for RTX 3000 might be 30%, not 1000%. You aren't even going to get a 50% increase in RT performance out of that transistor budget, let alone 10X (1000%).
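
Just to put rough numbers on that, here is a back-of-envelope sketch; TU102's ~18.6 billion transistors is Nvidia's published figure, and the 10x / 1.3x multipliers simply restate the assumptions above:

```python
# Back-of-envelope on the transistor budget argument.
# TU102's ~18.6B transistors is Nvidia's published figure; the 10x and 1.3x
# multipliers just restate the post's assumptions, not any leaked spec.

TU102_TRANSISTORS_B = 18.6  # billions

needed_for_10x = TU102_TRANSISTORS_B * 10    # ~186B: scaling RT + shader cores 10x
plausible_next = TU102_TRANSISTORS_B * 1.3   # ~24B: a ~30% bump in budget

print(f"needed for 10X RT: ~{needed_for_10x:.0f}B transistors")
print(f"plausible RTX 3000 budget: ~{plausible_next:.0f}B transistors")
```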

There is a thread speculating on RTX 3000, though; that would be a better place to discuss this.
 
How ready will Samsung be to make big 7nm EUV chips??? How could the 3080Ti have 16GB of RAM - that would mean a 512-bit bus??? I don't see that; 384-bit with faster GDDR6 would be my guess. I won't worry or think too much more about Ampere at this time.

I don't think the 3080Ti would have a 512-bit bus. It's too expensive. Besides, Nvidia is so much more memory efficient that it doesn't really need as much bandwidth. 384-bit GDDR6 should be plenty.
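
For a rough sense of scale, here's a minimal bandwidth sketch; the 14 and 16 Gbps figures are just common GDDR6 speed grades used for illustration, not a claim about what any 3080Ti would actually ship with:

```python
# Back-of-envelope GDDR6 bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps).
# The speed grades below are illustrative, not predicted specs.

def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin  # GB/s

for bus_bits, rate in [(384, 14), (384, 16), (512, 14)]:
    print(f"{bus_bits}-bit @ {rate} Gbps -> {bandwidth_gb_s(bus_bits, rate):.0f} GB/s")
# 384-bit @ 14 Gbps -> 672 GB/s
# 384-bit @ 16 Gbps -> 768 GB/s
# 512-bit @ 14 Gbps -> 896 GB/s
```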
 
Sorry, but WCCFtech is NOT a news site. It ranges from questionable rumors to completely made-up internal extrapolations.

Tip for the future. If you want a site that has a track record of credible leaks, it's videocardz.com. Just about the only credible leak source IMO.

It might not be physically impossible, but it is economically impossible. Transistor costs are stagnant, and a massive increase in RT and raster performance would require a massive increase in transistor count, which entails a massive increase in build costs, which is not going to lead to cheaper prices.

Likewise, the claimed RT performance increase is NOT plausible, even on something like Q2 RTX.

Just because Quake II uses RT for all effects doesn't mean the shader cores are idle. Far from it.

All RT cores do is calculate intersections. You still have to shade the pixels as the result of those intersections, and you still have to denoise those effects.

From the frame time breakdown shown for Control, a typical pure RT effect might use the HW like this:

40% RT intersection testing, 30% shading those results, 30% denoising those results.

Even if you built 10X as many RT cores, that only reduces your frame time from 100% to 64%, and that is on a pure RT path game like Q2.

So 10X the RT cores only gives you about a 56% improvement in frame rate on pure RT titles.
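
Plugging that 40/30/30 split into a quick Amdahl's-law style calculation reproduces those numbers; the split itself is just the rough estimate above, not measured data:

```python
# Amdahl's-law style estimate using the 40/30/30 frame time split quoted above.
# Only the RT intersection portion shrinks when RT cores are multiplied.

def rt_speedup(rt_frac, shade_frac, denoise_frac, rt_core_multiplier):
    """Return (new frame time as a fraction of the old, % fps improvement)."""
    new_time = rt_frac / rt_core_multiplier + shade_frac + denoise_frac
    fps_gain = (1.0 / new_time - 1.0) * 100.0
    return new_time, fps_gain

new_time, gain = rt_speedup(0.40, 0.30, 0.30, 10)
print(f"frame time: {new_time:.0%} of original, fps gain: {gain:.0f}%")
# -> frame time: 64% of original, fps gain: 56%
```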

There is no quick fix for improving RT performance. You need balanced improvement in all unit counts, not just RT cores.

Denoising is done by the tensor cores, not the shaders.

AFAIK RT core performance is "linear", meaning it can increase with both more cores and higher clock speed. The "heavy" part of RT is precisely the ray tracing part (duh!!); the shaders don't actually do much more than what they do without RT.
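
A toy model of that "linear" scaling, just to make it explicit; the 2080 Ti reference point (68 RT cores, ~1545 MHz boost) comes from public specs, while the other part is a made-up placeholder:

```python
# Toy model of "linear" RT core scaling: throughput ~ core count x clock.
# RTX 2080 Ti reference (68 RT cores, ~1545 MHz boost) is from public specs;
# the 84-core / 1700 MHz part below is a hypothetical placeholder.

def relative_rt_throughput(cores, clock_mhz, base_cores=68, base_clock_mhz=1545):
    return (cores / base_cores) * (clock_mhz / base_clock_mhz)

print(f"hypothetical part: {relative_rt_throughput(84, 1700):.2f}x RT throughput")
# -> ~1.36x
```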

I won't claim any performance target for RTX, as there are plenty of factors: core count, clock speeds, and architectural improvements, among other things.

But I don't think your figures are realistic.

We'll see in a few months...
 
I would say denoising was intended to be done on the tensor cores.

AFAICT, almost no one is doing that.

Most are opting for their own custom temporal denoising done on the Shader cores.

Even the Q2 RTX update mentions it's doing temporal denoising.
 
Wow, that would mean a whole lot of wasted space if that's true, as that is the "raison d'être" of the tensor cores. Maybe something is broken? I mean, tensor cores are supposed to be orders of magnitude faster at denoising than shaders.
 
Yeah, the two reasons were supposed to be denoising and DLSS, neither of which has really delivered.

Control has the best DLSS so far, and it turns out they aren't running a tensor network in that version either, but a coded algorithm, which presumably also runs on the shader cores.

Tensor cores have really been massively oversold.
 
It's been established by developers that the RTX architecture is a failure because it's highly inefficient, probably made in a rush and bolted onto an already developed Turing architecture. There is a cache problem of some sort that prevents you from getting the full performance of all the cores working together. That said, Nvidia did great, since their Turing part was well ahead of AMD's architecture, and hardware ray tracing for gaming wasn't even on AMD's radar - it wasn't the real goal for them. Mind that the DXR-optimized driver for Pascal cards only shows how bad those cards are at the specific features Nvidia uses for ray tracing. AMD may be a much better fit by using FP16 algorithms, which would put the Vega 56/64 cards quite close to the RTX 2060. Nvidia may completely rework its architecture for ray tracing and implement it in the CUDA cores, which would bring flexibility. The new DXR version from M$ tends to go in that direction, with new APIs balancing features between raster and ray tracing, which Turing is incapable of.

So, all put together, the real question is whether you need to spend more than $300 on a graphics card if you are not ready to spend much more in less than 6 months to move into the next high end of gaming. There is better value in a card that only needs to last 6 months if you just want to be in the middle class of gaming by then. Meaning some $200 Ampere card will be much better in 6 months than any RTX 2060 S at $400 today. Same with the RX 5700/XT: too expensive for their value in 6 months.
 
5700 if you're going to flash the BIOS, otherwise the 5700 XT is the king right now. Anything else up to a 2080 Ti pounds sand value-wise, and since the new consoles are all going to be AMD, it will continue to eat up PC ports, as RDR2 just showcased.
 