Nvidia Killer

I know Vega was god-awful at VR (relative to an equally priced nVidia card). Was this ever resolved with Navi? I can't find any good reviews. Most reviews use VRMark, which is trash and not relevant to real games.

Unfortunately the [H] was one of the only sites doing deep-dive VR testing. I miss the content. VR gets covered by many who are excited by the tech but know nothing of the actual tech, or how to analyze it and break it down into any useful data.
 
You keep telling yourself that an RTX investment in August 2018 was money well spent for Raytracing purposes... That extra performance in the "fake graphics past" is all you got for your $1200.

In August 2020/2021, that might be a different story AS I'VE SAID MULTIPLE TIMES.

Don't argue with the nVZealots.. next you will be told that you only think DXR is crap until AMD supports it, then you will change your tune..

And they do not bother about your future predictions.. only the fact that you haven't drunk the Kool-Aid yet.



and I have tried Quake 2 and found it meh... tried Exodus and that was a bit better, but ended up turning DXR off for the sweet fps. IMHO, Turing is a Jensen mirage: looked good initially, but turned out to be really disappointing.

DXR IS JUST NOT READY YET!!! Not ready to bring any meaningful upgrade to the games in question.


So TL;DR: bring on NAVI!!!!!
 
I would imagine we’ll see the same performance hit Turing has if it is similar in process.

Nvidia's patent closely matches their marketing description of RTX so there's a good chance it actually explains Turing's implementation.

AMD's patent is similar with one key difference.
In Nvidia's approach the custom RT hardware does a full iterative BVH traversal and passes the final hit point (or no hit) result back to the shader core for shading.

In AMD's patent the RT hardware passes control back to the shader core multiple times during traversal. This could potentially be slower but more flexible. Depends on how fast the data paths are between the RT hardware and shaders. For both AMD and Nvidia the RT hardware is in the SM/CU.
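
Roughly, that difference looks like the sketch below. This is pseudo-Python of how I read the two patents; every name and interface here is made up for illustration, not from either document.

```python
# A sketch of the two approaches as I read the patents. All names are invented
# for illustration; neither patent describes an actual API like this.

def trace_ray_turing_style(rt_core, shader, ray, bvh_root):
    # Nvidia-style: the RT core runs the entire iterative BVH traversal in
    # fixed-function hardware and only returns the final hit (or miss).
    hit = rt_core.traverse_and_intersect(ray, bvh_root)  # one black-box call
    return shader.shade(ray, hit)

def trace_ray_amd_patent_style(intersect_unit, shader, ray, bvh_root):
    # AMD-patent-style: the shader program owns the traversal loop, and the
    # fixed-function unit only tests one node at a time, so control bounces
    # between the shader and the RT hardware many times per ray.
    stack, closest = [bvh_root], None
    while stack:
        result = intersect_unit.test_node(ray, stack.pop())
        if result.is_box_node:
            stack.extend(result.hit_children)  # shader decides what to visit next
        elif result.hit and (closest is None or result.t < closest.t):
            closest = result
    return shader.shade(ray, closest)
```

Whether the second model ends up slower depends, as above, on how cheap those shader-to-RT-hardware round trips are.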

If anything Nvidia's black box approach is more feasible for a chiplet design but there's no way a chiplet interconnect can match on-chip bandwidth and latency.
 
Cue: "Too much raytracing..."

Just like with AMD's tessellation deficit: even though tessellation is dynamically adaptive by design and set by the DEVELOPER, they introduced a "performance" slider.
AMD was all "It is all about tessellation!"...then got hammered by NVIDIA...and changed their stance to whining about too much of it being used.

https://www.cdrinfo.com/d7/content/amd-demonstrates-first-microsoft-directx-11-graphics-processor

A beast called the tessellator has been added which enables games developers to create smoother, less blocky and more organic looking objects in games. This is the change users will probably be most aware of. And it'll show up when someone is looking at the silhouettes of hills and mountains or the profiles of characters in games. Where artists previously had to trade off quality for performance, now artists will have the freedom to create naturalistic scenery. The tessellator represents a natural next step in gaming hardware (in fact the Xbox 360 graphics chip that AMD designed already has a tessellator, and AMD graphics hardware has featured tessellator technology starting with the ATI Radeon HD 2000 series right up to the latest ATI Radeon HD 4000 series cards today).

It became this when they lost badly in performance:
https://www.geeks3d.com/20101201/amd-graphics-blog-tessellation-for-all/

To overcome these challenges, our developer relations engineers make sure games can realize the full image quality benefits of tessellation while still making good use of GPU resources. This is done by using a variety of adaptive techniques that use high tessellation levels only for parts of a scene that are close to the viewer, on silhouette edges, or in areas requiring fine detail. Our goal is to keep polygon size at or above 16 pixels as much as possible. This allows for a fairly high polygon density, making scenes look great while also running well on all recent GPUs. We have also developed techniques that can help balance tessellation workloads by doing a limited amount of pre-tessellation in vertex shaders, which can help to reduce the impact of bottlenecks in the rendering pipeline.

Gone were the claims of "We have had tessellation in our hardware for years, it is a BEAST!"...now it was "Wooooaaa...TOO much tessellation"...the beast became a jellyfish.

But it is to be expected...looking at the R&D budgets of each company.

And a little trip down memory lane:
T&L performance:
https://www.anandtech.com/show/391/5
AA performance:
https://www.anandtech.com/show/850/8
Anisotropic performance:
https://www.anandtech.com/show/875/14


Yet some people expect new features to run at MAX fidelity...with no performance loss at all.

In a few years they will be expecting new features to run FASTER than without them...I kid you not.
 
Also... pretty sure I just killed a 2080ti with a water leak. I might be in the market for big Navi depending on price lol. Koolance clamp failed. The smartest among us mix water and electronics. ;)
Go mammoth air next time around.
Don't argue with the nVZealots.. next you will be told that you only think DXR is crap until AMD supports it, then you will change your tune..

And they do not bother about your future predictions.. only the fact that you haven't drunk the Kool-Aid yet.



and I have tried Quake 2 and found it meh... tried Exodus and that was a bit better, but ended up turning DXR off for the sweet fps. IMHO, Turing is a Jensen mirage: looked good initially, but turned out to be really disappointing.

DXR IS JUST NOT READY YET!!! Not ready to bring any meaningful upgrade to the games in question.


So TL;DR: bring on NAVI!!!!!
But RT is literally Jesus! You must be an AMD fan who will only congratulate RT when AMD supports it (there already was an RT Vega demo lol).

I'm 100% in the same camp. RT as a whole is great (even SGI did it back in the day), big jump. Partial scene RT is mediocre and I don't find it to be impressive enough to be worth it yet. Looks cool in some shots but in others way overblown/unrealistic, like a 3D movie tech demo. Worth an upgrade alone? No.
In a generation or two it will become more playable, more fully RT (not partial), and look even better with faster hardware to support it. Count me in then.

Right now, 3 hashed-out FPS games and a demo benchmark? Barely 1440p 80fps on a $1200 card? Hard pass.
 
Nvidia's patent closely matches their marketing description of RTX so there's a good chance it actually explains Turing's implementation.

AMD's patent is similar with one key difference.
In Nvidia's approach the custom RT hardware does a full iterative BVH traversal and passes the final hit point (or no hit) result back to the shader core for shading.

In AMD's patent the RT hardware passes control back to the shader core multiple times during traversal. This could potentially be slower but more flexible. Depends on how fast the data paths are between the RT hardware and shaders. For both AMD and Nvidia the RT hardware is in the SM/CU.

If anything Nvidia's black box approach is more feasible for a chiplet design but there's no way a chiplet interconnect can match on-chip bandwidth and latency.

From what I understand, AMD's advantage is that it does not require dedicated RT hardware/repurposed commercial Tensor cores from workstation cards to sit idle when not used. What it means is that parts can be used for shading when needed, and the associated parts also used for RT as needed, instead of dedicated (to the point of being inflexible) hardware for each task. It makes it more scalable according to the scene and also means potentially more silicon to push RT when it's really heavily needed. So it might end up being faster due to ease of scaling and flexibility.
Re: bandwidth, the Vega IF links are 500 GB/s. Plenty for a CrossFire-like multi-die setup that presents to the OS as one monolithic GPU.
I don't think we'll see that until after Navi though.
 
Go mammoth air next time around.

But RT is literally Jesus! You must be an AMD fan who will only congratulate RT when AMD supports it (there already was an RT Vega demo lol).

I'm 100% in the same camp. RT as a whole is great (even SGI did it back in the day), big jump. Partial scene RT is mediocre and I don't find it to be impressive enough to be worth it yet. Looks cool in some shots but in others way overblown/unrealistic, like a 3D movie tech demo. Worth an upgrade alone? No.
In a generation or two it will become more playable, more fully RT (not partial), and look even better with faster hardware to support it. Count me in then.

Right now, 3 hashed-out FPS games and a demo benchmark? Barely 1440p 80fps on a $1200 card? Hard pass.

Cyberpunk 2077 being a DXR game is worth the price just for me...The Witcher 3 was the best game in years...CD Projekt Red know how to build high-IQ games with SUPERB storylines.
But no one is forcing you to use DXR.
If you are fine with playing at lesser graphics fidelity...by all means.

But your anger towards DXR, despite those facts, can only lead me to conclude one of two things.
Either you want to play with DXR, but since your favourite vendor does not support it...it has to be BAD.
Or you got touched by DXR in a bad place...so which is it?

But you had no problems supporting "async compute"...how did that go?
 
Let's not forget Intel is getting in the mix. Nvidia will have something better by then.

Yes.
But going by the reasoning of some people in here (with AMD and Nvidia having decades of a head start on them) it will be a long time before they can compete.

Personally I don't care all that much about a head start in this kind of ballgame. If we were talking about something where highly skilled people make things with their hands, then sure, because it does take a while to train people to do some things.
Okay, you can't just throw 5 billion transistors at silicon and have an awesome GPU come out at the end, but I do think AMD actually knows what it takes to do it right; it's only now that they have bothered, instead of just coasting along as they have done for years (and if I was a shareholder, I would have been pissed about that).
 
Cyberpunk 2077 being a DXR game is worth the price just for me...The Witcher 3 was the best game in years...CD Projekt Red know how to build high-IQ games with SUPERB storylines.
But no one is forcing you to use DXR.
If you are fine with playing at lesser graphics fidelity...by all means.

But your anger towards DXR, despite those facts, can only lead me to conclude one of two things.
Either you want to play with DXR, but since your favourite vendor does not support it...it has to be BAD.
Or you got touched by DXR in a bad place...so which is it?

But you had no problems supporting "async compute"...how did that go?

Let's address some of the shortcomings from your post:

1). Cyberpunk 2077 is due for release April 16, 2020, a full 20 months after the release of RTX cards, and that's if it drops on time. You (and the rest of the RT apologists) can't keep using Cyberpunk as the savior of RTX cards.

2). Lesser graphics fidelity at higher frame rates. Not everyone is going to want a pretty image at the expense of frame rate depending on their situations and needs, and realistically, the hardware doesn't exist for you to drive a 1440p monitor at 144Hz with DXR unless you tone down the ray tracing. And then what do you really have? A hybrid with your "graphics past."

3). Based on the current implementation of RT in the sparse examples that are available, it is completely reasonable for N4CR to conclude what he did without "being touched by DXR in a bad place." He clearly stated that it was a performance issue after praising the idea of RT.

4). You always seem to miss the most obvious reason for anger toward DXR...the price Nvidia sells its high-end card at (the only one that will actually run RT well enough to make a difference).
 
From what I understand, AMD's advantage is that it does not require dedicated RT hardware/repurposed commercial Tensor cores from workstation cards to sit idle when not used. What it means is that parts can be used for shading when needed, and the associated parts also used for RT as needed, instead of dedicated (to the point of being inflexible) hardware for each task. It makes it more scalable according to the scene and also means potentially more silicon to push RT when it's really heavily needed.

Not exactly. According to the patent there is dedicated hardware that will be idle when not used. The reusable part is the memory pipeline that also loads textures, but the traversal and intersection hardware will be idle. AMD is clearly stating here that those operations are too slow to run on general-purpose shaders. Reusing the texture memory pipeline is an interesting idea, but that means ray tracing will compete with texturing for memory accesses. Nothing is free, I guess.

"A fixed function BVH intersection testing and traversal (a common and expensive operation in ray tracers) logic is implemented on texture processors. This enables the performance and power efficiency of the ray tracing to be substantially improved without expanding high area and effort costs. High bandwidth paths within the texture processor and shader units that are used for texture processing are reused for BVH intersection testing and traversal. In general, a texture processor receives an instruction from the shader unit that includes ray data and BVH node pointer information. The texture processor fetches the BVH node data from memory using, for example, 16 double word (DW) block loads. The texture processor performs four ray-box intersections and children sorting for box nodes and 1 ray-triangle intersection for triangle nodes. The intersection results are returned to the shader unit."

So it might end up being faster due to ease of scaling and flexibility.

Speed and flexibility are tradeoffs. You don't usually get to maximize both for the same transistor budget.

Re: bandwidth, the Vega IF links are 500 GB/s. Plenty for a CrossFire-like multi-die setup that presents to the OS as one monolithic GPU.
I don't think we'll see that until after Navi though.

On-chip bandwidth is at least 10x that (for L2 cache).
 
You are not correct about this.

Actually, he is correct.

It will be a major GPU breakthrough the first time anyone gets multiple gaming GPU chips/chiplets working without CrossFire/SLI software and/or duplicated memory pools.

So far there is ZERO indication that anyone has this kind of multi-chip gaming GPU capability on the horizon.

Some people keep pointing to the multi-chip GPU research paper that was done by NVidia, but can't seem to recognize that it was a compute design, not a gaming GPU design. Compute designs don't have the same kind of real-time synchronization/latency issues as gaming. Heck, you can run big compute problems across multiple different computers, or even across a geographically diverse regional network. Compute is not latency-sensitive like real-time graphics.

It will be an enormous coup if/when AMD, NVidia or Intel first makes a workable multi-chip design that doesn't suffer from driver/memory issues.

But again, no real sign of this breakthrough coming soon.
 
Let's address some of the shortcomings from your post:

1). Cyberpunk 2077 is due for release April 16, 2020, a full 20 months after the release of RTX cards, and that's if it drops on time. You (and the rest of the RT apologists) can't keep using Cyberpunk as the savior of RTX cards.

It's worth it for me....funny that it gives you anger...we are talking about a LUXURY item here...and a cheap one at that.

2). Lesser graphics fidelity at higher frame rates. Not everyone is going to want a pretty image at the expense of frame rate depending on their situations and needs, and realistically, the hardware doesn't exist for you to drive a 1440p monitor at 144Hz with DXR unless you tone down the ray tracing. And then what do you really have? A hybrid with your "graphics past."

Yes, lower graphics fidelity, as I stated....fake graphics at higher FPS is still fake graphics.

3). Based on the current implementation of RT in the sparse examples that are available, it is completely reasonable for N4CR to conclude what he did without "being touched by DXR in a bad place." He clearly stated that it was a performance issue after praising the idea of RT.

He can turn it off...he seems very angry about something he does not want....and so do you.

4). You always seem to miss the most obvious reason for anger toward DXR...the price Nvidia sells its high-end card at (the only one that will actually run RT well enough to make a difference).

Again....a LUXURY item here...and a cheap one at that.
 
Partial scene RT is mediocre and I don't find it to be impressive enough to be worth it yet.

How do you think we could possibly start with full scene RT? It only makes sense to start with some portion of RT effects where they can be done and grow from there.

Worth an upgrade alone? No.

Who is suggesting upgrading solely for RT?? That looks like a Straw Man argument to me.

In a generation or two it will become more playable, more fully RT (not partial), and look even better with faster hardware to support it. Count me in then.

Fully RT scenes are more than a generation or two away. The biggest change in a generation or two is that AMD will be supporting it.
 
Whenever I see a headline like the OP's I always take it with a huge grain of salt, whether it be for AMD or nVidia.

I'll always wait for 3rd party reviews and hardly ever look upon a company's promotional material as 'fact'.

That said, competition is great and I hope the rumors are true. AMD needs the boost.
 
aaaaand another Navi speculation thread has fully devolved into an RTX debate thread. le sigh. to think I was hoping for an informative reasonable discussion about future hardware... what a noob I am :rolleyes:

Really, no surprise there. However, they do not speak for the majority and therefore, there are other places to have good tech discussions.

I will live with the better overall image quality that AMD has, anyways. :)

Edit: Which is better looking, a compressed image or an uncompressed image?
 
Last edited:
aaaaand another Navi speculation thread has fully devolved into an RTX debate thread. le sigh. to think I was hoping for an informative reasonable discussion about future hardware... what a noob I am :rolleyes:

:rolleyes: yourself.

The title of this thread is "Nvidia Killer". If that isn't an invitation to compare to NVidia's product (AKA RTX cards), I don't know what is.
 
Even if this new card *is* an "Nvidia Killer", it will probably take several generations of AMD being ahead in order to climb back on top.

What I am saying is that Ryzen came out in 2017, and while reviews were good, it took them 2 years to get to the position they're at today.

So I think it can't just be one killer card, it has to be several cards over a year or two that are all great before people will see the light.
 
They can however, make a stellar price/performance beast - even if not absolute top dog.

My wager would be on something with 2080ti-ish levels of performance (sans RT) at a noticeably lower price. That's still a market spoiler - as we've seen in this thread many buyers today are far more interested in traditional rendering performance. NV would be put in an awkward position, and they'd have to triple down on RT as a selling point - which again, doesn't sway everyone.
 
How do you think we could possibly start with full scene RT? It only makes sense to start with some portion of RT effects where they can be done and grow from there.



Who is suggesting upgrading solely for RT?? That looks like a Straw Man argument to me.



Fully RT scenes are more than a generation or two away. The biggest change in a generation or two is that AMD will be supporting it.
I'm honestly very skeptical of us ever achieving full-scene RT in games. When we get it in movies, maybe we can start hoping for it in other products. From what I gather, movies are still being made with localized RT.
 
You will be hard-pressed to match the dreams of this guy, but I can't even hype that, cuz I am often on another level to the degree people think I am on crack.
No doubt there must have been some AMD fans with high wishes, and so they should have; what AMD had before was not much to talk about.

You can also compare my dream vehicle, which is a 10-15 ton armored personnel carrier, to your dream vehicle, which might be a LaFerrari or something silly like that. They both cost about the same, they both have 4 wheels and they are both cars, but then you are also out of what the 2 share and you are into personal preferences.
So my dream car has the largest wheels, weighs the most, and can flatten your car by driving over it, all that after you have been shooting at me with an AK-47 for a few hours with unlimited ammo, while I sit laughing like a madman in my car as you forgot to move your Ferrari out of flattening distance.
 
I'm honestly very skeptical of us ever achieving full-scene RT in games. When we get it in movies, maybe we can start hoping for it in other products. From what I gather, movies are still being made with localized RT.

Yeah, but "ever" is a long time. I'll tend to stick with "not in the foreseeable future."

I had only read recently that Pixar didn't do ray tracing on the original movies and was a bit surprised. I assumed they just did RT and threw a render farm at it.

A bit of googling reveals that "Monsters University" was Pixar's first use of Ray Tracing (for Global Illumination).
 
You will be hard-pressed to match the dreams of this guy, but I can't even hype that, cuz I am often on another level to the degree people think I am on crack.
No doubt there must have been some AMD fans with high wishes, and so they should have; what AMD had before was not much to talk about.

You can also compare my dream vehicle, which is a 10-15 ton armored personnel carrier, to your dream vehicle, which might be a LaFerrari or something silly like that. They both cost about the same, they both have 4 wheels and they are both cars, but then you are also out of what the 2 share and you are into personal preferences.
So my dream car has the largest wheels, weighs the most, and can flatten your car by driving over it, all that after you have been shooting at me with an AK-47 for a few hours with unlimited ammo, while I sit laughing like a madman in my car as you forgot to move your Ferrari out of flattening distance.

I... have no idea how to parse this.
 
People misunderstand what we are generally talking about when we talk about ray tracing in games. Ray tracing in games is a hybrid thing. The engine may calculate a few rays with very tightly defined boundaries for a much more realistic simulation, which may look slightly better than the faked light-map approach. Of course that isn't always going to be the case, and light maps allow for very specific artistic outputs. No, some shafts of light in a game done with a light map may not be as "realistic", but what the artist wants to achieve may not be very realistic to begin with.

For the cartoony-style games... ray tracing is not even something game developers want. The control of a light map is preferable. In the 3D rendering world, IES lighting (Illuminating Engineering Society) files are basically light maps for ray-traced lighting, to better emulate throw patterns from light bulbs. I think I'm saying light maps are not evil, and games with good texture artists are not simply inferior to plugging in a bunch of ray-style light bounce calculations.

What we are getting with DXR is NOT ray tracing for real... it's a hybrid that MAY look better if implemented properly in the right type of game. Of course some games are going for hyper-realism, and the final product with global illumination can no doubt look incredible. I am not really sure we should even be calling it ray-traced gaming though... last I checked it still takes Radeon Pro and Quadro cards multiple minutes to perform a single-frame Blender render of even a basic scene at 1080p.
 
Yeah, but "ever" is a long time. I'll tend to stick with "not in the foreseeable future."

I had only read recently that Pixar didn't do ray tracing on the original movies and was a bit surprised. I assumed they just did RT and threw a render farm at it.

A bit of googling reveals that "Monsters University" was Pixar's first use of Ray Tracing (for Global Illumination).

Ever is a long time. I'd like to think it is possible to render real-time lighting IN REAL TIME (lol), but I remain skeptical. I didn't read about the Monsters Inc. RT; honestly that's a shocker to me, as I was only aware of the algorithm they made to do Sulley's hair. The RT in Cars is more worthy, I think they took a week to render a frame for that movie.
 
I'm honestly very skeptical of us ever achieving full-scene RT in games. When we get it in movies, maybe we can start hoping for it in other products. From what I gather, movies are still being made with localized RT.
Ever? I'm sure it will happen, and I'm pretty sure it'll be in my lifetime. It will be similar to anything else: slowly get introduced, as it is in the RTX series, and slowly built up. It'll be the same then as it is now; a high-end product will be able to do ray tracing without first rendering the scene with polygons, but at a slower speed for higher fidelity. Some people will pay the money for this must-have and others will wait for it to become more mainstream. A few generations later they'll be arguing about something else :)
 
When we talk about real-time ray tracing, it's more about the complexity of the tracing being done.

I wouldn't say we will never ever ever be able to do real ray tracing in real time. However, if we are talking about the current Pixar level of complexity in real time... yeah, that may never be realistic. At this point it would take something like 1,512,000x the current performance of Pixar's render farm to achieve 60 fps, as it's reported it took 7 hours per frame for Toy Story 3.
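
Quick back-of-the-envelope on where that 1,512,000x comes from, taking the 7 hours/frame figure at face value:

```python
# Rough check of the ~1,512,000x figure, using the quoted 7 hours per frame.
hours_per_frame = 7
offline_seconds_per_frame = hours_per_frame * 3600   # 25,200 s per frame
target_fps = 60
realtime_seconds_per_frame = 1 / target_fps          # ~0.0167 s per frame

speedup = offline_seconds_per_frame / realtime_seconds_per_frame
print(f"{speedup:,.0f}x")                            # 1,512,000x
```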

The question really becomes how much we really need to do in a video game scene. Who knows, perhaps 10 years from now, through a mix of raster and ray tracing, we may have a very convincing facsimile of a playable Pixar-type game. The right game designers right now can do a pretty good job of faking that with light maps anyway. I'm not sure RT is the killer feature that gives us hyper-realism.
 
Ever is a long time. I'd like to think it is possible to render real-time lighting IN REAL TIME (lol), but I remain skeptical. I didn't read about the Monsters Inc. RT; honestly that's a shocker to me, as I was only aware of the algorithm they made to do Sulley's hair. The RT in Cars is more worthy, I think they took a week to render a frame for that movie.

A week to render a frame sounds absurd.

Verge has a decent write-up, around Monsters U, about how much simpler it is for designers to use ray tracing:
https://www.theverge.com/2013/6/21/...d-the-way-light-works-for-monsters-university
"What if we made these lights just work?" Kalache told The Verge, in a Jobsian turn of phrase. Instead of building reflections and shadows manually, why not do it automatically every time an artist placed a light source? "It was as if every time you took a photograph, you built a new camera," Kalache says. "It takes away from the art of taking a picture. We wanted to stop being engineers and be artists." The result was Pixar's new Global Illumination lighting system, which gets its first public debut on Thursday with Monsters University.
 
A week to render a frame sounds absurd.

Verge has a decent write-up, around Monsters U, about how much simpler it is for designers to use ray tracing:
https://www.theverge.com/2013/6/21/...d-the-way-light-works-for-monsters-university

Think of the era when Monsters or Cars were created. Frame render planning was a thing; although I'm probably wrong about it taking 7 days, it was a looooong time.

The most recent Terminator had a write-up about this: due to the time it took to render frames with RT, most scenes had no chance for editing.
 
When we talk about real-time ray tracing, it's more about the complexity of the tracing being done.

I wouldn't say we will never ever ever be able to do real ray tracing in real time. However, if we are talking about the current Pixar level of complexity in real time... yeah, that may never be realistic. At this point it would take something like 1,512,000x the current performance of Pixar's render farm to achieve 60 fps, as it's reported it took 7 hours per frame for Toy Story 3.

The question really becomes how much we really need to do in a video game scene. Who knows, perhaps 10 years from now, through a mix of raster and ray tracing, we may have a very convincing facsimile of a playable Pixar-type game. The right game designers right now can do a pretty good job of faking that with light maps anyway. I'm not sure RT is the killer feature that gives us hyper-realism.

Bruh, Gamer X says the new Xbox and PS will do “real raytracing” via RDNA2, not this fake stuff we have now. Gotta keep up broski. That 1,512,000x increase is coming soon, to a console near you.
 
I'm not sure RT is the killer feature that gives us hyper-realism.

If it isn't 'the end', then it is a massive, mandatory next step.

Also - people don't necessarily want 'hyper-realism' in games - they're there to escape!
 
Think of the era when Monsters or Cars were created. Frame render planning was a thing; although I'm probably wrong about it taking 7 days, it was a looooong time.

The most recent Terminator had a write-up about this: due to the time it took to render frames with RT, most scenes had no chance for editing.

Pixar's quoted frame render times make no sense, unless that is per CPU, per core, or something like that:

https://disney.fandom.com/wiki/Cars
  • On average, it took 17 hours to render each frame of the film. In addition, it was the first Pixar film to use ray-tracing to accurately create the reflections on the cars.

Unless something is wrong with my math (please check), 17 hours/frame is hundreds of years of render time for a two-hour movie:

It's a ~2 hour movie. (7200 seconds x 24 frames/sec x 17 hours)/(24 hours x 365 days) = 335 years...

So it sounds like a nonsense number to me. In reality, the actual time to render a frame on their render farm is likely less than an hour.
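
Same math in Python, for anyone who wants to check it:

```python
# Sanity check of the figure above: ~2 hour movie, 24 fps, 17 hours per frame.
frames = 2 * 3600 * 24            # 172,800 frames
total_hours = frames * 17         # 2,937,600 hours of rendering
print(total_hours / (24 * 365))   # ~335 years if it all ran on one machine
```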
 
I believe when they quote those numbers it is per computer.

So one frame does take 17 hours, but if you have 17 computers, it's only 1 hour/frame average, etc.
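
That reading makes the arithmetic work out: the per-frame time is latency on one node, while the farm's throughput scales with the node count. A tiny illustration (the farm size below is just a made-up example):

```python
# Per-machine reading: one node still takes 17 hours per frame, but many nodes
# render different frames in parallel, so farm throughput is what matters.
hours_per_frame_per_node = 17
nodes = 17                        # hypothetical farm size, for illustration
frames_per_hour = nodes / hours_per_frame_per_node
print(frames_per_hour)            # 1.0 frame finished per hour, on average
```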
 
"Nvidia killer" is the RDNA2 chip that takes the top Gaming crown from Nvidia in 2020. While "Big Navi" is the chip that is bigger than Navi10, that is being released within the next few months @ above 2080 Super's Performance.

So:
Radeon 5800 Series = Big Navi ($499)
Radeon 5900 Series = Nvidia killer ($799)



So many Nvidia trolls here are being jebaited right now.... because most of you will be buying AMD GPUs within the next year. Ampere won't be out for 13 months, and that is just about the time that AMD will release its 4th 7nm GPU... (before Nvidia releases its first).

In all honesty, Nvidia really doesn't factor into anything for gamers... they have nothing to compete with RDNA. RDNA was a surprise to the gaming industry, as Dr. Su kept their new GPU architecture hidden/secret really well. Now Nvidia is following (as in behind) AMD architecturally and has no answer for 13 months.

Meanwhile, AMD will just keep pumping out cheap 7nm GPUs while Jensen tries to figure out how to make a 225mm^2 chip that can compete with mainstream Navi 10. Because a shrunken-down Turing is still much, much bigger than RDNA.



In fact, Nvidia killed themselves and thought the 2080 Super was the best gaming GPU a company could provide to gamers.... at $800.

 