Nvidia Killer

AMD is about to deliver. It's an 800W board with 12 fans, requiring a triple-length supporting case called omgATX (new standard).

(can't wait!)
 
Unlikely that's what it means: it's all hybrid raytracing right now, both in the sense that in-game RT effects are used alongside conventional rendering, and in the sense that, in hardware, DXR is partially accelerated in dedicated HW and partially computed on the shader cores.

A "non-hybrid" DXR approach, where all rendering is done via pathtracing and all parts of the pathtracing are accelerated by dedicated hardware, isn't something we're likely to see soon, if ever.

I think Quake 2 RTX uses pathtracing already.
 
I may be alone here, but I care extremely little about Raytracing. And I don't care so much about who has "the fastest" card - I care about who has the fastest card that fits my budget.

I care about HDMI 2.1 and DP 2.0 much more than I do Raytracing.

[image: soccer-player-alone-watching.jpg]


Come and play, man, it's fun out there.
 
Sorry, I just don't play any games that support RT right now.

Quake II is the most interesting of the bunch that are currently out. Cyberpunk 2077 could be neat, but I'm not buying a video card today for a game that hasn't been released yet.

Honestly, it would be pretty cool if someone went back over old games and added ray-tracing to them. Likely impractical, but I'd buy games I already own to play them again with RT if it worked.

If market share is what AMD is after, they need to put the price of the 5700 at about $250. Whether that can be profitable is questionable, and while beating Nvidia at the top end is cool, it won't instantly turn around the market for them. Everyone on this forum could buy one and I imagine they wouldn't notice.
 
I think Quake 2 RTX uses pathtracing already.

The original DIY "Quake 2 Vulkan Pathtrace (Q2VPT)" engine that Nvidia's release is based on was indeed 100% pathtraced (hence why some things like particle effects were broken), but Nvidia's version on Steam etc. does use a hybrid approach, because it turns out that it's actually easier, and looks better, to do some effects with raster shaders. The basic global illumination/ambient occlusion/indirect lighting is indeed all pathtraced, because PT excels at that and offers much more realism than traditional raster GI and SSAO/HBAO, but I think it's going to be a long time until "soft" effects like particles, fog, bloom and whatnot are fully RT. Raster shader effects may be only so-so for basic lighting, but they're fantastic for "photoshopping" the rendered image to make it prettier.
 
The original DIY "Quake 2 Vulkan Pathtrace (Q2VPT)" engine that Nvidia's release is based on was indeed 100% pathtraced (hence why some things like particle effects were broken), but Nvidia's version on Steam etc. does use a hybrid approach, because it turns out that it's actually easier, and looks better, to do some effects with raster shaders. The basic global illumination/ambient occlusion/indirect lighting is indeed all pathtraced, because PT excels at that and offers much more realism than traditional raster GI and SSAO/HBAO, but I think it's going to be a long time until "soft" effects like particles, fog, bloom and whatnot are fully RT. Raster shader effects may be only so-so for basic lighting, but they're fantastic for "photoshopping" the rendered image to make it prettier.

Agreed, and this is why I still maintain we're going to have hybrid renderers for the foreseeable future. Each rendering problem has easy and hard points, depending on the way you handle it. The ultimate goal for developers is to have a fast palette of all available options. Ray/path tracing makes some problems easier, if you have the oomph to do it in HW. Alternately, it can make things look better at the cost of speed, just like every graphics option ever.

It is most likely we'll see some of the more cumbersome aspects of lighting / shading handled by ray/path tracing when possible, but other parts still sent to traditional shaders. There's no point in being pathological about "All Things Must Be Raytraced Because Reasons". Good developers will saturate everything you have. Have path tracing hardware? That is actually really useful functionality for many things.
At the risk of sounding like a FanPerson, I eagerly await what Tiago and id do with their engine (if not Doom Eternal, then the next one, given tech penetration in the market). They have a good grip on making good use of features while still making everything run well. And yes, they also design the game/levels around running quickly, but the point stands.

Expect RT-related tech to simply be the next "Ultra" setting in games (even now): that point where it does make some difference, but seems to have a non-linear impact on performance. We have of course been used to this scaling all along for "Ultra". For twitch games, I've NEVER selected the ultra setting. Give me frames, and I'll overlook a blurry shadow or an imperfect puddle reflection. Don't care. But for other games which aren't twitch-based, I go full fidelity.
For example: COD versus a System Shock sort of title. SS running at 60 FPS, or even 30, but looking amazing would be fine, yet we'd rightly excoriate multiplayer COD for such performance/visual tradeoffs.
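
Conceptually it ends up looking like this: a made-up settings struct, not any real engine's code, just to show where the RT toggle would live:

```cpp
// Invented settings struct illustrating "RT as the next Ultra setting".
// Nothing here is from a real engine; the names and values are hypothetical.
#include <cstdio>

enum class Preset { Low, Medium, High, Ultra };

struct GraphicsSettings {
    bool rtShadows     = false;
    bool rtReflections = false;
    int  shadowMapRes  = 1024;
};

GraphicsSettings forPreset(Preset p) {
    GraphicsSettings s;
    switch (p) {
        case Preset::Ultra:
            s.rtShadows = s.rtReflections = true;  // the non-linear perf hit lives here
            s.shadowMapRes = 4096;
            break;
        case Preset::High:
            s.shadowMapRes = 2048;
            break;
        default:  // twitch-game mode: frames over fidelity
            break;
    }
    return s;
}

int main() {
    const GraphicsSettings s = forPreset(Preset::Ultra);
    std::printf("RT shadows=%d, RT reflections=%d, shadow res=%d\n",
                s.rtShadows, s.rtReflections, s.shadowMapRes);
    return 0;
}
```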
 
https://www.tomshardware.com/news/nvidia-ampere-gpu-graphics-card-samsung,39787.html

No leading-edge company in this era is so stupid as to put all of their chips in one fab, especially after what happened to Intel.

Then they are designing more than one version of Ampere... TSMC and Samsung are not using the same 7nm tech. There are subtle differences in their mfg processes, such as pitch and cell sizes. It is basically impossible to take the exact same design to both fabs. Of course they can have Ampere-A and Ampere-B, but I would think having them both tape out and scale up at the same time would be harder. Yes, it's nice to have options... but going with two fabs increases costs, as both have unique setups. Unless one is fabbing the high-end part and one is fabbing the mainstream part. Guess we'll see in a year or so.
 
Then they are designing more than one version of Ampere... TSMC and Samsung are not using the same 7nm tech. There are subtle differences in their mfg processes, such as pitch and cell sizes. It is basically impossible to take the exact same design to both fabs. Of course they can have Ampere-A and Ampere-B, but I would think having them both tape out and scale up at the same time would be harder. Yes, it's nice to have options... but going with two fabs increases costs, as both have unique setups. Unless one is fabbing the high-end part and one is fabbing the mainstream part. Guess we'll see in a year or so.

Didn't Apple do precisely this with recent phone SOCs?
 
Quake II RTX is a ball, easily the most fun I've had in a game this year.

And it's not the ray tracing that makes it fun; it's that it's a good old game you can get right on board with.
If you honestly think it's the RT that makes it cool, then maybe I have misunderstood something about that update and maybe they added a little more than fancy rays.
 
Didn't Apple do precisely this with recent phone SOCs?

Their 7nm parts are all TSMC, I believe. (Probably one good reason for NV to consider Samsung, as TSMC is basically at capacity between Ryzen and Apple SoCs.) I think the previous-gen A9 was dual-sourced: TSMC 16nm FinFET and Samsung 14nm FinFET. They had different die sizes.

OK, just googled it... yes, the A9:
https://www.anandtech.com/show/9665/apples-a9-soc-is-dual-sourced-from-samsung-tsmc
From the article:
"– but in more modern times dual sourcing of high performance parts is very rare due to how difficult and expensive it is. With dual sourcing essentially requiring the chip to be taped out twice – once for each fab – it requires significant financial and engineering resources, a cost that not very many companies besides Apple can take on without significant risk. "

So yeah, if I had to bet, Nvidia at 7nm is NOT going to be doing the same chip at two different fabs. But they may be doing an Ampere big and an Ampere little and choosing a fab for each. It's also possible that NV's statement that they are going to use both fabs going forward is just damage control after a bunch of news reports claiming NV is switching to Samsung in a year. (They still have to do business with TSMC.)

Another possibility: Samsung is a stopgap. With TSMC filled up with Apple/AMD, perhaps they contracted TSMC for a 7nm+ Ampere refresh part, with Samsung doing the first gen to get it out the door quicker.
 
They've been doing it... not interested, still, with bandwidth in the socket... Socket Killer... I'd rather take a reference Vega... maybe it wasn't dogged much... I'm gonna really think every Nvidia is beyond dogged out now. That's still better than a 1070, lol... that's on you. Probably don't need the new one.
 
And it's not the ray tracing that makes it fun; it's that it's a good old game you can get right on board with.
If you honestly think it's the RT that makes it cool, then maybe I have misunderstood something about that update and maybe they added a little more than fancy rays.

Them there fancy rays gave a great game a new dimension and replayability. Great vehicle for an RTX demo.

And while I was a doubter early on, and hated on RTX equally with a lot of other folks, it didn't take much to turn me around.

Being able to mess with this tech turned out to be 100% worth the price of admission to me.
 
Remember, AMD does not need to beat the 2080 Ti; they need to beat the 3080 Ti.

They just need to target 30% or so higher than the 2080 Ti, in range with what Nvidia targets every generation. So I don't think that is too much to ask, given how first-gen RDNA is performing. They could technically get close to the 2080 Ti if they made a bigger Navi chip as it is.
 
Okay, in a strange Koinkidink, my little one was painting and I walked by to spot this:

[image: upload_2019-8-9_19-18-17.png]


Tiniest fan ever? I have no idea where this came from, and he doesn't know either. Thought it looked neat, he said.

Raytracing is visible here.
 
Whatever it is, AMD's "Nvidia killer" will still be 7nm. Navi at 7nm can barely touch Nvidia's 12nm. So figure out the math re: Nvidia's likely 7nm potential.
 
Whatever it is, AMD's "Nvidia killer" will still be 7nm. Navi at 7nm can barely touch Nvidia's 12nm. So figure out the math re: Nvidia's likely 7nm potential.

Current Navi parts are not maxed-out builds like the 2080 Ti. So far Navi has looked promising, and whatever Nvidia gets out of 7nm is a bit of a wild card.
 
Current Navi parts are not maxed-out builds like the 2080 Ti. So far Navi has looked promising, and whatever Nvidia gets out of 7nm is a bit of a wild card.

Explain?

Nvidia's 7nm is a "wildcard"? Luck of the draw? Coin toss?

I'm gonna guess Nvidia will have no loose ends when it comes down to performance. Since we're guessing shit anyhow. Unreal.
 
Well, the 5700 XT is rated at 225W and the 2080 Ti at 250W. They will run into TDP limits if the 5700 XT is anything to go off of.

https://www.tomshardware.com/reviews/nvidia-geforce-rtx-2080-ti-founders-edition,5805-10.html

It's nowhere near 250W; it goes well over Nvidia's official number.

RX 590 was 225W; Vega 64 was 50%+ faster at 300W. So it just doesn't work like that. They can easily come out with a 300W card that is faster than the 2080 Ti. A bigger chip doesn't mean it's going to be double the power; that's just not how it works.
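
Rough sketch of why that works, using the post's own numbers (illustrative arithmetic only, nothing measured):

```cpp
// Back-of-envelope check of the "bigger chip != double the power" point.
// Dynamic power goes roughly as P ~ C * V^2 * f: widening the die adds
// capacitance (C) linearly, but running each CU lower on the V/f curve wins
// it back. Numbers below are the post's own examples, not measurements.
#include <cstdio>

int main() {
    const double p590  = 225.0;      // RX 590 board power (W): narrow die, clocked hard
    const double pVega = 300.0;      // Vega 64 board power (W): ~2x wider, saner clocks
    const double vegaSpeedup = 1.5;  // "50%+ faster" per the post

    // Perf/W ratio: Vega delivers 1.5x the perf for only 1.33x the power.
    std::printf("Vega 64 perf/W vs RX 590: ~%.2fx\n",
                (vegaSpeedup / pVega) / (1.0 / p590));  // ~1.13x
    return 0;
}
```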
 
Didn't AMD call the Fury X the 4K killer? Hyped-up marketing killed it when 4GB of HBM was not enough.

I wouldn't hold my breath until real testing proves otherwise. Plus, this card will not be affordable to the masses anyway.
 
https://www.tomshardware.com/reviews/nvidia-geforce-rtx-2080-ti-founders-edition,5805-10.html

It's nowhere near 250W; it goes well over Nvidia's official number.

RX 590 was 225W; Vega 64 was 50%+ faster at 300W. So it just doesn't work like that. They can easily come out with a 300W card that is faster than the 2080 Ti. A bigger chip doesn't mean it's going to be double the power; that's just not how it works.

They can downclock and undervolt (not run it to the wall), which helps a lot. I understand that; just saying there's not a ton of room. I personally don't mind power draw.

Vega 64 was also AIO-cooled and had HBM, so that probably helped.

Just slightly skeptical of an nVidia killer, especially if nVidia launches a 7nm part at the same time...
 
Didn't AMD call the Fury X the 4K killer? Hyped-up marketing killed it when 4GB of HBM was not enough.

I wouldn't hold my breath until real testing proves otherwise. Plus, this card will not be affordable to the masses anyway.


I think they called it the first 4K card.
 
Whatever it is, AMD's "Nvidia killer" will still be 7nm. Navi at 7nm can barely touch Nvidia's 12nm. So figure out the math re: Nvidia's likely 7nm potential.

If Nvidia tries to squeeze 20 billion transistors into their first 7nm part... chances are it will either need a jet-powered fan to keep it under 90 degrees, or their yields will be so bad every wafer will have 3 working chips. lol

There is a very good reason why Intel 10nm is being a bitch... and why AMD chose to split Ryzen into chiplets and start with little Navi.

At 7nm and beyond, big massive dies are going to be very hard to pull off; leakage is just too strong. (Although I guess NV could be doing a chiplet of some kind as well.)
 
I'm more excited about the bigger versions of Navi than anything raytracing. The current Navi at 225 watts may seem like a lot, but my guess is that these are the worst dies. At 40 CUs they must be cut-downs of the 5800/5900, since AMD announced faster Navi-equipped cards by the end of the year. What I'd like to know is how many CUs this first gen will go to. We have 40; what's next? 56? Where does it end for this current gen of Navi: 96, or could it go all the way to 128? And what memory will the top end have, HBM2/3?
Ray tracing will be a part of AMD at some point. PS6, Xbox, whatever. Call me crazy... rumor has it that ARM is working on something as well. Could it be a 4th player in the GPU race... :D
 
You have the custom stuff with team red cards, though... that is user discretion, though. RTX cards died, and Nvidia's locking of everything. But I don't know, still not a bad mid/upper-mid-tier market and polishing.
 
A 64 CU Navi card running at 1.9 GHz is going to blow away all kinds of TDP records for a single GPU and will barely beat a reference 2080 Ti, while Nvidia will be rolling out Ampere.

Unless AMD is rolling out an 80 CU card with a 50% improvement in perf per watt, it's going to be dead on arrival, or, likely once again, just reprising the role of the Radeon VII: selling an expensive-to-manufacture card with barely any margins, or even at a loss, while being a mile away from the performance crown.
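
For a ballpark on the 64 CU claim, naively scaling the 5700 XT (assuming power grows linearly with CU count at the same clocks, which ignores memory and uncore; a sketch, not a prediction):

```cpp
// Naive scaling of the 5700 XT's numbers to a hypothetical 64 CU part at the
// same ~1.9 GHz. Assumes power grows linearly with CU count at a fixed V/f
// point and ignores memory/uncore -- a ballpark, not a prediction.
#include <cstdio>

int main() {
    const double cuNow  = 40.0;   // 5700 XT compute units
    const double tdpNow = 225.0;  // 5700 XT board power (W)
    const double cuBig  = 64.0;   // the hypothetical wide part

    const double estTdp = tdpNow * (cuBig / cuNow);  // 225 * 1.6 = 360 W
    std::printf("Naive 64 CU @ same clocks: ~%.0f W\n", estTdp);
    // Hence the point above: without a big perf/W gain (lower clocks, better
    // node, arch changes), a wide Navi at 1.9 GHz busts single-GPU TDP records.
    return 0;
}
```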
 
One wild card would be ray-tracing performance. If AMD's method is vastly faster than Nvidia's, that could be a big deal.
Well, it will be the one in the most GPUs and supported by the greater amount of software development by a wide margin. Nvidia's RT was a tactical move that is only useful on its top tiers of product, greatly limiting the user base.
 
In the past I wouldn't have taken this seriously, but with the huge success of their recent CPU line, I'm hoping it bleeds over to their GPU division.

AMD is kicking ass with Zen, but much of that is due to Intel's multi-year failure on 10nm. Could Nvidia also fail massively and leave an opening? Sure, but that's just wishful thinking by AMD's fans.
 
Or it's the still-launching-a-6GB polish, and knowing that's still a bottleneck too.
 
I'd be happy to have some real competition in the high-end market. It will mean lower prices and more options for us consumers. I own a 2080 Ti and have no intention of upgrading it until something comes out with significantly better raytracing performance, as that is where its main sticking point lies at the moment. Without it I can already get high framerates at 4K.

My next display will also be FreeSync 2, so I will no longer have an incentive to stick with Nvidia for anything but performance/features.
 
And let's not forget that Big Navi's real competitor will be Ampere.
I don't know if AMD can do better than 1st-gen RTX on their first try, and then there's Intel. So who knows.
Hopefully Nvidia learned a lot from Turing, and Ampere should be a lot better.

2020 sounds like a great year for gaming, no matter who comes out on top.

"Nvidia killer" is the RDNA2 chip that takes the top Gaming crown from Nvidia in 2020. While "Big Navi" is the chip that is bigger than Navi10, that is being released within the next few months @ above 2080 Super's Performance.

So:
Radeon 5800 Series = big navi ($499)
Radeon 5900 Series = nvidia killer ($799)



So many nvidia trolls here are being jebaited right now.... because most of you will be buying AMD gpus within the next year. Ampere won't be out for 13 months, and that is just about the time that AMD will release it's 4th 7nm GPU... (before NVidia releases it's first).

In all honestly, Nvidia really doesn't factor in to anything for Gamers... they have nothing to compete with RDNA. As RDNA was a surprise on the Gaming Industry, as Dr Su kept their new GPU architecture hidden/secret really well. Now Nvidia is following (as in behind) AMD architecturally and have no answer for 13 months.

While at the same time, AMD will just keep pumping out cheap 7nm GPUs and Jensen tries to figure out how to make a 225mm^2 that can compete with mainstream Navi10. Because a shrunken down Turing is still much, much bigger than RDNA.



In fact, Nvidia killed themselves and thought 2080 Super was the best gaming GPU a company can provide to gamers.... @ $800
 
"Nvidia killer" is the RDNA2 chip that takes the top Gaming crown from Nvidia in 2020. While "Big Navi" is the chip that is bigger than Navi10, that is being released within the next few months @ above 2080 Super's Performance.

So:
Radeon 5800 Series = big navi ($499)
Radeon 5900 Series = nvidia killer ($799)



So many nvidia trolls here, are being jebaited right now.... because most of you will be buying AMD gpus within the next year. Ampere won't be out for 13 months, and that is just about the time that AMD will release it's 4th 7nm GPU... (before NVidia releases it's first).

In all honestly, Nvidia really doesn't factor in to anything for Gamers... they have nothing to compete with RDNA. As RDNA was a surprise on the Gaming Industry, as Dr Su kept their new GPU architecture hidden/secret really well. Now Nvidia is following (as in behind) AMD architecturally and have no answer for 13 months.

While at the same time, AMD will just keep pumping out cheap 7nm GPUs and Jensen tries to figure out how to make a 225mm^2 that can compete with mainstream Navi10. Because a shrunken down Turing is still much, much bigger than RDNA.



In fact, Nvidia killed themselves and thought 2080 Super was the best gaming GPU a company can provide to gamers.... @ $800

The 2070 has 5% more transistors and is 2% slower than the 5700 XT, IIRC. The difference in rasterized performance per transistor isn't great. AMD needs RDNA2 (and a decent improvement) when nVidia launches 7nm to keep parity.

I am really hoping for some kind of AMD chiplet design after Ryzen's success, and some monster GPUs. It has to be on their minds. I realize it's harder with GPUs, though.
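
Taking those IIRC-grade numbers at face value, the back-of-envelope math (just the post's own ratios, nothing measured):

```cpp
// Sanity-checking the parity claim with the post's own rough numbers:
// 2070 = ~5% more transistors, ~2% less rasterized performance vs 5700 XT.
#include <cstdio>

int main() {
    const double relTransistors = 1.05;  // 2070 vs 5700 XT (= 1.00)
    const double relPerf        = 0.98;  // 2070 vs 5700 XT (= 1.00)

    // Rasterized performance per transistor, normalized to the 5700 XT.
    std::printf("2070 perf/transistor vs 5700 XT: %.2f\n",
                relPerf / relTransistors);  // ~0.93, i.e. close to parity
    return 0;
}
```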
 
The 2070 has 5% more transistors and is 2% slower than the 5700 XT, IIRC. The difference in rasterized performance per transistor isn't great. AMD needs RDNA2 (and a decent improvement) when nVidia launches 7nm to keep parity.

I am really hoping for some kind of AMD chiplet design after Ryzen's success, and some monster GPUs. It has to be on their minds. I realize it's harder with GPUs, though.

And NVidia is sacrificing something like 10-15% of those transistors on Ray Tracing. AMD will need to do something similar when it adds Ray Tracing HW.
 
The 2070 has 5% more transistors and is 2% slower than the 5700 XT, IIRC. The difference in rasterized performance per transistor isn't great. AMD needs RDNA2 (and a decent improvement) when nVidia launches 7nm to keep parity.

I am really hoping for some kind of AMD chiplet design after Ryzen's success, and some monster GPUs. It has to be on their minds. I realize it's harder with GPUs, though.



Turing is not going to be great all of a sudden just because it is on 7nm. :confused:


Jensen knows this, which is why he delayed moving to 7nm and shopped around: he needs time to re-develop Ampere into something that can compete with RDNA. Usually Nvidia knows what AMD is doing, but RDNA was top secret and caught Nvidia off guard (hence SUPER).

Thus, Nvidia's new architecture will be 13 months late, while AMD's RDNA will be in half a dozen chips by that time...!

Understand that Turing (no matter what size) can not compete with RDNA. So Nvidia is essentially stalled for 13 months... until Jensen drops the 7nm RTX 3000 series (Ampere). But AMD will rain on that parade too, with 5nm RDNA chips... (given TSMC's latest announcements). I believe that 5900 Series chip is aptly named "nvidia killer", because it literally cuts Nvidia off at the legs. Nvidia's big GPUs serve the 4K crowd, but Nvidia's small GPUs are going to have to support 4K too. The only way they can do that without messing with the cost of their server hand-me-downs is to design a gaming-only chip, and Jensen is scrambling to figure out how to do that.


In one year's time, Nvidia has to release a brand new gaming architecture, on a brand new process node, with brand new features... on their first try.

While AMD will have a full year of RDNA driver optimization and console support...




That is how it is most likely going to play out. But late 2020 is going to be the 4K GPU price wars! Until then, it is all AMD.
 
I may be alone here, but I care extremely little about Raytracing. And I don't care so much about who has "the fastest" card - I care about who has the fastest card that fits my budget.

I care about HDMI 2.1 and DP 2.0 much more than I do Raytracing.

I don't understand the Raytracing either... a year this month after the RTX release, and the best showcase for the product is a tech demo of 20-year-old Quake 2. But I'm sure someone will tell me that all the cool games are coming "soon."
 