Nvidia Killer

The RT in Cars is more worthy; I think they took a week to render a frame for that movie.
A week per frame? With that math it would have taken roughly 3,230 years to render all 168,480 frames of the 117-minute movie.
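Rough back-of-envelope math (assuming a 24 fps theatrical frame rate and purely sequential rendering on a single machine, which is of course not how render farms actually work):

```python
# Back-of-envelope: how long "one week per frame" takes if frames are
# rendered one after another on a single machine.
minutes = 117
fps = 24                      # assumed standard theatrical frame rate
frames = minutes * 60 * fps   # 168,480 frames

weeks_per_frame = 1
total_weeks = frames * weeks_per_frame
total_years = total_weeks / 52.18   # ~52.18 weeks per year

print(f"{frames:,} frames -> {total_years:,.0f} years if rendered sequentially")
# ~3,230 years -- which is why studios spread frames across a render farm
# with thousands of machines working in parallel.
```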
EDIT
* I see it has been tackled already*
 
Cyberpunk 2077 being a DXR game is worth the price just for me...The Witcher 3 was the best game in years...CD Projekt Red know how to build high-IQ games with SUPERB story lines.
But no one is forcing you to use DXR.
If you are fine with playing at lesser graphics fidelity...by all means.

But your anger towards DXR, despite those facts, can only lead me to conclude one of two things.
Either you want to play with DXR, but since your favourite vendor does not support it...it has to be BAD.
Or you got touched by DXR in a bad place...so which is it?

But you had no problem supporting "async compute"...how did that go?

I think we all know the answer to the above. ;)
 
Really, no surprise there. However, they do not speak for the majority, and therefore there are other places to have good tech discussions.

I will live with the better overall image quality that AMD has, anyways. :)

Edit: Which is better looking, a compressed image or an uncompressed image?
I don't know exactly what you're referring to, but both NVIDIA and AMD use delta color compression.
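For what it's worth, the delta color compression both vendors use is a lossless bandwidth optimization, not a lossy image format, so it shouldn't change what ends up on screen. A toy sketch of the general idea (illustrative only, not either vendor's actual hardware scheme):

```python
# Toy delta encoding: store the first value plus the differences between
# neighbors. Neighboring pixels are often similar, so the deltas are small
# and cheap to pack -- and decoding reproduces the original values exactly.
def delta_encode(values):
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(encoded):
    out = [encoded[0]]
    for d in encoded[1:]:
        out.append(out[-1] + d)
    return out

row = [200, 201, 201, 203, 202, 202, 204]   # hypothetical pixel values
packed = delta_encode(row)                  # [200, 1, 0, 2, -1, 0, 2]
assert delta_decode(packed) == row          # lossless round trip
```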
 
Cyberpunk 2077 being a DXR game is worth the price just for me...The Witcher 3 was the best game in years...CD Projekt Red know how to build high-IQ games with SUPERB story lines.
But no one is forcing you to use DXR.
If you are fine with playing at lesser graphics fidelity...by all means.

But your anger towards DXR, despite those facts, can only lead me to conclude one of two things.
Either you want to play with DXR, but since your favourite vendor does not support it...it has to be BAD.
Or you got touched by DXR in a bad place...so which is it?

But you had no problem supporting "async compute"...how did that go?
I never made a big fuss about async; I think you have me confused.

I want to play it when it's ready. Right now, three games plus an expensive and lacking hardware implementation isn't enough to make it worthwhile yet, as I already wrote. Each to their own.
 
I never made a big fuss about async; I think you have me confused.

I want to play it when it's ready. Right now, three games plus an expensive and lacking hardware implementation isn't enough to make it worthwhile yet, as I already wrote. Each to their own.


Wonder which comes first..

2077 or 3080..

3080 would be hilarious :) that would make it a very short-sighted decision to buy a 1st-gen RTX card for 2077 - Keanu needs them sweet rays to shine bright!
 
Wonder which comes first..

2077 or 3080..

3080 would be hilarious :) that would make it a very short-sighted decision to buy a 1st-gen RTX card for 2077 - Keanu needs them sweet rays to shine bright!
LMAO, if they launch a 3080 before CP2077 I would die laughing. Unfortunately, if the performance is a large jump over a 2080 Ti, I would buy it, or the Ti version if they launch both.
 
Wonder which comes first..

2077 or 3080..

3080 would be hilarious :) that would make it a very short-sighted decision to buy a 1st-gen RTX card for 2077 - Keanu needs them sweet rays to shine bright!
Isn't that the way it works for all GPUs? Bad timing has always been the curse of consumers on both sides. I wonder how R VII gamers felt when their cards went EOL just a few short months later, with the 5700XT equaling them at a much reduced price.
 
Isn't that the way it works for all GPUs? Bad timing has always been the curse of consumers on both sides. I wonder how R VII gamers felt when their cards went EOL just a few short months later, with the 5700XT equaling them at a much reduced price.

The Radeon VII was the least gaming-oriented of any gaming card released recently, as it's a rebadged Instinct card. If anyone did their homework prior to buying it, they would have known its strengths and weaknesses for various workloads and already known Navi was incoming within 5-6 months.

It does have good value for certain workloads compared to workstation cards. For gaming, not so much compared to Navi.
 
The Radeon VII was the least gaming-oriented of any gaming card released recently, as it's a rebadged Instinct card. If anyone did their homework prior to buying it, they would have known its strengths and weaknesses for various workloads and already known Navi was incoming within 5-6 months.

It does have good value for certain workloads compared to workstation cards. For gaming, not so much compared to Navi.

That is the exact reason the MI50/Radeon VII went EOL: because Vega 20 @ 7nm is not as powerful as Navi 10 at games, even if it is smaller.

That is all Turing is too: a rebranded, burnt-out Enterprise card... it is not a Gamer card like RDNA is.

Nvidia's Ampere has to be its own thing, or it will be a flop upon arrival if it's nothing other than a Jensen hand-me-down chip from the Compute/Server world. It would be a fail/fail for Gamers if that is what Nvidia has in store for Ampere. That is why "big-navi" won't need much more die space to beat the 2080 Ti in games.
 
The Radeon VII was the least gaming-oriented of any gaming card released recently, as it's a rebadged Instinct card. If anyone did their homework prior to buying it, they would have known its strengths and weaknesses for various workloads and already known Navi was incoming within 5-6 months.

It does have good value for certain workloads compared to workstation cards. For gaming, not so much compared to Navi.
Look for reviews of the R VII. Nearly all were done from a gaming context. I'll bet more people bought them for gaming than for its other functions. AMD positioned it for gaming to begin with.
 
I never made a big fuss about async; I think you have me confused.

I want to play it when it's ready. Right now, three games plus an expensive and lacking hardware implementation isn't enough to make it worthwhile yet, as I already wrote. Each to their own.
6 + 1
  1. Assetto Corsa Competizione
  2. Battlefield V
  3. Metro Exodus
  4. Quake II RTX
  5. Shadow of the Tomb Raider
  6. Stay in the Light
+
  1. Wolfenstein: Youngblood (in future update)
Control releases in two weeks with ray tracing at launch, making it 7 + 1.
 
6 + 1
  1. Assetto Corsa Competizione
  2. Battlefield V
  3. Metro Exodus
  4. Quake II RTX
  5. Shadow of the Tomb Raider
  6. Stay in the Light
+
  1. Wolfenstein: Youngblood (in future update)
Control releases in two weeks with ray tracing at launch, making it 7 + 1.

And how long has the RTX series been available? It's still a very meager showing...

Hoping that Control is fantastic; my RTX card really needs it to be!
 
How many developers will want to jeopardize their schedules to kludge new technology into already-in-progress games?

And how long does it take to develop a new game, so you can plan from the beginning to include ray tracing?

Well, as some claim, NV has been working on ray tracing for years and years, right?

They also have, hands down, the best developer support program.

So shouldn't those two things together mean that when Jensen held that card up... everyone in the industry should have had games and patches ready to drop?

As I see it there are only really a few possibilities:
1) NV had not been working on it for years and years... but shoehorned it in to make their tensor cores not completely sit there looking stupid.
2) NV's software program isn't as good as we think. (I don't think that is true; NV has a great developer support apparatus.)
3) NV, MS, AMD, and Sony have been talking about it... and either
A) developers were targeting a different date for go time (perhaps the next-gen console launches), or
B) developers don't really want ray tracing. There are still a lot of developers who believe light maps are superior artistically. Reflections are great, sure, but the control of a light map is often preferable to trying to bounce light everywhere to properly light a scene.

As cool as RT can be... IMO I don't think game developers are in love with the feature. Some games will use it, but it is hardly going to be a de facto thing in AAA games any time soon, if EVER. In the right type of game it can and will be very cool... in a great number of other games it will always just look marginally better, require a ton of development time to set up properly, and run like shit compared to just using a simple light map.
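A crude back-of-envelope on why a baked light map is so much cheaper per pixel than bouncing light at runtime; every number here is a made-up placeholder, just to show the shape of the cost:

```python
# Per-pixel cost sketch: a baked light map is roughly one extra texture
# fetch, while real-time GI has to trace rays through a BVH every frame.
width, height = 2560, 1440
pixels = width * height

lightmap_cost_per_pixel = 1        # hypothetical: one texture sample
rays_per_pixel = 2                 # hypothetical hybrid-RT ray budget
bvh_steps_per_ray = 30             # hypothetical traversal/intersection work
rt_cost_per_pixel = rays_per_pixel * bvh_steps_per_ray

print(f"light map : {pixels * lightmap_cost_per_pixel:,} units/frame")
print(f"ray traced: {pixels * rt_cost_per_pixel:,} units/frame "
      f"(~{rt_cost_per_pixel / lightmap_cost_per_pixel:.0f}x more)")
# The trade-off: the light map is frozen at bake time and fully art-directed,
# while traced light reacts to moving objects and dynamic lights.
```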
 
The Radeon VII was the least gaming-oriented of any gaming card released recently, as it's a rebadged Instinct card. If anyone did their homework prior to buying it, they would have known its strengths and weaknesses for various workloads and already known Navi was incoming within 5-6 months.

This is the case for every single large GPU AMD has ever released (meaning, since they bought ATI). Every single one has been compute-heavy and targeted commercial applications first. And every single one was worse for gaming, for its die size and power draw, than a comparable Nvidia part; since the release of the GTX 680, Nvidia has built dedicated high-end gaming GPUs that have smoked AMD's compute-heavy large GPUs. They still do today.

Big Navi will literally be AMD's first. Ever.
 
This is the case for every single large GPU AMD has ever released (meaning, since they bought ATI). Every single one has been compute-heavy and targeted commercial applications first. And every single one was worse for gaming, for its die size and power draw, than a comparable Nvidia part; since the release of the GTX 680, Nvidia has built dedicated high-end gaming GPUs that have smoked AMD's compute-heavy large GPUs. They still do today.

Big Navi will literally be AMD's first. Ever.

You're joking, right?

Nvidia literally only released Volta as a compute card.

Turing is clearly not designed as a gaming-first card, unless you are really, really smoking the NV marketing materials. 90% of the Turing white papers read as a love letter to AI developers. Granular tensor cores, yes please.

I'm not saying you're wrong and that AMD has not been building compute-first cards... but man, so has NV. Or are you really going to argue that tensor cores have a serious game use? The first arch we have seen in years that isn't obviously trying to suck off the AI industry is Navi. (Although AMD may go there with Navi+ / Navi 2.) lol
 
Nvidia literally only released Volta as a compute card.

...and then didn't release it as a consumer card, so your point is?

Turing is clearly not designed as a gaming-first card

-Fastest gaming cards available
-More efficient than brand-spanking-new AMD parts that are using a smaller node
-While including industry-standard features that AMD is now a year behind on supporting

You're joking, right?

So no, I'm not joking. And AMD shouldn't joke around either, as they're about to get pushed out of the market by Intel.
 
So no, I'm not joking. And AMD shouldn't joke around either, as they're about to get pushed out of the market by Intel.

Well, I doubt that highly... if anyone should be worried about Intel, it's NV. Intel's real goal isn't games either. Intel already has major Xe supercomputer wins that could and should have (if you're the NV sales team) gone NV's way. NV has been making the majority of its actual margin from compute clusters for a long time now... and Intel is coming for that cheddar.

Intel is more likely to push NV out of supercomputers first... and AI not long after.

Anyone thinking Intel is getting into GPUs to sell gamers cards... is dreaming or smoking some real great stuff. Hopefully we get some interesting Intel consumer cards as a byproduct. What's funny is, if NV wants to stay in those markets it's probably going to have to find a way to work with AMD. (Or spend a lot more money on ARM development again.)
 
Turing looks a lot like a gaming GPU to me… Strong concurrent FP32/INT32 performance, support for VRS and DXR… Also, Turing only has 2 FP64 units per SM, which makes it terrible for anything that loves double-precision floating-point performance.
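To put the FP64 point in rough numbers (the per-SM counts below are my reading of Nvidia's public Turing whitepaper: 64 FP32 cores but only 2 dedicated FP64 units per SM):

```python
# Theoretical FP64:FP32 throughput ratio per Turing SM.
fp32_cores_per_sm = 64
fp64_units_per_sm = 2

ratio = fp64_units_per_sm / fp32_cores_per_sm
print(f"FP64 rate = 1/{int(1 / ratio)} of FP32")      # 1/32

# e.g. a hypothetical ~13 TFLOPS FP32 card manages only ~0.4 TFLOPS FP64 --
# fine for games, useless for HPC codes that live on double precision.
print(f"13 TFLOPS FP32 -> ~{13 * ratio:.2f} TFLOPS FP64")
```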
 
Well, I doubt that highly... if anyone should be worried about Intel, it's NV. Intel's real goal isn't games either. Intel already has major Xe supercomputer wins that could and should have (if you're the NV sales team) gone NV's way. NV has been making the majority of its actual margin from compute clusters for a long time now... and Intel is coming for that cheddar.

Intel is more likely to push NV out of supercomputers first... and AI not long after.

Anyone thinking Intel is getting into GPUs to sell gamers cards... is dreaming or smoking some real great stuff. Hopefully we get some interesting Intel consumer cards as a byproduct. What's funny is, if NV wants to stay in those markets it's probably going to have to find a way to work with AMD. (Or spend a lot more money on ARM development again.)
What do you think Xeon Phi is? It was developed to be a consumer graphics card first, before being repurposed as an MCP for HPC and other data-oriented applications. Intel has wanted to enter the discrete consumer graphics market for at least 15 years.
 
Turing looks a lot like a gaming GPU to me… Strong concurrent FP32/INT32 performance, support for VRS and DXR… Also, Turing only has 2 FP64 units per SM, which makes it terrible for anything that loves double-precision floating-point performance.

I never said it was terrible at gaming... just that it was designed as much (and perhaps more so) to hit these goals
https://www.nvidia.com/en-us/data-center/tesla-t4/
than it was to provide gaming performance.

Volta would have made great consumer cards as well... NV just felt they could keep selling you Pascal for a few more years.

You want to know why games are not the first requirement of an NV GPU design... Google rents T4s for around $22 a day per card. So, counting a few discounts, they are making $500-600 per month per card. Think about that: T4s are basically 2070 Supers, and Google is renting them out for roughly $7-8k per year per card. (And before anyone says it: yes, you can rent them in a VM-type situation for as low as 30c an hour or so, but those are pooled and not dedicated... Google makes more on that.)

https://cloud.google.com/blog/produ...ads-on-nvidias-t4-gpu-now-generally-available
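Taking that ~$22/day figure at face value, the per-card math looks roughly like this (list on-demand pricing only, ignoring discounts, utilization, power, and hosting costs):

```python
# Rough revenue per dedicated T4, using the ~$22/day figure quoted above.
per_day = 22.0
per_month = per_day * 30      # ~$660/month before discounts
per_year = per_day * 365      # ~$8,000/year

print(f"~${per_month:,.0f}/month, ~${per_year:,.0f}/year per card")
# Against a chip whose gaming sibling retails for roughly $500, the payback
# period is measured in months -- which is the point about where NV's
# design priorities sit.
```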
 
What do you think Xeon Phi is? It was developed to be a consumer graphics card first, before being repurposed as an MCP for HPC and other data-oriented applications. Intel has wanted to enter the discrete consumer graphics market for at least 15 years.

The Intel engineers on that project thought that if they could sell it as a consumer part, it would give them the funds to develop it further. There really was no non-x86 compute business in those days. The players know better now.
 
Think about that: T4s are basically 2070 Supers.

The T4 is a dedicated enterprise card. It's only a 70W TDP card whereas the 2070 Super is 215W.

If we follow your logic, nobody should buy a Ford to drive around because one time Ford made a race car which was "basically" the same as an SUV.

More directly, nobody should buy AMD CPUs to play games because AMD's server CPUs are made out of the same style of silicon transistors.

Ultimately, your line of reasoning is just plain silly anyway. The fastest gaming GPU on the market today is made by Nvidia. The second fastest gaming GPU on the market today is made by Nvidia. Third place is currently a tie between Nvidia and AMD's best ever GPU. Are games more fun to play at low frame rates if you tell yourself that, "well, it might run like hot garbage, but at least it isn't a SERVER part!!@!!23?" Would a rose by any other name not run faster on Nvidia?

The rumored "Nvidia Killer" here is rumored to match the performance of an 18mos old GPU. Not beat the performance, just match. Not a current model, but one that will be 18mos old and discontinued at the time of "Killer's" launch.
 
The T4 is a dedicated enterprise card. It's only a 70W TDP card whereas the 2070 Super is 215W.

If we follow your logic, nobody should buy a Ford to drive around because one time Ford made a race car which was "basically" the same as an SUV.

More directly, nobody should buy AMD CPUs to play games because AMD's server CPUs are made out of the same style of silicon transistors.

Ultimately, your line of reasoning is just plain silly anyway. The fastest gaming GPU on the market today is made by Nvidia. The second fastest gaming GPU on the market today is made by Nvidia. Third place is currently a tie between Nvidia and AMD's best ever GPU. Are games more fun to play at low frame rates if you tell yourself that, "well, it might run like hot garbage, but at least it isn't a SERVER part!!@!!23?" Would a rose by any other name not run faster on Nvidia?

The rumored "Nvidia Killer" here is rumored to match the performance of an 18mos old GPU. Not beat the performance, just match. Not a current model, but one that will be 18mos old and discontinued at the time of "Killer's" launch.

I never said no one should buy an NV card. Buy whichever you prefer and whichever you feel is best for your pocketbook/usage case. I simply said something when someone claimed AMD has zero thought about gaming products. It's an odd line of reasoning when it's true of ALL GPU manufacturers, forevermore. Also, yeah, the T4 and 2070 are using the exact same chip; the T4 just doesn't need outputs/analog converters, etc. The same is true of AMD's Instinct cards: same chips as the Vegas, much lower TDPs.

However, if we want to argue cars: when Ford designs a race car, they don't put that racing chassis in a minivan and then try to charge 3/4 of the race car price for the minivan. That's sort of what NV did with Turing. Ford may learn things building race cars that help them build better Mustangs... but a Mustang is NOT the same car with a different set of wheels. People buying $1200 GPUs with 1/4 of the die being tensor cores that are useless to them does drive the cost of gaming cards up. Of course, NV would probably have higher costs if they did design a nothing-but-gaming card anyway... the industry in general is in an odd place where building chips is so expensive that trying to make a one-chip-fits-all chip is the cheapest option.

Chiplets are probably the answer to that. It sounds like within a few years all the major players, including NV, will be building chiplet-based parts. That should fix a lot of the issues with the cost of designing and taping out big, massive, monolithic, one-size-fits-all chips. It should mean better gaming cards as well as better products for their other markets. The margin increases should mean that some real competition drives pricing down as well. I truly hope in a few years we all shake our heads remembering when top-end video cards were selling for more than top-end prosumer CPUs.
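A toy defect-yield model shows why chiplets help with the cost of huge monolithic dies. The defect density and die areas below are made up, the yield model is the simplest Poisson one, and packaging/interconnect costs are ignored:

```python
import math

# Poisson yield model: yield = exp(-area * defect_density).
DEFECT_DENSITY = 0.1                     # defects per cm^2 (hypothetical)

def yield_rate(area_mm2):
    return math.exp(-(area_mm2 / 100.0) * DEFECT_DENSITY)

mono_area = 600                          # one big monolithic die, mm^2
chiplet_area = 150                       # or four smaller chiplets, mm^2 each

mono_cost = mono_area / yield_rate(mono_area)                # silicon per good part
chiplet_cost = 4 * (chiplet_area / yield_rate(chiplet_area))

print(f"monolithic yield {yield_rate(mono_area):.0%}, "
      f"chiplet yield {yield_rate(chiplet_area):.0%}")
print(f"monolithic costs ~{mono_cost / chiplet_cost:.2f}x more silicon per good part")
# Smaller dies lose far fewer candidates to random defects, and known-good
# chiplets can be binned before packaging -- that's the margin argument above.
```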
 
if anyone should be worried about Intel, it's NV.

Yes to both. AMD GPUs go first.

Intel's real goal isn't games either.

Their goal is profit.

Gaming is one facet of the desktop market, but so is compute for content creation and inference and so on. Intel has to attack all of it to ship products and get development attention.
 
The rumored "Nvidia Killer" here is rumored to match the performance of an 18mos old GPU. Not beat the performance, just match. Not a current model, but one that will be 18mos old and discontinued at the time of "Killer's" launch.
It is code-named "NV Killer", not "Matcher". The 5700XT is being held back by its memory speed (artificially, it seems) and is knocking on the RTX 2070 Super's/2080's door. So where would that place the 5800XT? Now where would that place the "NV Killer" 5900XT?
If it comes out 6 months from now, that will be a full two years after the RTX 2080 Ti. I don't understand why it is so unfathomable to some that a brand-new gaming-oriented architecture would best the top bar. In fact, it should be expected, or we are going backwards.
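For what it's worth, here is a naive scaling exercise from the 5700XT (40 CUs); the bigger CU count and the scaling efficiency are pure assumptions, not leaked specs, and memory bandwidth, clocks, and power limits are ignored:

```python
# Toy extrapolation: a hypothetical "big Navi" as a multiple of the 5700 XT.
base_cus = 40                 # Navi 10 / 5700 XT
big_cus = 72                  # hypothetical CU count for an "NV Killer"
scaling_efficiency = 0.75     # assumed: real GPUs scale well below linearly

relative_perf = 1 + (big_cus / base_cus - 1) * scaling_efficiency
print(f"~{relative_perf:.2f}x the 5700 XT")                 # ~1.6x

# Using the ~30% 2080 Ti lead over current AMD cards cited elsewhere
# in this thread as the baseline:
rtx_2080ti = 1.30
print(f"vs 2080 Ti: ~{relative_perf / rtx_2080ti:.2f}x")    # ~1.2x
```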
 
It is code-named "NV Killer", not "Matcher". The 5700XT is being held back by its memory speed (artificially, it seems) and is knocking on the RTX 2070 Super's/2080's door. So where would that place the 5800XT? Now where would that place the "NV Killer" 5900XT?
If it comes out 6 months from now, that will be a full two years after the RTX 2080 Ti. I don't understand why it is so unfathomable to some that a brand-new gaming-oriented architecture would best the top bar. In fact, it should be expected, or we are going backwards.

We can basically extrapolate from the 5700xt.

What is all the “gaming oriented architecture” nonsense lately? It’s like someone put a marketing team on hardforum. So all the cards since the 7970 weren’t gaming oriented?
 
So where would that place the 5800XT? Now where would that place the "NV Killer" 5900XT?
If it comes out 6 months from now, that will be a full two years after the RTX 2080 Ti. I don't understand why it is so unfathomable to some that a brand-new gaming-oriented architecture would best the top bar. In fact, it should be expected, or we are going backwards.

You're proving my point. That would place the 5900XT at simply matching the 2080 Ti, but 18 months after it was released. During those same 18 months, you don't think Nvidia was working on anything? Best case, this puts the 5900XT in a tie with the 2080 Ti for two months before the 3080 Ti comes out, at which point the 5900XT will get crushed for the top spot and likely only match the performance of the 3080 or even 3070. In order for AMD to take the top spot in that timeframe, they can't simply update their current offerings - they need to launch a new architecture. Given they just launched a new one a few weeks ago, that simply isn't going to happen until Q1 2021 at the earliest.
 
We can basically extrapolate from the 5700xt.

What is all the “gaming oriented architecture” nonsense lately? It’s like someone put a marketing team on hardforum. So all the cards since the 7970 weren’t gaming oriented?
AMD, out of necessity, has had to get the best of both worlds out of their one architecture.
 
In a strange turn of events, AMD is going to release a new set of cards that they simply call "meh".

"They're cards. Just so, so cards. They'll be priced competitively. The MEH 5600xt and MEH 5600. They'll produce heat, come with a blower cooling solution and consume a lot of power."
 
We can basically extrapolate from the 5700xt.

What is all the “gaming oriented architecture” nonsense lately?
It’s like someone put a marketing team on hardforum. So all the cards since the 7970 weren’t gaming oriented?


Yes, it means 100% of the GPU was designed for Gamers, not another industry.

The RDNA architecture is not a hand-me-down architecture/design from another GPU segment. As such, Turing's architecture, no matter how big, cannot hang with RDNA in games, because RDNA is more powerful and game-focused. This will become quite evident when "big-navi" and "bigger-navi" hit the market. Both the 5800 & 5900 will still be small GPU chips, because they don't suffer from transistor bloat.

Dr. Lisa Su made a point about this, then paused and repeated herself, to drive home the point that AMD is going in two different directions, using two different architectures. Sharing isn't fair to Gamers, because games require different transistors than the business world does.

This is also noted by many of the well-respected people in the industry, and it is why RDNA is killing it (meaning it turned the whole GPU market on its side), and it is only the 1st release of RDNA (Navi 10). There are still the 5600, 5800 & 5900 series to be released... all before AMD's competition will be able to respond to AMD's well-kept secret, RDNA...

Jensen didn't know (until it was too late) that AMD had developed a forward-thinking gaming architecture that nearly every developer and designer has gotten on board with, including Xbox Scarlett and PlayStation 5.

;)
 
As such, Turing's architecture, no matter how big, cannot hang with RDNA in games, because RDNA is more powerful and game-focused.

It's funny that you say this. Literally every game runs at least 30% better on the 2080 Ti than it does on anything AMD has released or is scheduled to release in 2019.

I'm sorry, but there really is nothing to discuss here. The frame rates don't lie. To say anything else is nothing more than fanboi BS.

"Hand me down" tech? Doesn't matter. AMD is slower.
"Transistor bloat?" Doesn't matter. AMD is slower.
"RTX is wasted tech?" Doesn't matter. AMD is slower.
"7nm arouses me." Doesn't matter. AMD is slower.
"Nvidia is too expensive!" Doesn't matter. AMD is slower.
"Consoles actually matter." LOL. AMD is slower. Go find a new forum.
"Gamers require magical unicorn transistors" Doesn't matter. AMD is slower.

Do you have anything else you'd like to say? Maybe you can come up with something that is more substantive than agreeing that AMD is slower?
 
6 + 1
  1. Assetto Corsa Competizione
  2. Battlefield V
  3. Metro Exodus
  4. Quake II RTX
  5. Shadow of the Tomb Raider
  6. Stay in the Light
+
  1. Wolfenstein: Youngblood (in future update)
Control releases in two weeks with ray tracing at launch, making it 7 + 1.


Add "GrimStar":
 
Well, that's because it is a tech demo and not really the game trailer. If it was, they would be excluding 90% of their market, which is not something they would want to do. Look how shitty it looks for you! LOL
If a company can stuff some RT in their game, no matter how bad or half-baked, it is a great marketing plan with people so desperate to play content that supports it. Reminds me of my old Matrox G400 that had spanking-new bump mapping. I literally bought shit games that had BM tacked on just to show off my new card. By the time any good games came out, every card supported BM :(
 
Chiplets are probably the answer to that. It sounds like within a few years all the major players, including NV, will be building chiplet-based parts. That should fix a lot of the issues with the cost of designing and taping out big, massive, monolithic, one-size-fits-all chips. It should mean better gaming cards as well as better products for their other markets. The margin increases should mean that some real competition drives pricing down as well.

Chiplets are an intriguing idea. Theoretically, you could move the ROPs, memory controllers, and cache off to a separate on-package die, like AMD did with Zen 2. You would also need to move the "front end" that talks to the CPU. It'll be slower and use more power than a big fat chip, though.

I truly hope in a few years we all shake our heads remembering when top-end video cards were selling for more than top-end prosumer CPUs.

Why should small CPU dies with single-digit performance increases cost more than GPUs 3-4x the size that substantially improve performance each generation?
 
I honestly could not imagine a more unimpressive trailer for a game.


And also, let's not forget - a trailer for an upcoming game does not add much weight to the list. Maybe when it's released at some point; until then, it's a boring trailer on YouTube - and it plays the same on all cards :)
 