
5080 Reviews

If we went back to 1994, telling people what gets close to real-time rendering with OptiX and a 5090 in a Blender viewport would sound like quite something.
Real time 3D was a thing before 1994, just look at the Super Nintendo. It was Sony who flipped the gaming industry on its head and showed you could do it better. Ray Tracing was something everyone wanted, but it required hardware that wasn't viable. Here are three PS3s doing Ray Tracing on a car at what looks like 3 fps.

View: https://youtu.be/oLte5f34ya8?si=Q3T_JAOck_5zRID3

In other words, for AAA games in the last 5 years or so, if you are not using RT you are getting a subpar product.
This is why people are noticing that 9-year-old games look better than today's games: we're seeing developers give up on image quality.

View: https://youtu.be/mam6by5JdS8?si=VsMqwXT3yj71HOM4
And we have even seen a move towards titles that require RT in the last few months. Indiana Jones and the Great Circle is one of them. There was one more too, but I forget what it was now.

These are games where there is no option to run without RT, and they do not run on GPUs that do not support RT.

This IMHO is the future. Not because I want it to be - mind you - but because development is easier and cheaper for game developers. Raster graphics require so many manual workarounds to get things looking good, while the same results can be accomplished with RT by just dropping in light sources and cameras.
This is why, when the RTX 20 series was released, I said at some point we all need to buy graphics cards with Ray-Tracing because that's the direction the industry is going. Developers aren't going to waste their time with a toggle where you can turn Ray-Tracing on and off, because that's a lot of work. They just said you need a GPU with Ray-Tracing.
I think it sucks, because rasterization can look 99% as good as RT at a fraction of the GPU render load if a dev really wants it to, but the truth is that whatever is easier for developers usually wins.
The all mighty Nvidia has spoken and you must use Ray Tracing. Your games may look worse but that's a risk they're willing to take.
Let's be real about RT. When Nvidia dropped it with the 20 series, it was never a feature for the Gamer. They pushed RT because:

1.) They knew it would make developers lives easier, and they could get huge adoption relatively quickly.
2.) They knew they had an advantage over the competition with RT, and knew that if game devs started using it, even reluctant gamers would feel forced to buy into it, or get a subpar experience.
I think Nvidia pushed Ray Tracing because, when the crypto bubble popped in 2017, it's not like Nvidia had a reason for people to buy their latest GPUs. They invented Ray Tracing to push gamers to buy it.
So it was a way to make devs happy, and a way to stick it to AMD. And that has pretty much come true. All reports are that AMD has mostly caught up with the new 90-series Radeons, at least in the mid range, and going forward, devs that were titillated by the massive cost and time savings they could get out of RT, but feared implementing it because they didn't want to alienate mainstream gamers, are going to have less and less reason not to push RT and make their titles "mandatory RT".
If you look at Steam's Hardware Survey you can see that up until the past couple of years the GTX cards dominated it. Now if you look, it's dominated by RTX, mainly low-end RTX cards: the RTX 3060, 4060, 2060, and their laptop variants. You have the 3070 and 4070 on the list but further down. If you wanna know why people are turning on DLSS, it's because they own an RTX xx60-class GPU and therefore need to turn on DLSS in order to get a playable frame rate with Ray Tracing. Because everyone who bought an RTX card is certainly turning on Ray-Tracing in games, just to see what the meme is all about.
 
Real time 3D was a thing before 1994
Yes, OpenGL started in 1992, and we had real-time 3D rendering before that; 4D Sports Boxing had some 3D rendering and released in 1991 (https://dosgames.com/game/4d-boxing/). I did work at a dental scanning place that still had '92-'94 era OpenGL code in there.

Not sure exactly what you mean here. It is precisely because real-time 3D was already a thing that people back then could have been so impressed by what an OptiX-denoiser-enabled Blender viewport can do, rather than having no grasp of whether it is impressive or not.

I think Nvidia pushed Ray Tracing because, when the crypto bubble popped in 2017, it's not like Nvidia had a reason for people to buy their latest GPUs. They invented Ray Tracing to push gamers to buy it.
That seems to exaggerate how reactive companies can be. The GV100 GPU released in March 2018, Turing with its RT cores followed that August, and crypto busted what, in January of that year?

They probably started working seriously on that by 2015 at the latest. They obviously always did at the research level, but I am talking about the RT-core-type solution Turing used.
 
They probably stopped production because they needed the capacity at TSMC; the wafers were needed for the 5000 series.

The problem is the real 5080 should have been a cut-down die of the 5090. But then it would cost way more than $1000..... But instead Nvidia is trying to sell you a 5070 for $1000.... OOPS, I mean a 5080.
Maybe later? But Nvidia will need those cut-down GB202 chips for the Quadro lineup.
Nvidia still hasn't announced or released the workstation Blackwell components. The current Quadro RTX 6000 Ada uses the full AD102, as does the L40, and the chips that don't bin for those are then passed down to the Quadro RTX 5880 Ada, the RTX 5000 Ada, and the L20.
Given that lineup's popularity, they will have a similar hardware stack for Blackwell, and unlike a consumer launch, the SIs and OEMs will not tolerate a paper launch.
 

View: https://www.youtube.com/watch?v=Nh1FHR9fkJk

Nvidia not understanding open source... or not seeing why it should work the way it does. NV not getting that using year-old code and tweaking it might be bad... Nvidia being an overall bad open-source player.... ahhhh, shocked. Surely that can't be true.
😲

Was about to post about this. There are some VERY interesting things said during this video. Definitely worth a watch. There's a reason why Linus couldn't stand them... and GBM. Red Hat should NOT have helped them with their EGL Streams fiasco.
EDIT: This is soooo nVidia. They can't just make good hardware, they have to rig the game too, and it's just nerve-wracking when it's not really needed per se. Just let the hardware speak for itself.
 
He takes so much time to say so little, yet speaks fast while doing it. Strange combo.... 20 minutes in, and I have yet to hear the start of a single issue being presented...

Just listen to him around the 23-minute mark.... what is he even rambling about? Yes, generated interpolated frames are not the same; that was well established 10 times before, 23 minutes into a video about the subject. A little script editing would be worth it...
 
He takes so much time to say so little, yet speaks fast while doing it. Strange combo.... 20 minutes in, and I have yet to hear the start of a single issue being presented...

Just listen to him around the 23-minute mark.... what is he even rambling about? Yes, generated interpolated frames are not the same; that was well established 10 times before, 23 minutes into a video about the subject. A little script editing would be worth it...
The YouTube algorithm punishes videos under 19 minutes or so.
They need to pad them as a result.
Same reason sites with recipes put them at the bottom, forcing you to scroll through their life story. Google does weird shit and wants you engaged for as long as possible in one place; the longer you are there or the further down you scroll, the better the content must be, so the more people it's recommended to.
 
Same reason sites with recipes put them at the bottom, forcing you to scroll through their life story.

If they gave you the recipe right up front you wouldn't ever know they won prizes in second grade for their horse drawings. Simple algebra.
 
If they gave you the recipe right up front you wouldn't ever know they won prizes in second grade for their horse drawings. Simple math really.
I could have lived a long happy life not knowing about their hoof fetish though
 
He takes so much time to say so little, yet speaks fast while doing it. Strange combo.... 20 minutes in, and I have yet to hear the start of a single issue being presented...

Just listen to him around the 23-minute mark.... what is he even rambling about? Yes, generated interpolated frames are not the same; that was well established 10 times before, 23 minutes into a video about the subject. A little script editing would be worth it...
you should tell GN, not us...

FG feels like the base frame rate but with worse lag and artifacts. It's just not the same.
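To put rough numbers on that, here is a toy model (my own illustrative sketch with assumed values, not measurements from any review): with 2x interpolation the display shows twice as many frames, but input is still only sampled once per rendered frame, and the newest real frame gets held back while the generated in-between frame goes out first. The hold_back_fraction parameter is an assumption; real pipelines vary.

```python
# Toy model of why frame generation "feels like the base rate".
# Assumptions (illustrative only): 2x interpolation, and holding the newest
# rendered frame back for roughly half a base frame time while the generated
# in-between frame is displayed first.
def frame_gen_toy_model(base_fps: float, factor: int = 2, hold_back_fraction: float = 0.5):
    base_frame_ms = 1000.0 / base_fps
    displayed_fps = base_fps * factor          # what the FPS counter shows
    input_rate_hz = base_fps                   # input is only sampled per rendered frame
    added_latency_ms = base_frame_ms * hold_back_fraction
    return displayed_fps, input_rate_hz, added_latency_ms

for fps in (30, 60):
    shown, sampled, extra = frame_gen_toy_model(fps)
    print(f"{fps:>3} rendered fps -> {shown:.0f} displayed fps, "
          f"input still sampled at {sampled:.0f} Hz, "
          f"~{extra:.1f} ms extra delay vs. no interpolation")
```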
 
EDIT: This is soooo nVidia. They can't just make good hardware, they have to rig the game too, and it's just nerve-wracking when it's not really needed per se. Just let the hardware speak for itself.
Are you complaining about Nvidia over-delivering? Is this what it's come down to now? Not only do they make the fastest GPU, but they add value with their software and drivers and you dislike this over-delivery?

I've seen it all now on this forum.
 
Are you complaining about Nvidia over-delivering? Is this what it's come down to now? Not only do they make the fastest GPU, but they add value with their software and drivers and you dislike this over-delivery?

I've seen it all now on this forum.
You'll have to watch the video, they made their own open source version of a tool to obfuscate data regarding actual frame times when using frame gen. They're making it much more difficult to accurately benchmark framegen in general because they're going against open source industry standards when they could easily contribute to it as well. They're being cagey about how to get numbers for their cards basically.
 
You'll have to watch the video, they made their own open source version of a tool to obfuscate data regarding actual frame times when using frame gen. They're making it much more difficult to accurately benchmark framegen in general because they're going against open source industry standards when they could easily contribute to it as well. They're being cagey about how to get numbers for their cards basically.

In other words, standard Nvidia practice. Oh hey guys, frame times really matter, here is FCAT to measure them. See how good our product is? Fast forward a couple of generations and it is "FCAT? What is FCAT?"
 
Are you complaining about Nvidia over-delivering? Is this what it's come down to now? Not only do they make the fastest GPU, but they add value with their software and drivers and you dislike this over-delivery?

I've seen it all now on this forum.
You obviously didn't watch the video if you think that's what I'm talking about.
 
The long and short of the open source stuff.
Intel developed this;
https://game.intel.com/us/intel-presentmon/
Presentmon is 100% opensource. https://github.com/GameTechDev/PresentMon
Nvidia has used the code for Nvidia frameview https://www.nvidia.com/en-us/geforce/technologies/frameview/

The code Nvidia chose to use isn't the current version of PresentMon, which has incorporated a ton of new methods to better measure generated frames. Instead they are using a much older (a year and a half old, or something like that) version of the PresentMon code, and they are not feeding anything they have done to the code back to the open-source project (which they are allowed to do, but it makes their software pretty suspect). When Steve asked them if they have corrected for the things Intel's current version corrects for to accurately measure frame-gen latency... it seems like at least the people Steve spoke to didn't even understand the question. Though to be fair, I doubt Steve was talking to the Nvidia software people that created FrameView.

If you want a better description watch the GN video even if it is annoyingly padded with stupid jokes.
Bottom line is Intel has come up with a solid, reliable tool that is fully open, so no shenanigans. AMD is using it for OCAT, their Open Capture and Analytics Tool. AMD uses the PresentMon library (and even praises it on the OCAT page) and has fed code back to the project. Nvidia has cherry-picked an out-of-date version that wasn't as accurate, and has not been feeding anything they may have tweaked or added back to the project. Basically they have turned an old version closed source. If you trust Nvidia, maybe they made the same fixes the PresentMon project has come to.... or maybe they haven't. lol
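For anyone wondering what "accurately measuring frame times" boils down to in practice, here is a minimal sketch that post-processes a PresentMon-style CSV capture into an average frame time and 1% lows. The file name capture.csv and the column name msBetweenPresents are my assumptions; the header names differ between PresentMon versions, which is part of why it matters which version of the code a vendor forks.

```python
# Minimal sketch: summarize a PresentMon-style CSV capture.
# Assumptions: the capture is in "capture.csv" and has a per-frame column
# named "msBetweenPresents" (adjust to whatever your PresentMon build emits).
import csv
import statistics

def summarize(path="capture.csv", column="msBetweenPresents"):
    frame_times_ms = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                frame_times_ms.append(float(row[column]))
            except (KeyError, ValueError):
                continue  # skip rows without a usable frame-time value

    if not frame_times_ms:
        raise SystemExit(f"no '{column}' data found in {path}")

    avg_ms = statistics.fmean(frame_times_ms)
    # "1% low" FPS: average the slowest 1% of frame times.
    slowest = sorted(frame_times_ms, reverse=True)
    slowest_1pct = slowest[: max(1, len(slowest) // 100)]
    low_1pct_ms = statistics.fmean(slowest_1pct)

    print(f"frames captured : {len(frame_times_ms)}")
    print(f"average         : {avg_ms:.2f} ms ({1000 / avg_ms:.1f} fps)")
    print(f"1% low          : {low_1pct_ms:.2f} ms ({1000 / low_1pct_ms:.1f} fps)")

if __name__ == "__main__":
    summarize()
```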
 
You'll have to watch the video, they made their own open source version of a tool to obfuscate data regarding actual frame times when using frame gen. They're making it much more difficult to accurately benchmark framegen in general because they're going against open source industry standards when they could easily contribute to it as well. They're being cagey about how to get numbers for their cards basically.

You obviously didn't watch the video if you think that's what I'm talking about.
I clicked on it at first, then instantly closed the tab not even 10 seconds in when the first words out of his mouth were, "Nvidia is being a dick".

I used to watch a lot more GN videos, but in the last few years all the videos are just Steve flinging insults every 90 seconds. It's gotten obnoxious, regardless of which manufacturer he's flinging insults at. I've gotten to an age where my tolerance for unprofessional conduct in a professional setting has been exhausted.

My apologies.
 
I clicked on it at first, then instantly closed the tab not even 10 seconds in when the first words out of his mouth were, "Nvidia is being a dick".
Your take is a fair one.
Gotta say it though... Nvidia is being a dick. ;) I would have said they were being shady more than being dicks. Then again, I'm not making money on the situation and Steve is. Your point is valid.
 
I clicked on it at first, then instantly closed the tab not even 10 seconds in when the first words out of his mouth were, "Nvidia is being a dick".

I used to watch a lot more GN videos, but in the last few years all the videos are just Steve flinging insults every 90 seconds. It's gotten obnoxious, regardless of which manufacturer he's flinging insults at. I've gotten to an age where my tolerance for unprofessional conduct in a professional setting has been exhausted.

My apologies.
No worries, I only watched it because the YouTube algorithm fed it to me while I was blasting Ninja Gaiden 2. I typically don't care for most tech-tuber content, but it was somewhat interesting at least. The only tech-tube-adjacent stuff I really like is DIY Perks. I still want to build a "breathing" PC some day.
 
I clicked on it at first, then instantly closed the tab not even 10 seconds in when the first words out of his mouth were, "Nvidia is being a dick".

I used to watch a lot more GN videos, but in the last few years all the videos are just Steve flinging insults every 90 seconds. It's gotten obnoxious, regardless of which manufacturer he's flinging insults at. I've gotten to an age where my tolerance for unprofessional conduct in a professional setting has been exhausted.

My apologies.

Lots of people hate professionalism and as a result, hate journalism these days. Hence you get the popularity of GN's bullshit sensationalism and general total lack of ethical and professional conduct.

Now I see he's filing the ol' virtue-signalling, class-action, clout suit against Honey. No surprise there, PayPal won't even feel it, the lawyers get paid, GN gets clout, the victims get fucked a second time. I'm glad vexatious litigation is so strictly regulated here in Kanukistan. People don't understand that somebody is paying for all of that, not just the lawyers' fees; the court system isn't free, and vexatious litigation is incredibly costly. All for a bit of internet cred.

Sorry, that turned into a rant.

Anyway, the 5080 is definitely a huge upgrade for normal users who upgrade their machines every 5-7 years.

Power users with 40 series cards? Not so much...
 
He takes so much time to say so little, yet speaks fast while doing it. Strange combo.... 20 minutes in, and I have yet to hear the start of a single issue being presented...
He's building up to a multi-video series about this topic. He makes a lot of valid points, especially about PresentMon. I myself didn't know how many times DLSS has changed over the course of its history.
Just listen to him around the 23-minute mark.... what is he even rambling about? Yes, generated interpolated frames are not the same; that was well established 10 times before, 23 minutes into a video about the subject. A little script editing would be worth it...
A lot of people want to see the graphs, but the graphs tell you nothing other than that frame generation does indeed increase performance. Future videos from Gamers Nexus on this topic are when things will get good. This isn't your Apple-fanboy YouTuber who runs Geekbench and Cinebench and calls themselves a "reviewer". Let the man cook.
 
Lots of people hate professionalism and as a result, hate journalism these days. Hence you get the popularity of GN's bullshit sensationalism and general total lack of ethical and professional conduct.

Now I see he's filing the ol' virtue-signalling, class-action, clout suit against Honey. No surprise there, PayPal won't even feel it, the lawyers get paid, GN gets clout, the victims get fucked a second time. I'm glad vexatious litigation is so strictly regulated here in Kanukistan. People don't understand that somebody is paying for all of that, not just the lawyers' fees; the court system isn't free, and vexatious litigation is incredibly costly. All for a bit of internet cred.

Sorry, that turned into a rant.

Anyway, the 5080 is definitely a huge upgrade for normal users who upgrade their machines every 5-7 years.

Power users with 40 series cards? Not so much...
A friend of mine wants to go from a 3070 Ti to a 5080; I can't say he won't notice the difference.
And since that 3070 Ti was purchased at the height of the pandemic pricing woes, I'm not even sure he will be spending that much more on it than he did on the 3070 Ti...
 
A friend of mine wants to go from a 3070 Ti to a 5080; I can't say he won't notice the difference.
And since that 3070 Ti was purchased at the height of the pandemic pricing woes, I'm not even sure he will be spending that much more on it than he did on the 3070 Ti...
Tell him to hold off a month or two or three (it's not like we have games that can't run on a 3070 Ti anyway) and wait to see what AMD actually has, even if it's nothing.... Right now he is going to be fighting over slim restocks marked up 25-50% anyway. Wait till Nvidia gets enough stock for there to actually be cards close to MSRP, or to take a serious look at AMD, or heck, maybe even until all the "Get your shit together AMD" people hoping AMD would drop a card that would force Nvidia to sell their 5080 at the 5070 Ti pricing it should be at get their way. lol
 
For the first time in years, I am looking at AMD as a potential option.
 
A friend of mine wants to go from a 3070 Ti to a 5080; I can't say he won't notice the difference.
And since that 3070 Ti was purchased at the height of the pandemic pricing woes, I'm not even sure he will be spending that much more on it than he did on the 3070 Ti...
I nearly, unironically, said "wait for the 9070" but I stopped myself. AMD doesn't deserve that courtesy, just move on as though RTG didn't exist.

Unless they use Linux. 9700s show up really cheap from time to time. I'm sure you've already given the "be patient" advice anyway.
 
Your sig indicates a 4080S. What are you looking at, as an upgrade provided by AMD?
My 4080S was to be a placeholder for a 5090. Now that it is headed back (RMA) and I will have nothing for at least 20 days round trip, my attitude about AMD has changed. If AMD has something better than a 5080 in the same price range or less, then I would be interested, assuming they pull something out of their hat. I do not see a 5090 in my future given that MSRP is a pipe dream now. I just cannot justify it. I may suck it up and ride on the integrated graphics for a month. No gaming sucks though.
 
Reason #584 why Nvidia will wind up the only video card company, and then prices will go up even more.

Yeah it sucks, big time, but the pattern with RTG is well established. Release a slightly lesser card with a very slightly lower price and less stable drivers and then discount it much later once it's really ready to take on Nvidia but everyone has already upgraded for that generation.

It's entirely up to them to break the pattern, not consumers.
 
It's entirely up to them to break the pattern, not consumers.
Sure. And if everyone says "meh, I don't want to skip that", well, you know what'll happen. I'm not suggesting people owe it to AMD to buy their products. But I had a 6800 and had very few problems with it--after they fixed stupid driver issues (and ignoring RT because there weren't any games I played where I couldn't live without RT.)
 
Sure. And if everyone says "meh, I don't want to skip that", well, you know what'll happen. I'm not suggesting people owe it to AMD to buy their products. But I had a 6800 and had very few problems with it--after they fixed stupid driver issues (and ignoring RT because there weren't any games I played where I couldn't live without RT.)
I remember people couldn't lay their hands on a 6000 series in those dark days. Queues on sites lasted for hours and were only open for seconds.

RT still has a very long way to go; it certainly isn't central to my video card decisions.

Personally I wonder if the lack of competition has to do with TSMC, AMD has a lot of products coming from TSMC and I don't know anything about how that capacity is purchased or allocated.
 
The easy gains are gone. It is not an act of laziness that there is a ton of emphasis on "alternative techniques" to make things faster, from all vendors.
I don't think that's entirely true. In one gen going from 10240 (80 SM) to 10752 (84 SM) shaders isn't trying. As derided as Turing was, they at least had the common sense to push the 2080 out with a 545mm2 die since they didn't have a transistor density increase with 12nm over Pascal's 16nm.

I think they could have easily done a larger GB203 to allow for 12-15k shaders and got a true next gen successor to the 4080 Super out of it as well as something that could meet or beat the 4090 easily.

The last time they went from one gen to the next on the same node was Kepler to Maxwell, on 28nm. Again, Kepler's GK104 at 294mm2 was succeeded by Maxwell's GM204 at 398mm2.

They could have done better...they chose not to.
 
I don't think that's entirely true. In one gen going from 10240 (80 SM) to 10752 (84 SM) shaders isn't trying.
Without a node shrink, the only way to raise SM count can become making a bigger die, and that is not what people mean by easy gains (or an actual gain). Maybe they could have reduced the cache size given the higher memory bandwidth and added more cores, but that is a bit like just saying a 4090 is faster than a 5080... and a 5080 is faster than a 5070.

They could have done better...they chose not to.
They could have made a bigger-die card and raised the price per SKU like Turing did, but that strategy does not go against the idea that the easy gains are gone. They could also have used heavily cut-down 5090 dies, called those the 5080, and had a much faster 5080 that they sell for a $1400 MSRP, but that is not an actual generational gain. "Easy gains" meant easy performance-per-cost gains; you can always have more power, memory bus, or die size easily enough if you charge more money and people accept paying it, but that was not what was meant.

In terms of generational gain, the 5080 getting around 20% with an OC over a card with a similar SM count is maybe not that bad; I would have to look at what similar die-size and core-count gen-on-gen comparisons tend to give.
 
I remember people couldn't lay their hands on a 6000 series in those dark days.
Me too. When you could finally find one, I grabbed one from Microcenter at basically scalper prices, only to see the price drop drastically just weeks later (right outside the return window, of course.)
 
Tell him to hold off a month or two or three (it's not like we have games that can't run on a 3070 Ti anyway) and wait to see what AMD actually has, even if it's nothing.... Right now he is going to be fighting over slim restocks marked up 25-50% anyway. Wait till Nvidia gets enough stock for there to actually be cards close to MSRP, or to take a serious look at AMD, or heck, maybe even until all the "Get your shit together AMD" people hoping AMD would drop a card that would force Nvidia to sell their 5080 at the 5070 Ti pricing it should be at get their way. lol
Honestly, a resupply from China isn't happening until late May at the earliest just because of how the shipping situation is, so even if he wanted to head out and buy one (not scalped) he would have to wait.
He's a normie; he was about to pull the trigger on a slightly overpriced 4080, because... he wanted one.
 
Without a node shrink, the only way to raise SM count is to make a bigger die; that is not an easy gain (or an actual gain), that is just saying a 4090 is faster than a 4080....


They could have made a bigger-die card and raised the price per SKU like Turing did, but that strategy does not go against the idea that the easy gains are gone.
So I am just going to paint a picture here. I'll preface this by saying I know full well you can't necessarily compare one gen's CUDA cores to another's. A different architecture is a different architecture, so it's not straight apples to apples. Turing to Ampere saw a doubling of shaders mainly because of how they changed the INT32/FP32 pathing per clock cycle, for example. But still...

That said, it does paint a picture. So let's look at the similar die classes; in this case we'd be talking about the x04 dies and, since Ada, the x03 dies.

  • GTX 680, GK104, 1536 shaders (294mm2 on TSMC 28nm)
  • GTX 980, GM204, 2048 shaders (398mm2 on TSMC 28nm) +33.33% shader increase over GTX 680 / GTX 770
  • GTX 1080, GP104, 2560 shaders (314mm2 on TSMC 16nm) +25% shader increase over GTX 980
  • RTX 2080, TU104, 2944 shaders (545mm2 on TSMC 12nm) +15% shader increase over GTX 1080
  • RTX 2080 Super, TU104, 3072 shaders (545mm2 on TSMC 12nm) +20% shader increase over GTX 1080

Ampere used the inferior Samsung 8nm, so its die tiering shifted, but I'll go ahead and add both the 3080 for product-branding continuity and the 3070/3070 Ti for die-tiering continuity.

  • RTX 3080, GA102, 8704 shaders (628mm2 on Samsung 8nm) +183.33% shader increase over RTX 2080 Super
  • RTX 3070, GA104, 5888 shaders (392mm2 on Samsung 8nm) +91.66% shader increase over RTX 2080 Super
  • RTX 3070 Ti, GA104, 6144 shaders (394mm2 on Samsung 8nm) +100% shader increase over RTX 2080 Super
  • RTX 4080, AD103, 9728 shaders (379mm2 on TSMC 4N, 5nm) +11.76% shader increase over RTX 3080, +58.33% over RTX 3070 Ti
  • RTX 4080 Super, AD103, 10240 shaders (379mm2 on TSMC 4N, 5nm) +17.65% shader increase over RTX 3080, +66.66% over RTX 3070 Ti
  • RTX 5080, GB203, 10752 shaders (378mm2 on TSMC 4N, 5nm) +5% shader increase over RTX 4080 Super
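If anyone wants to double-check or extend the percentage math above, it is just simple ratios over the shader counts from the two lists. A quick sketch (the die names for the 4080 Super and 5080, AD103 and GB203, are filled in by me):

```python
# Reproduce the gen-on-gen shader-count increases quoted in the lists above.
shaders = {
    "GTX 680 (GK104)":        1536,
    "GTX 980 (GM204)":        2048,
    "GTX 1080 (GP104)":       2560,
    "RTX 2080 Super (TU104)": 3072,
    "RTX 3070 Ti (GA104)":    6144,
    "RTX 3080 (GA102)":       8704,
    "RTX 4080 Super (AD103)": 10240,
    "RTX 5080 (GB203)":       10752,
}

def pct_increase(new: str, old: str) -> float:
    """Percentage shader increase of `new` over `old`."""
    return (shaders[new] / shaders[old] - 1.0) * 100.0

comparisons = [
    ("GTX 980 (GM204)",        "GTX 680 (GK104)"),
    ("GTX 1080 (GP104)",       "GTX 980 (GM204)"),
    ("RTX 2080 Super (TU104)", "GTX 1080 (GP104)"),
    ("RTX 3080 (GA102)",       "RTX 2080 Super (TU104)"),
    ("RTX 4080 Super (AD103)", "RTX 3080 (GA102)"),
    ("RTX 5080 (GB203)",       "RTX 4080 Super (AD103)"),
]

for new, old in comparisons:
    print(f"{new} vs {old}: +{pct_increase(new, old):.2f}% shaders")
```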
 