Are games starting to out-pace GPU development?

Perhaps it's my imagination, but I feel that over the last couple of years, PC game requirements have taxed the GPUs of today far harder than games of yesteryear taxed the GPUs of their time.

It wasn't that long ago when we had so much GPU power on tap compared to what games required that we would actually render games at a HIGHER than native resolution and downscale it back to native just to get some extra IQ. Today, with games being what they are, we're trending in the opposite direction with technologies like DLSS and FSR.

What say you?
 
It's cyclical. Back in the early days of 3D acceleration, it was quite the opposite: no GPU was really good enough for the games of the time. The Voodoo cards were the first that were, but they came with downsides and required a specific API which, frankly, didn't have good image quality. During the Crysis days, it was several generations before a GPU could do the game justice. Fast forward after that, and you run into the Eyefinity/NVSurround days, where it took multiple GPUs to push games at the required resolutions. It's been kind of the same thing ever since 4K monitors dropped. If you are trying to push 4K or 5K monitors, you need more GPU power for some games than is currently available. Honestly, the RTX 3080 and RTX 3090 are the first cards that have really been able to push 4K resolutions in most games sufficiently. Even then, you have outliers like Cyberpunk 2077 which bring those cards to their knees without DLSS.

High resolution monitors and display arrays are where we've always needed GPU power faster than the hardware could deliver it. If you are running at 1920x1080, frankly, you can get away with some fairly low-end hardware in just about every game out there.
 
It seems like the real problem is just that there aren't any affordable low-mid range cards.

As I understand it, most games don't actually require a 3090 Ti unless you have unreasonably high expectations, but even the so-called mid-range cards cost as much as a high end card did just a few years ago.

VR is a different story, though. You simply can't buy enough GPU (or CPU) to turn on all the eye candy in DCS World if you're playing VR, even with an older, lower res HMD. But again, that's kind of an unreasonable expectation. The game is totally playable at medium settings, and if you're actually playing it, you don't notice the difference between medium and maxed out anyway.
 
It depends on when you compare against, because not so long ago (Turing and before) talk of 4K gaming was a no-go outside of some fringe SLI setups. I also feel someone with a 1080p monitor does not feel what you are saying at all.
 
It seems like the real problem is just that there aren't any affordable low-mid range cards.

As I understand it, most games don't actually require a 3090 Ti unless you have unreasonably high expectations, but even the so-called mid-range cards cost as much as a high end card did just a few years ago.

VR is a different story, though. You simply can't buy enough GPU (or CPU) to turn on all the eye candy in DCS World if you're playing VR, even with an older, lower res HMD. But again, that's kind of an unreasonable expectation. The game is totally playable at medium settings, and if you're actually playing it, you don't notice the difference between medium and maxed out anyway.

I'm referring specifically to higher-end cards. I'm easily GPU limited in several games on my 3080, even at 1440p. That's not to say performance is bad, but it is my GPU that's limiting it. Even in games as old as AC: Odyssey, with settings cranked, my limiting factor is the GPU. With Cyberpunk, if I enable ray tracing (and not even full ray tracing), even using DLSS Quality, I'm GPU limited at 1440p.

I know stuff like VR and surround gaming puts a significant load on the GPU, but those are all fringe use cases. I'm referring to standard single-monitor gaming, and you don't even need to push 4K to see GPU-limited performance in many games.

I'm not complaining; I like that games are pushing current-gen hardware. It's more of an observation.
 
Games in the 2006-2013 era tended to have much lower GPU requirements due to being console ports. It's a bit of a cycle, watching the performance requirements shoot up and settle back down over time as new game consoles come out.

Only in the past 6 years or so have we seen things like 120 Hz+ refresh rates, 4K monitors, and VR come out, which all push performance requirements way up. Before that, nearly everything had a 1080p 60 Hz target.
 
Want a top of the line GPU?

2002 = 9700 Pro for $399 MSRP
2012 = GTX 680 for $499 MSRP
2022 = RTX 3090 Ti for $1,999 MSRP

Add to that, buying two cards (not necessarily at the same time) and going the SLI/Crossfire route is no longer an option either. Back when I got my GTX 680, I used that for a little bit and then got a 2nd card. I eventually got a 3rd GTX 680 for 3-way SLI and milked that all the way through the first bitcoin boom until I got my 2080. If SLI was still an option, I can guarantee you I would be running 2x 2080 right now.

Instead you see gaming systems being sold with cards like the RTX 3050 and GTX 1660 where in the past you would only see low-end cards like that in budget systems.

Games aren't starting to out-pace GPU development, they are starting to out-pace GPU affordability.
 
More likely just crappy coding/optimisation.

"Shove it out the door half baked!"
If anything, optimization has become extreme in the past few years. It is a relatively recent thing that GPU manufacturers are optimizing their drivers individually for every major title.
And this is what partly caused the death of SLI and CrossFire, in my opinion. They don't want to go through the multi-GPU optimization process for a handful of fanatics who have two $3,000 GPUs. Making a game run smoothly on one GPU is one thing. Making it run smoothly on two GPUs trying to work together over a chicken-wire connection is almost mission impossible. The effort kept growing until eventually they just gave up. Arguably, CrossFire never even worked well outside a handful of titles, even when SLI was still usable.
A side benefit for manufacturers is that they probably have a higher profit margin on one high-end GPU than on two mid-range ones.
 
Want a top of the line GPU?

2002 = 9700 Pro for $399 MSRP
2012 = GTX 680 for $499 MSRP
2022 = RTX 3090 Ti for $1,999 MSRP

Add to that, buying two cards (not necessarily at the same time) and going the SLI/Crossfire route is no longer an option either. Back when I got my GTX 680, I used that for a little bit and then got a 2nd card. I eventually got a 3rd GTX 680 for 3-way SLI and milked that all the way through the first bitcoin boom until I got my 2080. If SLI was still an option, I can guarantee you I would be running 2x 2080 right now.

Instead you see gaming systems being sold with cards like the RTX 3050 and GTX 1660 where in the past you would only see low-end cards like that in budget systems.

Games aren't starting to out-pace GPU development, they are starting to out-pace GPU affordability.
Oh come on, this is when the pricing started getting stupid. The GTX 680 was clearly a mid-range chip marketed and priced as an 80-class card because Nvidia could get away with it after the initial 7970 launch. Kind of genius on their part, as most outside the enthusiast crowd would have no idea. And thus started the trend of less for more. The GTX 580 of the previous generation was a fully unlocked high-end chip for the same price. You never got that again until certain Titan or Ti variants of later generations, at much higher prices.
 
Want a top of the line GPU?

2002 = 9700 Pro for $399 MSRP
2012 = GTX 680 for $499 MSRP
2022 = RTX 3090 Ti for $1,999 MSRP

Add to that, buying two cards (not necessarily at the same time) and going the SLI/Crossfire route is no longer an option either. Back when I got my GTX 680, I used that for a little bit and then got a 2nd card. I eventually got a 3rd GTX 680 for 3-way SLI and milked that all the way through the first bitcoin boom until I got my 2080. If SLI was still an option, I can guarantee you I would be running 2x 2080 right now.

Instead you see gaming systems being sold with cards like the RTX 3050 and GTX 1660 where in the past you would only see low-end cards like that in budget systems.

Games aren't starting to out-pace GPU development, they are starting to out-pace GPU affordability.

GTX 680 - GK104
RTX 3090 Ti - GA102

Different GPU classes. Also, you can get a GA102 GPU for $700 (RTX 3080).

Pricing is completely based on the market. If AMD were to release a top-of-the-line GPU for $200 tomorrow, Nvidia would be forced to respond, but neither company is going to do that because, as big as these GPUs have gotten, both companies would be losing gobs of money selling a flagship GPU at $200.
 
GTX 680 - GK104
RTX 3090 Ti - GA102

Different GPU classes. Also, you can get a GA102 GPU for $700 (RTX 3080).

Pricing is completely based on the market. If AMD were to release a top-of-the-line GPU for $200 tomorrow, Nvidia would be forced to respond, but neither company is going to do that because, as big as these GPUs have gotten, both companies would be losing gobs of money selling a flagship GPU at $200.
No they wouldn't. They don't HAVE to lower prices that low and lose tons of money. They will just lose some market share while AMD takes a hard hit financially.
 
GTX 680 - GK104
RTX 3090 Ti - GA102

Different GPU classes. Also, you can get a GA102 GPU for $700 (RTX 3080).

The GTX 680 was the fastest single-GPU card of that series when that series was released. That is what I was comparing, which should have been pretty obvious IMO.
 
Want a top of the line GPU?

2002 = 9700 Pro for $399 MSRP
2012 = GTX 680 for $499 MSRP
2022 = RTX 3090 Ti for $1,999 MSRP

I think the top of the line for gaming GPUs is more the 6900 XT/3080 Ti; the 3090 class is a pro line.

About $1,300 at current MSRP, but that's a tier of GPU that did not exist yet in 2002.

The 9700 Pro of yesterday is more akin, IMO, to a 12 GB 3080 ($870) or a 6800 XT ($800) now.

The $400 card of 2002 is about $642 in today's dollars, so the equivalent tier still got roughly 30% more expensive over time.
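For anyone who wants to check that math, here is a minimal sketch of the inflation adjustment. The cumulative CPI factor of ~1.61 for 2002 to 2022 is an assumption (the exact figure depends on which index you use), and the 2022 cards are just the ones named above:

```python
# Rough inflation check for the MSRP comparison above. The cumulative CPI
# factor (~1.61 for 2002 -> 2022) is an assumption; swap in the exact
# figure from whichever inflation index you prefer.

CPI_2002_TO_2022 = 1.61

msrp_9700_pro = 399                        # Radeon 9700 Pro, 2002 MSRP
todays_equivalents = {"RX 6800 XT": 800,   # the 2022 cards named above
                      "RTX 3080 12GB": 870}

adjusted = msrp_9700_pro * CPI_2002_TO_2022
print(f"$399 in 2002 is roughly ${adjusted:.0f} in 2022 dollars")

for card, msrp in todays_equivalents.items():
    print(f"{card}: about {msrp / adjusted - 1:.0%} more in real terms")
# Prints roughly 25% and 35%, which is where the ~30% figure comes from.
```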
 
The GTX 680 was the fastest single-GPU card of that series when that series was released. That is what I was comparing, which should have been pretty obvious IMO. There was no Kepler GPU faster than GK104.
GTX 690 (2x GK104)
GTX 770 (GK104)
GTX 780
GTX 780 Ti
GTX Titan
GTX Titan Black
GTX Titan Z (2x GK110)

Kepler spanned 2 generations, btw. For the 700 series, a "Big Kepler" (GK110) was introduced, as well as the GTX Titan (which launched during the 600 series).
 
I feel like it's still the other way around to be honest.

It seems like the only things actually pushing requirements forward are RT and high-res/high-refresh displays. Otherwise, requirements seem to have been pretty stagnant. I know plenty of 1080p gamers still playing everything fine on GTX 970s and 4690Ks. That's 8-year-old hardware! In the late 90s or early 2000s you wouldn't be playing AAA releases if you weren't upgrading your rig every few years.
 
Want a top of the line GPU?

2002 = 9700 Pro for $399 MSRP
2012 = GTX 680 for $499 MSRP
2022 = RTX 3090 Ti for $1,999 MSRP

Add to that, buying two cards (not necessarily at the same time) and going the SLI/Crossfire route is no longer an option either. Back when I got my GTX 680, I used that for a little bit and then got a 2nd card. I eventually got a 3rd GTX 680 for 3-way SLI and milked that all the way through the first bitcoin boom until I got my 2080. If SLI was still an option, I can guarantee you I would be running 2x 2080 right now.

Instead you see gaming systems being sold with cards like the RTX 3050 and GTX 1660 where in the past you would only see low-end cards like that in budget systems.

Games aren't starting to out-pace GPU development, they are starting to out-pace GPU affordability.
I used to buy GPUs in pairs from the 6800 GT onwards. I've been paying double up through the GTX 1080 Ti. With the RTX 2080 Ti, there was absolutely no point to SLI, and that was the first time I'd bought a single graphics card since the dawn of PCI-Express-based GPUs. That's why I didn't balk at the MSRP of the RTX 3090 or even the RTX 3090 Ti. Although, the latter is such a minimal upgrade over the 3090 that I'm skipping it.
 
I know plenty of 1080p gamers still playing everything fine on GTX 970s and 4690Ks. That's 8-year-old hardware! In the late 90s or early 2000s you wouldn't be playing AAA releases if you weren't upgrading your rig every few years.
Another case in that direction: many people still play on a 2013 PlayStation 4.

Yes, people in 2012 were still playing on 2003 consoles, and in 2003 people still had an SNES/N64 hanging around, but that was more niche/retro gaming. People with a PS4 bought Elden Ring and are putting hundreds of hours into it, not in a retro gaming way at all.
 
I used to buy GPUs in pairs from the 6800 GT onwards. I've been paying double up through the GTX 1080 Ti. With the RTX 2080 Ti, there was absolutely no point to SLI, and that was the first time I'd bought a single graphics card since the dawn of PCI-Express-based GPUs. That's why I didn't balk at the MSRP of the RTX 3090 or even the RTX 3090 Ti. Although, the latter is such a minimal upgrade over the 3090 that I'm skipping it.

Yeah I know that some would buy multiple GPUs at the same time when they were new to get max performance or enable higher-resolution gaming, but I was referring more to the other reason people went with multiple GPUs. Buy one card when they are new, and then a year or two later buy a 2nd (or more if your mobo supported it). That gave you the option of upgrading your graphics without having to replace what you already had, saving considerable money in the process (especially if you bought that 2nd card used). In an era where GPUs are getting more and more expensive, I wish that money-saving option was still available. Like I mentioned, I would LOVE to have the option to buy a 2nd 2080 right now rather than dumping a grand or more for a higher-end 3000 series card.
 
Another case in that direction: many people still play on a 2013 PlayStation 4.

Yes, people in 2012 were still playing on 2003 consoles, and in 2003 people still had an SNES/N64 hanging around, but that was more niche/retro gaming. People with a PS4 bought Elden Ring and are putting hundreds of hours into it, not in a retro gaming way at all.

Fair point, but I do believe this thread is focused on PC gaming/hardware. Granted, I'm sure much of the stagnation is related to most AAA PC games being console ports these days.
 
Granted, I'm sure much of the stagnation is related to most AAA PC games being console ports these days.
That is a big part of it; how could a game that runs well enough on a PS5/Xbox Series X on a 4K TV possibly outpace a new mid-range desktop GPU? Terrible ports aside, it is on its face impossible.
 
My views on the PC gaming industry:

1. Tired of being a beta tester at release date for 60 to 100 USD.
2. No real change in creativity; it's all about social media and brand image.
3. GPU prices go up with rising technical challenges at smaller lithography scales and poor yields. As consumers demand smaller lithography and major performance gains within a one-year cycle, manufacturers have less time to recoup ROI. For example, Nvidia paid 2 to 10 billion to ASML just to reserve time, so how many GPU dies must they make and sell to break even, not even factoring in advertising costs and royalties? (See the rough sketch after this list.) I guarantee you, as soon as the 4xxx GPUs hit the retail space, people will demand the 5xxx at 2 nm for less within a year.
4. Consoles are better, and cloud gaming is realistic with fiber; our AT&T connection is 1 Gig at 4 ms ping.
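To make point 3 concrete, here is a purely illustrative break-even sketch. The only figure taken from the post is the "2 to 10 billion" reservation range; the per-die margin is an invented placeholder, not a real number:

```python
# Hypothetical break-even sketch for point 3 above. The reservation figure
# uses the low end of the "2 to 10 billion" range quoted in the post; the
# gross margin per die is a made-up placeholder, not real data.

reservation_cost_usd = 2e9           # low end of the quoted range
assumed_margin_per_die_usd = 150     # hypothetical gross margin per die

dies_to_break_even = reservation_cost_usd / assumed_margin_per_die_usd
print(f"~{dies_to_break_even:,.0f} dies just to cover the reservation")
# ~13,333,333 dies -- before advertising costs, royalties, R&D, etc.
```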
 
Seems mostly due to the addition of ray tracing in a lot of titles. If you turn that off, frame rates tend to go up dramatically. Maybe others will disagree, but I honestly don't feel it's worth the performance hit in a lot of cases. That wasn't the case with games long ago, when going from medium to high settings usually had a huge IQ benefit. I will only turn RT on if I have enough headroom to maintain high FPS.
 
Perhaps it's my imagination, but I feel that over the last couple of years, PC game requirements have taxed the GPUs of today far harder than games of yesteryear taxed the GPUs of their time.

It wasn't that long ago when we had so much GPU power on tap compared to what games required that we would actually render games at a HIGHER than native resolution and downscale it back to native just to get some extra IQ. Today, with games being what they are, we're trending in the opposite direction with technologies like DLSS and FSR.

What say you?
4K, and even 1440p, requires a lot of fillrate, and with how dense game FX are nowadays, it takes a lot of shader/compute power, bandwidth, etc. to get all of that done at 4K in 16 ms. The 4K aspect is the real issue. It really takes a lot of hardware to do all of those pixels in 16 ms.
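To put rough numbers on "all of those pixels in 16 ms" (a 60 fps target gives you about 16.7 ms per frame; the shading cost per pixel obviously varies by game, this just shows how the pixel count scales):

```python
# Pixel counts vs. the ~16.7 ms frame budget at 60 fps.

resolutions = {"1080p": (1920, 1080),
               "1440p": (2560, 1440),
               "4K":    (3840, 2160)}

frame_budget_ms = 1000 / 60   # ~16.7 ms per frame at 60 Hz
base = 1920 * 1080

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x 1080p), "
          f"budget {frame_budget_ms:.1f} ms")
# 4K is ~4x the pixels of 1080p, with the same 16.7 ms to render each frame.
```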
 
The GTX 680 was the fastest single-GPU card of that series when that series was released. That is what I was comparing, which should have been pretty obvious IMO. There was no Kepler GPU faster than GK104.
GK110??? The GTX 780 Ti was the actual high end card for Kepler.

EDIT: Actually, maybe the top end was the Titan Black? I forget. GK110 was cut down a lot; I forget which one was the actual full chip. Basically, the 600 series was all mid-range and below products despite the "80" in the 680 naming, and Nvidia sandbagged hard even with the big chip on the 700 series.
 
I say NO. It is just the console-itis of PC. Companies want to sell as many games as possible. If they overreach, then that comes back to them. Sometimes it is the hype of overreaching, à la Crysis, that becomes its own marketing strategy as a form of benchmark. As much as PC is flourishing, I see its death in this. The future will hold no space for custom one-offs and combustibility. Too expensive!
 
Yeah I know that some would buy multiple GPUs at the same time when they were new to get max performance or enable higher-resolution gaming, but I was referring more to the other reason people went with multiple GPUs. Buy one card when they are new, and then a year or two later buy a 2nd (or more if your mobo supported it). That gave you the option of upgrading your graphics without having to replace what you already had, saving considerable money in the process (especially if you bought that 2nd card used). In an era where GPUs are getting more and more expensive, I wish that money-saving option was still available. Like I mentioned, I would LOVE to have the option to buy a 2nd 2080 right now rather than dumping a grand or more for a higher-end 3000 series card.

Just sell your 2080 towards a newer card. I would always opt for that rather than getting a second card to go SLI, if just for the simplicity and more predictable performance upgrade across all games.

SLI never made much sense to me outside of flagship cards for the absolute best performance available on the market. Otherwise, just go for a faster single-GPU solution: more consistent scaling (you rarely saw a 90%+ performance increase from the 2nd GPU), better compatibility (some games just straight up didn't support SLI), and of course lower heat and power requirements.
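As a rough illustration of that scaling point (the scaling percentages below are assumptions for the sake of the example, not benchmarks):

```python
# Illustration of why sub-linear SLI scaling made a faster single card
# attractive. The scaling factors and fps numbers are assumptions.

single_card_fps = 60            # hypothetical baseline, one GPU

for scaling in (0.7, 0.8, 0.9):             # assumed SLI gains from card #2
    sli_fps = single_card_fps * (1 + scaling)
    print(f"2x GPUs at {scaling:.0%} scaling: {sli_fps:.0f} fps "
          f"for ~2x the power and heat")

# A single GPU that is 80% faster delivers the same 108 fps as the
# 80%-scaling SLI case, with no profile/compatibility caveats.
print(f"1x GPU, 80% faster: {single_card_fps * 1.8:.0f} fps")
```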
 
If anything, optimization has become extreme in the past few years. It is a relatively recent thing that GPU manufacturers are optimizing their drivers individually for every major title.

No, it's not a relatively recent thing that GPU manufacturers are doing per-title game optimization, that started back in the mid 90s. And it's really less about "optimization" and more about having to create code path exceptions in the drivers for dumb things developers were doing in their games that were causing rendering and performance issues. If game developers followed the rendering standards to the crossed T and dotted i, there would be far fewer issues. Sure, drivers aren't perfect, but you have to ask yourself "why does X game run fine but Y game has graphical artifacts on the same hardware setup?"

Games have gotten exponentially more bloated and unoptimized as time has moved on. Up until the end of the 90s, PCs and consoles were severely limited hardware wise, and game developers had to use every optimization, hack and trick they could think of to get the desired result they wanted, or at least as close as possible. Then there were the number of years where all PC gamers got were shitty broken console ports. Some so lazy that they didn't even bother changing the menus to work with mice and keyboards.

All of that came crashing down in just a few years with Microsoft forcing .NET Framework and DirectX down everyone's throats. By the mid 2000s, they had succeeded in getting the lion's share of the market using their libraries, which promoted lazy programming practices. Programmers could just reference some function out of an exponentially growing, bloated API that's not backwards or forwards compatible in many cases. Said library functions are often inefficient and slower than writing a hand-optimized function, but you save lots of time at the cost of speed. .NET Framework and DirectX have since morphed into dozens of gigabytes of dependency hell and performance problems. By the time DX11 came around, the API was almost spending more time doing housekeeping tasks than actual rendering.
 
No, it's not a relatively recent thing that GPU manufacturers are doing per-title game optimization, that started back in the mid 90s.
I don't remember that there were new drivers before each game launch with optimizations specific to that title. We had much fewer driver versions than now, and they'd cover much broader issues than fixing a specific game.
And it's really less about "optimization" and more about having to create code path exceptions in the drivers for dumb things developers were doing in their games that were causing rendering
Except in most cases the games work without glitches on older drivers; the optimizations are performance-related.
and performance issues.
That is what optimization is. Improving performance.
If game developers followed the rendering standards to the crossed T and dotted i, there would be far fewer issues. Sure, drivers aren't perfect, but you have to ask yourself "why does X game run fine but Y game has graphical artifacts on the same hardware setup?"
Because not all games use the same rendering techniques or load the hardware the same way? That's like asking why CryEngine runs differently than UE4.
Games have gotten exponentially more bloated and unoptimized as time has moved on. Up until the end of the 90s, PCs and consoles were severely limited hardware wise, and game developers had to use every optimization, hack and trick they could think of to get the desired result they wanted, or at least as close as possible.
Games have gotten more complex, and that's a good thing. I don't have any wish to go back to the more basic games we had in the 90s. I'm especially not nostalgic for a time when the developers hand optimized games on one specific GPU. So it would run like dogshit on all others, or not at all. It was the wild west, where everyone was doing something, and if you were lucky you had hardware that managed to run the game with relatively few graphical glitches. I mean who in their right mind would want to go back to a time where games had 5 different renderers built in, and it would have different graphical / performance issues on all of them? Nostalgia often clouds people's judgment, but the late 90s and early 2000s were the dark ages as far as I'm concerned.
Then there were the number of years where all PC gamers got were shitty broken console ports. Some so lazy that they didn't even bother changing the menus to work with mice and keyboards.
Some games are still like that, nothing new there. Is this relevant to nvidia/amd dropping drivers optimized for specific games? I don't see how.
All of that came crashing down in just a few years with Microsoft forcing .NET Framework and DirectX down everyone's throats. By the mid 2000s, they had succeeded in getting the lion's share of the market using their libraries, which promoted lazy programming practices. Programmers could just reference some function out of an exponentially growing, bloated API that's not backwards or forwards compatible in many cases. Said library functions are often inefficient and slower than writing a hand-optimized function, but you save lots of time at the cost of speed. .NET Framework and DirectX have since morphed into dozens of gigabytes of dependency hell and performance problems. By the time DX11 came around, the API was almost spending more time doing housekeeping tasks than actual rendering.
This is what allows games to be bigger and better: standardization and the freedom from having to hand-code even the most basic functions. If we still had to code in assembly without any higher-level libraries, we'd still have the crappy little games of the nineties. You can't write hand-optimized code for everything when your game is 1000 times more complex than what was available in the 90s; it is not feasible.
 
No, it's not a relatively recent thing that GPU manufacturers are doing per-title game optimization, that started back in the mid 90s. And it's really less about "optimization" and more about having to create code path exceptions in the drivers for dumb things developers were doing in their games that were causing rendering and performance issues. If game developers followed the rendering standards to the crossed T and dotted i, there would be far fewer issues. Sure, drivers aren't perfect, but you have to ask yourself "why does X game run fine but Y game has graphical artifacts on the same hardware setup?"

Games have gotten exponentially more bloated and unoptimized as time has moved on. Up until the end of the 90s, PCs and consoles were severely limited hardware wise, and game developers had to use every optimization, hack and trick they could think of to get the desired result they wanted, or at least as close as possible. Then there were the number of years where all PC gamers got were shitty broken console ports. Some so lazy that they didn't even bother changing the menus to work with mice and keyboards.

All of that came crashing down in just a few years with Microsoft forcing .NET Framework and DirectX down everyone's throats. By the mid 2000s, they had succeeded in getting the lion's share of the market using their libraries, which promoted lazy programming practices. Programmers could just reference some function out of an exponentially growing, bloated API that's not backwards or forwards compatible in many cases. Said library functions are often inefficient and slower than writing a hand-optimized function, but you save lots of time at the cost of speed. .NET Framework and DirectX have since morphed into dozens of gigabytes of dependency hell and performance problems. By the time DX11 came around, the API was almost spending more time doing housekeeping tasks than actual rendering.

No AAA games where performance matters are even written using C# or the .NET Framework. It's not like the performance in some 2D Unity turd even matters. Sure, it might require a computer 18 times more powerful than it should need to run, but it'll still hit 60 Hz on almost anything. All the real games are still C or C++ under the hood.

It's actually these crappy visual programming languages that are the cause of 99% of Unreal Engine games running like horse shit.

https://blueprintsfromhell.tumblr.com/

Look at those dog turds. Some of those atrocities are actually from shipping AAA games. And people wonder why every Unreal engine game stutters.
 
I don't remember that there were new drivers before each game launch with optimizations specific to that title. We had much fewer driver versions than now, and they'd cover much broader issues than fixing a specific game.

Drivers in the late 90s and early 2000s were moving far faster than now with the blistering pace of hardware releases. You had a multitude of different drivers because of all of the overlapping operating system versions and hardware configurations.

Except in most cases the games work without glitches on older drivers, the optimizations are performance related.

Yeah, no. For every generation of video hardware, there are games that don't work properly. You can take a trip back in time on archive.org, or look at the archives of the game review and discussion sites.

Because not all games are using the same rendering techniques and don't load the hardware the same way? That's like asking why Cryengine runs differently than UE4.

That's what rendering API standards are for. It doesn't matter what engine you have, or how different they are, they all communicate with the video hardware using the same DirectX, OpenGL or Vulkan API. Those standards are set in stone, and the former two are extremely mature, being around for decades. We're not in the wild west of software rendering or DOS direct hardware calls. If a rendering bug happens, it's because a developer screwed up doing something with the rendering API in many cases.

Games have gotten more complex, and that's a good thing.

Games have gotten less complex over time, to the point now where most "AAA" games are decade-plus-old IP that have turned into F2P, pay-to-win, loot-box-gambling, microtransaction trash that's the same year after year. You pay $69.99 for the same game year after year with a +1 version number and small iterative changes. A prime example being FIFA: what possibly could be new and exciting about the same decade-old soccer game, changing a few head models and replacing names in a roster? Modern "FPS" games are just rail shooters with an hour-long in-game movie split up into several-minute sections disguised as cutscenes.

If by "more complex" you mean better visuals, who really cares if you can see the skin pores on laura crofts face. Games today have no substance. I'll gladly take 90s sprite and blocky polygon games over the trash that exists now, it's why I haven't bought a new game in a very long time. What made that era exciting is that games were developed by people with a passion and vision, and there was always something interesting to look forward to. That changed in the early to mid 2000s when monster publishers started swallowing up IP and turning into anti-consumer, pure profit greedy monsters like EA. They care more about their shareholders than gamers. It's all about safe existing IP and maximizing profit with microtransactions, supscriptions and gambling mechanics. They treat their employees like disposable trash and suck any passion and happiness out of them.

I'm especially not nostalgic for a time when the developers hand optimized games on one specific GPU. So it would run like dogshit on all others, or not at all. It was the wild west, where everyone was doing something, and if you were lucky you had hardware that managed to run the game with relatively few graphical glitches. I mean who in their right mind would want to go back to a time where games had 5 different renderers built in, and it would have different graphical / performance issues on all of them? Nostalgia often clouds people's judgment, but the late 90s and early 2000s were the dark ages as far as I'm concerned.

Game developers optimized their games for 3dfx cards because they were the first to the table with a good 3D solution. You can't expect miracles from a Cyrix 5x86 and a S3 Trio64 with no 3D acceleration at all. It's the same in any era, budget hardware comes with problems.

This is what allows games to be bigger and better: standardization and the freedom from having to hand-code even the most basic functions. If we still had to code in assembly without any higher-level libraries, we'd still have the crappy little games of the nineties. You can't write hand-optimized code for everything when your game is 1000 times more complex than what was available in the 90s; it is not feasible.

Completely false. You don't need a high level programming API to make amazing games. The bulk of what makes modern games huge are assets, and those don't matter as much in programming. Game engines now have the same set of rules as game engines 20-25 years ago. And since engines and code tend to be recycled for years, once you write something, you don't have to do it again unless you need to make an iterative change or bugfix.
 
Completely false. You don't need a high level programming API to make amazing games. The bulk of what makes modern games huge are assets, and those don't matter as much in programming. Game engines now have the same set of rules as game engines 20-25 years ago. And since engines and code tend to be recycled for years, once you write something, you don't have to do it again unless you need to make an iterative change or bugfix.
You two could be talking past each other (maybe you differ on what counts as high level; for most, C++ is high level, but for younger people maybe not), but just think how long it took a big team to make Pac-Man: yes, coding a video game in assembler instead of a compiled language like C/C++ will mean a very simple game.

Game engines are high-level C++ for the most part; some critical sections go deeper than that with C/assembler, but C++ compilers (and what from Boost gets into the standard) keep getting better and better as well.

Even drivers have been coded in high-level languages like C++ for more than 20 years now, let alone games.

If we still had to code in assembly without any higher-level libraries, we'd still have the crappy little games of the nineties.
Not sure many were still coding games in assembly in the 90s on PC, but on consoles, yes, including the SNES (65C816 assembly), which is mind-blowing.
 
All of that came crashing down in just a few years with Microsoft forcing .NET Framework and DirectX down everyone's throats
Not sure about that. I didn't use either much in all my professional life, and I'm really unsure about anyone being forced in any way here (maybe, but I doubt it), certainly not when it comes to making games. Was there ever a big game made in .NET?

If we are talking about it being forced on their console, yes, I can imagine, but was there ever a time you could not use OpenGL back in the day (or Vulkan now) for the Direct3D part, and alternatives for the other parts of DirectX, with people picking DirectX because they preferred it?
 
You could be talking about each other (maybe have a difference on high level, for most C++ is high level but for younger people maybe not), but just think how long it took a big team to make PacMan, yes coding a video game in assembler instead of an compiled language like C/C++ will mean very simple game.

On the other hand, Sawyer wrote almost the entirety of Rollercoaster Tycoon in assembly within 2 years... but generally speaking it's unfeasible to require that level of talent if you want to keep the gaming industry going at full throttle. At the end of the day companies also tend to want code transparency, not genius level efficiency, unless you work in certain fields. Which gaming is not one of...
 
but just think how long it took a big team to make Pac-Man: yes, coding a video game in assembler instead of a compiled language like C/C++ will mean a very simple game.

It took 10 people a year and a few months to make Pac-Man, and quite a bit of that time was actually development of a new arcade board that would run the game, rather than programming the game itself. Pac-Man at the time was not a simple game; 10 people building the hardware and then the software to run on said hardware was a great technological feat for a tiny team. It sounds like you don't have a very good knowledge of video game history and are just equating simple graphics with something being simple and pointless.

And expanding on that, single individuals in the demo scene have made far more technologically impressive feats on computers from the 80s and 90s in less time than it took to make PacMan. Hundreds of games on the Genesis and SNES were made by small teams and many of them were technologically impressive. The Streets of Rage series and Sub Terrania being some.

To further expand on that, video games on consoles up well into the PS2 era were predominantly coded in pure assembly. Some N64 game developers like Rare and Factor 5 went even further and wrote their own microcode for the console's RSP to gain more speed out of the hardware. These were full-featured 3D games. The PS3 more or less required coding in assembly if you wanted to utilize the SPE cores, which many games did not, due to how difficult they were to program at the time.


Not sure about that. I didn't use either much in all my professional life, and I'm really unsure about anyone being forced in any way here (maybe, but I doubt it), certainly not when it comes to making games. Was there ever a big game made in .NET?

If we are talking about it being forced on their console, yes, I can imagine, but was there ever a time you could not use OpenGL back in the day (or Vulkan now) for the Direct3D part, and alternatives for the other parts of DirectX, with people picking DirectX because they preferred it?

Microsoft pushed DirectX hard from the very first releases. They did everything they could to woo developers to using the API and lock them into the Windows and Xbox ecosystem. OpenGL after 2000 virtually disappeared from mainstream games outside of ports to the Apple Macintosh and mobile devices, which required it. It wasn't until 2013, when Valve started pushing Steam to become cross-platform, that OpenGL and its successor Vulkan came back.
 
On the other hand, Sawyer wrote almost the entirety of Rollercoaster Tycoon in assembly within 2 years... but generally speaking it's unfeasible to require that level of talent if you want to keep the gaming industry going at full throttle. At the end of the day companies also tend to want code transparency, not genius level efficiency, unless you work in certain fields. Which gaming is not one of...
Using the already made (and not in assembly) DirectX SDK.

It took 10 people a year and a few months to make Pac-Man, and quite a bit of that time was actually development of a new arcade board that would run the game, rather than programming the game itself. Pac-Man at the time was not a simple game; 10 people building the hardware and then the software to run on said hardware was a great technological feat for a tiny team. It sounds like you don't have a very good knowledge of video game history and are just equating simple graphics with something being simple and pointless.
That's a really strange way to read my text. Making Pac-Man today would take a single person very little time and would be easy, whereas back then it was quite the feat (and a giant amount of hours). How do you go from that to saying it was pointless? (But yes, I do not have very good knowledge of video games and video game history, only a superficial one; most of my 3D engine experience is on the CAD/industrial side, and I did not work long on the video game side.)

To further expand on that, video games on consoles up well into the PS2 era were predominantly coded in pure assembly.
That sounds a bit incredible to me; I thought it was mainly C/C++ on the PlayStation 1 and 2 with a bit of assembly (Sony shipped GCC with their devkits).

That's an original PSX game's source code:
https://illusion.64history.net/2022/wipeout-psx-windows-source

Pretty much all very regular .c / .h C code

That's another one:
https://tcrf.net/360:_Three_Sixty

The game loop is standard C code.

They did everything they could to woo developers to using the API
Yes, in many ways: nice documentation and a very nice API in general, but did they force it down anyone's throat? Doom 3 was OpenGL, Call of Duty 4 too; you could still make your games with macOS/Linux/PlayStation in mind and make them portable if you had the resources/will to do so.
 
Using the already made (and not in assembly) DirectX SDK.

When I worked in games, we got him to do the PC conversions of Amiga/ST games. He was damn good.
We would throw the most gnarly ear bleeding 68k code over the wall, and a reasonably predictable amount of time later, functionally identical real mode x86 code would come back.
He did have CPU speed to his advantage, and made extensive use of macros, but even so - fitting things into 64k segments was HARD (I dimly recall AOS to SOA type transforms).

Which development tools were used to develop RollerCoaster Tycoon? The game was written and compiled using MS Macro Assembler V6.11c, MS Visual C V5, MS DirectX 5 SDK, plus assorted custom-written tools. The graphics were created using a variety of 3D modelling, rendering, and paint packages, including Lightwave V5.6, Raydream Studio V5, DeBabelizer Pro 4.5, Photoshop 5.5, Paint Shop Pro V5, Deluxe Paint 2E, Pro Motion V4.2, Painter V5, True Space V2, Corel Draw 8, and Meta Creations Poser 4.
What language was RollerCoaster Tycoon programmed in? It's 99% written in x86 assembler/machine code (yes, really!), with a small amount of C code used to interface to MS Windows and DirectX.
https://www.reddit.com/r/gamedev/comments/b2ugmi/how_was_game_with_such_complexity_as/eix1rqa/

99% of it is in assembly... as I understand it, he also had custom libraries that he made from when he ported Amiga/ST games. I don't think boiling that level of expertise down to "he just used the DirectX SDK" is reasonable. I guess if you mean that he didn't literally write the entire thing in assembly, graphics and all, then... well yeah, that's getting into a much harder problem at that point...

I mean, I'm agreeing with you either way; expecting that type of person to be behind every video game is just unreasonable. Some people have the willpower and the idea, but wouldn't have the means if game development was still like that. I'd prefer badly optimized games with new ideas rather than Assassin's Creed 395, because we don't have enough time to come up with a new series...
 
Oh fwiw I have actually done some assembly coding myself, as part of my (long in the past) coursework for CompE. It wasn't too bad, and I've actually seen how it works on a register level (flip flops and all that goodness) in a simple computer implementation in some VHDL simulation, but I can't really imagine doing anything complex in it (probably because my actual job diverged quite far from that sort of low level coding at this point)... and as a counterpoint to the idea that "well everyone could optimize on this level", I wonder how much GPUs themselves would be locked down in complexity on a hardware level if games required that sort of granular language? Like could you actually say we would have a 3090 Ti right now, or would we be stuck at the 780 GTX at this point? Just intuitively speaking I feel like there's a bit of a link there, too.
 
It goes in cycles, mostly decided by the consoles. If one has been around long enough, then they will remember when we were trying to hit 30 fps (PlayStation and 3dfx Voodoo era) because the PC hardware was so weak. Towards the end of the PlayStation era, PCs and GPUs were so far ahead that we got very good framerates even with max res and everything turned up, until about a year or so after the PS2 launched, when cross-platform games became very demanding. It has happened time and time again that PC GPUs start moving far enough ahead of consoles that the framerate gets very high in the newest games, and then a new console generation appears. Within 1-2 years even the high-end cards struggled to hit 40 fps with everything turned up to max. This hasn't happened in the last two generations due to consoles being based on mid-range PC GPUs, so people have gotten used to having high framerates in the newest games with everything at max.

The new consoles are fairly powerful (the Xbox has approximately a 6700 XT and the PS5 is slightly slower), but they can't match a high-end PC GPU, so framerates will still stay decent on PC. The only catch is when ray tracing is used, as it is an immature technology and not yet mainstream, so we get low framerates. My last 2 GPUs have mostly lasted around 4 years as there wasn't a big need for upgrading, while I used to upgrade every 1-2 years before that. My 3080 might get swapped within the next 12 months or it might stay for a few more years, depending on which games come out and the performance of the new cards. Having a GPU for more than 2 years and getting good framerates was unthinkable back in the 2000s when rendering at mainstream resolutions. IMO 2560x1440, and to some degree 1920x1080, are the mainstream resolutions today, while 4K is more rare due to the lack of high-refresh-rate monitors at reasonable prices, and generally even low-refresh-rate monitors are more expensive when going for 4K.
 