The best hacks games used to beat hardware restrictions

M76

[H]F Junkie
Joined
Jun 12, 2012
Messages
14,035
I've always been fascinated by the clever tricks games used to make graphics look better than the hardware should have allowed.
So here are my favourite tricks that made games look better without exceeding hardware limitations.

  1. Sprites
    Everyone knows sprites; they were the favourite way to make objects look nicer in 3D environments. Sprites were used for everything: power-ups, items, NPCs, enemies, and even smoke, fire and explosions.

    But what is a sprite? It's a 2D drawing inserted into the 3D environment that always faces you. So basically it's a flat image, where you only have to worry about its scale and clipping when drawing it.
    In earlier games sprites were static, meaning they always showed the same side towards you. Later games overcame that by drawing the object from 4 or 8 sides and switching the sprite based on your vantage point; Doom famously used this method.

    Now in 2017 you'd think: why would they ever do that? Because back then a 3D object couldn't have more than a dozen polygons if the computer was to handle it at reasonable speed. Even some car racing games used sprites to represent the AI cars. Most notably the F1GP series did so up to its third instalment, released in 2000, although by then the cars had far more than 8 views each. And sprites lived on as trackside objects for years after that.
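    As a rough Python sketch of how the rotation-picking works (the function and parameter names are my own, not from any particular engine): the renderer takes the angle from the object to the camera, expresses it relative to the object's facing, and snaps it to the nearest of the pre-drawn views.

```python
import math

def pick_sprite_rotation(obj_x, obj_y, obj_facing, cam_x, cam_y, rotations=8):
    """Return the index of the pre-drawn sprite rotation to show.

    Rotation 0 is the object seen from straight ahead of it; indices
    increase counter-clockwise around the object (Doom-style)."""
    # Angle from the object to the camera, in world space.
    to_cam = math.atan2(cam_y - obj_y, cam_x - obj_x)
    # Express it relative to the direction the object is facing.
    rel = (to_cam - obj_facing) % (2 * math.pi)
    # Snap to the nearest of the 8 (or 4) pre-drawn views.
    step = 2 * math.pi / rotations
    return int((rel + step / 2) // step) % rotations

# Camera dead ahead of the object -> frontal sprite (index 0).
print(pick_sprite_rotation(0, 0, 0.0, 10, 0))   # 0
# Camera behind the object -> rear view (index 4 of 8).
print(pick_sprite_rotation(0, 0, 0.0, -10, 0))  # 4
```

    Static sprites are just the degenerate case where the same image is returned regardless of the angle.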

  2. Crossed billboards
    This is sort of an extension of sprites; I don't know if the method has an official name, though you'll often see it called crossed quads or cross-trees. It's not baking, but the idea is similar: render a complex object with very little resource usage. It was used mostly for trees or other roughly cylindrical objects that would normally need many sides, and thus a polygon count that was unattainable back in the day. Basically these are sprites applied as textures to rectangles placed in the 3D environment, so instead of always facing you they have a fixed orientation. They don't pop in and out as you move around the object, which makes them much more convincing from a distance; the illusion only breaks down up close or from a high or low angle. Here you can see that three rectangles are enough for a fairly convincing tree, making it a very low poly object. But viewed from the top, the tree would just look like an asterisk.
    (attached: flat.png, flat2.png)
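    A minimal sketch of the geometry (my own helper, nothing engine-specific): the quads all share one vertical axis and are fanned out over half a turn, since each rectangle is textured on both faces.

```python
import math

def crossed_quads(n_quads=3, width=2.0, height=4.0):
    """Vertex positions for a cross-billboard 'tree': n flat rectangles
    sharing a vertical axis, each rotated by 180/n degrees from the last.
    Returns a list of quads, each a list of four (x, y, z) corners."""
    quads = []
    for i in range(n_quads):
        a = math.pi * i / n_quads          # half-turn spread is enough
        dx, dz = math.cos(a), math.sin(a)  # direction the quad spans
        hw = width / 2
        quads.append([
            (-hw * dx, 0.0,    -hw * dz),   # bottom-left
            ( hw * dx, 0.0,     hw * dz),   # bottom-right
            ( hw * dx, height,  hw * dz),   # top-right
            (-hw * dx, height, -hw * dz),   # top-left
        ])
    return quads

tree = crossed_quads()
print(len(tree))  # 3 quads = 6 triangles for the whole tree
```

    Six triangles total, versus the hundreds a modelled canopy would need.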

  3. Detail textures
    This is a method to create the illusion that textures are much higher resolution than they actually are, using so-called noise maps. You take an extremely high resolution sample of the base material, make a monochrome noise map from it, and tile that map many times under the medium resolution texture on the object. The lower resolution texture overlaid on the noise map also masks any tiling patterns, even when you're looking at the object from a distance.
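    The combine step boils down to sampling two maps at different frequencies and modulating one by the other. A toy sketch under my own naming (real engines do this per-pixel in the texture units, not in Python):

```python
def sample_with_detail(base, detail, u, v, detail_tiles=8):
    """Combine a low-res base texture with a tiled monochrome detail map.
    base: 2D list of grey values 0..1; detail: 2D list of values centred
    on 0.5. The detail map repeats detail_tiles times across the surface,
    so close-ups pick up high-frequency grain the base doesn't have."""
    bh, bw = len(base), len(base[0])
    dh, dw = len(detail), len(detail[0])
    # Nearest-neighbour sample of the base texture.
    b = base[int(v * bh) % bh][int(u * bw) % bw]
    # The detail map is sampled at detail_tiles times the frequency.
    d = detail[int(v * dh * detail_tiles) % dh][int(u * dw * detail_tiles) % dw]
    # 2x multiply: a detail value of 0.5 leaves the base colour unchanged.
    return max(0.0, min(1.0, b * d * 2.0))

base = [[0.6]]                      # flat grey "texture"
detail = [[0.5, 0.7], [0.3, 0.5]]   # tiny stand-in noise map
print(sample_with_detail(base, detail, 0.0, 0.0))  # 0.6 (detail = 0.5)
```

    Because the detail map is monochrome and centred on mid-grey, it only perturbs brightness, which is why it tiles so invisibly under the colour texture.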

  4. Pre-rendering
    This method pre-renders the environment offline and only plays back the pre-rendered frames on the player's computer, projecting any interactive objects on top of the pre-rendered background. It made graphics look ten years ahead of their time, but came at a big cost: the player was confined to a single trajectory he could not deviate from. Notable games that used this method were MegaRace and Cyberia.

    However, one game used this while still preserving some degree of freedom: The Need for Speed (1994/1995 on PC). Through some sort of affine mapping you could turn your perspective by a small amount, and since it's a racing game you're only ever facing one direction anyway. The funny thing is that the game let you turn around, but then immediately switched to an outside view where the camera still faced forward, so the system didn't break down. Another interesting thing is that to this day I'm not sure how they did it, but a 3rd party track editor could alter the trajectory, and the game would plot the altered track along the new path. If anyone knows how they made the game render the environment along an arbitrary trajectory, please enlighten me, I'm really interested.

  5. Geometry instancing
    This is a method to render many instances of the same object in a single draw call. It requires hardware support, so it's as much a feature as a trick, but it's still a great way to reduce hardware requirements when you need many copies of the same or very similar complex objects in one scene. Far Cry is the first game I know of to use it, to render dense foliage rather than the crossed-rectangle trick I mentioned above; in fact it's what finally made it possible to retire that method. But at first instancing only worked on ATI video cards, since NVIDIA only added support with the GeForce 6 series, released after the game, while ATI had supported it since the Radeon 9500. I still remember that even on an ATI card you had to enable instancing through the console.
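    The idea behind an instanced draw call (as in OpenGL's glDrawArraysInstanced) can be modelled in a few lines. This is a toy simulation with my own names, not real GPU code: the mesh is submitted once, and only a small per-instance transform (here just a translation) differs per copy.

```python
def draw_instanced(mesh, instance_offsets):
    """Toy model of an instanced draw call: one set of vertex data,
    one 'call', and a per-instance offset positioning each copy.
    Returns every transformed vertex, as the GPU would produce them."""
    out = []
    for tx, ty, tz in instance_offsets:          # one pass per instance
        for (x, y, z) in mesh:                   # same vertex data every time
            out.append((x + tx, y + ty, z + tz))
    return out

blade = [(0, 0, 0), (0.1, 0, 0), (0.05, 1, 0)]   # one grass-blade triangle
offsets = [(i * 0.5, 0, j * 0.5) for i in range(100) for j in range(100)]
field = draw_instanced(blade, offsets)
print(len(field))  # 30000 vertices from a 3-vertex mesh and one "call"
```

    The saving on real hardware is that 10,000 blades cost one draw call's worth of CPU overhead instead of 10,000.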
That's all for now. There were other interesting breakthroughs in graphics, but those were mostly made possible by new hardware features.

Anything to add? What methods did I miss? Back in the day there were tons of tricks on the C64 and other old-school 8-bit systems to make the best use of limited capabilities, but I'd like to focus on the PC here.
 
Mipmaps, for better performance when lots of objects are rendered at various distances.
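A mipmap chain is just the texture repeatedly box-filtered down to 1x1, so distant objects sample a small pre-averaged level instead of thrashing the full-size texture. A minimal sketch (greyscale, power-of-two sizes assumed):

```python
def build_mipmaps(tex):
    """Build a mipmap chain by averaging 2x2 blocks down to a 1x1 level.
    tex is a square 2D list of grey values; its size must be a power of two."""
    chain = [tex]
    while len(tex) > 1:
        n = len(tex) // 2
        tex = [[(tex[2*y][2*x] + tex[2*y][2*x+1] +
                 tex[2*y+1][2*x] + tex[2*y+1][2*x+1]) / 4.0
                for x in range(n)] for y in range(n)]
        chain.append(tex)
    return chain

levels = build_mipmaps([[0.0, 1.0], [1.0, 0.0]])  # 2x2 checkerboard
print(len(levels), levels[-1][0][0])  # 2 levels; the 1x1 level is 0.5
```

The pre-averaging is also why mipmaps kill the shimmering you get when a far-away checker pattern is sampled at full resolution.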

Bump mapping gives the illusion of surface texture or depth on geometry that is actually flat, and performs far better than modelling that detail with polygons.
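The trick is that only the shading changes, not the geometry: the lighting math uses a normal perturbed by a height map instead of the surface's real normal. A small sketch (my own helper, simple Lambert lighting, light shining straight down the z axis):

```python
import math

def bump_lit_intensity(height, x, y, light=(0.0, 0.0, 1.0), strength=1.0):
    """Lambert lighting of a flat surface whose normal is perturbed by a
    height map (classic bump mapping). height is a 2D list of values;
    light must be a unit vector. Returns brightness in 0..1."""
    # Finite differences of the height field give the apparent slope.
    dhdx = height[y][min(x + 1, len(height[0]) - 1)] - height[y][x]
    dhdy = height[min(y + 1, len(height) - 1)][x] - height[y][x]
    # Tilt the flat normal (0, 0, 1) by the slopes and renormalise.
    nx, ny, nz = -dhdx * strength, -dhdy * strength, 1.0
    inv = 1.0 / math.sqrt(nx*nx + ny*ny + nz*nz)
    nx, ny, nz = nx*inv, ny*inv, nz*inv
    # Lambert term: brighter where the bumped normal faces the light.
    return max(0.0, nx*light[0] + ny*light[1] + nz*light[2])

flat = [[0.0, 0.0], [0.0, 0.0]]
bumpy = [[0.0, 1.0], [0.0, 0.0]]
print(bump_lit_intensity(flat, 0, 0))         # 1.0: faces the light head-on
print(bump_lit_intensity(bumpy, 0, 0) < 1.0)  # True: the bump tilts it away
```

The polygon count never changes; viewed edge-on, the surface is still visibly flat, which is exactly the limitation parallax mapping and tessellation later attacked.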
 
A big one used today is ambient occlusion, which uses the depth buffer to estimate how bright a surface should be based on how much nearby geometry blocks ambient light from reaching it. It is currently much cheaper than full global illumination or ray tracing. The resulting soft shadowing in creases and corners gives a 3D scene more depth. The effect is dramatic in a game like Dark Souls.
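The screen-space version (SSAO) boils down to: for each pixel, sample the depth buffer around it and darken in proportion to how many neighbours are closer to the camera. A deliberately crude sketch, with my own names and a plain box of samples instead of the usual hemisphere kernel:

```python
def ssao_factor(depth, x, y, radius=1, bias=0.01):
    """Crude screen-space ambient occlusion. depth is a 2D depth buffer
    (larger = farther). Returns 1.0 for a fully lit pixel, less where
    nearer neighbours are likely to occlude ambient light."""
    h, w = len(depth), len(depth[0])
    occluded = total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            # Clamp samples to the screen edges.
            sx = min(max(x + dx, 0), w - 1)
            sy = min(max(y + dy, 0), h - 1)
            total += 1
            if depth[sy][sx] < depth[y][x] - bias:   # neighbour is nearer
                occluded += 1
    return 1.0 - occluded / total

# A pixel at the bottom of a "pit" is surrounded by nearer geometry:
pit = [[0.2, 0.2, 0.2], [0.2, 0.9, 0.2], [0.2, 0.2, 0.2]]
print(ssao_factor(pit, 1, 1))   # 0.0: fully occluded, rendered dark
flat = [[0.5] * 3 for _ in range(3)]
print(ssao_factor(flat, 1, 1))  # 1.0: open surface, no darkening
```

Real implementations jitter the sample pattern and blur the result, but the occlusion-counting core is the same.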

A more accurate but expensive method compared to parallax or bump mapping that adds surface depth is tessellation, which subdivides polygons into more triangles that can simulate bumps, layers, or create a smooth surface. Since this is done in the geometry pipeline instead of the pixel pipeline, edge definition is maintained, allowing for more accurate lighting on the tessellated models in the final scene. It also adds complexity to the scene without the overhead that comes from working with source models that have a high polygon count to begin with. I consider it a trick because with a high enough tessellation factor you could turn a 12-polygon cube into a smooth sphere.
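That cube-into-sphere claim is easy to demonstrate offline: subdivide each triangle, then displace every vertex onto the unit sphere. A sketch with my own helpers (the GPU does the equivalent in tessellation plus displacement stages):

```python
import math

def subdivide(tri):
    """Split one triangle into four using its edge midpoints."""
    a, b, c = tri
    mid = lambda p, q: tuple((p[i] + q[i]) / 2 for i in range(3))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def tessellate_to_sphere(tris, levels):
    """Repeatedly subdivide, then push every vertex onto the unit sphere,
    rounding flat source geometry into a smooth curved surface."""
    for _ in range(levels):
        tris = [t for tri in tris for t in subdivide(tri)]
    norm = lambda p: tuple(c / math.sqrt(sum(x * x for x in p)) for c in p)
    return [tuple(norm(v) for v in tri) for tri in tris]

# One corner triangle of a cube face, tessellation level 3 -> 4^3 triangles
face = [((1, 1, 1), (1, -1, 1), (-1, -1, 1))]
sphere_patch = tessellate_to_sphere(face, 3)
print(len(sphere_patch))  # 64 triangles from a single source triangle
```

The source asset stays a 12-triangle cube; the extra 64x geometry exists only downstream of the tessellator, which is exactly why it sidesteps the high-poly-model overhead.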

An older trick that brought us depth in 3D is infinite far z, or infinite projection. This gave us skyboxes. It allows whatever is being rendered on the far plane to always stay behind whatever else is being rendered in the scene, giving the illusion of an unreachable horizon.
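In matrix terms this is the limit of the standard perspective projection as the far plane goes to infinity. A sketch of the OpenGL-convention matrix (my own function name; only the two far-plane entries differ from the finite version):

```python
import math

def infinite_perspective(fov_y_deg, aspect, near):
    """Perspective projection with the far plane at infinity: taking
    f -> inf in the standard OpenGL matrix gives m[2][2] = -1 and
    m[2][3] = -2n, so geometry 'at infinity' lands just inside the far
    clip plane instead of being clipped away. Row-major 4x4."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2)
    return [
        [f / aspect, 0.0,  0.0,         0.0],
        [0.0,        f,    0.0,         0.0],
        [0.0,        0.0, -1.0, -2.0 * near],
        [0.0,        0.0, -1.0,         0.0],
    ]

m = infinite_perspective(90.0, 16 / 9, 0.1)
# An extremely distant point projects to depth just under 1 (the far plane):
z = -1e9
clip_z, clip_w = m[2][2] * z + m[2][3], m[3][2] * z
print(round(clip_z / clip_w, 6))  # ~1.0, but never exceeds it
```

So the skybox can be drawn at any nominal distance and still reliably sits behind everything else in the depth test.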
 
Oh, ok. I didn't mean to sound like a smart-ass, I was just thinking maybe it was some programming/art jargon. Thanks! :D
 
He may have forgotten the 'h' to make it "through." It wouldn't make the composition of that sentence any better, though :D.
Though, trough, through I never know which one's which.
 