What will be the next big (technical) thing within games?

M76

[H]F Junkie
Joined
Jun 12, 2012
Messages
14,036
Rambling mode on:

I feel that the technology in games has been stagnating for a while now. Games that come out just use the same old things over and over again, and everyone is focused on graphics and nothing else, while the underlying elements of the games have been basically unchanged since the early 2000s.

Because let's face it, the difference between GTA3 and GTA5 is only graphical. There is literally no other improvement: the mechanics are exactly the same, the AI is no different, the gameplay is no different, the physics is no different, the cars handle the same. So what's actually new?

And most AAA game producers refuse to adopt technologies that have been around for years. Soft-body physics in racing games, for example, is still almost non-existent, despite it giving 100 times better realism than anything else.

It was first properly implemented (afaik) in a tech demo by a student in 1997 (terep2), and it still hardly crops up in any big production game. Of course physics is not immediately apparent; it's not something you can sell in a trailer, it's something that needs a hands-on experience to be appreciated. Maybe that's why big budget games are not concerned with it?
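The core of a soft-body sim is just a lattice of point masses connected by damped springs. Here is a minimal sketch of one such spring step — all the numbers (stiffness, damping, timestep) are made up for illustration, and a real car body would use hundreds of these in a lattice:

```python
# Minimal damped mass-spring step, the building block of soft-body physics.
# Two point masses connected by one spring; values are illustrative only.
import math

pos = [[0.0, 0.0], [1.2, 0.0]]   # positions (m); spring rest length is 1.0
vel = [[0.0, 0.0], [0.0, 0.0]]   # velocities (m/s)
mass = 1.0                        # kg per node
k = 50.0                          # spring stiffness (N/m)
damping = 2.0                     # damping along the spring axis (N*s/m)
dt = 0.01                         # timestep (s)

def step():
    dx = [pos[1][i] - pos[0][i] for i in range(2)]
    length = math.hypot(*dx)
    direction = [d / length for d in dx]
    rel_vel = sum((vel[1][i] - vel[0][i]) * direction[i] for i in range(2))
    # Hooke's law plus damping, applied equally and oppositely to both nodes
    force = k * (length - 1.0) + damping * rel_vel
    for i in range(2):
        vel[0][i] += (force * direction[i] / mass) * dt
        vel[1][i] -= (force * direction[i] / mass) * dt
        pos[0][i] += vel[0][i] * dt
        pos[1][i] += vel[1][i] * dt

for _ in range(500):
    step()
# The initially stretched spring settles back near its rest length of 1.0
```

Deforming a chassis then just means permanently changing rest lengths when an impact exceeds some threshold — that's essentially what terep2 did.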

But there are many other areas where I feel there is a huge lag, and need for improvement in games.

It's like games took the first step and then stopped. They implemented ragdoll and basic Havok physics as far back as 15 years ago, and since then there has been no will to improve.

I feel by now we should have full cloth simulation and proper collision detection, because that's the part that needs the most improvement. I hate poke-through, objects passing through each other, and objects getting stuck together.
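For what it's worth, the standard real-time cloth technique (Verlet integration plus distance-constraint relaxation) is not even that heavy. A toy version with made-up sizes, reduced to a 3-particle hanging rope:

```python
# Sketch of position-based (Verlet) cloth simulation, the approach most
# real-time cloth systems build on. Constants are illustrative only.
GRAVITY = -9.81
DT = 0.016

class Particle:
    def __init__(self, x, y, pinned=False):
        self.pos = [x, y]
        self.prev = [x, y]
        self.pinned = pinned

def verlet(p):
    if p.pinned:
        return
    x, y = p.pos
    # New position from implied velocity (pos - prev) plus gravity
    p.pos = [2 * x - p.prev[0], 2 * y - p.prev[1] + GRAVITY * DT * DT]
    p.prev = [x, y]

def satisfy(p1, p2, rest):
    # Nudge both particles so their distance returns toward `rest`
    dx = p2.pos[0] - p1.pos[0]
    dy = p2.pos[1] - p1.pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    diff = (dist - rest) / dist
    for p, s in ((p1, 0.5), (p2, -0.5)):
        if not p.pinned:
            p.pos[0] += s * diff * dx
            p.pos[1] += s * diff * dy

# A 3-particle "rope": top particle pinned, two hanging below it
rope = [Particle(0, 0, pinned=True), Particle(0, -1), Particle(0, -2)]
for _ in range(100):
    for p in rope:
        verlet(p)
    for _ in range(5):                    # constraint relaxation passes
        satisfy(rope[0], rope[1], 1.0)
        satisfy(rope[1], rope[2], 1.0)
```

A full cloth is the same thing on a grid of particles, with extra constraints for shear and bend — the expensive part is the collision detection against the world, which is exactly where games cheap out.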

But I think the next big leap in games could also be character simulation, where characters aren't static polygonal objects but fully simulated flesh and bone. It would be the next step necessary for realism: you could actually see muscles moving, with separate simulation of the skin, instead of the current implementation where everything is static. Limbs on characters look weird and distorted when bent, because they are static polygonal objects that are unable to change shape. Their "simulation" currently goes only as far as stretching the polygons at the joints to hide the seams.
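That joint-stretching trick is linear blend skinning, and you can see why it distorts with a tiny 2D sketch. A vertex weighted between two bones just gets linearly averaged between their transforms, so at a 90-degree elbow bend the mesh visibly collapses inward (the classic "candy wrapper" artifact) instead of bulging like muscle:

```python
# 2D sketch of linear blend skinning (LBS), the standard joint technique.
# The bone setup below is an invented minimal example.
import math

def rot2d(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def skin(vertex, weights, bone_transforms):
    # Blend the vertex as transformed by each bone, weighted per vertex
    out = [0.0, 0.0]
    for w, m in zip(weights, bone_transforms):
        p = apply(m, vertex)
        out[0] += w * p[0]
        out[1] += w * p[1]
    return out

# Upper arm stays put, forearm bone rotates 90 degrees at the elbow
bones = [rot2d(0.0), rot2d(math.pi / 2)]
# A vertex on the elbow, weighted half-and-half between the two bones,
# averages to [0.5, 0.5] — it has moved closer to the joint (length < 1),
# which is exactly the pinching the post complains about.
print(skin([1.0, 0.0], [0.5, 0.5], bones))
```

Actual muscle/skin simulation replaces this averaging with volume-preserving deformation, which is why it costs so much more.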

But I fear this could still be many years away (if they even care about it), as I only started noticing this kind of attention to detail even in movie VFX very recently.

I don't know what you think, but I feel pushing graphics further is pointless until we get the basics right, graphics is only the presentation, and it's worthless without substance.
 
VR. I can't even explain the difference between playing a game normally and on a Rift. It is game changing. There are certainly hardware limitations at the moment for VR compared to a flat monitor, but even right now you can see that VR is going to change the way we interact with virtual worlds.

In one of the racing games, Live For Speed, the mirrors are actually like real mirrors. They aren't just static displays showing you what's behind you; as you move your head, the viewpoint changes... just like in real life.
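One way a mirror like that can work (I don't know LFS's exact implementation) is to reflect the viewer's head position across the mirror plane and render the scene from that virtual eye every frame. The plane and head coordinates below are made-up examples:

```python
# Reflect the tracked head position across a mirror plane to get the
# virtual eye for rendering the mirror view. Coordinates are illustrative.

def reflect_point(eye, plane_point, plane_normal):
    # Reflect `eye` across the plane given by a point on it and a unit normal
    d = sum((eye[i] - plane_point[i]) * plane_normal[i] for i in range(3))
    return [eye[i] - 2 * d * plane_normal[i] for i in range(3)]

# Mirror lies in the plane x = 0, facing +x; driver's head at x = 2
mirror_point = [0.0, 0.0, 0.0]
mirror_normal = [1.0, 0.0, 0.0]
head = [2.0, 1.0, 0.5]

virtual_eye = reflect_point(head, mirror_point, mirror_normal)
print(virtual_eye)   # [-2.0, 1.0, 0.5] — move your head and the view shifts
```

Because the virtual eye moves whenever your head does, the mirror shows parallax exactly like a real one, which a static render-to-texture can't do.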



Pixels are not the answer though. Polygons or some new method of drawing will make things more realistic. A movie playing in 480p still looks more realistic than some 4K game, when it comes to organic objects at least.
 
Also, most games still have sprites for foliage. I would love to see real grass blades and leaves. Not sprites.
 
Pixels are not the answer though. Polygons or some new method of drawing will make things more realistic. A movie playing in 480p still looks more realistic than some 4K game, when it comes to organic objects at least.

A movie is still made of pixels. And polygons are the current method; pixels are just how the image is shown, but the objects are polygon-based in almost every current game. And I think polygons are not the problem.

You can get pretty realistic things with polygons through photogrammetry. It's going to be used more as computers get more powerful, because past a point it is just impossible to model things manually at higher detail. Every CG background you see in movies is based on photogrammetry, therefore it is pixels and polygons. The only reason it is not in games yet is that we still don't have enough computing power to render even a simple flat surface made of 500k polygons.

The reason movies look better is that they basically have infinite anti-aliasing. You're downsampling with the camera sensor from infinity* down to 30MPX or whatever.

*Of course by infinity I mean all the photons bouncing off the subject. So to achieve movie-like graphics we need two things:

  1. Model everything with photogrammetry (it's already a pretty advanced technology, and you can get textured models from almost anything with little to medium manual work)
  2. Render at a much higher internal resolution. 4K? Fuck that, more like 40K. In this case it doesn't matter what the resolution of your screen is; this is just the internal rendering resolution of the scene, which is then subsampled down to your screen's pixel count, just as reality is subsampled by the camera to the sensor's pixel count. (I think this is the harder part, getting hardware capable of rendering at such resolutions.)
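The downsampling idea in point 2 is just supersampling: take many samples inside each pixel and average them, the way a sensor integrates photons over its area. A toy version with an invented "scene" (a hard diagonal edge):

```python
# Toy supersampling anti-aliasing: many samples per pixel, averaged down.
# The coverage function is a hypothetical scene: a hard diagonal edge.

def coverage(x, y):
    return 1.0 if y > x else 0.0   # 1.0 above the edge, 0.0 below

def render_pixel(px, py, samples_per_axis):
    total = 0.0
    n = samples_per_axis
    for i in range(n):
        for j in range(n):
            # Sample positions spread evenly inside the pixel square
            sx = px + (i + 0.5) / n
            sy = py + (j + 0.5) / n
            total += coverage(sx, sy)
    return total / (n * n)

# One sample per pixel: the edge pixel snaps all-or-nothing (aliased)
print(render_pixel(0, 0, 1))    # exactly 0.0 here
# 16x16 samples: the edge pixel gets a smooth in-between value
print(render_pixel(0, 0, 16))   # close to 0.5, anti-aliased
```

Rendering internally at "40K" and scaling down to your screen is exactly this, just paid for across the whole frame instead of per pixel.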

Also, most games still have sprites for foliage. I would love to see real grass blades and leaves. Not sprites.

It again comes down to computing power; we'll get there hopefully. Although sprites are not very common nowadays. Sprites are flat images that always face towards the viewer rather than actual 3D objects.
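Concretely, a billboard sprite is just a quad whose rotation is recomputed every frame from the camera position. A sketch of the usual yaw-only ("cylindrical") version used for trees, with made-up coordinates:

```python
# Yaw-only billboard: rotate a quad around the vertical axis so it faces
# the camera. Positions are [x, y, z] with y up; values are illustrative.
import math

def billboard_yaw(sprite_pos, camera_pos):
    # Angle to spin the quad around the y axis so its face points
    # at the camera (only horizontal direction matters for a tree)
    dx = camera_pos[0] - sprite_pos[0]
    dz = camera_pos[2] - sprite_pos[2]
    return math.atan2(dx, dz)

# A tree sprite at the origin; as the camera circles, the yaw follows it
tree = [0.0, 0.0, 0.0]
print(billboard_yaw(tree, [0.0, 1.7, 5.0]))   # camera straight ahead: 0.0
print(billboard_yaw(tree, [5.0, 1.7, 0.0]))   # camera off to the side: pi/2
```

Real geometric grass replaces that one spinning quad with thousands of actual blades, which is exactly why it's still a computing-power problem.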
 
Two things I have noticed;

One was the tech demo on [H] a little while back from an Aussie company that had a new way of generating images without polygons. I believe they used points, and it was infinitely zoomable/scalable and fast as hell.

Second is procedural generation. After many hours in NMS I am a huge fan of it. Now we can have massive worlds and detailed maps which can be scaled as needed.
 
Two things I have noticed;

One was the tech demo on [H] a little while back from an Aussie company that had a new way of generating images without polygons. I believe they used points, and it was infinitely zoomable/scalable and fast as hell.

Second is procedural generation. After many hours in NMS I am a huge fan of it. Now we can have massive worlds and detailed maps which can be scaled as needed.

I think procedural generation has its place, but it's not building worlds. An algorithm can't do a believable city layout unless the shit is constrained out of it, but then the results will be too similar all the time, making the world look repetitive. It seems to me that procedural generation should be used on assets, not entire layouts of game worlds. I've been fantasizing about an asset catalog since about 2003, where as a game designer you can pick and choose assets for your game without having to model and design them. Kind of a repository, but one that doesn't have fixed assets; they are generated. Of course this software would be the holy grail: you just tell the software that you want a modern-looking chair made from a certain material, and it spits out the model in 15 seconds, which you can refine by making it rounder or blockier, or whatever.
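The skeleton of that kind of generator is simple: a seeded random source plus a parameter schema per asset type, so the same seed always reproduces the same asset and "make it rounder" is just a parameter nudge. A toy sketch — every parameter name and range here is invented for illustration:

```python
# Sketch of a seeded parametric asset generator, the "generated catalog"
# idea. All parameter names/ranges are invented; a real one would emit mesh
# data, not a dict.
import random

def generate_chair(seed, style="modern"):
    rng = random.Random(seed)       # same seed -> same chair, every time
    return {
        "seat_width":  round(rng.uniform(0.40, 0.55), 3),   # metres
        "seat_height": round(rng.uniform(0.42, 0.48), 3),   # metres
        "back_angle":  round(rng.uniform(95, 110), 1),      # degrees
        "leg_count":   rng.choice([3, 4]),
        "roundness":   round(rng.uniform(0.0, 1.0), 2),     # 0 blocky, 1 round
        "style":       style,
    }

# The designer's "make it rounder" refinement is just a parameter tweak
chair = generate_chair(seed=42)
rounder = dict(chair, roundness=min(1.0, chair["roundness"] + 0.25))
print(chair)
print(rounder)
```

The hard part isn't this plumbing, it's the generator that turns parameters into good-looking geometry — but the seed-plus-parameters contract is what would make such a catalog browsable and refinable.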

I have envisioned a gaming ecosystem where asset creation and game design become two completely separate disciplines, where there are companies focused solely on creating assets to be used in games, and companies creating games. Much like VFX studios are separate entities in the movie business. This way assets aren't throwaway; they can be re-used in multiple projects, which in theory could reduce both the time needed for game development and the cost.
 
Sprites are the things that always face towards the viewer and not actual 3d objects.
he probably means those bits of flat polygons on the ground with alpha-transparent textures on them... ya know, Sprites v1.2 :p

VR seems like the next big thing

maybe after that, when processing power gets strong enough, we'll see things like non-wireframe/non-texture physics-based worlds (built using "atoms" or whatever we'll call them, complete with properties to simulate solids/bendables/cloth/water/fire/smoke/etc...) right now we sorta barely simulate some of these things

procedural I think will also continue to gradually improve, especially as AI advances and we can instruct them to build our new worlds sim-city style, giving us broader control over things to maintain a bit of human touch... although I do think at some point we'll see completely non-human-design games, which may be fun
 
oh I forgot to mention haptics and smell... we already know they can increase immersion within VR a ton (a fan turning on when you enter a windy tunnel), so, things like sub-woofers and electrode body meshes to titillate you while you frolic along those green Yoshi fields, and smells of lavender and rotten corpse flesh... ah, can't wait for that
 
Model everything with photogrammetry
does photogrammetry offer any sorta interactivity? what I've seen so far is all static meshes, which are fine for movies but a big bummer in game worlds...
 
does photogrammetry offer any sorta interactivity? what I've seen so far is all static meshes, which are fine for movies but a big bummer in game worlds...
Static meshes are good for inanimate objects; characters and anything dynamic need to be simulated. But some games already use photogrammetry for characters on some level. I'm not privy to the exact details of the technology and how much photogrammetry is involved, but I suspect they use it to create a baseline model which they use as a basis for the actual character, and to generate textures from.
 