Euclideon's Unlimited Detail Engine

TheBuzzer
I know you guys have heard about this before, but today I found myself wondering what they're doing now and how much closer their tech is to being ready.

Seems like they're getting closer to releasing, and I'm still unsure how great this will be.

I have a feeling that for this to be really great, they would need to mix polygons with their unlimited detail engine.

 
This has been "almost ready" for years now. No reason to believe anything real is coming out of it at this point unless I see it working on my own machine for myself.
 
This has been "almost ready" for years now. No reason to believe anything real is coming out of it at this point unless I see it working on my own machine for myself.

No kidding. It seems to resurface every couple of years, when they run low on funding I'm guessing, drum up some more hype, and then go silent again. If there really was something to it, it would have been delivered by now. You don't screw around with technology for over 7 years in the computer world (we first heard about this in 2010, and it was already supposedly almost done at that point) because shit changes too much. What was true of systems and GPUs 7 years ago isn't what the current tech is doing, so if you wait that long, your stuff is obsolete.

The real issue is that, if you look at their patent, they seem to have more or less reinvented sparse voxel octrees, a hierarchical method for storing and rendering volumetric data. It's not a new idea, and the fact that it isn't new and yet isn't used should give a clue that there might be a problem. There's a big one: animation. You can't animate the data because of how it is stored, so while it can work for a static scene, you aren't using it for a game. What they seem to be doing is more or less having multiple octrees and switching between them, which is problematic and makes for a pretty jerky appearance. Likewise, realtime lighting can't be done; you just have to calculate a lightmap and bake it in.
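
For anyone who hasn't looked at one, here's a rough sketch of what a sparse voxel octree can look like. This is a generic textbook-style layout, not anything from Euclideon's patent, and all the names are made up:

[CODE]
// Generic sparse voxel octree sketch (illustrative only; names are made up,
// and this is not the layout from Euclideon's patent). Each node stores an
// 8-bit mask saying which of its eight octants exist, plus the pool index of
// its first child; existing children are packed contiguously, which is what
// keeps the "sparse" tree compact.
#include <cstdint>
#include <vector>

struct SvoNode {
    uint8_t  childMask;   // bit i set => octant i has a child
    uint32_t firstChild;  // pool index of the first existing child
    uint32_t color;       // baked RGBA; interior nodes can hold an average for LOD
};

struct Svo {
    std::vector<SvoNode> nodes;  // node 0 is the root

    // Descend to the node containing a point in [0,1)^3, stopping at maxLevel.
    // Returns nullptr if that region of space is empty.
    const SvoNode* lookup(float x, float y, float z, int maxLevel) const {
        const SvoNode* n = &nodes[0];
        for (int level = 0; level < maxLevel; ++level) {
            // Pick the octant this point falls into at the current level.
            int cx = x >= 0.5f, cy = y >= 0.5f, cz = z >= 0.5f;
            int octant = cx | (cy << 1) | (cz << 2);
            if (!(n->childMask & (1u << octant)))
                return nullptr;  // empty space: nothing stored here
            // Children are packed: count existing children below `octant`.
            int offset = 0;
            for (int i = 0; i < octant; ++i)
                offset += (n->childMask >> i) & 1;
            n = &nodes[n->firstChild + offset];
            // Re-map coordinates to the chosen octant's [0,1)^3 space.
            x = (x - 0.5f * cx) * 2.0f;
            y = (y - 0.5f * cy) * 2.0f;
            z = (z - 0.5f * cz) * 2.0f;
        }
        return n;
    }
};
[/CODE]

The animation problem falls straight out of that layout: a voxel's position is implicit in where it sits in the tree, so moving anything means rebuilding the tree instead of just updating a transform, which is why static scans work and characters don't.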

And for that matter, I'm not that impressed with their stuff anymore compared to the kind of detail you see in the Frostbite engine. BF1 and Battlefront aren't great games, but man, they have some serious scene and closeup detail going on, and they actually run, in real time, on all kinds of systems.
 
they seem to have more or less reinvented sparse voxel octrees
I was under the impression it wasn't about that so much as it was the compression algorithm they developed to store and retrieve the info.

I'm pretty happy to see any progress here, even though at present it still looks and feels (and tastes) like poop.
 
This technology looks great and I can see it having many uses but gaming isn't one of them.

I'm still of the belief that they emerge every few years with 'gaming' demos for the sake of brand awareness and press hubbub, but they don't actually aim to be in the business of games or game engines.

I can see this having many uses in the educational realm with maps, since they have their laser scanning tech, and with static models such as for interactive museum exhibits.
 
Here is a better presentation w/ interview...



Jump to 21m57s for the best line:
"I don't think there's any skeptics anymore"
dude apparently hasn't been to [H]ardforums :p
 
This technology looks great and I can see it having many uses but gaming isn't one of them.
It's nothing new; laser scanning and generating models from point clouds have been around for years in professional VFX and some other fields. But until now GPUs weren't strong enough to use these automatically constructed point-cloud-based models for any real-time graphics. It's still a struggle, but the biggest problem is that the models look fucking ugly up close. They look good from a distance with textures, so you can sell a lie with them to the uninitiated, meaning you can sell them as background in VFX scenes in TV and movies. But for anything you can get up close to in a game, it won't fly.
 
It's nothing new; laser scanning and generating models from point clouds have been around for years in professional VFX and some other fields. But until now GPUs weren't strong enough to use these automatically constructed point-cloud-based models for any real-time graphics. It's still a struggle, but the biggest problem is that the models look fucking ugly up close. They look good from a distance with textures, so you can sell a lie with them to the uninitiated, meaning you can sell them as background in VFX scenes in TV and movies. But for anything you can get up close to in a game, it won't fly.
I do a form of this for a living. I could actually see it being workable if some of the newer techniques were used in addition to regular lidar.

For instance, photogrammetric techniques have been developed in the last few years that actually synthesize a lidar-like 3D point cloud from a series of photos. I could see the fusion of this technique with conventional lidar adding enough resolution to make a convincing video game. Also, there's nothing that says they couldn't just model characters and objects using conventional modeling software and then synthesize a point cloud from the model.
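
That last part, going from a mesh back to a point cloud, is a standard trick: pick triangles weighted by area, then sample uniform barycentric coordinates. Rough sketch (all names and types here are mine, not from any particular tool):

[CODE]
// Illustrative sketch: turn a triangle mesh into a point cloud by sampling
// points uniformly over its surface (area-weighted triangle selection plus
// uniform barycentric sampling). Types and names are assumptions.
#include <array>
#include <cmath>
#include <random>
#include <vector>

using Vec3 = std::array<float, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a[0]-b[0], a[1]-b[1], a[2]-b[2]};
}
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
static float length(const Vec3& v) {
    return std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
}

std::vector<Vec3> samplePointCloud(const std::vector<Vec3>& verts,
                                   const std::vector<std::array<int,3>>& tris,
                                   int numPoints, unsigned seed = 42) {
    // Weight each triangle by its area so sampling is uniform over the surface.
    std::vector<float> areas;
    areas.reserve(tris.size());
    for (const auto& t : tris) {
        Vec3 e1 = sub(verts[t[1]], verts[t[0]]);
        Vec3 e2 = sub(verts[t[2]], verts[t[0]]);
        areas.push_back(0.5f * length(cross(e1, e2)));
    }
    std::mt19937 rng(seed);
    std::discrete_distribution<int> pickTri(areas.begin(), areas.end());
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    std::vector<Vec3> cloud;
    cloud.reserve(numPoints);
    for (int i = 0; i < numPoints; ++i) {
        const auto& t = tris[pickTri(rng)];
        // Uniform barycentric coordinates via the square-root trick.
        float r1 = std::sqrt(uni(rng)), r2 = uni(rng);
        float a = 1.0f - r1, b = r1 * (1.0f - r2), c = r1 * r2;
        const Vec3 &p0 = verts[t[0]], &p1 = verts[t[1]], &p2 = verts[t[2]];
        cloud.push_back({a*p0[0] + b*p1[0] + c*p2[0],
                         a*p0[1] + b*p1[1] + c*p2[1],
                         a*p0[2] + b*p1[2] + c*p2[2]});
    }
    return cloud;
}
[/CODE]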

The thing I'm not really clear on is how using a point cloud eliminates the need for lots of compute horsepower for rendering. I get how the infinite detail would work, but not how they get around the need to move lots of data around to accomplish it.
 
I do a form of this for a living. I could actually see it being workable if some of the newer techniques were used in addition to regular lidar.

For instance, photogrammetric techniques have been developed in the last few years that actually synthesize a lidar-like 3D point cloud from a series of photos. I could see the fusion of this technique with conventional lidar adding enough resolution to make a convincing video game.

The thing I'm not really clear on is how using a point cloud eliminates the need for lots of compute horsepower for rendering. I get how the infinite detail would work, but not how they get around the need to move lots of data around to accomplish it.
You don't have to tell me; I work with Agisoft PhotoScan.
But in many cases it's hit and miss with complicated structures. It does work great with aerial photography, though, to the point of almost making ALS (airborne laser scanning) pointless when you can get comparable results from just photos.

Also, there's nothing that says they couldn't just model characters and objects using conventional modeling software and then synthesize a point cloud from the model.
But their method's supposed advantage is that they don't have to manually model anything, just use point clouds to generate meshes. There is no point in going back to a point cloud from a model.
 
Well, it's not impossible. You can render infinite detail with one ray trace per monitor pixel, continuing the trace through transparent objects, plus a reflection ray per hit point to determine lighting or a cubemap lookup. You could do that with any geometric data structure. However, using point clouds makes animation, effects, and storage more difficult.
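
To make that concrete, here's a toy version of the one-ray-per-pixel idea, with a single analytic sphere standing in for whatever octree or point cloud you'd actually trace (completely illustrative, nothing Euclideon-specific):

[CODE]
// Minimal sketch of "one ray trace per monitor pixel": cast one primary ray
// per pixel and shade the nearest hit. An analytic sphere stands in for the
// real spatial data structure.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray/sphere intersection: returns nearest positive t, or -1 on a miss.
// Assumes dir is normalized.
static float hitSphere(Vec3 orig, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = sub(orig, center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    float t = -b - std::sqrt(disc);
    return t > 0.0f ? t : -1.0f;
}

int main() {
    const int W = 64, H = 32;             // tiny "monitor" for the demo
    Vec3 eye{0, 0, 0}, center{0, 0, -3};  // camera at origin, sphere ahead
    for (int py = 0; py < H; ++py) {
        for (int px = 0; px < W; ++px) {
            // One ray through the middle of each pixel.
            float u = (px + 0.5f) / W * 2.0f - 1.0f;
            float v = 1.0f - (py + 0.5f) / H * 2.0f;
            Vec3 dir{u, v, -1.0f};
            float invLen = 1.0f / std::sqrt(dot(dir, dir));
            dir = {dir.x * invLen, dir.y * invLen, dir.z * invLen};
            float t = hitSphere(eye, dir, center, 1.0f);
            std::putchar(t > 0.0f ? '#' : '.');  // hit vs. background
        }
        std::putchar('\n');
    }
    return 0;
}
[/CODE]

The cost scales with the number of pixels rather than with scene complexity, which is the whole "unlimited detail" pitch; the catch, as above, is everything else: secondary rays, animation, and storage.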
 