The "Unlimited Detail" Guys Are Back

Voxels? Right?

As to the guy's voice: hey, guess what: not everyone with a computer is from Uhmurica, or whatever nationalistic shithole you hail from.
 
The lighting seemed like it was pre-rendered, given the light map that was shown... maybe that was just an example... another reason why this might not exactly work in a game. Not to mention calculating collisions, etc.

Could be some cool tech for other applications... I'm interested.
 
Just to clarify: I don't know what one would consider these "atoms" (if anything at all - assuming this isn't complete bullshit), but it's definitely not a point cloud. You could not fit enough rendering power into a single desktop to render an entire environment, plus NPC models, all with point clouds.

Sounds about right; from their website, the "breakthrough" is a new way to handle point cloud data that is supposedly fast/efficient enough to allow "unlimited detail".

http://unlimiteddetailtechnology.com/description.html said:
Unlimited Detail's method is very different to any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing, polygons, and point clouds/voxels; they all have strengths and weaknesses. Polygons run fast but have poor geometry; ray tracing and voxels have perfect geometry but run very slowly.



Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine. It is best explained like this: if you had a Word document and you went to the search tool and typed in a word like 'money', the search tool would quickly find every place that word appeared in the document. Google and Bing are also search engines that go looking for things very quickly. Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small.

The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn't touch any unneeded points. All it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: what objects are closest to the camera, what objects cover each other, and how big an object should be as it gets further back. But all of this is done by a new sort of method that we call "mass connected processing". Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.

The result is a perfect, pure, bug-free 3D engine that gives Unlimited Geometry running super fast, and it's all done in software.
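Their "one point per pixel" pitch can be sketched as a brute-force search (purely my own illustration, not their algorithm; whatever "mass connected processing" is, it presumably avoids touching every point, which this naive version does not):

```python
# Naive sketch of "one point per pixel" (my own brute-force illustration,
# not their algorithm): project every stored point into screen space and
# keep only the nearest point for each pixel.

def render_points(points, width, height):
    """points: list of (x, y, z, color), with x and y already in [0, 1)
    screen space and z = distance from the camera.
    Returns {(px, py): color} for the nearest point at each pixel."""
    framebuffer = {}  # (px, py) -> (z, color)
    for x, y, z, color in points:
        px, py = int(x * width), int(y * height)
        best = framebuffer.get((px, py))
        if best is None or z < best[0]:  # simple depth test
            framebuffer[(px, py)] = (z, color)
    return {pix: color for pix, (z, color) in framebuffer.items()}

pixels = render_points(
    [(0.1, 0.2, 5.0, "red"),    # hidden behind the blue point
     (0.1, 0.2, 2.0, "blue"),   # nearer, so it wins pixel (1, 2)
     (0.8, 0.9, 1.0, "green")],
    width=10, height=10)
print(pixels)  # {(1, 2): 'blue', (8, 9): 'green'}
```

The whole trick they're claiming is doing this without visiting every point, which is where the "search algorithm" framing comes in.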

Only one piece of the game-making puzzle (rendering geometry), but an interesting one. The proof is in the pudding, as they say, but it certainly seems plausible. I wouldn't invest until I saw an actual working demo myself, though ;)
 
Just to clarify: I don't know what one would consider these "atoms" (if anything at all - assuming this isn't complete bullshit), but it's definitely not a point cloud. You could not fit enough rendering power into a single desktop to render an entire environment, plus NPC models, all with point clouds.

I've worked with point clouds at my job before, and they were incredibly small compared to these environments - just the size of an oil compressor you'd find in a car - and yet our computers ground to a halt. 2GB of vRAM, over 4GB of system RAM... and the frame rates were like slideshows.

I don't know what's more impossible at the moment for future gaming: real-time ray tracing, or real-time point cloud rendering. Both are so physically taxing on the hardware that running them on anything but an entire server is an exercise in futility.

I can remember rendering scenes with POV-Ray back in the early '90s: 1280x1024 frames that took hours and hours to render for a single scene. Now? The same scene on my current rig would be done in less than a minute, but we still can't do practical real-time ray tracing. Yet.
 
Sounds great. Call me when it shows up in something I can run on my machine.
 
As he mentioned, give it to artists and other notable developers, or quite frankly release a demo - even a fly-through demo.
 
If you took the time to read the description of the video, which most of you seem to have dismissed, they mention that they have animations and that they are coming soon.

As Josiah said, you could use the point cloud data to create a frame of the environment, and include high-resolution models to create the animations. My friend and I figured that's how we would make use of the technology, based on their calling it Unlimited Detail. The amount of processing power needed to sift through billions of non-moving points is nothing compared to computing transformations on hundreds of objects. They had the previous one running on a laptop, if I remember correctly. Since most of the complex calculations for objects and textures are done on a graphics card, all that is really needed is a single core to control the objects, with the rest working on finding the proper points in the cloud. A search like that is massively parallel.

You could compare this technology to octrees. (To quote my friend.)
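For anyone unfamiliar with the comparison, here's a minimal octree sketch (my own toy illustration, nothing to do with their actual engine): points get bucketed into recursive octants, so a spatial query can discard whole subtrees instead of touching every point - which is exactly why the "search algorithm" talk makes people suspect something octree-like.

```python
# Minimal octree: points bucketed into recursive octants so a spatial
# query can skip entire subtrees. A toy illustration, not their engine.
import random

class Octree:
    def __init__(self, center, half, depth=0, max_depth=6):
        self.center, self.half = center, half
        self.depth, self.max_depth = depth, max_depth
        self.points, self.children = [], None

    def _octant(self, p):
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def insert(self, p):
        if self.children is not None:
            self.children[self._octant(p)].insert(p)
            return
        self.points.append(p)
        if len(self.points) > 8 and self.depth < self.max_depth:
            h = self.half / 2
            cx, cy, cz = self.center
            self.children = [Octree((cx + (h if i & 1 else -h),
                                     cy + (h if i & 2 else -h),
                                     cz + (h if i & 4 else -h)),
                                    h, self.depth + 1, self.max_depth)
                             for i in range(8)]
            for q in self.points:
                self.children[self._octant(q)].insert(q)
            self.points = []

    def query_box(self, lo, hi, out=None):
        """Collect every point inside the axis-aligned box [lo, hi]."""
        out = [] if out is None else out
        # Skip this whole subtree if the box misses it -- the key speedup.
        if any(hi[i] < self.center[i] - self.half or
               lo[i] > self.center[i] + self.half for i in range(3)):
            return out
        out.extend(p for p in self.points
                   if all(lo[i] <= p[i] <= hi[i] for i in range(3)))
        if self.children:
            for child in self.children:
                child.query_box(lo, hi, out)
        return out

random.seed(1)
all_points = [tuple(random.uniform(-100, 100) for _ in range(3))
              for _ in range(1000)]
tree = Octree((0.0, 0.0, 0.0), 100.0)
for p in all_points:
    tree.insert(p)
hits = tree.query_box((-10, -10, -10), (10, 10, 10))
print(len(hits), "of", len(all_points), "points fall in the query box")
```

The octant-index bit trick and the overlap test are standard; a view-frustum query instead of a box is the obvious next step, but the "skip whole subtrees" idea is the same.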
 
Fuck yes, I didn't get past thirty seconds because I think it's a fake accent.

It's real - he's an Aussie, and it's the same in real life.

Good to see he's still trying. I think it's time to go have a chat to him again.
 
Anyone else think that the video was spoiled by the guy's voice?

Nah, it was like watching Zero Punctuation, really.

I've worked with point clouds at my job before, and they were incredibly small compared to these environments - just the size of an oil compressor you'd find in a car - and yet our computers ground to a halt. 2GB of vRAM, over 4GB of system RAM... and the frame rates were like slideshows.

Could you describe what it is that you did, without breaking any NDA you might have signed upon employment? This is the first I've heard of such technology.

You could dynamically scale the level of detail by decreasing the point cloud count on specific types of objects. Within the engine, certain objects would inherit from base classes - environment, character, background - with sliders in-game. Models would be defined with point clouds at specific levels of priority/importance (base points that make up the shape of a rock vs. extra points that give the rock more detail, if you will).

Bam. Done. But it might look like shit, as someone pointed out with the low-count voxel tech.

I don't really care; it's just time we moved forward.
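That slider idea could look something like this (a hypothetical sketch; the priority levels and the linear cutoff are my own invention):

```python
# Hypothetical sketch of a priority-based detail slider: every point
# carries a priority level (0 = base shape, higher = finer detail), and
# a detail setting in [0, 1] drops the high-priority "extra" points first.

def filter_points(points, detail):
    """points: list of (position, priority); returns points kept at this
    detail level. detail=1.0 keeps everything, detail=0.0 keeps the base."""
    if not points:
        return []
    max_prio = max(prio for _, prio in points)
    cutoff = detail * max_prio
    return [(pos, prio) for pos, prio in points if prio <= cutoff]

# A toy "rock": two base points plus two progressively finer detail points.
rock = [((0, 0, 0), 0), ((1, 0, 0), 0),
        ((0.5, 0.1, 0), 3), ((0.5, 0.2, 0), 7)]
print(len(filter_points(rock, 1.0)))  # 4 -- full detail
print(len(filter_points(rock, 0.5)))  # 3 -- base shape plus mid detail
print(len(filter_points(rock, 0.0)))  # 2 -- base shape only
```

Per-class sliders (environment vs. character vs. background) would just mean a different `detail` value per base class.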
 
High object redundancy, no animation, no visual effects, and an otherwise blah look. Their excuse that they're not artists just isn't enough to explain the shortcomings. The lack of any major company jumping in with them shows that they haven't convinced anyone in the industry that they have anything good. And the lack of any downloadable interactive demo suggests that even the little that's in their video is a con.
 
LOL, love how you guys hit my two immediate thoughts while watching:

Voxels?

That voice... I swear I expected him to go, "In 2010 our technology appeared in most of the world's media, but then we smoked a lot of hash and the waves came up..." Am I alone in hearing the combination of Hippie Surfer and Aussie?

Still, my curiosity is piqued; we'll see what, if anything, they can deliver.

I want to know what kind of systems that demo was run on, and I'd like to see some proof that this was running in real time. Is it interactive? Does it play nice with AA? Can it be sped up running on the parallel shader cores in my GPU? Are there any CPU cycles left over for physics? Imagine setting up specific physical properties for each kind of "atom" and how they respond to gravity and force - real-time winds, water splashing from throwing rocks into it... Like I said, my curiosity is piqued; now let's see what they can deliver.
 
You'll notice nothing moved either. All that detail is useless if it can't be animated.

Any reason they wouldn't be able to attach this to a skeleton or any dynamic object?

There's also a trick in 3D modeling where, rather than applying physics to individual pieces, you group those pieces together and treat them as one entity. That way you get a high level of detail and it takes less time to do the render calculations.
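The grouping trick amounts to simulating one rigid body and applying its single transform to every point at draw time - something like this toy sketch (names and numbers made up for illustration):

```python
# Sketch of the grouping trick: physics moves ONE entity, and its single
# transform is applied to the whole group of rendered points afterwards.

def translate_group(points, offset):
    """Move a whole group of points by one shared offset
    (one physics body, many rendered points)."""
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for x, y, z in points]

debris = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]  # many points, one entity
moved = translate_group(debris, (5, 0, 0))
print(moved)  # [(5, 0, 0), (6, 0, 0), (5, 1, 0)]
```

A full rigid-body version would apply a rotation as well, but the cost argument is the same: one transform, however many points.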
 
Put this in a new flight sim and I'm sold. Of course, I'd like it if it could take input, such as Google Maps or whatever, and create scenery based on that. :) 5 years out for that, I'd say.

Looks good, but by the time it hits the market, it will be old tech. 2-4 years from now, it'll look old and crappy. Mark my words. I'm bookmarking this thread. :) Personally, I'd LOVE for this to work out and kick some ass. Get together with some top-notch gaming company, pool the ideas, and come up with a -product- that will sell and actually be available to end users.
 
Could you imagine the raw processing power it would take to render a collision between two objects at the atom level? It would be enormous.

Though I know they would just create chunk boundaries of some sort so as not to waste all those resources... though it would be awesome to get to the point where they render the ripples of energy through objects and whatnot...
 
I don't know why people are complaining about animation. This company is working on a new way to push graphics - ya know, 3D models. Animation/physics are a separate issue and I imagine would be just as easy (or as difficult, take your pick) to implement in an 'unlimited detail' game as it would be in a traditional, polygon-based game.
 
I dunno why you guys hate his voice so much; he sounds fine to me. This kind of tech would be great, and it's light-years better than anything in current games, but until I see this stuff implemented I'll consider it vaporware.
 
Their atoms sound suspiciously like voxels to me. You know, like the ones they used for old games like Command & Conquer: Tiberian Sun. If it is indeed a new take on that technology, I can see their claims being possible; if they figured out a way to render voxels using OpenCL or DirectCompute, you could indeed see the type of results they're claiming.

I have no idea how something like that would perform, and it wouldn't be good for "unlimited" anything, but you could get a lot more small detail than you can with polygons for the same processing "cost". Collisions could be calculated with simplified geometry (similar to the way hitboxes work for 2D games).
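The hitbox idea in point cloud terms: physics only ever tests cheap proxy shapes fitted around each cloud, no matter how many points get rendered. A toy sketch (my own illustration; the centroid-based bounding sphere is crude, not minimal):

```python
# Collision via simplified proxy geometry: render millions of points, but
# let physics test only a bounding sphere fitted around each object.
import math

def bounding_sphere(points):
    """Crude bounding sphere: centroid plus max distance to any point
    (cheap to compute, not the minimal enclosing sphere)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    r = max(math.dist((cx, cy, cz), p) for p in points)
    return (cx, cy, cz), r

def spheres_collide(c1, r1, c2, r2):
    # Two spheres overlap iff their centers are closer than the radii sum.
    return math.dist(c1, c2) <= r1 + r2

rock = [(0, 0, 0), (2, 0, 0), (1, 2, 0)]   # stand-ins for dense clouds
stump = [(5, 0, 0), (6, 0, 0)]
(c1, r1), (c2, r2) = bounding_sphere(rock), bounding_sphere(stump)
print(spheres_collide(c1, r1, c2, r2))  # False -- the proxies don't touch
```

Real engines layer finer proxies (boxes, capsules, convex hulls) under the sphere test, but the point-count never enters the physics cost.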

It's an interesting idea, but since they're not giving us any details, and this is all coming from a developer that no one has ever heard of before, skepticism is the order of the day.
 
Watching that video made me say HOLY FUCK.

And I don't curse or ever take the Lord's name in vain.
 
They are using fractal compression to generate the scenery. The giveaway is the pyramids of elephants. Those are 3D versions of a Sierpinski triangle. The cool thing about fractals is you can generate a natural looking object at any scale from cosmic to microscopic with a relatively simple equation. The fun part is generating articulating objects like people and animals (which is probably why there aren't any in the video). They obviously need some assistance in that department as they were asking for partners to help them bring this tech to a commercial product.
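For what it's worth, Sierpinski-style structure really can be generated at any scale from a tiny rule. The classic "chaos game" does it: start anywhere, repeatedly jump halfway toward a random vertex of a tetrahedron, and the visited points converge onto the fractal (a standard demonstration, not anything from their video):

```python
# The "chaos game": repeatedly jump halfway toward a random vertex of a
# tetrahedron; the visited points converge onto a 3D Sierpinski fractal,
# giving endless apparent detail from a trivially small rule.
import random

def sierpinski_points(n, seed=0):
    random.seed(seed)
    vertices = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (0.5, 0.5, 1)]
    x, y, z = 0.25, 0.25, 0.25  # any starting point inside works
    points = []
    for _ in range(n):
        vx, vy, vz = random.choice(vertices)
        x, y, z = (x + vx) / 2, (y + vy) / 2, (z + vz) / 2
        points.append((x, y, z))
    return points

cloud = sierpinski_points(10000)
print(len(cloud), "points, all inside the unit bounding box")
```

That asymmetry is the whole appeal of fractal compression: a few bytes of rule stand in for arbitrarily many points - and also why organic, articulated characters are so much harder.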
 
Guys and Gals,

I've worked with scanning and point cloud technology for over 5 years and implemented it in almost every industry. These "atoms" are point clouds; he is just attempting to simplify or redefine the term. Throughout almost every industry, including entertainment, these are known as "point clouds". Yes, it is the wave of the future, and yes, it's very likely that we will be seeing more of this technology in its raw form (as opposed to post-processed point clouds turned into polygons).

http://knol.google.com/k/3d-laser-scanner#
http://knol.google.com/k/point-cloud#
 
Ya, not buying the bullshit.

Sorry, but if they really could deliver "unlimited detail" as they claim, well, they'd do it - there are plenty of interested parties. Intel sure as hell would want in on it if it were CPU-based, since they long for the day when you buy a faster CPU instead of a CPU and a GPU.

When you see some unknown group talk about something amazing and new, but you can't interact with it except through their totally hands-off, completely pre-scripted demo, it should set off your "con man" alert. Throughout history, con men have done that exact thing: claimed something amazing, shown a demonstration, but only on their terms, only in a special setting, and yet never had a final product.

So my thing to these guys would be: put up or shut up. Either release a demo on the web, or license your technology to a reputable company in the computer industry, or go away. I don't buy the "it's so amazing, but it isn't ready, so you have to watch a YouTube video!" thing.

Hell, as others alluded to with the Bitboys thing: even with good intentions, just because you think you can make something happen doesn't mean it can be reality. Bitboys did seem sincere in their desire to make a 3D card, and they had VHDL experience, but they couldn't actually deliver on what they claimed. Same with Elbrus and the "E2K" processor. Elbrus is a real Russian computer company; they had supercomputer and VHDL experience and thought they could put a supercomputer on a chip. However, it turns out there are different problems at the chip level than at the node level; it hardly worked at all, and when the renamed Elbrus 2000 finally launched, it was a piece of garbage.
 
Not the first time something like this has been shown, only to disappear after several jaw-dropping videos. [H] should dig up all the old tech vids of people claiming similarly amazing detail that went the way of the dodo.
 
The simple question is: if this new system is so groundbreaking, so revolutionary, why wouldn't they have artists willing to jump on board and push the technology? He replied multiple times, 'we are not artists...' WTF is with that?
 
Just had a chat to Bruce going to do another interview with him. Any real questions about the technology any of you want to ask?
 
Just had a chat to Bruce going to do another interview with him. Any real questions about the technology any of you want to ask?

Ask him if they plan on releasing a tech demo or something that we can actually run.
 
Meh... I already heard John Carmack say the words "Mega Geometry" in an interview. Could be that this tech is already second best.

Ask him why he is confused about the difference between an atom and a point. Or a grain of sand and a rock.

Ask him why his tech smells like bullshit and all he releases are videos.

Ask him why he sounds like he is trying to sell me something instead of telling me how it works.

Should I go on? :rolleyes:
 
The man is so fucking irritating I want to punch him in the throat, grab hold of whatever vocal cords are left, and string him up with them...
 
Voxels? Right?

As to the guy's voice: hey, guess what: not everyone with a computer is from Uhmurica, or whatever nationalistic shithole you hail from.

I'm not from Uhmurica, and he sounded like a whiny, irritating little shitface. :p
 
Just had a chat to Bruce going to do another interview with him. Any real questions about the technology any of you want to ask?

I've got a few questions:

1) What is the memory footprint of the island they talked about in the video? What about objects like the elephant? The tree? Are objects primarily storage-device-intensive, or local-RAM-intensive?

2) Can shaders or other effects be applied to objects? Can the objects/environments react to real-time lighting and shadows? All the shadows in the video looked 'pre-baked', and all the objects seemed to lack reflections/shine/effects. It's very important to know whether this is a limitation of the tech, imo, because that is the main reason their videos look quite 'dead' to me (and actually significantly worse than the games they compare their tech to, as dynamic shadows and lighting make worlds come alive).

3) What kind of hardware is their island tech demo running on? What kind of bottlenecks could you potentially run into when you, for example, up the resolution? (Using 1920x1080 as a target, what would that require? Just a ballpark figure would be nice.) Can 'levels' be streamed from, let's say, a modern optical drive (or even older ones, like the Xbox 360 DVD drive or the PS3 Blu-ray drive), or does it require higher bandwidth?

4) Animations are claimed to be doable, but what are the potential bottlenecks when running many simultaneous animations and physics-dependent actions? Is it processing-intensive, RAM-intensive, storage-intensive?

5) Can objects and terrain be deformed/transformed?

I also suspect that to make the kind of content this technology craves, the art-asset costs of games (an area that is strained even on our 'old' tech) will skyrocket. But that's another discussion entirely.
 