Euclideon & Unlimited Detail - Bruce Dell Interview @ [H]

Long time reader, first time poster.

I'm not going to claim to be a computer genius who excels at graphics or anything of the sort, but I was curious about something. A lot of the posters in this thread are talking about the memory requirements for storing the information about the scene, and this is where I get a little confused...

Wouldn't it be possible for the developer to attach a single placement handle to one of the atoms in each object, and then store that handle's position in the terrain file as a pointer to the object's source data for use at render time? I don't see why you couldn't list the x,y,z location, the roll/pitch/yaw orientation of the object, and some sort of object identifier (whether something as simple as an integer or something more complicated and custom, like a Huffman code) without it taking up excessive amounts of space. When the engine looks at a location, it would just follow the pointer to the object data, look it up, and complete the rendering task on that object's atoms. That way you would only store a single instance of the point cloud representing an object, but multiple copies of it could still appear on screen. Having said that, even that small amount of data could pile up quickly if you consider that the dirt is composed of individually rendered pieces of dirt (why a developer would need individually rendered dirt particles is beyond me).
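
To put rough numbers on it (a made-up layout on my part, nothing Euclideon has confirmed), the per-instance record could be as small as something like this:

#include <cstdint>

// Hypothetical per-instance placement record: world position, orientation,
// and an index into a shared table of point-cloud objects.
struct Placement {
    float x, y, z;           // 12 bytes of position
    float roll, pitch, yaw;  // 12 bytes of orientation
    uint32_t objectId;       // 4 bytes naming which shared point cloud to draw
};                           // 28 bytes per on-screen copy, regardless of object detail

Even a hundred thousand placements at 28 bytes apiece comes to under 3 MB, which is why I don't see the instancing data itself being the memory problem.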

If people were talking about the space required to represent the objects themselves, I suggest they watch the video again and take a look at the weird warrior statue in the background that was switching between atom view and rendered view. It looked to me as though it wasn't necessary to have an atom for every square mm of surface area, and yet the engine appears able to fill in the gaps without an issue (look at the wings of that creature to see what I'm getting at), so the numbers someone calculated earlier seem a little suspect to me.

My last gripe is that anyone here arguing that game developers don't reuse rendered objects is just lying to themselves. Show me a game where every enemy you come up against has some unique feature, and then we'll have something to talk about.

Anyhow, I just wanted to thank Kyle and John for getting this interview and posting it for us. I really didn't expect to see any more from Euclideon for quite a long time, and you guys delivered. Articles like this are why I've been reading your site for the last 12 years.
 
12 years and no account...
your account could be 12 years old =/
account age demands respect
 
My last gripe is that anyone here arguing that game developers don't reuse rendered objects is just lying to themselves. Show me a game where every enemy you come up against has some unique feature, and then we'll have something to talk about.

Well, while not enemies, the new id game claims to have unique textures throughout (no tiling)... of course, it's hardly as "detailed" as most modern games on a square-inch by square-inch basis, but it does introduce a somewhat new technique to mainstream gaming. Whether or not it sticks remains to be seen.

To me, if they can manage to do the megatexture thing but with voxels... that would be desirable. Maybe that's what this Unlimited Detail tech provides... it's definitely not clear.
 
You know, if these guys had shown up looking and sounding like professionals at something like SIGGRAPH, white paper in hand, ready to discuss their patented technological breakthrough on a technical level, people might just accord them some serious interest.

I mean, if this is legitimate, then surely their work must be patented and protected by now, right? They must have legal representation to protect this most valuable asset that will revolutionize the entire graphics industry, yes? And that being the case, they shouldn't be afraid to discuss in technical terms exactly what they're doing, or to present a white paper on the subject, should they? Do we see any of that? Anything even remotely close to that? No.

Putting out a video with such wild assertions, and then responding to criticisms of it without actually addressing any of the points with anything other than vague marketing speak and misdirection, tends to lead people to believe that this is little more than another vaporware scam like so many we've seen in the past. Phantom console, anyone?

I'm willing to bet dollars to donuts this never sees the light of day. This is, in fact, just so much snake oil.
 
The way these voxel renderers work depends on representing the voxel data in a hierarchy: a tree data structure. The farther you traverse down the tree, the more detail you get. When drawing a pixel you only have to traverse the tree deep enough that the detail level matches the pixel size (or to find out that there is no voxel to draw for that pixel). Tree data structures are not very memory efficient because they typically use a lot of pointers.
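
For anyone unfamiliar with the idea, here's a minimal sketch of that kind of structure (assuming a standard sparse voxel octree; the names are mine, not Euclideon's):

#include <cstdint>

// Each node covers a cube of space; each child covers one octant of it.
// Note the pointer overhead: 64 bytes of child pointers on a 64-bit
// platform versus a handful of bytes of actual voxel payload.
struct OctreeNode {
    OctreeNode* child[8]; // null where an octant contains no voxels
    uint32_t rgba;        // per-node color payload
};

// Descend only until the cube's projected size matches one pixel.
// 'pickOctant' stands in for the ray math that selects a child per level.
template <typename PickOctant>
const OctreeNode* sample(const OctreeNode* node, float nodeSize,
                         float pixelSize, PickOctant pickOctant) {
    while (node && nodeSize > pixelSize) {
        OctreeNode* next = node->child[pickOctant(node, nodeSize)];
        if (!next) break; // empty space below this level: stop
        node = next;
        nodeSize *= 0.5f; // each level halves the cube's extent
    }
    return node; // detail level now matches the pixel
}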

If you watch the Euclideon video they claim that there are 64 'atoms' per square millimeter for a 1 km square island. They also say it has the detail of 21 trillion+ polygons. Those are the numbers you have to work with, and it just isn't going to work with unique detail regardless of the compression techniques.

Regarding the repeated objects, if you were to take the voxel tree and have many nodes pointing to the same sub-trees, you would expect to see recursively repeating cubes of data at even intervals. Also, all the repeated data would have to be oriented the same way. This is exactly what appears in the demo. Coincidence?
 
I'm willing to bet dollars to donuts this never sees the light of day. This is, in fact, just so much snake oil.
I guess the biggest problem with that is that there don't seem to be many people here who would bet against you... most of us aren't convinced one way or the other... or at least I'm not... hmm, that might be an interesting poll
 
I will say that his explanation of there being a search algorithm does help give an idea of how "unlimited" detail works. It doesn't matter (aside from loading things into memory, etc.) how much is in the view; you're only rendering the number of pixels needed for the entire picture (1024x768, etc.).
 
Whether this story is BS or not, I enjoyed watching it, and it was interesting. HardOCP is here to make money, and getting an interview with a company like this gets a ton of hits. It is just good business. Would you guys rather just see another page of the same shit every week? Honestly, how many power supply, mobo, and video card reviews do you want to read?

On that note, and slightly off topic, the number of "noobies" posting lately caught my eye. It certainly seems to have brought in the traffic.
 
I will say that his explanation of there being a search algorithm does help give an idea of how "unlimited" detail works. It doesn't matter (aside from loading things into memory, etc.) how much is in the view; you're only rendering the number of pixels needed for the entire picture (1024x768, etc.).

This is true, but he's playing a bit of a semantic game. What he is doing is a well-known technique called ray casting (or ray tracing, if secondary rays are cast for accurate lighting). It is 'searching' for the first voxel intersected by rays cast into the scene from each screen pixel. I won't speculate as to why he wants you to compare his work to Google instead of a well-known graphics rendering technique.
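
To make the resolution point concrete, here's the shape of a ray caster's outer loop (just a sketch; the castRay callback stands in for whatever acceleration structure does the actual 'searching'):

#include <cstddef>
#include <cstdint>
#include <vector>

// One ray per screen pixel: the per-frame cost scales with resolution
// (e.g. 1024x768 = 786,432 rays), not with how much detail the scene holds.
template <typename CastRay>
std::vector<uint32_t> renderFrame(int width, int height, CastRay castRay) {
    std::vector<uint32_t> frame(static_cast<std::size_t>(width) * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Direction of the camera ray through pixel (x, y).
            float dx = 2.0f * x / width - 1.0f;
            float dy = 1.0f - 2.0f * y / height;
            // castRay returns the color of the first 'atom' hit (the search).
            frame[static_cast<std::size_t>(y) * width + x] = castRay(dx, dy);
        }
    }
    return frame;
}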
 
I will probably be alive then, so no biggie. What I am most excited about is cutting people up and seeing the layers upon layers of innards... think Sim Butcher... I have been thinking about this type of game since I was 12, I think... This was WAY before I even heard the term voxel; I somehow knew it was the future of graphics, just from imagining bitmaps but in 3D.
Wow, that's quite fucked up... lol

But aside from that, you had a good point. Go Euclideon!
 
I'm also quite impressed at the number of negative new noobie posters trashing this.

A lot of programmers who fear their current old-fashioned but cosy way of working might be turned upside down? Who might have to learn some new skills or fade away? Trash-talking it now in the hope it just goes away?

I guess the cart and coach builders felt the same way when the automobile was first touted.

Well, as a consumer I hope it does work because, as others have mentioned, it's gotten awfully stale around here.
 
I don't think it's that horribly out of whack. Everyone keeps talking about the storage requirements for keeping track of all those polygons, but he's mentioned several times that the system accepts 3D polygon models and converts them. I would assume all of the unique assets would be stored as 3D models, as is done normally, and then procedurally converted into voxels/atoms as needed. Animation still seems to be a tricky issue, but they can most likely work something out.
 
As for all this "oh, it's only one tree, one rock," etc.: that's all it has to be at the moment. They could scan 100 different trees and it wouldn't make any difference to the basic maths or techniques employed.

That sort of stuff is a distraction in a project. Get the fundamentals working, then add the fun bits.

You don't design a car around the stereo.
 
I doubt GPUs make a huge amount of difference with this technology. The concept here is actually fairly simple.

Search is a key term used throughout the video. This engine is basically generating the view by searching for all the "atoms" required to build it. Once the view is built, it is cached frame by frame and rendered to the screen. This creates some difficulty with rendering animation, but really they just have to have some fluid skeletal structure to "glue" the atoms to.

The atoms are just pre-rendered images... this is kind of like Myst on steroids x 100k...
 
This whole debate is quite easily settled. The problem is people don't LISTEN.

The key to his technology is two particular elements:

1. Point cloud based
2. SEARCH algorithm.

I would add memory compression as number 3, as this seems to be a rather major part of the technology--at least as I gathered from the video. I believe the words "memory compaction" were used. This makes me think that the software runs out of (highly?) compressed RAM, which lets it get by with gigabytes of memory, because running uncompressed out of RAM could demand up to terabytes of it--wholly impractical today.

The segment with one of the programmers, IIRC, was also interesting in that he said (paraphrased): "My job is to take a hammer and bang the point data down into very small amounts"--forgive me for not recalling the actual quote. This sounds as if he's talking about compressing the point data for storage on disk media.

Anyway, it sounds as if compression/compaction algorithms are key to applying this technology in practical terms, and that memory (RAM) compression is used along with disk compression. I'm not sure whether the same algorithm could handle both, but it may be possible.

This is why he can have UNLIMITED detail: he's able to shuffle through the point cloud data quickly enough, and with enough precision, that he grabs only the small amount of point cloud data needed to render each pixel of your output screen. This is why he constantly refers to the resolution of the screen throughout the process.

By being able to sift through that point cloud data, you can have an UNLIMITED amount of detail (that doesn't, however, mean you will have the corresponding UNLIMITED amount of DISK SPACE necessary to hold all that point cloud data). Hence they ALSO discuss the compression algorithms necessary to store that information in a structured manner, so that the search algorithm can quickly scan and decompress only the parts it needs.

Yes, compression would certainly prove necessary, it seems to me. This is where the quality and efficacy of the compression algorithms would be of critical importance--if the algorithms run too slowly, then you might well have trouble with animated data, especially highly detailed animated data.

The problem here is that people just don't THINK. They see something like this and say "IMPOSSIBLE" because they never bothered to approach the problem from a different perspective. They've been brought up in a particular way of thinking, and so become a slave to it.

Yes, people become so conditioned to the status quo, invest so much money and time into the status quo, that developing new technologies producing fundamentally superior results just doesn't occur to them anymore.

I believe this guy's (Bruce) story. It's likely. He wanted to get into graphics programming but knew absolutely nothing about the status quo at the time and so he had to come up with his own, original formula for getting to where he wanted to be. This is truly how unique inventions come to be--1% inspiration and 99% perspiration...;) The other guys were too stilted in their "improving the status quo" mindsets to see the things that obviously occurred to Bruce at the time, because Bruce wasn't hamstrung by their assumptions.

While NOTCH and CARMACK are awesome people, the fact of the matter is CARMACK invented LITTLE with regard to 3D graphics techniques, his skill was in applying them. The math and science was already in place WELL before he decided to write his games. Carmack's genius was in how to apply that math in a manner befitting a computer and it's methods. He didn't invent binary space partitioning, nor did he invent ray casting, although he definitely played a major role in POPULARIZING them.

NOTCH made an awesome, flexible, game. However, anyone who's actually written mods for it will tell you the code SUCKS in many places just like any program. Unfortunately for Notch, he also isn't a 3D graphics expert either, so his opinion on new techniques is no where near the first I'd personally seek. In fact, he'd be effectively the LAST. Call me when Notch finally decides to implement a space-partitioned culling technique to the server chunks so the server doesn't have to cram the entire chunk down the client's throat each time a new chunk is spawned.

Carmack is probably thinking about the massive disk-space and ram that a point data approach would have to consume, and compression of anything but textures in his chosen rendering process doesn't seem to have occurred to him--possibly because he has already decided that no such algorithm would work fast enough, and cleanly enough, to succeed. Next Gen hardware in a few years, however, will possibly be able to support point data rendering comfortably sans compression because of greatly expanded ram and storage capacities. Possibly, this is what Carmack meant.

I don't understand Notch's position at all. He must know that it cannot be more obvious that as a graphics programmer he lacks a great deal--but as a game design programmer he is excellent! People buy (or just download) Minecraft in great numbers *despite* its abysmally poor graphics (reminiscent of a child's Lego blocks.) They like the game so much that the graphics are a wholly secondary consideration for them--and that is an amazing achievement. It's obvious to me that Notch isn't qualified to make the statements he's made about Bruce's company--and I'm really scratching my head wondering why he made those statements in the first place. I cannot fathom his motivation.

Carmack, otoh, has done graphics himself for a long time and is certainly qualified to speak on the subject. As usual, Carmack always injects an element of class whenever he speaks publicly about other people working in the industry, and I really appreciate that when I see it.

Just want to say that I initially watched these two videos at *shudder* 360p because I was lazy. I went back and viewed them again at 1080p, full screen--and the difference in quality was astounding! I was actually very impressed by the demonstration in the second, short video. The detail is very, very good--it blows traditional software rendering away, hands down.

I have to agree with John Gatt, who did a superlative job on the interview, btw! John said that the video was not doing justice to what he was seeing on the screen in front of him--so I know the real McCoy must have looked even better than what I saw at 1080p...;) I also believe John mentioned that the demo looked better to him than the Heaven benchmark! In software, that's an achievement indeed. Thanks for the info, John.

I wish all interviewers could be as direct and ask as many pertinent questions as John did, and I wish more of them could be as selfless and polite as John was while getting his questions answered! Kudos, John. I enjoyed that interview more than any I have watched in recent memory. Keep up the good work!
 
Animation seems to be the core challenge that needs addressing before any of this will fly.

Creating code for translating "atoms"--and, more importantly, meaningful arrays of "atoms"--with deformers and perhaps soft-body physics is going to be quite the challenge. But these are smart guys. I hope they quickly start working on how to integrate poly rigs for character animation, or come up with brand-new ways of creating rigs. I wonder what is going to drive those atoms; surely they should be able to coexist with polys in their engine (I hope), but how do they interface?

Their static environments look awesome. Maybe they need to focus on providing static content; I have a feeling they won't.

If atoms can be controlled by fields and forces the way particles can, this technology could make for some very original gameplay.
Sounds very PhysX oriented.
 
I'm also quite impressed at the number of negative new noobie posters trashing this.

A lot of programmers who fear their current old-fashioned but cosy way of working might be turned upside down? Who might have to learn some new skills or fade away? Trash-talking it now in the hope it just goes away?

I guess the cart and coach builders felt the same way when the automobile was first touted.

Well, as a consumer I hope it does work because, as others have mentioned, it's gotten awfully stale around here.

Just because we're new posters doesn't mean we're new to OCP or gaming. I've been gaming since the 70s. I've been involved in BBSes, MUDs, and MMOs, done a stint in professional game development, and spent a lot of years in IT over the last three decades. I'm all for new, ground-breaking tech, as is everyone else. However, when I see something like what Euclideon has posted, which is basically all assertion and zero fact, I feel the need to speak up. If they actually have something and it's not BS, super. But until I actually see some cold, hard science on how this works, it's all just so much talk. At best it'll never happen; at worst they could con a lot of people out of their money, which is not copacetic. So OCP hasn't done anything here except provide a fluff piece that answers nothing. I've relied on OCP for most of my hardware over the years and I respect the hell out of this site and its readers, so let's just say I expect better than this from them. This interview was a total waste of time.
 
Just because we're new posters doesn't mean we're new to OCP or gaming. I've been gaming since the 70s. I've been involved in BBSes, MUDs, and MMOs, done a stint in professional game development, and spent a lot of years in IT over the last three decades. I'm all for new, ground-breaking tech, as is everyone else. However, when I see something like what Euclideon has posted, which is basically all assertion and zero fact, I feel the need to speak up. If they actually have something and it's not BS, super. But until I actually see some cold, hard science on how this works, it's all just so much talk. At best it'll never happen; at worst they could con a lot of people out of their money, which is not copacetic. So OCP hasn't done anything here except provide a fluff piece that answers nothing. I've relied on OCP for most of my hardware over the years and I respect the hell out of this site and its readers, so let's just say I expect better than this from them. This interview was a total waste of time.

Con who out of money? Did you even watch the fucking video? They're not taking money from anyone, and from the way Dell made it sound they won't until it's done, so who exactly would they be conning out of money?
 
I'll say this as cleanly as possible, since I'm in a basic forum and not in a Genmay-type setting.

The concept is there, and the premise sounds VERY promising. Ultimately, the thing causing the most controversy is that the company makes bold claims that challenge everything we were taught to know, then disappears and leaves us all wanting.

It's like finally getting the girl of your dreams to agree to a night of, well, y'know, and then only letting you put the tip in, then stopping you and saying, "We can do the rest in a year's time." LOL :D

I think Euclideon is giving gamers blue brains, if you get what I mean. That is sure to produce some angry tech elites and gamers :)

Controversy is priceless. Good or bad, comments keep their name in the news.
Controversial comments are free, and a more effective marketing strategy.

On the other hand, people are doubting this tech because they are applying past and present principles to future technology.
I wouldn't expect it to make sense if I used today's approach to development to explain what Euclideon is doing.
 
I believe in it. It's very promising, and I hope it actually works the way they explain it. Can't wait!
 
The way these voxel renderers work depends on representing the voxel data in a hierarchy: a tree data structure. The farther you traverse down the tree, the more detail you get. When drawing a pixel you only have to traverse the tree deep enough that the detail level matches the pixel size (or to find out that there is no voxel to draw for that pixel). Tree data structures are not very memory efficient because they typically use a lot of pointers.

I am honestly clueless about voxels, as I don't deal with them in my work, but I had some comments. I would first ask you to define what you mean by trees being memory inefficient because they use pointers.
Personally I don't see why a tree using pointers would be memory inefficient; that's the whole point of a pointer. A leaf pointing to an instance of a class costs the memory needed to redirect data calls on that leaf to the target instance, plus the memory consumed by the instance itself. If you have 500 leaves pointing to the same instance, it should only cost the memory to point at that instance 500 times; it's not as though the data of the class instance is placed in memory 500 times. Now, if you were to tell me that searching a tree of significant size was expensive in CPU cycles, I would completely agree, and then suggest that maybe their 'search algorithm' is just a bitchin' heuristic that happens to work in their situation (will it work in all situations if it is a heuristic? probably not...).

If you watch the Euclideon video they claim that there are 64 'atoms' per square millimeter for a 1 km square island. They also say it has the detail of 21 trillion+ polygons. Those are the numbers you have to work with, and it just isn't going to work with unique detail regardless of the compression techniques.

I agree that he claimed you were seeing the equivalent of 21 trillion+ pixels, and I think that statement completely contradicts his later statement that each pixel corresponds to a single atom. I think what he meant was that the engine was capable of representing a 21 trillion+ pixel scene, and that when moving through the environment it would give the appearance of one. He didn't say that, but that is what I am inferring. Again, though, when you say compression: if there were a good enough heuristic in place, a lot of the leaves could be pruned and the search space wouldn't be nearly as large. Remember, tree-searching algorithms are some of the most efficient search algorithms in existence. Searching a binary tree is O(log n), right? I don't feel like breaking out my discrete-math book right now, but I don't recall a multi-way tree being absolutely horrible at search, unlike some other approaches...

Regarding the repeated objects, if you were to take the voxel tree and have many nodes pointing to the same sub-trees, you would expect to see recursively repeating cubes of data at even intervals. Also, all the repeated data would have to be oriented the same way. This is exactly what appears in the demo. Coincidence?

For this, I can honestly say I have no idea what you are talking about with the "recursively repeating cubes at even intervals." I also don't see why repeated data would have to be oriented the same way. As far as I can tell, if you have an instance of a class that contains the point cloud data for an object, and you leave the location and orientation of the object out of that class instance, then the engine could supply the position and orientation later without an issue. If lighting needed to be calculated, the same point cloud data combined with a transformation matrix could handle light coming in from a different direction (due to the object facing a direction other than the default) and get the correct shadowing, right? Again, my graphics knowledge is limited, but am I missing something in this thought process?

The biggest thing we need to remember here is that we don't know everything about what is going on with this approach to rendering. I'm just throwing out ideas that came to me while reading all of the negative comments in this thread... Either way, I'm not part of the development staff, and I don't think anyone else here is, so nobody can refute any claims in this thread beyond the few snippets that John has provided regarding the demo size.

12 years and no account...
your account could be 12 years old =/
account age demands respect

I'm not going to argue with you on that; I honestly just never felt I needed to respond to a comment thread before. Every time I had considered it, someone else made my argument for me, but this time I felt like no one had brought up my point. Having said that, the only people who know exactly what is going on in this engine aren't telling us yet.
 
A good magic trick is one where you don't know how it was done.

For my first example: imagine a lattice of potted plants arranged equidistantly, crystal-like. Assume all of these plants are identical and all are facing, or polarized, the same way. If I stand in the middle of this lattice and look out in one direction, I'll see something curious: none of the plants appear identical. Every plant is seen from a different angle, and any plant at the same angle is completely obscured by the closer plant in front of it.

It would make a great flight simulator as is. (It wouldn't be much fun if the buildings and mountains moved.)

What the end user is presented with is a finite amount of information. If one looks at a pixel, one only sees color information; all geometry is lost. If one zoomed out far above this island, it wouldn't matter that there was a seemingly unlimited amount of geometry behind each pixel; all that matters is its color. That's a very tiny amount of information. At that distance it doesn't matter how one looks at a fern (it's green), that small rock (it's black), or that large rock (it's grey). The tricky part is determining what weights to use for these colors.

Most impressive is the large rock. An oblique perspective would be telling.

Sorting algorithms are used ubiquitously in computing. A sorted list is easier to search, and searching or comparing is required for sorting. By grouping closely positioned objects together ahead of time, when one wants to know whether an object is at a given position one can first look for a group, which is a much smaller set than the total number of objects. (This is arguably the most interesting part, and to do it justice would require expanding this paragraph at another time.)
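
As a toy illustration of that grouping (a plain uniform grid; I'm not claiming this is Euclideon's structure):

#include <cstdint>
#include <unordered_map>
#include <vector>

// Objects are bucketed by coarse grid cell ahead of time, so a position
// query scans one small bucket instead of every object in the world.
struct GridIndex {
    float cellSize = 16.0f;
    std::unordered_map<uint64_t, std::vector<uint32_t>> buckets; // cell -> object ids

    uint64_t key(float x, float z) const {
        // Truncation; a floor would be more correct for negative coordinates.
        auto cx = static_cast<uint32_t>(static_cast<int32_t>(x / cellSize));
        auto cz = static_cast<uint32_t>(static_cast<int32_t>(z / cellSize));
        return (static_cast<uint64_t>(cx) << 32) | cz;
    }
    void insert(uint32_t id, float x, float z) { buckets[key(x, z)].push_back(id); }
    const std::vector<uint32_t>* near(float x, float z) const {
        auto it = buckets.find(key(x, z));
        return it == buckets.end() ? nullptr : &it->second;
    }
};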

Grouping benefits from a large cache, and sorting requires conditionals for comparison. Both are strengths of CPUs.
 
Forgive me if someone has already hammered on this point. Mr. Dell said laser scanning was used to produce the objects shown in the demo. The raw data for each point would be as follows:

- 3 floats for position (x,y,z) @ 4 bytes each
- 4 bytes for color (R,G,B,A)

That gives a total of 16 bytes per point in the point cloud. If an object contains 500 million data points (as I believe is the figure he gave for the rather lovely elephant in his demo), then the maximum amount of storage space necessary for the raw data would be:

16 bytes * 5.0E8 points / (1024^3 bytes per gigabyte) ≈ 7.5 gigabytes
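
In code form (the same 16-byte layout and the same arithmetic; the struct name is mine):

#include <cstdint>

// The raw layout described above: 12 bytes of position plus 4 bytes of
// color packs to exactly 16 bytes with no padding.
struct RawPoint {
    float   x, y, z;    // 3 floats * 4 bytes
    uint8_t r, g, b, a; // 4 bytes of color
};
static_assert(sizeof(RawPoint) == 16, "expected a 16-byte point");

// 500 million points * 16 bytes = 8e9 bytes, about 7.45 gigabytes.
constexpr double elephantGB =
    500'000'000.0 * sizeof(RawPoint) / (1024.0 * 1024.0 * 1024.0);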

That's for one object. Compression could knock the size down a bit, the amount depending upon the technique used. Regardless, it will be a non-trivial amount of data. A scene with any degree of complexity at that detail level would require several Blu-rays' worth of data.

I don't see this being feasible until we have significantly increased storage capacity. Perhaps we'll see it once holographic discs with terabyte capacities start arriving on the scene, and we're at least five years to a decade away from large-scale commercialization of that technology. I don't take issue with the fundamental technology being showcased, but I do take issue with its specific implementation, its current practicality, the lack of technical details, and the rather fantastical claims being made.

One day this will surpass polygonal rendering, but today is not that day.
 
You will find out a lot more about the compression in the next update. Compression is far greater now than I even knew.
 
Compression will have to be key, because that is the only way to feasibly store complex scenes on current hardware. It's not that I'm not excited about what I've seen so far, but I have to remain skeptical; it's part and parcel of my profession. Well, that and I don't like being disappointed.
 
Compression will have to be key, because that is the only way to feasibly store complex scenes on current hardware. It's not that I'm not excited about what I've seen so far, but I have to remain skeptical; it's part and parcel of my profession. Well, that and I don't like being disappointed.

Mr. Gatt already said the entire demo fit on a DVD with room to spare. So yes, I am guessing they have pretty good compression if you're coming up with 7.5 GB for just one of the objects.

Sorry, brother, I can say this: the demo will fit on a DVD and have lots of room to spare, but for them it still needs to be smaller, and it will be in time. You see, unlike you, I know what's coming next.
 
I am honestly clueless about voxels, as I don't deal with them in my work, but I had some comments. I would first ask you to define what you mean by trees being memory inefficient because they use pointers.
Personally I don't see why a tree using pointers would be memory inefficient; that's the whole point of a pointer. A leaf pointing to an instance of a class costs the memory needed to redirect data calls on that leaf to the target instance, plus the memory consumed by the instance itself. If you have 500 leaves pointing to the same instance, it should only cost the memory to point at that instance 500 times; it's not as though the data of the class instance is placed in memory 500 times. Now, if you were to tell me that searching a tree of significant size was expensive in CPU cycles, I would completely agree, and then suggest that maybe their 'search algorithm' is just a bitchin' heuristic that happens to work in their situation (will it work in all situations if it is a heuristic? probably not...).

The octrees that are typically used have 8 children per node, which means 8 pointers per node at 32 or 64 bits each, depending on the platform. So the pointers will likely take up more space than the voxel data at each node (probably just 32 bits for color and maybe a few more bytes for other info). On the plus side, any region without voxel data is not added to the tree (so empty space is effectively removed). If you were looking for maximum memory efficiency, though, there would be much better ways to go. The tree structure is used because it makes ray casting much faster.
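
For reference, published sparse-voxel-octree work reduces that pointer overhead with a layout along these lines (no claim that Euclideon does this; it's just the well-known alternative):

#include <cstdint>

// Children of a node are stored contiguously in one big array, so a node
// needs only a bitmask of which children exist plus one offset, instead
// of eight 8-byte pointers.
struct CompactNode {
    uint32_t firstChild; // index of this node's first child in the node array
    uint8_t  childMask;  // bit i set => octant i has a child
    uint8_t  r, g, b;    // payload
};                       // 8 bytes per node, versus 68+ with raw pointers
// Child i lives at nodes[firstChild + (count of mask bits set below bit i)].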

I agree that he claimed you were seeing the equivalent of 21 trillion+ pixels, and I think that statement completely contradicts his later statement that each pixel corresponds to a single atom. I think what he meant was that the engine was capable of representing a 21 trillion+ pixel scene, and that when moving through the environment it would give the appearance of one. He didn't say that, but that is what I am inferring. Again, though, when you say compression: if there were a good enough heuristic in place, a lot of the leaves could be pruned and the search space wouldn't be nearly as large. Remember, tree-searching algorithms are some of the most efficient search algorithms in existence. Searching a binary tree is O(log n), right? I don't feel like breaking out my discrete-math book right now, but I don't recall a multi-way tree being absolutely horrible at search, unlike some other approaches...

He says 21+ trillion polygons in the original video, not pixels. I have no idea how many atoms are created for each polygon, but it is probably several.

For this, I can honestly say I have no idea what you are talking about with the "recursively repeating cubes at even intervals." I also don't see why repeated data would have to be oriented the same way. As far as I can tell, if you have an instance of a class that contains the point cloud data for an object, and you leave the location and orientation of the object out of that class instance, then the engine could supply the position and orientation later without an issue. If lighting needed to be calculated, the same point cloud data combined with a transformation matrix could handle light coming in from a different direction (due to the object facing a direction other than the default) and get the correct shadowing, right? Again, my graphics knowledge is limited, but am I missing something in this thought process?

If you look at the zoom into the island in the video, it is obvious that the world is made out of a limited number of tiles. But it doesn't end there: each tile is made up of a limited number of sub-tiles, and some of those sub-tiles repeat even smaller tiles, and so on. This is because each level down in the voxel octree splits a region of space into 8 equal parts. Because pointers are used, it is easy to repeat regions at any detail level simply by having nodes in the tree point to the same children. This will repeat exactly the same models in exactly the same orientations. And since octree creation always splits regions into 8 equal sub-regions, things will also be equally spaced, as if they were constructed on a grid.

Also as I said, octrees are used to speed up ray casting. The big downside is that you can't move voxels or rotate anything without reconstructing the affected parts of the octree.
 
I think you're probably a few orders of magnitude off there. The video says 530,906 polygons for the elephant. It's not clear how many 'atoms' are generated for that many polygons.
 
Forgive me if someone has already hammered on this point. Mr. Dell said laser scanning was used to produce the objects shown in the demo. The raw data for each point would be as follows:

- 3 floats for position (x,y,z) @ 4 bytes each
- 4 bytes for color (R,G,B,A)
This is the second time I've read this here, and I'm confused.

Isn't one of the advantages of voxel data that it's usually unnecessary to give an xyz coordinate for each individual voxel point, since you can just have a long array of points whose positions in space are determined by how far down the array they sit and by whatever deformations are being applied to the object the voxel points are part of?

edit: ahh.. here we go:
A voxel (volumetric pixel or, more correctly, Volumetric Picture Element) is a volume element, representing a value on a regular grid in three dimensional space. This is analogous to a pixel, which represents 2D image data in a bitmap (which is sometimes referred to as a pixmap). As with pixels in a bitmap, voxels themselves do not typically have their position (their coordinates) explicitly encoded along with their values. Instead, the position of a voxel is inferred based upon its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image).
- Wikipedia (hopefully I can trust them on this)
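
In code, the implicit-position idea from that quote is just index arithmetic (a dense-grid sketch; a real engine would use something sparser):

#include <cstddef>

struct GridPos { int x, y, z; };

// In a dense width*height*depth voxel array, a voxel's coordinates are
// never stored; they fall out of its array index.
GridPos coordsFromIndex(std::size_t i, int width, int height) {
    return {
        static_cast<int>(i % width),
        static_cast<int>((i / width) % height),
        static_cast<int>(i / (static_cast<std::size_t>(width) * height)),
    };
}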
 
Mr. Gatt already said the entire demo fit on a DVD with room to spare. So yes, I am guessing they have pretty good compression if you're coming up with 7.5 GB for just one of the objects.

If we assume lossy compression with, for the sake of argument, a very generous 6% compression ratio, then that elephant still takes up about 450 megabytes. That is a non-trivial amount of data for a single art asset. The demo appears to be made up of approximately 40 or so objects, with only the two statues and the two trees being similar in complexity. I have no doubt that amount of data could fit on a DVD. Where we run into trouble is when you have a game with thousands of art assets, not including level geometry, animations, music, dialogue, and sound effects.

I'll be delighted to be proven wrong, but until then this amounts to little more than a nice technology demo. I wish them the best of luck in their attempts to produce a commercially viable product.

I think you're probably a few orders of magnitude off there. The video says 530,906 polygons for the elephant. It's not clear how many 'atoms' are generated for that many polygons.

At 32:30 he says the elephant contains 500 million polygons. I'm not entirely sure whether he was using "polygons" in the traditional sense or whether he meant atoms; his use of terminology seems non-standard. If there was another part of the video where he said otherwise, I missed it, and I apologize.

Isn't one of the advantages of voxel data that it's usually unnecessary to give an xyz coordinate for each individual voxel point, since you can just have a long array of points whose positions in space are determined by how far down the array they sit and by whatever deformations are being applied to the object the voxel points are part of?

I was referring to the raw point cloud data. I have absolutely no idea how the data is actually being stored, as I have no technical information to go from. The best I can do is some back-of-the-envelope math and an educated guess. If they're using SVOs, then the octree structure itself encodes the position data. I believe the size of the structure on disk is determined by the number of nodes and the maximum depth of the tree. Tom's Hardware has a nice article on SVOs here: http://www.tomshardware.com/reviews/voxel-ray-casting,2423.html.
 
The pin toy is the key to understanding how this can be compressed.

It's indeed not possible to create complex objects using a single pin toy; it only gives you one 'depth surface' for the top. Now imagine having a 'depth surface' for the bottom, left side, right side, front, and back as well. If you use these six surfaces to 'carve out' a solid block, you can actually represent the vast majority of real-world objects in minute detail.

This only requires storing six bitmaps with depth and color information. That's really cheap! Rendering is also cheap: you just perform ray casting against each of the six sides and keep the 'deepest' carving point by comparing against the depths of the other sides. It would be blazingly fast on a CPU with AVX2 support.

Objects that are too complex to be represented this way, like the tree, can be composed out of multiple 'carved' objects. Not a whole lot would be needed; all of the objects in the foreground of the video at 16:20 could each consist of a single carved block. I imagine a tree would only require two or three sets of six-sided carving information. If they coincide, that even saves calculations when referencing the other faces.
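
To make the carving test concrete, here's my reading of it in code (purely my interpretation of the pin-toy idea, not confirmed tech): a point of the block survives only if it lies deeper than the carve depth from all six sides.

#include <cstddef>
#include <vector>

struct DepthMap {
    int w, h;
    std::vector<float> depth; // how far each texel carves into the unit block
    float at(int u, int v) const { return depth[static_cast<std::size_t>(v) * w + u]; }
};

// Block spans [0,1]^3; 'maps' are ordered +X,-X,+Y,-Y,+Z,-Z.
bool isSolid(float x, float y, float z, const DepthMap maps[6]) {
    float dist[6] = { 1 - x, x, 1 - y, y, 1 - z, z }; // distance from each face
    float uv[6][2] = { {y, z}, {y, z}, {x, z}, {x, z}, {x, y}, {x, y} };
    for (int i = 0; i < 6; ++i) {
        const DepthMap& m = maps[i];
        int u = static_cast<int>(uv[i][0] * (m.w - 1));
        int v = static_cast<int>(uv[i][1] * (m.h - 1));
        if (dist[i] < m.at(u, v)) return false; // carved away from this side
    }
    return true; // survived all six carvings
}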
 
I think you're probably a few orders of magnitude off there. The video says 530,906 polygons for the elephant. It's not clear how many 'atoms' are generated for that many polygons.

I think he's been using polygons and atoms interchangeably. He's just trying to convey that his method has a massively greater number of polygons, such that the images don't actually look like polygons anymore; more like atoms.
 
The octrees that are typically used have 8 children per node, which means 8 pointers per node at 32 or 64 bits each, depending on the platform. So the pointers will likely take up more space than the voxel data at each node (probably just 32 bits for color and maybe a few more bytes for other info). On the plus side, any region without voxel data is not added to the tree (so empty space is effectively removed). If you were looking for maximum memory efficiency, though, there would be much better ways to go. The tree structure is used because it makes ray casting much faster.

You're assuming he is not developing anything new here with your thoughts on octrees. I also don't understand how the pointers would take up more space than 'voxels' he has claimed he isn't using in the first place. If you have an instance of an object that contains the point cloud, you would have a pointer to it, and from there you no longer need pointers. It is possible that his object classes store data in large, sorted, contiguous blocks in some way you haven't considered.

He says 21+ trillion polygons in the original video, not pixels. I have no idea how many atoms are created for each polygon, but it is probably several.

When I typed 21 trillion+ pixels, what I meant was 21 trillion+ polygons; the word I was hoping to stress was "representing." When rendering on a GPU you are rendering all the polygons in front of the camera unless some of them are removed by an optimization method. His approach just shows you a pixel that was somehow chosen from a huge amount of data, so when you zoom in and out of the scene you see differing pixels that simulate the look of a scene with 21 trillion+ polygons, while in reality his demo is only ever displaying 786,432 atoms (if every pixel on the screen is represented; 1024x768 = 786,432). So the scene he is rendering may be the equivalent of a scene with 21 trillion+ polygons, but with the approach he is implementing I don't see how you would ever see everything in that scene in a single view of it. Does that make sense?

If you look at the zoom into the island in the video, it is obvious that the world is made out of a limited number of tiles. But it doesn't end there: each tile is made up of a limited number of sub-tiles, and some of those sub-tiles repeat even smaller tiles, and so on. This is because each level down in the voxel octree splits a region of space into 8 equal parts. Because pointers are used, it is easy to repeat regions at any detail level simply by having nodes in the tree point to the same children. This will repeat exactly the same models in exactly the same orientations. And since octree creation always splits regions into 8 equal sub-regions, things will also be equally spaced, as if they were constructed on a grid.

Also as I said, octrees are used to speed up ray casting. The big downside is that you can't move voxels or rotate anything without reconstructing the affected parts of the octree.

I believe you are looking at this as though it were something you have seen before, when the creator of the approach has told you that you haven't... I don't know why you are still assuming that all instances of an object would have to face the same direction.

Direction in an unknown, brand-new rendering context could possibly be irrelevant. Let's say I created a leaf-node class that contained x,y,z, roll/pitch/yaw, and an object index (as I brought up before). The object class pointed to by the object index would contain all of the atom data aligned in some normalized direction (say, the front of the object facing positive/negative z and the bottom facing negative x). If you then point to the object and give an orientation that differs from the standard (0,0,0), you can use a transformation matrix to poll the data as though it were stored in that orientation.
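
A sketch of that transformation idea (my own toy version, yaw-only for brevity): rather than rotating millions of atoms into the world, you rotate each query point into the object's canonical frame.

#include <cmath>

struct Vec3 { float x, y, z; };

// Map a world-space query point into the canonical frame of an instance
// placed at 'pos' with the given yaw; full roll/pitch/yaw would compose
// three such rotations (or one 3x3 matrix).
Vec3 worldToObject(Vec3 p, Vec3 pos, float yaw) {
    float dx = p.x - pos.x, dz = p.z - pos.z;
    float c = std::cos(-yaw), s = std::sin(-yaw);
    return { dx * c - dz * s, p.y - pos.y, dx * s + dz * c };
}
// The single stored point cloud is then sampled at the returned coordinates.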

I will agree that the world is made up of a limited number of tiles, but it was pointed out that the entire map was probably just generated in the shape of the company's logo.

Again, you have said voxels when the creator has stated he isn't using them.

The point of everything I'm trying to say is that no one but that team of (was it 9?) guys knows what this new approach is doing. Everyone stop being armchair graphics experts and wait to see what happens when they give us another peek into their work.
 
I am a believer now. The only issue I see is memory. He had animation working years ago, so I think that is a non-issue; it's probably just too expensive for them to bother animating objects currently.

The fact that it runs on a single CPU core, not even multicore or GPU, shows that it needs little processing power, so he has very efficient algorithms. Thus very detailed worlds will be possible even on current hardware once they tap the extra power.

Memory requirements will also shrink with time, and thanks to Moore's law it won't take very long.

One way or another, voxel technology will eventually replace polygons; it is just a matter of time. Even if Euclideon gets beaten to the finish line by bigger companies, we will have a future of voxels. John Carmack predicted that future consoles should focus on voxel-type technology that allows high detail for objects, similar to how Rage uses high-detail texturing.

Even if they are repeating objects, I don't see why that is a problem. Most large open-world games repeat objects over and over to create the illusion of a large world. Games that come to mind are GTA and Just Cause.
 
I'm excited about this! Great interview, and I can't wait for them to show up again! Good work, [H]ardOCP staff!
 
If we assume lossy compression with, for the sake of argument, a very generous 6% compression ratio, then that elephant still takes up about 450 megabytes. That is a non-trivial amount of data for a single art asset. The demo appears to be made up of approximately 40 or so objects, with only the two statues and the two trees being similar in complexity. I have no doubt that amount of data could fit on a DVD. Where we run into trouble is when you have a game with thousands of art assets, not including level geometry, animations, music, dialogue, and sound effects.

Apart from the 24 major items, we had more rocks, weeds, and other stuff I just didn't have time to show! At a wild guess, I spotted 80 to 150 other items. I counted 20+ leaves alone.

I gotcha, and you're right, there is a lot we have yet to see, but like others I would love to see this in games eventually.
 
Direction in an unknown, brand-new rendering context could possibly be irrelevant
It could be irrelevant, or it could not be. There are known techniques that look very like what Euclideon is doing, where orientation (and in particular, orientation and positioning that isn't "grid-like") is a significant extra challenge.

[If you consider the "search" metaphor, it should be obvious that it's quicker to search a grid for nearby squares than it is to search an arbitrary structure].

Let's say I created a leaf-node class that contained x,y,z, roll/pitch/yaw, and an object index (as I brought up before). The object class pointed to by the object index would contain all of the atom data aligned in some normalized direction (say, the front of the object facing positive/negative z and the bottom facing negative x). If you then point to the object and give an orientation that differs from the standard (0,0,0), you can use a transformation matrix to poll the data as though it were stored in that orientation.
Note that the same argument says that even if "they have no artists," it would be very easy for them to produce world maps where the objects are not placed on a grid all facing the same way. Just give each tree a random x,z perturbation, a small random rotation around the x axis, and then a random spin around the y axis, and the job's done (if not done well).

When they brought out demos a couple (three?) years ago, the fact that everything was on a grid was one of the big criticisms. Three years later, when everything is still on a grid, you *have* to be skeptical about the claim that this is because they don't employ artists.

Very similar arguments apply to animation. To show they can animate objects (heck, even *move* objects), they don't need a skilled animator; just animating some rotation/translation matrices on a simple skeleton would be enough to prove "yes, we can animate."

The point of everything I'm trying to say is that no one but that team of (was it 9?) guys knows what this new approach is doing.
And in many respects, that is NOT an argument in their favour.

Everyone stop being armchair graphics experts and wait to see what happens when they give us another peek into their work.
The "armchair graphics experts" are basically saying, "Well, I think you're using voxels, and if so, you'll find it difficult to do animation or scenes that aren't hugely repetitive."

If they're wrong, it really wouldn't be hard for Euclideon to prove it. You have to wonder why they aren't doing so.

All that said, even if they're doing voxels, they're doing them better than I've seen anyone else do. There's definitely some good, original work here. But if they're talking to developers and graphics card manufacturers the way they're presenting their work to the general public, I can't see anyone significant wanting to work with them.
 