Euclideon & Unlimited Detail - Bruce Dell Interview @ [H]

Forgive me if someone has already hammered on this point. Mr. Dell said laser scanning was used to produce the objects shown in the demo. The raw data for each point would be as follows:

-3 floats for position (x,y,z) @ 4 bytes each
-4 bytes for color (R,G,B,A)

That gives a total of 16 bytes per point in the point cloud. If an object contains 500 million data points (as I believe is the figure he gave for the rather lovely elephant in his demo), then the maximum amount of storage space necessary for the raw data would be:

16 bytes * 5.0E8 points / (1024 bytes/kilobyte * 1024 kilobytes/megabyte * 1024 megabytes/gigabyte) ≈ 7.5 gigabytes
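For anyone who wants to check the arithmetic, here is the same back-of-the-envelope estimate as a quick C++ sketch. The 16-byte layout and 500-million-point count are the assumptions from this post, not Euclideon's actual format.

```cpp
#include <cstdio>

// Back-of-the-envelope storage estimate for an uncompressed point cloud.
// Assumed layout: 3 x 4-byte floats for position + 4 bytes RGBA = 16 bytes/point.
int main() {
    const double bytes_per_point = 3 * sizeof(float) + 4;     // 16 bytes
    const double points          = 5.0e8;                     // ~500 million atoms
    const double gib             = 1024.0 * 1024.0 * 1024.0;  // bytes per gigabyte

    std::printf("raw size: %.2f GB\n", bytes_per_point * points / gib);  // ~7.45 GB
    return 0;
}
```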

That's for one object. Compression could knock the size down, the amount depending on the technique used, but it will still be a non-trivial amount of data. A scene with any degree of complexity at that detail level would require several Blu-rays' worth of data.

I don't see this being feasible until we have significantly greater storage capacity. Perhaps we'll see it once holographic discs with terabyte capacities arrive on the scene, and we're at least five years to a decade from large-scale commercialization of that technology. I don't take issue with the fundamental technology being showcased, but I do take issue with its specific implementation, its current practicality, the lack of technical details, and the rather fantastical claims being made.

One day this will surpass polygonal rendering, but today is not that day.

The major problem you all have is that you are approaching what they are doing from your own viewpoint and make no attempt to think outside it. You have already been shown a live, working engine demo with at least 150 unique objects in it, each made up of anywhere from millions to trillions of polygons, and it all fit on one DVD and ran on a laptop (single core, in software) at 15 FPS. That is impossible according to what most of you are assuming, yet it has been done, which should tell you there is a lot more to what they are doing than a simple search algorithm.

This is similar to my recent playthrough of the game Limbo. Many times a puzzle seemed impossible, and as frustration mounted I would even say "This is impossible" out loud to nobody... but then all of a sudden a light would go on, I would see the puzzle completely differently, and what was impossible was laughably easy. Why? Because I approached the problem from a completely different point of view and method. It was indeed "impossible" the way I was trying to force it to be done, but it was as easy as pie once I changed my point of view.

They are clearly coming up with completely new ways of thinking about several areas at once, not just one, and applying them together.

If you are the only one working on a radically new way of doing something and everyone else is stuck in a rut, the LAST thing you would want to do is go to some tech show with white papers and help everyone else, who have more money than you, take your ideas. Even in patent form it would be dangerous. I am sure there are patents on the way, and papers on the way, but not until they are ready, and for good reason.

It blows my mind how most of you are looking at something that is "impossible" according to you, yet real and verified with a hands-on live test, and still you insist it's a scam... that is totally, utterly foolish.
 
Hope this works out, been waiting for something ground breaking to come along... keeping my fingers crossed...

BTW - how the hell can any of you say either way whether this tech is real or not? You honestly don't know... because you did not create it, just like you did not create the current tech... and since none of you created anything and don't know how any of it really works in the first place, why not just watch from a distance and see what happens? Chill out, keep your fingers crossed, and wait and see. If the tech is bunk, then oh well, the dude is an asshole, there will be plenty more after him; shrug it off, they like the attention either way. If the tech is real, then this dude just bitch-slapped the world, and it would not be the first time, and everyone who doubted them will look like total idiots; they will shrivel up into a dried-out husk of a person and probably turn gay too... just sayin'.
 
Hope this works out, been waiting for something ground breaking to come along... keeping my fingers crossed...

BTW - how the hell can any of you say either way whether this tech is real or not? You honestly don't know... because you did not create it, just like you did not create the current tech... and since none of you created anything and don't know how any of it really works in the first place, why not just watch from a distance and see what happens?
You make a lot of assumptions about the readership here.
 
The major problem you all have is that you are approaching what they are doing from your own viewpoint and make no attempt to think outside it. You have already been shown a live, working engine demo with at least 150 unique objects in it, each made up of anywhere from millions to trillions of polygons, and it all fit on one DVD and ran on a laptop (single core, in software) at 15 FPS. That is impossible according to what most of you are assuming, yet it has been done, which should tell you there is a lot more to what they are doing than a simple search algorithm.

Nowhere did I say this technology was impossible. It's clearly possible, as shown on this website. What I did say is that it would not be feasible for use in games for some time due to storage constraints. Words mean something, and when you make claims about unlimited detail you essentially make claims about the process by which data is generated. It's possible that the point cloud data is used to procedurally generate detail beyond what is stored on disk. On the other hand, I have no idea what he's doing, because he hasn't said.

They are clearly coming up with completely new ways of thinking about several areas at once, not just one, and applying them together.

That remains to be seen. We have no hard technical information to go on other than Mr. Dell's claim that his engine is capable of displaying unlimited detail and that it's not voxel technology. I don't think Mr. Dell is a scam artist or a charlatan, but I will urge others to use caution in accepting the claims he makes. His use of terminology worries me and gives me reason to think he lacks the technical background to properly evaluate the capabilities of his technology, hence the fantastical claims.
 
It looks good, but until they demonstrate robust lighting/shadows in a dynamic environment it isn't usable as a game engine.

And to refer to LOD as 'level of distance' doesn't help his credibility.
 
Forgive me if someone has already hammered on this point. Mr. Dell said laser scanning was used to produce the objects shown in the demo. The raw data for each point would be as follows:

-3 floats for position (x,y,z) @ 4 bytes each
-4 bytes for color (R,G,B,A)

That gives a total of 16 bytes per point in the point cloud. If an object contains 500 million data points (as I believe is the figure he gave for the rather lovely elephant in his demo), then the maximum amount of storage space necessary for the raw data would be:

16 bytes * 5.0E8 points / (1024 bytes/kilobyte * 1024 kilobytes/megabyte * 1024 megabytes/gigabyte) ≈ 7.5 gigabytes

I think the answer is a combination of (A) the elephant being made of fewer than 500 million atoms and (B) the object-data compression being really good.

In the video I remember he got really excited when talking about the compression and potential improvements to it. I think this indicates it is currently one of their key technologies and they know it's one of the big hurdles to this type of approach.

This is the second time I've read this here, and I'm confused.

Isn't one of the advantages of voxel data that it's usually unnecessary to give an x,y,z coordinate for each individual voxel, since you can just have a long array of points whose positions in space are determined by how far down the array they are and by what deformations are being applied to the object the voxel is part of?

In your example the array position essentially plays the same role as the x,y,z data in JeffCarlson's calculations. JeffCarlson was assuming they only store information for the voxels in the array that are actually populated; you seem to be assuming that every voxel in an object has data associated with it.

Imagine that each object has a potential size of 8x8x8 atoms. To represent this as an uncompressed array you'd need 512 elements regardless of what the object was. To represent a hollow cube you'd only need the 296 surface atoms (8^3 - 6^3 = 512 - 216). To represent a flat plane you'd only need 64 atoms.

As long as you use around 1/4 or less of the total available atoms, it's cheaper from a storage standpoint to use Jeff's method than a brute-force array where position is implicit. Jeff presented one means of compressing the atom (voxel) data for an object; it may or may not be the way Euclideon chose to do it.
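To make the comparison concrete, here is a minimal sketch of the two layouts being discussed, using the 8x8x8 example above. The 4-byte-per-cell dense layout is an assumption for illustration; none of this is Euclideon's actual format.

```cpp
#include <cstdint>
#include <vector>

// (a) Dense array: position is implicit in the index, only color is stored.
//     Cost: 8*8*8 * 4 bytes = 2048 bytes, no matter how empty the object is.
struct DenseBlock {
    uint32_t rgba[8 * 8 * 8];   // index = x + 8*y + 64*z
};

// (b) Sparse list: only occupied atoms are stored, but each carries its position.
//     Cost: 16 bytes per occupied atom (12-byte position + 4-byte color).
struct SparseAtom {
    float    x, y, z;
    uint32_t rgba;
};
using SparseBlock = std::vector<SparseAtom>;

// Break-even: 2048 / 16 = 128 atoms, i.e. 1/4 of the 512 cells.
// A hollow cube needs 296 surface atoms (8^3 - 6^3), so the dense array wins;
// a flat 8x8 plane needs only 64 atoms, so the sparse list wins.
```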
 
You're making an assumption, with your thought on the octrees, that he is not developing anything new here. I also don't understand how the pointer would take up more data than a 'voxel', which he has claimed he is not using. If you have an instance of an object that contains the point cloud, you would have a pointer to it, and from there you no longer need pointers. It is possible that his object classes have some way of storing data in large, sorted, contiguous blocks that you haven't considered.

You are correct: I'm assuming that he is using the existing technique of ray casting into a sparse voxel octree, because everything he has actually shown is consistent with that existing technique. And I don't believe it is a coincidence that things like shadows and animation, which are currently impractical with SVO ray casting, have not been shown yet. Also, with an SVO there is only one voxel per node, which makes for eight pointers for every one voxel (you are not pointing to an object or a cloud of points).
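For reference, here is what a textbook sparse-voxel-octree node looks like, to give a sense of the pointer overhead I mean. This is the standard structure from the literature, not anything Euclideon has shown.

```cpp
#include <cstdint>

// Naive SVO node: 8 child pointers (64 bytes on a 64-bit machine) carrying
// only 4 bytes of actual voxel color at this node.
struct SvoNode {
    SvoNode* child[8];   // one pointer per octant
    uint32_t rgba;       // the voxel data itself
    uint8_t  childMask;  // which of the 8 children actually exist
};
// Published implementations shrink this considerably (e.g. Laine & Karras'
// "Efficient Sparse Voxel Octrees" packs a child offset plus masks into a
// single 64-bit descriptor), so "8 pointers per voxel" is the naive worst
// case rather than a hard limit.
```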

When I typed 21 trillion+ pixels, what I meant was 21 trillion+ polygons; the word I was hoping to stress was "representing". When rendering on a GPU you are rendering every polygon in front of the camera unless some of them are removed by an optimization method. His approach just shows you a pixel that was somehow chosen from a huge amount of data, so as you zoom in and out of the scene you see different pixels that simulate the look of a scene with 21 trillion+ polygons, but in reality his demo only ever displays 786,432 atoms (if every pixel on the screen is represented). So the scene he is rendering may be the equivalent of a scene with 21 trillion+ polygons, but with the approach he is attempting to implement, I don't see how you would ever see everything in that scene in a single view of it. Does that make sense?

Yes, this does make sense. In fact, it is a property of any ray casting or ray tracing renderer: you trace a ray into the scene for each pixel instead of projecting the scene's geometry onto the view plane and filling in the pixels it covers.
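To spell out that per-pixel property, here is a skeleton of the idea in C++. The Scene, castRay and primaryRay pieces are placeholders, not Euclideon's API; the point is only that the work per frame is bounded by the number of pixels.

```cpp
#include <cstdint>
#include <vector>

struct Color { uint8_t r, g, b, a; };
struct Ray   { float ox, oy, oz, dx, dy, dz; };
struct Scene { /* point/voxel data, placeholder */ };

// Placeholder: a real renderer would traverse the scene and return the first atom hit.
Color castRay(const Scene&, const Ray&) { return {40, 40, 40, 255}; }

// Placeholder pinhole camera: one ray through the centre of each pixel.
Ray primaryRay(int x, int y, int w, int h) {
    return {0.0f, 0.0f, 0.0f,
            (x + 0.5f) / w - 0.5f, (y + 0.5f) / h - 0.5f, 1.0f};
}

// Each pixel asks "what is the first atom along my ray?", so at most w*h atoms
// are shaded per frame (1024*768 = 786,432), regardless of how much geometry
// the scene contains.
void renderFrame(const Scene& scene, std::vector<Color>& fb, int w, int h) {
    fb.resize(static_cast<size_t>(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            fb[static_cast<size_t>(y) * w + x] = castRay(scene, primaryRay(x, y, w, h));
}
```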

I believe you are looking at this as though it is something you have seen before, but you have been told by the creator of the approach that you haven't seen this... I don't know why you are still assuming that all instances of an object would have to face the same direction.

Direction, in an unknown, brand-new rendering context, could well be irrelevant. Say I created a leaf node class that contained x, y, z, roll, pitch, yaw, and an object index (as I brought up before). The object class pointed to by the object index would contain all of the atom data aligned in some normalized direction (front of the object facing positive/negative z, bottom of the object facing toward negative x). If you point to the object and give an orientation that differs from the standard (0,0,0), you can use a transformation matrix to poll the object's data as though it were stored in a different orientation.
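A rough sketch of that leaf-node idea, with hypothetical names and only the yaw rotation written out (one common axis convention assumed); it just illustrates how many instances can share one canonically oriented copy of the atom data.

```cpp
#include <cmath>

struct PointCloud;                  // atoms stored once, in a canonical orientation

struct Instance {
    float x, y, z;                  // where this copy sits in the world
    float roll, pitch, yaw;         // how this copy is rotated
    const PointCloud* object;       // shared source data
};

// To sample an instance at a world-space point, undo the instance transform and
// look the point up in the shared, canonically oriented object instead.
void worldToObject(const Instance& inst, float wx, float wy, float wz,
                   float& ox, float& oy, float& oz) {
    // translate back to the instance origin
    const float px = wx - inst.x, py = wy - inst.y, pz = wz - inst.z;
    // undo yaw (rotation about the vertical axis); roll and pitch would be
    // handled the same way, or folded into one inverse rotation matrix
    const float c = std::cos(-inst.yaw), s = std::sin(-inst.yaw);
    ox = c * px - s * pz;
    oy = py;
    oz = s * px + c * pz;
}
```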

I will agree that the world is made up of a limited number of tiles, but it was pointed out that the entire map was probably just generated in the shape of the company's logo.

Again you have listed voxels when the creator has stated he isn't using them.

The point of everything I'm trying to say is that no one but that team of (was it nine guys?) knows what this new approach is doing. Everyone stop being armchair graphics experts and wait to see what happens when they give us another peek into their work.

If you watch the interview, at about 16:50 he does admit to using voxels, although he tries to make you believe that the definition of the word "voxel" is somehow confusing or controversial (which really isn't the case).
 
There are always going to be new ways to accomplish tasks. I think the reason so many here are in doubt is the claims, in conjunction with demo videos that really have not evolved much at all, leading to the inference that they are BSing. I understand the idea of not wanting to show tech that isn't fully functional yet, but in a year he has basically shown us the same video with some more objects dropped in and some increased detail. He still hasn't addressed what the general gaming community wants to know, or any of the questions the community had a year ago when the prior video was released.

I found that interview to be 40 minutes I would rather have spent reading this thread. Nothing was revealed. I'm not going to argue: the still-life world they have created is quite detailed, and that excites me, but something in my gut tells me his new tech may not be revolutionary.

Am I having a brain fart, or did I hear him mention that he has no formal CS/logic training in the video?
 
The major problem you all have is that you are approaching what they are doing from your own viewpoint and make no attempt to think outside it. You have already been shown a live, working engine demo with at least 150 unique objects in it, each made up of anywhere from millions to trillions of polygons, and it all fit on one DVD and ran on a laptop (single core, in software) at 15 FPS. That is impossible according to what most of you are assuming, yet it has been done, which should tell you there is a lot more to what they are doing than a simple search algorithm.

This is similar to my recent playthrough of the game Limbo. Many times a puzzle seemed impossible, and as frustration mounted I would even say "This is impossible" out loud to nobody... but then all of a sudden a light would go on, I would see the puzzle completely differently, and what was impossible was laughably easy. Why? Because I approached the problem from a completely different point of view and method. It was indeed "impossible" the way I was trying to force it to be done, but it was as easy as pie once I changed my point of view.

They are clearly coming up with completely new ways of thinking about several areas at once, not just one, and applying them together.

If you are the only one working on a radically new way of doing something and everyone else is stuck in a rut, the LAST thing you would want to do is go to some tech show with white papers and help everyone else, who have more money than you, take your ideas. Even in patent form it would be dangerous. I am sure there are patents on the way, and papers on the way, but not until they are ready, and for good reason.

It blows my mind how most of you are looking at something that is "impossible" according to you, yet real and verified with a hands-on live test, and still you insist it's a scam... that is totally, utterly foolish.

Being shown the same thing that others have achieved (wow.. rendering with voxels) is not really revolutionary. It's the promises they're making that are revolutionary. Coincidentally (or not, you decide) those revolutionary promises just happen to be the very things that they are not showing (or unable to show, you decide).

In other words, I could show you a time machine and show you how the motor starts up and runs... much like vehicles we have come to know and love over the last 100 years. What I haven't shown you is the fact that it travels through time. I'd imagine that's the part you're interested in seeing, no? But I've promised that. So it must be true, since the motor is running, right? Well I'm glad you believe me, because I'm not going to show you the time travel part.

To me, if he were straight up about all this, I'd be perfectly happy with his claims and give him the chance to prove himself. But as it stands, he's going *so far* out of his way to confuse concepts, talking about concepts he clearly doesn't understand (i.e. tessellation, level of detail), and making impossible claims, that my brain has flagged that it is being sold something. If the tech were there, it would speak for itself. But I can tell I'm being "sold" when I watch these videos, and the real challenges have all been skirted around in discussion.

Someone actively skirting around the questions about well established challenges for this technology tells me they're aware of the challenges and would rather not talk about them because it's not their strong point or they don't have a solution to it. If I walk into a room with you and you jump and instantly shut a book you were reading and tuck it away, I'm going to wonder why you felt guilty reading that book, and wonder what it was that I caught you reading.
 
New technology always runs the risk of being picked apart, especially when those invested in the old technology see their income stream being threatened. I have no dog in this hunt; I'm just talking about the way the world works when it comes to competing technologies and change. What if 3D memory and this new technology became a match made in heaven? How long ago was DDR3 memory sky-high? I just saw 12GB for $59.00. What if a smart company decided to invest in other technologies to take advantage of Bruce's technology, and this convergence brought us to a new, unexpected level? The real question for me is not what it can do right now, but how it can be adapted to future technologies.
 
Am I having a brain fart, or did I hear him mention that he has no formal CS/logic training in the video?

You're absolutely correct. In fact, when talking to people he takes every opportunity to tout that fact, and that he does not bother researching graphics; he takes pride in working in a vacuum, which might explain why he thinks that what he has working today is so revolutionary. Just to be clear, I'd find it very respectable that he's so driven to work on this tech without formal education, if it weren't for the fact that I feel this is a bit of a scam, or at least knowingly deceives people who don't have any computer science background or rendering knowledge.

Just found this... here's an interesting thread on Beyond3D (a technical graphics guru forum) where, three years ago, he didn't know what a CPU cache was, and then reminds people of the benefits of working in a vacuum (?):
http://forum.beyond3d.com/showthread.php?p=1145451#post1145451
http://forum.beyond3d.com/showthread.php?p=1145452#post1145452

Bruce Dell said:
Could some one please explain to me what memory cache is and why I have never encountered it in C programming.

what factor of speed would knowledge of this subject give?

P.S there are advantages to working in a vacuum

And this one:
http://forum.beyond3d.com/showthread.php?p=1142124#post1142124

Bruce Dell said:
Hi every one , I’m Bruce Dell (though I’m not entirely sure how I prove that on a forum)

Any way: firstly the system isn’t ray tracing at all or anything like ray tracing. Ray tracing uses up lots of nasty multiplication and divide operators and so isn’t very fast or friendly.
Unlimited Detail is a sorting algorithm that retrieves only the 3d atoms (I wont say voxels any more it seems that word doesn’t have the prestige in the games industry that it enjoys in medicine and the sciences) that are needed, exactly one for each pixel on the screen, it displays them using a very different procedure from individual 3d to 2d conversion, instead we use a mass 3d to 2d conversion that shares the common elements of the 2d positions of all the dots combined. And so we get lots of geometry and lots of speed, speed isn’t fantastic yet compared to hardware, but its very good for a software application that’s not written for dual core. We get about 24-30 fps 1024*768 for that demo of the pyramids of monsters. The media is hyping up the death of polygons but really that’s just not practical, this will probably be released as “backgrounds only” for the next few years, until we have made a lot more tools to work with.

SQRT may I ask what company you are from, all appointments in America where pushed till May.
Please contact me [email protected]

Kindest Regards
Bruce Dell

He actively avoids saying voxels because he doesn't want people to associate it with decades of work that has already been done on voxels.
 
What's sad is that I was talking about "atoms" and creating 3D images with tiny "atoms" where each has its own qualities and effects - physics, colors, etc. - a LONG time ago. Doom era. Like most of my ideas, someone else brought it to life (although I was once asked if I wanted to patent an idea for $2,000. Didn't have the cash, so no. Six months later: the product was on the shelves).

I hope to see something come of this. Good idea! ;)
 
What's sad is that I was talking about "atoms" and creating 3D images with tiny "atoms" where each has its own qualities and effects - physics, colors, etc. - a LONG time ago. Doom era. Like most of my ideas, someone else brought it to life (although I was once asked if I wanted to patent an idea for $2,000. Didn't have the cash, so no. Six months later: the product was on the shelves).

I hope to see something come of this. Good idea! ;)

Well, voxels go back decades, dude, so you weren't the first on that one.
 
Very interesting technology. I believe it would indeed be possible to create a game using this sort of technology, although I'm inclined to believe it would be a hybrid of sorts, using conventional characters and animations and perhaps just using this technology to create the scenery (at least at first).

However, given the development time and the learning curve needed for something like this, I do tend to agree that this sort of tech probably won't be hitting for a couple more years, if it is ever finished; of course, only time will tell.
 
Still looks like snake oil to me. The guy is exactly like a used car salesman. He's extremely cocky, arrogant, and responds to every question the exact same way. You can watch the entire 40+ minute video and really only get the same response over and over again. There's no new information here, just claims with nothing to back them up.

I also love how during the demo, he allows the other guy to move around the environment for only a minute or so, until he demands over and over again to drive, with the final comment being something like, "I made it so I get to drive". Something to hide, perhaps?

So they have over 2 million dollars of funding, and he claims it's far more money than they even need to complete the project. He says disappearing is just how they do things there. Well, he's actually very right. They might pop up a time or two again, but they'll finally disappear for good soon enough.

Sorry, but this "technology" is just a dead end. No amount of PR from some elitist prick is going to save it.
 
I'll just go through the positive things first: well done, Bruce Dell and the rest of Euclideon, not least for having made a very quick search algorithm for their voxel-based renderer. That is no small feat and, considering the size of the company, impressive. With a few optimizations their technology might be able to run native-resolution voxel-based graphics at something like 30 or 60 fps, which in itself is something other developers experimenting with voxels seem to have problems with. That, however, is not enough for a full engine. The biggest problem is that right now Euclideon has created beautiful, relatively smooth-running dead worlds.
How their renderer, and especially their search algorithm, will cope with many simultaneous animations, shading, dynamic light/shadows, object deformation and so on will be crucial if they want to succeed. Will the perfect palm trees be able to sway gently in the wind?

Also, keep in mind that the biggest problem with voxel-based approaches (with complex objects) is that the closer you get to an object, the harder the framerate drops. That was quite evident in the video when they closed in on the ground. Obviously optimizations could reduce the problem, but it shows yet another challenge if you want to keep a steady framerate.

As it is, I'll await their next videos and/or time demos. At the very least, [H]ardOCP and Euclideon have shown us that the technology presented in the YouTube videos actually works in real time. But beyond that, more than a few questions remain to be answered.
 
Finally back from vacation and got to watch the video. VERY interesting.
First, I know nothing about coding or the nitty-gritty of technology, just what I gather from the news online. That being said, I have to say I am impressed. While there are some lingering questions about whether this can really work (which is the case with any pre-alpha technology from a pre-revenue company, especially in the tech/gaming sector), I am hopeful. The guy seems to be legit, a good CEO (i.e. a good salesman), and pretty good at protecting his company's IP.

Here's hoping he can get in bed with a major game company for a solid, playable, dl-able tech demo sometime soon.
 
If Bruce Dell and his small cadre of developers had anything novel to show, they would have been presenting a paper last week at SIGGRAPH in front of the best people to evaluate their claims. Instead, Dell chose to present his work to the much more gullible audience of YouTube and the armchair experts of various blogs, who for the most part share only Dell's professed ignorance of traditional 3D rendering.

I don't feel qualified to evaluate the full merits of Dell's presentation, despite having written multiple triangle rasterizers from scratch, worked on several game/rendering engines, and had the math/computer science education that Dell lacks. Still, I don't think one need consider the particulars of his claims to dismiss some of them out of hand.

The most egregious example of such claims is "unlimited detail." Charitably, this is disingenuous marketing BS. Less charitably, it reflects a fundamental ignorance on Dell's part of what both traditional rendering techniques (like triangle rasterization) and his own methods are doing. Triangles in a polygonal model are defined by three points, with the edges and interior left as an exercise for the renderer to generate and shade. This process interpolates the values defined at the triangle vertices to generate as many screen-space pixels as you need (and more besides, in certain cases). Whatever Dell's technique is, he is using model data of fixed size and presumably interpolating between values where necessary. That is not unlimited; it's just a higher-frequency sample of the model than the corresponding polygon version.
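For anyone unfamiliar with the interpolation step I'm describing, here it is in its standard barycentric form; this is ordinary rasterization math, nothing specific to Dell's renderer.

```cpp
struct Vec2 { float x, y; };

// Signed area term of the edge function for edge (a,b) evaluated at p.
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// A triangle stores a value only at its three vertices (c0, c1, c2); the
// rasterizer can generate a value for every covered pixel p by blending them
// with barycentric weights -- as many pixels as the screen-space footprint needs.
float interpolateAt(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                    float c0, float c1, float c2, const Vec2& p) {
    const float area = edge(v0, v1, v2);        // twice the triangle area
    const float w0 = edge(v1, v2, p) / area;    // weight for v0
    const float w1 = edge(v2, v0, p) / area;    // weight for v1
    const float w2 = edge(v0, v1, p) / area;    // weight for v2
    return w0 * c0 + w1 * c1 + w2 * c2;         // p is inside if all weights >= 0
}
```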

I also found his discussion of ray tracing to be humorous. I'd like to think he knows better than his poor commentary reflects but perhaps not. How does he think he is indexing into his SVO without deprojecting a point from screen space and doing some collision tests? Oh yeah, a "search algorithm"... sort of like the above...

Others have raised the obvious memory/space problems that this new interview/demo do nothing to address. At least among graphics professionals in the industry, Dell's videos and claims are dismissed out of hand as they probably should be. I hope that the Australian government recognizes this and doesn't renew future grants for Euclideon; the Australian game industry could use a cash infusion right about now.
 
Found this interview with Mr. Dell over at GameSpy from a couple of days ago: http://pc.gamespy.com/articles/118/1187338p1.html

Regarding the memory, If we were making our world out of little tiny atoms and had to store x, y, z, colour etc… for each atom, then yes it would certainly use up a lot of memory. But instead we've found another way of doing it.
While it doesn't reveal anything directly new, he does point out that they aren't keeping track of all that information on a per-atom basis.
 

Interesting video, but the shortcomings of the technology should not be underestimated. There are all kinds of neat concepts out there with great fundamental ideas but stumbling blocks that prove to be mountains.

Kudos to these guys for trying to think outside of the box, but this approach to rendering will probably come from somewhere else, and likely hybridized with a more traditional approach.
 
Found this interview with Mr. Dell over at GameSpy from a couple of days ago: http://pc.gamespy.com/articles/118/1187338p1.html


While it doesn't reveal anything directly new, he does point out that they aren't keeping track of all that information on a per-atom basis.

Sorry, that sets off my bullshit-o-meter. You can't store information while not storing information. They *MINIMALLY* need to store color (RGBA), since position can be inferred from position in the octree (though that then complicates the animation side of things). There's definitely storage on a per-atom basis; it's simply a question of which components are stored there.

Saying that you don't need to store per-atom information where there are tons of atoms is like saying you can make something from nothing. It sounds an awful lot like that "free energy" scam by Steorn:

http://en.wikipedia.org/wiki/Steorn
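To illustrate what "position inferred from the octree" means in practice, here is a generic sketch; this is textbook octree addressing, not a claim about Euclideon's file format.

```cpp
#include <cstdint>

struct Voxel { float x, y, z, size; };

// A leaf doesn't need to store x,y,z explicitly: the path of child indices
// taken from the root already encodes them, one octant choice per level.
// path[i] is the child index (0..7) at depth i; bit 0 = +x half,
// bit 1 = +y half, bit 2 = +z half of the current cell.
Voxel decodePath(const uint8_t* path, int depth, float rootSize) {
    Voxel v{0.0f, 0.0f, 0.0f, rootSize};
    for (int i = 0; i < depth; ++i) {
        v.size *= 0.5f;
        if (path[i] & 1) v.x += v.size;
        if (path[i] & 2) v.y += v.size;
        if (path[i] & 4) v.z += v.size;
    }
    return v;
}
// The trade-off stands either way: something per atom (at minimum a color,
// plus the tree structure itself) still has to live on disk or in memory.
```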
 
Sorry, that sets off my bullshit-o-meter. You can't store information while not storing information. They *MINIMALLY* need to store color (RGBA), since position can be inferred from position in the octree (though that then complicates the animation side of things). There's definitely storage on a per-atom basis; it's simply a question of which components are stored there.

Saying that you don't need to store per-atom information where there are tons of atoms is like saying you can make something from nothing. It sounds an awful lot like that "free energy" scam by Steorn:

http://en.wikipedia.org/wiki/Steorn

I posted a data map on my blog the other day that might give you a better idea of what it looks for and how items are placed. The point information has to be there; the search looks for what needs to go into pixel 1, then 2, then 3, until it fills the screen, and all other data is ignored. The way the data is stored is strange too, but that system is about to change, and that's why we didn't get into data and file sizes.
 
I posted a data map on my blog the other day that might give you a better idea of what it looks for and how items are placed. The point information has to be there; the search looks for what needs to go into pixel 1, then 2, then 3, until it fills the screen, and all other data is ignored. The way the data is stored is strange too, but that system is about to change, and that's why we didn't get into data and file sizes.

Let me tell you about *my* new revolutionary technology:
I have a revolutionary new technology for displaying graphics in real time with immense amounts of detail. Instead of rendering everything in the scene, I only render the objects that are closest to the camera and are not occluded by other objects. In this way we can have tons of objects in a scene without needing to render them all... the result depends on the visibility of the objects. This is similar to the way that Google searches for results with high-powered search: we search for the objects that are visible, and then render them.

Additionally, we have another ground-breaking technology, which involves comparing the depth of each generated fragment against a depth test, to ensure that we avoid wasting cycles and bandwidth writing to the color buffer and depth buffer. This is similar to how Google searches for web pages through their search engine... we search for and find the correct fragment to display: the one closest to the camera for that pixel.

The first "technology" is called "occlusion culling", and the second "technology" is called the "depth test". Both are part of every-day graphics pipelines *today*.

Sorry guys but I think the reason this is seen as "revolutionary" is that the people who say that have no clue how graphics works today. If I felt these guys were being honest I would not be so down on them, and would say, hey good luck. But I genuinely feel the guy is deceiving a large number of people who are particularly gullible and naive when it comes to graphics technology.

I've shown proof that Bruce Dell is intentionally avoiding using the word voxels (despite proof that he's aware of them and what they are) because the term is not as "sexy" as explaining it as atoms. When someone re-brands an existing technology that they've publicly said they're aware of I become very confident that they are not just renaming for the sake of it... they are doing it to confuse and deceive. I'll leave with this:

There's a reason that graphics engineers from major game studios and graphics companies are laughing at this guy on Beyond3D forums... they know what most in this thread don't. Do some research and take a cue from the experts.
 
I've shown proof that Bruce Dell is intentionally avoiding using the word voxels (despite proof that he's aware of them and what they are) because the term is not as "sexy" as explaining it as atoms.

Funny then, why at 16:50 in the interview is Bruce telling us it is voxels? Please, if you want to argue, watch the video first. Also, looking at my interview with him in 2008, he called it voxels then as well.
 
Funny then, why at 16:50 in the interview is Bruce telling us it is voxels? Please, if you want to argue, watch the video first.

Actually, I've watched all the videos in full. But this is exactly my point: *NOW* he is saying they're voxels, now that it has become undeniable that that's what he's doing because people in the industry called him out on it. Why didn't he call them voxels in the original video a year ago?

Every interview, he couldn't just say "yes". It's always a bunch of qualifying words then "if .... then yeah but..."

He's actively avoiding using it if he can help it. What does that say to you?

http://forum.beyond3d.com/showthread.php?p=1142124#post1142124

Bruce Dell said:
Hi every one , I’m Bruce Dell (though I’m not entirely sure how I prove that on a forum)

Any way: firstly the system isn’t ray tracing at all or anything like ray tracing. Ray tracing uses up lots of nasty multiplication and divide operators and so isn’t very fast or friendly.
Unlimited Detail is a sorting algorithm that retrieves only the 3d atoms (I wont say voxels any more it seems that word doesn’t have the prestige in the games industry that it enjoys in medicine and the sciences) that are needed, exactly one for each pixel on the screen, it displays them using a very different procedure from individual 3d to 2d conversion, instead we use a mass 3d to 2d conversion that shares the common elements of the 2d positions of all the dots combined. And so we get lots of geometry and lots of speed, speed isn’t fantastic yet compared to hardware, but its very good for a software application that’s not written for dual core. We get about 24-30 fps 1024*768 for that demo of the pyramids of monsters. The media is hyping up the death of polygons but really that’s just not practical, this will probably be released as “backgrounds only” for the next few years, until we have made a lot more tools to work with.

SQRT may I ask what company you are from, all appointments in America where pushed till May.
Please contact me [email protected]

Kindest Regards
Bruce Dell

If the bit about no longer saying "voxels" isn't proof that he's knowingly and actively engaged in re-branding existing technology, I don't know what is.

Honestly Johnny I'm not trying to attack you... you seem like a very nice guy... but I feel that because you and [H] have received this exclusive access to them, you feel sort of intrinsically bound to their success, because that would make you guys look good, having brought the exclusive interview first. In my opinion you're almost functioning as an extension of their marketing because you were given "NDA level knowledge" to make you guys feel like you're on the inside with the devs and are defending it accordingly. Tech companies do this "exclusive knowledge sharing" all the time with hardware reviewers, and it tends to make reviewers feel somewhat obligated to defend the company because the company was nice enough to give this special knowledge. I just kind of wish this was more investigative instead of cheerleading.

I've replied to this thread a lot and I think I'm beginning to repeat myself, so I'm gonna back out of this one. Good debate in here though.
 
Looking at where they are, I'd say we should expect a pretty big reveal in a year's time.

If nothing further has come forth by then, maybe it's time to get itchy feet. A year will go quickly (well, they do at my age). So let's all meet up here in 12 months and see where they are. Put a date in your diaries, folks.

However, I'm hopeful, and if nothing else, even if animation doesn't pay off, the tech would work wonders for filling in the game environments around the animation.
 
Actually, I've watched all the videos in full. But this is exactly my point: *NOW* he is saying they're voxels, now that it has become undeniable that that's what he's doing because people in the industry called him out on it. Why didn't he call them voxels in the original video a year ago?

Every interview, he couldn't just say "yes". It's always a bunch of qualifying words then "if .... then yeah but..."

He's actively avoiding using it if he can help it. What does that say to you?

http://forum.beyond3d.com/showthread.php?p=1142124#post1142124



If the bit about no longer saying "voxels" isn't proof that he's knowingly and actively engaged in re-branding existing technology, I don't know what is.

Honestly Johnny I'm not trying to attack you... you seem like a very nice guy... but I feel that because you and [H] have received this exclusive access to them, you feel sort of intrinsically bound to their success, because that would make you guys look good, having brought the exclusive interview first. In my opinion you're almost functioning as an extension of their marketing because you were given "NDA level knowledge" to make you guys feel like you're on the inside with the devs and are defending it accordingly. Tech companies do this "exclusive knowledge sharing" all the time with hardware reviewers, and it tends to make reviewers feel somewhat obligated to defend the company because the company was nice enough to give this special knowledge. I just kind of wish this was more investigative instead of cheerleading.

I've replied to this thread a lot and I think I'm beginning to repeat myself, so I'm gonna back out of this one. Good debate in here though.

Yeah, I remember when he posted that, as the whole thread back then was about something I wrote; at the time I also had a lot of input from key people within a GPU developer. He was getting a lot of flak about how crap voxels were, and he was trying to explain that his work was not inventing voxels, it was a new way to work with them. So I didn't go into this blind. Bruce's project is where he says it is now: it runs in real time and it looks good. Only time will tell if we get to see it in games any time soon, but I do know he has a good team and some very cool stuff to come. The proof is in the pudding, though, and I get that.
 
Why then, in this video (at 7:26) does he characterize the technology as "vastly different than voxels"? :mad:

I "guess" he was trying to not get a label put on it, as it is a lot more that just voxels. You know as well as I do if people where told it's voxels we would get dumb comments like oww we know all about that it will never work.. with out ever looking at it. Anyway who gives a crap what its called lets just see if it works!
 
I "guess" he was trying to not get a label put on it, as it is a lot more that just voxels. You know as well as I do if people where told it's voxels we would get dumb comments like oww we know all about that it will never work.. with out ever looking at it. Anyway who gives a crap what its called lets just see if it works!

The technology obviously works. The real question is whether it can be animated smoothly and how much variety they can cram in. Even so, I see this as a big step forward in 3D graphics rendering if they can pull it off in an actual game environment.
 
It's hard to see from the video, but if that demo is playing in real time on that shitty little laptop at the same quality as their previous videos, I'm impressed and can't wait to see this in the hands of some game developers.

I like to keep an open mind; even things like storing the data for a huge number of points can apparently be gotten around.

Things like lighting will be interesting to see, but fuck, I'd love to play a game with that technology even if it didn't have many lighting effects; the sheer level of detail reduces the need for awesome lighting to hide shoddy textures and big ugly polygons.
 
I like to keep an open mind, but this guy is quite clearly not willing to offer a balanced perspective - something you get from someone like John Carmack when he talks about the compromises he has had to make with his engines. Bruce never appears to address any of the weaknesses of his rendering technique, and only ever uses huge superlatives to describe things. Maybe it's just because he's conscious of not scaring away potential funding?

Maybe there are absolutely zero weaknesses, and he's found the holy grail of engine tech; if that's the case, why hasn't the company been bought by another to invest more resources into the technology?

I'd like to fast forward another few years to see if this really is vaporware or if he shows off something with decent art (i.e. not just repeated stuff), some animation, and some effects. I'd love for this to be real, it's just that it sets off the TGTBT alarm for me.
 
Yes, good artists are hard to come by these days ;)
BTW am I the only one who remembered the game Archipelagos when seeing the UD Demo?
http://www.youtube.com/watch?v=9pheHnnqbik
A flat island, made out of square blocks, populated by one type of tree and two or three other object types. Except that the trees of Archipelagos had better animation - 22 years ago (god, am I old :( ), on systems clocked at 7-8 MHz and equipped with 512KB of RAM (yes, the whole system memory would fit into the L2 cache of some of today's CPU cores!).
Maybe Euclideon should produce a remake of that game and offer it as a downloadable demo ^^
Honestly, what I have seen so far from Euclideon doesn't give me much hope that this tech will lead to a paradigm shift in the foreseeable future. What makes me feel bad about the whole matter is the impression that Mr. Dell tries to convey the opposite, without giving any evidence and while avoiding answering _any_ of the critical questions with anything other than "it's all OK, it works" or "it's not finished yet, but no problem here, you will see". What the Atomontage guy has to say, on the other hand, seems much more reasonable to me.
Sorry, but as I see it at the moment, [H]OCP has had the wool pulled over their eyes. But I greatly appreciate the interview, thank you very much, and I can't wait for further information. I am ready to change my opinion as soon as I see anything that gives evidence of this tech being capable of serving as the backend for a believable gaming world. As compression only takes you so far, I very much doubt this is the case. An environment big enough to give the feeling of moving freely, and with enough individual detail not to look like it's made of a limited set of building blocks, probably contains just too much information to be modelled in voxels (or whatever Mr. Dell may call them) and fit into today's computer memory.
In fact, the demo made me think of another episode in computer game graphics: the graphics based on modified character sets in the side-scrollers of the 8-bit era. In a way, that was "unlimited detail" already, as you could scroll on and on, a seemingly vast landscape made of a small set of tiles passing before your eyes. It seems the tricks have not changed that much since then.
 
Yes, good artists are hard to come by these days ;)
BTW am I the only one who remembered the game Archipelagos when seeing the UD Demo?
http://www.youtube.com/watch?v=9pheHnnqbik
A flat island, made out of square blocks, populated by one type of tree and two or three other object types. Except that the trees of Archipelagos had better animation - 22 years ago (god, am I old :( ), on systems clocked at 7-8 MHz and equipped with 512KB of RAM (yes, the whole system memory would fit into the L2 cache of some of today's CPU cores!).
Maybe Euclideon should produce a remake of that game and offer it as a downloadable demo ^^
Honestly, what I have seen so far from Euclideon doesn't give me much hope that this tech will lead to a paradigm shift in the foreseeable future. What makes me feel bad about the whole matter is the impression that Mr. Dell tries to convey the opposite, without giving any evidence and while avoiding answering _any_ of the critical questions with anything other than "it's all OK, it works" or "it's not finished yet, but no problem here, you will see". What the Atomontage guy has to say, on the other hand, seems much more reasonable to me.
Sorry, but as I see it at the moment, [H]OCP has had the wool pulled over their eyes. But I greatly appreciate the interview, thank you very much, and I can't wait for further information. I am ready to change my opinion as soon as I see anything that gives evidence of this tech being capable of serving as the backend for a believable gaming world. As compression only takes you so far, I very much doubt this is the case. An environment big enough to give the feeling of moving freely, and with enough individual detail not to look like it's made of a limited set of building blocks, probably contains just too much information to be modelled in voxels (or whatever Mr. Dell may call them) and fit into today's computer memory.
In fact, the demo made me think of another episode in computer game graphics: the graphics based on modified character sets in the side-scrollers of the 8-bit era. In a way, that was "unlimited detail" already, as you could scroll on and on, a seemingly vast landscape made of a small set of tiles passing before your eyes. It seems the tricks have not changed that much since then.

Dirk, first, gratz on being stupid enough to put your actual name as your handle. That takes one of two things: epic stupidity or giant balls. I think by reading your post we know you have no balls.

Comparing "Unlimited Detail" to some old game with crude, blocky, square flat ground and 2D sprites sitting on top of it just doesn't make any sense.

The reality is that this engine does exist, and it is real-time. People have touched it and interacted with it. No one on this site has had any "wool" pulled over their eyes. Anyone with any capacity to think around here realizes that this engine may never be usable in a game, and that there are still major hurdles to overcome to make it usable. But to say that this is complete BS is to imply that the interviewer and the owners of this site are in on the "hoax." To be honest, I just don't buy that; anyone who was around during the "Phantom console" mess knows that these people have spent a lot of time and money to solidify their integrity.
 
I have a list of things I check for when dealing with people trying to convince me of things that sound too good to be true. Usually the ones that are full of shit spend way too much time and money on their image, because otherwise they have nothing of substance to show. Let me give Bruce Dell a quick test.

1. Douchebag hat – Check!
2. Douchebag accent – Check!
3. Uses a Mac and Apple products are prominent everywhere – Hey, this guy could be for real!

Fortunately for Bruce, passing number 3 trumps all others. Euclideon may be genuine!
 
If it is fully capable of doing what they say, I don't blame them for keeping quiet about it. What do they have to gain from giving out specifics of how they're overcoming things, other than appeasing the nerds on random internet forum X?

They're a small team; I wouldn't blame them for worrying that by giving out a lot of details they'll give out too many.

Plus, they're Australian and we all know heaps of awesome ideas come out of Australia (assuming the retard government and corporations aren't dumb enough to sell it off :p)
 