Most importantly, how do the atoms form a surface that properly reflects, scatters, and occludes light? Points alone, by definition, have no surface geometry, and if I were to calculate reflected light rays, the easiest way would be to bounce them off a simple surface like a polygon. How can 'atoms' be easier/computationally lighter?

I was wondering the same thing: how can you take an "atom" and know which way light should reflect off it? My guess is that when they create a model, there is some sort of normal vector associated with each atom that indicates the surface orientation of the macro object. That way they would be able to calculate light reflections, etc. Each atom could also carry surface properties that define how it scatters light and whether the reflection is more specular or diffuse. This takes up more memory, but it doesn't require much processing power at runtime.
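To make that guess concrete, here's a rough sketch of what such an "atom" record and its shading might look like; this is pure speculation on my part, not anything Euclideon has described:

[CODE]
from dataclasses import dataclass
import numpy as np

@dataclass
class Atom:
    position: np.ndarray   # world-space location of the point
    normal: np.ndarray     # precomputed surface orientation of the macro object (unit length)
    albedo: np.ndarray     # base color / diffuse reflectance
    roughness: float       # crude knob for how specular vs. diffuse the reflection is

def shade(atom: Atom, light_dir: np.ndarray, light_color: np.ndarray) -> np.ndarray:
    """Simple Lambertian shading using the stored per-atom normal.

    light_dir is assumed to be a unit vector pointing from the atom toward the light.
    """
    n_dot_l = max(float(np.dot(atom.normal, light_dir)), 0.0)
    return atom.albedo * light_color * n_dot_l
[/CODE]

All the expensive work (computing the normals) would happen once when the model is built, which fits the "more memory, little runtime processing" trade-off.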

If all of this is true, the big innovation would have to be in their 3D search algorithm for determining which atom should be drawn at which pixel. I think the reason they can say "unlimited detail" (*within the constraints of data storage) is that for each pixel on the screen they only draw a single atom. This means it doesn't matter how complex the environment is; they are always drawing exactly the same number of atoms (the number of pixels on screen). The hard part is searching through a scene and finding, for each pixel, the atom closest to the camera. It would be interesting to see what their data structures look like and how they search through them, but I doubt they would let anyone know, since this is their "secret sauce".
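As a toy illustration of that per-pixel search, using a flat voxel grid purely for simplicity (whatever they actually use is presumably hierarchical and far cleverer), the per-pixel loop might look something like this:

[CODE]
import numpy as np

def first_hit(occupied: set, grid_size: int, origin: np.ndarray, direction: np.ndarray,
              max_steps: int = 1024):
    """March a ray through a uniform grid and return the first occupied cell (3D DDA).

    `occupied` is a set of (ix, iy, iz) tuples and `origin` is assumed to lie inside
    the grid. A real engine would use a sparse hierarchical structure so empty space
    is skipped in large steps; this flat grid only illustrates the one-hit-per-pixel idea.
    """
    direction = direction / np.linalg.norm(direction)
    cell = np.floor(origin).astype(int)
    step = np.where(direction >= 0, 1, -1)
    next_boundary = cell + (step > 0)          # nearest cell boundary along each axis
    with np.errstate(divide="ignore", invalid="ignore"):
        t_max = np.where(direction != 0, (next_boundary - origin) / direction, np.inf)
        t_delta = np.where(direction != 0, np.abs(1.0 / direction), np.inf)
    for _ in range(max_steps):
        if tuple(cell) in occupied:
            return tuple(cell)                 # closest occupied cell: the atom this pixel draws
        axis = int(np.argmin(t_max))           # advance along the axis with the nearest boundary
        cell[axis] += step[axis]
        if not (0 <= cell[axis] < grid_size):
            return None                        # left the grid without hitting anything
        t_max[axis] += t_delta[axis]
    return None
[/CODE]

Run that once per pixel and the frame cost depends on the number of pixels (and how quickly empty space can be skipped), not on how many atoms the scene contains.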
 
Very cool demo, glad to see this tech is still kicking around. Reminds me a lot of voxels used in games a while back.

That was my thought after seeing the snail close up also; it's reminiscent of traditional specular lighting or bump mapping at a bit higher resolution. Lighting and animation seem to be the weakest points right now. JohnnyGatt, any thoughts on what the next steps are for the lighting tech? The interview mentions using point lights driven by the GPU; with a lot of GPU power left over, could things like ray tracing be achieved with greater efficiency? GPU-assisted physics also seems like an easy win when the majority of the graphics pipeline is handled on the CPU.
 
I'd love for this to be real and the next big thing in terms of the underlying technology, but the Holoverse clips looked choppy and not as smooth as the VR experiences currently available. Is this due to the video compression?

I'd like to know how well that Holoverse demo is running. What frame rates are being achieved with that hardware? What kind of response times are being achieved with the custom IR and motion sensors?
 
Keep in mind you are looking at a video of a video. IIRC JohnnyGatt said it was a smooth experience. He should be up in a few hours and will hopefully chime in.
 
They should franchise it out. I'll set one up at the local mall. :) Neat system.

I'll have to settle with the Vive for now.
 
John Gatt, I had another question...

Seeing as many people don't have 3D laser scanners in their homes, will the engine support photogrammetry and/or normal polygonal models converted into the UD format?

Yep, the snail was made using ZBrush. Take any polygonal model and import it.
 
I was really intrigued by this bit..

"In year four we did a worldwide public test of our technology that streams unlimited point cloud data over the web. This is a very important part of giving the internet a 3D future. Then this year we released the hologram room. We have 4 more major projects under development at the moment which are just as interesting as what has come forth so far."


Are these guys actively pursuing an AR-type web? An immersive "holographic" online world? Because I'd be very very interested in something like that.

This is one I wish I could answer, but zip...
 
This tech has the potential to shake up everything..

With this potential SDK, how would that look in the PC space anyway? Are we talking about running off DX12/Vulkan, or a completely new API?

Any chance of one of these Holoverse centers coming to Canada? I would love to check it out.

After watching that video, I have to agree. The death of the polygon is inevitable...

We can use DX12 for lighting and other things, but it's not needed to run the models. It will be used to enhance them.
 
Other than the "Unlimited Detail", I really don't see anything that stands out (pun intended) about the "hologram room". I know I'm oversimplifying, but it's just a room with images projected on the walls, ceiling, and floor with proper perspective correction. I think I saw something similar based on Trials on Tatooine when it was first announced.

You are right, others have made rooms like this. The best match is in Germany; it needs 12 systems, all with SLI setups, and that's for just one room. We do it with one non-SLI PC.
 
To add to my previous question (a ton more..):

I think I speak for a lot of people when I say that I would also love to see more high resolution, high FPS (YouTube does support 60fps these days) footage of your engine/technology/Holoverse in action.

I can clearly see a lot of changes from your previous videos from years ago, but it's hard to gauge just how much detail your technology is capable of when the footage is upscaled to 1080p from some unknown resolution. The higher-resolution-looking footage is also not done much justice by the low frame rate/compression of the videos. I'd hate to be that guy who doesn't believe in what you're all doing, but if your tech is as scalable as you all claim, with almost no performance penalty to speak of (1 atom vs 1,000,000,000 atoms), then I don't think it's asking a lot to see, at the very least, a 60fps high-resolution video of your previous demonstrations or even your current, up-to-date demonstrations.

In the five years since Euclideon's debut, processors have basically become 2^5 times more efficient/faster so even a 60fps 1080p version of your very first demonstration would be much appreciated.

Aside from the video request...

Q12: If I have a .3ds file how hard is it to convert to your file format?

A: We just drag the file into our tools. Then we must set an atom resolution for the model, eg. 512, 1024, 2048 etc. Think of it like a 2D screen bitmap resolution but in 3D. After we’ve moved a model into our engine we have the ability to enhance the detail as we are not limited by polygons. Why would you want square or octagon wheels when you can make them perfectly round?

Besides the time it takes for the one-time calculation to convert such models to atom maps, and assuming the starting .3ds file is an extremely high-polygon model, why would any model be chosen to be converted at 512 atom resolution? I'm assuming the model here is generated into an 'atom map' once and is then rendered on the fly by the engine, so why the resolution options if the scalability is as infinite as claimed? Are there texture limitations since textures involve human creation, or is there some other reason? If I had a perfectly round sphere made up of trillions of polygons, setting the conversion resolution to 512 atoms would render a blocky sphere compared to 1 million atoms. Could your tools generate a nearly unlimited number of atoms for the sphere from the high-poly model? How would performance be impacted at both conversion time and run time for a model like that?
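For a rough sense of scale (my own back-of-envelope numbers, nothing measured from Euclideon's tools), the number of surface atoms for a closed shape should grow roughly with the square of the conversion resolution, which is exactly why the resolution choice seems like it should matter:

[CODE]
import math

def estimate_surface_atoms(resolution: int) -> int:
    """Rough count of surface voxels for a sphere that fills a resolution^3 grid.

    Surface voxel count scales with surface area, i.e. roughly resolution^2.
    Illustrative numbers only.
    """
    radius = resolution / 2
    surface_area = 4 * math.pi * radius ** 2  # in voxel-sized units
    return int(surface_area)                  # roughly one surface voxel per unit of area

for res in (512, 1024, 2048):
    print(res, estimate_surface_atoms(res))
# prints roughly 0.8M, 3.3M, and 13.2M surface atoms respectively
[/CODE]

So quadrupling the linear resolution costs roughly 16x the storage, even though the renderer supposedly only ever touches one atom per pixel.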

If the spaces between atoms are generated into planes, how different is the concept from polygons? If there isn't a plane generated from the spaces between atoms, how exactly are the 'voids' filled if an atom resolution is too low for a model conversion?
 
I imagine for the projectors you have to have a very specific room size with a specific wall covering, though. Even now with VR it's sometimes hard to find the empty room space, I would think this would be even harder.

I have about the same size room set up for my Vive. It was a bit easier at Holoverse, as I was able to see the real world when I looked back; in the Holoverse space I feel more comfortable. I have no problem standing with my nose one inch from the wall: I know it is one inch away, but I don't get the immersion broken by the grid popping up at me. That said, we do get people running into the walls even after the warnings, but no injuries.
 
What they've built has been around for a long time and is called a "CAVE" (see Wikipedia). This type of setup was the state of the art in VR for quite a while, because you don't need to miniaturize things and fit them into a headset.

The projectors are all 3D projectors, and the glasses are either passive (like in a movie theater) or active (like some 3D TVs use).

You are right, it is a CAVE, but we lower the hardware specs needed to run it :)
 
Given that you're optimizing for a lower hardware requirement than traditional 3D/VR/AR rendering, have mobile devices moved to the forefront of the company's focus?
 
Yeah, I had forgotten about these guys as well. Honestly, I think it's pretty neat what they are doing, but the video also shows some of the failings. The sword/pickaxe tracking was noticeably laggy. The lighting is poor, and the environments are sometimes crazy detailed but seem haphazard. It's just hard to say whether they are onto something but need better artists to lay things out and handle the overall scene, or whether the tech just isn't capable of those things. Some of the animations looked terrible. Is that because they are skeletal (they mentioned a new animation system)? Other video games have good-looking skeletal animations, though, so I'm not sure that being skeletal is the problem; it's more likely their implementation of it or their animators.

I had the same feeling going into it the first time I saw the video and was worried about what I had seen. Then you put on the glasses and it all changes; it's like the feeling you get when you first use the Vive, but tenfold, as you can see your body. You can see a holographic wall that you know is not in front of you, but all of your senses freak out as you walk through it. You look at the snow and the fish and are amazed at how real they are and how real they move, and for the love of God you want to know how it's all floating around you. The video just cannot do it justice; we are working on green screen footage to show what it's like for the user.
 
We are making some tech demo videos to show off our work at high FPS. We had problems combining video clips; Adobe did not like what we wanted to do. I think we ended up running it through WMM to fix the audio in the end. Why would any model be chosen to be rendered at 512 atom resolution? Why make a grain of sand out of 512 atoms and not 2048? Data. Yes, you can make it out of 2048, but what will you get out of it apart from more data?

Given that you're optimizing for a lower hardware requirement than traditional 3D/VR/AR rendering, have mobile devices moved to the forefront of the company's focus? Not at this time.
 
If you just confirmed that the amount of data generated correlates directly with atom resolution, how does that not equate to an O(n) runtime, where n = the number of atoms?

Runtime analysis is performed against the worst-case scenario; if higher atom-resolution conversions generate more data, how exactly does that scale as well as the claims state?

And how are the spaces between atoms handled? I'm very curious whether they are treated similarly to the spans between the vertices that make up a polygon. How does your technology's handling of them differ from the points/vertices that make up a polygon?
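To be clear, I can see the standard counter-argument: if the atoms live in something octree-like, per-frame work would be roughly pixels times tree depth rather than O(n), something like this sketch (my assumption about the data structure, not anything they've confirmed):

[CODE]
import math

def per_frame_atom_lookups(pixels: int, total_atoms: int) -> int:
    """If atoms live in a sparse octree, each pixel needs at most ~depth node visits,
    where depth is about log8(total_atoms). Per-frame cost is then pixels * depth,
    which grows only logarithmically with scene size."""
    depth = max(1, math.ceil(math.log(total_atoms, 8)))
    return pixels * depth

print(per_frame_atom_lookups(1920 * 1080, 10**6))    # ~14.5 million node visits
print(per_frame_atom_lookups(1920 * 1080, 10**12))   # ~29 million: a million times more atoms, only ~2x the work
[/CODE]

But that only covers the per-pixel search at run time; it says nothing about conversion time, storage, or what happens when atoms have to move.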
 
As someone pointed out, this looks like a CAVE running Euclideon's point cloud rendering.

There are dozens of reasons why this sort of scan-based point cloud/voxel rendering hasn't caught on. Basically, it's a state-of-the-art image according to the parameters of 1995 computer graphics, meaning that it prioritizes resolution as the only form of fidelity.

From a development standpoint, working entirely from scan data is hugely limiting and cost-prohibitive. From an art director's perspective, you mostly lose the ability to design objects, environments, and creatures for a game. More specifically, it makes designing the lighting and the specific tonalities of objects, environments, and the complete image cumbersome or impossible. From a workflow standpoint, you could certainly build high-res 3D objects from polygons like a normal workflow and then convert them to point data, which would at least allow for original designs, but then you have to render them with close to no shader complexity and static lighting. Animation is another issue. If you look at the video, everything that moves is clearly rendered as polygons.

Also, in all of Euclideon's demos they haven't ever shown anything with purposeful lighting or any material complexity. It's clearly whatever lighting was captured at the time of the scan, baked into the objects' diffuse color data, with a few bright lights added on top (likely baked into the scene; I would be surprised if they were dynamic at all) in an attempt to tie it all together. Functionally, the materials are diffuse light only. There's no shader complexity like you see in any modern game or CG. Maybe if we are lucky there will be a 1990s-style glass-smooth reflective water plane.
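To be concrete about what "shader complexity" means here: even the cheapest modern material adds a view-dependent specular term on top of the diffuse color, for example the textbook Blinn-Phong highlight below (a standard formula, not anything engine-specific), and that is exactly the kind of per-pixel, camera-dependent response that baked diffuse data cannot reproduce:

[CODE]
import numpy as np

def blinn_phong_specular(normal, light_dir, view_dir, shininess=64.0):
    """View-dependent highlight term of a classic Blinn-Phong material.

    All vectors are unit length and point away from the surface. Shown only to
    illustrate the kind of per-pixel work baked diffuse colors can't capture.
    """
    half_vec = light_dir + view_dir
    half_vec = half_vec / np.linalg.norm(half_vec)
    n_dot_h = max(float(np.dot(normal, half_vec)), 0.0)
    return n_dot_h ** shininess

n = np.array([0.0, 1.0, 0.0])  # surface normal
l = np.array([0.0, 1.0, 0.0])  # light straight above
print(blinn_phong_specular(n, l, np.array([0.0, 1.0, 0.0])))  # camera aligned with light: strong highlight
print(blinn_phong_specular(n, l, np.array([1.0, 0.0, 0.0])))  # camera off to the side: highlight vanishes
[/CODE]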

Basically, Euclideon's "tech" (the most advanced in Australia) solves a problem developers don't actually have. Nobody making video games right now is thinking, "I wish I could have an unlimited triangle budget but be forced to lose all lighting data, material complexity, and animation, while at the same time making content production a more rigid and convoluted process."
 
Okiedokie, I made this argument in the wayback machine during the last "reveal" and I'll basically make it again...

The laser-scan point mapping of course looks fantastic, but it will mean nothing until the software can make it move and change lighting (as AMoody mentions). You may have a method (or enough RAM) to dump the point data into memory and display the static points in a pretty 3D space, but using it in a real game engine is going to be the snag when billions of "atoms" must move in virtual space.

Last time, I said I wanted moving environments. I wanted the massive "island" to have very basic physics animations, like trees swaying in the wind or the foliage bobbing in a breeze. That hasn't happened for obvious reasons: those enormous point maps need to be updated, en masse, then read again, at least 60 times per second for each frame render. That kind of IO is pure science fiction.
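Just to put my own rough numbers on "pure science fiction" (16 bytes per atom is my guess at a bare-minimum packed position + normal + color):

[CODE]
def update_bandwidth_gb_per_s(atom_count: int, bytes_per_atom: int = 16, fps: int = 60) -> float:
    """Bytes that would have to be rewritten (and re-read) per second if every atom
    moved on every frame."""
    return atom_count * bytes_per_atom * fps / 1e9

print(update_bandwidth_gb_per_s(1_000_000_000))  # ~960 GB/s for a billion moving atoms
[/CODE]

And that's before you even think about re-sorting whatever spatial structure the renderer searches through.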

So OK, have a static "background" space/environment with only some objects able to move, outside of the environment point map. Fair enough; our 3D game worlds have worked this way for decades, and that's exactly what the holoroom appears to be... but it seems quite obvious, even from the videos, that the animated objects on the screens have a very low point count and limited (animated) movement. This suggests to me that the objects (take the elephants running at the user, for example) have their own point maps, probably multiple ones for each animation frame, that ALL have to be in memory and are still relatively low-atom/point/voxel.

Yeah I said voxel because that's literally still the technical issue we're dealing with.

So how about this for my 2016 demand: 10 different, high-point, animated animal types, all on screen and moving at once, including that frog kicking up some dirt "atoms" and maybe that fish making some ripples in the pond "atoms", while the elephant stomps around, causing some grass to sway and/or be crushed, on normal consumer hardware. Do this and I will make a video of myself trying my damnedest to eat my hat after chugging a 750ml bottle of Jack Daniels.

Look, I *like* the holoverse rooms and they look like they could be a lot of fun regardless of the technical limitations, but when it comes to the patronizing tone of the "ooh smart Oculus guy" and the continual "we're just about there" claims of this technology being the immediate future of gaming, at least before some insane IO breakthrough that happens after we're all dead and buried, I still, flatly, call bullshit.
 
Johnny Gatt I heard you've been hired by Euclideon, so what exactly is your position in the company? Also, will the Euclideon website be updated anytime soon? It seems to be out of date in many areas and even broken on some pages.
 
If they have "unlimited detail", why do the graphics look so bad?

But in all seriousness, these guys are doing a really good job emulating Infinium Labs. Overpromise, underdeliver, slowly go out of business.
 
Actually, Infinium Labs, if I am correct, never delivered anything and went out of business quickly, which says something about its true focus.
 
This is great tech for Holoverse-type stuff and even for archival purposes, like museums that can walk patrons through a perfect recreation of the Sistine Chapel. But Euclideon should not market this in any way, shape, or form as if it's going to revolutionize gaming. That's hugely misleading when games of today can outdo this tech with a studio of people and great hardware to match. Museums and other commercial uses are perfect matches for this, but at the end of the day they are not true interactive experiences where one can change the environment around them like in video games. Touting zero artist involvement and small-team capabilities only furthers the point that this is made for uses where the end developers have little or no incentive to dedicate such resources, like museums or city mapping.

If Euclideon was a bit more honest then I'd have more respect for them. As it stands, all I see are brief attempts at demoing their software to drum up support from an industry they don't stand to make any difference in. Marketing to the wrong people either intentionally or unintentionally will win you money in the short term but not in the long term.
 
Sell your tech, or co-develop it with video game developers. Worth BILLIONS!
Or open up a holodeck-themed arcade that doesn't seem to really rely that much on your tech, for $36/hr.
I have a feeling I know why one is being pursued and not the other.
 
For me, I *want* to believe this. Now, I don't particularly care about the VR/hologram angle, but I really want to see this technology working first-hand on a modern PC. I also really want some hard data on what kind of hardware this requires. They go to great lengths to stress that this doesn't rely on the GPU, so how much CPU does it use? I'm also not entirely convinced that "unlimited" is anything more than marketing speak. It's a terrible example, but we got told for years that our cellphone data plans were unlimited as well. So, pushing the marketing speak aside, what hardware does this technology actually use, and how much of it? I think the answer to that question will determine how successful this will or won't be. If this can run that level of detail on an average computer, it could be a massive game changer. If it still requires perhaps not a beast GPU but a beast CPU, or even multiple CPUs, then it might not be nearly as much of a shift.

Either way, I remain hopeful that this is at least 80% of what it claims to be.
 
Have you ever downloaded a screener of a movie? Try doing it with a 3D game. We are filming projected game footage and you want it to look amazing? Do you have a Vive or Rift? If you do, you will also know how hard two screens can be on your video card, but we are doing it with four, and we have a map the size of all of World of Warcraft put together. After you get to have your first trip into a Holoverse center, feel free to come back and comment about what we have or have not done :)

Thank you for taking the time to answer my comment...

To answer your question, yes, I am familiar with filming a display device, and I also know that it can make things look both better than reality (regarding quality, resolution, framerate, etc.) and worse. However, 4K 60fps capture devices are very cheap, and video editing software is also cheap, so I'm sure you could connect one up to capture the output of your rendering computer and show us what your software is truly capable of. YouTube does 4K 60fps, so you have a zero-cost place to upload your video. I do not need it to be in 3D to tell me if it is good or not, as I have no way to view 3D, but I am fully aware of what it brings to the table. You can just capture the forward viewport. It may not be great for showcasing your 3D skills, but it WILL showcase the quality of the graphics/models and physics at play.
 
Did shit get too real for Johnny? Are we not going to see another post from him for another 5 years?

If you make a video that attempts to address community skepticism and then attempt to address said skepticism interactively, it doesn't do anything for credibility when the questions aren't fully answered and then the answers stop altogether.
 
Ever bother to think that your own bias is so evident that no matter the answer he provides he knows you will just disagree with it?
 
Re-read my questions before my last two posts. What would there be to disagree with if I'm asking fairly objective questions? I've got a Computer Science degree under my belt and this is the type of stuff I would raise my hand up in class to question.

Prior to my highly opinionated last two posts, the questions I posed were very straightforward and answerable. I was truly curious when I asked everything I did, and again, they are rather simple questions that should dispel any doubts about the claims this company is making. I'm not about to jump on the bandwagon just because this looks great. The claims made merit the tough questions, and if they aren't fully answered or aren't answered at all, then what does that say? As in the workplace, the best answer is sometimes as simple as "I don't know, I'll try to find out and get back to you" instead of answering vaguely or not answering at all.

Even presidential candidates can answer shitty biased questions and if a company like this is aiming to dispel skepticism then the best they can do is attempt to answer questions honestly. It's a total PR gimmick if they only end up answering the easy non-threatening questions.

Imagine the cure for cancer was found...do you think the media should immediately buy it and ask about how great it felt when they came up with it or should they ask about the numerous studies involved in proving that it works?
 
If he doesn't want to answer them, he doesn't have to. It doesn't mean he is "hiding something"; it means it is not beneficial to the business to release that data. What if the answer you wanted would require a release of IP or sensitive data? Do you think you are still ENTITLED to that answer? Secondly, even telling you the answer is sensitive gives insight to others, including their competitors.
 
"I'll try to get back to you on that."

It's simple. My feelings aren't hurt by not getting any responses but it fosters doubt.
 
Just like I said years ago, this will never amount to anything; the company is a sham, and they are trying to scam y'all.
 
So, do you have to wear glasses/VR goggles in these rooms, or is stuff just floating there in the air without any eyewear needed? If it's the latter, that's really impressive...


EDIT: N/M You do need glasses.
 
Last edited:
Yeah it appears you need some sort of hybrid active/passive 3D glasses.
 