Euclideon Makes World’s Most Realistic Graphics

I don't know what pipe they are smoking, but whatever it is, it must be good s***.

Those graphics are obviously computer generated: the shadows are horrible and the shading is way off. The reflectivity of the surfaces is wrong, and there's none of the variance a real light source would produce. Heck, even the color temperature of the sun is wrong.
 
The idea for this is neat. Take an already built physical model, scan it, import it, and use it. That would be much faster than taking pictures and having artists recreate it virtually and re-render it. It still doesn't look "real" to me. Still computer generated.

I disagree about it being faster. Sure, it might be faster for a room, but only if it's a static, non-interactive 3D model. Imagine doing this with an area the size of Skyrim's world. Scanning in that much data would be a nightmare. On top of that, you still need to identify all the objects in the world that can be manipulated. Their technology still has to be told what is a door, a wall, an operable chest or a basketball.
 
I don't know what pipe they are smoking, but whatever it is, it must be good s***.

Those graphics are obviously computer generated: the shadows are horrible and the shading is way off. The reflectivity of the surfaces is wrong, and there's none of the variance a real light source would produce. Heck, even the color temperature of the sun is wrong.

The lighting is wrong because it's a frozen moment in time, or a 3D engine for Matrix Bullet Time (tm). :D
 
I would love to see this scanning adapted to real time, so a movie could be filmed in 3D and played back on an Oculus where each viewer can look around individually.
It isn't bad. They found a use: simplifying environment creation.

But yeah, everything being in focus, and movement that is single-directional relative to the camera aim, are usually a dead giveaway of CG.
 
Anyone who couldn't tell the "real world" shots were computer generated the second they were presented needs an eye exam. Color me unimpressed.

/beat that dead horse deader
 
John Carmack himself said a few years ago that what they claim to be doing in real time wasn't possible. If it had been only two years down the road, I am sure he would have said so. But he didn't.
 
They have a very nice method of displaying 3D point data, but that's all it is. Completely static point data.

There's a reason all they show is a totally static 3D space. Rendering millions of voxels is nothing to a modern video card or CPU. They 'stream' it off an HDD just like anything else would be loaded into memory. The catch is actually changing the point data on the fly, which absolutely is not being done in any of their videos.

3D point data files are FUCKING HUGE in general. Imagine trying to read a point data file that is GIGABYTES in size, find the particular points that need to be modified (say just one tree needs to sway), change the points, save the file, read it back, update it again for the next frame's worth of movement, wash rinse repeat at 60 fps. Yeah, not possible.
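A rough back-of-the-envelope shows the scale of the problem. The file size and drive speeds below are illustrative assumptions, not anything Euclideon has published:

```python
# Bandwidth needed to rewrite a point-cloud file every frame.
# All numbers here are illustrative assumptions.

file_size_gb = 4    # assume a modest 4 GB point file for one scene
fps = 60            # target frame rate

required_gbps = file_size_gb * fps   # GB/s to rewrite the whole file per frame
print(f"whole file per frame: ~{required_gbps} GB/s")
print(f"even 1% per frame: ~{required_gbps * 0.01:.1f} GB/s")
# -> 240 GB/s and 2.4 GB/s respectively, versus roughly 0.15 GB/s for a
#    spinning disk and 0.5 GB/s for a SATA SSD.
```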

This is why there are no games using this tech now, and why the 'games division' they are developing hasn't already been revolutionizing the industry. It takes a hundred million metric fucktons of I/O to rewrite 3D point maps in real time. Now factor in physics engines, even simple ones, that have to watch over millions or even billions of points in a simulated 3D space, determine how they transform in motion, handle collision detection, etc., THEN write the changes to memory/disk and read them back into the renderer.

It's a beautiful tech for something like Google Street View, where it could become a virtual world-space to move around in, but it will never be viable for gaming until I/O capabilities increase by multiple orders of magnitude. Otherwise, we'll have much (MUCH) lower-quality voxel graphics for anything that is actually animated.
 
What's with all the hate? You'd think that most of the people here are game developers or something. Geez. These guys aren't asking for any money, just showing a demo of some interesting technology. If they can pull it off in a game, then great, we will all enjoy it. If not, then no skin off our noses. So step off the hate wagon, m'kay?

This isn't interesting technology, this is snake oil.
 
With low-resolution VR goggles around the corner, this now has great potential in the consumer market. Obviously the resolution here is still very low, so it's a perfect fit for a 1440p Rift or Morpheus. Keep the user in the middle of the scene, away from the walls, and I can see it looking very realistic given the lack of visual acuity in today's VR devices. Like looking around with legally blind eyes. But... you'd still feel like you're there.
 
I knew they were computer graphics. Still doesn't look "real".

That's not the point. The point is that this COULD enable a much greater leap forward TOWARDS photorealism in games. Of course you don't get an exact copy of reality from version 1.0 of this type of thing, but it still looks a shit ton better and has vastly more detail than current games.
 
That's not the point. The point is that this COULD enable a much greater leap forward TOWARDS photorealism in games. Of course you don't get an exact copy of reality from version 1.0 of this type of thing, but it still looks a shit ton better and has vastly more detail than current games.

For any given programming problem there are usually multiple known algorithms for solving it, with performance trade-offs that depend on the usage scenario. This point cloud stuff is a known algorithm that is really efficient for accurately capturing real world environments and for scaling geometry, which are the two things Euclideon have shown it doing. It is very inefficient when it comes to animating and shading. They're not going to make that better by working at it; the trade-offs in performance are inherent limitations of the algorithm.

So no, it can't enable a much greater leap forward towards photorealism in games. It has vastly more of a certain kind of detail compared to current games, but it completely lacks other things current games have, like animation and shaders, which it will never be able to compete with performance-wise. At some point the cost of doing animation and shading with photorealistic point clouds may become trivial as hardware continues to get faster, but we're several orders of magnitude in performance away from that happening, which puts this several decades away from being useful in gaming. And at that point the polygon counts in traditional 3D will be several orders of magnitude greater too.
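A toy sketch of where the cost actually lands. The counts are assumptions for illustration; the point is that a static scene builds its spatial index once, while any animation means transforming points and keeping that index valid every frame:

```python
# Why animating a point cloud hurts: the renderer depends on a spatial index
# (e.g. an octree) over the points. Counts below are illustrative assumptions.

mesh_vertices = 50_000         # a detailed skinned game character
cloud_points  = 500_000_000    # a scanned scene at "unlimited detail" density

# A skinned mesh transforms ~50k vertices per frame on the GPU: trivial.
# Moving even 1% of the cloud still means touching 5 million points...
moved = cloud_points // 100

# ...and every moved point falls out of its octree node, so the index has to
# be patched or rebuilt before the next per-pixel lookup is correct.
print(f"mesh:  {mesh_vertices:,} vertex transforms per frame")
print(f"cloud: {moved:,} point moves per frame, plus index maintenance")
```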
 
The reflectivity of the surfaces is wrong, and there's none of the variance a real light source would produce.
It appears to me that there is absolutely no reflectivity on any surface, just completely flat matte/diffuse color. That really kills it more than anything. If they can get an artist in there to paint a specular layer, they could probably add a ton of realism.
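For what it's worth, a specular layer is cheap to evaluate once you have per-point normals. A minimal Blinn-Phong sketch; the vectors and shininess here are made-up values for illustration:

```python
import math

def blinn_phong_specular(normal, light_dir, view_dir, shininess=32.0):
    """Specular term for one surface point; all vectors assumed unit length."""
    h = [l + v for l, v in zip(light_dir, view_dir)]   # half-vector
    norm = math.sqrt(sum(c * c for c in h)) or 1.0
    h = [c / norm for c in h]
    n_dot_h = max(0.0, sum(n * c for n, c in zip(normal, h)))
    return n_dot_h ** shininess

# Flat diffuse only (what the demo looks like) vs. diffuse plus a highlight:
diffuse = 0.6
spec = blinn_phong_specular((0, 1, 0), (0, 0.707, 0.707), (0, 0.707, -0.707))
print(f"flat: {diffuse:.2f}   with specular: {min(1.0, diffuse + spec):.2f}")
```

The catch is that a scan bakes the lighting into the color and captures no material response, so an artist would still have to author that layer, which is exactly the point above.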
 
It appears to me that there is absolutely no reflectivity on any surface, just completely flat matte/diffuse color. That really kills it more than anything. If they can get an artist in there to paint a specular layer, they could probably add a ton of realism.

It looks to me like, because the 'scene' is recorded from fixed points, all the lighting and reflection is from the wrong point of view once you have moved. It also lacks the rays of sunlight that should be in the scenes, though a laser scanner probably wouldn't record those correctly anyway.
 
Ah, yes, the company which sells snake oil is still selling snake oil, in a new package.
While this looks impressive on paper and in their "demos", none of their technologies have ever made it into any games or shipping products.

More of the same tired rehashed nonsense from a money-grab company.
 
Ok, so a "search engine" finds one "atom" of point data for every pixel on the screen?
So, in layman's terms, how does this "atom" become a rendered polygon/triangle or whatever? How does it figure out your point of view?
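Euclideon haven't published the algorithm, so take this as a guess at what their description implies: there may be no polygon at all; each screen pixel ends up owned by the single nearest point that projects onto it, found by searching a spatial index. A deliberately brute-force, hypothetical sketch of just that end condition (none of this is Euclideon's code):

```python
# Hypothetical "one atom per pixel": every point lands on one pixel, and the
# nearest point wins it, so no triangles are ever built. A real system would
# search a spatial index per pixel instead of scanning every point.

def render_points(points, width, height):
    """points: (x, y, depth, color) tuples already projected to screen space.
    The point of view is applied earlier, when projecting world points."""
    nearest = {}  # pixel -> (depth, color)
    for x, y, depth, color in points:
        px = (int(x), int(y))
        if 0 <= px[0] < width and 0 <= px[1] < height:
            if px not in nearest or depth < nearest[px][0]:
                nearest[px] = (depth, color)  # closest atom owns the pixel
    return nearest

scene = [(1, 1, 5.0, "brick"), (1, 1, 2.0, "leaf"), (3, 0, 9.0, "sky")]
print(render_points(scene, 4, 2))
# {(1, 1): (2.0, 'leaf'), (3, 0): (9.0, 'sky')}
```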
 
Even after all these years, his accent and voice are so damn grating I want to punch him in his posh face.
 
Did you watch the video? From the end: "Regarding the question: Will this technology be used in games? Yes. Euclideon is working on two games that use solidscan technology. Yes, we can do animation, and yes, it's very good. But that will remain hidden away until our next video."
That, I don't believe. If they make a game with their engine, I'm expecting something like Myst.
 
Feels like every game today uses a children's-level palette with tons of oversaturated colors. You even need a dark mod for Diablo 3 to remove that eye-defect gray shade, sharpen the image, get rid of the oversaturated colors, and balance the contrast a little. Diablo 2 seemed to use real-world textures and lighting, so even highly saturated areas looked really good, because the saturation came from the real world and was accurate. Now everything is oversaturated.
I hope this Euclideon won't eat up our internets with point cloud data streams :D
 
it has "scam" written all over it. unlimited detail cannot exist without unlimited memory.

Well, they do claim to use terabyte hard drives for a 'render' from their laser point scanner. If we assume that one church takes, say, at minimum 1TB and at most 3-4TB of hard drive space (i.e. it fits onto one drive), that's far denser data than anything in modern-day gaming. I've never heard of one room's assets taking that much space. I'd wager a Skyrim tavern, for example, uses maybe 300MB total of art assets between models, chairs, desks, walls, lighting fixtures, etc., since the entire game is only a few GB. If there were a Skyrim mod that used 1TB for a tavern alone, well, it would be insanely detailed and would likely seem like unlimited detail.

I'd wonder about the loading times, though, when trying to stream data off a 1TB drive, even if it's an SSD.
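Some rough arithmetic on what 1TB of point data implies. Bytes-per-point and drive speeds here are assumptions for illustration, not figures Euclideon has quoted:

```python
# What 1 TB of scan data means; all constants are illustrative assumptions.

scan_bytes      = 1e12   # 1 TB scan of one building
bytes_per_point = 16     # assume xyz as three floats plus RGBA color

print(f"~{scan_bytes / bytes_per_point:.1e} points")   # ~6.2e+10: tens of billions

for name, gb_per_s in [("HDD", 0.15), ("SATA SSD", 0.5)]:
    minutes = scan_bytes / (gb_per_s * 1e9) / 60
    print(f"full read over {name}: ~{minutes:.0f} min")
# -> roughly 111 min on an HDD and 33 min on a SATA SSD, so anything
#    interactive has to stream only the visible subset, never the whole file.
```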
 
I knew they were computer graphics. Still doesn't look "real".

A big improvement, but 100% agreed. You are still going to be employing tons of artists to make their point clouds look pleasing.
 