Most importantly, how do the atoms form a surface that can properly reflect, scatter, and occlude light? A point, by definition, has no surface area, and if I were to calculate the direction of a reflected light ray, the easiest way would be to bounce it off a simple surface like a polygon. How can 'atoms' be easier or computationally lighter?
I was wondering the same thing: how can you take an "atom" and know which way light should reflect off it? My guess is that when they create a model, each atom gets a "normal" vector that records the surface orientation of the macro object at that point, which is enough to calculate light reflections. Each atom could also carry surface properties that define how it scatters light, or whether its reflection is more specular or diffuse. This takes up more memory, but it doesn't require much processing power at runtime. Something like the sketch below.
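Here's a minimal sketch of that idea in Python. The atom layout, the material fields, and the Blinn-Phong shading are all my own assumptions for illustration, not anything the company has described:

```python
import math

# Hypothetical "atom" record: a point sample that carries a precomputed
# surface normal and a couple of material parameters, so shading needs
# no surface reconstruction at runtime.
class Atom:
    def __init__(self, position, normal, albedo, shininess):
        self.position = position    # (x, y, z) in world space
        self.normal = normal        # unit vector, baked in when the model is built
        self.albedo = albedo        # diffuse reflectance, 0..1
        self.shininess = shininess  # specular exponent (Blinn-Phong style)

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return (v[0]/n, v[1]/n, v[2]/n)

def shade(atom, light_dir, view_dir, light_intensity=1.0):
    """Blinn-Phong shading of one atom using its stored normal.

    light_dir and view_dir are unit vectors pointing from the atom toward
    the light / camera. This is the cheap part: a few dot products per pixel,
    because the expensive question "what is the surface here?" was answered
    offline when the normal was baked into the atom.
    """
    n = atom.normal
    diffuse = atom.albedo * max(0.0, dot(n, light_dir))
    half = normalize((light_dir[0] + view_dir[0],
                      light_dir[1] + view_dir[1],
                      light_dir[2] + view_dir[2]))
    specular = max(0.0, dot(n, half)) ** atom.shininess
    return light_intensity * (diffuse + specular)

# Example: an atom on an upward-facing surface, lit from above and slightly ahead.
a = Atom((0, 0, 0), (0, 1, 0), albedo=0.8, shininess=32)
print(shade(a, normalize((0.3, 1.0, 0.2)), normalize((0.0, 1.0, 1.0))))
```

The trade-off is exactly the one described above: more bytes per point (position plus normal plus material), in exchange for a per-pixel shading cost that doesn't depend on scene complexity.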
If all of this is true, the big innovation would have to be their 3D search algorithm for deciding which atom gets drawn at which pixel. I think the reason they can say "unlimited power" (within the constraints of data storage) is that for each pixel on the screen they only draw a single "atom". That means it doesn't matter how complex the environment is; they always draw exactly the same number of atoms, namely the number of pixels on the screen. The hard part is searching the scene and finding, for each pixel, the atom closest to the camera. It would be interesting to see what their data structures look like and how they search through them, but I doubt they would let anyone know, since this is their "secret sauce". A rough sketch of what such a search might look like follows.
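For illustration only, here is one way a per-pixel "find the nearest atom" search could work: put the atoms in an octree, and for each pixel walk the tree front to back along that pixel's ray, stopping at the first atom hit. The node layout, the tiny-sphere hit test, and every parameter below are assumptions on my part, not their actual structure:

```python
import math
import random

LEAF_SIZE = 8       # max atoms per leaf before it gets subdivided (assumption)
ATOM_RADIUS = 0.01  # atoms treated as tiny spheres for the hit test (assumption)

class Node:
    """One octree node over a cubic region; leaves hold the atoms inside them."""
    def __init__(self, lo, hi, atoms, depth=0):
        self.lo, self.hi = lo, hi
        self.children = []
        self.atoms = atoms
        if len(atoms) > LEAF_SIZE and depth < 12:
            mid = tuple((lo[i] + hi[i]) * 0.5 for i in range(3))
            buckets = [[] for _ in range(8)]
            for a in atoms:
                buckets[sum((a[i] >= mid[i]) << i for i in range(3))].append(a)
            for octant, bucket in enumerate(buckets):
                if bucket:
                    clo = tuple(mid[i] if (octant >> i) & 1 else lo[i] for i in range(3))
                    chi = tuple(hi[i] if (octant >> i) & 1 else mid[i] for i in range(3))
                    self.children.append(Node(clo, chi, bucket, depth + 1))
            self.atoms = []  # internal nodes keep no atoms of their own

def ray_box_entry(origin, direction, lo, hi):
    """Distance along the ray where it enters the box, or None if it misses."""
    tmin, tmax = 0.0, math.inf
    for i in range(3):
        d = direction[i] if abs(direction[i]) > 1e-12 else 1e-12
        t1, t2 = (lo[i] - origin[i]) / d, (hi[i] - origin[i]) / d
        if t1 > t2:
            t1, t2 = t2, t1
        tmin, tmax = max(tmin, t1), min(tmax, t2)
    return tmin if tmin <= tmax else None

def ray_atom_hit(origin, direction, atom):
    """Nearest positive hit on the atom's tiny sphere (direction must be unit length)."""
    oc = tuple(origin[i] - atom[i] for i in range(3))
    b = sum(oc[i] * direction[i] for i in range(3))
    disc = b * b - (sum(c * c for c in oc) - ATOM_RADIUS ** 2)
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

def first_atom(node, origin, direction):
    """Front-to-back descent: return (distance, atom) of the first atom hit, or None."""
    if node.children:
        # Visit only the children the ray actually enters, nearest entry first,
        # and stop as soon as one of them yields a hit.
        hits = []
        for child in node.children:
            t = ray_box_entry(origin, direction, child.lo, child.hi)
            if t is not None:
                hits.append((t, child))
        for _, child in sorted(hits, key=lambda p: p[0]):
            found = first_atom(child, origin, direction)
            if found is not None:
                return found
        return None
    best = None
    for atom in node.atoms:
        t = ray_atom_hit(origin, direction, atom)
        if t is not None and (best is None or t < best[0]):
            best = (t, atom)
    return best

# Example: a random cloud of 50,000 atoms, and one "pixel" ray fired into it.
random.seed(1)
cloud = [(random.random(), random.random(), random.random()) for _ in range(50_000)]
root = Node((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), cloud)
print(first_atom(root, (0.5, 0.5, -1.0), (0.0, 0.0, 1.0)))
```

The point of the sketch is the scaling argument from the comment above: the per-pixel cost grows roughly with tree depth (i.e. logarithmically with scene detail), and each pixel ends up returning exactly one atom no matter how many atoms the scene contains.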