A mobile AR experiment that uses depth sensors and camera capture to recompute surface normals and re-light video scenes with a physically based renderer, virtual lights, and virtual gobos - all for a more dramatic look - an application of computational cinematography.
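
The normal-recomputation step is essentially a finite-difference estimate over the depth map. Below is a minimal Swift sketch of the idea, assuming a pinhole camera model and ARKit-style column-major intrinsics; the `unproject`, `estimateNormal`, and `depth` names are illustrative, not this project's actual code.

```swift
import simd

// Unproject pixel (x, y) to a camera-space point using metric depth and the
// pinhole model. `intrinsics` is the 3x3 camera matrix (column-major, as
// ARCamera provides it): fx, fy on the diagonal, principal point in column 2.
func unproject(_ x: Int, _ y: Int,
               depth: (Int, Int) -> Float,
               intrinsics: simd_float3x3) -> simd_float3 {
    let z  = depth(x, y)
    let fx = intrinsics[0][0], fy = intrinsics[1][1]
    let cx = intrinsics[2][0], cy = intrinsics[2][1]
    return simd_float3((Float(x) - cx) * z / fx,
                       (Float(y) - cy) * z / fy,
                       z)
}

// Estimate the surface normal at (x, y) from central differences of
// neighbouring unprojected points; the cross product of the two tangents
// gives the normal (orientation depends on the chosen handedness).
func estimateNormal(x: Int, y: Int,
                    depth: (Int, Int) -> Float,
                    intrinsics: simd_float3x3) -> simd_float3 {
    let dx = unproject(x + 1, y, depth: depth, intrinsics: intrinsics)
           - unproject(x - 1, y, depth: depth, intrinsics: intrinsics)
    let dy = unproject(x, y + 1, depth: depth, intrinsics: intrinsics)
           - unproject(x, y - 1, depth: depth, intrinsics: intrinsics)
    return simd_normalize(simd_cross(dx, dy))
}
```

In practice the noisy, low-resolution depth mentioned below means the depth samples usually need filtering (or a wider difference stencil) before the cross product produces stable normals.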


Using ARKit, SceneKit, custom Metal passes, and SCNTechnique post-processing, a close approximation of scene relighting is possible. However, today's depth sensors don't output at high resolution, are noisy both spatially and temporally, and their frames are not in sync with the video (the frame rates differ), and surface material properties can't easily be recomputed, so the result is interesting but could be better. A taste of what's to come.
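
For reference, a minimal sketch of the ARKit/SceneKit wiring described above, assuming a LiDAR-equipped device and a multi-pass SCNTechnique defined in a plist; the "Relight" resource name and the `configure` function are placeholder assumptions, not this project's actual files.

```swift
import ARKit
import SceneKit

// Enable per-frame LiDAR scene depth and attach an SCNTechnique-based
// post-processing chain (the custom Metal passes) to an ARSCNView.
func configure(view: ARSCNView) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        config.frameSemantics.insert(.sceneDepth)   // depth map on every ARFrame
    }
    view.session.run(config)

    // Load the multi-pass relighting technique from a property list.
    if let url = Bundle.main.url(forResource: "Relight", withExtension: "plist"),
       let dict = NSDictionary(contentsOf: url) as? [String: Any],
       let technique = SCNTechnique(dictionary: dict) {
        view.technique = technique
    }
}

// Each frame, the latest depth map is available as
// view.session.currentFrame?.sceneDepth?.depthMap (a CVPixelBuffer),
// which would need converting to an MTLTexture before the relighting
// passes can sample it alongside the captured camera image.
```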