ReconFusion: 3D Reconstruction with Diffusion Priors
Using a multi-view image-conditioned diffusion model to regularize a NeRF enables few-view reconstruction.
I am a researcher in 3D computer vision, generative models, and computer graphics. I was previously a research scientist at Google. I received my Ph.D. from the University of Washington in 2021, where I was advised by Ali Farhadi and Steve Seitz.
Preconditioning camera optimization during NeRF training significantly improves a NeRF's ability to jointly recover the scene and camera parameters.
By applying ideas from level set methods, we can represent topologically changing scenes with NeRFs.
Given a large collection of images of an object category, you can train a NeRF to render them from novel views and interpolate between different instances.
Learning deformation fields with a NeRF lets you reconstruct non-rigid scenes with high fidelity.
By learning to predict geometry from images, you can do zero-shot pose estimation with a single network.
By pairing large collections of images, 3D models, and materials, you can create thousands of photorealistic 3D models fully automatically.