Keunhong Park
Research Scientist
Education
University of Washington
Ph.D. in Computer Science, advised by Steven M. Seitz and Ali Farhadi.
Supported by the Samsung Scholarship, 2015-2020 ($50,000/year for 5 years).
University of Illinois at Urbana-Champaign
B.S. in Computer Science, advised by Derek Hoiem.
Employment
World Labs
Founding Member leading pretraining efforts. Creator of RTFM.
San Francisco, CA
Google
Research Scientist. Co-built the team and technologies to generate 3D assets for products on Google Search.
San Francisco, CA
Google
Research Intern on the Project Starline team. Published Nerfies and HyperNeRF.
Seattle, WA
NVIDIA
Robotics Research Intern at the Seattle Robotics Lab. Worked on LatentFusion.
Seattle, WA
Amazon
Applied Scientist Intern on the Amazon Go team. Worked on human activity detection.
Seattle, WA
Ministry of National Defense, Cyber Command
Software Engineer (mandatory military service). Worked on network monitoring software.
Seoul, Korea
Google
Software Engineering Intern. Created a document conversion system for Google Cloud Print.
Mountain View, CA
Qualcomm
Software Engineering Intern. Optimized performance of JGit, reducing push times from hours to seconds. Implemented multi-master support for Gerrit Code Review.
Boulder, CO
Publications
RTFM
K. Park with collaborators at World Labs
A real-time, auto-regressive diffusion model renders persistent 3D worlds on a single GPU.
IllumiNeRF 3D Relighting without Inverse Rendering
X. Zhao, P. Srinivasan, D. Verbin, K. Park, R. Martin-Brualla, P. Henzler
3D relighting by distilling samples from a 2D image relighting diffusion model into a latent-variable NeRF.
ReconFusion: 3D Reconstruction with Diffusion Priors
R. Wu, B. Mildenhall, P. Henzler, K. Park, R. Gao, D. Watson, P. Srinivasan, D. Verbin, J. Barron, B. Poole, A. Holynski
Using a multi-view image-conditioned diffusion model to regularize a NeRF enables few-view reconstruction.
CamP: Camera Preconditioning for Neural Radiance Fields
K. Park, P. Henzler, B. Mildenhall, J. Barron, R. Martin-Brualla
Preconditioning camera optimization during NeRF training significantly improves the ability to jointly recover scene and camera parameters.
HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields
K. Park, U. Sinha, P. Hedman, J. Barron, S. Bouaziz, D. Goldman, R. Martin-Brualla, S. Seitz
By applying ideas from level set methods, we can represent topologically changing scenes with NeRFs.
FiG-NeRF: Figure Ground Neural Radiance Fields for 3D Object Category Modelling
C. Xie, K. Park, R. Martin-Brualla, M. Brown
Given many images of an object category, you can train a NeRF to render them from novel views and interpolate between different instances.
Nerfies: Deformable Neural Radiance Fields
K. Park, U. Sinha, J. Barron, S. Bouaziz, D. Goldman, S. Seitz, R. Martin-Brualla
Learning deformation fields with a NeRF lets you reconstruct non-rigid scenes with high fidelity.
LatentFusion: End-to-End Differentiable Reconstruction and Rendering for Unseen Object Pose Estimation
K. Park, A. Mousavian, Y. Xiang, D. Fox
By learning to predict geometry from images, you can do zero-shot pose estimation with a single network.
PhotoShape: Photorealistic Materials for Large-Scale Shape Collections
K. Park, K. Rematas, A. Farhadi, S. Seitz
By pairing large collections of images, 3D models, and materials, you can create thousands of photorealistic 3D models fully automatically.