Lei Xiao
Email | Resume | LinkedIn | Twitter | Google Scholar

About Me

I am a staff research scientist and tech lead manager at Meta Reality Labs Research, leading a team of AI research scientists focused on advancing 3D computer vision, graphics, and imaging for immersive XR technologies. Before joining Meta in 2017, I earned my Ph.D. in Computer Science with a focus on computational imaging from the University of British Columbia.

My recent research focuses on neural rendering for real-time graphics and 3D computer vision, including view synthesis, immersive video, telepresence, 3D/4D reconstruction, and generative content. My work has been published in top-tier venues including SIGGRAPH, SIGGRAPH Asia, CVPR, ICCV, ECCV, and NeurIPS, and has been featured in the Meta Connect keynote, the Oculus Blog, demos to the Meta CTO, and various media outlets. I have also had the privilege of giving invited talks at NVIDIA GTC, the Stanford SCIEN talk series, and the CV4MR workshop at CVPR. Please see my publications below for details.

Beyond core algorithm research, as a tech lead I collaborate with a multidisciplinary team of software engineers, hardware engineers, and technical artists to design and develop real-time, end-to-end prototype demonstrations. Our published work includes gaze-contingent rendering for varifocal VR headsets, real-time perspective-correct MR passthrough, real-time supersampling for high-resolution VR, and ultra-wide field-of-view MR passthrough.

Before joining Meta, during my Ph.D. studies under the supervision of Prof. Wolfgang Heidrich at UBC, I worked on computational photography and imaging, including image restoration, superresolution, time-of-flight imaging, and non-line-of-sight imaging.


Selected Publications


Geometry-guided Online 3D Video Synthesis with Multi-View Temporal Consistency

Hyunho Ha, Lei Xiao, Christian Richardt, Thu Nguyen-Phuoc, Changil Kim, Min H. Kim, Douglas Lanman, Numair Khan

Online novel view synthesis of dynamic scenes from multi-view capture, a step towards real-time telepresence

CVPR 2025

LIRM: Large Inverse Rendering Model for Progressive Reconstruction of Shape, Materials and View-dependent Radiance Fields

Zhengqin Li, Dilin Wang, Ka Chen, Zhaoyang Lv, Thu Nguyen-Phuoc, Milim Lee, Jia-Bin Huang, Lei Xiao, Yufeng Zhu, Carl S. Marshall, Yuheng Ren, Richard Newcombe, Zhao Dong

Large reconstruction model for progressive, high-quality inverse rendering of shape, materials, and view-dependent radiance fields

CVPR 2025

Wide Field-of-View Mixed Reality

Lei Xiao, Yang Zhao, Dave Lindberg, Joel Hegland, Eric Penner, Dan Tebbs, Daniel Terpstra, Seth Moczydlowski, Ian Ender, Yu-Jen Lin, Nick Chu, Julia Majors, Douglas Lanman

MR passthrough headset prototype with an ultra-wide field of view — approaching the limits of human vision — designed to create a seamless, almost invisible headset experience

Under single-blind review for SIGGRAPH 2025 Emerging Technologies | Media report on a previous iteration of our prototype, shared publicly by the Meta CTO

ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields

Edward Bartrum, Thu Nguyen-Phuoc, Zhengqin Li, Numair Khan, Chris Xie, Armen Avetisyan, Douglas Lanman, Lei Xiao

Text-guided, localized editing method that enables object replacement in a 3D scene

NeurIPS 2024

GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis

Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, Lei Xiao

View synthesis of dynamic scenes from monocular videos using learned deformable 3D Gaussian Splatting

arXiv 2024

TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion

Yu-Ying Yeh, Jia-Bin Huang, Changil Kim, Lei Xiao, Thu Nguyen-Phuoc, Numair Khan, Cheng Zhang, Manmohan Chandraker, Carl Marshall, Zhao Dong, Zhengqin Li

Transfers photorealistic, high-fidelity, and geometry-aware textures from sparse-view images to arbitrary 3D meshes

CVPR 2024

AlteredAvatar: Stylizing Dynamic 3D Avatars with Fast Style Adaptation

Thu Nguyen-Phuoc, Gabriel Schwartz, Yuting Ye, Stephen Lombardi, Lei Xiao

Fast stylization of dynamic 3D avatars

arXiv 2023

Tiled Multiplane Images for Practical 3D Photography

Numair Khan, Douglas Lanman, Lei Xiao

Efficient 3D photography using tiled multiplane images

ICCV 2023

Temporally-Consistent Online Depth Estimation Using Point-Based Fusion

Numair Khan, Eric Penner, Douglas Lanman, Lei Xiao

Online video depth estimation of dynamic scenes via a global point cloud and image-space fusion

CVPR 2023

NeuralPassthrough: Learned Real-Time View Synthesis for VR

Lei Xiao, Salah Nouri, Joel Hegland, Alberto Garcia Garcia, Douglas Lanman

The first learned approach to address the passthrough problem, achieving superior image quality while meeting strict VR requirements for real-time, perspective-correct stereoscopic view synthesis

SIGGRAPH 2022

SNeRF: Stylized Neural Implicit Representations for 3D Scenes

Thu Nguyen-Phuoc, Feng Liu, Lei Xiao

Transfer a neural radiance field to a user-defined style with cross-view consistency

SIGGRAPH 2022

Neural Compression for Hologram Images and Videos

Liang Shi, Richard Webb, Lei Xiao, Changil Kim, Changwon Jang

Effective compression method for holograms

Optics Letters 2022

Deep 3D Mask Volume for View Synthesis of Dynamic Scenes

Kai-En Lin, Lei Xiao, Feng Liu, Guowei Yang, Ravi Ramamoorthi

High-quality view synthesis of dynamic scenes for immersive videos

ICCV 2021

Neural Supersampling for Real-time Rendering

Lei Xiao, Salah Nouri, Matt Chapman, Alexander Fix, Douglas Lanman, Anton Kaplanyan

The first neural rendering technique for high-fidelity and temporally stable upsampling of rendered content in real-time applications, even in the highly challenging 16x upsampling scenario

SIGGRAPH 2020 | Oculus Blog

DeepFocus: Learned Image Synthesis for Computational Displays

Lei Xiao, Anton Kaplanyan, Alexander Fix, Matt Chapman, Douglas Lanman

The first real-time neural rendering technique to synthesize physically-accurate defocus blur, focal stacks, multilayer decompositions, and light field imagery using only commonly available RGB-D images

SIGGRAPH Asia 2018 | Oculus Blog | Oculus Connect Keynote | Media Report

Discriminative Transfer Learning for General Image Restoration

Lei Xiao, Felix Heide, Wolfgang Heidrich, Bernhard Schölkopf, Michael Hirsch

IEEE Transactions on Image Processing 2018

Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

Lei Xiao, Jue Wang, Wolfgang Heidrich, Michael Hirsch

ECCV 2016 Spotlight

Defocus Deblurring and Superresolution for Time-of-Flight Depth Cameras

Lei Xiao, Felix Heide, Matthew O'Toole, Andreas Kolb, Matthias B. Hullin, Kiriakos N. Kutulakos, Wolfgang Heidrich

CVPR 2015

Stochastic Blind Motion Deblurring

Lei Xiao, Felix Heide, Matthew O'Toole, Andreas Kolb, Matthias B. Hullin, Kiriakos N. Kutulakos, Wolfgang Heidrich

IEEE Transactions on Image Processing 2015

Imaging in Scattering Media Using Correlation Image Sensors and Sparse Convolutional Coding

Felix Heide, Lei Xiao, Andreas Kolb, Matthias B. Hullin, Wolfgang Heidrich

Optics Express 2014

Temporal Frequency Probing for 5D Transient Analysis of Global Light Transport

Matthew O'Toole, Felix Heide, Lei Xiao, Matthias B. Hullin, Wolfgang Heidrich, Kiriakos N. Kutulakos

SIGGRAPH 2014

Diffuse Mirrors: 3D Reconstruction from Diffuse Indirect Illumination Using Inexpensive Time-of-Flight Sensors

Felix Heide, Lei Xiao, Wolfgang Heidrich, Matthias B. Hullin

CVPR 2014 Oral

Compressive Rendering of Multidimensional Scenes

Pradeep Sen, Soheil Darabi, Lei Xiao

Video Processing and Computational Video, LNCS 7082, Springer, 2011