Seminar by Dr. Songyou PENG from ETH Zurich
Date: Thursday, 22 February 2024
Time: 2:30 p.m. – 3:30 p.m.
Venue: RR301, Run Run Shaw Building
Title: 2D magic in the 3D world
Abstract:
For decades, 2D or monocular predictors have shown impressive results in estimating depth and surface normals and in semantic segmentation. Recent advancements in 2D foundation models have pushed these capabilities even further, leveraging vast amounts of data to achieve unprecedented performance. However, applying these powerful tools to 3D tasks remains a significant challenge. In this talk, we will introduce our approaches to this problem: first MonoSDF, which enhances neural 3D surface reconstruction using monocular geometric cues, and then NeRF On-the-go, a plug-and-play module that integrates the 2D foundation model DINO v2 for NeRF reconstruction of dynamic real-world scenes. We will also discuss leveraging 2D models for 3D scene understanding through OpenScene and Segment3D, showcasing their potential for open-vocabulary understanding and fine-grained 3D segmentation. Through these discussions, the talk aims to shed light on new perspectives on utilizing 2D foundation models to advance 3D tasks.
About the speaker:
Dr. Songyou Peng is currently a Senior Researcher/Postdoc at ETH Zurich and an incoming Research Scientist at Google Research in San Francisco. He recently earned his PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems under the supervision of Marc Pollefeys and Andreas Geiger. Throughout his academic journey, Songyou has completed internships at Google Research, Meta Reality Labs Research, TUM, and INRIA. His research interests lie at the intersection of 3D vision and deep learning, with a particular focus on pioneering neural scene representations and harnessing the power of 2D foundation models to advance 3D reconstruction, novel view synthesis, SLAM, and scene understanding.