Pre-train, Self-train, Distill: A simple recipe for Supersizing 3D Reconstruction

Kalyan Vasudev Alwala¹
Abhinav Gupta¹,²
Shubham Tulsiani²

¹Facebook AI Research
²Carnegie Mellon University


We present an approach to learn a single self-supervised reconstruction model across diverse object categories. Given an input image depicting a (segmented) object from any of over 150 categories, this unified reconstruction model can infer its 3D shape.

Our work learns a unified model for single-view 3D reconstruction of objects from hundreds of semantic categories. As a scalable alternative to direct 3D supervision, it relies on segmented image collections for learning the 3D shape of generic categories. Unlike prior works that use similar supervision but learn independent category-specific models from scratch, our approach of learning a unified model simplifies the training process while also allowing the model to benefit from the common structure across categories. Using image collections from standard recognition datasets, we show that our approach enables learning 3D inference for over 150 object categories. We evaluate on two datasets and show, both qualitatively and quantitatively, that our unified reconstruction approach improves over prior category-specific reconstruction baselines. Our final 3D reconstruction model is also capable of zero-shot inference on images from unseen object categories, and we empirically show that increasing the number of training categories improves the reconstruction quality.

Approach Overview

Approach Overview. We first pre-train a reconstruction model using multi-view renderings of synthetic data. We then self-train category-specific models on diverse in-the-wild image collections using only foreground mask annotations. Finally, we distill the models learned in these stages into a single unified reconstruction model.
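To make the flow of supervision across the three stages concrete, here is a minimal PyTorch-style sketch of the recipe. It assumes a voxel-occupancy shape representation and a crude orthographic max-projection silhouette; all names (ReconstructionNet, project_silhouette, the data loaders) are hypothetical placeholders for illustration, not the paper's actual architecture, renderer, or losses.

```python
# Minimal sketch of the pre-train / self-train / distill recipe, assuming a
# voxel-occupancy shape representation. Module and function names here are
# illustrative placeholders, not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconstructionNet(nn.Module):
    """Image -> coarse voxel occupancy grid (a stand-in for the real model)."""
    def __init__(self, res=32):
        super().__init__()
        self.res = res
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, res ** 3),
        )

    def forward(self, images):
        occ = torch.sigmoid(self.encoder(images))
        return occ.view(-1, self.res, self.res, self.res)

def project_silhouette(occupancy):
    """Crude differentiable silhouette: max occupancy along the depth axis."""
    return occupancy.max(dim=-1).values

def train_step(loss, optimizer):
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Stage 1 -- pre-train on synthetic data with known target shapes.
def pretrain(model, synthetic_loader, optimizer):
    for images, gt_occupancy in synthetic_loader:
        loss = F.binary_cross_entropy(model(images), gt_occupancy)
        train_step(loss, optimizer)

# Stage 2 -- self-train a per-category model using only foreground masks.
def self_train(model, image_loader, optimizer):
    for images, masks in image_loader:
        silhouettes = project_silhouette(model(images))
        loss = F.binary_cross_entropy(silhouettes, masks)
        train_step(loss, optimizer)

# Stage 3 -- distill all category-specific teachers into one unified student.
def distill(student, teachers, image_loader, optimizer):
    for category, images in image_loader:
        with torch.no_grad():
            target = teachers[category](images)  # teacher's predicted shape
        loss = F.binary_cross_entropy(student(images), target)
        train_step(loss, optimizer)
```

In practice the self-training stage would also require camera viewpoints and richer shape regularization; this sketch only conveys how supervision shifts from synthetic 3D targets, to image masks, to teacher predictions.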