Learning Unsupervised Multi-View Stereopsis
via Robust Photometric Consistency


Tejas Khot*¹
Shubham Agrawal*¹
Shubham Tulsiani²
Christoph Mertz¹
Simon Lucey¹
Martial Hebert¹
¹ Carnegie Mellon University
² Facebook AI Research

[Paper]
[Code]



Our model consumes a collection of calibrated images of a scene from multiple views and produces a depth map for each view. We show that this depth prediction model can be trained in an unsupervised manner using our robust photo consistency loss. The predicted depth maps are then fused into a consistent 3D reconstruction that closely resembles, and often improves upon, the sensor-scanned model; a simplified fusion sketch follows the caption below.
Left to Right: Input images, predicted depth maps, our fused 3D reconstruction, ground truth 3D scan.
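
For intuition, fusion in its simplest form just back-projects each per-view depth map into world coordinates and merges the resulting points. The NumPy sketch below illustrates this under a pinhole camera model; backproject and fuse are hypothetical helper names, and a real pipeline would additionally filter points by cross-view depth consistency before merging.

import numpy as np

def backproject(depth, K, R, t):
    # Lift an (H, W) depth map to world-space 3D points.
    # K: (3, 3) intrinsics; R, t: world-to-camera rotation and translation,
    # so a world point X maps to the image as x ~ K (R X + t).
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # (HW, 3) homogeneous pixels
    cam = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)          # camera-space rays * depth
    world = R.T @ (cam - t.reshape(3, 1))                            # invert the rigid transform
    return world.T                                                   # (HW, 3) world points

def fuse(depths, cams):
    # Naive fusion: concatenate back-projected points from all views.
    # cams is a list of (K, R, t) tuples, one per depth map.
    return np.concatenate([backproject(d, *c) for d, c in zip(depths, cams)], axis=0)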

We present a learning-based approach for multi-view stereopsis (MVS). While current deep MVS methods achieve impressive results, they crucially rely on ground-truth 3D training data, and acquisition of such precise 3D geometry for supervision is a major hurdle. Our framework instead leverages photometric consistency between multiple views as the supervisory signal for learning depth prediction in a wide-baseline MVS setup. However, naively enforcing photo consistency across all views is unreliable due to occlusions and lighting changes across views. To overcome this, we propose a robust loss formulation that: a) enforces first-order consistency, and b) for each point, selectively enforces consistency with only some views, thus implicitly handling occlusions. We demonstrate our ability to learn MVS without 3D supervision using a real dataset, and show that each component of our proposed robust loss results in a significant improvement. We qualitatively observe that our reconstructions are often more complete than the acquired ground truth, further underscoring the merits of this approach. Lastly, our learned model generalizes to novel settings, and our approach allows adapting existing CNNs to datasets without ground-truth 3D via unsupervised finetuning.
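
To make the supervisory signal concrete, consider the plain (non-robust) photo consistency term: given a predicted reference-view depth map and calibrated cameras, each source image can be inverse-warped into the reference view with a differentiable sampler, and the remaining appearance difference penalizes incorrect depths. The PyTorch-style sketch below is illustrative only, not our exact training code; all function and tensor names are assumptions.

import torch
import torch.nn.functional as F

def warp_source_to_ref(src_img, depth_ref, K_ref, K_src, R, t):
    # src_img: (B, 3, H, W) source image; depth_ref: (B, 1, H, W) predicted depth.
    # K_ref, K_src: (3, 3) intrinsics; R: (B, 3, 3), t: (B, 3) relative pose
    # taking reference-camera coordinates to source-camera coordinates.
    B, _, H, W = src_img.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)]).reshape(3, -1).to(depth_ref)  # (3, HW)
    rays = torch.linalg.inv(K_ref) @ pix                     # unit-depth reference rays
    pts = rays.unsqueeze(0) * depth_ref.reshape(B, 1, -1)    # (B, 3, HW) 3D points
    pts_src = R @ pts + t.reshape(B, 3, 1)                   # move into the source frame
    proj = K_src @ pts_src                                   # project into the source image
    # Divide-by-zero guard only; points behind the camera would need masking in practice.
    xy = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    grid = torch.stack([2 * xy[:, 0] / (W - 1) - 1,          # normalize to [-1, 1]
                        2 * xy[:, 1] / (H - 1) - 1], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True)

def photo_loss(ref_img, src_img, depth_ref, K_ref, K_src, R, t):
    warped = warp_source_to_ref(src_img, depth_ref, K_ref, K_src, R, t)
    return (warped - ref_img).abs().mean()                   # L1 photometric difference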


Paper

Khot*, Agrawal*, Tulsiani, Mertz, Lucey, Hebert

Learning Unsupervised Multi-View Stereopsis via Robust Photometric Consistency



[PDF]     [BibTeX]


Network Overview


Robust Photometric Consistency Loss
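
The robust formulation augments the plain photometric term from the sketch above in two ways: a first-order term that also matches image gradients, which is more robust to lighting changes across views, and per-pixel selection of the K best-matching source views, which implicitly discounts occluded views. Below is a minimal sketch reusing warp_source_to_ref from earlier; k and grad_weight are illustrative hyperparameters, not the values used in the paper.

import torch

def image_grad(img):
    # Forward-difference image gradients, zero-padded to keep shape (B, C, H, W).
    dx = torch.zeros_like(img)
    dy = torch.zeros_like(img)
    dx[..., :, 1:] = img[..., :, 1:] - img[..., :, :-1]
    dy[..., 1:, :] = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def robust_photo_loss(ref_img, warped_srcs, k=3, grad_weight=1.0):
    # warped_srcs: list of M source images already warped into the reference
    # view (e.g. with warp_source_to_ref above).
    ref_dx, ref_dy = image_grad(ref_img)
    residuals = []
    for w in warped_srcs:
        photo = (w - ref_img).abs().mean(dim=1)                             # (B, H, W)
        w_dx, w_dy = image_grad(w)                                          # first-order term:
        grad = ((w_dx - ref_dx).abs() + (w_dy - ref_dy).abs()).mean(dim=1)  # match gradients too
        residuals.append(photo + grad_weight * grad)
    res = torch.stack(residuals, dim=1)                      # (B, M, H, W)
    best_k, _ = torch.topk(res, k, dim=1, largest=False)     # k smallest residuals per pixel
    return best_k.mean()

In this per-pixel top-K average, a view in which a point is occluded simply contributes a large residual and is dropped from that pixel's loss, so no explicit occlusion reasoning is required.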


Results



Code


[GitHub]


Acknowledgements

This project is supported by Carnegie Mellon University's Mobility21 National University Transportation Center, which is sponsored by the US Department of Transportation. This webpage template was borrowed from some colorful folks.