STD-Net: Structure-preserving and Topology-adaptive Deformation Network for Single-View 3D Reconstruction

Published in IEEE Transactions on Visualization and Computer Graphics, 2023

Abstract: 3D reconstruction from single-view images is a long-standing research problem. To date, various methods based on point clouds and volumetric representations have been proposed. Despite their success in generating 3D models, these approaches struggle with models that have complex topology and fine geometric details. Thanks to recent advances in deep shape representations, learning structure and detail representations with deep neural networks is a promising direction. In this paper, we propose a novel approach named STD-Net that reconstructs 3D models using a mesh representation, which is well suited for characterizing complex structures and geometric details. Our method consists of (1) an auto-encoder network that recovers the structure of an object as a bounding-box representation from a single-view image; (2) a topology-adaptive GCN that updates vertex positions for meshes of complex topology; and (3) a unified mesh deformation block that deforms the structural boxes into structure-aware meshes. Evaluation on ShapeNet shows that STD-Net outperforms state-of-the-art methods in reconstructing complex structures and fine geometric details.
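For readers who find a skeleton easier to parse than prose, below is a minimal, hypothetical sketch of the three-stage pipeline outlined in the abstract (image encoder → structural box decoder → GCN-based mesh deformation). The layer sizes, the 9-number box parameterization, and the simple graph convolution are illustrative assumptions for exposition only, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ImageEncoder(nn.Module):
    """CNN that maps a single-view image to a latent shape code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, img):                      # img: (B, 3, H, W)
        return self.fc(self.features(img).flatten(1))


class BoxDecoder(nn.Module):
    """Decodes the latent code into K structural bounding boxes
    (here each box is an assumed 9-vector: center, extent, rotation)."""
    def __init__(self, latent_dim=256, num_boxes=8, box_dim=9):
        super().__init__()
        self.num_boxes, self.box_dim = num_boxes, box_dim
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_boxes * box_dim),
        )

    def forward(self, z):                        # z: (B, latent_dim)
        return self.mlp(z).view(-1, self.num_boxes, self.box_dim)


class GraphConv(nn.Module):
    """One graph-convolution step: mix each vertex with its neighbors."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_nbr = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                   # x: (B, V, F); adj: (V, V), row-normalized
        return torch.relu(self.w_self(x) + self.w_nbr(adj @ x))


class MeshDeformer(nn.Module):
    """GCN block predicting per-vertex offsets conditioned on the latent code,
    standing in for the paper's topology-adaptive deformation stage."""
    def __init__(self, latent_dim=256, hidden=128):
        super().__init__()
        self.gc1 = GraphConv(3 + latent_dim, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.out = nn.Linear(hidden, 3)

    def forward(self, verts, adj, z):            # verts: (B, V, 3)
        cond = z.unsqueeze(1).expand(-1, verts.size(1), -1)
        h = self.gc1(torch.cat([verts, cond], dim=-1), adj)
        h = self.gc2(h, adj)
        return verts + self.out(h)               # deformed vertex positions
```

In this sketch, the boxes produced by `BoxDecoder` would be meshed (e.g., as cuboid templates) and then refined by `MeshDeformer`; how the actual paper couples the box structure to the deformation is described in the publication itself.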

Download paper here

More information

Recommended citation: Aihua Mao, Canglan Dai, Qing Liu, Jie Yang, Lin Gao, Ying He, Yong-Jin Liu. STD-Net: Structure-preserving and Topology-adaptive Deformation Network for Single-View 3D Reconstruction. IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 3, pp. 1785-1798, 1 March 2023, doi: 10.1109/TVCG.2021.3131712.