Deep Part Induction from Articulated Object Pairs
Time: Wednesday, 5 December 2018, 9am - 9:21am
Description: Object functionality is often expressed through part articulation, as when the two rigid parts of a pair of scissors pivot against each other to cut. Such articulations are often consistent across objects within the same underlying functional category. In this paper we explore how observing different articulation states provides evidence for the part structure of 3D objects. We start from a pair of unsegmented 3D CAD models or scans, or a pair consisting of a 3D shape and an RGB image, representing two different articulation states of two functionally related objects, and aim to induce their common part structure. This is a challenging setting: we assume no prior shape structure, the two articulation states may belong to objects of different geometry, and
we allow noisy and partial scans as input. Our method learns a neural network architecture with three modules that respectively propose correspondences, estimate 3D deformation flows, and perform segmentation. As we demonstrate, when this architecture is applied iteratively in an ICP-like fashion, alternating between correspondence, flow, and segmentation prediction, it significantly outperforms state-of-the-art techniques at discovering the articulated parts of objects. We note that our part induction is object-class agnostic and generalizes well to new, unseen objects.
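To make the ICP-like alternation concrete, the following is a minimal toy sketch of the correspondence -> flow -> segmentation loop on point sets. The function names (`match_points`, `estimate_flow`, `segment_by_motion`) and the nearest-neighbour and median-flow heuristics are illustrative assumptions standing in for the paper's learned network modules, not the actual method.

```python
import numpy as np

def match_points(src, tgt):
    # Correspondence proposal (stand-in for the learned module):
    # match each source point to its nearest target point.
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1)
    return d.argmin(axis=1)

def estimate_flow(src, tgt, corr):
    # Deformation flow: per-point displacement toward the matched target.
    return tgt[corr] - src

def segment_by_motion(flow, thresh=0.4):
    # Segmentation heuristic: points whose flow deviates from the
    # dominant (median) motion are assigned to the articulated part.
    dominant = np.median(flow, axis=0)
    return (np.linalg.norm(flow - dominant, axis=1) > thresh).astype(int)

def induce_parts(src, tgt, iters=3):
    # ICP-like alternation: correspondence -> flow -> segmentation,
    # then warp each segment by its mean flow before re-matching.
    src = src.copy()
    for _ in range(iters):
        corr = match_points(src, tgt)
        flow = estimate_flow(src, tgt, corr)
        labels = segment_by_motion(flow)
        for l in np.unique(labels):
            src[labels == l] += flow[labels == l].mean(axis=0)
    return labels
```

On a toy pair where three points stay fixed and two points translate between the two states, the loop labels the translated points as a separate part; in the paper these hand-crafted steps are replaced by learned modules that handle noise, partiality, and cross-object geometry variation.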