BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20181210T184843Z
LOCATION:Hall B5(1) (5F\, B Block)
DTSTART;TZID=Asia/Tokyo:20181206T141500
DTEND;TZID=Asia/Tokyo:20181206T143600
UID:siggraphasia_SIGGRAPH Asia 2018_sess130_papers_483@linklings.com
SUMMARY:Monte Carlo Convolution for Learning on Non-Uniformly Sampled Poin
t Clouds
DESCRIPTION:Technical Papers\nFull Conference Pass (FC), Full Conference O
ne-Day Pass (1D)\n\nMonte Carlo Convolution for Learning on Non-Uniformly
Sampled Point Clouds\n\nHermosilla Casajus, Ritschel, Vazquez, Vinacua, Ro
pinski\n\nDeep learning systems extensively use convolution operations to
process input data. Though convolution is clearly defined for structured d
ata such as images, this is not true for other data types such as sparse p
oint clouds. Previous techniques have developed approximations to convolut
ions for restricted conditions. Unfortunately, their applicability is limi
ted and cannot be used for general point clouds. We propose an efficient a
nd effective method to learn convolutions for non-uniformly sampled point
clouds, as they are obtained with modern acquisition techniques. Learning
is enabled by four key novelties: first, representing the convolution kern
el itself as a multilayer perceptron; second, phrasing convolution as a Mo
nte Carlo integration problem; third, using this notion to combine informa
tion from multiple samplings at different levels; and fourth, using Poisson
disk sampling as a scalable means of hierarchical point cloud learning. T
he key idea across all these contributions is to guarantee adequate consid
eration of the underlying non-uniform sample distribution function from a
Monte Carlo perspective. To make the proposed concepts applicable to real-
world tasks, we furthermore propose an efficient implementation which sign
ificantly reduces the GPU memory required during the training process. By
employing our method in hierarchical network architectures we can outperfo
rm most of the state-of-the-art networks on established point cloud segmen
tation, classification and normal estimation benchmarks. Furthermore, in c
ontrast to most existing approaches, we also demonstrate the robustness of
our method with respect to sampling variations, even when training with u
niformly sampled training data only. To support the direct application of
these concepts, we provide a ready-to-use TensorFlow implementation of the
se layers at https://github.com/viscom-ulm/MCCNN
URL:https://sa2018.conference-program.com/presentation?id=papers_483&sess=
sess130
END:VEVENT
END:VCALENDAR