
Can Neural Networks Learn Paper Folding?

Abstract: It is always fun to learn paper folding (zhezhi, or origami), the art of turning a flat sheet of paper into a 3D shape. It is even more fun to do so with a deep network. In this talk, I will explain how we design a 3D point cloud auto-encoder whose decoder essentially resembles paper-folding operations, leading to better reconstructions of 3D shapes and more linearly separable latent features, while being more parameter-efficient than its competitors. It is potentially useful for robotics, autonomous driving, design automation, and more.
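As a rough illustration of the idea in the abstract, the sketch below shows one way a folding-style decoder could be written in PyTorch: a fixed 2D grid is concatenated with the encoder's latent codeword and passed through two shared MLPs that "fold" the grid into a 3D point cloud. The class name, layer sizes, grid resolution, and two-stage structure are illustrative assumptions for this sketch, not the speaker's actual implementation.

```python
import torch
import torch.nn as nn

class FoldingStyleDecoder(nn.Module):
    """Illustrative sketch (not the speaker's code): deform a fixed 2D grid
    into a 3D point cloud, conditioned on a per-shape latent codeword."""

    def __init__(self, code_dim=512, grid_size=45):
        super().__init__()
        # Fixed 2D grid in [-1, 1]^2, flattened to (grid_size**2, 2) points.
        lin = torch.linspace(-1.0, 1.0, grid_size)
        u, v = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([u, v], dim=-1).reshape(-1, 2))

        # First "folding": grid coordinates + codeword -> intermediate 3D points.
        self.fold1 = nn.Sequential(
            nn.Linear(code_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )
        # Second "folding": refine the points, again conditioned on the codeword.
        self.fold2 = nn.Sequential(
            nn.Linear(code_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )

    def forward(self, codeword):
        # codeword: (batch, code_dim) latent vector from some point cloud encoder.
        b, n = codeword.shape[0], self.grid.shape[0]
        code = codeword.unsqueeze(1).expand(b, n, -1)      # (b, n, code_dim)
        grid = self.grid.unsqueeze(0).expand(b, -1, -1)    # (b, n, 2)
        pts = self.fold1(torch.cat([code, grid], dim=-1))  # (b, n, 3)
        pts = self.fold2(torch.cat([code, pts], dim=-1))   # (b, n, 3)
        return pts

# Example: decode a random 512-d codeword into a 45 x 45 = 2025-point cloud.
decoder = FoldingStyleDecoder()
points = decoder(torch.randn(4, 512))
print(points.shape)  # torch.Size([4, 2025, 3])
```

Because the decoder only learns small shared MLPs and reuses the same fixed 2D grid for every shape, this style of decoder tends to need far fewer parameters than one that regresses all output coordinates with a fully connected layer, which is consistent with the parameter-efficiency claim in the abstract.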

Dr. Chen Feng earned his bachelor's degree in geospatial engineering from Wuhan University in China. He then went to the University of Michigan at Ann Arbor, where he earned a master's degree in electrical engineering and, in 2015, a Ph.D. in civil engineering, studying robotic vision and learning and their applications in civil engineering. After graduation, he became a research scientist in the computer vision group at Mitsubishi Electric Research Labs (MERL), focusing on visual SLAM and deep learning. In August 2018, he became an assistant professor jointly appointed in the Department of Civil and Urban Engineering and the Department of Mechanical and Aerospace Engineering at the NYU Tandon School of Engineering, where his lab, AI4CE (pronounced "A-I-force"), aims to advance robotic vision and learning with applications in civil and mechanical engineering.