( !! Official code and dataset are now released !! )
Abstract
Object placement is a fundamental task for robots, yet it remains challenging for partially observed objects. Existing object placement methods have limitations, such as requiring a complete 3D model of the object or being unable to handle complex shapes and novel objects, which restrict the applicability of robots in the real world. We address the Unseen Object Placement (UOP) problem with two components: (1) UOP-Net, a point cloud segmentation based approach that directly detects the most stable plane from partial point clouds, and (2) UOP-Sim, a large-scale dataset covering diverse shapes and novel objects. Our UOP approach enables robots to place objects stably even when an object's shape and physical properties are not fully known, providing a promising solution for object placement in various environments. We verify our approach through simulation and real-world robot experiments, demonstrating state-of-the-art performance for placing single-view, partially observed objects. For comprehensive findings, please refer to https://sites.google.com/uop-net.
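The abstract describes UOP-Net as directly detecting the most stable plane from a partial point cloud. As a rough illustration of that idea only (this is not the authors' implementation; the per-point stability scores stand in for the output of a hypothetical UOP-Net-style segmentation network), one can fit a plane to the highest-scoring points and compute the rotation that rests that plane on a support surface:

```python
import numpy as np

def fit_stable_plane(points, stability_scores, top_k=200):
    """Fit a plane to the points rated most stable.

    points: (N, 3) partial point cloud of the object
    stability_scores: (N,) per-point stability prediction (stand-in for a
        hypothetical segmentation network's output)
    Returns (centroid, unit_normal) of the least-squares plane.
    """
    idx = np.argsort(stability_scores)[-top_k:]
    pts = points[idx]
    centroid = pts.mean(axis=0)
    # SVD of the centered points: the right singular vector with the
    # smallest singular value is the least-squares plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def placement_rotation(normal, gravity=np.array([0.0, 0.0, -1.0])):
    """Rotation aligning the detected plane normal with gravity,
    so the stable face rests flat on the support surface."""
    a = normal / np.linalg.norm(normal)
    b = gravity / np.linalg.norm(gravity)
    v, c = np.cross(a, b), np.dot(a, b)
    if np.isclose(c, -1.0):
        # Antiparallel case: rotate pi about any axis perpendicular to a.
        axis = np.array([1.0, 0, 0]) if abs(a[0]) < 0.9 else np.array([0, 1.0, 0])
        axis -= axis.dot(a) * a
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    # Rodrigues formula for the rotation taking a onto b.
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)
```

This sketch ignores everything that makes the real problem hard (occlusion-aware learning, label generation in simulation); it only shows how a predicted stable plane translates into a placement pose.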
UOP Pipeline
UOP-Sim Data Generation
UOP-Sim Test Set Visualization (YCB Objects)
UOP-Net Inference in the Real World
Comparison of Results on the YCB Object Set & Novel Household Objects
Mustard bottle (YCB objects)
Potted meat can (YCB objects)
Sugar box (YCB objects)
Toy block (Novel Household Object)
Toy dinosaur (Novel Household Object)
Additional Inference Results
Chips can (YCB Object)
Soap tray (Novel Household Object)
Duct tape (Novel Household Object)
Watering can (Novel Household Object)
Wooden bowl (Novel Household Object)
Citation
@article{noh2023learning,
  title={Learning to Place Unseen Objects Stably using a Large-scale Simulation},
  author={Noh, Sangjun and Kang, Raeyoung and Kim, Taewon and Back, Seunghyeok and Bak, Seongho and Lee, Kyoobin},
  journal={arXiv preprint arXiv:2303.08387},
  year={2023}
}
Acknowledgements
This work was fully supported by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (Project Name: Shared autonomy based on deep reinforcement learning for responding intelligently to unfixed environments such as robotic assembly tasks, Project Number: 20008613).
This work was also partially supported by the HPC Support project of the Korea Ministry of Science and ICT and NIPA.
Author Contacts
Contact us by email for more information about this project.
[ Address : Dasan Building (C9) 204/206 & Central Research Facilities (C11) 403,
123 Cheomdangwagi-ro, Buk-gu, Gwangju, 61005, Korea ]