TY - JOUR
T1 - Deep reinforcement learning-based robotic grasping in clutter and occlusion
AU - Mohammed, Marwan Qaid
AU - Kwek, Lee Chung
AU - Chua, Shing Chyi
AU - Aljaloud, Abdulaziz Salamah
AU - Al-Dhaqm, Arafat
AU - Al-Mekhlafi, Zeyad Ghaleb
AU - Mohammed, Badiea Abdulkarem
N1 - © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
N1 - Data Availability Statement: The authors confirm that the data supporting the findings of this study are available within the article.
PY - 2021/12/10
Y1 - 2021/12/10
AB - In robotic manipulation, object grasping is a basic yet challenging task. Dexterous grasping necessitates intelligent visual observation of the target objects by emphasizing the importance of spatial equivariance to learn the grasping policy. In this paper, two significant challenges associated with robotic grasping in both clutter and occlusion scenarios are addressed. The first challenge is the coordination of push and grasp actions, in which the robot may occasionally fail to disrupt the arrangement of the objects in a well-ordered object scenario. On the other hand, when employed in a randomly cluttered object scenario, the pushing behavior may be less efficient, as many objects are more likely to be pushed out of the workspace. The second challenge is the avoidance of occlusion that occurs when the camera itself is entirely or partially occluded during a grasping action. This paper proposes a multi-view change observation-based approach (MV-COBA) to overcome these two problems. The proposed approach is divided into two parts: (1) using multiple cameras to set up multiple views to address the occlusion issue; and (2) using visual change observation on the basis of the pixel depth difference to address the challenge of coordinating push and grasp actions. According to experimental simulation findings, the proposed approach achieved average grasp success rates of 83.6%, 86.3%, and 97.8% in the cluttered, well-ordered object, and occlusion scenarios, respectively.
DO - 10.3390/su132413686
M3 - Article
AN - SCOPUS:85120942375
SN - 2071-1050
VL - 13
JO - Sustainability (Switzerland)
JF - Sustainability (Switzerland)
IS - 24
M1 - 13686
ER -