Knowledge Commons of Institute of Automation, CAS
Learning Human-to-Robot Dexterous Handovers for Anthropomorphic Hand
Authors | Duan, Haonan (1,2); Wang, Peng; Li, Yiming; Li, Daheng; Wei, Wei
Journal | IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS
ISSN | 2379-8920
Publication Date | 2023-09-01
Volume | 15
Issue | 3
Pages | 1224-1238
Affiliation Order | 1
Abstract | Human-robot interaction plays an important role in robots serving human production and life. Object handover between humans and robots is one of the fundamental problems of human-robot interaction. Most current work uses parallel-jaw grippers as the end-effector, which limits the robot's ability to take miscellaneous objects from humans and manipulate them subsequently. In this article, we present a framework for human-to-robot dexterous handover using an anthropomorphic hand. The framework takes images captured by two cameras to complete handover scene understanding, grasp configuration prediction, and handover execution. To enable the robot to generalize to diverse delivered objects of miscellaneous shapes and sizes, we propose the anthropomorphic hand grasp network (AHG-Net), an end-to-end network that takes single-view point clouds of the object as input and predicts suitable anthropomorphic hand configurations across five different grasp taxonomies. To train our model, we build a large-scale dataset with 1M hand grasp annotations from 5K single-view point clouds of 200 objects. Based on the presented framework, we implement a handover system using a UR5 robot arm and the HIT-DLR II anthropomorphic robot hand, which can not only adapt to different human givers but also generalize to diverse novel objects with various shapes and sizes. The generalizability, reliability, and robustness of our method are demonstrated on 15 different novel objects with arbitrary handover poses from frontal and lateral positions, through a system ablation study, a grasp planner comparison, and a user study with 6 participants delivering 15 objects from two benchmark sets.
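The abstract describes a pipeline in which a single-view point cloud of the delivered object is mapped to one of five grasp taxonomies plus a hand configuration. AHG-Net itself is a learned end-to-end network; as a purely illustrative stand-in (not the authors' method), the interface can be sketched with a rule-based predictor whose taxonomy names, size threshold, and output fields are all assumptions:

```python
import numpy as np

# Hypothetical stand-in for an AHG-Net-style predictor: maps a single-view
# point cloud (N x 3, meters) to a grasp taxonomy and a coarse hand target.
# The taxonomy names and the 0.12 m threshold are illustrative assumptions.
TAXONOMIES = ["power", "precision", "lateral", "tripod", "hook"]

def predict_grasp(points: np.ndarray) -> dict:
    """Pick a grasp taxonomy from simple point-cloud geometry."""
    centroid = points.mean(axis=0)
    extent = points.max(axis=0) - points.min(axis=0)   # bounding-box size
    diameter = float(np.linalg.norm(extent))
    # Crude heuristic: large objects get a power grasp, small ones precision.
    taxonomy = "power" if diameter > 0.12 else "precision"
    return {
        "taxonomy": taxonomy,          # one of TAXONOMIES
        "approach_point": centroid,    # where the palm is aimed
        "object_extent": extent,
    }

# Example: a synthetic 10 cm cube of sampled points
rng = np.random.default_rng(0)
cube = rng.uniform(-0.05, 0.05, size=(500, 3))
grasp = predict_grasp(cube)
```

In the paper this mapping is learned from the 1M-annotation dataset rather than hand-coded; the sketch only fixes the input/output shape such a predictor exposes to the handover executor.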
Keywords | Anthropomorphic hand; handovers; human-robot interaction
DOI | 10.1109/TCDS.2022.3203025 |
Keywords [WOS] | MOVEMENT PRIMITIVES ; GRASP
Indexed By | SCI
Language | English
Funding Projects | National Natural Science Foundation of China[91748131] ; National Natural Science Foundation of China[62006229] ; National Natural Science Foundation of China[61771471] ; Strategic Priority Research Program of Chinese Academy of Science[XDB32050106] ; InnoHK Project
Funders | National Natural Science Foundation of China ; Strategic Priority Research Program of Chinese Academy of Science ; InnoHK Project
WOS Research Areas | Computer Science ; Robotics ; Neurosciences & Neurology
WOS Categories | Computer Science, Artificial Intelligence ; Robotics ; Neurosciences
WOS Accession Number | WOS:001089186500020
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Representative Paper | Yes
Sub-direction (of the Seven Major Research Directions) | Intelligent Robotics
State Key Laboratory Planned Research Direction | Embodied AI Systems (Software and Hardware)
Paper-Associated Dataset Requiring Deposit | No
Document Type | Journal Article
Identifier | http://ir.ia.ac.cn/handle/173211/54389
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems_Intelligent Robot Systems Research
Corresponding Author | Wang, Peng
Affiliations | 1. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China; 3. Chinese Acad Sci, CAS Ctr Excellence Brain Sci & Intelligence Techn, Shanghai 200031, Peoples R China; 4. Chinese Acad Sci, Hong Kong Inst Sci & Innovat, Ctr Artificial Intelligence & Robot, Hong Kong, Peoples R China
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Duan, Haonan, Wang, Peng, Li, Yiming, et al. Learning Human-to-Robot Dexterous Handovers for Anthropomorphic Hand[J]. IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, 15(3): 1224-1238.
APA | Duan, Haonan, Wang, Peng, Li, Yiming, Li, Daheng, & Wei, Wei. (2023). Learning Human-to-Robot Dexterous Handovers for Anthropomorphic Hand. IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 15(3), 1224-1238.
MLA | Duan, Haonan, et al. "Learning Human-to-Robot Dexterous Handovers for Anthropomorphic Hand". IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS 15.3 (2023): 1224-1238.
Files in This Item:
File Name/Size | Document Type | Version | Access | License
Learning_Human-to-Ro(10850KB) | Journal Article | Accepted Manuscript | Open Access | CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.