Device Placement Optimization for Deep Neural Networks via One-shot Model and Reinforcement Learning
Zixiang Ding; Yaran Chen; Nannan Li; Dongbin Zhao
2020-12
Conference Name: IEEE Symposium Series on Computational Intelligence
Conference Dates: December 1-4
Conference Location: Canberra, Australia
Abstract

With the development of deep learning, which employs deep neural networks (DNN) as a powerful tool, the computational requirement grows rapidly together with the increasing size (e.g., depth and number of parameters) of DNNs. Currently, model and data parallelism are employed to accelerate the training and inference of DNNs. However, these techniques make device placement decisions for a DNN based on the heuristics and intuitions of machine learning experts. In this paper, we propose a novel approach for designing the device placement of a DNN automatically. For a DNN, we employ a sequence-to-sequence model as a controller to sample device placements from a one-shot model, which contains all possible device placements with respect to a specific hardware environment (e.g., CPU and GPU). Then, reinforcement learning treats the execution time of a sampled device placement as the reward to guide the sequence-to-sequence model toward finding better placements. The proposed approach is employed to optimize the device placement for both model and data parallelism of Inception-V3 on ImageNet. Experimental results show that the optimal placements discovered by our method outperform hand-crafted ones.
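The abstract outlines the method at a high level: a sequence-to-sequence controller samples a device assignment for each operation of the DNN, and reinforcement learning uses the measured execution time of the sampled placement as a (negative) reward. Below is a minimal, hypothetical sketch of such a REINFORCE-style loop in PyTorch; it is not the authors' implementation. PlacementController, measure_runtime, NUM_OPS, NUM_DEVICES, and the moving-average baseline are illustrative assumptions, and measure_runtime is a toy proxy standing in for actually benchmarking a sampled placement on real hardware.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's code): an LSTM controller emits one
# device choice per DNN operation and is trained with REINFORCE, using the
# negative measured execution time of the sampled placement as the reward.

NUM_OPS = 8        # number of operations (nodes) in the DNN graph (assumed)
NUM_DEVICES = 2    # e.g., one CPU and one GPU (assumed)


class PlacementController(nn.Module):
    """Sequence model producing one device decision per operation."""

    def __init__(self, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(NUM_DEVICES + 1, hidden)  # +1 for a start token
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.head = nn.Linear(hidden, NUM_DEVICES)

    def sample(self):
        h = c = torch.zeros(1, self.head.in_features)
        token = torch.tensor([NUM_DEVICES])  # start token
        placement, log_probs = [], []
        for _ in range(NUM_OPS):
            h, c = self.lstm(self.embed(token), (h, c))
            dist = torch.distributions.Categorical(logits=self.head(h))
            device = dist.sample()
            log_probs.append(dist.log_prob(device))
            placement.append(device.item())
            token = device
        return placement, torch.stack(log_probs).sum()


def measure_runtime(placement):
    """Toy proxy: in practice, execute the DNN under this op-to-device
    mapping and time one training step; here we just penalize imbalance."""
    return 1.0 + 0.1 * abs(placement.count(0) - placement.count(1))


controller = PlacementController()
optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)
baseline = None

for step in range(1000):
    placement, log_prob = controller.sample()
    runtime = measure_runtime(placement)   # seconds per step (proxy here)
    reward = -runtime                      # faster placement -> higher reward
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    loss = -(reward - baseline) * log_prob  # REINFORCE with moving-average baseline
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch the one-shot model is represented only implicitly: the controller's output space covers all op-to-device assignments for the assumed hardware set, and each sampled assignment is evaluated by timing it rather than by training a separate model per placement.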

Indexed By: EI
Sub-direction Classification: Reinforcement and Evolutionary Learning
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/40613
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Deep Reinforcement Learning
Corresponding Author: Yaran Chen
Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Zixiang Ding, Yaran Chen, Nannan Li, et al. Device Placement Optimization for Deep Neural Networks via One-shot Model and Reinforcement Learning[C], 2020.
Files in This Item:
There are no files associated with this item.