| BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture |
| Ding, Zixiang (丁子祥)1,2; Chen, Yaran1,2; Li, Nannan1,2; Zhao, Dongbin1,2; Sun, Zhiquan3; Chen, C. L. Philip4,5
|
Journal | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS |
Date Issued | 2020-03
|
Issue | 0; Pages: 0 |
Abstract | Efficient neural architecture search (ENAS)
achieves novel efficiency for learning high-performance
architectures via parameter sharing and reinforcement
learning (RL). In the architecture search phase, ENAS employs
a deep scalable architecture as the search space, whose training
consumes most of the search cost. Moreover, the time-consuming
model training is proportional to the depth of the deep scalable
architecture. Through experiments using ENAS on CIFAR-10,
we find that reducing the layers of the scalable architecture is an
effective way to accelerate the search process of ENAS, but it
suffers a prohibitive performance drop in the architecture
estimation phase. In this article, we propose broad neural
architecture search (BNAS), in which we elaborately design a broad
scalable architecture, dubbed the broad convolutional neural
network (BCNN), to solve the above issue. On the one hand, the
proposed broad scalable architecture trains quickly due to its
shallow topology. Moreover, we also adopt the RL and parameter
sharing used in ENAS as the optimization strategy of BNAS. Hence,
the proposed approach achieves higher search efficiency. On the
other hand, the broad scalable architecture extracts multi-scale
features and enhancement representations and feeds them into a
global average pooling (GAP) layer to yield a more reasonable and
comprehensive representation. Therefore, the performance of the
broad scalable architecture can be guaranteed. In particular, we
also develop two variants of BNAS that modify the topology of BCNN.
To verify the effectiveness of BNAS, several experiments
were performed, and the results show that 1) BNAS delivers a
search cost of 0.19 days, which is 2.37× less expensive than ENAS,
which ranks best among RL-based NAS approaches; 2) among
small-size (0.5 million parameters) and medium-size (1.1 million
parameters) models, the architectures learned by BNAS obtain
state-of-the-art performance (3.58% and 3.24% test error,
respectively) on CIFAR-10; and 3) the learned architecture achieves
25.3% top-1 error on ImageNet using just 3.9 million parameters. |
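To make the abstract's fusion step concrete — multi-scale features and enhancement representations pooled by GAP into one comprehensive representation — here is a minimal NumPy sketch. The tensor shapes and the two mocked branch outputs are hypothetical illustrations, not taken from the paper's implementation:

```python
import numpy as np

def gap(feature_map):
    """Global average pooling: (C, H, W) -> (C,), averaging each channel map."""
    return feature_map.mean(axis=(1, 2))

# Hypothetical outputs of a shallow, broad topology: one convolutional
# branch and one enhancement branch at different spatial scales
# (random tensors stand in for real feature maps here).
rng = np.random.default_rng(0)
conv_features = rng.standard_normal((64, 16, 16))  # conv-block output
enh_features = rng.standard_normal((32, 8, 8))     # enhancement-block output

# Fusion as described in the abstract: pool every branch with GAP,
# then concatenate into a single representation for the classifier.
representation = np.concatenate([gap(conv_features), gap(enh_features)])
print(representation.shape)  # (96,)
```

Because every branch is reduced to a per-channel average before concatenation, branches of different spatial sizes can be combined without resizing, which is what lets a shallow multi-branch topology feed one classifier head.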
Keywords | Broad convolutional neural network (BCNN), image classification, neural architecture search (NAS), reinforcement learning (RL)
|
Indexed By | SCI
|
Language | English
|
WOS ID | WOS:000732345100001
|
Sub-direction Classification | Reinforcement and Evolutionary Learning
|
Citation Statistics |
|
Document Type | Journal article
|
Identifier | http://ir.ia.ac.cn/handle/173211/46583
|
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems_Deep Reinforcement Learning
|
Corresponding Author | Zhao, Dongbin |
Affiliation | 1. State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences; 3. School of Automation and Electrical Engineering, University of Science and Technology Beijing; 4. School of Computer Science & Engineering, South China University of Technology; 5. College of Navigation, Dalian Maritime University
|
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences
|
Corresponding Author Affiliation | Institute of Automation, Chinese Academy of Sciences
|
Recommended Citation (GB/T 7714) |
Ding Z X, Chen Y, Li N, et al. BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020(0): 0.
|
APA |
Ding, Z. X., Chen, Y., Li, N., Zhao, D., Sun, Z., & Chen, C. L. P. (2020). BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS(0), 0.
|
MLA |
Ding, Z. X., et al. "BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 0 (2020): 0.
|