BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search
Zixiang Ding 1,2; Yaran Chen 1,2; Nannan Li 1,2; Dongbin Zhao 1,2
Journal: IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS
Publication Date: 2022-01
Volume: 0  Issue: 0  Pages: 0
Abstract

In this paper, we propose BNAS-v2 to further improve the efficiency of BNAS, which employs a Broad Convolutional Neural Network (BCNN) as its search space. In BNAS, the single-path sampling-and-updating strategy over an over-parameterized BCNN causes a severe unfair training issue that limits further efficiency gains. To mitigate this issue, we employ a continuous relaxation strategy to optimize all paths of the over-parameterized BCNN simultaneously. However, continuous relaxation introduces a performance collapse issue that degrades the performance of the learned BCNN. To address this, we propose the Confident Learning Rate (CLR) and introduce the combination of partial channel connections and edge normalization. Experimental results show that 1) BNAS-v2 delivers state-of-the-art search efficiency on both CIFAR-10 (0.05 GPU days, 4× faster than BNAS) and ImageNet (0.19 GPU days) with better or competitive performance; 2) both solutions effectively alleviate the performance collapse issue; and 3) BNAS-v2 generalizes well across multiple transfer tasks, e.g., MNIST, FashionMNIST, NORB, and SVHN.
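Note on the remedies named above: the following is a minimal PyTorch sketch of how DARTS-style continuous relaxation combines with PC-DARTS-style partial channel connections and edge normalization, the published techniques the abstract builds on. The candidate operation list, the partial-channel ratio k=4, and the class names PartialChannelMixedOp and EdgeNormalizedNode are illustrative assumptions, not the paper's implementation; BNAS-v2 applies these ideas within its broad (BCNN) search space, and its Confident Learning Rate is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def candidate_ops(channels):
    # Hypothetical candidate set; a real cell would use the paper's
    # operation list (separable convs, dilated convs, pooling, skip, ...).
    return nn.ModuleList([
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        nn.Conv2d(channels, channels, 5, padding=2, bias=False),
        nn.AvgPool2d(3, stride=1, padding=1),
        nn.Identity(),
    ])


class PartialChannelMixedOp(nn.Module):
    """Continuous relaxation (DARTS): a softmax over architecture logits
    alpha mixes all candidate ops at once. Partial channel connections
    (PC-DARTS): only 1/k of the channels pass through the mixture; the
    rest bypass, cutting memory so every path of the over-parameterized
    network can be optimized simultaneously on one GPU."""

    def __init__(self, channels, k=4):
        super().__init__()
        assert channels % k == 0
        self.k = k
        self.ops = candidate_ops(channels // k)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        c = x.size(1) // self.k
        x_active, x_bypass = x[:, :c], x[:, c:]
        weights = F.softmax(self.alpha, dim=-1)
        mixed = sum(w * op(x_active) for w, op in zip(weights, self.ops))
        out = torch.cat([mixed, x_bypass], dim=1)
        # Channel shuffle so different channel groups become the active
        # slice across iterations (as in PC-DARTS).
        n, ch, h, w = out.shape
        return out.view(n, self.k, ch // self.k, h, w).transpose(1, 2).reshape(n, ch, h, w)


class EdgeNormalizedNode(nn.Module):
    """Edge normalization: softmax-normalized weights beta over a node's
    incoming edges damp the fluctuation introduced by partial sampling."""

    def __init__(self, channels, num_inputs, k=4):
        super().__init__()
        self.edges = nn.ModuleList(
            [PartialChannelMixedOp(channels, k) for _ in range(num_inputs)])
        self.beta = nn.Parameter(1e-3 * torch.randn(num_inputs))

    def forward(self, inputs):
        coeffs = F.softmax(self.beta, dim=-1)
        return sum(b * edge(x) for b, x, edge in zip(coeffs, inputs, self.edges))


# Usage: in a bilevel search loop, alpha/beta would be updated on
# validation data while the op weights train on training data.
node = EdgeNormalizedNode(channels=16, num_inputs=2)
y = node([torch.randn(2, 16, 8, 8), torch.randn(2, 16, 8, 8)])  # -> (2, 16, 8, 8)
```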

Keywords: Broad neural architecture search (BNAS), continuous relaxation, confident learning rate, partial channel connections, image classification.
Indexed By: SCI
Language: English
WOS ID: WOS:000750216400001
Sub-direction Classification: Reinforcement and Evolutionary Learning
Citation Statistics
Times Cited: 11 [WOS]
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/46597
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Deep Reinforcement Learning
Corresponding Author: Dongbin Zhao
Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation:
GB/T 7714: Zixiang Ding, Yaran Chen, Nannan Li, et al. BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search[J]. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, 2022, 0(0): 0.
APA: Zixiang Ding, Yaran Chen, Nannan Li, & Dongbin Zhao. (2022). BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, 0(0), 0.
MLA: Zixiang Ding, et al. "BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search". IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS 0.0 (2022): 0.
Files in This Item:
File Name/Size | Document Type | Version | Access | License
bnas_v2.pdf (7657KB) | Journal article | Author accepted manuscript | Open Access | CC BY-NC-SA
 
