Low Latency Spiking ConvNets with Restricted Output Training and False Spike Inhibition
Chen RZ (陈睿智)1,2; Ma H (马鸿)1; Guo P (郭鹏)1; Xie SL (谢少林)1; Wang DL (王东琳)1
2018-10
Conference | 2018 International Joint Conference on Neural Networks (IJCNN)
Conference Date | 2018-07
Venue | Rio de Janeiro, Brazil
Abstract | Deep convolutional neural networks (ConvNets) have achieved state-of-the-art performance on many real-world applications. However, ConvNets impose significant computation and storage demands. Spiking neural networks (SNNs), with sparsely activated neurons and event-driven computation, show great potential to exploit ultra-low-power spike-based hardware architectures. Yet, training SNNs to accuracy comparable with ConvNets is difficult. Recent work has demonstrated the conversion of ConvNets to SNNs (CNN-SNN conversion) with similar accuracy; however, the energy efficiency of the converted SNNs is impaired by increased classification latency. In this paper, we focus on optimizing the classification latency of converted SNNs. First, we propose a restricted output training method to normalize the converted weights dynamically in the CNN-SNN training phase. Second, false spikes are identified, and a false spike inhibition theory is derived to speed up the convergence of the classification process. Third, we propose a temporal max pooling method that approximates the max pooling operation in ConvNets without accuracy loss. The evaluation shows that the converted SNNs converge in about 30 time-steps and achieve a best classification accuracy of 94% on the CIFAR-10 dataset.
Indexed By | EI
Language | English
Document Type | Conference Paper
Identifier | http://ir.ia.ac.cn/handle/173211/23614
Collection | National ASIC Design Engineering Technology Research Center
Affiliation | 1. Institute of Automation, Chinese Academy of Sciences; 2. University of Chinese Academy of Sciences
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Chen RZ, Ma H, Guo P, et al. Low Latency Spiking ConvNets with Restricted Output Training and False Spike Inhibition[C], 2018.
Files in This Item:
File Name/Size | Document Type | Access | License
paper1.pdf (11839 KB) | Conference Paper | Open Access | CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.