CASIA OpenIR > Brain Atlas and Brain-inspired Intelligence Laboratory > Brain-inspired Cognitive Computing
An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain
He, Xiang1,2; Zhao, Dongcheng1; Li, Yang1,2; Shen, Guobin1,3; Kong, Qingqun1,2,3; Zeng, Yi1,2,3,4
2024
Conference Name: Association for the Advancement of Artificial Intelligence (AAAI)
Conference Date: 2024-02-20
Conference Place: Vancouver, Canada
Abstract

Spiking neural networks (SNNs) are rich in spatio-temporal dynamics and are suitable for processing event-based neuromorphic data. However, event-based datasets are usually less annotated than static datasets. This small data scale makes SNNs prone to overfitting and limits their performance. In order to improve the generalization ability of SNNs on event-based datasets, we use static images to assist SNN training on event data. In this paper, we first discuss the domain mismatch problem encountered when directly transferring networks trained on static datasets to event data. We argue that the inconsistency of feature distributions becomes a major factor hindering the effective transfer of knowledge from static images to event data. To address this problem, we propose solutions in terms of two aspects: feature distribution and training strategy. Firstly, we propose a knowledge transfer loss, which consists of domain alignment loss and spatio-temporal regularization. The domain alignment loss learns domain-invariant spatial features by reducing the marginal distribution distance between the static image and the event data. Spatio-temporal regularization provides dynamically learnable coefficients for domain alignment loss by using the output features of the event data at each time step as a regularization term. In addition, we propose a sliding training strategy, which gradually replaces static image inputs probabilistically with event data, resulting in a smoother and more stable training for the network. We validate our method on neuromorphic datasets, including N-Caltech101, CEP-DVS, and N-Omniglot. The experimental results show that our proposed method achieves better performance on all datasets compared to the current state-of-the-art methods. Code is available at https://github.com/Brain-Cog-Lab/Transfer-for-DVS.
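The two ingredients described above can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: the domain alignment loss is approximated by the squared distance between batch-mean features (a simple moment-matching proxy; the paper's exact distance measure may differ), the per-time-step weights stand in for the dynamically learnable spatio-temporal regularization coefficients, and the sliding training strategy is assumed to follow a linear schedule. All function names and shapes here are hypothetical.

```python
import random

import numpy as np

def domain_alignment_loss(static_feat, event_feat, step_weights):
    """Sketch of the knowledge transfer loss: pull event-domain features
    toward the static-domain feature distribution.

    static_feat:  (B, D)    batch of static-image features
    event_feat:   (T, B, D) event-data features, one slice per SNN time step
    step_weights: (T,)      per-time-step coefficients (stand-in for the
                            dynamically learnable regularization terms)
    """
    mu_static = static_feat.mean(axis=0)              # (D,) batch-mean static features
    w = np.asarray(step_weights, dtype=float)
    w = w / w.sum()                                   # normalize the coefficients
    loss = 0.0
    for t in range(event_feat.shape[0]):
        mu_event_t = event_feat[t].mean(axis=0)       # (D,) batch-mean event features at step t
        loss += w[t] * float(np.sum((mu_static - mu_event_t) ** 2))
    return loss

def use_event_input(epoch, total_epochs, rng=random):
    """Sliding training strategy: with a probability that grows over
    training, feed the event-domain sample instead of its paired static
    image (linear schedule assumed)."""
    p = min(1.0, epoch / max(1, total_epochs - 1))
    return rng.random() < p
```

In a training loop, `use_event_input` would decide per batch whether the SNN sees static or event data, so early epochs train mostly on the larger static dataset while later epochs shift smoothly to event data.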

MOST Discipline Catalogue: Engineering :: Control Science and Engineering
DOI: https://doi.org/10.1609/aaai.v38i1.27806
Indexed By: EI
Language: English
Sub-direction Classification: Brain-inspired Models and Computing
Planning Direction of the National Key Laboratory: Cognitive Mechanisms and Brain-inspired Learning
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/57241
Collection: Brain Atlas and Brain-inspired Intelligence Laboratory / Brain-inspired Cognitive Computing
Corresponding Author: Kong, Qingqun; Zeng, Yi
Affiliation:
1. Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
3. School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
4. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
He, Xiang, Zhao, Dongcheng, Li, Yang, et al. An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain[C], 2024.
Files in This Item:
File Name/Size: AAAI_Transfer.pdf (1144 KB) | DocType: Conference Paper | Access: Open Access | License: CC BY-NC-SA
Google Scholar
Similar articles in Google Scholar
[He, Xiang]'s Articles
[Zhao, Dongcheng]'s Articles
[Li, Yang]'s Articles
Baidu academic
Similar articles in Baidu academic
[He, Xiang]'s Articles
[Zhao, Dongcheng]'s Articles
[Li, Yang]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[He, Xiang]'s Articles
[Zhao, Dongcheng]'s Articles
[Li, Yang]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.