Compressing Speaker Extraction Model with Ultra-low Precision Quantization and Knowledge Distillation
Yating Huang1,2; Yunzhe Hao1,2; Jiaming Xu1,3; Bo Xu1,2,3
Journal: Neural Networks
Publication Date: 2022-06
Volume: 154, Pages: 13-21
Abstract

Recently, our proposed speaker extraction model, WASE (learning When to Attend for Speaker Extraction), outperformed prior state-of-the-art methods by explicitly modeling the onset clue and treating it as important guidance in speaker extraction tasks. However, deploying it on resource-constrained devices remains challenging: the model must be tiny and fast, performing inference within a minimal CPU and memory budget while preserving speaker extraction performance. In this work, we apply model compression techniques to address this problem and propose TinyWASE, a lightweight speaker extraction model designed to run on resource-constrained devices. Specifically, we investigate the combined effects of quantization-aware training and knowledge distillation in the speaker extraction task and propose Distillation-aware Quantization. Experiments on the WSJ0-2mix dataset show that our proposed model achieves performance comparable to the full-precision model while reducing the model size with ultra-low-bit quantization (e.g., 3 bits), obtaining an 8.97x compression ratio and a 2.15 MB model size. We further show that TinyWASE can be combined with other model compression techniques, such as parameter sharing, to achieve a compression ratio as high as 23.81x with limited performance degradation. Our code is available at https://github.com/aispeech-lab/TinyWASE.
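The record gives no implementation details, but the combination the abstract describes (quantization-aware training of an ultra-low-bit student guided by knowledge distillation from a full-precision teacher) can be sketched roughly as follows. This is a minimal illustration under assumed PyTorch conventions; the names (FakeQuantize, QuantLinear, distillation_aware_quantization_step), the MSE placeholder losses, and the 0.5 loss weighting are hypothetical and not taken from the paper, which uses its own speaker extraction losses, onset-clue inputs, and architecture.

```python
# Minimal sketch, NOT the authors' implementation: 3-bit quantization-aware
# training combined with output-level knowledge distillation from a
# full-precision teacher. All names and losses here are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuantize(torch.autograd.Function):
    """Uniform k-bit fake quantization with a straight-through estimator."""

    @staticmethod
    def forward(ctx, w, bits=3):
        # Symmetric uniform quantization of weights to roughly 2^bits levels.
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradients pass through unchanged.
        return grad_output, None


class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized during training."""

    def __init__(self, in_features, out_features, bits=3):
        super().__init__(in_features, out_features)
        self.bits = bits

    def forward(self, x):
        w_q = FakeQuantize.apply(self.weight, self.bits)
        return F.linear(x, w_q, self.bias)


def distillation_aware_quantization_step(student, teacher, mix, target,
                                         optimizer, alpha=0.5):
    """One training step: task loss on the quantized student plus a
    distillation term pulling its output toward the full-precision teacher's.
    The real models also take clue inputs; a single input is used here."""
    with torch.no_grad():
        teacher_out = teacher(mix)           # full-precision teacher estimate
    student_out = student(mix)               # quantized student estimate
    task_loss = F.mse_loss(student_out, target)           # placeholder task loss
    distill_loss = F.mse_loss(student_out, teacher_out)   # distillation term
    loss = (1 - alpha) * task_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```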

Indexed By: EI
Sub-direction Classification (Seven Major Directions): Brain-inspired Models and Computing
State Key Laboratory Planned Research Direction: Speech and Language Processing
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/49724
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems_Auditory Models and Cognitive Computing
Corresponding Author: Jiaming Xu
Affiliations:
1. Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China
2. School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
3. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
4. Center for Excellence in Brain Science and Intelligence Technology, CAS, Shanghai, China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation:
GB/T 7714: Yating Huang, Yunzhe Hao, Jiaming Xu, et al. Compressing Speaker Extraction Model with Ultra-low Precision Quantization and Knowledge Distillation[J]. Neural Networks, 2022, 154: 13-21.
APA: Yating Huang, Yunzhe Hao, Jiaming Xu, & Bo Xu. (2022). Compressing Speaker Extraction Model with Ultra-low Precision Quantization and Knowledge Distillation. Neural Networks, 154, 13-21.
MLA: Yating Huang, et al. "Compressing Speaker Extraction Model with Ultra-low Precision Quantization and Knowledge Distillation". Neural Networks 154 (2022): 13-21.
Files in This Item:
File Name: NN_Compressing Speaker Extraction.pdf (801 KB) | Document Type: Journal Article | Version: Published Version | Access: Open Access | License: CC BY-NC-SA