CASIA OpenIR
(This search is based on the user's claimed works)

Browse/Search results: 6 items total, showing 1-6

Entity-level Cross-modal Learning Improves Multi-modal Machine Translation (Conference paper)
Punta Cana, Dominican Republic, 2021-11-07
Authors: Huang X (黄鑫); Zhang JJ (张家俊); Zong CQ (宗成庆)
Adobe PDF (1714 KB)  |  Views/Downloads: 112/38  |  Submitted: 2023/06/26
Distributed Representations of Emotion Categories in Emotion Space (Conference paper)
Online, August 1-6, 2021
Authors: Wang, Xiangyu; Zong, Chengqing
Adobe PDF (1431 KB)  |  Views/Downloads: 44/28  |  Submitted: 2023/06/20
CSDS: A Fine-Grained Chinese Dataset for Customer Service Dialogue Summarization (Conference paper)
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic, 2021-11-07 - 2021-11-11
Authors: Lin, Haitao; Ma, Liqun; Zhu, Junnan; Xiang, Lu; Zhou, Yu; Zhang, Jiajun; Zong, Chengqing
Adobe PDF (491 KB)  |  Views/Downloads: 130/33  |  Submitted: 2023/06/13
Towards Brain-to-Text Generation: Neural Decoding with Pre-trained Encoder-Decoder Models (Conference paper)
Online conference, 2021-12-13
Authors: Zou, Shuxian; Wang, Shaonan; Zhang, Jiajun; Zong, Chengqing
Adobe PDF (321 KB)  |  Views/Downloads: 180/47  |  Submitted: 2022/06/14
Graph-based Multimodal Ranking Models for Multimodal Summarization (Journal article)
ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2021, Volume 20, Issue 4, Pages 21
Authors: Zhu, Junnan; Xiang, Lu; Zhou, Yu; Zhang, Jiajun; Zong, Chengqing
Adobe PDF (4193 KB)  |  Views/Downloads: 298/58  |  Submitted: 2021/12/28
Keywords: Multimodal summarization; single-modal; multimodal ranking; unsupervised
Medical Term and Status Generation From Chinese Clinical Dialogue With Multi-Granularity Transformer (Journal article)
IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, Volume 29, Pages 3362-3374
Authors: Li, Mei; Xiang, Lu; Kang, Xiaomian; Zhao, Yang; Zhou, Yu; Zong, Chengqing
Adobe PDF (3036 KB)  |  Views/Downloads: 294/66  |  Submitted: 2021/12/28
Keywords: Medical diagnostic imaging; Transformers; Task analysis; Medical services; Computational modeling; Semantics; Data mining; Medical dialogue; multi-granularity; attention mechanism; natural language understanding; sequence to sequence learning