CASIA OpenIR
(Results are based on the user's claimed works)

Browse/Search results: 26 items total, showing items 1-10

Robust Video-Text Retrieval Via Noisy Pair Calibration (Journal article)
IEEE Transactions on Multimedia, 2023, Vol. 25, pp. 8632-8645
Authors: Zhang, Huaiwen; Yang, Yang; Qi, Fan; Qian, Shengsheng; Xu, Changsheng
Views/Downloads: 71/0  |  Submitted: 2024/02/22
Keywords: Noise calibration; uncertainty; video text retrieval
Incremental Audio-Visual Fusion for Person Recognition in Earthquake Scene (Journal article)
ACM Transactions on Multimedia Computing, Communications, and Applications, 2024, Vol. 20, No. 2, pp. 19
Authors: You, Sisi; Zuo, Yukun; Yao, Hantao; Xu, Changsheng
Views/Downloads: 105/0  |  Submitted: 2023/12/21
Keywords: Cross-modal audio-visual fusion; incremental learning; person recognition; elastic weight consolidation; feature replay
Reducing Vision-Answer Biases for Multiple-Choice VQA (Journal article)
IEEE Transactions on Image Processing, 2023, Vol. 32, pp. 4621-4634
Authors: Zhang, Xi; Zhang, Feifei; Xu, Changsheng
Adobe PDF (2684 KB)  |  Views/Downloads: 103/12  |  Submitted: 2023/11/17
Keywords: Multiple-choice VQA; vision-answer bias; causal intervention; counterfactual interaction learning
Zero-Shot Predicate Prediction for Scene Graph Parsing (Journal article)
IEEE Transactions on Multimedia, 2023, Vol. 25, pp. 3140-3153
Authors: Li, Yiming; Yang, Xiaoshan; Huang, Xuhui; Ma, Zhe; Xu, Changsheng
Views/Downloads: 175/0  |  Submitted: 2023/11/17
Keywords: Deep learning; zero-shot; scene graph
Recovering Generalization via Pre-training-like Knowledge Distillation for Out-of-Distribution Visual Question Answering (Journal article)
IEEE Transactions on Multimedia, 2023, Vol. 26, pp. 1-15
Authors: Song, Yaguang; Yang, Xiaoshan; Wang, Yaowei; Xu, Changsheng
Adobe PDF (2397 KB)  |  Views/Downloads: 208/51  |  Submitted: 2023/06/12
Keywords: Multi-modal foundation model; out-of-distribution generalization; visual question answering; knowledge distillation
Weakly-Supervised Video Object Grounding Via Learning Uni-Modal Associations (Journal article)
IEEE Transactions on Multimedia, 2022, Vol. 25, pp. 1-12
Authors: Wang, Wei; Gao, Junyu; Xu, Changsheng
Adobe PDF (5406 KB)  |  Views/Downloads: 150/45  |  Submitted: 2023/04/25
Keywords: Visualization; Grounding; Task analysis; Prototypes; Annotations; Uncertainty; Proposals; Cross-modal retrieval; weakly-supervised learning; video object grounding; uni-modal association
Explicit Cross-Modal Representation Learning for Visual Commonsense Reasoning (Journal article)
IEEE Transactions on Multimedia, 2022, Vol. 24, pp. 2986-2997
Authors: Zhang, Xi; Zhang, Feifei; Xu, Changsheng
Adobe PDF (5681 KB)  |  Views/Downloads: 432/8  |  Submitted: 2022/07/25
Keywords: Cognition; Video recording; Syntactics; Visualization; Task analysis; Semantics; Linguistics; Visual Commonsense Reasoning; explicit reasoning; syntactic structure; interpretability
Holographic Feature Learning of Egocentric-Exocentric Videos for Multi-Domain Action Recognition (Journal article)
IEEE Transactions on Multimedia, 2022, Vol. 24, pp. 2273-2286
Authors: Huang, Yi; Yang, Xiaoshan; Gao, Junyun; Xu, Changsheng
Adobe PDF (2409 KB)  |  Views/Downloads: 392/78  |  Submitted: 2022/07/25
Keywords: Videos; Feature extraction; Visualization; Task analysis; Computational modeling; Target recognition; Prototypes; Egocentric videos; exocentric videos; holographic feature; multi-domain; action recognition
Weakly-Supervised Facial Expression Recognition in the Wild With Noisy Data (Journal article)
IEEE Transactions on Multimedia, 2022, Vol. 24, pp. 1800-1814
Authors: Zhang, Feifei; Xu, Mingliang; Xu, Changsheng
Views/Downloads: 277/0  |  Submitted: 2022/06/10
Keywords: Noise measurement; Face recognition; Data models; Task analysis; Training data; Training; Annotations; Facial expression recognition; noisy labeled data; clean labels; end-to-end; pose modeling; noise modeling
Attribute-Induced Bias Eliminating for Transductive Zero-Shot Learning (Journal article)
IEEE Transactions on Multimedia, 2022, Vol. 24, pp. 1933-1942
Authors: Yao, Hantao; Min, Shaobo; Zhang, Yongdong; Xu, Changsheng
Views/Downloads: 254/0  |  Submitted: 2022/06/10
Keywords: Semantics; Visualization; Bridges; Training; Knowledge transfer; Image recognition; Topology; Transductive Zero-Shot Learning; Graph Attribute Embedding; Attribute-Induced Bias Eliminating; Semantic-Visual Alignment