CASIA OpenIR
(These search results are based on the user's claimed works)

Browse/Search results: 22 items in total, showing items 1-10

SgVA-CLIP: Semantic-Guided Visual Adapting of Vision-Language Models for Few-Shot Image Classification (Journal article)
IEEE TRANSACTIONS ON MULTIMEDIA, 2024, Vol. 26, pp. 3469-3480
Authors: Peng, Fang; Yang, Xiaoshan; Xiao, Linhui; Wang, Yaowei; Xu, Changsheng
Favorite | Views/Downloads: 0/0 | Submitted: 2024/07/03
Few-shot  image classification  vision-language models  
CLIP-VG: Self-Paced Curriculum Adapting of CLIP for Visual Grounding (Journal article)
IEEE TRANSACTIONS ON MULTIMEDIA, 2024, Vol. 26, pp. 4334-4347
Authors: Xiao, Linhui; Yang, Xiaoshan; Peng, Fang; Yan, Ming; Wang, Yaowei; Xu, Changsheng
Favorite | Views/Downloads: 25/0 | Submitted: 2024/05/30
Grounding  Reliability  Adaptation models  Task analysis  Visualization  Data models  Annotations  Visual grounding  curriculum learning  pseudo-language label  vision-language models
Self-supervised Calorie-aware Heterogeneous Graph Networks for Food Recommendation (Journal article)
ACM Transactions on Multimedia Computing, Communications, and Applications, 2023, Vol. 19, No. 1s, pp. 1-23
Authors: Song, Yaguang; Yang, Xiaoshan; Xu, Changsheng
Adobe PDF (1381 KB) | Favorite | Views/Downloads: 205/64 | Submitted: 2023/06/12
Food recommendation  recipe calories  heterogeneous graph  self-supervised learning
Recovering Generalization via Pre-training-like Knowledge Distillation for Out-of-Distribution Visual Question Answering (Journal article)
IEEE Transactions on Multimedia, 2023, Vol. 26, pp. 1-15
Authors: Song, Yaguang; Yang, Xiaoshan; Wang, Yaowei; Xu, Changsheng
Adobe PDF (2397 KB) | Favorite | Views/Downloads: 192/48 | Submitted: 2023/06/12
Multi-modal Foundation Model  Out-of-Distribution Generalization  Visual Question Answering  Knowledge Distillation  
Learning Hierarchical Video Graph Networks for One-Stop Video Delivery (Journal article)
ACM Transactions on Multimedia Computing, Communications, and Applications, 2022, Vol. 18, No. 1, pp. 1-23
Authors: Song, Yaguang; Gao, Junyu; Yang, Xiaoshan; Xu, Changsheng
Adobe PDF (7608 KB) | Favorite | Views/Downloads: 166/49 | Submitted: 2023/04/25
Cross modal  video retrieval  deep learning  graph neural networks  
Holographic Feature Learning of Egocentric-Exocentric Videos for Multi-Domain Action Recognition (Journal article)
IEEE TRANSACTIONS ON MULTIMEDIA, 2022, Vol. 24, pp. 2273-2286
Authors: Huang, Yi; Yang, Xiaoshan; Gao, Junyun; Xu, Changsheng
Adobe PDF (2409 KB) | Favorite | Views/Downloads: 364/73 | Submitted: 2022/07/25
Videos  Feature extraction  Visualization  Task analysis  Computational modeling  Target recognition  Prototypes  Egocentric videos  exocentric videos  holographic feature  multi-domain  action recognition  
The Model May Fit You: User-Generalized Cross-Modal Retrieval (Journal article)
IEEE TRANSACTIONS ON MULTIMEDIA, 2021, Vol. 24, pp. 2998-3012
Authors: Ma, Xinhong; Yang, Xiaoshan; Gao, Junyu; Xu, Changsheng
Adobe PDF (6549 KB) | Favorite | Views/Downloads: 278/53 | Submitted: 2022/06/17
cross-modal retrieval  domain generalization  meta-learning  
Emotion Knowledge Driven Video Highlight Detection (Journal article)
IEEE TRANSACTIONS ON MULTIMEDIA, 2021, Vol. 23, pp. 3999-4013
Authors: Qi, Fan; Yang, Xiaoshan; Xu, Changsheng
Favorite | Views/Downloads: 222/0 | Submitted: 2021/12/28
Visualization  Training data  Predictive models  Training  Semantics  Emotion recognition  Computational modeling  Deep ranking  knowledge graph  video highlight detection  
Unsupervised Video Summarization via Relation-Aware Assignment Learning (Journal article)
IEEE TRANSACTIONS ON MULTIMEDIA, 2021, Vol. 23, pp. 3203-3214
Authors: Gao, Junyu; Yang, Xiaoshan; Zhang, Yingying; Xu, Changsheng
Adobe PDF (3649 KB) | Favorite | Views/Downloads: 337/65 | Submitted: 2021/11/03
Feature extraction  Training  Optimization  Semantics  Recurrent neural networks  Task analysis  Graph neural network  unsupervised learning  video summarization  
Learning Coarse-to-Fine Graph Neural Networks for Video-Text Retrieval (Journal article)
IEEE TRANSACTIONS ON MULTIMEDIA, 2021, Vol. 23, pp. 2386-2397
Authors: Wang, Wei; Gao, Junyu; Yang, Xiaoshan; Xu, Changsheng
Adobe PDF (2165 KB) | Favorite | Views/Downloads: 348/46 | Submitted: 2021/11/02
Feature extraction  Encoding  Task analysis  Semantics  Data models  Cognition  Focusing  Video-text retrieval  graph neural network  coarse-to-fine strategy