CASIA OpenIR

Browse/Search Results: 6 items in total, showing items 1-6

Relative Alignment Network for Source-Free Multimodal Video Domain Adaptation  [Conference Paper]
MM '22: Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, October 10-14, 2022
Authors: Huang, Yi; Yang, Xiaoshan; Zhang, Ji; Xu, Changsheng
Adobe PDF (1264 KB) | Views/Downloads: 195/81 | Submitted: 2023/06/21
Learning Hierarchical Video Graph Networks for One-Stop Video Delivery  [Journal Article]
ACM Transactions on Multimedia Computing, Communications, and Applications, 2022, Volume: 18, Issue: 1, Pages: 1-23
Authors: Song, Yaguang; Gao, Junyu; Yang, Xiaoshan; Xu, Changsheng
Adobe PDF (7608 KB) | Views/Downloads: 158/47 | Submitted: 2023/04/25
Keywords: Cross modal; video retrieval; deep learning; graph neural networks
Many Hands Make Light Work: Transferring Knowledge from Auxiliary Tasks for Video-Text Retrieval  [Journal Article]
IEEE Transactions on Multimedia, 2022, Pages: 1-15
Authors: Wang, Wei; Gao, Junyu; Yang, Xiaoshan; Xu, Changsheng
Adobe PDF (3679 KB) | Views/Downloads: 124/23 | Submitted: 2023/04/25
Holographic Feature Learning of Egocentric-Exocentric Videos for Multi-Domain Action Recognition  [Journal Article]
IEEE Transactions on Multimedia, 2022, Volume: 24, Pages: 2273-2286
Authors: Huang, Yi; Yang, Xiaoshan; Gao, Junyu; Xu, Changsheng
Adobe PDF (2409 KB) | Views/Downloads: 350/70 | Submitted: 2022/07/25
Keywords: Videos; Feature extraction; Visualization; Task analysis; Computational modeling; Target recognition; Prototypes; Egocentric videos; exocentric videos; holographic feature; multi-domain; action recognition
Towards a multimodal human activity dataset for healthcare  [Journal Article]
Multimedia Systems, 2022, Pages: 13
Authors: Hu, Menghao; Luo, Mingxuan; Huang, Menghua; Meng, Wenhua; Xiong, Baochen; Yang, Xiaoshan; Sang, Jitao
Views/Downloads: 229/0 | Submitted: 2022/06/06
Keywords: Human activity recognition; Multimodal fusion; Wearable devices; Healthcare
A unified framework for multi-modal federated learning  [Journal Article]
Neurocomputing, 2022, Volume: 480, Pages: 110-118
Authors: Xiong, Baochen; Yang, Xiaoshan; Qi, Fan; Xu, Changsheng
Views/Downloads: 261/0 | Submitted: 2022/06/06
Keywords: Multi-modal; Federated learning; Co-attention