CASIA OpenIR

Browse/Search Results: 4 items total, showing items 1-4

Mixture of personality improved spiking actor network for efficient multi-agent cooperation (Journal article)
FRONTIERS IN NEUROSCIENCE, 2023, Vol. 17, Pages: 14
Authors:  Li, Xiyun;  Ni, Ziyi;  Ruan, Jingqing;  Meng, Linghui;  Shi, Jing;  Zhang, Tielin;  Xu, Bo
Views/Downloads: 48/0  |  Submitted: 2023/11/17
Keywords: multi-agent cooperation; personality theory; spiking actor networks; multi-agent reinforcement learning; theory of mind
Combined genome-wide association study of 136 quantitative ear morphology traits in multiple populations reveal 8 novel loci (Journal article)
PLOS GENETICS, 2023, Vol. 19, Issue 7, Pages: 25
Authors:  Li, Yi;  Xiong, Ziyi;  Zhang, Manfei;  Hysi, Pirro G.;  Qian, Yu;  Adhikari, Kaustubh;  Weng, Jun;  Wu, Sijie;  Du, Siyuan;  Gonzalez-Jose, Rolando;  Schuler-Faccini, Lavinia;  Bortolini, Maria-Catira;  Acuna-Alonzo, Victor;  Canizales-Quinteros, Samuel;  Gallo, Carla;  Poletti, Giovanni;  Bedoya, Gabriel;  Rothhammer, Francisco;  Wang, Jiucun;  Tan, Jingze;  Yuan, Ziyu;  Jin, Li;  Uitterlinden, Andre G.;  Ghanbari, Mohsen;  Ikram, M. Arfan;  Nijsten, Tamar;  Zhu, Xiangyu D.;  Lei, Zhen;  Jia, Peilin;  Ruiz-Linares, Andres;  Spector, Timothy D.;  Wang, Sijia;  Kayser, Manfred;  Liu, Fan
Views/Downloads: 29/0  |  Submitted: 2023/11/17
Improving Cross-State and Cross-Subject Visual ERP-Based BCI With Temporal Modeling and Adversarial Training (Journal article)
IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2022, Vol. 30, Pages: 369-379
Authors:  Ni, Ziyi;  Xu, Jiaming;  Wu, Yuwei;  Li, Mengfan;  Xu, Guizhi;  Xu, Bo
Views/Downloads: 182/0  |  Submitted: 2022/06/06
Keywords: Brain modeling; Electroencephalography; Visualization; Training; Task analysis; Feature extraction; Adaptation models; Brain-computer interface; temporal modeling; adversarial training; cross-subject; cross-state
Image captioning with triple-attention and stack parallel LSTM (Journal article)
NEUROCOMPUTING, 2018, Vol. 319, Pages: 55-65
Authors:  Zhu, Xinxin;  Li, Lixiang;  Liu, Jing;  Li, Ziyi;  Peng, Haipeng;  Niu, Xinxin
Views/Downloads: 227/0  |  Submitted: 2019/12/16
Keywords: Image caption; Deep learning; LSTM; CNN; Attention