CASIA OpenIR

Browse/Search Results: 5 items total, showing 1-5

Gesture Recognition Based on Air-Pressure Mechanomyography and an Improved Neuro-Fuzzy Inference System [Journal Article]
Acta Automatica Sinica, 2022, Vol. 48, No. 5, Pages: 1220-1233
Authors: 汪雷; 黄剑; 段涛; 伍冬睿; 熊蔡华; 崔雨琦
Adobe PDF (24430 KB)  |  Views/Downloads: 26/9  |  Submitted: 2024/05/20
Keywords: Gesture recognition; Mechanomyography; Neuro-fuzzy inference system; Adaptive learning algorithm
Visuals to Text: A Comprehensive Review on Automatic Image Captioning [Journal Article]
IEEE/CAA Journal of Automatica Sinica, 2022, Vol. 9, No. 8, Pages: 1339-1365
Authors: Yue Ming; Nannan Hu; Chunxiao Fan; Fan Feng; Jiangwan Zhou; Hui Yu
Adobe PDF (56128 KB)  |  Views/Downloads: 185/23  |  Submitted: 2022/08/01
Keywords: Artificial intelligence; Attention mechanism; Encoder-decoder framework; Image captioning; Multi-modal understanding; Training strategies
Towards Better Generalization of Deep Neural Networks via Non-Typicality Sampling Scheme [Journal Article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Pages: 11
Authors: Peng, Xinyu; Wang, Fei-Yue; Li, Li
Views/Downloads: 199/0  |  Submitted: 2022/06/06
Keywords: Training; Estimation; Deep learning; Standards; Optimization; Noise measurement; Convergence; Generalization performance; Non-typicality sampling scheme; Stochastic gradient descent (SGD)
Drill the Cork of Information Bottleneck by Inputting the Most Important Data [Journal Article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Pages: 13
Authors: Peng, Xinyu; Zhang, Jiawei; Wang, Fei-Yue; Li, Li
Views/Downloads: 225/0  |  Submitted: 2022/01/27
Keywords: Training; Signal-to-noise ratio; Mutual information; Optimization; Convergence; Deep learning; Tools; Information bottleneck (IB) theory; Machine learning; Minibatch stochastic gradient descent (SGD); Typicality sampling
Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling [Journal Article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Vol. 31, No. 11, Pages: 4649-4659
Authors: Peng, Xinyu; Li, Li; Wang, Fei-Yue
Views/Downloads: 261/0  |  Submitted: 2021/01/06
Keywords: Training; Convergence; Approximation algorithms; Stochastic processes; Estimation; Optimization; Acceleration; Batch selection; Machine learning; Minibatch stochastic gradient descent (SGD); Speed of convergence