CASIA OpenIR

Browse/Search Results: 9 items, showing 1-9

Deep Rank-Consistent Pyramid Model for Enhanced Crowd Counting (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, Pages: 14
Authors: Gao, Jiaqi; Huang, Zhizhong; Lei, Yiming; Shan, Hongming; Wang, James Z.; Wang, Fei-Yue; Zhang, Junping
Submitted: 2024/02/22
Keywords: Crowd counting; feature pyramid; ranking; semi-supervised learning
Learning Lightweight Dynamic Kernels With Attention Inside via Local-Global Context Fusion (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Pages: 15
Authors: Tian, Yonglin; Shen, Yu; Wang, Xiao; Wang, Jiangong; Wang, Kunfeng; Ding, Weiping; Wang, Zilei; Wang, Fei-Yue
Submitted: 2023/03/20
Keywords: Attention inside kernels; dynamic convolution; global context; local context
VGN: Value Decomposition With Graph Attention Networks for Multiagent Reinforcement Learning (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Pages: 14
Authors: Wei, Qinglai; Li, Yugu; Zhang, Jie; Wang, Fei-Yue
Submitted: 2022/07/25
Keywords: Mathematical models; Task analysis; Games; Q-learning; Neural networks; Behavioral sciences; Training; Deep learning; graph attention networks (GATs); multiagent systems; reinforcement learning
Towards Better Generalization of Deep Neural Networks via Non-Typicality Sampling Scheme (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Pages: 11
Authors: Peng, Xinyu; Wang, Fei-Yue; Li, Li
Submitted: 2022/06/06
Keywords: Training; Estimation; Deep learning; Standards; Optimization; Noise measurement; Convergence; generalization performance; nontypicality sampling scheme; stochastic gradient descent (SGD)
Drill the Cork of Information Bottleneck by Inputting the Most Important Data (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Pages: 13
Authors: Peng, Xinyu; Zhang, Jiawei; Wang, Fei-Yue; Li, Li
Submitted: 2022/01/27
Keywords: Training; Signal-to-noise ratio; Mutual information; Optimization; Convergence; Deep learning; Tools; Information bottleneck (IB) theory; machine learning; minibatch stochastic gradient descent (SGD); typicality sampling
Convolutional Ordinal Regression Forest for Image Ordinal Estimation (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Pages: 12
Authors: Zhu, Haiping; Shan, Hongming; Zhang, Yuheng; Che, Lingfu; Xu, Xiaoyang; Zhang, Junping; Shi, Jianbo; Wang, Fei-Yue
Submitted: 2022/01/27
Keywords: Decision trees; Estimation; Task analysis; Forestry; Vegetation; Random forests; Support vector machines; Differentiable decision trees; image ordinal estimation; ordinal distribution learning; ordinal regression (OR); random forest
Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Volume: 31, Issue: 11, Pages: 4649-4659
Authors: Peng, Xinyu; Li, Li; Wang, Fei-Yue
Submitted: 2021/01/06
Keywords: Training; Convergence; Approximation algorithms; Stochastic processes; Estimation; Optimization; Acceleration; Batch selection; machine learning; minibatch stochastic gradient descent (SGD); speed of convergence
Stability-Based Generalization Analysis of Distributed Learning Algorithms for Big Data (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Volume: 31, Issue: 3, Pages: 801-812
Authors: Wu, Xinxing; Zhang, Junping; Wang, Fei-Yue
Submitted: 2020/06/02
Keywords: Big data; distributed learning algorithms; distributed simulations; generalization
Adaptive Consensus Control for a Class of Nonlinear Multiagent Time-Delay Systems Using Neural Networks (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2014, Volume: 25, Issue: 6, Pages: 1217-1226
Authors: Chen, C. L. Philip; Wen, Guo-Xing; Liu, Yan-Jun; Wang, Fei-Yue
Submitted: 2015/08/12
Keywords: Consensus control; Lyapunov-Krasovskii functional; neural networks (NNs); nonlinear multiagent systems; time delay