CASIA OpenIR

Browse/Search results: 10 records, showing 1–10

Accurate Lung Nodule Segmentation With Detailed Representation Transfer and Soft Mask Supervision [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, Pages: 13
Authors: Wang, Changwei; Xu, Rongtao; Xu, Shibiao; Meng, Weiliang; Xiao, Jun; Zhang, Xiaopeng
Adobe PDF (4178 KB) | Views/Downloads: 155/12 | Submitted: 2023/12/21
Detailed representation transfer; lung nodule segmentation; medical image segmentation; soft mask
Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Vol. 33, No. 3, Pages: 1228-1241
Authors: Zhu, Yuanheng; Zhao, Dongbin
Adobe PDF (2838 KB) | Views/Downloads: 236/6 | Submitted: 2022/06/10
Games; Nash equilibrium; Mathematical model; Markov processes; Convergence; Dynamic programming; Training; deep reinforcement learning (DRL); generalized policy iteration (GPI); Markov game (MG); Q network; zero sum
Attention Enhanced Reinforcement Learning for Multi-Agent Cooperation [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Pages: 15
Authors: Pu, Zhiqiang; Wang, Huimu; Liu, Zhen; Yi, Jianqiang; Wu, Shiguang
Adobe PDF (2967 KB) | Views/Downloads: 348/51 | Submitted: 2022/06/06
Training; Reinforcement learning; Games; Scalability; Task analysis; Standards; Optimization; attention mechanism; deep reinforcement learning (DRL); graph convolutional networks; multi-agent systems
Towards Better Generalization of Deep Neural Networks via Non-Typicality Sampling Scheme [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Pages: 11
Authors: Peng, Xinyu; Wang, Fei-Yue; Li, Li
Views/Downloads: 193/0 | Submitted: 2022/06/06
Training; Estimation; Deep learning; Standards; Optimization; Noise measurement; Convergence; generalization performance; nontypicality sampling scheme; stochastic gradient descent (SGD)
Question-Guided Erasing-Based Spatiotemporal Attention Learning for Video Question Answering [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Pages: 0
Authors: Liu, Fei; Liu, Jing; Hong, Richang; Lu, Hanqing
Adobe PDF (3550 KB) | Views/Downloads: 358/90 | Submitted: 2022/01/27
video question answering; attention mechanism; metric learning
Event-Triggered Communication Network With Limited-Bandwidth Constraint for Multi-Agent Reinforcement Learning [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Pages: 13
Authors: Hu, Guangzheng; Zhu, Yuanheng; Zhao, Dongbin; Zhao, Mengchen; Hao, Jianye
Adobe PDF (4187 KB) | Views/Downloads: 252/10 | Submitted: 2022/01/27
Bandwidth; Protocols; Reinforcement learning; Task analysis; Optimization; Communication networks; Multi-agent systems; event trigger; limited bandwidth; multi-agent communication; multi-agent reinforcement learning (MARL)
Drill the Cork of Information Bottleneck by Inputting the Most Important Data [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Pages: 13
Authors: Peng, Xinyu; Zhang, Jiawei; Wang, Fei-Yue; Li, Li
Views/Downloads: 219/0 | Submitted: 2022/01/27
Training; Signal-to-noise ratio; Mutual information; Optimization; Convergence; Deep learning; Tools; information bottleneck (IB) theory; machine learning; minibatch stochastic gradient descent (SGD); typicality sampling
EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Vol. 32, No. 10, Pages: 4499-4513
Authors: Ruan, Xiaofeng; Liu, Yufan; Yuan, Chunfeng; Li, Bing; Hu, Weiming; Li, Yangxi; Maybank, Stephen
Adobe PDF (3625 KB) | Views/Downloads: 347/47 | Submitted: 2021/06/17
Data-driven; low-rank decomposition; model compression and acceleration; structured pruning
Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Vol. 31, No. 11, Pages: 4649-4659
Authors: Peng, Xinyu; Li, Li; Wang, Fei-Yue
Views/Downloads: 254/0 | Submitted: 2021/01/06
Training; Convergence; Approximation algorithms; Stochastic processes; Estimation; Optimization; Acceleration; batch selection; machine learning; minibatch stochastic gradient descent (SGD); speed of convergence
The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization [Journal article]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Vol. 31, No. 7, Pages: 2557-2568
Authors: Tao, Wei; Pan, Zhisong; Wu, Gaowei; Tao, Qing
Views/Downloads: 250/0 | Submitted: 2020/08/03
Convergence; Extrapolation; Optimization; Acceleration; Machine learning; Task analysis; Machine learning algorithms; individual convergence; Nesterov's extrapolation; nonsmooth optimization; sparsity