CASIA OpenIR > Graduates > Doctoral Dissertations
Alternative Title: Traffic Signal Optimal Control Based on Reinforcement Learning
Thesis Advisor: 赵冬斌
Degree Grantor: 中国科学院大学 (University of Chinese Academy of Sciences)
Place of Conferral: 中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Degree Discipline: Control Theory and Control Engineering
Keyword: Reinforcement Learning; Traffic Signal Control; Multi-agent Systems; Clique-based Decomposition; Factor Graphs; General Max-plus Algorithm
Abstract: With the development of the national economy and the acceleration of urbanization, the number of motor vehicles and the volume of road traffic in China have increased sharply, and urban traffic congestion has grown increasingly severe. Research shows that road intersections are the bottleneck of urban transportation systems. This thesis therefore takes urban traffic signal control as its research object and proposes advanced reinforcement learning optimization methods for the optimal control of traffic signals at a single intersection and along an arterial road. The main contributions are as follows. First, for the traffic signal control problem at a single intersection, an adaptive control method based on reinforcement learning is adopted and a normalized reward function is proposed, achieving good learning performance. Second, extensive and systematic simulation experiments are conducted on single-intersection traffic signal control, with a detailed comparative analysis of several issues in reinforcement learning, such as algorithm convergence, reward function design, and the effect of the degree of state discretization. Third, for the coordinated optimal control of multiple agents, a clique-based distributed sparse reinforcement learning method is proposed: for reward assignment in multi-agent reinforcement learning, a clique-based decomposition is proposed to obtain better coordination policies, and the sum-product algorithm on factor graphs is transformed into the general max-plus algorithm and combined with sparse reinforcement learning, so that the problem can be solved in a parallel, distributed way. Fourth, the proposed method is validated on a benchmark problem, the sensor network problem, and compared with six other multi-agent reinforcement learning methods and a single-agent reinforcement learning method; the proposed algorithm achieves the best performance and the fastest learning speed. Fifth, the method is further validated on the coordinated optimization of traffic signals at multiple intersections along an arterial road. The learning of single-intersection control policies and upper-level coordination policies is separated to some extent, with intersection agents and coordination agents learning them respectively, which alleviates the curse of dimensionality. Exploiting the characteristics of adjacent intersections, a new reward function that accurately evaluates the degree of coordination is proposed; experimental results show that the proposed method performs well. Finally, the research results are summarized and directions for further work are outlined.
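The single-intersection approach described above (reinforcement learning with a normalized reward function) can be illustrated with a minimal tabular Q-learning sketch. Everything below — the state discretization, the two-action set, the delay-based reward scaling, and the parameter values — is a hypothetical illustration, not the thesis's actual implementation:

```python
import random
from collections import defaultdict

# Hypothetical setup: state = binned queue lengths on two approaches;
# actions: 0 = keep current phase, 1 = switch phase.
ACTIONS = (0, 1)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # illustrative values

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def normalized_reward(delay, max_delay=120.0):
    # Scale raw vehicle delay (seconds) into [-1, 0], so rewards stay
    # comparable across traffic-volume regimes (the role a normalized
    # reward function plays here).
    return -min(delay, max_delay) / max_delay

def choose_action(state):
    # Epsilon-greedy exploration over the tabular Q-function.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, delay, next_state):
    # Standard Q-learning update with the normalized reward.
    r = normalized_reward(delay)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```

In a simulation loop, `choose_action` would be called once per decision interval and `update` after the resulting delay is observed.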
Other Abstract: With the development of the national economy and the acceleration of urbanization, the number of motor vehicles and the traffic volume in China have grown rapidly, making urban traffic increasingly congested. Research shows that traffic signal control at intersections plays an important role in urban transportation systems, and it is the major focus of this study. This thesis proposes advanced reinforcement-learning-based traffic signal control approaches for a single intersection and an arterial road. Firstly, for traffic light control at a single intersection, we propose a normalized reward function and apply reinforcement learning to design the controller, which outperforms pre-timed control. Secondly, we conduct extensive and systematic experiments on different aspects of reinforcement learning for traffic signal control. We compare the performance of the proposed method with traditional pre-timed control, and analyze the convergence of the algorithm and how it is affected by the reward function and the state representation. Thirdly, to solve the coordination problem among multiple agents, we propose clique-based sparse reinforcement learning using factor graphs. Clique-based decomposition is proposed as a method for assigning reward among agents, aiming to promote coordination. We then derive the general max-plus algorithm from the sum-product algorithm and integrate it with sparse reinforcement learning to solve the coordination problem in a parallel and distributed way. Fourthly, the proposed multi-agent reinforcement learning algorithm is validated on a benchmark problem, the sensor network, and compared with six other multi-agent reinforcement learning algorithms and a single-agent reinforcement learning algorithm. The results show that the proposed method achieves the best performance and the fastest learning speed. Fifthly, the proposed method is applied to the coordination of traffic signal control along an arterial road. To alleviate the curse of dimensionality, different kinds of agents are designed for specific learning tasks. Moreover, we propose a reward function that evaluates the quality of the coordination. Experimental results show that the proposed method has a clear advantage over pre-timed control. Finally, the obtained results are summarized and future work is addressed.
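The general max-plus idea described above — running message passing on a factor graph with (max, +) operations in place of sum-product's (sum, ×), so that agents select a joint action maximizing the sum of local payoffs — can be sketched on the smallest possible coordination graph. The two-agent payoff tables below are made-up illustrative numbers, not taken from the thesis:

```python
import itertools

# Hypothetical payoffs for two agents (e.g. adjacent intersections),
# each with two actions (0 = NS green, 1 = EW green).
local = {
    "i": [0.0, 1.0],   # local payoff of agent i per action
    "j": [0.5, 0.0],   # local payoff of agent j per action
}
pair = [[3.0, 0.0],    # pair[ai][aj]: pairwise payoff rewarding
        [0.0, 3.0]]    # matching (coordinated) phase choices

def max_plus_two_agents(local, pair):
    # One exact sweep of max-plus messages on a two-node graph (a tree).
    # Each message to a neighbor maximizes out the sender's own action.
    msg_i_to_j = [max(local["i"][ai] + pair[ai][aj] for ai in (0, 1))
                  for aj in (0, 1)]
    msg_j_to_i = [max(local["j"][aj] + pair[ai][aj] for aj in (0, 1))
                  for ai in (0, 1)]
    # Each agent picks the action maximizing local payoff + incoming message.
    ai = max((0, 1), key=lambda a: local["i"][a] + msg_j_to_i[a])
    aj = max((0, 1), key=lambda a: local["j"][a] + msg_i_to_j[a])
    return ai, aj

# Brute-force check that message passing recovers the optimal joint action.
best = max(itertools.product((0, 1), (0, 1)),
           key=lambda a: local["i"][a[0]] + local["j"][a[1]] + pair[a[0]][a[1]])
```

On trees such as this two-node graph a single bidirectional sweep is exact; on graphs with cycles, max-plus is typically iterated and used as an approximation.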
Other Identifier: 201018014628021
Document Type: Dissertation
Recommended Citation
GB/T 7714
张震. 基于强化学习的城市交通信号优化控制[D]. 中国科学院自动化研究所. 中国科学院大学, 2010.
Files in This Item:
File Name/Size DocType Version Access License
CASIA_20101801462802 (12129 KB)  Not open access  CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.