Knowledge Commons of Institute of Automation, CAS
A PD-Type State-Dependent Riccati Equation With Iterative Learning Augmentation for Mechanical Systems
Saeed Rafee Nekoo; José Ángel Acosta; Guillermo Heredia; Anibal Ollero
Source Publication | IEEE/CAA Journal of Automatica Sinica |
ISSN | 2329-9266 |
Year | 2022 |
Volume | 9 |
Issue | 8 |
Pages | 1499-1511 |
Abstract | This work proposes a novel proportional-derivative (PD)-type state-dependent Riccati equation (SDRE) approach with iterative learning control (ILC) augmentation. On the one hand, the PD-type control gains can adopt many of the available criteria and tuning tools of conventional PD controllers. On the other hand, the SDRE adds nonlinear and optimality characteristics to the controller, increasing the stability margins. Combined with the ILC correction term, these advantages deliver a precise control law capable of reducing error through learning. The SDRE provides a symmetric-positive-definite distributed nonlinear suboptimal gain K(x) for the control input law u = −R⁻¹(x)Bᵀ(x)K(x)x. The sub-blocks of the overall gain R⁻¹(x)Bᵀ(x)K(x) are not necessarily symmetric positive definite. A new design is proposed to transform the optimal gain into two symmetric-positive-definite gains, as in PD-type controllers: u = −K_SP(x)e − K_SD(x)ė. The new form allows the stability of the proposed learning-based controller to be proven analytically for mechanical systems, and it guarantees uniform boundedness in finite time between learning loops. The symmetric PD-type controller is also developed for the state-dependent differential Riccati equation (SDDRE) to manipulate the final time. The SDDRE is a differential equation with a final boundary condition, which imposes a constraint on time and can therefore be used for finite-time control; the availability of PD-type finite-time control thus enhances conventional linear controllers with this tool. The learning rules are based on the gradient descent method for both regulation and tracking cases. One advantage of this approach is guaranteed stability even in the first learning loop. A mechanical manipulator, as an illustrative example, was simulated for both regulation and tracking problems. Successful experimental validation on a variable-pitch rotor benchmark demonstrated the capability of the proposed method in practice. |
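To make the control law quoted in the abstract concrete, the following Python sketch computes the pointwise SDRE feedback u = −R⁻¹(x)Bᵀ(x)K(x)x for a simple damped pendulum. This is a minimal illustration under assumptions introduced here (the pendulum model, its parameters, the state-dependent coefficient factorization, and the weights Q and R); it does not reproduce the paper's PD-type gain decomposition u = −K_SP(x)e − K_SD(x)ė or the ILC augmentation.

```python
# Minimal SDRE state-feedback sketch for a damped pendulum (illustrative only;
# the model, parameters, and weights below are assumptions, not the paper's).
import numpy as np
from scipy.linalg import solve_continuous_are

# Pendulum dynamics: th_ddot = -(g/l) sin(th) - (d/(m l^2)) th_dot + u/(m l^2)
m, l, d, g = 1.0, 1.0, 0.1, 9.81

def sdc_matrices(x):
    """State-dependent coefficient (SDC) factorization x_dot = A(x) x + B(x) u."""
    th = x[0]
    # sin(th)/th with its limit 1 at th = 0 (np.sinc is the normalized sinc).
    sinc_th = np.sinc(th / np.pi)
    A = np.array([[0.0, 1.0],
                  [-(g / l) * sinc_th, -d / (m * l**2)]])
    B = np.array([[0.0],
                  [1.0 / (m * l**2)]])
    return A, B

Q = np.diag([10.0, 1.0])   # state weight (illustrative)
R = np.array([[0.1]])      # input weight (illustrative)

def sdre_control(x):
    """Suboptimal feedback u = -R^{-1} B(x)^T K(x) x from the pointwise Riccati solution."""
    A, B = sdc_matrices(x)
    K = solve_continuous_are(A, B, Q, R)   # symmetric positive-definite K(x)
    return -np.linalg.solve(R, B.T @ K @ x)

# Usage example: one explicit-Euler simulation step.
x = np.array([0.5, 0.0])            # initial angle 0.5 rad, at rest
u = sdre_control(x)
A, B = sdc_matrices(x)
x_next = x + 0.01 * (A @ x + B @ u)
print("u =", u, "x_next =", x_next)
```

In the paper's scheme, an ILC correction term learned between loops by gradient descent would be added to this feedback; that part is omitted here.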
Keyword | Closed-loop iterative learning control (ILC); PD-type; SDRE; SDDRE; symmetric |
DOI | 10.1109/JAS.2022.105533 |
Document Type | Journal article |
Identifier | http://ir.ia.ac.cn/handle/173211/49657 |
Collection | Academic Journals_IEEE/CAA Journal of Automatica Sinica |
Recommended Citation GB/T 7714 | Saeed Rafee Nekoo, José Ángel Acosta, Guillermo Heredia, et al. A PD-Type State-Dependent Riccati Equation With Iterative Learning Augmentation for Mechanical Systems[J]. IEEE/CAA Journal of Automatica Sinica, 2022, 9(8): 1499-1511. |
APA | Saeed Rafee Nekoo, José Ángel Acosta, Guillermo Heredia, & Anibal Ollero. (2022). A PD-Type State-Dependent Riccati Equation With Iterative Learning Augmentation for Mechanical Systems. IEEE/CAA Journal of Automatica Sinica, 9(8), 1499-1511.
MLA | Saeed Rafee Nekoo, et al. "A PD-Type State-Dependent Riccati Equation With Iterative Learning Augmentation for Mechanical Systems". IEEE/CAA Journal of Automatica Sinica 9.8 (2022): 1499-1511.
Files in This Item:
File Name/Size | DocType | Version | Access | License |
JAS-2021-1034.pdf (2890KB) | Journal article | Published version | Open access | CC BY-NC-SA |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.