STGSA: A Novel Spatial-Temporal Graph Synchronous Aggregation Model for Traffic Prediction
Zebing Wei; Hongxia Zhao; Zhishuai Li; Xiaojie Bu; Yuanyuan Chen; Xiqiao Zhang; Yisheng Lv; Fei-Yue Wang
Source Publication: IEEE/CAA Journal of Automatica Sinica
ISSN: 2329-9266
Year: 2023
Volume: 10, Issue: 1, Pages: 226-238
Abstract: The success of intelligent transportation systems relies heavily on accurate traffic prediction, in which how to model the underlying spatial-temporal information from traffic data has come under the spotlight. Most existing frameworks use separate modules to model spatial and temporal correlations. However, this stepwise pattern may limit the effectiveness and efficiency of spatial-temporal feature extraction and cause important information to be overlooked at some steps. Furthermore, modeling based on a given spatial adjacency graph (e.g., one derived from geodesic distance or approximate connectivity) lacks sufficient guidance from prior information and may not reflect the actual interactions between nodes. To overcome these limitations, this paper proposes a spatial-temporal graph synchronous aggregation (STGSA) model that extracts localized and long-term spatial-temporal dependencies simultaneously. Specifically, a tailored graph aggregation method in the vertex domain is designed to extract spatial and temporal features in a single graph convolution process. In each STGSA block, we devise a directed temporal correlation graph to represent the localized and long-term dependencies between nodes, and the potential temporal dependence is further fine-tuned by an adaptive weighting operation. Meanwhile, we construct an elaborated spatial adjacency matrix to represent the road sensor graph by considering both physical distance and node similarity in a data-driven manner. Then, inspired by the multi-head attention mechanism, which can jointly emphasize information from different representation subspaces, we construct a multi-stream module based on the STGSA blocks to capture global information; it projects the embedded input repeatedly through multiple different channels. Finally, the predicted values are generated by stacking several multi-stream modules. Extensive experiments are conducted on six real-world datasets, and numerical results show that the proposed STGSA model significantly outperforms the benchmarks.
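The abstract's central idea, aggregating spatial and temporal neighbours in one graph convolution over a joint (sensor, time-step) graph, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the joint-adjacency construction, the learnable temporal weights, and all tensor shapes are assumptions introduced only to show how spatial edges and directed temporal edges could be merged so that a single aggregation step mixes information along both dimensions.

```python
# Minimal, illustrative sketch (not the authors' STGSA implementation) of
# "synchronous" spatial-temporal aggregation over a joint graph whose
# vertices are (sensor, time-step) pairs. Shapes and the combination rule
# are assumptions for illustration only.

import torch
import torch.nn as nn


class SyncGraphConv(nn.Module):
    """One graph convolution over a joint spatial-temporal graph."""

    def __init__(self, num_nodes: int, window: int, in_dim: int, out_dim: int):
        super().__init__()
        self.num_nodes = num_nodes      # number of road sensors N
        self.window = window            # number of input time steps T
        self.linear = nn.Linear(in_dim, out_dim)
        # Learnable refinement of temporal edge weights (loosely mirroring the
        # abstract's "adaptive weighting"); initialised to unweighted edges.
        self.temporal_scale = nn.Parameter(torch.ones(window, window))

    def joint_adjacency(self, a_spatial: torch.Tensor) -> torch.Tensor:
        """Build an (N*T, N*T) adjacency from an (N, N) spatial adjacency.

        Spatial edges connect sensors within each time step; directed temporal
        edges let later time steps aggregate from earlier ones at the same sensor.
        """
        n, t = self.num_nodes, self.window
        device = a_spatial.device
        # Block-diagonal spatial part: kron(I_T, A_spatial).
        spatial_block = torch.kron(torch.eye(t, device=device), a_spatial)
        # Strictly lower-triangular temporal part: time step i receives from
        # earlier steps j < i, scaled by the learnable weights.
        temporal_mask = torch.tril(torch.ones(t, t, device=device), diagonal=-1)
        temporal_block = torch.kron(temporal_mask * self.temporal_scale,
                                    torch.eye(n, device=device))
        adj = spatial_block + temporal_block + torch.eye(n * t, device=device)
        # Row-normalise so aggregation averages over neighbours.
        return adj / adj.sum(dim=-1, keepdim=True)

    def forward(self, x: torch.Tensor, a_spatial: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, N, in_dim) -> flatten time and nodes into one axis.
        b = x.size(0)
        h = x.reshape(b, self.window * self.num_nodes, -1)
        adj = self.joint_adjacency(a_spatial)
        h = torch.relu(self.linear(adj @ h))   # aggregate, then transform
        return h.reshape(b, self.window, self.num_nodes, -1)


if __name__ == "__main__":
    layer = SyncGraphConv(num_nodes=5, window=4, in_dim=2, out_dim=8)
    x = torch.randn(3, 4, 5, 2)                # (batch, T, N, features)
    a = torch.rand(5, 5)                       # toy spatial adjacency
    print(layer(x, a).shape)                   # torch.Size([3, 4, 5, 8])
```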
Keywords: Deep learning; graph neural network (GNN); multi-stream; spatial-temporal feature extraction; temporal graph; traffic prediction
DOI: 10.1109/JAS.2023.123033
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/50738
Collection: Academic Journals_IEEE/CAA Journal of Automatica Sinica
Recommended Citation
GB/T 7714: Zebing Wei, Hongxia Zhao, Zhishuai Li, et al. STGSA: A Novel Spatial-Temporal Graph Synchronous Aggregation Model for Traffic Prediction[J]. IEEE/CAA Journal of Automatica Sinica, 2023, 10(1): 226-238.
APA: Zebing Wei, Hongxia Zhao, Zhishuai Li, Xiaojie Bu, Yuanyuan Chen, ... & Fei-Yue Wang. (2023). STGSA: A Novel Spatial-Temporal Graph Synchronous Aggregation Model for Traffic Prediction. IEEE/CAA Journal of Automatica Sinica, 10(1), 226-238.
MLA: Zebing Wei, et al. "STGSA: A Novel Spatial-Temporal Graph Synchronous Aggregation Model for Traffic Prediction." IEEE/CAA Journal of Automatica Sinica 10.1 (2023): 226-238.
Files in This Item:
File Name/Size: JAS-2022-0912.pdf (7068 KB)
DocType: Journal Article
Version: Published Version
Access: Open Access
License: CC BY-NC-SA