Table Structure Recognition and Form Parsing by End-to-End Object Detection and Relation Parsing
Li, Xiao-Hui [1,2]; Yin, Fei [1]; Dai, He-Sen [1,2]; Liu, Cheng-Lin [1,2]
Source Publication: PATTERN RECOGNITION
ISSN: 0031-3203
Date Issued: 2022-12-01
Volume: 132, Pages: 14
Corresponding Author: Liu, Cheng-Lin (liucl@nlpr.ia.ac.cn)
Abstract: The recognition of two-dimensional structure of tables and forms from document images is a challenge due to the complexity of document structures and the diversity of layouts. In this paper, we propose a graph neural network (GNN) based unified framework named Table Structure Recognition Network (TSR-Net) to jointly detect and recognize the structures of various tables and forms. First, a multi-task fully convolutional network (FCN) is used to segment primitive regions such as text segments and ruling lines from document images, then a GNN is used to classify and group these primitive regions into page objects such as tables and cells. At last, the relationships between neighboring page objects are analyzed using another GNN based parsing module. The parameters of all the modules in the system can be trained end-to-end to optimize the overall performance. Experiments of table detection and structure recognition for modern documents on the POD 2017, cTDaR 2019 and PubTabNet datasets and template-free form parsing for historical documents on the NAF dataset show that the proposed method can handle various table/form structures and achieve superior performance. © 2022 Elsevier Ltd. All rights reserved.
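The three-stage pipeline in the abstract (segment primitive regions, group them into page objects, then parse relations between them) can be caricatured in plain Python. The boxes, the distance-based grouping rule, and the size-based object labels below are invented stand-ins for the FCN and the two GNNs described in the paper, not the authors' method:

```python
# Illustrative sketch only: TSR-Net's three stages reduced to toy rules.
from itertools import combinations

# Stage 1 stand-in: "primitive regions" as (id, x, y, w, h) boxes
# that the FCN would segment from the page image.
regions = [
    (0, 0, 0, 10, 5), (1, 12, 0, 10, 5),   # header cells
    (2, 0, 7, 10, 5), (3, 12, 7, 10, 5),   # body cells
    (4, 0, 40, 22, 5),                     # an isolated paragraph
]

def adjacent(a, b, gap=4):
    """Toy edge rule: boxes closer than `gap` pixels are linked."""
    ax, ay, aw, ah = a[1:]
    bx, by, bw, bh = b[1:]
    dx = max(ax - (bx + bw), bx - (ax + aw), 0)
    dy = max(ay - (by + bh), by - (ay + ah), 0)
    return dx <= gap and dy <= gap

# Stage 2 stand-in: group linked primitives into page objects via
# union-find (in place of GNN node classification and grouping).
parent = {r[0]: r[0] for r in regions}
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

for a, b in combinations(regions, 2):
    if adjacent(a, b):
        parent[find(a[0])] = find(b[0])

groups = {}
for r in regions:
    groups.setdefault(find(r[0]), []).append(r[0])

# Stage 3 stand-in: label each grouped object by a crude size rule
# (in place of the relation-parsing GNN).
objects = sorted(("table" if len(m) > 1 else "text", sorted(m))
                 for m in groups.values())
print(objects)  # → [('table', [0, 1, 2, 3]), ('text', [4])]
```

The real system replaces each of these hand-written rules with a learned module and trains them jointly end-to-end, which is the paper's central point.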
Keywords: Table detection; Table structure recognition; Template-free form parsing; Graph neural network; End-to-end training
DOI: 10.1016/j.patcog.2022.108946
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program [2018AAA0100400]; National Natural Science Foundation of China (NSFC) [61733007]; National Natural Science Foundation of China (NSFC) [61721004]
Funding Organization: National Key Research and Development Program; National Natural Science Foundation of China (NSFC)
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS ID: WOS:000860987400006
Publisher: ELSEVIER SCI LTD
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/50440
Collection: National Laboratory of Pattern Recognition, Pattern Analysis and Learning
Affiliation:
1. Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714: Li, Xiao-Hui, Yin, Fei, Dai, He-Sen, et al. Table Structure Recognition and Form Parsing by End-to-End Object Detection and Relation Parsing[J]. PATTERN RECOGNITION, 2022, 132: 14.
APA: Li, Xiao-Hui, Yin, Fei, Dai, He-Sen, & Liu, Cheng-Lin. (2022). Table Structure Recognition and Form Parsing by End-to-End Object Detection and Relation Parsing. PATTERN RECOGNITION, 132, 14.
MLA: Li, Xiao-Hui, et al. "Table Structure Recognition and Form Parsing by End-to-End Object Detection and Relation Parsing". PATTERN RECOGNITION 132 (2022): 14.
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.