Learning adversarial point-wise domain alignment for stereo matching
Zhang, Chenghao (1,2); Meng, Gaofeng (1,2,3); Xu, Richard Yi Da (4); Xiang, Shiming (1,2); Pan, Chunhong (1)
Source Publication: NEUROCOMPUTING
ISSN: 0925-2312
Date Issued: 2022-06-28
Volume: 491    Pages: 564-574
Corresponding Author: Meng, Gaofeng (gfmeng@nlpr.ia.ac.cn)
Abstract: State-of-the-art stereo matching models trained on synthetic datasets have difficulty generalizing to real-world datasets. One major reason is that real-world illumination and texture are hard to simulate, resulting in large differences between synthetic and real-world data. In this study, instead of narrowing the image-level appearance difference, we focus on aligning both data domains in feature space in an unsupervised manner and propose an end-to-end domain alignment stereo network (DAStereo). A domain alignment module (DAM) is introduced that learns a point-wise linear transformation. We demonstrate that DAM maintains sufficient alignment capacity with fewer parameters than a globally nonlinear mapping. To explicitly promote the point-wise domain alignment, adversarial learning is further introduced using a cost volume discriminator in a hybrid training manner. Experimental results show that DAStereo outperforms state-of-the-art unsupervised and adaptive methods and even achieves performance comparable to some supervised methods. (C) 2021 Elsevier B.V. All rights reserved.
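No code is linked from this record, but the abstract's two ideas can be made concrete with a minimal PyTorch sketch. Everything below is an illustrative assumption, not the authors' implementation: the class names (PointwiseAlign, CostVolumeDiscriminator), channel sizes, and layer choices are invented for exposition; only the two concepts, a per-pixel linear feature transformation and a discriminator over matching cost volumes, come from the abstract.

import torch
import torch.nn as nn

class PointwiseAlign(nn.Module):
    # A point-wise linear transformation: a 1x1 convolution applies the same
    # linear map independently at every pixel, y[:, :, i, j] = W @ x[:, :, i, j] + b.
    # No activation follows, so the mapping stays linear, in contrast to the
    # globally nonlinear mapping the abstract compares against.
    def __init__(self, channels):
        super().__init__()
        self.linear = nn.Conv2d(channels, channels, kernel_size=1, bias=True)

    def forward(self, feat):
        return self.linear(feat)

class CostVolumeDiscriminator(nn.Module):
    # Scores whether a matching cost volume (B x C x D x H x W) was built from
    # synthetic or real-world features; 3D convolutions cover the disparity axis.
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(64, 1, kernel_size=3, padding=1),  # per-location domain logits
        )

    def forward(self, cost_volume):
        return self.net(cost_volume)

# Toy shapes only: align a feature map, then score a (separately built) cost volume.
align = PointwiseAlign(channels=32)
disc = CostVolumeDiscriminator(channels=32)
feat = torch.randn(2, 32, 64, 128)        # B x C x H x W image features
cost = torch.randn(2, 32, 24, 64, 128)    # B x C x D x H x W cost volume
print(align(feat).shape, disc(cost).shape)

In adversarial training, the discriminator would be optimized to tell the two domains apart (e.g. with nn.BCEWithLogitsLoss on these logits) while the alignment module is optimized to fool it; the hybrid training schedule mentioned in the abstract is not reproduced here.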
Keywords: Stereo Matching; Domain adaptation; Point-wise linear transformation; Adversarial learning
DOI: 10.1016/j.neucom.2021.12.034
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program of China [2018AAA0100400]; National Natural Science Foundation of China [61802407]; National Natural Science Foundation of China [61976208]; National Natural Science Foundation of China [62071466]
Funding Organization: National Key Research and Development Program of China; National Natural Science Foundation of China
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000830181200013
Publisher: ELSEVIER
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/49843
Collection: National Laboratory of Pattern Recognition / Advanced Spatio-temporal Data Analysis and Learning
Affiliation1.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
3.Chinese Acad Sci, Ctr Artificial Intelligence & Robot, HK Inst Sci & Innovat, Beijing, Peoples R China
4.Univ Technol Sydney UTS, Fac Engn & Informat Technol, Ultimo, NSW 2007, Australia
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714
Zhang, Chenghao, Meng, Gaofeng, Xu, Richard Yi Da, et al. Learning adversarial point-wise domain alignment for stereo matching[J]. NEUROCOMPUTING, 2022, 491: 564-574.
APA: Zhang, Chenghao, Meng, Gaofeng, Xu, Richard Yi Da, Xiang, Shiming, & Pan, Chunhong. (2022). Learning adversarial point-wise domain alignment for stereo matching. NEUROCOMPUTING, 491, 564-574.
MLA: Zhang, Chenghao, et al. "Learning adversarial point-wise domain alignment for stereo matching". NEUROCOMPUTING 491 (2022): 564-574.
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Zhang, Chenghao]'s Articles
[Meng, Gaofeng]'s Articles
[Xu, Richard Yi Da]'s Articles
Baidu academic
Similar articles in Baidu academic
[Zhang, Chenghao]'s Articles
[Meng, Gaofeng]'s Articles
[Xu, Richard Yi Da]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Zhang, Chenghao]'s Articles
[Meng, Gaofeng]'s Articles
[Xu, Richard Yi Da]'s Articles