MoEP-AE: Autoencoding Mixtures of Exponential Power Distributions for Open-Set Recognition
Jiayin Sun 1,2,3; Hong Wang 3,4; Qiulei Dong 1,2,3
Source Publication: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
ISSN: 1051-8215
Year: 2023
Volume: 33  Issue: 1  Pages: 312-325
Abstract

Open-set recognition aims to identify unknown classes while maintaining classification performance on known classes, and it has attracted increasing attention in the pattern recognition field. However, learning effective feature representations, whose distributions are usually complex, for classifying both known-class and unknown-class samples remains an open issue when only known-class samples are available for training. In contrast to methods that model the latent space with a single Gaussian, a mixture of Gaussians (MoG), or multiple MoGs, we propose MoEP-AE, a novel autoencoder that learns feature representations by modeling them as mixtures of exponential power distributions (MoEPs) in latent spaces. The proposed autoencoder builds on the observation that many real-world distributions are sub-Gaussian or super-Gaussian and can therefore be represented by MoEPs, but not by a single Gaussian, an MoG, or multiple MoGs. We design a differentiable sampler that can draw samples from an MoEP, which guarantees that the proposed autoencoder can be trained effectively. Furthermore, we propose an MoEP-AE-based open-set recognition method, called MoEP-AE-OSR, by introducing a discrimination strategy: at the training stage, the MoEP-AE models the distributions of the features extracted from the input known-class samples by minimizing a designed loss function. Extensive experimental results in both standard-dataset and cross-dataset settings demonstrate that MoEP-AE-OSR outperforms 14 existing open-set recognition methods in most cases on both open-set and closed-set recognition tasks.
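The abstract's central modeling idea, drawing latent features from a mixture of exponential power (generalized normal) distributions, can be sketched with the classic Gamma-transform sampler: if G ~ Gamma(1/β, 1), then μ + s·α·G^(1/β) with a random sign s follows an EP distribution with location μ, scale α, and shape β (β = 2 recovers a Gaussian, β < 2 gives super-Gaussian tails, β > 2 sub-Gaussian). The sketch below is an illustrative ancestral sampler, not the differentiable sampler the paper designs; all function names and parameter choices are hypothetical.

```python
import math
import random

def sample_ep(mu, alpha, beta, rng=random):
    """Draw one sample from an exponential power (generalized normal)
    distribution via the Gamma transform: G ~ Gamma(1/beta, 1), then
    x = mu + s * alpha * G**(1/beta) with a uniformly random sign s."""
    g = rng.gammavariate(1.0 / beta, 1.0)
    s = 1.0 if rng.random() < 0.5 else -1.0
    return mu + s * alpha * g ** (1.0 / beta)

def sample_moep(weights, mus, alphas, betas, n, seed=0):
    """Draw n samples from a mixture of EP components by ancestral
    sampling: pick a component by its mixture weight, then sample it.
    (Illustrative only; the paper's sampler is differentiable.)"""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        k = rng.choices(range(len(weights)), weights=weights)[0]
        out.append(sample_ep(mus[k], alphas[k], betas[k], rng))
    return out

# Two-component MoEP: a Gaussian-like component (beta = 2) at 0 and a
# heavy-tailed Laplace-like component (beta = 1) at 4; mixture mean 1.2.
samples = sample_moep([0.7, 0.3], [0.0, 4.0], [1.0, 1.0], [2.0, 1.0], 1000)
```

A reparameterized, differentiable version of this sampler is what allows gradients to flow through the latent distribution during autoencoder training, analogous to the reparameterization trick in variational autoencoders.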

Keywords: open-set recognition; autoencoder; scale mixture distribution; exponential power distribution
DOI: 10.1109/TCSVT.2022.3200112
Indexed By: SCI
Language: English
WOS Research Area: Engineering
WOS Subject: Engineering, Electrical & Electronic
WOS ID: WOS:000911746000023
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Is Representative Paper
Sub-direction classification: Object detection, tracking, and recognition
Planning direction of the national key laboratory: Fundamental and frontier theories of artificial intelligence
Citation statistics
Cited Times (WOS): 7
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/51357
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Corresponding Author: Qiulei Dong
Affiliation:
1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2. Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
3. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
4. Univ Chinese Acad Sci, Coll Life Sci, Beijing 100049, Peoples R China
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714: Jiayin Sun, Hong Wang, Qiulei Dong. MoEP-AE: Autoencoding Mixtures of Exponential Power Distributions for Open-Set Recognition[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33(1): 312-325.
APA: Jiayin Sun, Hong Wang, & Qiulei Dong. (2023). MoEP-AE: Autoencoding Mixtures of Exponential Power Distributions for Open-Set Recognition. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 33(1), 312-325.
MLA: Jiayin Sun, et al. "MoEP-AE: Autoencoding Mixtures of Exponential Power Distributions for Open-Set Recognition." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 33.1 (2023): 312-325.
Files in This Item:
File Name/Size: MoEP-AE_Autoencoding (3639KB)
DocType: Journal article
Version: Author's accepted manuscript
Access: Open access
License: CC BY-NC-SA
File name: MoEP-AE_Autoencoding_Mixtures_of_Exponential_Power_Distributions_for_Open-Set_Recognition.pdf
Format: Adobe PDF
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.