Listwise Learning to Rank from Crowds
Wu, Ou1; You, Qiang1; Xia, Fen2; Ma, Lei1; Hu, Weiming3,4
2016-08-01
Journal: ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA
Volume: 11, Issue: 1, Pages: 1-39
Article Type: Article
Abstract: Learning to rank has received great attention in recent years as it plays a crucial role in many applications such as information retrieval and data mining. The existing concept of learning to rank assumes that each training instance is associated with a reliable label. However, in practice, this assumption does not necessarily hold true, as it may be infeasible or remarkably expensive to obtain reliable labels for many learning to rank applications. Therefore, a feasible approach is to collect labels from crowds and then learn a ranking function from the crowdsourced labels. This study explores listwise learning to rank with crowdsourced labels obtained from multiple annotators, who may be unreliable. A new probabilistic ranking model is first proposed by combining two existing models. Subsequently, a ranking function is trained through a maximum likelihood learning approach, which iteratively estimates ground-truth labels and annotator expertise while training the ranking function. In practical crowdsourcing machine learning, valuable side information (e.g., professional grades) about the involved annotators is normally attainable. Therefore, this study also investigates learning to rank from crowd labels when side information on the expertise of the involved annotators is available. In particular, three basic types of side information are investigated, and corresponding learning algorithms are introduced. Further, top-k learning to rank from crowdsourced labels is explored to deal with long training ranking lists. The proposed algorithms are tested on both synthetic and real-world data. Results reveal that the maximum likelihood estimation approach significantly outperforms the averaging approach and existing crowdsourcing regression methods. The performance of the proposed algorithms is comparable to that of a learning model trained on reliable labels. The results further indicate that side information is helpful in inferring both ranking functions and the expertise degrees of annotators.
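The alternating estimation described in the abstract — inferring a consensus ranking and annotator expertise in turn — can be sketched as follows. This is a minimal illustration only, using a weighted Borda count for the consensus step and pairwise (Kendall-style) agreement for the expertise step; these choices, and all function names, are assumptions for exposition and are not the paper's actual probabilistic model.

```python
# Illustrative sketch (not the paper's model): alternately estimate a
# consensus ranking from weighted crowd rankings, then re-estimate each
# annotator's expertise as agreement with that consensus.

from itertools import combinations


def kendall_agreement(rank_a, rank_b):
    """Fraction of item pairs ordered identically by the two rankings."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    pairs = list(combinations(rank_a, 2))
    agree = sum(
        1 for x, y in pairs
        if (pos_a[x] < pos_a[y]) == (pos_b[x] < pos_b[y])
    )
    return agree / len(pairs)


def aggregate_rankings(crowd_ranks, n_iter=10):
    """Alternate between a weighted-Borda consensus ranking and
    agreement-based annotator weights (expertise estimates)."""
    n_items = len(crowd_ranks[0])
    weights = [1.0] * len(crowd_ranks)   # initial expertise: uniform
    consensus = list(crowd_ranks[0])
    for _ in range(n_iter):
        # Consensus step: weighted Borda scores -> consensus ranking.
        scores = {item: 0.0 for item in consensus}
        for w, rank in zip(weights, crowd_ranks):
            for i, item in enumerate(rank):
                scores[item] += w * (n_items - i)   # higher = better
        consensus = sorted(scores, key=scores.get, reverse=True)
        # Expertise step: agreement of each annotator with the consensus.
        weights = [kendall_agreement(rank, consensus) for rank in crowd_ranks]
    return consensus, weights
```

For example, with two reliable annotators and one who ranks everything in reverse, the loop converges to the majority ranking and drives the unreliable annotator's weight toward zero — the same qualitative behavior that motivates estimating expertise jointly with the ranking rather than simply averaging crowd labels.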
Keywords: Listwise Learning to Rank; Crowdsourcing; Multiple Annotators; Probabilistic Ranking Model; Side Information
WOS Headings: Science & Technology; Technology
DOI: 10.1145/2910586
WOS Keywords: MODELS
Indexed by: SCI
Language: English
Funding: National Science Foundation of China (NSFC) (61379098)
WOS Research Area: Computer Science
WOS Subject Categories: Computer Science, Information Systems; Computer Science, Software Engineering
WOS ID: WOS:000382878300004
Citation Statistics: Cited 1 time [WOS]
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/12019
Collection: National Laboratory of Pattern Recognition_Video Content Security
Corresponding Author: Wu, Ou
Affiliations:
1.Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, 95 Zhongguancun East, Beijing, Peoples R China
2.Baidu Inc, Big Data Lab, 10 Shangdi 10th St, Beijing, Peoples R China
3.CAS Ctr Excellence Brain Sci & Intelligence Techn, 95 Zhongguancun East, Beijing, Peoples R China
4.Chinese Acad Sci, Inst Automat, NLPR, 95 Zhongguancun East, Beijing, Peoples R China
Recommended Citation:
GB/T 7714: Wu, Ou, You, Qiang, Xia, Fen, et al. Listwise Learning to Rank from Crowds[J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2016, 11(1): 1-39.
APA: Wu, Ou, You, Qiang, Xia, Fen, Ma, Lei, & Hu, Weiming. (2016). Listwise Learning to Rank from Crowds. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 11(1), 1-39.
MLA: Wu, Ou, et al. "Listwise Learning to Rank from Crowds". ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA 11.1 (2016): 1-39.
Files in this Item:
File Name/Size: tkdd.pdf (1842KB) | Document Type: Journal Article | Version: Author's Accepted Manuscript | Access: Open Access | License: CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.