Social Event Classification via Boosted Multimodal Supervised Latent Dirichlet Allocation
Shengsheng Qian1,2; Tianzhu Zhang1,2; Changsheng Xu1,2; M. Shamim Hossain3
Journal: ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS
Publication Date: 2014-12-01
Volume: 11, Issue: 2, Pages: 1-22
Article Type: Article
Abstract

With the rapidly increasing popularity of social media sites (e.g., Flickr, YouTube, and Facebook), users can conveniently share their comments on many social events, which facilitates social event generation, sharing, and propagation, and results in a large amount of user-contributed media data (e.g., images, videos, and text) covering a wide variety of real-world events of different types and scales. As a consequence, it has become increasingly difficult to find events of interest in massive social media data, even though such capability is valuable for users and governments that want to browse, search, and monitor social events. To deal with these issues, we propose a novel boosted multimodal supervised Latent Dirichlet Allocation (BMM-SLDA) model for social event classification, which integrates a supervised topic model, denoted multimodal supervised Latent Dirichlet Allocation (mm-SLDA), into a boosting framework. The proposed BMM-SLDA has a number of advantages. (1) The mm-SLDA component jointly exploits the multimodality and the multiclass property of social events, and uses the supervised category label information to classify multiclass social events directly. (2) The model is suitable for large-scale data analysis: a boosting weighted-sampling strategy iteratively selects a small subset of documents to efficiently train the corresponding topic models. (3) The model effectively exploits social event structure through a document weight distribution driven by classification error, and iteratively learns new topic models to correct previously misclassified event documents. We evaluate BMM-SLDA on a real-world dataset and report extensive experimental results, which demonstrate that our model outperforms state-of-the-art methods.
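The abstract outlines an AdaBoost-style loop: maintain a weight distribution over event documents, sample a small subset according to those weights, train an mm-SLDA topic model on the subset, and then increase the weights of misclassified documents so the next round focuses on correcting them. The following is a minimal Python sketch of that boosting-with-weighted-sampling loop under stated assumptions; it is not the authors' implementation. The mm-SLDA weak learner is replaced by a stand-in multinomial Naive Bayes classifier, labels are assumed to be integers 0..K-1, and all names and parameter values (boosted_event_classifier, n_rounds, subset_size) are illustrative.

```python
# Sketch of a SAMME-style multiclass boosting loop with weighted sampling,
# as described in the abstract. MultinomialNB stands in for the per-round
# mm-SLDA topic model purely for illustration.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def boosted_event_classifier(X, y, n_rounds=10, subset_size=500, seed=0):
    """X: nonnegative document-feature counts (n_docs x n_features);
    y: integer event labels in {0, ..., K-1}."""
    rng = np.random.default_rng(seed)
    n_docs = X.shape[0]
    n_classes = len(np.unique(y))
    weights = np.full(n_docs, 1.0 / n_docs)      # document weight distribution
    learners, alphas = [], []

    for _ in range(n_rounds):
        # Weighted sampling: draw a small subset so each round trains quickly.
        idx = rng.choice(n_docs, size=min(subset_size, n_docs),
                         replace=True, p=weights)
        clf = MultinomialNB().fit(X[idx], y[idx])  # stand-in for mm-SLDA

        # Weighted training error on the full corpus.
        pred = clf.predict(X)
        miss = (pred != y)
        err = float(np.dot(weights, miss))
        if err >= 1.0 - 1.0 / n_classes:           # weak learner too weak; stop
            break
        err = max(err, 1e-10)
        alpha = np.log((1.0 - err) / err) + np.log(n_classes - 1)

        # Up-weight misclassified documents, then renormalize.
        weights *= np.exp(alpha * miss)
        weights /= weights.sum()

        learners.append(clf)
        alphas.append(alpha)

    def predict(X_new):
        # Weighted vote over all rounds' learners.
        votes = np.zeros((X_new.shape[0], n_classes))
        for clf, a in zip(learners, alphas):
            votes[np.arange(X_new.shape[0]), clf.predict(X_new)] += a
        return votes.argmax(axis=1)

    return predict
```

In the paper, each round's weak learner is an mm-SLDA topic model trained on the sampled subset, and the final event label is the weighted vote over all rounds; the sketch above mirrors only that boosting structure.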

Keywords: Algorithms; Experimentation; Performance; Social Event Classification; Multimodality; Supervised LDA; AdaBoost; Social Media
WOS Headings: Science & Technology; Technology
WOS Keywords: RECOGNITION; ANNOTATION
Indexed By: SCI
Language: English
WOS Research Area: Computer Science
WOS Categories: Computer Science, Information Systems; Computer Science, Software Engineering; Computer Science, Theory & Methods
WOS Record Number: WOS:000348308800004
Citation Statistics
Times Cited (WOS): 32
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/2822
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Multimedia Computing
Corresponding Author: Changsheng Xu
Affiliations:
1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2. China-Singapore Institute of Digital Media, Singapore 119613, Singapore
3. King Saud University, College of Computer & Information Sciences, Software Engineering Department, Riyadh, Saudi Arabia
First Author's Affiliation: National Laboratory of Pattern Recognition
Corresponding Author's Affiliation: National Laboratory of Pattern Recognition
Recommended Citation
GB/T 7714
Shengsheng Qian, Tianzhu Zhang, Changsheng Xu, et al. Social Event Classification via Boosted Multimodal Supervised Latent Dirichlet Allocation[J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2014, 11(2): 1-22.
APA: Shengsheng Qian, Tianzhu Zhang, Changsheng Xu, & M. Shamim Hossain. (2014). Social Event Classification via Boosted Multimodal Supervised Latent Dirichlet Allocation. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 11(2), 1-22.
MLA: Shengsheng Qian, et al. "Social Event Classification via Boosted Multimodal Supervised Latent Dirichlet Allocation." ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS 11.2 (2014): 1-22.
Files in This Item:
File: TOMM2014_Social Event Classification via Boosted Multimodal Supervised Latent Dirichlet Allocation.pdf (6388KB); Format: Adobe PDF; Document Type: Journal article; Version: Author's accepted manuscript; Access: Open Access; License: CC BY-NC-SA