A multi-level text representation model within background knowledge based on human cognitive process for big data analysis
Wei, Xiao1,2; Zhang, Jun3; Zeng, Daniel Dajun1; Li, Qing4
Source Publication: CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS
Publication Date: 2016-09-01
Volume: 19; Issue: 3; Pages: 1475-1487
Subtype: Article
Abstract: Text representation is among the most fundamental tasks in text comprehension, processing, and search. Many approaches have been proposed to mine the semantics of texts and represent them, but most focus only on the semantics of the text itself; few take background knowledge into account, even though it is essential to text understanding. In this paper, drawing on the human cognitive process, we propose a multi-level text representation model with background knowledge, called TRMBK. It comprises three levels: the machine surface code, the machine text base, and the machine situational model, all of which can be constructed automatically to acquire semantics both inside and outside the text. We also propose a method to build background knowledge automatically and use it to support the comprehension of the current text. Finally, experiments and comparisons demonstrate the improved performance of TRMBK.
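The three levels named in the abstract can be pictured concretely. The Python sketch below is purely illustrative and is not the paper's implementation: the class names, the naive triple extraction, and the dictionary-based knowledge base are all assumptions made here to show how a surface code, a text base, and a situational model enriched with background knowledge might be layered.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and logic are assumptions made for this
# record, not TRMBK's published implementation.

@dataclass
class SurfaceCode:
    """Machine surface code: the literal wording of the text."""
    tokens: list

@dataclass
class TextBase:
    """Machine text base: propositions mined from the text itself,
    modelled here as (subject, predicate, object) triples."""
    propositions: list

@dataclass
class SituationalModel:
    """Machine situational model: the text base enriched with
    background-knowledge facts relevant to the text."""
    text_base: TextBase
    background: list = field(default_factory=list)

def build_representation(text, knowledge_base):
    """Toy three-level pipeline: tokenize, extract placeholder triples,
    then attach background facts whose key term appears in the text."""
    tokens = text.lower().split()
    surface = SurfaceCode(tokens=tokens)
    # Placeholder for the paper's semantic mining step: every run of
    # three consecutive tokens is treated as one proposition.
    props = [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]
    base = TextBase(propositions=props)
    facts = [f for t in tokens for f in knowledge_base.get(t, [])]
    return surface, base, SituationalModel(text_base=base, background=facts)

# Example: background knowledge supplies a fact the text never states.
kb = {"jaguar": [("jaguar", "is-a", "big cat")]}
surface, base, situation = build_representation("the jaguar hunts at night", kb)
print(situation.background)  # [('jaguar', 'is-a', 'big cat')]
```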
Keywords: Text Comprehension; Background Knowledge; Text Representation; Human Cognitive Process; Surface Code; Text Base; Situational Model; Semantics
WOS Headings: Science & Technology; Technology
DOI: 10.1007/s10586-016-0616-3
WOS Keywords: MEMORY; WEB
Indexed By: SCI; SSCI
Language: English
Funding Organization: Science Foundation of Shanghai (16ZR1435500); National Science Foundation of China (61562020)
WOS Research Area: Computer Science
WOS Subject: Computer Science, Information Systems; Computer Science, Theory & Methods
WOS ID: WOS:000382635400032
Citation Statistics
Cited Times (WOS): 4
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/12656
Collection: State Key Laboratory of Management and Control for Complex Systems / Internet Big Data and Information Security
Affiliation:
1. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing, Peoples R China
2. Shanghai Inst Technol, Shanghai, Peoples R China
3. Shanghai Univ, Shanghai, Peoples R China
4. City Univ Hong Kong, Dept Comp Sci, Kowloon Tong, Hong Kong, Peoples R China
Recommended Citation
GB/T 7714
Wei, Xiao, Zhang, Jun, Zeng, Daniel Dajun, et al. A multi-level text representation model within background knowledge based on human cognitive process for big data analysis[J]. CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2016, 19(3): 1475-1487.
APA: Wei, Xiao, Zhang, Jun, Zeng, Daniel Dajun, & Li, Qing. (2016). A multi-level text representation model within background knowledge based on human cognitive process for big data analysis. CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 19(3), 1475-1487.
MLA: Wei, Xiao, et al. "A multi-level text representation model within background knowledge based on human cognitive process for big data analysis". CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS 19.3 (2016): 1475-1487.
Files in This Item:
File Name: A multi-level text representation model within background knowledge based on human cognitive process for big data analysis.pdf
Size: 4930 KB
Format: Adobe PDF
DocType: Journal Article
Version: Author Accepted Manuscript
Access: Open Access
License: CC BY-NC-SA