|Place of Conferral||Institute of Automation, Chinese Academy of Sciences|
|Keyword||Medical Consultation; Machine Learning; Convolutional Neural Network; Attention Mechanism; Knowledge Reasoning|
With the rapid development of the 'E-Health' industry, the number of users in online medical question answering communities is increasing day by day, and the growing demand far exceeds the workload that human doctors can bear. Hence, using machine learning algorithms to learn from medical big data and build an intelligent question answering system has become an important research hotspot. However, due to users' non-standard expressions, the scarcity of annotations, and the high standard of interpretability required in medical scenarios, existing methods generally suffer from poor accuracy and low robustness. This leads to an important machine learning problem: how to effectively use a medical question answering corpus of low value density and high noise level to solve the representation problem of language and knowledge, and thereby improve the level of semantic understanding.
Therefore, three bottlenecks need to be broken through in research on medical question answering methods: first, semantic feature representations of medical text do not utilize the structural information of language; second, when understanding the chief complaint, it is difficult to align users' oral language with medical terminology during semantic analysis; third, some relational knowledge is missing from existing medical knowledge graphs. To address these problems, while also considering the attributes of online medical corpora, this thesis focuses on three key technologies of medical question answering: text semantic representation, oral entity extraction, and knowledge graph completion, and constructs an intelligent medical question answering system adapted to the characteristics of the medical field. The main innovations of this thesis are as follows:
1. This thesis proposes a short text representation model that embeds dependency parsing. To address the mismatch between convolutional neural networks and the recursive nature of natural language, a weight layer based on the dependency syntactic tree is introduced to map the depth of a word in the syntactic tree to the weight of its word vector, implicitly fusing syntactic structure information. On this basis, a convolutional neural network is used to extract semantic features and form the text representation. While maintaining the advantages of parallel computing, the model does not need any fine-grained word-level annotation. In addition, the proposed model can be extended to different tasks, such as text classification, duplicate detection, and text pair ranking. Experiments on each task confirm that the proposed model outperforms state-of-the-art models, and the learned word weights accord with human cognition.
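The depth-to-weight idea above can be illustrated with a minimal numpy sketch. Note that the reciprocal weighting form, the `alpha` decay hyperparameter, and the max-over-time pooling below are assumptions made for demonstration, not the thesis's actual formulation:

```python
import numpy as np

def depth_weights(depths, alpha=0.5):
    """Map each word's depth in the dependency tree to a scalar weight.
    Shallower words (closer to the syntactic root) get larger weights.
    `alpha` is a hypothetical decay hyperparameter."""
    depths = np.asarray(depths, dtype=float)
    return 1.0 / (1.0 + alpha * depths)

def weighted_conv_features(embeddings, depths, kernel, alpha=0.5):
    """Scale word vectors by their syntactic weights, then slide a 1D
    convolution kernel over the word dimension and max-pool over time."""
    w = depth_weights(depths, alpha)              # (n_words,)
    x = embeddings * w[:, None]                   # (n_words, dim)
    n, _ = x.shape
    k = kernel.shape[0]                           # kernel: (k, dim)
    feats = np.array([np.sum(x[i:i + k] * kernel) for i in range(n - k + 1)])
    return feats.max()                            # max-over-time pooling
```

In this sketch the syntactic structure enters only through the per-word scaling, so the convolution itself remains fully parallelizable, matching the abstract's claim that parallel computing is preserved.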
2. This thesis proposes a 'proposing-rejecting' two-stage symptom entity extraction method. To address the difficulties of insufficient context, variations in word form, and the lack of annotations in oral queries, a cross attention network is proposed to model the attention a human doctor pays to a user's query, and to propose candidate symptom entities through the associations between query-answer pairs. On this basis, a semantic-cluster-based filtering model is proposed to determine the clustering centers and boundaries of known entities, which are then used to reject outliers among the candidate entities. In addition, this thesis designs an automatically annotated training set to facilitate statistical learning models, effectively combining the advantages of dictionary-based and statistical learning approaches to improve generalization performance on the symptom entity extraction task.
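The two-stage pipeline might look roughly like the following minimal numpy sketch. The dot-product attention, the mean attention-mass `threshold`, and the centroid `radius` test are all illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def propose_candidates(query_vecs, answer_vecs, threshold=0.3):
    """Stage 1 ('proposing'): cross attention from answer tokens to query
    tokens; query tokens receiving average attention mass above a
    hypothetical `threshold` become candidate symptom mentions."""
    attn = softmax(answer_vecs @ query_vecs.T, axis=-1)  # (n_ans, n_query)
    mass = attn.mean(axis=0)                             # per-query-token mass
    return [i for i, m in enumerate(mass) if m > threshold]

def reject_outliers(candidate_vecs, known_entity_vecs, radius=1.0):
    """Stage 2 ('rejecting'): keep only candidates lying within `radius`
    of the centroid of known symptom-entity embeddings."""
    center = known_entity_vecs.mean(axis=0)
    return [i for i, v in enumerate(candidate_vecs)
            if np.linalg.norm(v - center) <= radius]
```

The sketch uses a single centroid for clarity; the thesis's filtering model determines multiple cluster centers and boundaries from existing entities.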
3. This thesis proposes a topology-adaptive knowledge graph embedding method. To address the failure of a series of translational distance models caused by circle structures and link density in knowledge graphs, this thesis argues for the semantic difference of entities in different positions from a generalized viewpoint, and proves that knowledge uncertainty is equivalent to an adaptive margin between positive and negative triplets. On this basis, a location-sensitive model with a self-attention scoring block is proposed to improve the representation capacity of the embedding model through the semantic distinction between head and tail entities. The self-attention block is also used to score knowledge triplets, and it greatly improves the performance of existing models. In addition, three simplified models with adaptive margins are proposed. By decomposing the covariance matrices, the margin can be adapted according to the link density of the knowledge, so that the Gaussian embedding model can be simplified while its representation capacity is greatly improved. Experiments on each task confirm that the proposed model achieves higher, or at least comparable, performance relative to state-of-the-art models.
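A toy numpy sketch of a translational distance score combined with an adaptive margin follows. The logarithmic `adaptive_margin` schedule keyed to entity degree is a hypothetical stand-in for the covariance-decomposition-based margin the thesis derives:

```python
import numpy as np

def transe_score(h, r, t):
    """Translational distance score: lower means the triplet (h, r, t)
    is more plausible, since plausible triplets satisfy h + r ≈ t."""
    return np.linalg.norm(h + r - t)

def adaptive_margin(entity_degree, base=1.0, scale=0.5):
    """Hypothetical margin schedule: densely linked (high-degree, hence
    more uncertain) entities get a smaller margin, mirroring the claim
    that knowledge uncertainty acts as an adaptive margin."""
    return base / (1.0 + scale * np.log1p(entity_degree))

def margin_loss(pos, neg, margin):
    """Margin ranking loss over a positive triplet and a corrupted one:
    zero once the positive scores better than the negative by `margin`."""
    return max(0.0, margin + transe_score(*pos) - transe_score(*neg))
```

Replacing the fixed margin of standard translational models with a degree-dependent one is the simplest way to make the loss sensitive to local graph topology, which is the intuition the abstract attributes to the covariance decomposition.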
|Funding Project||Natural Science Foundation of China|
|张似衡 (Zhang Siheng). Research on Machine Learning Models and Algorithms for Medical Consultation Big Data [D]. Institute of Automation, Chinese Academy of Sciences. University of Chinese Academy of Sciences, 2020.|
|Similar articles in Google Scholar|
|Similar articles in Baidu academic|
|Similar articles in Bing Scholar|
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.