Research on Computational Models of Human-like Concept Learning (类人概念学习计算模型研究)
王寓巍
2022-12
Pages: 118
Subtype: Doctoral
Abstract

Concepts take many forms and carry many kinds of content. They embody much of our understanding of the world, serve as the building blocks of human thought, and act as the glue that holds our mental world together. Concept learning is a fundamental component of human cognition and plays a crucial role in mental processes such as categorization, reasoning, memory, learning, and decision making. How humans learn concepts has long attracted researchers across many fields. By reviewing work on concept learning in computational neuroscience and cognitive psychology, this thesis distills a tri-network structure of human concept learning: the representation of a concept in the brain consists of two main parts, a multisensory representation emphasizing interaction with the environment and a text-derived representation emphasizing linguistic information, and the two are coordinated by a semantic control system through which the concept is finally acquired. Inspired by this mechanism, this thesis models each submodule of the tri-network with brain-inspired spiking neural networks and builds a human-like computational model of concept learning. Specifically, the work proceeds in the following parts:

To solve the data-input problem of the human-like concept learning model, this thesis first discusses the multisensory representations of concepts proposed by cognitive psychologists on the basis of embodiment theory, and the text-derived representations obtained by computational linguists, under the distributional hypothesis, by training on large-scale corpora. Four experiments using statistical analysis verify that these two kinds of data agree well with human cognition and are feasible as model inputs. The four statistical experiments show that (1) for the same concept, both forms of representation reflect the concept reasonably well, but (2) representational similarity analysis reveals clear differences between the two kinds of representation; (3) for concepts of differing concreteness, the more concrete the concept, the closer its multisensory representation is to human judgments compared with its text-derived representation; and (4) combining the two kinds of representation further improves concept learning.

Following the multisensory experience system module of the tri-network, this thesis uses multisensory representations and brain-inspired spiking neural network algorithms to build a concept learning framework for multisensory integration, fusing information from multiple senses. This part of the work takes a concept's perceptual strengths in five modalities (visual, auditory, tactile, gustatory, and olfactory) and generates a fused multisensory concept representation. Reflecting different theories in cognitive psychology, the framework offers two paradigms, associative integration and independent integration, corresponding to whether the modalities are assumed to be independent of one another before fusion. Experiments show that the fused concept representations are closer to human judgments than the original unfused ones; the similarities and differences between the two paradigms are analyzed systematically, and the generality of the framework is verified. This part models multisensory information and produces spike distribution matrices for the fused multisensory information, an important component of the human-like concept learning model.

Finally, around the tri-network of human concept learning, the language support module and the semantic control system are modeled. The former generates a spike distribution matrix from a concept's text-derived representation; combined with the multisensory integration framework above, spatial and temporal coordination are performed to model the semantic control system and generate human-like concept representations. Using brain-inspired spiking neural network algorithms, the two heterogeneous information sources are coordinated at both the spatial and the temporal scale, solving the problems that the two kinds of concept representation come from different sources and have unbalanced dimensionalities. The resulting representations are closer to human judgments in tests on similar concepts, and a correlation analysis of the model parameters is also carried out.

This thesis centers on the human-like mechanism of concept learning: it systematically analyzes the concept representation datasets, close to human cognition, that the model requires; builds the overall model on brain-inspired spiking neural networks; realizes the coordination of multisensory and text-derived concept representations; and generates representations closer to human cognition.

Other Abstract

There are many different types of concepts that contain much of what we understand about the world. They serve as the foundation of human thought and the glue of our mental world. Concept learning is a fundamental component of human cognition and plays a crucial role in mental processes such as categorization, reasoning, memory, learning, and decision making. The question of how humans engage in concept learning has long attracted the attention of scholars in various fields of research. In this paper, we review the computational neuroscience and cognitive psychology research on concept learning and identify the tri-network of concept learning in humans, i.e., the representation of concepts in the brain consists of two main parts, multisensory representation and text-derived representation, which are coordinated through a semantic control system and eventually acquired. This paper develops a human-like computational model of concept learning, based on spiking neural network models that mimic each submodule of the tri-network. Specifically, the work is carried out through the following components.

In order to solve the problem of data input for a human-like concept learning computational model, this paper first discusses the multisensory representations of concepts proposed by cognitive psychologists based on embodiment theory and the text-derived representations obtained by computational linguists under the distributional hypothesis using large-scale corpus training. The feasibility of these two types of data as model inputs, and their good match with human cognition, are verified by four experiments using statistical analysis. The four statistical experiments are designed to verify the usability of these two types of data, showing that (1) for the same concept, both forms of representation can properly reflect the concept, but (2) the representational similarity analysis findings reveal that the two types of representations are significantly different; (3) as the concreteness of the concept grows larger, the multisensory representation of the concept becomes closer to human judgments than the text-derived representation; and (4) it is also demonstrated that combining the two improves the concept representation.
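Representational similarity analysis compares two representation spaces by correlating their pairwise dissimilarity structures rather than the raw vectors, which is what makes it applicable to spaces of different dimensionality. A minimal sketch with made-up toy data (the concept counts, dimensionalities, and distance/correlation choices here are illustrative assumptions, not the thesis's actual settings):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy stand-ins: 20 concepts, a 5-dim multisensory space (perceptual
# strength ratings) and a 300-dim text-derived embedding space.
multisensory = rng.random((20, 5))
text_derived = rng.random((20, 300))

def rdm(X):
    """Representational dissimilarity matrix, flattened upper triangle."""
    return pdist(X, metric="correlation")

# Second-order (representational) similarity between the two spaces:
# correlate the two RDMs computed over the same set of concepts.
rho, p = spearmanr(rdm(multisensory), rdm(text_derived))
print(f"RSA similarity (Spearman rho) = {rho:.3f}")
```

Because only the 20-by-20 dissimilarity structure is compared, the mismatch between 5 and 300 input dimensions is irrelevant to the analysis.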

In accordance with the multisensory experience system module in the tri-network of human-like concept learning, this paper constructs a concept learning framework for multisensory integration with the help of multisensory representations and brain-inspired spiking neural network algorithms to realize the integration of multiple sensory information. This part of the work allows the generation of concept representations of multisensory fusion using the five types of perceptual intensities of concepts (visual, auditory, tactile, gustatory, and olfactory). Considering different cognitive psychological theories, the framework has two paradigms, associative integration and independent integration, corresponding to the hypothesis of whether the modalities are independent from each other before integration. Experiments illustrate that the concept representation of multisensory integration is closer to humans than the original unfused representation. Further, we systematically analyze the similarities and differences between the two types of paradigms and verify the generality of the framework. This part of the work implements the modeling of multisensory information and completes the neural representation of multisensory integration information, which is an important component of the human-like computational model of concept learning.
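Feeding scalar perceptual intensities into a spiking network requires converting each rating into spike trains. A common scheme is Poisson rate coding; the sketch below is a minimal illustration of that idea with invented parameters (the 0-5 rating scale, the 20 Hz-per-unit scaling, and the 100 ms window are assumptions, not the thesis's actual encoding):

```python
import numpy as np

rng = np.random.default_rng(0)

# Perceptual strength ratings (toy values on a 0-5 scale) for one
# concept across the five modalities used in the thesis.
strengths = {"visual": 4.8, "auditory": 2.1, "tactile": 3.5,
             "gustatory": 0.4, "olfactory": 0.9}

def poisson_spike_train(rate_hz, duration_ms=100, dt_ms=1.0):
    """Rate-code a scalar as a Poisson spike train (1 = spike in a bin)."""
    p_spike = rate_hz * dt_ms / 1000.0
    return (rng.random(int(duration_ms / dt_ms)) < p_spike).astype(int)

# Map each rating to a firing rate and stack the trains into a
# modalities-by-time spike matrix, one row per sensory modality.
spikes = np.stack([poisson_spike_train(20.0 * s)
                   for s in strengths.values()])
print(spikes.shape)  # (5, 100): five modalities, 100 time bins
```

Stronger ratings yield denser rows, so the relative perceptual intensities of the five modalities survive the conversion into the spiking domain.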

Finally, the modeling of the language-support module and the semantic control system is completed around the tri-network of human concept learning. The former relies on text-derived representations of concepts to generate a spiking distribution matrix, which is combined with the above multisensory integration framework to model the semantic control system and generate human-like concept representations by combining spatial and temporal cooperation. Using the brain-inspired spiking neural networks, the problems of the two types of concept representations having different sources and unbalanced dimensionality are solved, human-like concept representations are generated, and the representations obtained are closer to human beings in similar-concept tests. Further, a correlation analysis of the model parameters is performed.
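The dimensional-imbalance problem named above arises because the two sources live in spaces of very different size. One generic way to illustrate spatial coordination is to project both into a shared space before combining them; the sketch below uses a fixed random projection and equal weighting, both of which are arbitrary illustrative choices, not the thesis's spiking-network mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one concept's two heterogeneous representations:
# a 5-dim multisensory vector and a 300-dim text-derived vector.
multisensory = rng.random(5)
text_derived = rng.random(300)

def project(x, out_dim, rng):
    """Map a vector into a shared space via a random linear projection."""
    W = rng.standard_normal((out_dim, x.size)) / np.sqrt(x.size)
    return W @ x

# "Spatial coordination": bring both sources to a common dimensionality
# before fusing; the shared size (64) and 50/50 weights are arbitrary.
shared_dim = 64
fused = (0.5 * project(multisensory, shared_dim, rng)
         + 0.5 * project(text_derived, shared_dim, rng))
print(fused.shape)  # (64,)
```

Once both sources share a dimensionality, any pointwise fusion rule (weighted sum, gating, or the thesis's spike-level coordination) becomes applicable.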

In this paper, we start from the human-like mechanism of concept learning, systematically analyze the dataset of concept representations, similar to human cognition, that the model requires, and build the overall model on brain-inspired spiking neural networks. We realize the cooperation of multisensory and text-derived concept representations, generate representations closer to humans, and complete the design of the computational model for human-like concept learning.

Keywords: Brain-inspired spiking neural networks; Human-like concept learning; Multisensory integration; Text-derived representations
Language: Chinese
Document Type: Dissertation
Identifier: http://ir.ia.ac.cn/handle/173211/57329
Collection: 脑图谱与类脑智能实验室 (Brainnetome and Brain-inspired Intelligence Laboratory), Brain-inspired Cognitive Computing; Doctoral Dissertations of Graduates
Recommended Citation
GB/T 7714
王寓巍. 类人概念学习计算模型研究[D], 2022.
Files in This Item:
王寓巍博士论文提交最终版20221211 (15018 KB), Dissertation, Open Access, CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.