The plasticity of the brain gives us a remarkable ability to learn about and understand the world. Although great successes have been achieved in many fields, few bio-inspired methods have mimicked this ability. Existing methods are infeasible when the data is time-varying and large in scale, because they require all training data to be loaded into memory. Furthermore, even popular deep convolutional neural network (CNN) models have relatively fixed structures. Through an incremental PCANet, this paper explores a lifelong learning framework that achieves plasticity in both feature and classifier construction. The proposed model comprises three parts: Gabor filters followed by a max-pooling layer, which provide shift and scale tolerance to input samples; cascaded incremental PCA, which achieves plasticity in feature extraction; and an incremental SVM, which pursues plasticity in classifier construction. Unlike a CNN, the plasticity in our model involves no backpropagation (BP) and does not require a huge number of parameters. Experimental results validate the plasticity of our model in both feature and classifier construction, and further support the physiological hypothesis that plasticity is greater in higher layers than in lower layers.
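The incremental pipeline described above can be sketched with off-the-shelf components: `sklearn.decomposition.IncrementalPCA` stands in for the plastic feature extractor, and `SGDClassifier` with hinge loss stands in for the incremental linear SVM. This is a minimal illustration under assumed inputs (random arrays in place of Gabor/max-pooled features); the batch sizes, component counts, and classifier settings are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Plastic feature extractor: PCA filters updated batch by batch.
ipca = IncrementalPCA(n_components=8)
# Plastic classifier: linear SVM trained incrementally via SGD.
clf = SGDClassifier(loss="hinge", random_state=0)
classes = np.array([0, 1])

for step in range(5):  # data arrives in batches over time
    # Stand-in for features produced by the Gabor + max-pooling front end.
    X = rng.normal(size=(64, 32))
    y = rng.integers(0, 2, size=64)
    ipca.partial_fit(X)                # update PCA basis without revisiting old data
    Z = ipca.transform(X)
    clf.partial_fit(Z, y, classes=classes)  # update SVM on the new batch only

preds = clf.predict(ipca.transform(rng.normal(size=(3, 32))))
print(preds.shape)  # (3,)
```

Because both `partial_fit` calls see only the current batch, memory stays bounded regardless of how much data has streamed past, which is the key property the abstract claims over batch methods.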
Wang-Li Hao, Zhaoxiang Zhang. Incremental PCANet: A Lifelong Learning Framework to Achieve the Plasticity of both Feature and Classifier Constructions [C], 2016.