Feedforward neural networks have been applied extensively in the real world. However, their model selection remains an open problem in the field of neural networks, because there is neither strict theoretical guidance nor a standard selection criterion. Moreover, with the continual emergence of new theories and algorithms, it is increasingly difficult for a single feedforward neural network used for classification (or function approximation) to achieve satisfactory performance. Mixing feedforward neural networks of different types may therefore be a feasible way to obtain a higher classification accuracy rate (or approximation accuracy). In this dissertation, feedforward neural networks are studied from the perspectives of model selection and hybrid strategies. The main contributions of this dissertation are as follows.

1) Based on mutual information from information theory, we propose a two-phase construction approach for multilayer perceptrons. The approach automatically removes irrelevant input units and redundant hidden units. Experimental results demonstrate the efficiency of our method; furthermore, compared with related work, the proposed strategy exhibits better generalization ability on the benchmark data sets.

2) We propose a hybrid learning method for elliptical basis function neural networks. Experimental results show that it improves the classification ability of traditional radial basis function neural networks: to reach the same (or better) test accuracy rate, an elliptical basis function neural network trained with the proposed method needs far fewer training epochs and much less training time than a radial basis function neural network trained in the same manner.

3) For classifying unlabeled data without any class-label information, we propose a model named "adaptive fuzzy c-means clustering based mixture of experts".
Experimental results demonstrate its superior performance. An extended version of the proposed model for semi-supervised classification is also presented; experimental results show that the extended version, too, outperforms related work.

4) We propose the Gauss-Chebyshev neural network and present its training method, the back-propagation algorithm, in detail. Experimental results show that the Gauss-Chebyshev neural network improves the generalization, approximation, and anti-noise abilities of the Gaussian neural network. With the same topological structure and the same parameter settings, the Gauss-Chebyshev neural network approximates a function more quickly and accurately than the Gauss-Sigmoid neural network.
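As a hedged illustration of the basis functions that give contribution 4 its name, the sketch below generates Chebyshev polynomials of the first kind via the recurrence T_0(x) = 1, T_1(x) = x, T_{n+1}(x) = 2x·T_n(x) − T_{n−1}(x), alongside a standard Gaussian activation. How the dissertation actually combines the two families inside one hidden layer is specified in the dissertation body; nothing below should be read as the proposed architecture itself, and all function names are illustrative.

```python
import math

def chebyshev_basis(x, degree):
    """Chebyshev polynomials of the first kind, T_0(x)..T_degree(x),
    computed with the three-term recurrence T_{n+1} = 2*x*T_n - T_{n-1}."""
    values = [1.0, x]  # T_0, T_1
    for _ in range(2, degree + 1):
        values.append(2.0 * x * values[-1] - values[-2])
    return values[: degree + 1]

def gaussian(x, center=0.0, width=1.0):
    """Gaussian activation, as used by Gaussian hidden units."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))
```

On [-1, 1] the Chebyshev polynomials are orthogonal and bounded, which is one common motivation for mixing a polynomial basis with localized Gaussian units when approximating functions.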
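For reference, contribution 3 builds on fuzzy c-means clustering; the following is a minimal sketch of the standard (non-adaptive) algorithm on 1-D data, alternating the inverse-distance membership update with the fuzzily weighted centre update. The adaptive cluster-number selection and the mixture-of-experts gating are the dissertation's additions and are not reproduced here; the function name and parameters are illustrative.

```python
import random

def fcm(points, c=2, m=2.0, iters=50, seed=0):
    """Minimal standard fuzzy c-means on 1-D data.

    Alternates for `iters` passes:
      1) membership of point k in cluster i from inverse-distance ratios,
      2) centre i as the membership^m-weighted mean of the points.
    """
    rng = random.Random(seed)
    centres = rng.sample(points, c)
    n = len(points)
    for _ in range(iters):
        # 1) membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in points:
            d = [abs(x - v) + 1e-12 for v in centres]  # epsilon avoids /0
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # 2) centre update: fuzzily weighted mean of all points
        centres = [sum(u[k][i] ** m * points[k] for k in range(n))
                   / sum(u[k][i] ** m for k in range(n))
                   for i in range(c)]
    return centres, u
```

On well-separated data the centres settle near the cluster means, while the soft memberships (each row sums to 1) are what a mixture-of-experts gate could consume.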