English Abstract | Artificial neural networks are widely used in many applications because they provide excellent universal function approximation for multivariate input/output spaces on a training set. However, disadvantages such as their "black-box" character and their neglect of most existing prior information greatly restrain further advances of neural network techniques in applications. In real-world problems, prior information is valuable and should be incorporated into the practical learning of models. Making the best use of such information can overcome the limitations of the training data and improve the performance of the networks, especially in real-world domains where extensive prior information is available. Therefore, this thesis is specifically concerned with incorporating prior information into modeling, both to add transparency and to improve network performance. The main contributions of this thesis are as follows:

1. Radial Basis Function (RBF) Neural Networks with Ranking Prior. Compared with methods that treat ranking information as hard constraints, we handle ranking more reasonably by maximizing Normalized Discounted Cumulative Gain (NDCG), an Information Retrieval (IR) evaluation measure used to assess the performance of a ranking model. In addition, a connection between the weighted pairwise loss and NDCG is revealed: the weighted pairwise loss gives an upper bound on one minus NDCG. Numerical results on several existing benchmark regression problems confirm the benefits of the proposed approach; the improvement is larger when training data are scarce or noisy.

2. Generalized Constraint Neural Networks with Linear Priors. Existing methods for embedding priors vary with the type of prior information.
Therefore, this part concentrates on developing a more generic approach, called Generalized Constraint Neural Networks with Linear Priors (GCNN-LP), for handling a class of linear priors such as ranking lists, boundaries, and monotonicity. The key contributions of this part include:

- An explicit structural mode, which may add a higher degree of transparency than a pure algorithmic mode, is proposed for embedding linear priors.
- Soft constraints, which handle noisy prior information better than hard constraints, are investigated.
- A direct-elimination and least-squares approach, which produces better performance in both accuracy and computational cost o...
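To make the first contribution concrete, the sketch below computes NDCG for a list of predicted scores and a gain-weighted pairwise hinge loss of the kind that upper-bounds one minus NDCG. This is a minimal illustration, not the thesis's implementation: the exponential gain, logarithmic discount, and the choice of gain differences as pair weights are common conventions assumed here for concreteness.

```python
import numpy as np

def dcg(relevances):
    # Discounted cumulative gain: sum of (2^rel - 1) / log2(position + 1).
    positions = np.arange(1, len(relevances) + 1)
    return np.sum((2.0 ** relevances - 1.0) / np.log2(positions + 1))

def ndcg(scores, relevances):
    # Rank items by predicted score (descending), then normalize by the
    # DCG of the ideal (relevance-sorted) ordering.
    scores = np.asarray(scores, dtype=float)
    relevances = np.asarray(relevances, dtype=float)
    order = np.argsort(-scores)
    ideal_dcg = dcg(np.sort(relevances)[::-1])
    return dcg(relevances[order]) / ideal_dcg if ideal_dcg > 0 else 0.0

def weighted_pairwise_loss(scores, relevances):
    # Hinge loss over mis-ordered pairs, weighted by the gain difference of
    # each pair; losses of this form bound 1 - NDCG from above (up to scaling).
    loss = 0.0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if relevances[i] > relevances[j]:
                weight = 2.0 ** relevances[i] - 2.0 ** relevances[j]
                loss += weight * max(0.0, 1.0 - (scores[i] - scores[j]))
    return loss
```

A correctly ordered list attains NDCG = 1 and zero pairwise loss; swapping items lowers NDCG and makes the weighted pairwise loss positive, which is the behavior the bound relies on.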
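The direct-elimination least-squares idea in the second contribution can be sketched for an equality-type linear prior C w = d on a set of linear-in-the-parameters weights: eliminate the constrained directions by parametrizing the feasible set as w = w0 + N z (with N spanning the null space of C) and solve an unconstrained least-squares problem in z. This is a generic sketch under that assumption, not the thesis's exact algorithm; the function name and the SVD-based null-space construction are illustrative choices.

```python
import numpy as np

def constrained_lstsq(A, y, C, d):
    # Solve  min ||A w - y||^2  subject to  C w = d  by direct elimination:
    # 1) find a particular solution w0 of C w0 = d,
    # 2) build a basis N for the null space of C via the SVD,
    # 3) solve the reduced, unconstrained least-squares problem in z,
    # 4) recover w = w0 + N z, which satisfies the constraint by construction.
    w0, *_ = np.linalg.lstsq(C, d, rcond=None)           # particular solution
    _, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > 1e-10))
    N = Vt[rank:].T                                       # null-space basis
    if N.shape[1] == 0:
        return w0                                         # constraints fix w
    z, *_ = np.linalg.lstsq(A @ N, y - A @ w0, rcond=None)
    return w0 + N @ z
```

For example, minimizing ||w - y||^2 subject to w1 + w2 = 0 projects y onto the constraint line, so the returned w satisfies the prior exactly while staying as close as possible to the unconstrained fit.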