The study of parameter identifiability is an important means of enhancing model transparency and comprehensibility, and is a prerequisite for parameter estimation. Identifiability is an essential requirement in system modeling whenever the parameters to be estimated carry a physically interpretable meaning. The importance and utility of identifiability analysis are recognized in statistical learning theory, model structure learning, model selection, parameter estimation, learning algorithms, learning dynamics, and related areas. This thesis presents a systematic study of the identifiability of parametric models, drawing on machine learning, system identification, and neural computing.

According to the nature of the model, we categorize parametric models into two frameworks: (1) the time-invariant framework, within which identifiability theorems for nonlinear multiple-input multiple-output (MIMO) mappings and for parametric statistical models are derived; and (2) the time-variant framework, within which identifiability theorems for dynamic models and for stochastic process models are derived.

The main contributions of this thesis are as follows:

(a) For nonlinear mappings within the time-invariant framework, we view the models as static, noise-free, deterministic mappings from input space to output space, and derive an identifiability theorem for MIMO models. The resulting theorem includes the previous identifiability criteria for single-input single-output (SISO) and multiple-input single-output (MISO) models as special cases, thereby theoretically generalizing those criteria. Furthermore, this thesis presents a dual interpretation of the result that is both algebraically rigorous and geometrically intuitive. Compared with previous results, the advantage of the proposed method is that it not only checks model identifiability but also explicitly produces the observationally equivalent parameter vectors.
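The notion of observationally equivalent parameter vectors in (a) can be illustrated with a minimal sketch (a hypothetical toy example, not a model from this thesis): in the mapping y = a·b·x, any two parameter vectors with the same product a·b produce identical outputs on every input, so the model is identifiable only up to that product.

```python
import numpy as np

def model(theta, x):
    """Toy SISO mapping y = a * b * x with parameter vector theta = (a, b)."""
    a, b = theta
    return a * b * x

# Two distinct parameter vectors sharing the same product a * b = 6.
theta1 = (2.0, 3.0)
theta2 = (6.0, 1.0)

x = np.linspace(-5.0, 5.0, 101)
# The outputs coincide on every input, so theta1 and theta2 are
# observationally equivalent: the model is not globally identifiable.
assert np.allclose(model(theta1, x), model(theta2, x))
```

The set of all such equivalent vectors here is the hyperbola {(a·c, b/c) : c ≠ 0}, which is exactly the kind of equivalence class the proposed method makes explicit.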
(b) For parametric statistical models within the time-invariant framework, we view the parameterized family of statistical distributions geometrically as a statistical manifold, and use the Kullback-Leibler divergence from information theory to transform the identifiability problems of unconstrained and parameter-constrained models into unconstrained and constrained optimization problems, respectively. This is the first work that systematically studies the identifiability problem from the optimization-theory perspective.
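The optimization view in (b) can be sketched as follows (a hypothetical Gaussian example, assuming only the standard closed-form KL divergence between normal distributions): a parameter θ₀ is identifiable when KL(p_{θ₀} ‖ p_θ) attains zero only at θ = θ₀; finding a distinct minimizer with zero divergence exhibits an observationally equivalent parameter.

```python
import numpy as np

def kl_gaussian(mu0, s0, mu1, s1):
    """Closed-form KL divergence KL( N(mu0, s0^2) || N(mu1, s1^2) )."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2.0 * s1**2) - 0.5

# Over-parameterized mean mu = t1 + t2: the family N(t1 + t2, 1) is not
# identifiable, since any (t1, t2) with the same sum gives zero divergence.
theta0 = (1.0, 2.0)      # "true" parameters, mu = 3
theta_alt = (0.5, 2.5)   # different parameters, same mu = 3

kl = kl_gaussian(sum(theta0), 1.0, sum(theta_alt), 1.0)
assert np.isclose(kl, 0.0)  # zero KL => observationally equivalent parameters
```

In this formulation, checking identifiability amounts to asking whether the minimum of the divergence over θ ≠ θ₀ is strictly positive, which is precisely an (un)constrained optimization problem depending on whether the parameters are constrained.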