## Statistical Identification of the Divergence of the Kalman Filter and an Adaptive Filtering Method for Controlling the Divergence

**Zhang Jin-huai**

In some applications of the Kalman filter, it is found that the actual estimation errors greatly exceed the values that would be theoretically predicted by the error variance. The actual error may become unbounded even though the error variance in the Kalman-filter algorithm is vanishingly small. This phenomenon, called divergence, seriously affects the usefulness of the Kalman filter. This paper discusses the problem of divergence of the Kalman filter. First, we consider the predicted residuals (or innovations), i.e. $r_{k+1} = Z_{k+1} - \mathbb{E}[Z_{k+1} \mid Z^k]$, where $Z^k = \{Z_1, \ldots, Z_k\}$, $Z_i$ is the observation vector at time $t_i$, and $Z_i = H_i X_i + V_i$, where $X_i$ is the state vector and $V_i$ is the observation noise vector. Under certain conditions, we have proved that $B_{K,N} = R_{K,N}^{T} V^{-1} R_{K,N}$ is a $\chi^2$-random variable with $M$ degrees of freedom, where $M$ is the rank of the matrix $V$, the covariance matrix of $R_{K,N} = [\,r_{K+1}^{T} \cdots r_{K+N}^{T}\,]^{T}$. On this basis we construct a rule for testing the divergence of the Kalman filter. In order to eliminate divergence caused by inaccuracies in the modeling process, we discuss the theoretical properties of the predicted residuals and compare them with the practically computed results, and from this we form the adaptive filtering algorithm. Finally, an example is given to show the effectiveness of this method.
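The divergence test described above can be sketched in code: stack the predicted residuals into $R_{K,N}$, form the quadratic statistic $B_{K,N} = R_{K,N}^{T} V^{-1} R_{K,N}$, and compare it against a $\chi^2$ quantile with degrees of freedom equal to the rank of $V$. This is a minimal illustration, not the paper's implementation; the function and parameter names (`divergence_test`, `alpha`) and the significance-level convention are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import chi2

def divergence_test(residuals, V, alpha=0.05):
    """Chi-square divergence test on stacked predicted residuals.

    residuals : (N, m) array holding the innovations r_{K+1}, ..., r_{K+N}
                (N steps, m-dimensional observation).
    V         : covariance matrix of the stacked residual vector R_{K,N}
                (shape (N*m, N*m)).
    alpha     : significance level for the chi-square threshold
                (an illustrative choice, not from the paper).

    Returns True if the statistic B exceeds the chi-square quantile,
    i.e. the filter is flagged as diverging.
    """
    # Stack the residuals into the column vector R_{K,N}.
    R = np.asarray(residuals, dtype=float).reshape(-1, 1)
    # B = R^T V^{-1} R, the quadratic form from the abstract.
    B = float(R.T @ np.linalg.solve(V, R))
    # Degrees of freedom M = rank of the covariance matrix V.
    M = np.linalg.matrix_rank(V)
    threshold = chi2.ppf(1.0 - alpha, df=M)
    return B > threshold
```

For example, with $V = I_4$ and zero residuals the statistic is $B = 0$, well below the threshold, so no divergence is flagged; with large residuals the quadratic form exceeds the $\chi^2$ quantile and the test fires. In practice, a flagged test would trigger the adaptive adjustment of the filter described in the paper.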
