
We assume the following nonlinear system with continuous-time state dynamics and discrete observations

$$\dot{x}(t) = f(t, x) + w(t)$$

$$z(t_k) = h(x(t_k)) + v_k,$$

where $x$ is an $n$-dimensional state vector and $z$ is an $m$-dimensional observation vector. $w$ and $v$ are Gaussian noise terms corresponding to the system and observation noise, with covariances $Q$ and $R$, respectively. Selection of these noise matrices is key to the success of any filtering methodology. The estimation of $Q$ and $R$, a process known as adaptive filtering, is an active area of research (see [10] and the references therein). Since our goal in this manuscript is to examine the problem of state and parameter estimation in nonlinear systems with censored data, we simplify matters by tuning these error covariance matrices offline to obtain optimal filter performance.

The EKF is a sequential estimator that consists of a prediction step and an update step. We solve the following system

$$\dot{\hat{x}} = f(t, \hat{x})$$

$$\dot{P} = FP + PF^T + Q,$$

with initial conditions $\hat{x}_{k-1}$ and $P_{k-1}$ from $t_{k-1}$ to $t_k$ to compute $\hat{x}_k^-$ and $P_k^-$, our prior state and covariance matrix estimates. $F$ is the linearization of the system dynamics, namely $F = \partial f/\partial x\,|_{\hat{x}}$. We form the linearization of the observation operator, $H_k = \partial h/\partial x\,|_{\hat{x}_k^-}$, and then implement the standard Kalman update equations to correct our state and covariance estimates

$$\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - h(\hat{x}_k^-)\right)$$

$$K_k = P_k^- H_k^T\left(H_k P_k^- H_k^T + R\right)^{-1}.$$
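To make the prediction-update cycle concrete, the following is a minimal Python sketch (not the authors' code), assuming user-supplied callables `f(t, x)`, `h(x)` and Jacobians `F_jac(x)`, `H_jac(x)` returning numpy arrays; the covariance correction uses the standard form $P_k = (I - K_k H_k)P_k^-$, which is not displayed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ekf_predict(x_hat, P, t_prev, t_k, f, F_jac, Q):
    """Integrate x_hat' = f(t, x_hat) and P' = F P + P F^T + Q from t_{k-1} to t_k."""
    n = x_hat.size

    def rhs(t, y):
        x = y[:n]
        P_mat = y[n:].reshape(n, n)
        F = F_jac(x)
        dP = F @ P_mat + P_mat @ F.T + Q
        return np.concatenate([f(t, x), dP.ravel()])

    sol = solve_ivp(rhs, (t_prev, t_k), np.concatenate([x_hat, P.ravel()]))
    y_end = sol.y[:, -1]
    return y_end[:n], y_end[n:].reshape(n, n)      # prior estimates x_k^-, P_k^-

def ekf_update(x_prior, P_prior, z_k, h, H_jac, R):
    """Standard Kalman correction using the linearized observation operator H_k."""
    H = H_jac(x_prior)
    S = H @ P_prior @ H.T + R                      # innovation covariance H P H^T + R
    K = P_prior @ H.T @ np.linalg.inv(S)           # Kalman gain K_k
    x_post = x_prior + K @ (z_k - h(x_prior))      # state correction
    P_post = P_prior - K @ H @ P_prior             # standard covariance correction
    return x_post, P_post
```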

2.1. Filtering with censored data


In [24], the authors considered the problem of censored data in the Kalman filter framework. They derived a new set of equations for the filter which appropriately accounts for censored data during the Kalman update step. In this article, we extend these ideas to the nonlinear case, deriving an auxiliary set of update equations for the EKF to accurately handle censored data. The derivation included here follows very closely that in [24], though our assumption throughout is that our system of interest is nonlinear.

We use $U_k$ to denote the vector of all uncensored observations up to time $t_k$. Similarly, let $C_k$ denote the vector of censored observations, each of which lies in some possibly infinite interval $Z$. For simplicity, we will write $C_k \in Z$. The filter proceeds at every step by first estimating the state and error covariance ignoring any censored observations. We denote these naive estimates by $\hat{x}_{k(uc)}$ and $P_{k(uc)}$ and use $\hat{x}_k$ and $P_k$ to denote the final estimates, which are additionally conditioned on the censored observations lying in $Z$. To calculate the naive estimates, we use a modified gain term:

$$K_k = \begin{cases} 0 & \text{if } z_k \text{ is censored} \\ P_{k(uc)}^- H_k^T\left(H_k P_{k(uc)}^- H_k^T + R\right)^{-1} & \text{otherwise.} \end{cases}$$
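This case distinction can be sketched as follows (an illustrative sketch, not the authors' implementation): the gain is zero for a censored observation, and otherwise the usual EKF correction is applied.

```python
import numpy as np

def naive_update(x_prior, P_prior, z_k, censored, h, H_jac, R):
    """Naive (uncensored-only) update: zero gain when z_k is censored."""
    if censored:
        # K_k = 0: the naive estimates equal the predicted values.
        return x_prior.copy(), P_prior.copy()
    H = H_jac(x_prior)
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)           # "otherwise" branch of the modified gain
    x_uc = x_prior + K @ (z_k - h(x_prior))
    P_uc = P_prior - K @ H @ P_prior
    return x_uc, P_uc
```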

Therefore when $z_k$ is censored, we have $\hat{x}_{k(uc)} = \hat{x}_{k(uc)}^-$ and $P_{k(uc)} = P_{k(uc)}^-$, i.e., the predicted values are equal to the naive estimates.

In the case of a censored observation, we calculate the mean and approximate error covariance for the censored observation conditional on the uncensored data, namely

$$\hat{C}_{k(uc)} = h(\hat{x}_{k(uc)})$$

$$P^{C}_{k(uc)} = H_k P_{k(uc)} H_k^T + R.$$

We also compute

$$P^{Cx}_{k(uc)} = H_k P_{k(uc)},$$

the covariance between the censored observation and the state. Using multivariate Gaussian calculations (see Appendix), the final state and covariance update equations are defined as

$$\hat{x}_k = \hat{x}_{k(uc)} + K_k\left(\hat{C}_k - \hat{C}_{k(uc)}\right) \quad (1)$$
$$P_k = P_{k(uc)} - K_k\left(P^{C}_{k(uc)} - P^{C}_k\right)K_k^T, \quad (2)$$

where the new gain term is

$$K_k = P^{xC}_{k(uc)}\left(P^{C}_{k(uc)}\right)^{-1} \quad (3)$$

and $P^{xC}_{k(uc)} = \left(P^{Cx}_{k(uc)}\right)^T$.
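A compact sketch of the correction (1)-(3), assuming the naive quantities and the truncated moments $\hat{C}_k$, $P^{C}_k$ (discussed below) are already available; the variable names are illustrative.

```python
import numpy as np

def censored_correction(x_uc, P_uc, C_hat_uc, P_C_uc, P_Cx_uc, C_hat_trunc, P_C_trunc):
    """Apply the censoring-conditioned update (1)-(3)."""
    P_xC_uc = P_Cx_uc.T                            # P^{xC}_{k(uc)} = (P^{Cx}_{k(uc)})^T
    K = P_xC_uc @ np.linalg.inv(P_C_uc)            # gain term (3)
    x_k = x_uc + K @ (C_hat_trunc - C_hat_uc)      # state update (1)
    P_k = P_uc - K @ (P_C_uc - P_C_trunc) @ K.T    # covariance update (2)
    return x_k, P_k
```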

Note that $\hat{C}_k$ and $P^{C}_k$ are the mean and covariance of the censored observation given the uncensored observations and conditioned on the censored observation lying in $Z$. This computation is done using the tmvtnorm package in R, which computes the mean and covariance of truncated multivariate normal random variables. After the first censored observation, (1) and (2) are used as the state and covariance update equations. Additionally though, we must update $\hat{C}_{k(uc)}$, $P^{C}_{k(uc)}$, and $P^{Cx}_{k(uc)}$ at every step of the filter. This update is carried out in two ways, depending on whether or not $z_k$ is censored.
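As a rough, illustrative stand-in for the tmvtnorm computation (which is far more accurate and efficient), the truncated moments $\hat{C}_k$ and $P^{C}_k$ can be approximated by simple rejection sampling; the function below is an assumption for illustration, not the paper's method.

```python
import numpy as np

def truncated_mvn_moments(mean, cov, lower, upper, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the mean and covariance of N(mean, cov) restricted to [lower, upper]."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    inside = np.all((samples >= lower) & (samples <= upper), axis=1)
    kept = samples[inside]
    if kept.shape[0] < 2:
        raise ValueError("Too few samples fell inside the censoring region Z.")
    return kept.mean(axis=0), np.cov(kept, rowvar=False)   # estimates of C_hat_k and P^C_k
```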

In the censored case we first update the covariance $P^{Cx}_{k-1(uc)}$ to account for the change in state from $t_{k-1}$ to $t_k$. Momentarily abbreviating this covariance as $D$, we solve the system

$$\dot{D} = D F^T \quad (4)$$
$$\dot{\hat{x}} = f(\hat{x}) \quad (5)$$

from $t_{k-1}$ to $t_k$ with initial conditions $D(t_{k-1}) = P^{Cx}_{k-1(uc)}$ and $\hat{x}(t_{k-1}) = \hat{x}_{k-1}$. The result of this computation is that $D(t_k)$ is approximately the covariance between $C_{k-1}$ and $x_k$, conditional on only the uncensored observations (see Appendix for details). We call this covariance $P^{Cx}_{k-1,k(uc)}$ and compute the final updated covariance as

$$P^{Cx}_{k(uc)} = \begin{pmatrix} P^{Cx}_{k-1,k(uc)} \\ H_k P_{k(uc)} \end{pmatrix}.$$

Now, we update the naive covariance of the censored observations as

$$P^{C}_{k(uc)} = \begin{pmatrix} P^{C}_{k-1(uc)} & P^{Cx}_{k-1,k(uc)} H_k^T \\ H_k \left(P^{Cx}_{k-1,k(uc)}\right)^T & P^{z}_{k(uc)} \end{pmatrix},$$

where the covariance of the new observation is

$$P^{z}_{k(uc)} = H_k P_{k(uc)} H_k^T + R.$$

 

Similarly, updating the naive estimate for the censored observations gives

$$\hat{C}_{k(uc)} = \begin{pmatrix} \hat{C}_{k-1(uc)} \\ h(\hat{x}_{k(uc)}) \end{pmatrix}.$$
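A sketch of this censored-case bookkeeping (illustrative names; `f` and `F_jac` as in the earlier sketch): propagate the cross-covariance $D$ alongside the state via (4) and (5), then append the blocks for the new censored observation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def propagate_cross_cov(D_prev, x_prev, t_prev, t_k, f, F_jac):
    """Solve D' = D F^T and x_hat' = f(x_hat) from t_{k-1} to t_k; returns P^{Cx}_{k-1,k(uc)}."""
    p, n = D_prev.shape

    def rhs(t, y):
        x = y[:n]
        D = y[n:].reshape(p, n)
        return np.concatenate([f(t, x), (D @ F_jac(x).T).ravel()])

    sol = solve_ivp(rhs, (t_prev, t_k), np.concatenate([x_prev, D_prev.ravel()]))
    return sol.y[n:, -1].reshape(p, n)

def augment_censored(P_Cx_pred, P_C_prev, C_hat_prev, x_uc, P_uc, h, H, R):
    """Append the new censored observation's blocks to P^{Cx}, P^C and C_hat."""
    P_z = H @ P_uc @ H.T + R                                 # covariance of the new observation
    P_Cx = np.vstack([P_Cx_pred, H @ P_uc])                  # updated cross-covariance
    P_C = np.block([[P_C_prev, P_Cx_pred @ H.T],
                    [H @ P_Cx_pred.T, P_z]])                 # block covariance of C_k
    C_hat = np.concatenate([C_hat_prev, h(x_uc)])            # naive mean of C_k
    return C_hat, P_C, P_Cx
```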

In the case that $z_k$ is not censored, the calculations become slightly more complicated. As before, we solve (4) and (5) to compute $P^{Cx}_{k-1,k(uc)}$, which is equivalent to $P^{Cx}_{k(uc)}$ since $C_k = C_{k-1}$. This predictive covariance can be updated as

$$P^{Cx}_{k(uc)} = P^{Cx}_{k-1,k(uc)}\left(I - H_k^T K_k^T\right).$$

The naive expectation of the censored data vector can be updated according to

$$\hat{C}_{k(uc)} = \hat{C}_{k-1(uc)} + P^{Cx}_{k(uc)} H_k^T \left(P^{z}_{k(uc)}\right)^{-1}\left(z_k - h(\hat{x}_{k(uc)})\right),$$

which is analogous to the state update equation in the basic Kalman filter. Similarly, we use the equation

$$P^{C}_{k(uc)} = P^{C}_{k-1(uc)} - P^{Cx}_{k(uc)} H_k^T \left(P^{z}_{k(uc)}\right)^{-1} H_k \left(P^{Cx}_{k(uc)}\right)^T$$

to update the error covariance for the censored observations.
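A corresponding sketch for the uncensored case (illustrative names, not the authors' code), assuming the predicted cross-covariance $P^{Cx}_{k-1,k(uc)}$ enters the corrections in analogy with the standard Kalman update, and that `K` is the gain from the naive state update:

```python
import numpy as np

def update_censored_stats(P_Cx_pred, C_hat_prev, P_C_prev, z_k, x_uc, P_uc, h, H, R, K):
    """Update C_hat, P^C and P^{Cx} after assimilating an uncensored observation z_k."""
    n = P_uc.shape[0]
    P_z = H @ P_uc @ H.T + R                                 # P^z_{k(uc)}
    P_z_inv = np.linalg.inv(P_z)
    C_hat = C_hat_prev + P_Cx_pred @ H.T @ P_z_inv @ (z_k - h(x_uc))
    P_C = P_C_prev - P_Cx_pred @ H.T @ P_z_inv @ H @ P_Cx_pred.T
    P_Cx = P_Cx_pred @ (np.eye(n) - H.T @ K.T)               # updated cross-covariance
    return C_hat, P_C, P_Cx
```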

Of course, with an increasing number of censored observations the above algorithm can become computationally unwieldy due to the growing dimension of the covariance matrices. In [24], the authors reason that previous censored data can be forgotten over time, allowing for a reduction in the algorithm's computational complexity. In particular, the columns of the modified gain term $K_k$ defined in (3), where each column corresponds to a censored observation, naturally decay to 0 as more data are processed. Additionally, if a sufficient number of uncensored observations follow a censored measurement, the correlation between the censored observation and the state becomes very small. With these ideas in mind, we can approximate the state and covariance updates by removing past censored observations. This reduces the computational complexity of the algorithm and allows us to retain only a subset of the censored observations at any given time.
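As an illustration only (the decision of when to forget is left unspecified here and is an assumption, not the authors' criterion), forgetting the oldest censored observations amounts to dropping the corresponding blocks from the stored quantities:

```python
def prune_oldest(C_hat, P_C, P_Cx, n_drop):
    """Drop the first n_drop (oldest) censored components from C_hat, P^C and P^{Cx}."""
    return C_hat[n_drop:], P_C[n_drop:, n_drop:], P_Cx[n_drop:, :]
```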