We assume the following nonlinear system with continuous-time state dynamics and discrete observations
\dot{x}(t) = f(t, x) + w(t)
z(t_k) = h(x(t_k)) + v_k,
where x is an n-dimensional state vector and z is an m-dimensional observation vector. w and v are Gaussian noise terms corresponding to the system and observation noise, with covariances Q and R, respectively. Selection of these noise matrices is key to the success of any filtering methodology. The estimation of Q and R, a process known as adaptive filtering, is an active area of research (see [10] and the references within). Since our goal in this manuscript is to examine the problem of state and parameter estimation in nonlinear systems with censored data, we simplify matters by performing offline tuning of these error covariance matrices to obtain optimal filter performance.

The EKF is a sequential estimator that consists of a prediction and an update step. We solve the following system
\dot{\hat{x}} = f(t, \hat{x})
\dot{P} = PF^T + FP + Q,
with initial conditions \hat{x}_{k-1} and P_{k-1}, from t_{k-1} to t_k, to compute \hat{x}^-_k and P^-_k, our prior state and covariance matrix estimates. F is the Jacobian of the system dynamics evaluated at the current estimate, F = \partial f/\partial x\,(\hat{x}). We form the linearization of the observation operator, H_k = \partial h/\partial x\,(\hat{x}^-_k), and then implement the standard Kalman update equations to correct our state and covariance estimates
\hat{x}_k = \hat{x}^-_k + K_k \left( z_k - h(\hat{x}^-_k) \right)
P_k = \left( I - K_k H_k \right) P^-_k
K_k = P^-_k H_k^T \left( H_k P^-_k H_k^T + R \right)^{-1}.
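As a concrete illustration, the prediction-update cycle above can be sketched for a scalar system; the dynamics f, the observation operator h, and all numerical values below are illustrative choices, not taken from this paper.

```python
import numpy as np

# Hedged sketch of one EKF cycle in the scalar case: integrate the prediction
# ODEs with Euler steps, then apply the Kalman update. f, h, Q, R are
# illustrative stand-ins for the paper's system.
def f(t, x):  return -0.5 * x      # assumed mean-reverting dynamics
def F(x):     return -0.5          # Jacobian df/dx at x
def h(x):     return x             # assumed observation operator
def H(x):     return 1.0           # Jacobian dh/dx at x

Q, R, dt = 0.01, 0.04, 0.001

def ekf_predict(x_hat, P, t0, t1):
    # Solve x_hat' = f(t, x_hat) and P' = P F^T + F P + Q from t0 to t1
    t = t0
    while t < t1 - 1e-12:
        x_hat = x_hat + f(t, x_hat) * dt
        P = P + (2.0 * F(x_hat) * P + Q) * dt   # scalar form of PF^T + FP + Q
        t += dt
    return x_hat, P

def ekf_update(x_prior, P_prior, z):
    Hk = H(x_prior)
    K = P_prior * Hk / (Hk * P_prior * Hk + R)  # Kalman gain
    x_post = x_prior + K * (z - h(x_prior))
    P_post = (1.0 - K * Hk) * P_prior
    return x_post, P_post

x_prior, P_prior = ekf_predict(1.0, 0.5, 0.0, 0.1)
x_post, P_post = ekf_update(x_prior, P_prior, z=0.9)
```

The update pulls the estimate toward the observation and contracts the error covariance, as expected.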
2.1. Filtering with censored data
J. Arthur et al. / Applied Mathematics and Computation 316 (2018) 155–166
In [24], the authors studied censored data in the Kalman filter framework and derived a new set of equations for the filter which appropriately accounts for censored data during the Kalman update step. In this article, we extend these ideas to the nonlinear case, deriving an auxiliary set of update equations for the EKF to accurately handle censored data. The derivation included here closely follows that in [24], though our assumption throughout is that our system of interest is nonlinear.
We use U_k to denote the vector of all uncensored observations up to time t_k. Similarly, let C_k denote the vector of censored observations, each of which lies in some possibly infinite interval Z. For simplicity, we write C_k \in Z. The filter proceeds at every step by first estimating the state and error covariance ignoring any censored observations. We denote these naive estimates by \hat{x}_{k(uc)} and P_{k(uc)}, and use \hat{x}_k and P_k to denote the final estimates, which are additionally conditioned on the censored observations lying in Z. To calculate the naive estimates, we use a modified gain term:
K_{k(uc)} = \begin{cases} 0 & \text{if } z_k \text{ is censored}, \\ P^-_{k(uc)} H_k^T \left( H_k P^-_{k(uc)} H_k^T + R \right)^{-1} & \text{otherwise}. \end{cases}
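A minimal sketch of this modified naive update, assuming a linear observation operator h(x) = Hx and illustrative matrices: when z_k is censored the gain is zero, so the naive estimates simply fall back to the prior.

```python
import numpy as np

# Sketch of the modified naive update. A linear observation operator is
# assumed for simplicity; all matrices below are illustrative.
def naive_update(x_prior, P_prior, z, H, R, censored):
    if censored:
        K = np.zeros((x_prior.size, R.shape[0]))   # K_{k(uc)} = 0
    else:
        S = H @ P_prior @ H.T + R                  # innovation covariance
        K = P_prior @ H.T @ np.linalg.inv(S)
    x_uc = x_prior + K @ (z - H @ x_prior)
    P_uc = P_prior - K @ H @ P_prior
    return x_uc, P_uc

x_prior = np.array([1.0, 0.0])
P_prior = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])

# Censored: naive estimates equal the prior exactly
x_c, P_c = naive_update(x_prior, P_prior, np.array([0.0]), H, R, censored=True)
# Uncensored: the usual Kalman update is applied
x_u, P_u = naive_update(x_prior, P_prior, np.array([0.8]), H, R, censored=False)
```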
Therefore, when z_k is censored, we have \hat{x}^-_{k(uc)} = \hat{x}_{k(uc)} and P^-_{k(uc)} = P_{k(uc)}, i.e., the predicted values are equal to the naive estimates. In the case of a censored observation, we calculate the mean and approximate error covariance for the censored observation conditional on the uncensored data, namely
\hat{C}_{k(uc)} = h(\hat{x}_{k(uc)})
P^{C}_{k(uc)} = H_k P_{k(uc)} H_k^T + R.
We also compute
P^{Cx}_{k(uc)} = H_k P_{k(uc)},
the covariance between the censored observation and the state. Using multivariate Gaussian calculations (see Appendix), the final state and covariance update equations are defined as
\hat{x}_k = \hat{x}_{k(uc)} + K_k \left( \hat{C}_k - \hat{C}_{k(uc)} \right)  (1)
P_k = P_{k(uc)} - K_k \left( P^{C}_{k(uc)} - P^{C}_k \right) K_k^T,  (2)
where the new gain term is
K_k = P^{xC}_{k(uc)} \left( P^{C}_{k(uc)} \right)^{-1}  (3)
and P^{xC}_{k(uc)} = \left( P^{Cx}_{k(uc)} \right)^T.
Note that \hat{C}_k and P^{C}_k are the mean and covariance of the censored observation given the uncensored observations and conditioned on the censored observation lying in Z. This computation is done using the tmvtnorm package in R, which computes the mean and covariance of truncated multivariate normal random variables. After the first censored observation, Eqs. (1) and (2) are used as the state and covariance update equations. Additionally, though, we must update \hat{C}_{k(uc)}, P^{C}_{k(uc)}, and P^{Cx}_{k(uc)} at every step of the filter. This update is carried out in two ways, depending on whether or not z_k is censored.
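In the scalar case, the truncated moments and update equations (1)–(3) can be sketched as follows; SciPy's truncnorm stands in for the tmvtnorm computation, and all numerical values are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

# Scalar sketch of update equations (1)-(3). The truncated-normal mean and
# variance play the role of the tmvtnorm computation; all numbers are
# illustrative, not from the paper.
C_hat_uc = 2.0        # naive mean of the censored observation, h(x_hat_{k(uc)})
P_C_uc = 0.25         # naive variance of the censored observation
P_Cx_uc = 0.15        # covariance between censored observation and state
x_hat_uc, P_uc = 1.0, 0.2

# Assumed censoring interval Z = (-inf, 1.5]: we only know z fell below 1.5
a, b = -np.inf, 1.5
alpha = (a - C_hat_uc) / np.sqrt(P_C_uc)   # standardized bounds for truncnorm
beta = (b - C_hat_uc) / np.sqrt(P_C_uc)
C_hat = truncnorm.mean(alpha, beta, loc=C_hat_uc, scale=np.sqrt(P_C_uc))
P_C = truncnorm.var(alpha, beta, loc=C_hat_uc, scale=np.sqrt(P_C_uc))

K = P_Cx_uc / P_C_uc                          # Eq. (3), scalar case
x_hat = x_hat_uc + K * (C_hat - C_hat_uc)     # Eq. (1)
P = P_uc - K * (P_C_uc - P_C) * K             # Eq. (2)
```

Conditioning on the censored observation lying below the threshold pulls the state estimate down and shrinks its variance, as the truncated moments are smaller than the naive ones.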
In the censored case, we first update the covariance P^{Cx}_{k-1(uc)} to account for the change in state from t_{k-1} to t_k. Momentarily abbreviating this covariance as D, we solve the system
\dot{D} = D F^T  (4)
\dot{\hat{x}} = f(\hat{x})  (5)
from t_{k-1} to t_k, with initial conditions D(t_{k-1}) = P^{Cx}_{k-1(uc)} and \hat{x}(t_{k-1}) = \hat{x}_{k-1}. The result of this computation is that D(t_k) is approximately the covariance between C_{k-1} and x_k, conditional on only the uncensored observations (see Appendix for details). We call this covariance P^{Cx}_{k-1,k(uc)} and compute the final updated covariance as

P^{Cx}_{k(uc)} = \begin{bmatrix} P^{Cx}_{k-1,k(uc)} \\ H_k P_{k(uc)} \end{bmatrix}.
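A sketch of this propagation and stacking step, with an assumed Jacobian and illustrative matrices: Eq. (4) is integrated with Euler steps alongside Eq. (5), and a new row for the current observation is appended.

```python
import numpy as np

# Sketch of Eqs. (4)-(5) plus the stacking step. The dynamics f, the
# Jacobian F, and every matrix below are illustrative stand-ins.
def f(x):  return -0.5 * x                     # assumed dynamics
def F(x):  return np.array([[-0.5, 0.0],
                            [0.0, -0.5]])      # assumed Jacobian of f

dt = 0.001

def propagate(D, x_hat, t0, t1):
    # Euler integration of D' = D F^T (Eq. 4) and x_hat' = f(x_hat) (Eq. 5)
    t = t0
    while t < t1 - 1e-12:
        D = D + D @ F(x_hat).T * dt
        x_hat = x_hat + f(x_hat) * dt
        t += dt
    return D, x_hat                            # D(t_k) ~ P^{Cx}_{k-1,k(uc)}

D0 = np.array([[0.1, 0.05]])                   # P^{Cx}_{k-1(uc)}, one censored obs
x0 = np.array([1.0, 0.5])
D, x_hat = propagate(D0, x0, 0.0, 0.1)

# Stack the propagated covariance with the new observation's row
H_k = np.array([[1.0, 0.0]])
P_uc = np.eye(2) * 0.2                         # illustrative P_{k(uc)}
P_Cx_uc = np.vstack([D, H_k @ P_uc])           # augmented P^{Cx}_{k(uc)}
```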
Now, we update the naive covariance of the censored observations as
P^{C}_{k(uc)} = \begin{bmatrix} P^{C}_{k-1(uc)} & P^{Cx}_{k-1,k(uc)} H_k^T \\ H_k \left( P^{Cx}_{k-1,k(uc)} \right)^T & P^{z}_{k(uc)} \end{bmatrix},
where the covariance of the new observation is
P^{z}_{k(uc)} = H_k P_{k(uc)} H_k^T + R.
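The block structure of this covariance update can be written out directly; the matrices below are illustrative placeholders.

```python
import numpy as np

# Sketch of the 2x2 block update for the naive censored-observation
# covariance: the old covariance, the cross terms, and the new observation's
# variance P^z_{k(uc)}. All matrices are illustrative.
P_C_old = np.array([[0.05]])                 # P^C_{k-1(uc)}
P_Cx_pred = np.array([[0.08, 0.03]])         # P^{Cx}_{k-1,k(uc)}
H_k = np.array([[1.0, 0.0]])
P_uc = np.eye(2) * 0.2                       # illustrative P_{k(uc)}
R = np.array([[0.04]])

P_z = H_k @ P_uc @ H_k.T + R                 # covariance of the new observation
P_C_new = np.block([
    [P_C_old,            P_Cx_pred @ H_k.T],
    [H_k @ P_Cx_pred.T,  P_z],
])
```

The result is symmetric by construction, with the new observation occupying the last row and column.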
Similarly, updating the naive estimate for the censored observations gives

\hat{C}_{k(uc)} = \begin{bmatrix} \hat{C}_{k-1(uc)} \\ h(\hat{x}_{k(uc)}) \end{bmatrix}.
In the case that z_k is not censored, the calculations become slightly more complicated. We first compute P^{Cx-}_{k-1,k(uc)}, which is equivalent to P^{Cx-}_{k(uc)} since C_k = C_{k-1}. This predictive covariance can be updated as

P^{Cx}_{k(uc)} = P^{Cx-}_{k(uc)} \left( I - H_k^T K_k^T \right).
The naive expectation of the censored data vector can be updated according to
\hat{C}_{k(uc)} = \hat{C}_{k-1(uc)} + P^{Cx-}_{k(uc)} H_k^T \left( P^{z}_{k(uc)} \right)^{-1} \left( z_k - h(\hat{x}^-_{k(uc)}) \right),
which is analogous to the state update equation in the basic Kalman filter. Similarly, we use the equation
P^{C}_{k(uc)} = P^{C}_{k-1(uc)} - P^{Cx-}_{k(uc)} H_k^T \left( P^{z}_{k(uc)} \right)^{-1} H_k \left( P^{Cx-}_{k(uc)} \right)^T
to update the error covariance for the censored observations.
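The uncensored-case updates above can be sketched for a single past censored observation; a linear observation operator and illustrative matrices are assumed.

```python
import numpy as np

# Sketch of the uncensored-case updates: the censored-observation moments are
# corrected by the new measurement z_k just as the state is in the basic
# Kalman filter. A linear h and illustrative values are assumed.
x_prior = np.array([1.0, 0.0])
H_k = np.array([[1.0, 0.0]])
R = np.array([[0.04]])
P_uc = np.eye(2) * 0.2                        # naive state covariance
C_hat_old = np.array([1.2])                   # C_hat_{k-1(uc)}
P_C_old = np.array([[0.05]])                  # P^C_{k-1(uc)}
P_Cx_pred = np.array([[0.08, 0.03]])          # predictive P^{Cx-}_{k(uc)}
z_k = np.array([0.9])

P_z = H_k @ P_uc @ H_k.T + R
gain = P_Cx_pred @ H_k.T @ np.linalg.inv(P_z)
C_hat_new = C_hat_old + gain @ (z_k - H_k @ x_prior)   # naive mean update
P_C_new = P_C_old - gain @ H_k @ P_Cx_pred.T           # naive covariance update
# The cross covariance itself shrinks by the information in z_k:
K_k = P_uc @ H_k.T @ np.linalg.inv(P_z)
P_Cx_new = P_Cx_pred @ (np.eye(2) - H_k.T @ K_k.T)
```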
Of course, with an increasing number of censored data the above algorithm can become computationally unwieldy due to the increasing dimension of the covariance matrices. In [24], the authors reason that previous censored data can be forgotten over time, allowing for a reduction in the algorithm's computational complexity. In particular, the columns of the modified gain term K_k defined in (3), where each column corresponds to a censored observation, naturally decay to 0 as more data are processed. Additionally, if there is a sufficient number of uncensored observations after a censored measurement, the correlation between the censored observation and the state becomes very small. With these ideas in mind, we can introduce approximations to the state and covariance update by removing past censored observations. This reduces the computational complexity of the algorithm and allows us to use only subsets of the censored observations at any given time.
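One simple (hypothetical) way to implement this forgetting is to drop a past censored observation once its column of the gain K_k, and hence its influence on the update, has decayed below a tolerance; the threshold and matrices below are illustrative.

```python
import numpy as np

# Illustrative pruning of the censored-observation bookkeeping: observation j
# is forgotten when its gain column has decayed to (near) zero. The tolerance
# and all matrices are hypothetical choices, not from the paper.
def prune_censored(C_hat, P_C, P_Cx, K, tol=1e-3):
    # Keep observation j only if its gain column still carries weight
    keep = [j for j in range(K.shape[1]) if np.linalg.norm(K[:, j]) >= tol]
    return (C_hat[keep], P_C[np.ix_(keep, keep)], P_Cx[keep, :], K[:, keep])

C_hat = np.array([1.2, 0.7, 2.1])             # three past censored observations
P_C = np.diag([0.05, 0.04, 0.06])
P_Cx = np.array([[0.08, 0.03],
                 [1e-5, 2e-5],
                 [0.02, 0.01]])
K = np.array([[0.3, 1e-6, 0.1],
              [0.2, 2e-6, 0.05]])             # middle column has decayed to ~0

C_hat, P_C, P_Cx, K = prune_censored(C_hat, P_C, P_Cx, K)
```

Pruning shrinks every censored-data matrix consistently, so subsequent updates operate on smaller covariance blocks.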