Research in nonlinear estimation in space is vital for accurately predicting and controlling spacecraft trajectories in complex environments. It improves navigation, collision avoidance, and mission success rates by effectively handling uncertainties and nonlinear dynamics. This research enhances the precision of satellite operations, interplanetary missions, and space exploration, ensuring safer and more efficient space endeavors.
Estimation in Astrodynamics
The Polynomial Update
We present a systematic generalization of the linear update structure associated with the extended Kalman filter for high-order polynomial estimation of nonlinear dynamical systems. The minimum mean-square error criterion is used as the cost function to determine the optimal polynomial update during the estimation process. The high-order series representation is implemented effectively using differential algebra techniques. Numerical examples show that the
proposed algorithm, named the high-order differential algebra Kalman filter, provides superior robustness and/or mean-square error performance as compared to linear estimators under the conditions considered.
The polynomial update uses high-order powers of the measurements to improve estimation accuracy while retaining a linear update structure. The polynomial update is a better estimator than any linear counterpart, such as the UKF or the EKF.
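The idea of a polynomial update can be sketched numerically: augment the measurement with its powers and apply an LMMSE-style gain to the augmented vector. The scalar arctan setup, sample sizes, and noise levels below are illustrative assumptions, with sample moments standing in for the paper's differential algebra machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar setup (not the paper's configuration): prior x ~ N(0, 1),
# nonlinear measurement y = arctan(x) + noise, moments taken from samples.
n = 100_000
x = rng.normal(0.0, 1.0, n)
y = np.arctan(x) + rng.normal(0.0, 0.1, n)

order = 3
# Augmented measurement vector Y = [y, y^2, y^3]: the update stays linear
# in Y while being a polynomial in the raw measurement y.
Y = np.vstack([y**k for k in range(1, order + 1)])

mY = Y.mean(axis=1)
Pyy = np.cov(Y)                                     # (order x order) covariance
Pxy = np.array([np.cov(x, Yk)[0, 1] for Yk in Y])   # cross-covariances with x

K = Pxy @ np.linalg.inv(Pyy)   # gain of the polynomial (still linear) update

def poly_update(y_meas):
    """MMSE-style estimate, linear in the powers of the measurement."""
    Yv = np.array([y_meas**k for k in range(1, order + 1)])
    return x.mean() + K @ (Yv - mY)
```

Because the gain acts on a superset of the linear features, this estimator cannot do worse than the plain linear update in mean-square error.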
The Polynomial Update Can Estimate Kurtosis
The left-side columns represent the position error in the three components, whereas the right-side columns show the velocity errors. The error is evaluated as the difference between the estimated state and the true state. The two figures show the results for a duration of two orbits, with a total of 24 equally spaced observations per orbit. The initial uncertainty is beyond the scale depicted in the figure, which shows the convergence of the error and the error covariance (shown as 3σ values). A Monte Carlo analysis with 100 runs is performed, and it can be noted that the filter's predicted covariance (blue continuous line) is consistent with the sample covariance from the Monte Carlo runs (blue dashed line). Moreover, the figures depict the ability of HODAKF-2-N to correctly estimate high-order moments. In each graph, the continuous magenta line represents the estimated fourth central moment, whereas the dashed magenta line is the fourth central moment evaluated from the Monte Carlo analysis, averaged over all runs. It can be noted that HODAKF-2-2 provides a consistent estimate.
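The sample-based fourth central moment used as the Monte Carlo reference can be sketched as follows; the run count, time-step count, and error spread here are illustrative assumptions, not the values of the figure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the Monte Carlo errors: 100 runs of a scalar
# error component over 500 time steps (consistent, zero-mean by design).
runs = rng.normal(0.0, 2.0, size=(100, 500))   # (runs, time steps)

# Fourth central moment at each time step, averaged across runs, mirroring
# how the dashed magenta reference line of the figure is built.
mean_t = runs.mean(axis=0)
mu4 = ((runs - mean_t) ** 4).mean(axis=0)

# For a Gaussian error, mu4 = 3 * sigma^4; a consistent filter's predicted
# fourth moment should track this sample value.
sigma2 = runs.var(axis=0)
kurtosis_excess = mu4 / sigma2**2 - 3.0
```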
The Polynomial Update Estimator Can Bend to Better Follow the True Posterior Distribution
Both the EKF and UKF are linear estimators; therefore, their representations in the figure are straight lines, black and magenta, respectively. The slope of each line is the Kalman gain, where different approximations of the moments result in different values of the Kalman gain and hence different slopes in the figure. The EKF is a local approximation; hence its line is aligned with the slope of the optimal MMSE at the prior mean and has the largest deviation from the optimal MMSE at the edges of the distribution. The brown line in the figure shows the optimal linear MMSE (LMMSE) evaluated directly from the true points. The sigma points employed in the UKF capture some global information; as such, the UKF provides an approximation closer to the LMMSE than the EKF. The DA-based polynomial estimator is reported at three different orders: a linear estimator, a cubic one, and a fifth-order one, all employing a third-order Taylor series approximation of the arctan function. The estimates from HOPUF-1-3 form a straight line, depicted in cyan, with a different slope from the EKF and the UKF; HOPUF-1-3 is almost completely superimposed on, and indistinguishable from, the optimal LMMSE. Therefore, the higher-order polynomial representation of the measurement equation leads to a better approximation of the true LMMSE (when compared to the EKF and UKF) and improves accuracy, as shown by the root mean square error (RMSE) analysis. The DA estimator uses Taylor polynomial approximations to represent the system function, and the moments of the pdf are calculated accordingly. In this example, a third-order Taylor series approximation is used; therefore, the Kalman gain evaluated by the LMMSE differs from that of the EKF, which uses linearization (the Jacobian), and from that of the UKF, which applies the unscented transformation.
The histogram shows the RMSE. The first three bars in the figure show that a more accurate representation of the measurement equation corresponds to higher accuracy and a lower error level of the estimator, while still having a linear dependence on the measurement outcome; HOPUF-1-3 shows lower estimation error than the EKF and the UKF, indicating that it is a closer approximation to the true LMMSE estimator. In the left figure, the cubic MMSE (HOPUF-3-3) and the fifth-order MMSE (HOPUF-5-3) are also reported, with red and green points, respectively. Both work with a third-order polynomial truncation of the Taylor series representation of the arctan function. These estimators are polynomial functions of the measurement and, therefore, can better follow the true (nonlinear) behavior of the MMSE. The estimator functions are curved and follow the tails of the distribution. The improvement in accuracy can be appreciated by looking at the RMSE values: the higher the order of the update, the lower the RMSE, which leads to a more accurate estimate.
Double Polynomial Estimator to Estimate the Posterior Covariance
A simple scalar problem is presented here to highlight the improvements of the new filtering technique in estimating the conditional covariance. It was proven before that high-order polynomial estimators are a better approximation of the true MMSE. However, the presented example underlines the matching between state and covariance for each different realization of the measurement. The figure shows the true joint distribution of x and y represented using points (gray dots in the figure). The figure compares SACE-c-η-μ and SACEMM-c-η-μ with a few common estimators: the EKF, the UKF, the GSF, the iterated extended Kalman filter (IEKF), the PF, and the high-order EKF (DAHO-k). The first row of graphs (EKF, UKF, DAHO-3) contains linear estimators; therefore, their representation on the (x, y) plane is a straight line, shown in red. The slope of the red line is the Kalman gain. The different slopes shown by the different linear estimators are due to the different approximations each linear filter employs to evaluate the moments. The different evaluations of the moments lead to different values of the estimated variance. The green lines share the same slope as the corresponding red line; they are just translated left (and right) by 3σ. These linear filters estimate the same uncertainty level regardless of the measurement outcome, and the predicted covariance value is the mean among all the possible different realizations. The second row of graphs shows nonlinear estimators. The GSF, PF, and IEKF can bend their estimator function, but they are faulty in the covariance estimation. Our developed SACE filter fits the covariance level for each measurement, having the green lines expand or narrow according to the spread of the posterior pdf. The estimator bends following the mean of the posterior, and the covariance bends as well according to the measurement spread.
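The "double polynomial" idea of pairing a mean estimator with a measurement-dependent covariance estimator can be sketched with two least-squares polynomial fits, one on the state and one on the squared residuals. The joint model, noise levels, and polynomial degree below are illustrative assumptions, and ordinary regression stands in for the SACE differential algebra construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scalar joint distribution (the gray dots of the figure).
n = 50_000
x = rng.normal(0.0, 1.0, n)
y = x + 0.3 * x**3 + rng.normal(0.0, 0.2, n)

# First polynomial: estimator of the posterior mean as a function of y.
c_mean = np.polyfit(y, x, 5)
x_hat = np.polyval(c_mean, y)

# Second polynomial: estimator of the conditional variance, fitted on the
# squared residuals, so the +/-3 sigma band widens or narrows with y.
c_var = np.polyfit(y, (x - x_hat) ** 2, 5)

def band(y_meas):
    """Measurement-dependent 3-sigma band around the polynomial mean."""
    m = np.polyval(c_mean, y_meas)
    s = np.sqrt(max(np.polyval(c_var, y_meas), 0.0))
    return m - 3.0 * s, m + 3.0 * s
```

Both the estimate and the band bend with the measurement, unlike a linear filter whose covariance is the same for every measurement outcome.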
Expectation Maximization Filtering
The nonlinear filtering problem plays a fundamental role in multiple space-related applications. A new filtering technique is presented that combines Monte Carlo time propagation with a Gaussian mixture model measurement update. Differential algebra (DA) techniques are used as a tool to reduce the computational effort required by particle filters. Moreover, the use of the expectation maximization (EM) optimization algorithm leads to a good approximation of the probability density functions. The performance of the new method is assessed on the nonlinear orbit determination problem, for the challenging case of low observation frequency, and in the restricted three-body dynamics.
Expectation Maximization Creates Gaussian Mixture Models
The figure shows the EM clustering at the end of the first propagation. The sample distribution shows how the predicted PDF has the so-called banana shape, characteristic of orbit determination problems with long propagation times between measurements. The clustering algorithm with 5 Gaussians better approximates the shape of the distribution, especially near the mean and at the tails. Using only 3 Gaussians, the algorithm does not match the curvature of the density function.
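The EM fit of a Gaussian mixture to propagated samples can be sketched as follows; this is a plain 1-D EM loop on a synthetic multimodal sample, a stand-in for the 6-D orbital-state samples and the K-means initialization of the actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1-D stand-in for the propagated samples (the real problem
# is the 6-D orbital state with a banana-shaped PDF).
s = np.concatenate([rng.normal(-2.0, 0.5, 4000),
                    rng.normal(0.0, 1.0, 4000),
                    rng.normal(3.0, 0.8, 2000)])

def em_gmm(samples, n_comp, iters=200):
    """Plain EM for a 1-D Gaussian mixture model."""
    w = np.full(n_comp, 1.0 / n_comp)
    mu = rng.choice(samples, n_comp, replace=False)   # random init from data
    var = np.full(n_comp, samples.var())
    for _ in range(iters):
        # E-step: responsibility of each kernel for each sample.
        d2 = (samples[:, None] - mu[None, :]) ** 2
        pdf = np.exp(-0.5 * d2 / var) / np.sqrt(2.0 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(samples)
        mu = (r * samples[:, None]).sum(axis=0) / nk
        var = (r * (samples[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)    # guard against kernel collapse
    return w, mu, var

w, mu, var = em_gmm(s, 5)
```

After every M-step the mixture mean equals the sample mean exactly, which is a quick sanity check on any EM implementation.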
Expectation Maximization Differential Algebra Filter for Orbit Determination with Low-Frequency Measurements
The figure shows the performance of EMDA2-3 with a Monte Carlo analysis of 100 runs. The figures show the consistency of the position components, left columns, and velocity components, right columns, of the spacecraft state vector in a simulation with a time duration of 12 orbits and 3 equally spaced observations per orbit. The filter converges and correctly predicts the estimation uncertainties. The continuous blue lines indicate the standard deviation of the estimation error as predicted by the filter, expressed as 3σ values, while the dashed blue lines represent the actual standard deviations of the errors calculated directly from the Monte Carlo samples, again shown as 3σ values. The consistency of the filter is assessed by the overlapping of the two lines. Lastly, the black line shows the mean of the samples: the expected value of the error is very close to zero, making EMDAc-N an unbiased filter, matching the theoretical results expected for minimum mean square error (MMSE) estimators.
Comparison of Accuracy in the Two-Body Problem
The figure is divided into two parts. It shows the standard deviation profiles for the spacecraft position, top row, and velocity, bottom row, in a 6-orbit simulation with 3 observations per orbit. Each graph has two sets of lines: the dashed lines refer to the standard deviations calculated from the Monte Carlo samples (100 runs) at each time step, while the continuous lines are the predicted standard deviations estimated by each filter. These values are derived from the diagonal terms of the updated covariance matrix of each filter. The proposed filter is the most accurate and consistent one.
Circular Restricted Three-Body Problem Application
The performance of EMDAc-N is compared against the other filters, as done in the previous problem. The figure reports the standard deviation analysis for position, left, and velocity, right, of the spacecraft. EMDA2-3 has been compared to the UKF, the BPF, and DAEnKF-2. DAHO-1 and DAEnKF-1 (which reduce to the EKF) diverge and are therefore not reported. From the figure, it can be noted that the UKF diverges and the effective standard deviation is larger than the filter's own prediction. DAHO-2 behaves analogously to the UKF and is not reported in the figure. The BPF diverges and the orange curves start growing after the first step. Because it follows each single particle, the BPF follows both paths from the initial PDF. Therefore, in the first steps, the corresponding estimate is in conflict over which path of the bifurcation to take, and the filter fails. The remaining two filters, DAEnKF-2 and EMDA2-3, converge and reach steady state with consistency. These are the filters that approximate the PDFs with clustering, where the measurement likelihood better weights, and picks, which of the two modes the true state belongs to. However, as in the previous example, EMDA2-3 shows an improvement in accuracy and its precision levels are superior to its single-Gaussian counterpart. Indeed, the blue lines lie below the red lines during the whole simulation, both for position and velocity. EMDA2-5 achieves convergence with consistency, and it is slightly more accurate than EMDA2-3. However, it is not reported in the figure for clarity purposes.
Thanks to DA, Computational Time is Reduced
A computational time analysis is performed on the L2 orbit determination problem in order to underline the benefits of using DA evaluation techniques for the propagation of an ensemble of points. The time analysis studies the average computational time requested by the processor to perform one single run of the Monte Carlo analysis. The figure gives a visual representation of computational time through a bar graph. The figure proves that DA reduces the computational time of the filtering algorithm. The three filters based on DA are the fastest, while the BPF is the slowest. DA uses the polynomial map of deviations to propagate all the points with a single DA integration and n evaluations, while classic particle-based filtering techniques, such as the BPF and the MCKF, perform n propagations. Therefore, the main advantage of leveraging DA is appreciated by comparing the DAEnKF-2 computational time with that of the MCKF. The two filters have the same accuracy and robustness levels, but DAEnKF-2 is considerably faster. It is important to re-emphasize that DAEnKF-c is equivalent to EMDAc-1: by selecting N = 1, the filter skips the clustering (K-means and EM) part of its algorithm. Therefore, in the figure, DAEnKF-2 is the fastest algorithm, and the computational time increases as we increase the number of Gaussian kernels. Increasing the number of kernels produces a more precise estimate at the cost of a heavier computational burden. As expected, as the number of particles n becomes larger, the computational effort increases as well.
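The "one map, n cheap evaluations" idea can be sketched on toy dynamics with a known flow map; a least-squares polynomial fit stands in for the true DA Taylor expansion, and the dynamics, reference point, and deviation spread are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy dynamics with a known flow map: dx/dt = -x^2  =>  x(t) = x0 / (1 + t*x0).
t = 0.5
flow = lambda x0: x0 / (1.0 + t * x0)

# "DA-like" step, done ONCE: build a polynomial map of the flow in the
# deviation dx around the reference x_ref (here via a least-squares fit,
# standing in for a true differential-algebra Taylor expansion).
x_ref, order = 1.0, 6
dx_grid = np.linspace(-0.5, 0.5, 25)
coeffs = np.polyfit(dx_grid, flow(x_ref + dx_grid), order)

# n cheap polynomial evaluations replace n numerical integrations.
deviations = rng.normal(0.0, 0.05, 10_000)
ensemble = np.polyval(coeffs, deviations)

max_err = np.abs(ensemble - flow(x_ref + deviations)).max()
```

Each ensemble member costs one polynomial evaluation instead of one integration, which is where the bar-graph speedup of the DA-based filters comes from.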
PDF Propagation via Map Inversion and MAP Estimation
Differential Algebra applies the Measurement Update directly onto the PDF
The figure shows DAMAP's approximation of the posterior distribution for different truncation orders of the Taylor polynomial expansion, achieved by applying the measurement update algorithm directly on the PDF analytical expression. The polynomial truncation order is added to the name of the filter, DAMAP-c. As an example, DAMAP-2 indicates that the polynomial approximation of the state is second order, and consequently the approximation of the log-probability function is of order 4. As expected, DAMAP-1 behaves exactly like the EKF, being based on the simple linearization of the measurement function. As the order increases, the approximation of the true PDF improves. DAMAP-8 achieves an excellent representation of the posterior distribution.
The PDF normalization factor is estimated with importance sampling.
The figure shows the results of applying importance sampling to the PDF obtained from DAMAP-8. After scaling, the peak of the approximated mode differs by only 1.2% from the correct value, proving that the distribution has been correctly normalized. The importance distribution used is Gaussian, with mean given by the DAMAP-8 estimate and covariance equal to that of the prior distribution, propagated using DA map inversion.
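The importance-sampling estimate of a normalization constant can be sketched in one line: average the ratio of the unnormalized density to the proposal density. The 1-D target below has a known constant built in so the estimate can be checked; it is an illustrative stand-in for the DAMAP-8 polynomial PDF.

```python
import numpy as np

rng = np.random.default_rng(5)

# Unnormalized 1-D "posterior": a unit Gaussian scaled by a known factor,
# so the true normalization constant Z_true is available for checking.
Z_true = 0.25
p_unnorm = lambda x: Z_true * np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2.0 * np.pi)

# Gaussian importance distribution roughly matched to the posterior mode,
# as done with the DAMAP-8 estimate and the mapped prior covariance.
mu_q, sig_q = 1.0, 1.5
xs = rng.normal(mu_q, sig_q, 200_000)
q = np.exp(-0.5 * ((xs - mu_q) / sig_q) ** 2) / (sig_q * np.sqrt(2.0 * np.pi))

Z_hat = np.mean(p_unnorm(xs) / q)   # importance-sampling estimate of Z
```

Since E_q[p(x)/q(x)] = ∫ p(x) dx, the sample mean of the weights converges to the normalization constant.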
The filter can directly estimate MSE and Covariance with an acceptance-rejection method.
Lastly, DAMAP-c can provide an estimate of the mean square error of the state estimate. In this example, the sampling distribution used to calculate the MSE is a uniform distribution centered at the MAP estimate with amplitude 1.5 times the prior PDF's standard deviation. The figure shows the comparison of the covariance and MSE ellipses between the true distribution and the DAMAP-8 approximation. The figure reports as red dots the points that the algorithm accepts for the true distribution, while the blue circles are those from the polynomial series. The covariance and the MSE have been evaluated through a Monte Carlo calculation with 1 million sample points. The figure demonstrates the consistency of the estimation from DAMAP-c and the robustness of the MSE prediction, especially when compared with the covariances of the other filters. Moreover, the figure provides an estimate of the estimation bias, given by the distance between the red point (DAMAP-8 MAP) and the blue point (DAMAP-8 mean).
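The acceptance-rejection step can be sketched on a 1-D unnormalized target; accepted samples then yield both the covariance (around the sample mean) and the MSE (around the MAP-like point). The target density, proposal bounds, and envelope constant below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Unnormalized 1-D target density: acceptance-rejection needs no
# normalization constant (here a skewed Gaussian-like shape).
p = lambda x: np.exp(-0.5 * x**2) * (1.0 + 0.5 * np.sin(x))

# Uniform proposal around the mode (the MAP estimate in DAMAP-c), with an
# amplitude of a few prior standard deviations, plus an envelope M >= p(x).
lo, hi, M = -5.0, 5.0, 1.6
xs = rng.uniform(lo, hi, 500_000)
accept = rng.uniform(0.0, M, xs.size) < p(xs)
samples = xs[accept]

cov = samples.var()          # covariance: spread around the sample mean
mse = np.mean(samples**2)    # MSE: spread around the MAP-like point x = 0
```

The MSE exceeds the covariance by the squared bias, which is exactly the red-point/blue-point distance the figure visualizes.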
Angles-Only Orbit Determination is Achieved with Accuracy
The results of the orbit determination problem solved with DAMAP-3 are reported in the figure where a Monte Carlo analysis of 1000 test runs has been performed. For each test run, the true initial condition is chosen randomly according to the initial probability density function of the state. The simulation tracks the spacecraft for two full orbits obtaining twelve equally spaced observations for each orbit. The blue lines (effective MSE) represent three times the square root of the mean square error of the 1000 runs. The predicted MSE is calculated using 200 accepted samples and the mean of the 1000 predicted MSEs is shown in green lines (as three times the MSE’s square root). The close match between the green and blue lines indicates the filter’s consistency.
Koopman Operator Moments’ Prediction
Koopman Operator Filter (KOF) System Architecture.
An overview of the system architecture of the KO uncertainty propagation technique and of the KOF is proposed in the figure. The KO decomposition of the system, in terms of eigenfunctions and observable matrices, is performed offline: the dynamics are fed into the KO framework, which outputs the state transition polynomial map (STPM) that propagates the state for a specific time length. That is, for onboard navigation, it is sufficient to just upload the STPM to the onboard computer, because the evaluation of the inner products and the evaluation of the observable matrices need to be computed only once. After storing the polynomials, the filtering algorithm works autonomously, predicting the state uncertainties and processing measurements. First, the STPM is shifted to be centered at the current estimate, and the Isserlis moments are evaluated given the current covariance. The state PDF is then propagated via its central moments through the evaluation of the shifted STPM, where the Isserlis moments are substituted for their monomial counterparts. The KO uncertainty propagation technique can approximate high-order central moments up to any arbitrary order for analysis purposes or to fit a polynomial update. Figure 1 shows the simple case of a linear measurement update, where just the predicted state mean and covariance are used to evaluate their corrected counterparts. Thus, the KOF prediction step is completed by the evaluation of the predicted measurement mean and covariance in the KO framework. Indeed, the measurement equations have been analyzed offline to evaluate the measurement KO observable matrix, which defines a new polynomial map that directly connects the state variable to the measurement. After performing the polynomial shift such that the shifted map connects state deviation vectors to the measurement outcomes, the measurement moments are predicted in the same way as for the state.
Lastly, when the true measurement becomes available from the sensors, the update is computed and the filter can start the process over with a new prediction step. The system architecture shows how the KO methodology derived in previous works has been exploited and modified to work with stochastic variables and to perform recursive estimation. The STPM shifting and evaluation are critical for the correct prediction of the moments. Moreover, the measurement can be fully represented in the KO framework as well, such that it is possible to create a measurement polynomial map that undergoes the same calculation of expectations.
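The Isserlis moments used in the prediction step can be written out explicitly at fourth order: for a zero-mean Gaussian, E[x_i x_j x_k x_l] is the sum over the three pairings of the indices. A minimal sketch, with an illustrative two-state covariance (not a value from the application):

```python
import numpy as np

rng = np.random.default_rng(7)

P = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # covariance of a zero-mean Gaussian state

def isserlis4(P, i, j, k, l):
    """Fourth-order moment E[x_i x_j x_k x_l] of a zero-mean Gaussian via
    Isserlis' theorem: sum over the three pairings of the four indices."""
    return P[i, j] * P[k, l] + P[i, k] * P[j, l] + P[i, l] * P[j, k]

m = isserlis4(P, 0, 0, 1, 1)   # E[x0^2 x1^2] = P00*P11 + 2*P01^2
```

Substituting such closed-form moments for the monomials of the shifted STPM is what lets the filter propagate central moments without sampling.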
The KOF is a Consistent Filter
A Monte Carlo analysis has been performed to assess the consistency of the KOF. Figure 2 shows the state estimation for 3000 runs, with random true initial conditions drawn according to the initial Gaussian distribution. The figure shows the errors of the position (first row) and velocity (second row) of each run in gray. The mean of the errors is reported as a black line. This mean line settles on the zero value rapidly after every restart of the oscillator, which is in accordance with the unbiased nature of the Koopman operator filter. The convergence is assessed by the overall decrease of the uncertainties of the state of the system. The initial covariance is particularly large, and the figure zooms in to show the filter behavior at steady state, where the oscillations are marked by the sinusoidal pattern of the covariance lines. Indeed, the blue lines represent the effective standard deviation of the error as 3σ, calculated directly from the Monte Carlo runs at each time step, whereas the cyan lines are the estimated uncertainties from the updated covariance matrix of the filter, plotted again as 3σ. A consistent filter has a correct prediction of its own spread of the error, which is assessed in the figure by the overlapping of the estimated covariance (cyan) over the effective covariance (blue). The effective standard deviation indicates the effective level of accuracy of the filter, and it is evaluated as an average over the runs. This parameter directly evaluates the statistics of the errors from the results of the Monte Carlo analysis, and it is used for validation purposes. On the other hand, the predicted standard deviation indicates the level of accuracy the filter estimates it is reaching, and it is evaluated directly from the updated covariance matrix of the filter. Thus, a consistent filter is able to correctly estimate its own accuracy level, and there is a match between the effective and predicted covariance.
For the Duffing oscillator, the KOF has been proven to be a coherent and consistent filter, without any estimation bias, and it is robust against abrupt restarts of the system.
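The consistency check itself reduces to comparing the per-step standard deviation of the Monte Carlo errors with the filter's predicted one. A minimal sketch, with a hypothetical error ensemble generated to be consistent by construction (the run count matches the text; the time profile is invented):

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical errors of 3000 Monte Carlo runs over 50 time steps, with a
# per-step predicted standard deviation taken from the filter's covariance.
pred_std = np.linspace(1.0, 0.2, 50)                   # filter's own 1-sigma
errors = rng.normal(0.0, pred_std, size=(3000, 50))    # consistent by design

eff_std = errors.std(axis=0)    # effective std from the runs (blue lines)
ratio = eff_std / pred_std      # ~1 at every step for a consistent filter
```

A ratio drifting away from one (as for the divergent filters later in the text) flags over- or under-confidence.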
Koopman Operator for Raw Moments Prediction
The newly developed technique has been applied to the CRTBP in the Earth–moon system, and the results have been validated through a Monte Carlo analysis, where the predicted moments of the state have been compared to the effective moments evaluated from all of the Monte Carlo runs. The figure shows the time evolution of the uncertainties represented with the state raw moments. Indeed, the continuous lines represent the true evolution of the moments of the system: the mean is in black; the covariance, as the 3σ standard deviation, is in blue; the skewness is in cyan; and the kurtosis is in magenta. The correct prediction of the mean and uncertainties of the system is asserted by the overlapping of the points over the continuous lines. The accuracy of the estimation of the moments decreases as the integration time increases, meaning that, for extremely large propagation times, the prediction of the state PDF is unfeasible. The CRTBP is strongly divergent: thus, high-order central moments after long time steps require a large number of runs in order to yield reliable reference values. Moreover, the figure shows the high level of accuracy in the first half of the simulation, for times shorter than one year, which is zoomed in on in the figure. Station-keeping applications, in filtering, usually receive measurements in the section where the KO shows an extremely accurate prediction of moments. The KO technique can predict central moments up to any arbitrarily high order. However, because odd moments can be negative, the accuracy of the prediction of even moments is higher than that of their odd counterparts. Indeed, some values of the skewness, in the figure, are slightly off by the end of the simulation, where the state PDF expands largely without control.
In the remaining sectors, the prediction of odd moments outperforms any other uncertainty prediction technique based on the Gaussian approximation of the state distribution (such as the UT), because such techniques provide a null value for any odd moment.
Koopman Operator for Central Moments Prediction
The central moments estimation can be appreciated by reporting the uncertainties as deviations around their mean. Indeed, although the previous figure shows the correct prediction of the family of orbits in terms of its overall possible trajectories, this figure focuses on the spread around the mean. Therefore, this figure displays similar results to the previous one, but from a new perspective centered at the current mean. It shows the chaotic nature of the problem: the uncertainties, in terms of their σ value, increase exponentially, and the spread of all the possible realizations of the state creates a conical shape. The KO prediction is able to keep track of the central moments of the system, especially during the first half of the simulation. Both the state covariance and kurtosis expand without any bounds. The star points show the correct estimation performed by the KO technique. The new methodology keeps an accurate prediction of the state distribution, even over a long period of time, with only a slight reduction in accuracy. The figure has a logarithmic scale for the values of the moments, highlighting the divergent nature of the CRTBP dynamics. Thus, it is worth noticing the exponential increase of the spread of the state PDF, with standard deviations that are two orders of magnitude larger after merely three-quarters of a full revolution.
KO Accuracy on the Prediction of Moments
The correct prediction of the uncertainties has been compared to other common techniques that propagate central moments forward in time. First, the Gaussian mixture model has been used to propagate the state distribution by splitting the PDF into 2n + 1 Gaussians. Then, state transition tensors and differential algebra have been adopted to provide a high-accuracy comparison of the propagated state standard deviation. These two techniques have been proven to obtain similar results and the same level of accuracy when applied up to a high truncation order. The figure reports the percentage relative error with respect to the Monte Carlo standard deviation values of the position ζp and velocity ζv covariance prediction. The GMM approximation has difficulties in estimating the spread of the distribution, and the line corresponding to its performance (in yellow) lies far above the STT and DA lines (in red) and the KO line (in blue). By zooming in on the lower error magnitudes, it can be noted that the high-order approximation with STTs and DA has a smaller relative error when compared to the KO. This aspect is due to the Richardson approximation of the dynamics of the CRTBP, which becomes less accurate for halo orbits as time increases. The KO technique has evaluated its state transition polynomial map on the Richardson approximation of the Hamiltonian; for this specific application, it suffers the same limitations.
KO Filter Convergence and Consistency for the CRTBP
The figure shows, for each component of the state, the Monte Carlo analysis performed with the KOF on a selected Lyapunov orbit. Indeed, each gray line represents the errors of one single run of the Monte Carlo, for each state component. The means of the errors are portrayed with black lines. The mean (black) lines settle on the zero value, proving that the KOF is an unbiased filter, as expected from the linearized minimum mean square error principle on which the update of the filter is based. The Monte Carlo analysis also provides information about the spread of the state error and the level of uncertainty. The figure has multiple pairs of dashed and continuous blue lines. The dashed blue lines represent the levels of standard deviation obtained from the Monte Carlo runs: they represent the effective level of accuracy of the filter. These effective lines are calculated, for each time step, by considering the values of the state errors among all the simulations, and they represent the actual behavior of the filter. On the contrary, the continuous blue lines report the estimated level of the uncertainties, which is predicted by the filter. Thus, for each time step, these lines are calculated by extracting the error standard deviation directly from the updated covariance of the filter. By looking at the figure, we can assess the consistency of the KOF because the effective and predicted lines overlap, meaning that the filter is able to correctly predict its own uncertainty. The KOF shows convergence and consistency, where steady-state accuracy levels are reached rapidly after a few updates. The large initial uncertainties are rapidly reduced around the most current estimate, and the updated covariance of the filter correctly represents the spread of the state error PDF.
The Koopman Operator Filter is the Most Consistent Filter in the Selected Application
The figure reports the accuracy levels, in terms of error standard deviations, of four filters, displaying estimated values with continuous lines and the actual behavior with dashed lines. The EKF is shown in red. The effective EKF lines, for both the position and velocity, are out of scale when compared to the predicted counterparts, which overlap with the predicted IKF lines. Indeed, the green lines correspond to the IKF. The EKF and the IKF share the same updated covariance matrix while their estimates differ. Thus, the overlapping between the IKF and the EKF predicted lines matches what is expected from theory: the two estimators approximate the posterior distribution of the state as Gaussian, with the same level of uncertainty (same standard deviations) but different means. Indeed, while the EKF is a minimum mean square error (MMSE) filter, the IKF is based on the MAP principle, outputting as its estimate the most likely state of the posterior distribution. However, the IKF and EKF share the same prediction step, based on the linearization of the dynamics, which is not sufficient to achieve an accurate approximation of the state prior distribution. Therefore, both filters are inconsistent, and their dashed lines settle orders of magnitude above the continuous ones. The EKF and IKF believe that they are achieving higher accuracy than their actual results, and they are overconfident in their performance. The difference between the green dashed lines and the red dashed lines stems from the different update steps performed by the two filtering techniques: an MMSE one for the EKF, and a MAP one for the IKF. On the contrary, in blue, the UKF applies the unscented transformation in its prediction step to obtain a more accurate state prior distribution. Thus, the effective UKF line settles below the EKF and the IKF ones. However, the match with the predicted uncertainty is missing, and this filter is inconsistent as well.
The UKF estimates a performance similar to the KOF (in black), which is shown by the overlapping between the predicted KOF and the predicted UKF lines. Therefore, the KOF is the only filter that converges with consistency. The KOF is able to correctly predict its own uncertainties, and its estimate is the most accurate among the other filtering techniques. The effective KOF lines are the lowest effective lines for both position and velocity, and they overlap their predicted counterparts along the whole time length of the simulation.
Measurement Noise Scouting for a New Sequential Importance Sampling Technique
An exploitation of the Sequential Importance Sampling (SIS) algorithm using Differential Algebra (DA) techniques is derived to develop an efficient particle filter. The filter creates a new kind of particle, called scout particles, which bring information from the measurement noise onto the state prior probability density function. Thanks to the creation of high-order polynomial maps and their inversions, scouting the measurements helps the SIS algorithm identify the region of the prior most affected by the likelihood distribution. The technique results in two versions of the proposed Scout Particle Filter (SPF), which identifies and delimits the region where the true posterior probability density is high within the SIS algorithm. Four numerical applications show the benefits of the methodology in terms of both accuracy and efficiency, comparing the SPF to other particle filters with a particular focus on target tracking and orbit determination problems.
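For context, the core SIS mechanics that the SPF builds on can be sketched in a few lines: particles drawn from an importance distribution are reweighted by the measurement likelihood. This is a generic scalar illustration with an assumed Gaussian measurement noise, not the DA-based scout update itself:

```python
import numpy as np

def sis_update(particles, weights, z, h, meas_std):
    """One generic SIS weight update: reweight particles by the
    measurement likelihood and renormalize.

    particles: (N,) state samples; weights: (N,) current normalized weights;
    z: scalar measurement; h: measurement function; meas_std: noise std.
    """
    residual = z - h(particles)
    likelihood = np.exp(-0.5 * (residual / meas_std) ** 2)
    new_w = weights * likelihood
    return new_w / new_w.sum()

# Toy run: Gaussian prior N(0, 2^2), direct observation z = 1.5 with std 0.5.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=1000)      # samples from the prior (importance PDF)
w = np.full(1000, 1.0 / 1000)            # uniform initial weights
w = sis_update(x, w, z=1.5, h=lambda s: s, meas_std=0.5)
estimate = np.dot(w, x)                  # weighted-mean estimate (the "red circle")
# estimate lands near the analytic posterior mean, 6/4.25 ≈ 1.41
```

With the prior as the importance distribution this is exactly the bootstrap setting whose impoverishment the SPF is designed to cure.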
The Map Inversion Selects the Region of the Posterior
Different particle filters based on SIS have been applied to the problem, and their performance has been compared in terms of accuracy and efficiency. The figure shows the weight each particle receives after the update by modifying the particle size: a bigger point corresponds to a particle with larger weight. The figure reports the state space only around the true posterior distribution, shown in light gray in the background to facilitate visual comparison. The estimate of each filter, calculated as a weighted mean, is displayed as a red circle. The BPF has been implemented using 10000 particles: only very few have enough weight to be visible in the figure, and most of them fall outside the posterior distribution domain, as they are sampled around the prior mean. The BPF graph is a textbook example of particle impoverishment, where only a handful of particles (0.21% in the presented case) contribute to the state estimate. A particle filter that does not suffer from impoverishment has particles of similar size (weight), without extremes. The SIS-EKF and SIS-UKF are the next step toward improved estimation: their importance distribution is the updated Gaussian PDF calculated by the EKF and UKF, respectively. However, even though they show better particle efficiency than the BPF, many particles are drawn in low-density zones outside the posterior domain; to achieve valuable results, the SIS-EKF and SIS-UKF have also been run with 10000 particles. Lastly, the Scout Particle Filter has been run in its two variations: SPF-1, which builds a uniform distribution from the scout particles, and SPF-2, which uses a Gaussian PDF. In both cases, only 100 scout particles are drawn to scout the likelihood, such that a total of 1000 samples is enough to achieve an accurate update. Looking at the figure, the SPF particles are efficiently drawn in the vicinity of the true posterior, and they share weight accordingly.
In particular, using the uniform distribution, SPF-1 particles get heavier the closer they are to the true posterior peak, perfectly representing the distribution as a PMF.
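The impoverishment visible in the BPF panel is commonly quantified with the effective sample size, N_eff = 1/Σ wᵢ². A small sketch of that diagnostic (our own illustration; the paper's figures encode the same information through particle size):

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2): the number of particles that actually
    carry weight. N_eff close to N means healthy sampling; N_eff << N
    means impoverishment, as in the BPF case described above."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

uniform = np.full(10000, 1e-4)            # well-spread, equal weights
degenerate = np.zeros(10000)
degenerate[:21] = 1.0 / 21                # ~0.21% of particles carry all weight
n_eff_good = effective_sample_size(uniform)      # ≈ 10000
n_eff_bad = effective_sample_size(degenerate)    # ≈ 21
```

By this measure the degenerate weight vector behaves like a 21-particle filter despite its 10000 samples.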
The Scouting Improves Accuracy and Efficiency
A more precise comparison of performance is obtained by completing a Monte Carlo analysis and computing the root mean square error (RMSE) of each filter together with the relative Ψ efficiency factor. The figure shows the error level of each filter as a histogram (blue bars). In the same graph, the efficiency of the particle sampling is reported through Ψ, an indicator of the ratio of effective particles; the Ψ factor is thus a measure of how well the importance distribution has been chosen by the filtering algorithm. As expected, the BPF has terrible efficiency and poor accuracy. Introducing the SIS algorithm, using the EKF and UKF, improves the performance of the filters, but only slightly, owing to the poor estimates from the linear estimators. The figure shows that the performance of the SPF is superior to the other filters, both in terms of accuracy and efficiency. Between the two scouting filters, SPF-1 achieves more accurate estimates than its counterpart; however, SPF-2 proves that Gaussian sampling around the scout mean is more efficient in generating samples, meaning that it could achieve similar accuracy levels with fewer particles.
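The RMSE side of such a comparison has a simple skeleton. The sketch below uses synthetic errors in place of actual filter output, so the names and numbers are illustrative only:

```python
import numpy as np

def rmse(estimates, truths):
    """Root-mean-square estimation error across Monte Carlo runs.

    estimates, truths: (n_runs, n_states) arrays; returns per-state RMSE.
    """
    err = np.asarray(estimates) - np.asarray(truths)
    return np.sqrt(np.mean(err ** 2, axis=0))

# Toy Monte Carlo: a "tight" filter vs a "loose" one against the same truth.
rng = np.random.default_rng(2)
truth = np.zeros((500, 3))
tight = truth + rng.normal(0.0, 0.1, truth.shape)   # std-0.1 errors
loose = truth + rng.normal(0.0, 1.0, truth.shape)   # std-1.0 errors
r_tight = rmse(tight, truth)   # per-state values near 0.1
r_loose = rmse(loose, truth)   # per-state values near 1.0
```

Pairing each filter's RMSE bar with its effective-particle ratio, as the figure does, is what separates "accurate" from "accurate and efficiently sampled".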
The Map Inversion Selects the Region of the Posterior
The figure shows the results of the estimators for the new application. Lacking angle information, the true posterior distribution is more curved than in the previous case, which penalizes linear estimators such as the EKF and the UKF. Thus, the two SIS particle filters that rely on the Gaussian assumption perform poorly, as the Gaussian mean is far from the mean of the true posterior (red circle in the first graph). The BPF is, once again, poorly sampled, as the observation has moved the likelihood to the edges of the prior distribution. Lastly, the two SPF variants show the best behavior. They create fictitious angle measurements to invert the polynomial map, such that the scout particles can undergo polynomial evaluation. In particular, SPF-1 scans the whole posterior domain, as the uniform distribution creates a rectangle from which particles are drawn. On the contrary, SPF-2 concentrates its particles closer to the mean of the true posterior thanks to the Gaussian approximation of the scout samples.
Scout Particle Filter Software Architecture
The overall structure of the Scout Particle Filter is outlined in the figure, where the filtering algorithm has been deconstructed to show its components and their connections. The algorithm separates into a prediction step and an update step, connected only by the exchange of the current predicted mean and covariance. The update step contains the main improvement over the literature: the SIS algorithm is supported by DA mathematics, preceded by the creation of the polynomial maps and their inversions. The only information coming from the external world is the observation from the sensor with its noise covariance level; this measurement is used to improve the accuracy of the estimated state PDF. After undergoing the inverted map through polynomial evaluation, the scout particles delimit the region of the prior that could have generated the measurement. Therefore, scouting is most beneficial whenever knowledge of the prior state distribution is poor. If the filter already provides a very accurate predicted PDF, the scout particles would delimit the whole state distribution as the high-density region. Thus, scouting helps the filter especially during the initial steps and the transient period of the estimation process, before convergence is reached. Indeed, once convergence is reached, the SPF can stop generating scout particles and apply any common particle filter update, reducing the number of computations. In a similar manner, the SPF is particularly effective when the system is affected by strong process noise or when consecutive measurement acquisitions are separated by long propagation times, factors that tend to enlarge the spread of the state PDF under the nonlinear dynamics.
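The scouting idea itself can be caricatured in a few lines: draw measurement-noise realizations around the observation, push them backward through the measurement map, and use the resulting state-space region to restrict where SIS samples are drawn. In the sketch below the measurement map is assumed analytically invertible (a scalar h(x) = x², x > 0, so h⁻¹ = sqrt), whereas the actual SPF builds a high-order polynomial map with DA and inverts it; all names and values here are illustrative:

```python
import numpy as np

def scout_region(z, h_inv, meas_std, n_scouts=100, rng=None):
    """Sketch of the scouting step (not the DA polynomial inversion itself):
    sample measurement-noise realizations around z, map them back to state
    space through an inverse measurement map h_inv, and return the interval
    the scout particles delimit. SPF-1 would then sample uniformly in this
    box; SPF-2 would fit a Gaussian to the scouts instead."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = z + rng.normal(0.0, meas_std, size=n_scouts)  # noise realizations
    scouts = h_inv(noisy)                                 # back-mapped scouts
    return scouts.min(), scouts.max()

# Example: squared-range observation z = x**2 observed as 4.0 with std 0.1,
# so the scouts cluster around x = 2.
lo, hi = scout_region(z=4.0, h_inv=np.sqrt, meas_std=0.1,
                      rng=np.random.default_rng(3))
# [lo, hi] is a narrow bracket around 2.0: the slice of the prior that
# could plausibly have generated the measurement
```

This mirrors the architecture above: the only external inputs are the observation and its noise level, and the output is a delimited high-likelihood region handed to the SIS sampler.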