
Optimal estimation

In applied statistics, optimal estimation is a regularized matrix-inversion method based on Bayes' theorem. It is used very commonly in the geosciences, particularly for atmospheric sounding. A matrix inverse problem takes the form:

\vec{y} = \boldsymbol{A}\,\vec{x}

The essential concept is to transform the matrix \boldsymbol{A} into a conditional probability, and the variables \vec{x} and \vec{y} into probability distributions, by assuming Gaussian statistics and using empirically determined covariance matrices. The statistics of most measurements can typically be expected to be Gaussian, so for P(\vec{y}|\vec{x}), for example, we can write:

P(\vec{y}|\vec{x}) = \frac{1}{(2\pi)^{n/2} |\boldsymbol{S_y}|^{1/2}} \exp\!\left[ -\frac{1}{2} (\vec{y} - \boldsymbol{A}\vec{x})^T \boldsymbol{S_y}^{-1} (\vec{y} - \boldsymbol{A}\vec{x}) \right]

where m and n are the numbers of elements in \vec{x} and \vec{y} respectively, \boldsymbol{A} is the matrix to be solved (the linear or linearised forward model), and \boldsymbol{S_y} is the covariance matrix of \vec{y}. The same can be done for \vec{x}:

P(\vec{x}) = \frac{1}{(2\pi)^{m/2} |\boldsymbol{S_{x_a}}|^{1/2}} \exp\!\left[ -\frac{1}{2} (\vec{x} - \widehat{x_a})^T \boldsymbol{S_{x_a}}^{-1} (\vec{x} - \widehat{x_a}) \right]

Here P(\vec{x}) is taken to be the so-called a priori distribution: \widehat{x_a} denotes the a priori values for \vec{x}, while \boldsymbol{S_{x_a}} is its covariance matrix. A convenient property of Gaussian distributions is that only two parameters are needed to describe them, so the whole problem can be converted once again to matrices. By Bayes' theorem, the posterior P(\vec{x}|\vec{y}) takes the following form:

P(\vec{x}|\vec{y}) = \frac{P(\vec{y}|\vec{x})\, P(\vec{x})}{P(\vec{y})}

The term P(\vec{y}) may be neglected since, for a given measurement \vec{y}, it does not depend on \vec{x} and is simply a constant scaling term.
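As a numerical illustration of the Gaussian likelihood above, the sketch below evaluates log P(\vec{y}|\vec{x}) for a linear forward model using NumPy; the function name and arguments are illustrative, not part of any standard library.

```python
import numpy as np

def gaussian_log_likelihood(y, A, x, S_y):
    """Log of P(y|x) for the linear forward model y = A x, assuming
    Gaussian measurement noise with covariance matrix S_y."""
    n = y.size
    r = y - A @ x  # residual between measurement and forward model
    # Log of the normalisation constant 1 / ((2*pi)^(n/2) |S_y|^(1/2))
    _, logdet = np.linalg.slogdet(S_y)
    norm = -0.5 * (n * np.log(2.0 * np.pi) + logdet)
    # Quadratic form -(1/2) r^T S_y^{-1} r, via a linear solve rather
    # than an explicit matrix inverse for numerical stability
    quad = -0.5 * r @ np.linalg.solve(S_y, r)
    return norm + quad
```

Working in log space avoids underflow when the exponent is large and negative, which is common when y has many elements.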
Now it is possible to solve for both the expectation value of \vec{x}, \widehat{x}, and for its covariance matrix by equating P(\vec{x}|\vec{y}) with P(\vec{y}|\vec{x})\,P(\vec{x}). This produces the following equations:

\widehat{x} = \widehat{x_a} + \boldsymbol{S_{x_a}} \boldsymbol{A}^T \left( \boldsymbol{A}\, \boldsymbol{S_{x_a}} \boldsymbol{A}^T + \boldsymbol{S_y} \right)^{-1} (\vec{y} - \boldsymbol{A}\, \widehat{x_a})

\widehat{\boldsymbol{S_x}} = \left( \boldsymbol{A}^T \boldsymbol{S_y}^{-1} \boldsymbol{A} + \boldsymbol{S_{x_a}}^{-1} \right)^{-1}

Because the distributions are Gaussian, the expected value coincides with the most probable value, so this is also a form of maximum a posteriori estimation.
