
Representer theorem

In statistical learning theory, a representer theorem is any of several related results stating that a minimizer $f^{*}$ of a regularized empirical risk functional defined over a reproducing kernel Hilbert space can be represented as a finite linear combination of kernel products evaluated on the input points of the training data.

The following representer theorem and its proof are due to Schölkopf, Herbrich, and Smola:

Theorem: Consider a positive-definite real-valued kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ on a non-empty set $\mathcal{X}$ with a corresponding reproducing kernel Hilbert space $H_k$. Let there be given

- a training sample $(x_1, y_1), \dots, (x_n, y_n) \in \mathcal{X} \times \mathbb{R}$,
- a strictly increasing real-valued function $g : [0, \infty) \to \mathbb{R}$, and
- an arbitrary error function $E : (\mathcal{X} \times \mathbb{R}^2)^n \to \mathbb{R} \cup \{\infty\}$,

which together define the following regularized empirical risk functional on $H_k$:

$$f \mapsto E\bigl((x_1, y_1, f(x_1)), \dots, (x_n, y_n, f(x_n))\bigr) + g(\lVert f \rVert).$$

Then any minimizer $f^{*}$ of this functional over $H_k$ admits a representation of the form

$$f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i),$$

where $\alpha_i \in \mathbb{R}$ for all $1 \le i \le n$.
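As an illustrative sketch (not part of the article), the theorem can be seen at work in kernel ridge regression: there $E$ is the squared error and $g(\lVert f \rVert) = \lambda \lVert f \rVert^2$, and substituting the finite expansion $f(\cdot) = \sum_i \alpha_i k(\cdot, x_i)$ reduces the infinite-dimensional minimization over $H_k$ to a linear system for the coefficients $\alpha$. All names below (`rbf_kernel`, `f_star`, the sample sizes and parameters) are hypothetical choices for the demo, not from the source.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # k(x, x') = exp(-gamma * ||x - x'||^2), a positive-definite kernel
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.sum(d * d, axis=2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1))      # training inputs x_1, ..., x_n
y = np.sin(X[:, 0])               # training targets y_1, ..., y_n
lam = 0.1                         # regularization strength (lambda)

K = rbf_kernel(X, X)              # Gram matrix K_ij = k(x_i, x_j)

# Plugging f = sum_i alpha_i k(., x_i) into the regularized risk
# ||K alpha - y||^2 + lam * alpha^T K alpha and setting the gradient
# to zero yields the linear system (K + lam * I) alpha = y.
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def f_star(x_new):
    # The minimizer evaluated at new points: a finite linear
    # combination of kernels centered on the training inputs.
    return rbf_kernel(x_new, X) @ alpha
```

Note that the predictor `f_star` depends on the data only through the $n$ coefficients `alpha` and the training inputs, exactly the finite representation the theorem guarantees.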

See also

- Radial basis function kernel
- Reproducing kernel Hilbert space
- Kernel embedding of distributions
- Polynomial kernel
- Kernel principal component analysis