Kernel Functions
Base Kernels
These are the basic kernels without any transformation of the data. They are the building blocks of KernelFunctions.
Constant Kernels
KernelFunctions.ZeroKernel — Type
ZeroKernel()
Zero kernel.
Definition
For inputs $x, x'$, the zero kernel is defined as
\[k(x, x') = 0.\]
The output type depends on $x$ and $x'$.
See also: ConstantKernel
KernelFunctions.ConstantKernel — Type
ConstantKernel(; c::Real=1.0)
Kernel of constant value c.
Definition
For inputs $x, x'$, the kernel of constant value $c \geq 0$ is defined as
\[k(x, x') = c.\]
See also: ZeroKernel
KernelFunctions.WhiteKernel — Type
WhiteKernel()
White noise kernel.
Definition
For inputs $x, x'$, the white noise kernel is defined as
\[k(x, x') = \delta(x, x').\]
KernelFunctions.EyeKernel — Type
EyeKernel()
Alias of WhiteKernel.
Cosine Kernel
KernelFunctions.CosineKernel — Type
CosineKernel()
Cosine kernel.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the cosine kernel is defined as
\[k(x, x') = \cos(\pi \|x-x'\|_2).\]
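The definition is easy to check numerically (a sketch, not part of the original docstring; it assumes the callable kernel syntax k(x, x') and norm from LinearAlgebra):
julia> using LinearAlgebra
julia> x, y = rand(3), rand(3);
julia> CosineKernel()(x, y) ≈ cos(π * norm(x - y))
true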
Exponential Kernels
KernelFunctions.ExponentialKernel — Type
ExponentialKernel()
Exponential kernel.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the exponential kernel is defined as
\[k(x, x') = \exp\big(- \|x - x'\|_2\big).\]
See also: GammaExponentialKernel
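A quick sanity check of the definition (a sketch under the same assumptions as above: callable kernel syntax and norm from LinearAlgebra):
julia> using LinearAlgebra
julia> x, y = rand(3), rand(3);
julia> ExponentialKernel()(x, y) ≈ exp(-norm(x - y))
true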
KernelFunctions.LaplacianKernel — Type
LaplacianKernel()
Alias of ExponentialKernel.
KernelFunctions.SqExponentialKernel — Type
SqExponentialKernel()
Squared exponential kernel.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the squared exponential kernel is defined as
\[k(x, x') = \exp\bigg(- \frac{\|x - x'\|_2^2}{2}\bigg).\]
See also: GammaExponentialKernel
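Note the factor of 2 in the denominator, which can be verified directly (a sketch, assuming the callable kernel syntax):
julia> using LinearAlgebra
julia> x, y = rand(3), rand(3);
julia> SqExponentialKernel()(x, y) ≈ exp(-norm(x - y)^2 / 2)
true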
KernelFunctions.SEKernel — Type
SEKernel()
Alias of SqExponentialKernel.
KernelFunctions.GaussianKernel — Type
GaussianKernel()
Alias of SqExponentialKernel.
KernelFunctions.RBFKernel — Type
RBFKernel()
Alias of SqExponentialKernel.
KernelFunctions.GammaExponentialKernel — Type
GammaExponentialKernel(; γ::Real=2.0)
γ-exponential kernel with parameter γ.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the γ-exponential kernel[RW] with parameter $\gamma \in (0, 2]$ is defined as
\[k(x, x'; \gamma) = \exp\big(- \|x - x'\|_2^{\gamma}\big).\]
See also: ExponentialKernel, SqExponentialKernel
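For $\gamma = 1$ the exponential kernel is recovered, which makes a convenient check (a sketch, assuming the keyword constructor shown above):
julia> x, y = rand(3), rand(3);
julia> GammaExponentialKernel(γ=1.0)(x, y) ≈ ExponentialKernel()(x, y)
true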
Exponentiated Kernel
KernelFunctions.ExponentiatedKernel — Type
ExponentiatedKernel()
Exponentiated kernel.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the exponentiated kernel is defined as
\[k(x, x') = \exp(x^\top x').\]
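A direct check of the definition (a sketch, assuming the callable kernel syntax and dot from LinearAlgebra):
julia> using LinearAlgebra
julia> x, y = rand(3), rand(3);
julia> ExponentiatedKernel()(x, y) ≈ exp(dot(x, y))
true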
Fractional Brownian Motion Kernel
KernelFunctions.FBMKernel — Type
FBMKernel(; h::Real=0.5)
Fractional Brownian motion kernel with Hurst index h.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the fractional Brownian motion kernel with Hurst index $h \in [0,1]$ is defined as
\[k(x, x'; h) = \frac{\|x\|_2^{2h} + \|x'\|_2^{2h} - \|x - x'\|_2^{2h}}{2}.\]
Gabor Kernel
KernelFunctions.GaborKernel — Type
GaborKernel(; ell::Real=1.0, p::Real=1.0)
Gabor kernel with lengthscale ell and period p.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the Gabor kernel with lengthscale $l_i > 0$ and period $p_i > 0$ is defined as
\[k(x, x'; l, p) = \exp\bigg(- \cos\bigg(\pi\sum_{i=1}^d \frac{x_i - x'_i}{p_i}\bigg) \sum_{i=1}^d \frac{(x_i - x'_i)^2}{l_i^2}\bigg).\]
Matérn Kernels
KernelFunctions.MaternKernel — Type
MaternKernel(; ν::Real=1.5)
Matérn kernel of order ν.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the Matérn kernel of order $\nu > 0$ is defined as
\[k(x,x';\nu) = \frac{2^{1-\nu}}{\Gamma(\nu)}\big(\sqrt{2\nu}\|x-x'\|_2\big)^{\nu} K_\nu\big(\sqrt{2\nu}\|x-x'\|_2\big),\]
where $\Gamma$ is the Gamma function and $K_{\nu}$ is the modified Bessel function of the second kind of order $\nu$.
A Gaussian process with a Matérn kernel is $\lceil \nu \rceil - 1$-times differentiable in the mean-square sense.
See also: Matern12Kernel, Matern32Kernel, Matern52Kernel
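For half-integer orders the general Bessel-function form reduces to the closed-form variants below; a hedged check (assuming the keyword constructor, with a tolerance since the general form is evaluated numerically):
julia> x, y = rand(3), rand(3);
julia> isapprox(MaternKernel(ν=2.5)(x, y), Matern52Kernel()(x, y); rtol=1e-6)
true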
KernelFunctions.Matern12Kernel — Type
Matern12Kernel()
Alias of ExponentialKernel.
KernelFunctions.Matern32Kernel — Type
Matern32Kernel()
Matérn kernel of order $3/2$.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the Matérn kernel of order $3/2$ is given by
\[k(x, x') = \big(1 + \sqrt{3} \|x - x'\|_2 \big) \exp\big(- \sqrt{3}\|x - x'\|_2\big).\]
See also: MaternKernel
KernelFunctions.Matern52Kernel — Type
Matern52Kernel()
Matérn kernel of order $5/2$.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the Matérn kernel of order $5/2$ is given by
\[k(x, x') = \bigg(1 + \sqrt{5} \|x - x'\|_2 + \frac{5}{3}\|x - x'\|_2^2\bigg) \exp\big(- \sqrt{5}\|x - x'\|_2\big).\]
See also: MaternKernel
Neural Network Kernel
KernelFunctions.NeuralNetworkKernel — Type
NeuralNetworkKernel()
Kernel of a Gaussian process obtained as the limit of a Bayesian neural network with a single hidden layer as the number of units goes to infinity.
Definition
Consider the single-layer Bayesian neural network $f \colon \mathbb{R}^d \to \mathbb{R}$ with $h$ hidden units defined by
\[f(x; b, v, u) = b + \sqrt{\frac{\pi}{2}} \sum_{i=1}^{h} v_i \mathrm{erf}\big(u_i^\top x\big),\]
where $\mathrm{erf}$ is the error function, and with prior distributions
\[\begin{aligned} b &\sim \mathcal{N}(0, \sigma_b^2),\\ v &\sim \mathcal{N}(0, \sigma_v^2 \mathrm{I}_{h}/h),\\ u_i &\sim \mathcal{N}(0, \mathrm{I}_{d}/2) \qquad (i = 1,\ldots,h). \end{aligned}\]
As $h \to \infty$, the neural network converges to the Gaussian process
\[g(\cdot) \sim \mathcal{GP}\big(0, \sigma_b^2 + \sigma_v^2 k(\cdot, \cdot)\big),\]
where the neural network kernel $k$ is given by
\[k(x, x') = \arcsin\left(\frac{x^\top x'}{\sqrt{\big(1 + \|x\|^2_2\big) \big(1 + \|x'\|_2^2\big)}}\right)\]
for inputs $x, x' \in \mathbb{R}^d$.[CW]
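The arcsine form can be checked directly (a sketch, not part of the original docstring; it assumes the callable kernel syntax and dot from LinearAlgebra):
julia> using LinearAlgebra
julia> x, y = rand(3), rand(3);
julia> NeuralNetworkKernel()(x, y) ≈ asin(dot(x, y) / sqrt((1 + dot(x, x)) * (1 + dot(y, y))))
true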
Periodic Kernel
KernelFunctions.PeriodicKernel — Type
PeriodicKernel(; r::AbstractVector=ones(Float64, 1))
Periodic kernel with parameter r.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the periodic kernel with parameter $r_i > 0$ is defined[DM] as
\[k(x, x'; r) = \exp\bigg(- \frac{1}{2} \sum_{i=1}^d \bigg(\frac{\sin\big(\pi(x_i - x'_i)\big)}{r_i}\bigg)^2\bigg).\]
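A componentwise check of the definition (a sketch, assuming the keyword constructor above with r matching the input dimension):
julia> x, y = rand(2), rand(2); r = [0.5, 1.0];
julia> PeriodicKernel(r=r)(x, y) ≈ exp(-sum(abs2, sin.(π .* (x .- y)) ./ r) / 2)
true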
KernelFunctions.PeriodicKernel — Method
PeriodicKernel([T=Float64, dims::Int=1])
Create a PeriodicKernel with parameter r=ones(T, dims).
Piecewise Polynomial Kernel
KernelFunctions.PiecewisePolynomialKernel — Type
PiecewisePolynomialKernel(; degree::Int=0, dim::Int)
PiecewisePolynomialKernel{degree}(dim::Int)
Piecewise polynomial kernel of degree degree for inputs of dimension dim with support in the unit ball.
Definition
For inputs $x, x' \in \mathbb{R}^d$ of dimension $d$, the piecewise polynomial kernel of degree $v \in \{0,1,2,3\}$ is defined as
\[k(x, x'; v) = \max(1 - \|x - x'\|, 0)^{\alpha(v,d)} f_{v,d}(\|x - x'\|),\]
where $\alpha(v, d) = \lfloor \frac{d}{2}\rfloor + 2v + 1$ and $f_{v,d}$ are polynomials of degree $v$ given by
\[\begin{aligned} f_{0,d}(r) &= 1, \\ f_{1,d}(r) &= 1 + (j + 1) r, \\ f_{2,d}(r) &= 1 + (j + 2) r + \big((j^2 + 4j + 3) / 3\big) r^2, \\ f_{3,d}(r) &= 1 + (j + 3) r + \big((6 j^2 + 36j + 45) / 15\big) r^2 + \big((j^3 + 9 j^2 + 23j + 15) / 15\big) r^3, \end{aligned}\]
where $j = \lfloor \frac{d}{2}\rfloor + v + 1$.
The kernel is $2v$ times continuously differentiable and the corresponding Gaussian process is hence $v$ times mean-square differentiable.
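The compact support is visible directly: inputs at distance 1 or more yield zero (a sketch, assuming the keyword constructor shown above):
julia> k = PiecewisePolynomialKernel(degree=1, dim=2);
julia> k([0.0, 0.0], [0.5, 0.0]) > 0
true
julia> k([0.0, 0.0], [2.0, 0.0]) == 0
true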
Polynomial Kernels
KernelFunctions.LinearKernel — Type
LinearKernel(; c::Real=0.0)
Linear kernel with constant offset c.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the linear kernel with constant offset $c \geq 0$ is defined as
\[k(x, x'; c) = x^\top x' + c.\]
See also: PolynomialKernel
KernelFunctions.PolynomialKernel — Type
PolynomialKernel(; degree::Int=2, c::Real=0.0)
Polynomial kernel of degree degree with constant offset c.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the polynomial kernel of degree $\nu \in \mathbb{N}$ with constant offset $c \geq 0$ is defined as
\[k(x, x'; c, \nu) = (x^\top x' + c)^\nu.\]
See also: LinearKernel
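Both polynomial-type definitions can be verified numerically (a sketch, assuming the keyword constructors above and dot from LinearAlgebra):
julia> using LinearAlgebra
julia> x, y = rand(3), rand(3);
julia> LinearKernel(c=0.5)(x, y) ≈ dot(x, y) + 0.5
true
julia> PolynomialKernel(degree=3, c=0.5)(x, y) ≈ (dot(x, y) + 0.5)^3
true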
Rational Quadratic Kernels
KernelFunctions.RationalQuadraticKernel — Type
RationalQuadraticKernel(; α::Real=2.0)
Rational-quadratic kernel with shape parameter α.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the rational-quadratic kernel with shape parameter $\alpha > 0$ is defined as
\[k(x, x'; \alpha) = \bigg(1 + \frac{\|x - x'\|_2^2}{2\alpha}\bigg)^{-\alpha}.\]
The SqExponentialKernel is recovered in the limit as $\alpha \to \infty$.
See also: GammaRationalQuadraticKernel
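The limiting behaviour can be checked with a large shape parameter (a sketch; the tolerance is loose because equality only holds in the limit):
julia> x, y = rand(3), rand(3);
julia> isapprox(RationalQuadraticKernel(α=1e6)(x, y), SqExponentialKernel()(x, y); atol=1e-5)
true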
KernelFunctions.GammaRationalQuadraticKernel — Type
GammaRationalQuadraticKernel(; α::Real=2.0, γ::Real=2.0)
γ-rational-quadratic kernel with shape parameters α and γ.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the γ-rational-quadratic kernel with shape parameters $\alpha > 0$ and $\gamma \in (0, 2]$ is defined as
\[k(x, x'; \alpha, \gamma) = \bigg(1 + \frac{\|x - x'\|_2^{\gamma}}{\alpha}\bigg)^{-\alpha}.\]
The GammaExponentialKernel is recovered in the limit as $\alpha \to \infty$.
See also: RationalQuadraticKernel
Spectral Mixture Kernels
KernelFunctions.spectral_mixture_kernel — Function
spectral_mixture_kernel(
h::Kernel=SqExponentialKernel(),
αs::AbstractVector{<:Real},
γs::AbstractMatrix{<:Real},
ωs::AbstractMatrix{<:Real},
)
Generalised Spectral Mixture kernel function, where αs are the weights of dimension (A,), γs is the covariance matrix of dimension (D, A), and ωs is the matrix of mean vectors of dimension (D, A). Here, D is the input dimension and A is the number of spectral components. h is the kernel, which defaults to SqExponentialKernel if not specified.
This family of functions is dense in the family of stationary real-valued kernels with respect to pointwise convergence.[1]
\[ κ(x, y) = αs' (h(-(γs' * t)^2) .* cos(π * ωs' * t)), \quad t = x - y\]
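A minimal construction sketch (not from the original docstring; it assumes the positional form without h shown above, with the array shapes following the description):
julia> D, A = 2, 3;
julia> αs, γs, ωs = rand(A), rand(D, A), rand(D, A);
julia> k = spectral_mixture_kernel(αs, γs, ωs);
julia> k(rand(D), rand(D)) isa Real
true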
References:
[1] Generalized Spectral Kernels, by Yves-Laurent Kom Samo and Stephen J. Roberts.
[2] SM: Gaussian Process Kernels for Pattern Discovery and Extrapolation, ICML, 2013, by Andrew Gordon Wilson and Ryan Prescott Adams.
[3] Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes, Andrew Gordon Wilson, PhD Thesis, January 2014. http://www.cs.cmu.edu/~andrewgw/andrewgwthesis.pdf
[4] http://www.cs.cmu.edu/~andrewgw/pattern/
KernelFunctions.spectral_mixture_product_kernel — Function
spectral_mixture_product_kernel(
h::Kernel=SqExponentialKernel(),
αs::AbstractMatrix{<:Real},
γs::AbstractMatrix{<:Real},
ωs::AbstractMatrix{<:Real},
)
Spectral Mixture Product Kernel, where αs are the weights of dimension (D, A), γs is the covariance matrix of dimension (D, A), and ωs is the matrix of mean vectors of dimension (D, A). Here, D is the input dimension and A is the number of spectral components. h is the kernel, which defaults to SqExponentialKernel if not specified.
With enough components A, the SMP kernel can model any product kernel to arbitrary precision, and is flexible even with a small number of components.[1]
\[ κ(x, y) = Πᵢ₌₁ᴷ Σ(αsᵢᵀ .* (h(-(γsᵢᵀ * tᵢ)²) .* cos(ωsᵢᵀ * tᵢ))), tᵢ = xᵢ - yᵢ\]
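The product version takes one row of weights per input dimension; a minimal sketch under the same assumptions as for spectral_mixture_kernel:
julia> D, A = 2, 3;
julia> αs, γs, ωs = rand(D, A), rand(D, A), rand(D, A);
julia> k = spectral_mixture_product_kernel(αs, γs, ωs);
julia> k(rand(D), rand(D)) isa Real
true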
References:
[1] GPatt: Fast Multidimensional Pattern Extrapolation with GPs, arXiv 1310.5288, 2013, by Andrew Gordon Wilson, Elad Gilboa, Arye Nehorai and John P. Cunningham.
Wiener Kernel
KernelFunctions.WienerKernel — Type
WienerKernel(; i::Int=0)
WienerKernel{i}()
The i-times integrated Wiener process kernel function.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the $i$-times integrated Wiener process kernel with $i \in \{-1, 0, 1, 2, 3\}$ is defined[SDH] as
\[k_i(x, x') = \begin{cases} \delta(x, x') & \text{if } i=-1,\\ \min\big(\|x\|_2, \|x'\|_2\big) & \text{if } i=0,\\ a_{i1}^{-1} \min\big(\|x\|_2, \|x'\|_2\big)^{2i + 1} + a_{i2}^{-1} \|x - x'\|_2 r_i\big(\|x\|_2, \|x'\|_2\big) \min\big(\|x\|_2, \|x'\|_2\big)^{i + 1} & \text{otherwise}, \end{cases}\]
where the coefficients $a$ are given by
\[a = \begin{bmatrix} 3 & 2 \\ 20 & 12 \\ 252 & 720 \end{bmatrix}\]
and the functions $r_i$ are defined as
\[\begin{aligned} r_1(t, t') &= 1,\\ r_2(t, t') &= t + t' - \frac{\min(t, t')}{2},\\ r_3(t, t') &= 5 \max(t, t')^2 + 2 tt' + 3 \min(t, t')^2. \end{aligned}\]
The WhiteKernel is recovered for $i = -1$.
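For $i = 0$ the definition reduces to the plain Wiener process covariance, which can be checked directly (a sketch, assuming the keyword constructor above and norm from LinearAlgebra):
julia> using LinearAlgebra
julia> x, y = rand(3), rand(3);
julia> WienerKernel(i=0)(x, y) ≈ min(norm(x), norm(y))
true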
Composite Kernels
The modular design of KernelFunctions uses base kernels as building blocks for more complex kernels. There are a variety of composite kernels implemented, including those which transform the inputs to a wrapped kernel to implement length scales, scale the variance of a kernel, and sum or multiply collections of kernels together.
KernelFunctions.TransformedKernel — Type
TransformedKernel(k::Kernel, t::Transform)
Kernel derived from k for which inputs are transformed via a Transform t.
It is preferred to create kernels with input transformations with transform instead of TransformedKernel directly, since transform allows optimized implementations for specific kernels and transformations.
Definition
For inputs $x, x'$, the transformed kernel $\widetilde{k}$ derived from kernel $k$ by input transformation $t$ is defined as
\[\widetilde{k}(x, x'; k, t) = k\big(t(x), t(x')\big).\]
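The definition can be verified against a plain scale transformation (a sketch, using the transform method documented below; the inverse lengthscale multiplies the inputs):
julia> k = SqExponentialKernel(); x, y = rand(3), rand(3);
julia> transform(k, 2.0)(x, y) ≈ k(2.0 .* x, 2.0 .* y)
true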
KernelFunctions.transform — Method
transform(k::Kernel, t::Transform)
Create a TransformedKernel for kernel k and transform t.
KernelFunctions.transform — Method
transform(k::Kernel, ρ::Real)
Create a TransformedKernel for kernel k and inverse lengthscale ρ.
KernelFunctions.transform — Method
transform(k::Kernel, ρ::AbstractVector)
Create a TransformedKernel for kernel k and inverse lengthscales ρ.
KernelFunctions.ScaledKernel — Type
ScaledKernel(k::Kernel, σ²::Real=1.0)
Scaled kernel derived from k by multiplication with variance σ².
Definition
For inputs $x, x'$, the scaled kernel $\widetilde{k}$ derived from kernel $k$ by multiplication with variance $\sigma^2 > 0$ is defined as
\[\widetilde{k}(x, x'; k, \sigma^2) = \sigma^2 k(x, x').\]
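A direct check of the definition (a sketch, using the constructor documented above):
julia> k = SqExponentialKernel(); x, y = rand(3), rand(3);
julia> ScaledKernel(k, 2.0)(x, y) ≈ 2.0 * k(x, y)
true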
KernelFunctions.KernelSum — Type
KernelSum <: Kernel
Create a sum of kernels. One can also use the operator +.
There are various ways in which you can create a KernelSum:
The simplest way to specify a KernelSum is to use the overloaded + operator. This is equivalent to creating a KernelSum by specifying the kernels as the arguments to the constructor.
julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);
julia> (k = k1 + k2) == KernelSum(k1, k2)
true
julia> kernelmatrix(k1 + k2, X) == kernelmatrix(k1, X) .+ kernelmatrix(k2, X)
true
julia> kernelmatrix(k, X) == kernelmatrix(k1 + k2, X)
true
You can also specify a KernelSum by providing a Tuple or a Vector of the kernels to be summed. We suggest using a Tuple when you have few components and a Vector when dealing with a large number of components.
julia> KernelSum((k1, k2)) == k1 + k2
true
julia> KernelSum([k1, k2]) == KernelSum((k1, k2)) == k1 + k2
true
KernelFunctions.KernelProduct — Type
KernelProduct <: Kernel
Create a product of kernels. One can also use the overloaded operator *.
There are various ways in which you can create a KernelProduct:
The simplest way to specify a KernelProduct is to use the overloaded * operator. This is equivalent to creating a KernelProduct by specifying the kernels as the arguments to the constructor.
julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);
julia> (k = k1 * k2) == KernelProduct(k1, k2)
true
julia> kernelmatrix(k1 * k2, X) == kernelmatrix(k1, X) .* kernelmatrix(k2, X)
true
julia> kernelmatrix(k, X) == kernelmatrix(k1 * k2, X)
true
You can also specify a KernelProduct by providing a Tuple or a Vector of the kernels to be multiplied. We suggest using a Tuple when you have few components and a Vector when dealing with a large number of components.
julia> KernelProduct((k1, k2)) == k1 * k2
true
julia> KernelProduct([k1, k2]) == KernelProduct((k1, k2)) == k1 * k2
true
KernelFunctions.KernelTensorProduct — Type
KernelTensorProduct
Tensor product of kernels.
Definition
For inputs $x = (x_1, \ldots, x_n)$ and $x' = (x'_1, \ldots, x'_n)$, the tensor product of kernels $k_1, \ldots, k_n$ is defined as
\[k(x, x'; k_1, \ldots, k_n) = \Big(\bigotimes_{i=1}^n k_i\Big)(x, x') = \prod_{i=1}^n k_i(x_i, x'_i).\]
Construction
The simplest way to specify a KernelTensorProduct is to use the overloaded tensor operator or its alias ⊗ (can be typed by \otimes<tab>).
julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5, 2);
julia> kernelmatrix(k1 ⊗ k2, RowVecs(X)) == kernelmatrix(k1, X[:, 1]) .* kernelmatrix(k2, X[:, 2])
true
You can also specify a KernelTensorProduct by providing kernels as individual arguments or as an iterable data structure such as a Tuple or a Vector. Using a tuple or individual arguments guarantees that KernelTensorProduct is concretely typed, but might lead to long compilation times if the number of kernels is large.
julia> KernelTensorProduct(k1, k2) == k1 ⊗ k2
true
julia> KernelTensorProduct((k1, k2)) == k1 ⊗ k2
true
julia> KernelTensorProduct([k1, k2]) == k1 ⊗ k2
true
Multi-output Kernels
KernelFunctions.MOKernel — Type
MOKernel
Abstract type for kernels with multiple outputs.
KernelFunctions.IndependentMOKernel — Type
IndependentMOKernel(k::Kernel)
Kernel for multiple independent outputs with kernel k each.
Definition
For inputs $x, x'$ and output dimensions $p_x, p_{x'}$, the kernel $\widetilde{k}$ for independent outputs with kernel $k$ each is defined as
\[\widetilde{k}\big((x, p_x), (x', p_{x'})\big) = \begin{cases} k(x, x') & \text{if } p_x = p_{x'}, \\ 0 & \text{otherwise}. \end{cases}\]
Mathematically, it is equivalent to a matrix-valued kernel defined as
\[\widetilde{K}(x, x') = \mathrm{diag}\big(k(x, x'), \ldots, k(x, x')\big) \in \mathbb{R}^{m \times m},\]
where $m$ is the number of outputs.
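A hedged check of the case distinction, assuming inputs are given as (input, output_index) tuples as in the definition above:
julia> k = IndependentMOKernel(SqExponentialKernel());
julia> x, y = rand(3), rand(3);
julia> k((x, 1), (y, 1)) ≈ SqExponentialKernel()(x, y)
true
julia> k((x, 1), (y, 2)) == 0
true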
KernelFunctions.LatentFactorMOKernel — Type
LatentFactorMOKernel(g, e::MOKernel, A::AbstractMatrix)
Kernel associated with the semiparametric latent factor model.
Definition
For inputs $x, x'$ and output dimensions $p_x, p_{x'}$, the kernel is defined as[STJ]
\[k\big((x, p_x), (x', p_{x'})\big) = \sum^{Q}_{q=1} A_{p_x q} g_q(x, x') A_{p_{x'} q} + e\big((x, p_x), (x', p_{x'})\big),\]
where $g_1, \ldots, g_Q$ are $Q$ kernels, one for each latent process, $e$ is a multi-output kernel for $m$ outputs, and $A$ is a matrix of weights for the kernels of size $m \times Q$.
- [RW] C. E. Rasmussen & C. K. I. Williams (2006). Gaussian Processes for Machine Learning.
- [CW] C. K. I. Williams (1998). Computation with infinite neural networks.
- [DM] D. J. C. MacKay (1998). Introduction to Gaussian Processes.
- [SDH] Schober, Duvenaud & Hennig (2014). Probabilistic ODE Solvers with Runge-Kutta Means.
- [STJ] M. Seeger, Y. Teh & M. I. Jordan (2005). Semiparametric Latent Factor Models.