1D linear operator with a zero parameter

We have shown that parameter estimation works well with positive parameters. In more complex problems, one might include extra terms in the candidate operator whose parameters turn out to be 0. To check whether the framework handles such a case, we construct the following example:

:raw-latex:`\begin{align*} \mathcal{L}_x^\phi &:= \phi_1 \cdot + \phi_2\frac{d}{dx}\cdot + \phi_3\frac{d^2}{dx^2}\cdot \\ u(x) &= \sin(x) \\ f(x) &= \mathcal{L}_x^\phi u(x)\\ &= \phi_1 \sin(x) + \phi_2 \cos(x) - \phi_3 \sin(x) \\ &= (\phi_1 - \phi_3)\sin(x) + \phi_2 \cos(x) \\ x &\in [0, 1] \end{align*}`

Simulate data

We assume \(\phi_1 = 2\) and \(\phi_3 = 5\), and treat \(\cos(x)\) as the extra term with \(\phi_2 = 0\). The function \(f\) is then given by:

:raw-latex:`\begin{align*} f(x) &= -3\sin(x) \end{align*}`
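The simulated training data can be sketched as follows. This is a minimal illustration, not the notebook's actual cell; the variable names (``x_u``, ``y_f``) and the choice of 20 uniformly sampled points are assumptions.

```python
import numpy as np

np.random.seed(0)                 # reproducible sample
n = 20
x_u = np.random.rand(n)           # observation locations for u on [0, 1]
x_f = np.random.rand(n)           # observation locations for f on [0, 1]

y_u = np.sin(x_u)                 # u(x) = sin(x)
y_f = -3.0 * np.sin(x_f)          # f(x) = (phi_1 - phi_3) sin(x) + phi_2 cos(x) = -3 sin(x)
```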

Evaluate kernels

Corresponding kernels are defined as follows:

\(k_{uu}\left(x_i, x_j; \theta\right) = \theta \exp\left(-\frac{1}{2l}\left(x_i - x_j\right)^2\right)\)

\(\begin{array}{l} k_{ff}\left(x_i, x_j; \theta, \phi_1, \phi_2, \phi_3\right) \\ = \mathcal{L}_{x_i}^{\phi} \mathcal{L}_{x_j}^{\phi} k_{uu}\left(x_i, x_j; \theta\right) \\ = \mathcal{L}_{x_i}^{\phi} \left( \phi_1 k_{uu} + \phi_2 \frac{\partial}{\partial x_j} k_{uu} + \phi_3 \frac{\partial^2}{\partial x_j^2} k_{uu} \right) \\ = \left( \phi_1 + \phi_2 \frac{\partial}{\partial x_i} + \phi_3 \frac{\partial^2}{\partial x_i^2} \right) \left( \phi_1 k_{uu} + \phi_2 \frac{\partial}{\partial x_j} k_{uu} + \phi_3 \frac{\partial^2}{\partial x_j^2} k_{uu} \right) \end{array}\)

\(\begin{array}{l} k_{fu}\left(x_i, x_j; \theta, \phi_1, \phi_2, \phi_3\right) \\ = \mathcal{L}_{x_i}^{\phi} k_{uu}\left(x_i, x_j; \theta\right) \\ = \phi_1 k_{uu} + \phi_2 \frac{\partial}{\partial x_i} k_{uu} + \phi_3 \frac{\partial^2}{\partial x_i^2} k_{uu} \end{array}\)

\(\begin{array}{l} k_{uf}\left(x_i, x_j; \theta, \phi_1, \phi_2, \phi_3\right) \\ = \mathcal{L}_{x_j}^{\phi} k_{uu}\left(x_i, x_j; \theta\right) \end{array}\)

These kernels hold for the general 1D linear operator with three parameters, not just this particular example.
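The derived kernels can be built symbolically rather than by hand. The sketch below, assuming SymPy and the squared-exponential \(k_{uu}\) above, applies the operator in \(x_j\) for \(k_{uf}\), in \(x_i\) for \(k_{fu}\), and in both for \(k_{ff}\); the variable names are illustrative.

```python
import sympy as sp

x_i, x_j = sp.symbols('x_i x_j')
theta, l = sp.symbols('theta l', positive=True)
phi1, phi2, phi3 = sp.symbols('phi1 phi2 phi3')

# Squared-exponential kernel k_uu(x_i, x_j; theta)
kuu = theta * sp.exp(-(x_i - x_j)**2 / (2 * l))

# k_uf: operator applied in x_j; k_fu: in x_i; k_ff: in both
kuf = phi1 * kuu + phi2 * sp.diff(kuu, x_j) + phi3 * sp.diff(kuu, x_j, 2)
kfu = phi1 * kuu + phi2 * sp.diff(kuu, x_i) + phi3 * sp.diff(kuu, x_i, 2)
kff = phi1 * kuf + phi2 * sp.diff(kuf, x_i) + phi3 * sp.diff(kuf, x_i, 2)
```

As a sanity check, \(k_{fu}(x_i, x_j) = k_{uf}(x_j, x_i)\) and \(k_{ff}\) is symmetric in its arguments, which the symbolic expressions confirm.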

Optimize hyperparameters

In [16]:
phi # [phi1 - phi3, phi2]
Out[16]:
[-3.0110910877279604, 0.006341664007503534]
=====================  =========
Parameter              Value
=====================  =========
\(\phi_1 - \phi_3\)    -3.0001
\(\phi_2\)             0.18e-05
=====================  =========
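The hyperparameter optimization can be sketched end to end as minimizing the negative log marginal likelihood of the joint Gaussian process over \((u, f)\). This is a minimal reconstruction, not the notebook's actual code: the kernels follow the derivation above, while the data generation, log-parameterization of \(\theta\) and \(l\), jitter value, starting point, and Nelder-Mead settings are all assumptions.

```python
import numpy as np
import sympy as sp
from scipy.optimize import minimize

# Symbolic kernels, mirroring the derivation above
x_i, x_j, th, el, p1, p2, p3 = sp.symbols('x_i x_j theta l phi1 phi2 phi3')
kuu_s = th * sp.exp(-(x_i - x_j)**2 / (2 * el))
Lj = lambda k: p1 * k + p2 * sp.diff(k, x_j) + p3 * sp.diff(k, x_j, 2)
Li = lambda k: p1 * k + p2 * sp.diff(k, x_i) + p3 * sp.diff(k, x_i, 2)
args = (x_i, x_j, th, el, p1, p2, p3)
kuu_f = sp.lambdify(args, kuu_s, 'numpy')
kuf_f = sp.lambdify(args, Lj(kuu_s), 'numpy')
kfu_f = sp.lambdify(args, Li(kuu_s), 'numpy')
kff_f = sp.lambdify(args, Li(Lj(kuu_s)), 'numpy')

# Simulated training data (illustrative)
np.random.seed(0)
x_u = np.random.rand(20); y_u = np.sin(x_u)
x_f = np.random.rand(20); y_f = -3.0 * np.sin(x_f)

def nll(params):
    """Negative log marginal likelihood of the joint GP over (u, f)."""
    theta, l = np.exp(params[0]), np.exp(params[1])   # keep theta, l positive
    p = (theta, l, params[2], params[3], params[4])
    y = np.concatenate([y_u, y_f])
    K = np.block([
        [kuu_f(x_u[:, None], x_u[None, :], *p), kuf_f(x_u[:, None], x_f[None, :], *p)],
        [kfu_f(x_f[:, None], x_u[None, :], *p), kff_f(x_f[:, None], x_f[None, :], *p)],
    ]) + 1e-7 * np.eye(len(y))                        # jitter for numerical stability
    try:
        L = np.linalg.cholesky(K)
    except np.linalg.LinAlgError:
        return 1e10                                   # penalize non-PSD proposals
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

res = minimize(nll, x0=[0.0, 0.0, 1.0, 0.1, -1.0], method='Nelder-Mead',
               options={'maxiter': 2000})
phi1, phi2, phi3 = res.x[2], res.x[3], res.x[4]
print(phi1 - phi3, phi2)   # compare with the estimates tabulated above
```

Only the combinations \(\phi_1 - \phi_3\) and \(\phi_2\) are identifiable here, since \(f\) depends on the parameters only through them; this is why the notebook reports those combinations rather than the individual \(\phi_i\).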

The estimated parameters are very close to the true values. The linear operator covers most 1D linear PDE problems, which makes the framework quite powerful: when dealing with a specific problem, we can add extra terms to the assumed form of the operator, and parameter estimation with Gaussian processes then determines which of these terms are redundant.