1D linear operator with a zero parameter¶
We have shown that parameter estimation works well with positive parameters. In some complex problems, however, the assumed form of the transformation may include extra terms whose parameters turn out to be 0. To check whether the framework handles such a case, we construct the following example:
Simulate data¶
We assume \(\phi_1 = 2\), \(\phi_3 = 5\), and \(\phi_2 = 0\), so that \(\phi_2 \frac{d}{dx}u(x)\), which contributes the \(\cos(x)\) term for \(u(x) = \sin(x)\), is the redundant extra term. The function \(f\) is then given by:

\(f(x) = \mathcal{L}_x^\phi u(x) = \phi_1 u(x) + \phi_2 \frac{d}{dx} u(x) + \phi_3 \frac{d^2}{dx^2} u(x) = (\phi_1 - \phi_3)\sin(x) + \phi_2 \cos(x) = -3\sin(x)\)

Note that since \(\frac{d^2}{dx^2}\sin(x) = -\sin(x)\), only the difference \(\phi_1 - \phi_3\) is identifiable from this data, not \(\phi_1\) and \(\phi_3\) individually.
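As a minimal sketch (variable names are assumptions, not the notebook's actual code), the training data for this setup could be generated as follows:

```python
import numpy as np

# Simulate noiseless observations, assuming u(x) = sin(x):
# f(x) = phi1*u + phi2*u' + phi3*u'' = 2*sin(x) + 0*cos(x) - 5*sin(x) = -3*sin(x)
np.random.seed(0)
x = np.random.rand(20)      # random training locations in [0, 1]
y_u = np.sin(x)             # observations of u
y_f = -3.0 * np.sin(x)      # observations of f = L_x^phi u
```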
Evaluate kernels¶
Corresponding kernels are defined as follows:
\(k_{uu}(x_i, x_j; \theta) = \theta \exp\left(-\frac{1}{2l}(x_i - x_j)^2\right)\)
\(\begin{array}{l} k_{ff}(x_i, x_j; \theta, \phi_1, \phi_2, \phi_3) \\ = \mathcal{L}_{x_i}^\phi \mathcal{L}_{x_j}^\phi k_{uu}(x_i, x_j; \theta) \\ = \mathcal{L}_{x_i}^\phi \left( \phi_1 k_{uu} + \phi_2 \frac{\partial}{\partial x_j} k_{uu} + \phi_3 \frac{\partial^2}{\partial x_j^2} k_{uu} \right) \\ = \phi_1^2 k_{uu} + \phi_1 \phi_2 \left( \frac{\partial}{\partial x_i} + \frac{\partial}{\partial x_j} \right) k_{uu} + \phi_2^2 \frac{\partial^2}{\partial x_i \partial x_j} k_{uu} + \phi_1 \phi_3 \left( \frac{\partial^2}{\partial x_i^2} + \frac{\partial^2}{\partial x_j^2} \right) k_{uu} + \phi_2 \phi_3 \left( \frac{\partial^3}{\partial x_i^2 \partial x_j} + \frac{\partial^3}{\partial x_i \partial x_j^2} \right) k_{uu} + \phi_3^2 \frac{\partial^4}{\partial x_i^2 \partial x_j^2} k_{uu} \end{array}\)
\(\begin{array}{l} k_{fu}(x_i, x_j; \theta, \phi_1, \phi_2, \phi_3) \\ = \mathcal{L}_{x_i}^\phi k_{uu}(x_i, x_j; \theta) \\ = \phi_1 k_{uu} + \phi_2 \frac{\partial}{\partial x_i} k_{uu} + \phi_3 \frac{\partial^2}{\partial x_i^2} k_{uu} \end{array}\)
\(\begin{array}{l} k_{uf}(x_i, x_j; \theta, \phi_1, \phi_2, \phi_3) \\ = \mathcal{L}_{x_j}^\phi k_{uu}(x_i, x_j; \theta) \\ = \phi_1 k_{uu} + \phi_2 \frac{\partial}{\partial x_j} k_{uu} + \phi_3 \frac{\partial^2}{\partial x_j^2} k_{uu} \end{array}\)
These kernels apply to any 1D linear system of this form with three parameters.
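The derivatives above need not be computed by hand. A short SymPy sketch (symbol names are assumptions, not the notebook's actual code) derives all three transformed kernels from \(k_{uu}\):

```python
import sympy as sp

x_i, x_j, theta, l = sp.symbols('x_i x_j theta l', positive=True)
phi1, phi2, phi3 = sp.symbols('phi1 phi2 phi3')

# Squared-exponential kernel k_uu
k_uu = theta * sp.exp(-(x_i - x_j)**2 / (2 * l))

# L_x^phi = phi1 + phi2 d/dx + phi3 d^2/dx^2, applied in x_i or x_j
Li = lambda k: phi1 * k + phi2 * sp.diff(k, x_i) + phi3 * sp.diff(k, x_i, 2)
Lj = lambda k: phi1 * k + phi2 * sp.diff(k, x_j) + phi3 * sp.diff(k, x_j, 2)

k_fu = Li(k_uu)        # operator applied to the first argument
k_uf = Lj(k_uu)        # operator applied to the second argument
k_ff = Li(Lj(k_uu))    # operator applied to both arguments
```

Swapping \(x_i\) and \(x_j\) maps \(k_{uf}\) to \(k_{fu}\) and leaves \(k_{ff}\) unchanged, which gives a quick consistency check on the derivation.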
Optimize hyperparameters¶
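The hyperparameters \((\theta, l)\) and parameters \((\phi_1, \phi_2, \phi_3)\) are found by minimizing the negative log marginal likelihood of the joint observations \([y_u; y_f]\). A self-contained sketch of how this might look (names, initial values, and optimizer settings are assumptions, not the notebook's actual code):

```python
import numpy as np
import sympy as sp
from scipy.optimize import minimize

# Build the four kernels symbolically, then compile them for NumPy.
xi, xj, th, ln, p1, p2, p3 = sp.symbols('x_i x_j theta l phi1 phi2 phi3')
k_uu = th * sp.exp(-(xi - xj)**2 / (2 * ln))
Li = lambda k: p1 * k + p2 * sp.diff(k, xi) + p3 * sp.diff(k, xi, 2)
Lj = lambda k: p1 * k + p2 * sp.diff(k, xj) + p3 * sp.diff(k, xj, 2)
args = (xi, xj, th, ln, p1, p2, p3)
f_uu, f_uf, f_fu, f_ff = (sp.lambdify(args, k, 'numpy')
                          for k in (k_uu, Lj(k_uu), Li(k_uu), Li(Lj(k_uu))))

# Noiseless training data, assuming u(x) = sin(x), so f(x) = -3 sin(x).
np.random.seed(0)
x = np.random.rand(20)
y = np.concatenate([np.sin(x), -3.0 * np.sin(x)])

def nlml(params, jitter=1e-7):
    """Negative log marginal likelihood of [y_u; y_f] under the joint GP."""
    theta, length = params[0], params[1]
    if theta <= 0 or length <= 0:
        return np.inf
    XI, XJ = np.meshgrid(x, x, indexing='ij')
    K = np.block([
        [f_uu(XI, XJ, *params), f_uf(XI, XJ, *params)],
        [f_fu(XI, XJ, *params), f_ff(XI, XJ, *params)],
    ]) + jitter * np.eye(2 * len(x))
    try:
        L = np.linalg.cholesky(K)
    except np.linalg.LinAlgError:
        return np.inf
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

res = minimize(nlml, x0=[1.0, 1.0, 0.0, 0.0, 1.0], method='Nelder-Mead',
               options={'maxiter': 5000})
theta, length, phi1, phi2, phi3 = res.x
print(phi1 - phi3, phi2)  # ideally close to -3 and 0 if the optimizer converges
```

Since only \(\phi_1 - \phi_3\) is identifiable here, it is the difference, not the individual values, that should be compared against the truth.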
In [16]:
phi # [phi1 - phi3, phi2]
Out[16]:
[-3.0110910877279604, 0.006341664007503534]
| Parameter | Value |
|---|---|
| \(\phi_1 - \phi_3\) | -3.0001 |
| \(\phi_2\) | 1.8e-06 |
The estimated parameters are very close to the true values, and \(\phi_2\) is correctly driven to nearly zero. The linear operator covers most 1D linear PDE problems, which makes this approach quite powerful: when dealing with a specific problem, we can add extra terms to the assumed form of the transformation, and the parameter estimation with Gaussian processes determines which of these terms are redundant.