Help needed with erroneous ExactGP behavior #2667

@anja-sheppard

Hello,

I've been using GPyTorch for some time now, but I'm running into a problem that is absolutely baffling me and I can't figure out how to solve it. I have an ExactGP implementation with a standard RBF kernel. I'm training a GP to learn a terrain elevation distribution, so a 2D input (x, y) and a 1D output (z). I recently started using a new dataset with a resolution of 1.32 m per pixel, and this is when I started seeing a very strange problem--after even just the first iteration of the training loop, the prediction appears "cut off" after about 1/6 of the data. What's happening is that the GP collapses to predicting the mean, because the predictive covariance is so high at those points:

[Image: GP posterior prediction collapsing to the mean after roughly 1/6 of the test points]

This is with a small initial kernel lengthscale (0.1), just to help demonstrate the issue. I'm just not sure why the covariance matrix behaves this way. I've quadruple-checked all of my input training data--it's definitely not an issue of the prediction/test points lying outside the training points. The training x/y points are just the test points downsampled by 2x. Could this be some kind of numerical stability issue? I've tried varying the kernel parameter initialization, but to no avail.
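One quick way to probe the numerical/scale hypothesis is to build the RBF Gram matrix by hand for gridded inputs. A minimal NumPy sketch follows (the 1.32 m spacing and 0.1 lengthscale are taken from the post; the grid extent is assumed). With a lengthscale far below the grid spacing, all cross-point correlations vanish, which is one way a posterior can revert to the prior mean away from training inputs:

```python
import numpy as np


def rbf_gram(X, lengthscale):
    """RBF (squared-exponential) Gram matrix for rows of X."""
    # Squared Euclidean distances between all pairs of rows
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)


# Hypothetical 2D grid standing in for the terrain inputs:
# 11 x 11 points at the dataset's 1.32 m spacing
xs = np.arange(11) * 1.32
X = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

K = rbf_gram(X, lengthscale=0.1)

# With lengthscale 0.1 and spacing 1.32, the nearest-neighbour
# correlation is exp(-0.5 * (1.32 / 0.1)**2), i.e. ~1e-38: the Gram
# matrix is essentially the identity, so training points carry no
# information about any other location.
off_diag_max = (K - np.eye(len(K))).max()
print(off_diag_max)
```

If the scale mismatch turns out to be the culprit, normalizing the inputs to roughly unit scale (or initializing the lengthscale near the grid spacing) is worth trying; running the model in double precision (`model.double()`) is another common check for conditioning problems, though neither is guaranteed to be the fix here.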

Any help is greatly appreciated.
