Whenever we add new inputs or outputs to our models, there is a high chance that we break inference because some components of our inference pipeline (namely the metadata patches and the GRIB templates) are hardcoded with a limited set of variables. This forces us to merge hotfixes such as #128, which is not ideal.
We should:
- change the GRIB templating approach so that we don't need to specify the list of params
- derive the variables-metadata patches from the anemoi datasets used for training, since those datasets contain every variable we could possibly need in inference (the current patches were derived from a few checkpoints)
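The second point could be sketched roughly as below. This is only an illustration under invented assumptions: the `build_variables_patch` helper, the flat `{"variables": [...]}` dataset description, and the patch shape are all hypothetical, not the real anemoi-datasets API or our actual patch format.

```python
# Hypothetical sketch: derive a variables-metadata patch from the
# training dataset's variable list instead of from a checkpoint, so the
# patch covers every variable inference could ever see.

def build_variables_patch(dataset_metadata):
    """Create one patch entry per variable declared by the dataset.

    `dataset_metadata` is a toy stand-in for the metadata a real
    anemoi dataset would expose; the nested patch shape is invented.
    """
    return {
        name: {"mars": {"param": name}}  # hypothetical patch entry
        for name in dataset_metadata.get("variables", [])
    }

# Toy dataset description (hypothetical fields):
toy_dataset = {"variables": ["2t", "10u", "10v", "msl"]}
patch = build_variables_patch(toy_dataset)
print(sorted(patch))  # → ['10u', '10v', '2t', 'msl']
```

The key design point is that the patch is generated from the dataset, so adding a new model input or output never requires touching the patch by hand.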