
PyTorch Backend: Missing autograd wrapper for se_t descriptor's second-order derivatives #4994

@OutisLi


Summary

The se_t descriptor (TabulateFusionSeTOp) does not seem to support second-order derivative calculations (required for virial/stress), even though the underlying C++/CUDA kernel (deepmd::tabulate_fusion_se_t_grad_grad_gpu) and its wrapper function (TabulateFusionSeTGradGradForward) are present in the code.

DeePMD-kit Version

devel latest

Backend and its version

pytorch

Python Version, CUDA Version, GCC Version, LAMMPS Version, etc

No response

Details

  1. Working Implementation (se_a): The se_a descriptor has a complete autograd chain. Its first-derivative calculation is wrapped in TabulateFusionSeAGradOp, whose backward method correctly calls TabulateFusionSeAGradGradForward. This makes the operation fully second-order differentiable (see the sketch after this list for the general pattern).

  2. Incomplete Implementation (se_t): For the se_t descriptor, the TabulateFusionSeTOp's backward method calls TabulateFusionSeTGradForward directly. There is no corresponding TabulateFusionSeTGradOp class to wrap this first-derivative calculation.

  3. The Consequence: Because the autograd wrapper (GradOp) is missing, the TabulateFusionSeTGradGradForward function is defined but is never actually called by the PyTorch autograd engine. This effectively makes it dead code in the context of PyTorch's automatic differentiation and prevents training models with virial/stress labels when using the se_t descriptor.
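For reference, here is a minimal Python sketch of the double-backward wrapper pattern the se_a path follows and the se_t path lacks: the top-level op's backward calls a dedicated GradOp, and that GradOp's own backward is what invokes the grad-grad kernel. The functions `fusion_forward`, `fusion_grad_forward`, and `fusion_grad_grad_forward` below are hypothetical placeholders standing in for the real fused kernels, not DeePMD-kit API, and the real DeePMD-kit ops live in the C++ extension rather than Python.

```python
import torch


def fusion_forward(x):
    # Placeholder for the fused tabulation kernel (here: y = sin(x)).
    return x.sin()


def fusion_grad_forward(x, dy):
    # Placeholder for the first-derivative kernel: dy * d(sin x)/dx.
    return dy * x.cos()


def fusion_grad_grad_forward(x, dy, ddx):
    # Placeholder for the second-derivative (grad-grad) kernel:
    # gradients of (dy * cos x) w.r.t. x and dy, contracted with ddx.
    return ddx * (-dy * x.sin()), ddx * x.cos()


class FusionGradOp(torch.autograd.Function):
    """Wraps the first-derivative kernel so autograd can differentiate it again."""

    @staticmethod
    def forward(ctx, x, dy):
        ctx.save_for_backward(x, dy)
        return fusion_grad_forward(x, dy)

    @staticmethod
    def backward(ctx, ddx):
        x, dy = ctx.saved_tensors
        grad_x, grad_dy = fusion_grad_grad_forward(x, dy, ddx)
        return grad_x, grad_dy


class FusionOp(torch.autograd.Function):
    """Top-level op; its backward goes through FusionGradOp, not the raw kernel."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return fusion_forward(x)

    @staticmethod
    def backward(ctx, dy):
        (x,) = ctx.saved_tensors
        # Calling the wrapped GradOp (instead of the raw grad kernel) is what
        # makes the whole chain second-order differentiable.
        return FusionGradOp.apply(x, dy)


if __name__ == "__main__":
    x = torch.randn(5, requires_grad=True)
    y = FusionOp.apply(x)
    (g,) = torch.autograd.grad(y.sum(), x, create_graph=True)
    # The second derivative (needed e.g. for virial/stress) only works because
    # FusionGradOp has its own backward.
    (gg,) = torch.autograd.grad(g.sum(), x)
    print(gg)  # equals -sin(x), the true second derivative
```

With this structure, calling torch.autograd.grad with create_graph=True on the first derivative reaches FusionGradOp.backward; in the current se_t implementation there is no such GradOp, so the analogous call path never reaches TabulateFusionSeTGradGradForward.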
