Summary
The tabulation op used by the se_t descriptor (TabulateFusionSeTOp) does not appear to support second-order derivative calculations (required for virial/stress training), even though the underlying C++/CUDA kernel (deepmd::tabulate_fusion_se_t_grad_grad_gpu) and its wrapper function (TabulateFusionSeTGradGradForward) are both present in the code.
DeePMD-kit Version
devel latest
Backend and its version
pytorch
Python Version, CUDA Version, GCC Version, LAMMPS Version, etc
No response
Details
- Working Implementation (se_a): The se_a descriptor has a complete autograd chain. Its first-derivative calculation is wrapped in TabulateFusionSeAGradOp, whose backward method correctly calls TabulateFusionSeAGradGradForward, making the operation fully second-order differentiable.
- Incomplete Implementation (se_t): For the se_t descriptor, TabulateFusionSeTOp's backward method calls TabulateFusionSeTGradForward directly. There is no corresponding TabulateFusionSeTGradOp class to wrap this first-derivative calculation.
- The Consequence: Because the autograd wrapper (GradOp) is missing, the TabulateFusionSeTGradGradForward function is defined but never actually called by the PyTorch autograd engine. This effectively makes it dead code in the context of PyTorch's automatic differentiation and prevents training models with virial/stress labels when using the se_t descriptor (see the sketch after this list for an illustration of the mechanism).
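Below is a minimal, self-contained PyTorch sketch of the mechanism described above. It is not DeePMD-kit code: the toy names (OpLikeSeA, OpLikeSeT, GradOp, the `_*_kernel` helpers) and the placeholder math are hypothetical stand-ins for the real tabulation kernels and their wrappers, whose actual signatures differ. It only illustrates why wrapping the first-derivative call in its own autograd Function (the se_a pattern) makes the grad-grad kernel reachable, while calling the grad kernel directly (the current se_t pattern) leaves the second derivative unavailable.

```python
import torch


def _forward_kernel(x):
    # Stand-in for the tabulation forward kernel: y = x**3, computed outside autograd.
    with torch.no_grad():
        return x**3


def _grad_kernel(x, dy):
    # Stand-in for the first-derivative kernel (the "GradForward" call): 3*x**2 * dy.
    with torch.no_grad():
        return 3 * x**2 * dy


def _grad_grad_kernel(x, dy, ddx):
    # Stand-in for the second-derivative kernel (the "GradGradForward" call).
    with torch.no_grad():
        return 6 * x * dy * ddx


class GradOp(torch.autograd.Function):
    """Wrapper around the first-derivative kernel (the se_a-style GradOp)."""

    @staticmethod
    def forward(ctx, x, dy):
        ctx.save_for_backward(x, dy)
        return _grad_kernel(x, dy)

    @staticmethod
    def backward(ctx, ddx):
        x, dy = ctx.saved_tensors
        # This is the hook through which the grad-grad kernel becomes reachable.
        return _grad_grad_kernel(x, dy, ddx), None


class OpLikeSeA(torch.autograd.Function):
    """backward() goes through GradOp, so second-order autograd works."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return _forward_kernel(x)

    @staticmethod
    def backward(ctx, dy):
        (x,) = ctx.saved_tensors
        return GradOp.apply(x, dy)


class OpLikeSeT(torch.autograd.Function):
    """backward() calls the grad kernel directly; the grad-grad kernel is never reached."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return _forward_kernel(x)

    @staticmethod
    def backward(ctx, dy):
        (x,) = ctx.saved_tensors
        return _grad_kernel(x, dy)


x = torch.tensor([1.5], requires_grad=True)

# se_a-like path: first and second derivatives both work.
(g,) = torch.autograd.grad(OpLikeSeA.apply(x).sum(), x, create_graph=True)
(gg,) = torch.autograd.grad(g.sum(), x)
print("se_a-like d2y/dx2:", gg)  # tensor([9.]) == 6 * 1.5

# se_t-like path: the first derivative is fine, but it carries no graph,
# so asking for the second derivative fails.
(g,) = torch.autograd.grad(OpLikeSeT.apply(x).sum(), x, create_graph=True)
try:
    torch.autograd.grad(g.sum(), x)
except RuntimeError as e:
    print("se_t-like second derivative fails:", e)
```

Running this prints the correct second derivative for the se_a-like path and a RuntimeError ("does not require grad") for the se_t-like path, which is analogous to the failure hit when training with virial/stress labels and the se_t descriptor.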