Commit 40b3ea1

docs: document the floating-point precision of the model (#4240)
## Summary by CodeRabbit

- **New Features**
  - Added a new section on `precision` in the documentation, enhancing navigation.
  - Introduced detailed guidelines on floating-point precision settings for the model.
  - Included structured instructions for creating models with the PyTorch backend.
- **Documentation**
  - Expanded troubleshooting documentation related to model precision issues, including data accuracy and training recommendations.
  - Enhanced guidelines for integrating new components into user configurations and ensuring model integrity across different backends.

Signed-off-by: Jinzhe Zeng <[email protected]>
1 parent 8f546cf commit 40b3ea1

File tree

4 files changed: +26 -0 lines changed


doc/development/create-a-model-pt.md

Lines changed: 9 additions & 0 deletions
@@ -137,6 +137,15 @@ class SomeAtomicModel(BaseAtomicModel, torch.nn.Module):
        pass
```

### Floating-point precision

When creating a new component, the floating-point precision should obey the [Floating-point precision of the model](../model/precision.md) section.
In implementation, the component should

- store parameters in the component precision, except those for output normalization;
- store output normalization parameters in {py:data}`deepmd.pt.utils.env.GLOBAL_PT_FLOAT_PRECISION`;
- before input normalization, cast the input tensor to the component precision; before output normalization, cast the output tensor to {py:data}`deepmd.pt.utils.env.GLOBAL_PT_FLOAT_PRECISION`.
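A minimal sketch of these casting rules (illustrative only: `SketchComponent` and its normalization buffers are hypothetical, and `GLOBAL_PT_FLOAT_PRECISION` is stubbed as `torch.float64` instead of being imported from `deepmd.pt.utils.env`):

```python
import torch

# Stub for deepmd.pt.utils.env.GLOBAL_PT_FLOAT_PRECISION (assumed float64).
GLOBAL_PT_FLOAT_PRECISION = torch.float64


class SketchComponent(torch.nn.Module):
    """Hypothetical component following the precision rules above."""

    def __init__(self, precision: torch.dtype = torch.float32) -> None:
        super().__init__()
        self.precision = precision
        # Network parameters: stored in the component precision.
        self.linear = torch.nn.Linear(3, 3).to(precision)
        # Input-normalization parameters: also component precision.
        self.register_buffer("in_mean", torch.zeros(3, dtype=precision))
        self.register_buffer("in_std", torch.ones(3, dtype=precision))
        # Output-normalization parameters: global precision.
        self.register_buffer("out_mean", torch.zeros(3, dtype=GLOBAL_PT_FLOAT_PRECISION))
        self.register_buffer("out_std", torch.ones(3, dtype=GLOBAL_PT_FLOAT_PRECISION))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cast the input to the component precision before input normalization.
        x = (x.to(self.precision) - self.in_mean) / self.in_std
        y = self.linear(x)
        # Cast back to the global precision before output normalization.
        y = y.to(GLOBAL_PT_FLOAT_PRECISION)
        return y * self.out_std + self.out_mean
```

The component then consumes and produces tensors in the global precision while doing its internal computation in its own (possibly lower) precision.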
## Register new arguments

To let others use your new component in their input files, you need to create a new method that returns some `Argument` objects for your new component, and then register the new arguments. For example, the code below

doc/model/index.rst

Lines changed: 1 addition & 0 deletions
@@ -24,3 +24,4 @@ Model
   linear
   pairtab
   change-bias
   precision

doc/model/precision.md

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
# Floating-point precision of the model

The following options control the precision of the model:

- The environment variable {envvar}`DP_INTERFACE_PREC` controls the interface precision of the model, the descriptor, and the fitting; the precision of the environmental matrix; and the precision of the normalized parameters for the environmental matrix and the fitting output.
- The training parameter {ref}`precision <model[standard]/fitting_net[ener]/precision>` in the descriptor, the fitting, and the type embedding controls the precision of the neural networks in those components, as well as the subsequent operations on the network outputs.
- The reduced output (e.g. the total energy) is always `float64`.

Usually, one of the following two combinations of options is recommended:

- Setting {envvar}`DP_INTERFACE_PREC` to `high` (default) and all {ref}`precision <model[standard]/fitting_net[ener]/precision>` options to `float64` (default).
- Setting {envvar}`DP_INTERFACE_PREC` to `high` (default) and all {ref}`precision <model[standard]/fitting_net[ener]/precision>` options to `float32`.

The Python and C++ inference interfaces accept both `float64` and `float32` input and output arguments, whatever the floating-point precision of the model interface is.
Usually, MD programs (such as LAMMPS) use only `float64` in their interfaces.
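As an illustrative sketch of the second (FP32 network) combination, the `precision` keys are set in the training input; this fragment is hypothetical and omits all other required keys, and the `se_e2_a`/`ener` types are only examples:

```json
{
  "model": {
    "descriptor": {
      "type": "se_e2_a",
      "precision": "float32"
    },
    "fitting_net": {
      "type": "ener",
      "precision": "float32"
    }
  }
}
```

With {envvar}`DP_INTERFACE_PREC` left at its default (`high`), the interfaces and normalization stay in `float64` while the networks run in `float32`.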

doc/troubleshooting/precision.md

Lines changed: 1 addition & 0 deletions
@@ -60,6 +60,7 @@ See [FAQ: How to tune Fitting/embedding-net size](./howtoset_netsize.md) for det

In some cases, one may want to use the FP32 precision to make the model faster.
For some applications, FP32 is enough and thus is recommended, but one should still be aware that the precision of FP32 is not as high as that of FP64.
See the [Floating-point precision of the model](../model/precision.md) section for how to set the precision.

## Training