pd: support dpa3 with paddle backend #4701
This PR is an early experimental preview version of DPA3. Significant changes may occur in subsequent updates. Please use with caution.
Squashed commit history (repeated "[pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci" commits and auto-generated CodeRabbit release notes collapsed):

- change property.npy to any name
- Init branch
- change `|` to `Union`
- change `sub_var_name` default to `[]`
- solve pre-commit; solve GitHub code scanning
- fix UT; delete useless file; solve some UT
- solve dptest UT, dpatomicmodel UT, code scanning
- delete param and
- solve UT failures caused by task_dim and property_name
- Fix UT (several commits)
- Fix permutation error
- Add property bias UT
- recover rcond doc; recover blank
- change code according to coderabbitai
- change apply_bias doc
- update the version compatibility
- feat (tf/pt): add atomic weights to tensor loss (deepmodeling#4466): interfaces are of particular interest in many studies, but training configurations that represent an interface usually also include large parts of the bulk material, so the final model favors bulk information while interfacial information is learned less well. Simply raising the proportion of interfaces in the configurations is difficult, since the electronic structure of the interface may only be reasonable with a certain thickness of bulk material. This commit adds the keyword `enable_atomic_weight` to the loss function of the tensor model, allowing higher weights on atomic quantities in regions of interest so the model is "more focused" there; in principle it generalizes to any atomic quantity, e.g. atomic forces.
- delete sub_var_name; recover to property key
- Fix conflict; Fix UT
- Add document of property fitting; Delete checkpoint
- Add get_property_name to DeepEvalBackend
- pd: fix learning rate setting when resume (deepmodeling#4480): when resuming training there is no need to add `self.start_step` to the step count, because Paddle uses `lr_sche.last_epoch` as the input for `step`, which already records the `start_step` steps; learning rates are correct after the fix.
- docs: update deepmd-gnn URL (deepmodeling#4482): updated the URL for the DeePMD-GNN plugin to reflect the new repository location.
- docs: update DPA-2 citation (deepmodeling#4483): replaced arXiv reference links with updated DOI links and added the 2024 article entry.
- docs: fix a minor typo in the title of `install-from-c-library.md` (deepmodeling#4484)
- fix: print dlerror if dlopen fails (deepmodeling#4485) (xref: deepmodeling/deepmd-gnn#44)
- change doc to py; add out_bias/out_std doc
- change bias method to compute_stats_do_not_distinguish_types
- change var_name to property_name
- change logic of extensive bias
- add doc for newly added parameter; change doc for compute_stats_do_not_distinguish_types
- try to fix dptest; change all property to property_name
- Fix UT; delete key 'property' completely; fix dptest UT
- pd: fix OOM error (deepmodeling#4493): Paddle raises `MemoryError` rather than the `RuntimeError` used in PyTorch; with this fix DPA-1 and DPA-2 can be tested on a 16 GB V100.
- pd: add missing `dp.eval()` in pd backend (deepmodeling#4488): switch to eval mode when evaluating the model; otherwise `self.training` stays `True`, a backward graph is created, and OOM follows.
- [pre-commit.ci] pre-commit autoupdate (deepmodeling#4497): github.com/astral-sh/ruff-pre-commit v0.8.3 → v0.8.4
- Delete attribute; solve comment; solve error
- delete property_name in serialize

Signed-off-by: Jinzhe Zeng <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Chenqqian Zhang <[email protected]>
Co-authored-by: Jia-Xin Zhu <[email protected]>
Co-authored-by: HydrogenSulfate <[email protected]>
Co-authored-by: Jinzhe Zeng <[email protected]>

…fate/deepmd-kit into add_paddle_cinn_dpa2
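The atomic-weight idea from deepmodeling#4466 can be illustrated with a minimal sketch. This is not the actual `TensorLoss` implementation; the function name and the plain-Python representation are illustrative only:

```python
def weighted_atomic_loss(pred, label, atomic_weight=None):
    """Mean per-atom squared error, optionally scaled per atom.

    pred, label: lists of per-atom vectors, shape (natoms, task_dim).
    atomic_weight: optional per-atom weights; larger values make the
    loss focus on those atoms (e.g. an interfacial region).
    """
    errs = [
        sum((p - t) ** 2 for p, t in zip(pa, ta))
        for pa, ta in zip(pred, label)
    ]
    if atomic_weight is not None:
        errs = [e * w for e, w in zip(errs, atomic_weight)]
    return sum(errs) / len(errs)

pred = [[1.0], [1.0], [1.0], [1.0]]
label = [[0.0], [0.0], [0.0], [0.0]]
weights = [1.0, 1.0, 10.0, 10.0]  # atoms 2 and 3 are "interface" atoms
```

With uniform weights the loss is the plain mean (1.0 here); with the weights above it rises to (1 + 1 + 10 + 10) / 4 = 5.5, pulling training toward the up-weighted atoms.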
Actionable comments posted: 1
♻️ Duplicate comments (4)
source/tests/pd/model/test_compressed_descriptor_se_atten.py (1)
27-45: Duplicated test utility function. This `eval_pd_descriptor` function appears to be duplicated across multiple test files (also found in test_compressed_descriptor_se_a.py). Consider extracting it to a common test utilities module to promote code reuse and maintainability.

source/tests/pd/model/test_compressed_descriptor_se_a.py (3)

27-45: Duplicated test utility function. This `eval_pd_descriptor` function appears to be duplicated across multiple test files (also found in test_compressed_descriptor_se_atten.py). Consider extracting it to a common test utilities module to promote code reuse and maintainability.

27-45: Add validation for `natoms` to prevent a potential IndexError. In `eval_pd_descriptor`, the code reads `natoms[0]` directly at line 39. If `natoms` is empty, this could raise an `IndexError`. Consider adding an assertion or input check.
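The suggested guard can be as small as the sketch below. `check_natoms` is a hypothetical helper for illustration, not part of the actual test suite:

```python
def check_natoms(natoms):
    """Validate natoms before indexing natoms[0] (the read the review flags)."""
    if len(natoms) == 0:
        raise ValueError("natoms must be a non-empty sequence")
    return natoms[0]

# A shared eval helper would call this first, then proceed as before.
nloc = check_natoms([5, 3])
```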
104-128: Consider documenting the magic value in the compression ratio. The compression ratio of 0.5 is passed to `enable_compression()` without any explanation. Consider adding a comment explaining the significance of this value or making it a named constant to improve code readability.
🧹 Nitpick comments (3)
source/tests/pd/model/test_compressed_descriptor_se_atten.py (2)
109-133: Consider documenting the magic value in the compression ratio. The compression ratio of 0.5 is passed to `enable_compression()` without any explanation. Consider adding a comment explaining the significance of this value or making it a named constant to improve code readability.

```diff
- self.se_atten.enable_compression(0.5)
+ # Use 0.5 as a reasonable compression ratio for testing
+ # (typically values between 0.3-0.7 provide a good balance)
+ COMPRESSION_RATIO = 0.5
+ self.se_atten.enable_compression(COMPRESSION_RATIO)
```

109-133: Consider expanding test coverage for compression functionality. The current test only verifies that compression doesn't significantly change the output; it doesn't verify whether compression is actually working as expected (e.g., by checking memory usage, performance, or other compression-specific metrics). Consider adding assertions or tests to verify compression effectiveness.
source/tests/pd/model/test_compressed_descriptor_se_a.py (1)
104-128: Consider expanding test coverage beyond shape equality. While the test correctly verifies that compressed and uncompressed outputs have the same shape and similar values, it would be beneficial to add tests for additional scenarios:

- Test with different compression ratios to ensure robustness
- Test error handling (e.g., calling `enable_compression` twice)
- Test with edge cases (e.g., very small or large distance values)
This would provide more comprehensive testing of the compression functionality.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
source/tests/pd/model/test_compressed_descriptor_dpa2.py (1 hunks)
source/tests/pd/model/test_compressed_descriptor_se_a.py (1 hunks)
source/tests/pd/model/test_compressed_descriptor_se_atten.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- source/tests/pd/model/test_compressed_descriptor_dpa2.py
🧰 Additional context used
🧬 Code Graph Analysis (2)
source/tests/pd/model/test_compressed_descriptor_se_a.py (2)
deepmd/pd/model/descriptor/se_a.py (9)
`DescrptSeA` (77-430), `mixed_types` (155-159), `mixed_types` (569-579), `get_rcut` (123-125), `get_rcut` (533-535), `get_sel` (135-137), `get_sel` (545-547), `enable_compression` (238-283), `enable_compression` (671-705)

source/tests/pd/model/test_compressed_descriptor_se_atten.py (3)

`eval_pd_descriptor` (27-45), `setUp` (50-107), `test_compressed_forward` (109-133)
source/tests/pd/model/test_compressed_descriptor_se_atten.py (1)
source/tests/pd/model/test_compressed_descriptor_se_a.py (3)
eval_pd_descriptor(27-45)setUp(50-102)test_compressed_forward(104-128)
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: Test Python (6, 3.12)
- GitHub Check: Test Python (6, 3.9)
- GitHub Check: Test Python (5, 3.12)
- GitHub Check: Test Python (5, 3.9)
- GitHub Check: Test Python (4, 3.12)
- GitHub Check: Build wheels for cp310-manylinux_aarch64
- GitHub Check: Test Python (4, 3.9)
- GitHub Check: Build wheels for cp311-win_amd64
- GitHub Check: Test Python (3, 3.12)
- GitHub Check: Test Python (3, 3.9)
- GitHub Check: Build wheels for cp311-macosx_arm64
- GitHub Check: Build C++ (clang, clang)
- GitHub Check: Test Python (2, 3.12)
- GitHub Check: Build wheels for cp311-macosx_x86_64
- GitHub Check: Build C++ (rocm, rocm)
- GitHub Check: Analyze (python)
- GitHub Check: Test Python (2, 3.9)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build C++ (cuda120, cuda)
- GitHub Check: Test C++ (false)
- GitHub Check: Test Python (1, 3.12)
- GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build C++ (cuda, cuda)
- GitHub Check: Analyze (c-cpp)
- GitHub Check: Test C++ (true)
- GitHub Check: Test Python (1, 3.9)
- GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
- GitHub Check: Build C++ (cpu, cpu)
🔇 Additional comments (2)
source/tests/pd/model/test_compressed_descriptor_se_atten.py (1)
48-107: The test class properly initializes the descriptor with the attention mechanism. The test class correctly sets up the `DescrptDPA1` descriptor with appropriate parameters, including attention-specific ones (`attn=8`, `attn_layer=0`, `tebd_input_mode="strip"`), which is essential for testing the DPA1 descriptor with attention.

source/tests/pd/model/test_compressed_descriptor_se_a.py (1)

57-102: Selection sizes for the descriptor are properly defined. The `self.sel = [9, 10]` parameter correctly initializes the smooth version of the descriptor with appropriate selection sizes, which is crucial for the atomic environment representation.
Actionable comments posted: 0
♻️ Duplicate comments (1)
deepmd/pd/utils/tabulate.py (1)
54-55: Avoid a mutable default argument and a function call in a default argument. As previously noted, passing `[]` and calling `ActivationFn("tanh")` directly in default arguments can lead to confusing bugs or unexpected behavior with shared state. Use a `None` default and initialize these values inside `__init__`.

🧰 Tools
🪛 Ruff (0.8.2)

54-54: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)

55-55: Do not perform function call `ActivationFn` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable (B008)
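The fix for B006/B008 follows the standard Python pattern sketched below. `DPTabulateSketch` and the stand-in `ActivationFn` are illustrative, not the real signatures in `deepmd/pd/utils/tabulate.py`:

```python
class ActivationFn:
    """Stand-in for deepmd.pd.utils.utils.ActivationFn (illustration only)."""
    def __init__(self, name):
        self.name = name

class DPTabulateSketch:
    def __init__(self, neuron=None, activation_fn=None):
        # Bind fresh objects per instance instead of sharing one default
        # list / ActivationFn across every call site (Ruff B006 / B008).
        self.neuron = [] if neuron is None else neuron
        self.activation_fn = (
            ActivationFn("tanh") if activation_fn is None else activation_fn
        )

a = DPTabulateSketch()
b = DPTabulateSketch()
a.neuron.append(64)  # does not leak into b.neuron
```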
🧹 Nitpick comments (2)
deepmd/pd/utils/tabulate.py (2)
116-202: Reduce the complexity of `_make_data` or break it into smaller methods. This method is quite lengthy, with multiple special cases in if/elif blocks. Splitting certain paths (e.g., the conditionals for `layer == 0` and `self.neuron[0] == 1/2`) into helper methods could significantly improve maintainability and readability.

285-295: Add or expand docstrings for `_layer_0` and `_layer_1`. Both `_layer_0` and `_layer_1` perform targeted transformations with slightly different logic and return shapes, especially `_layer_1`, which returns a tuple. Documenting each parameter, the return shape, and expected usage would help future readers understand these methods more quickly.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
deepmd/pd/utils/tabulate.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
deepmd/pd/utils/tabulate.py (2)
deepmd/utils/tabulate.py (4)
`BaseTabulate` (19-458), `_get_table_size` (380-396), `_get_data_type` (398-402), `_get_last_layer_size` (404-408)

deepmd/pd/model/descriptor/repformers.py (4)

`get_sel` (291-293), `get_rcut` (279-281), `get_rcut_smth` (283-285), `get_ntypes` (295-297)
🪛 Ruff (0.8.2)
deepmd/pd/utils/tabulate.py
54-54: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
55-55: Do not perform function call ActivationFn in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: Build wheels for cp310-manylinux_aarch64
- GitHub Check: Build wheels for cp311-win_amd64
- GitHub Check: Build wheels for cp311-macosx_arm64
- GitHub Check: Build wheels for cp311-macosx_x86_64
- GitHub Check: Build C++ (clang, clang)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build C++ (rocm, rocm)
- GitHub Check: Test Python (6, 3.12)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build C++ (cuda120, cuda)
- GitHub Check: Test Python (6, 3.9)
- GitHub Check: Build C++ (cuda, cuda)
- GitHub Check: Test Python (5, 3.12)
- GitHub Check: Build C++ (cpu, cpu)
- GitHub Check: Test Python (5, 3.9)
- GitHub Check: Test Python (4, 3.12)
- GitHub Check: Test C++ (false)
- GitHub Check: Test Python (4, 3.9)
- GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
- GitHub Check: Test C++ (true)
- GitHub Check: Test Python (3, 3.12)
- GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
- GitHub Check: Test Python (3, 3.9)
- GitHub Check: Test Python (2, 3.12)
- GitHub Check: Test Python (2, 3.9)
- GitHub Check: Test Python (1, 3.12)
- GitHub Check: Test Python (1, 3.9)
- GitHub Check: Analyze (python)
- GitHub Check: Analyze (c-cpp)
🔇 Additional comments (1)
deepmd/pd/utils/tabulate.py (1)
203-279: ❓ Verification inconclusive

Verify tensor dimension logic in the deeper embedding layers.

In the branches starting from line 203, you handle expansions for dual-size layers (`2 * self.neuron[layer - 1]`) and skip connections (`zz + yy`). Consider verifying that the shapes match expectations for all supported descriptor types, especially when certain types are excluded. You can run a test script or add shape assertions:

🏁 Script executed:

```shell
#!/bin/bash
# Perform shape checks on the generated tensors
# This script searches for unit tests or calls to _make_data and logs shapes
rg "_make_data" -A 10
```

Length of output: 4814

Action: Confirm tensor dimension consistency in deeper layers.

Please verify that the tensor shapes produced in these branches (lines 203-279 in deepmd/pd/utils/tabulate.py) conform to expectations for all supported descriptor types. In particular, double-check that:

- In the branch handling the dual-size layers (`2 * self.neuron[layer - 1]`), the intermediate tensors (e.g., `tt`, `zz`, and `ybar`) have compatible shapes.
- In the skip-connection branch (where `zz + yy` is used), the addition does not inadvertently introduce shape mismatches.

It is recommended to add explicit shape assertions or extend unit tests (similar to those using `_make_data` in other modules) to automatically verify that the outputs of `paddle.matmul`, bias additions, and derivative computations (via `unaggregated_dy_dx` and `unaggregated_dy2_dx`) are as expected.
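The shape assertions the review asks for can be sketched framework-agnostically; the invariants below (equal shapes for the skip connection, last dimension equal to `2 * self.neuron[layer - 1]` in the dual-size branch) are taken from the comment, while the helper itself is hypothetical:

```python
def assert_skip_connection_shapes(zz_shape, yy_shape, prev_width):
    """Check the invariants before computing zz + yy in a dual-size layer."""
    # Element-wise addition requires identical shapes.
    assert zz_shape == yy_shape, f"shape mismatch: {zz_shape} vs {yy_shape}"
    # Dual-size layers double the previous layer's width.
    assert zz_shape[-1] == 2 * prev_width, (
        f"expected last dim {2 * prev_width}, got {zz_shape[-1]}"
    )

# Passes: both operands share a shape and 64 == 2 * 32.
assert_skip_connection_shapes((128, 64), (128, 64), 32)
```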
Actionable comments posted: 0
🧹 Nitpick comments (1)
deepmd/pd/model/descriptor/se_a.py (1)
261-283: Consider handling partial state if table creation fails, and implementing the forward pass under compression. The new compression logic looks consistent overall. However, note that once `self.compress` is set to `True`, the forward path raises `NotImplementedError` later in the code (line 779). If compression is intended for production, consider handling the forward logic or returning a graceful error message. Additionally, you may wish to add tests verifying that table creation succeeds and that partial state is not left behind if an exception occurs before `self.compress = True` is set.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
deepmd/pd/model/descriptor/se_a.py (3 hunks)
deepmd/pd/model/descriptor/se_atten.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
deepmd/pd/model/descriptor/se_a.py (3)
deepmd/pd/utils/utils.py (1)
`ActivationFn` (69-107)

deepmd/dpmodel/model/make_model.py (2)

`serialize` (548-549), `enable_compression` (191-217)

deepmd/pd/model/atomic_model/dp_atomic_model.py (1)

`enable_compression` (214-243)
⏰ Context from checks skipped due to timeout of 90000ms (30)
- GitHub Check: Test Python (6, 3.12)
- GitHub Check: Test Python (6, 3.9)
- GitHub Check: Test Python (5, 3.12)
- GitHub Check: Test Python (5, 3.9)
- GitHub Check: Test Python (4, 3.12)
- GitHub Check: Test Python (4, 3.9)
- GitHub Check: Build wheels for cp310-manylinux_aarch64
- GitHub Check: Test Python (3, 3.12)
- GitHub Check: Build C++ (clang, clang)
- GitHub Check: Test Python (3, 3.9)
- GitHub Check: Build wheels for cp311-win_amd64
- GitHub Check: Analyze (python)
- GitHub Check: Build wheels for cp311-macosx_arm64
- GitHub Check: Test Python (2, 3.12)
- GitHub Check: Build C++ (rocm, rocm)
- GitHub Check: Test Python (2, 3.9)
- GitHub Check: Build C++ (cuda120, cuda)
- GitHub Check: Build wheels for cp311-macosx_x86_64
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (c-cpp)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
- GitHub Check: Build C++ (cuda, cuda)
- GitHub Check: Test Python (1, 3.12)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Test Python (1, 3.9)
- GitHub Check: Build C++ (cpu, cpu)
- GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
- GitHub Check: Test C++ (false)
- GitHub Check: Test C++ (true)
🔇 Additional comments (4)
deepmd/pd/model/descriptor/se_atten.py (1)
431-434: Improved tensor assignment using `paddle.assign`. The change from direct assignment to `paddle.assign()` for the compression data is good practice; it is more explicit and follows PaddlePaddle's recommended patterns for tensor assignment operations.

deepmd/pd/model/descriptor/se_a.py (3)

63-65: Imports for tabulate functionality look good. These imports correctly reference the new `DPTabulate` class. No issues found with the syntax or usage at this time.

66-68: Imports for the activation function look good. `ActivationFn` is properly imported and later used to instantiate activation layers. Everything appears correct.

703-704: Validate tensor shape consistency for table assignment. The code assigns newly built table tensors into `self.compress_data` and `self.compress_info`. Confirm that the shapes and dtypes match all model expectations, especially if this method is called in different contexts or with different table sizes.
Actionable comments posted: 0
♻️ Duplicate comments (2)
source/tests/pd/model/test_compressed_descriptor_dpa2.py (2)
31-49: Duplicate utility function should be refactored. This `eval_pd_descriptor` function is identical to the one in test_compressed_descriptor_se_atten.py and similar to others in the test suite. Extract it to a common test utilities module to reduce code duplication.

97-108: Add a comment explaining the rcut difference between repinit and repformer. The repformer is configured with a smaller cutoff radius (`self.rcut - 1`) than repinit. Please add an inline comment explaining the rationale behind this difference to improve code maintainability.
🧹 Nitpick comments (6)
source/tests/pd/model/test_compressed_descriptor_dpa2.py (2)
125-125: Consider parameterizing the compression ratio. The compression ratio is hardcoded to 0.5. Consider parameterizing this value to test compression behavior at different ratios, which would provide more comprehensive coverage of the compression feature.

```diff
- self.descriptor.enable_compression(0.5)
+ # Test with different compression ratios
+ compression_ratios = [0.3, 0.5, 0.7]
+ for ratio in compression_ratios:
+     self.descriptor.enable_compression(ratio)
+     result_pd_compressed = eval_pd_descriptor(
+         self.descriptor,
+         self.natoms,
+         self.coords,
+         self.atype,
+         self.box,
+     )
+
+     self.assertEqual(result_pd.shape, result_pd_compressed.shape)
+     paddle.testing.assert_close(
+         result_pd,
+         result_pd_compressed,
+         atol=self.atol,
+         rtol=self.atol,
+     )
```

52-52: Consider adding float32 to the test parameters. The test class is parameterized with only `float64`, but there is code for handling `float32` precision in the `setUp` method. Consider adding `float32` to the test parameters to ensure the descriptor works correctly with both precision levels.

```diff
-@parameterized(("float64",), (True, False))
+@parameterized(("float64", "float32"), (True, False))
```

deepmd/pd/model/descriptor/repflow_layer.py (4)
92-93: Avoid repeated assignments to `self.update_residual` and `self.update_residual_init`. Currently, lines 92-93 and 107-108 both assign the same values to `self.update_residual` and `self.update_residual_init`. This duplication can be removed or consolidated to improve clarity. Below is a possible refactor:

```diff
 self.update_residual = update_residual
 self.update_residual_init = update_residual_init
 ...
-self.update_residual = update_residual
-self.update_residual_init = update_residual_init
```

Also applies to: 107-108
309-309: Remove or justify the unused `e_dim` variable. Static analysis indicates that `e_dim` is assigned at line 309 but never used. If not needed, remove it to reduce clutter. If it is needed for debugging or logging, add a comment explaining its purpose.

```diff
-        e_dim = edge_ebd.shape[-1]
```

🧰 Tools
🪛 Ruff (0.8.2)

309-309: Local variable `e_dim` is assigned to but never used. Remove assignment to unused variable `e_dim` (F841)
519-519: Remove or justify the unused `nall` variable. Although a similar pattern was intentionally used in another file based on prior learnings, this file lacks any documentation indicating the same need. If `nall` is not genuinely needed here, please remove it or explain its presence.

```diff
-        nall = node_ebd_ext.shape[1]
```

🧰 Tools
🪛 Ruff (0.8.2)

519-519: Local variable `nall` is assigned to but never used. Remove assignment to unused variable `nall` (F841)
777-777: Remove or justify the unused `nitem` variable. The local variable `nitem` is assigned at line 777 but never used in this "res_residual" update method. Consider removing it unless it is intended for future use or debugging.

```diff
-        nitem = len(update_list)
```

🧰 Tools
🪛 Ruff (0.8.2)

777-777: Local variable `nitem` is assigned to but never used. Remove assignment to unused variable `nitem` (F841)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
deepmd/pd/model/descriptor/repflow_layer.py (1 hunks)
deepmd/pd/utils/nlist.py (2 hunks)
source/tests/pd/model/test_compressed_descriptor_dpa2.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- deepmd/pd/utils/nlist.py
🧰 Additional context used
🧠 Learnings (1)
deepmd/pd/model/descriptor/repflow_layer.py (1)
Learnt from: njzjz
PR: deepmodeling/deepmd-kit#4226
File: deepmd/dpmodel/model/make_model.py:370-373
Timestamp: 2024-10-16T21:50:10.680Z
Learning: In `deepmd/dpmodel/model/make_model.py`, the variable `nall` assigned but not used is intentional and should not be flagged in future reviews.
🪛 Ruff (0.8.2)
deepmd/pd/model/descriptor/repflow_layer.py
309-309: Local variable e_dim is assigned to but never used
Remove assignment to unused variable e_dim
(F841)
519-519: Local variable nall is assigned to but never used
Remove assignment to unused variable nall
(F841)
777-777: Local variable nitem is assigned to but never used
Remove assignment to unused variable nitem
(F841)
⏰ Context from checks skipped due to timeout of 90000ms (30)
- GitHub Check: Test Python (6, 3.12)
- GitHub Check: Test Python (6, 3.9)
- GitHub Check: Test Python (5, 3.12)
- GitHub Check: Test Python (5, 3.9)
- GitHub Check: Test Python (4, 3.12)
- GitHub Check: Test Python (4, 3.9)
- GitHub Check: Test Python (3, 3.12)
- GitHub Check: Build wheels for cp310-manylinux_aarch64
- GitHub Check: Test Python (3, 3.9)
- GitHub Check: Build wheels for cp311-win_amd64
- GitHub Check: Build C++ (clang, clang)
- GitHub Check: Test Python (2, 3.12)
- GitHub Check: Build wheels for cp311-macosx_arm64
- GitHub Check: Build C++ (rocm, rocm)
- GitHub Check: Build wheels for cp311-macosx_x86_64
- GitHub Check: Test Python (2, 3.9)
- GitHub Check: Analyze (python)
- GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
- GitHub Check: Build C++ (cuda120, cuda)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Test Python (1, 3.12)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
- GitHub Check: Build C++ (cuda, cuda)
- GitHub Check: Test C++ (false)
- GitHub Check: Test Python (1, 3.9)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Analyze (c-cpp)
- GitHub Check: Test C++ (true)
- GitHub Check: Build C++ (cpu, cpu)
@njzjz This PR is ready for review
@HydrogenSulfate Note that this PR not only adds the DPA3 descriptor but also modifies other parts (such as the Hessian loss, SiLUT activation, and other improvements). Please at least keep a list of modifications in this PR comment. The main modifications are LGTM.
Thanks for the comments; I have added the summary of modifications.
@coderabbitai resolve

Support DPA3 with the Paddle backend (eager mode).
1. Training curve
2. Accuracy: torch vs. paddle (paddle slightly better than torch)
3. The main modifications in this PR include:
Summary by CodeRabbit
- New Features
- Bug Fixes
- Tests
- Refactor
- Chores