
Conversation

@njzjz
Member

@njzjz njzjz commented Jun 10, 2025

  1. For dpmodel, pt, and pd, pass the `trainable` parameter to the layer (not actually used in this PR).
  2. For JAX, support the `trainable` parameter in the layer.
  3. `trainable` is now serialized in dpmodel, tf, pt, pd, etc.
  4. Support `trainable` in TF dipole & polar fitting.
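The first two points boil down to threading one constructor flag from a top-level module to its leaf layers. A minimal sketch (the class names here are illustrative, not the actual deepmd classes):

```python
# Hypothetical sketch of propagating a `trainable` flag downward;
# names do not match the real deepmd-kit classes.


class Layer:
    def __init__(self, size: int, trainable: bool = True) -> None:
        self.size = size
        # Leaf layers record whether their parameters should receive gradients.
        self.trainable = trainable


class Block:
    def __init__(self, sizes: list[int], trainable: bool = True) -> None:
        # Forward the flag unchanged to every sub-layer.
        self.layers = [Layer(s, trainable=trainable) for s in sizes]


block = Block([64, 128], trainable=False)
assert all(not layer.trainable for layer in block.layers)
```

Passing the flag through even where it is "not actually used" (point 1) keeps the constructor signatures uniform across backends, so backends that do use it (JAX, point 2) need no special-casing.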

Summary by CodeRabbit

  • New Features

    • Added a trainable parameter to numerous model components, descriptors, embedding and fitting networks, and attention layers, allowing users to enable or disable parameter trainability.
    • Ensured consistent propagation of the trainable flag across submodules and layers in PyTorch, TensorFlow, and JAX backends.
    • Enhanced serialization and deserialization processes to include the trainable attribute, maintaining trainability settings during model save and load.
    • Implemented conditional parameter wrapping in the JAX backend to differentiate trainable parameters from fixed variables.
  • Tests

    • Updated test configurations to include the trainable flag, validating behavior with non-trainable components.

Copilot AI review requested due to automatic review settings June 10, 2025 11:45
1. For dpmodel, pt, and pd, pass the trainable parameter to the layer (not actually used in this PR).
2. For JAX, support the `trainable` parameter in the layer.

Signed-off-by: Jinzhe Zeng <[email protected]>
Contributor

Copilot AI left a comment


Pull Request Overview

This PR propagates a new trainable flag throughout various descriptor and network components in the DeepMD library (Paddle, PyTorch, and JAX backends).

  • Add trainable parameter to many layer and block constructors in pd and dpmodel modules
  • Ensure MLPLayer/NativeLayer receive and forward the trainable flag
  • Update JAX backend to wrap parameters differently based on trainable

Reviewed Changes

Copilot reviewed 48 out of 48 changed files in this pull request and generated no comments.

Show a summary per file
| File | Description |
|------|-------------|
| deepmd/pd/model/descriptor/repformers.py | Added trainable to block and layer constructors |
| deepmd/pd/model/descriptor/repformer_layer.py | Added trainable to attention and MLP layers |
| deepmd/pd/model/descriptor/repflows.py | Added trainable to block and layer constructors |
| deepmd/pd/model/descriptor/repflow_layer.py | Added trainable to repflow layer constructors |
| deepmd/pd/model/descriptor/dpa3.py | Added trainable to subclass initialization |
| deepmd/pd/model/descriptor/dpa2.py | Added trainable to subclass initialization |
| deepmd/pd/model/descriptor/dpa1.py | Added trainable to subclass initialization |
| deepmd/jax/utils/network.py | Support JAX trainable in `__setattr__` |
| deepmd/dpmodel/utils/type_embed.py | Added trainable to embedding utility |
| deepmd/dpmodel/utils/network.py | Added trainable to native and fitting networks |
| deepmd/dpmodel/fitting/general_fitting.py | Propagated trainable to general fitting nets |
| deepmd/dpmodel/descriptor/se_t_tebd.py | Added trainable to SE-TEBD descriptor |
| deepmd/dpmodel/descriptor/se_t.py | Added trainable to SE-T descriptor |
| deepmd/dpmodel/descriptor/se_r.py | Added trainable to SE-R descriptor |
| deepmd/dpmodel/descriptor/se_e2_a.py | Added trainable to SE-E2A descriptor |
| deepmd/dpmodel/descriptor/repformers.py | Added trainable to Repformers block |
| deepmd/dpmodel/descriptor/repflows.py | Added trainable to Repflows block |
| deepmd/dpmodel/descriptor/dpa3.py | Added trainable to DPA3 subclass init |
| deepmd/dpmodel/descriptor/dpa2.py | Added trainable to DPA2 subclass init |
| deepmd/dpmodel/descriptor/dpa1.py | Added trainable to DPA1 subclass init |
Comments suppressed due to low confidence (2)

deepmd/pd/model/descriptor/repformers.py:90

  • The trainable parameter was added to the constructor signature but is not described in the class docstring. Please update the docstring to include trainable and its purpose.
        trainable: bool = True,
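A numpydoc-style parameter entry of the kind requested could look like this (the wording and the function name are suggestions for illustration, not the merged text):

```python
def make_layer(trainable: bool = True) -> dict:
    """Construct a layer (illustrative stand-in for the real constructor).

    Parameters
    ----------
    trainable : bool, optional
        If False, the parameters of this layer (and of any sub-layers it
        creates) are excluded from gradient updates. Default: True.
    """
    return {"trainable": trainable}


assert "trainable" in make_layer.__doc__
```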

deepmd/jax/utils/network.py:48

  • The __setattr__ override references self.trainable before it may be initialized, which can lead to an AttributeError. Consider setting self.trainable in the object's __init__ before any attribute assignments occur.
                if self.trainable:
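One defensive pattern for the ordering issue flagged above (illustrative only, not necessarily the fix this PR applies) is to consult the attribute with a default, so `__setattr__` works even before `__init__` has assigned `trainable`. The wrapper tuples below are stand-ins for the real parameter/variable wrappers:

```python
class NativeLayerSketch:
    """Illustrative sketch: guard `__setattr__` against running before
    `trainable` has been assigned in `__init__`."""

    def __init__(self, w, trainable: bool = True) -> None:
        # Assign `trainable` first so later attribute assignments can see it.
        self.trainable = trainable
        self.w = w

    def __setattr__(self, name, value):
        if name in ("w", "b", "idt") and value is not None:
            # getattr with a default avoids AttributeError during early init.
            if getattr(self, "trainable", True):
                value = ("param", value)      # stand-in: wrap as trainable
            else:
                value = ("variable", value)   # stand-in: wrap as fixed
        super().__setattr__(name, value)


layer = NativeLayerSketch([1.0, 2.0], trainable=False)
assert layer.w[0] == "variable"
```

Alternatively, the class can simply assign `self.trainable` as the very first statement of `__init__`, as the reviewer suggests; the `getattr` fallback additionally covers deserialization paths that bypass `__init__`.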

@coderabbitai
Contributor

coderabbitai bot commented Jun 10, 2025

## Walkthrough

A new `trainable` boolean parameter was introduced and propagated across a wide range of descriptor, network, and fitting classes in the codebase. This parameter, defaulting to `True`, enables explicit control over whether the parameters of neural network layers and submodules are trainable. The parameter is threaded through constructors, serialization, and deserialization methods, and is now consistently handled in descriptor, embedding, attention, repformer, repflow, and fitting modules for all backends (TensorFlow, PyTorch, JAX, and custom frameworks). Corresponding test configurations were updated to set `trainable` to `False`.
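The serialization side of this can be sketched with a toy dict-based payload (not deepmd's actual serialization format): the flag is written alongside the weights on save and restored on load, with a backward-compatible default for older payloads that lack the key.

```python
class FrozenLayerSketch:
    """Toy layer illustrating a trainable-preserving save/load round trip."""

    def __init__(self, w, trainable: bool = True) -> None:
        self.w = w
        self.trainable = trainable

    def serialize(self) -> dict:
        # The flag rides along with the weights in the serialized payload.
        return {"w": self.w, "trainable": self.trainable}

    @classmethod
    def deserialize(cls, data: dict) -> "FrozenLayerSketch":
        # Older payloads without the key default to trainable=True.
        return cls(data["w"], trainable=data.get("trainable", True))


restored = FrozenLayerSketch.deserialize(
    FrozenLayerSketch([0.5], trainable=False).serialize()
)
assert restored.trainable is False
```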

## Changes

| File(s)                                                                                 | Change Summary                                                                                   |
|-----------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------|
| deepmd/dpmodel/descriptor/dpa1.py<br>deepmd/pd/model/descriptor/dpa1.py<br>deepmd/pt/model/descriptor/dpa1.py | Added `trainable` parameter to descriptor and submodule constructors; propagated to components.  |
| deepmd/dpmodel/descriptor/dpa2.py<br>deepmd/pd/model/descriptor/dpa2.py<br>deepmd/pt/model/descriptor/dpa2.py | Added and propagated `trainable` to all submodules and internal layers in DPA2 descriptors.      |
| deepmd/dpmodel/descriptor/dpa3.py<br>deepmd/pd/model/descriptor/dpa3.py<br>deepmd/pt/model/descriptor/dpa3.py | Propagated `trainable` parameter to repflows and type embedding submodules.                      |
| deepmd/dpmodel/descriptor/repflows.py<br>deepmd/pd/model/descriptor/repflows.py<br>deepmd/pt/model/descriptor/repflows.py | Added `trainable` to repflows block and layers; propagated to all internal layers.               |
| deepmd/dpmodel/descriptor/repformers.py<br>deepmd/pd/model/descriptor/repformers.py<br>deepmd/pt/model/descriptor/repformers.py | Added `trainable` to repformer blocks, layers, and attention submodules; propagated accordingly. |
| deepmd/dpmodel/descriptor/se_e2_a.py<br>deepmd/pt/model/descriptor/se_a.py              | Propagated `trainable` to embedding network initialization in descriptor blocks.                 |
| deepmd/dpmodel/descriptor/se_r.py<br>deepmd/pt/model/descriptor/se_r.py                 | Propagated `trainable` to embedding networks for each atom type in descriptor blocks.            |
| deepmd/dpmodel/descriptor/se_t.py<br>deepmd/pt/model/descriptor/se_t.py                 | Propagated `trainable` to embedding networks in descriptor blocks for each embedding index.      |
| deepmd/dpmodel/descriptor/se_t_tebd.py<br>deepmd/pd/model/descriptor/se_t_tebd.py<br>deepmd/pt/model/descriptor/se_t_tebd.py | Added and propagated `trainable` to SeTTebd descriptor and block classes and their embeddings.   |
| deepmd/dpmodel/fitting/general_fitting.py<br>deepmd/pt/model/task/fitting.py            | Propagated `trainable` to fitting network instances in general fitting classes.                  |
| deepmd/dpmodel/utils/network.py                                                         | Added `trainable` to NativeLayer and related classes; handled in serialization/deserialization.  |
| deepmd/dpmodel/utils/type_embed.py<br>deepmd/pd/model/network/network.py<br>deepmd/pt/model/network/network.py | Propagated `trainable` to type embedding network constructors and internal networks.             |
| deepmd/jax/utils/network.py                                                             | Conditional parameter wrapping in NativeLayer based on `trainable` attribute.                    |
| deepmd/pd/model/descriptor/repflow_layer.py<br>deepmd/pt/model/descriptor/repflow_layer.py | Added `trainable` to RepFlowLayer; propagated to internal MLP and residuals.                     |
| deepmd/pd/model/descriptor/repformer_layer.py<br>deepmd/pt/model/descriptor/repformer_layer.py | Added `trainable` to attention and repformer layer classes and residuals; propagated internally. |
| deepmd/pd/model/descriptor/se_a.py                                                      | Propagated `trainable` to EmbeddingNet in filter layers.                                         |
| deepmd/pd/model/descriptor/se_atten.py<br>deepmd/pt/model/descriptor/se_atten.py        | Added `trainable` to attention block, attention layers, and embedding networks; propagated.      |
| deepmd/pd/model/descriptor/se_t.py                                                      | Propagated `trainable` to EmbeddingNet in filter layers.                                         |
| deepmd/pd/model/network/mlp.py<br>deepmd/pt/model/network/mlp.py                        | Added `trainable` to MLPLayer constructor; handled in serialization/deserialization.             |
| deepmd/tf/descriptor/se.py<br>deepmd/tf/descriptor/se_t.py                              | Added `trainable` parameter to `serialize_network` methods; propagated to EmbeddingNet.          |
| deepmd/tf/descriptor/se_atten.py                                                        | Added `trainable` to serialization of attention layers and strip networks; updated docstrings.    |
| deepmd/tf/fit/fitting.py                                                                | Added `trainable` to fitting network serialization; propagated to FittingNet.                    |
| source/tests/consistent/descriptor/test_dpa1.py                                         | Set `"trainable": False` in test configuration dictionary.                                       |
| source/tests/consistent/descriptor/test_dpa2.py<br>source/tests/consistent/descriptor/test_dpa3.py | Changed `"trainable"` in test configuration from `True` to `False`.                              |
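On the test side, the change amounts to flipping one key in the consistency-test config so the non-trainable path is exercised. A minimal sketch (the keys other than `trainable` are hypothetical, not the actual test config):

```python
# Hypothetical descriptor test config; only the `trainable` key mirrors
# the change described in the table above.
base_config = {
    "rcut": 6.0,
    "neuron": [25, 50, 100],
    "trainable": True,
}

# The updated tests validate behavior with non-trainable components:
test_config = {**base_config, "trainable": False}
assert test_config["trainable"] is False
```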

## Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Descriptor
    participant Submodule
    participant Layer

    User->>Descriptor: DescriptorClass(trainable=...)
    Descriptor->>Submodule: Submodule(trainable=...)
    Submodule->>Layer: Layer(trainable=...)
    Note right of Layer: Layer parameters set as trainable or not
```

```mermaid
sequenceDiagram
    participant Descriptor
    participant Serializer

    Descriptor->>Serializer: serialize(trainable=...)
    Serializer->>Descriptor: EmbeddingNet(trainable=...)
    Note right of Serializer: Serialization includes trainable flag
```

```mermaid
sequenceDiagram
    participant NativeLayer
    participant JAXWrapper

    NativeLayer->>JAXWrapper: __setattr__('w'/'b'/'idt', value)
    alt trainable == True
        JAXWrapper->>NativeLayer: wrap with ArrayAPIParam
    else trainable == False
        JAXWrapper->>NativeLayer: wrap with ArrayAPIVariable
    end
```
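The conditional wrapping in the JAX sequence diagram can be sketched without the real deepmd wrappers; `Param` and `Variable` below are plain stand-ins for ArrayAPIParam and ArrayAPIVariable:

```python
class Param:
    """Stand-in for a trainable parameter wrapper (e.g. ArrayAPIParam)."""

    def __init__(self, value):
        self.value = value


class Variable:
    """Stand-in for a fixed, non-trainable wrapper (e.g. ArrayAPIVariable)."""

    def __init__(self, value):
        self.value = value


def wrap(value, trainable: bool):
    # Mirrors the alt/else branches of the diagram: trainable values become
    # parameters (tracked for gradients), others become fixed variables.
    return Param(value) if trainable else Variable(value)


assert isinstance(wrap([1.0], trainable=True), Param)
assert isinstance(wrap([1.0], trainable=False), Variable)
```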

Possibly related PRs

  • pd: support dpa2 #4418: Adds the DescrptDPA2 class and related descriptor modules for Paddle backend support. Related at the domain level but does not overlap in code-level changes with this PR.

Suggested labels

OP, C++, LAMMPS, Docs

Suggested reviewers

  • wanghan-iapcm
  • iProzd



---

<details>
<summary>📜 Recent review details</summary>

**Configuration used: CodeRabbit UI**
**Review profile: CHILL**
**Plan: Pro**


<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between d5840e66e814e49ba64468568ca8696776cbddae and 94ce3467a93a136fcafecbbf0f9418b50e37dcca.

</details>

<details>
<summary>📒 Files selected for processing (1)</summary>

* `deepmd/dpmodel/utils/network.py` (10 hunks)

</details>

<details>
<summary>🚧 Files skipped from review as they are similar to previous changes (1)</summary>

* deepmd/dpmodel/utils/network.py

</details>

<details>
<summary>⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (29)</summary>

* GitHub Check: Test Python (6, 3.9)
* GitHub Check: Test Python (5, 3.12)
* GitHub Check: Test Python (6, 3.12)
* GitHub Check: Test Python (3, 3.9)
* GitHub Check: Test Python (5, 3.9)
* GitHub Check: Test Python (4, 3.12)
* GitHub Check: Test Python (3, 3.12)
* GitHub Check: Test Python (2, 3.12)
* GitHub Check: Test Python (1, 3.12)
* GitHub Check: Test Python (4, 3.9)
* GitHub Check: Test Python (2, 3.9)
* GitHub Check: Test Python (1, 3.9)
* GitHub Check: Analyze (python)
* GitHub Check: Analyze (c-cpp)
* GitHub Check: Build C++ (clang, clang)
* GitHub Check: Build wheels for cp310-manylinux_aarch64
* GitHub Check: Build C++ (cuda120, cuda)
* GitHub Check: Build C++ (cuda, cuda)
* GitHub Check: Build wheels for cp311-macosx_arm64
* GitHub Check: Build C++ (rocm, rocm)
* GitHub Check: Build wheels for cp311-macosx_x86_64
* GitHub Check: Build C++ (cpu, cpu)
* GitHub Check: Build wheels for cp311-manylinux_x86_64
* GitHub Check: Build wheels for cp311-win_amd64
* GitHub Check: Build wheels for cp311-manylinux_x86_64
* GitHub Check: Test C++ (true)
* GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
* GitHub Check: Test C++ (false)
* GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)

</details>

</details>
JR+k2wEhXIgaF7JRVbCpapVhEalOUVkX8lcnhuR0JjxSogylhcIyVVFWSYUUEZ0fraUfu3JbyVOl2pS6UKebpZ+jSlppUFk2wpCcqU2lnJXaVC4DpWPFaluilLR1Fh2Q4TRlspWaUVFVQD6X+lSxaqVBl6OdzI8l5+emVj0H0ZGUGguZV6Vml3adaWpK5JeBrBlVZaGUZlGokKWKqZpGB74lDEunjylUzIqVxUDZQdBylqhQmWtlXJeWWplUMQ/Fhl+OfD7ClCzAOXMlWRZCg05E5VEBTlzJTOUiepIKsUVA1HPaWVlVBV2Vj0K5Rj5rlVJBuW7iBJR3AjhnpZOVNl05QLglli+H6gnl6xRQUXljBVeUxJ/nHeU7gD5ZXDJ4oPGOW05lAimUAV1JU+ws5SaWzkvJjkYOWOSuJYe6upyfHAIoxIsP9yeJfhYPT0RmabLlD+c7iWkIpI6YZlR+hpT64yGZKTEVW5DkF/i5A8gKtHg2yIADR2wlhdCBhOHQF2Hjgi7PlLxFLkCeI+5mIH7ng2QedAWfmoefkRgwPtskXOu0kVRnhusHi74wgjKQh5rRaIfynf2KhQeW4lj0n4Tq0mLN7KLymqeBVykT5VmA48PGFBIBs5xRrngBn1AkA22dlDcXa46dPcUmZUZcDkMekeTtEmge0Rjz7IjwQH6+RBCnWGh2aJWek+ZkAKcXf8GeZBhZ5IEpuU35tWfKAP5pec/kf5Vee/ml5DedeDf5cwa3n/5kgIAXd5pVdIpFQiAAPngFWGJAUoKFuBPlCE8BcXlz5ReovmoFBAegXJomBbgXxA+BbSmEFxBdRxflIvNFhNoaVSOWIQt+ffk+QJebMFl5NeflVlkeVUVVf5P+f7p/5cngAXblQBb3l1VDVcPldozVWPk4ZYgHAXT5CBUtVdVyBUvl9VreQsGDVzWWi5DV2+aNXJo++XwF3gh+Z+VkFnJUBwsZFZYNnTQOOdX4aByFc8lnWUxehXc5uJeDao5f9BW7/AZNJkpS0l1oEYEVO3PvFzh4+E1hlgHQVMGYSNhLhJPEpFZ8Hl2gUq7LEJp5DrlMV7uVc52eJOfRV8kEgmGYbymMqEC5wnIlEBWw5Ad+Cug/9rA58S3kjjGVy5ihyCRAewEwiBAlZD+6y1SQOQBf5UoAADU5gvZUVOhRN5KoAyoCtrW+7iRZbQcbNUa4nanvqfwN0JOkakoG5OjHl6c5qdFX+RsOWH7w5IUUlUMwn8Y4516dunNUdwC1T8CdVH+atUv5PMG/lrVW1SVU7VRentVegB1ZaVHVtVaAXzIjVSPkyV1KFdV15cGanXv5JNWeBEBI9KSCfVI1SvBjVf1cBmCB0mNPQo1wZdsDg15khBlX5CNfnlZVwdblWFVleRtWd12darXR1ZVSXoVVGsIdXVVwBX3nSAp1RAWj50BVnUnIeABHXtBYuSy6B6RdUsE5m+dSy5k1OEgsFNZDgHgUEFP1UQUV1ANULwtl2AjNn/lDdQ3UrsBwbQVGAGwB5CCw3MiED0Cy8La7Lh22KnXSeLBfAlsFUroEbFEYALvIVAJBFnLniXAFKhpQuabdy8E8lfUDjigCB5Dfu9wLkYHOVYN5CUAx5Is7g21EPXAcqawLsKZmyyh4lYAwJerSEWoiB5AG4o4mHl+gPSNIKf1IEovmPySsMNUASRSUkVi1xOdKgKZAVc8XNAIYa8VhhScW8oO1Yuk7UnqLtZ5lu150SFGQ4/DXwX2OjbvAHeZdMmfzcO3kaz4P1f4CQDP19cK/VDF7AB/UhAg+W+o/1QSVRHs5jDRQ3EW1DXA2MsmUpg0ziJLhoQ34IQB6oC54ifcC3IlKmy57YecsUQBsu8jbLtmw/Dw3a1+6jwkOmcbI75+sQSgGw2NVDbdgrq0Dgw0hABUOjL4A45FjJngOMgoobRDHoTzBVwzO8XZBXxWk2ENGQEiUWpaiR6ZxVhVuekVMecE42C4aDYDXoMuDWkpHeF9No1P1
n9QY27sumcY31wpjed47JS1kWhk49jQg2TiUQI9LCoTnH41kk0QBA3hN0eGlTdcSzXLArNwhmpa9Nujf01ZAb9UM27Wn9aM1GEUNVxkj2CMXDV3BPCXEzq0smdjEpKWzZqAnAV4tII0NCle9KzNfSag0pQ6Dc02DAhxG414NlWgQ1yoPETxi7ykTXchxZAKcQiQNMqCiTRNEMkRJEWSTc0QpN9DaArpNwqL0Dmxf2saT/ViAEJyVsjpLFRCOOyqmAWQrjA8AMiJYCEDVNztTDnSNKje7UCWarEC3Hk0FG01n1nTYZoMU+zXo0KgRzYY3ygwzV/VJ+5DKr7/VbaTM1INUQCg2W+GDcC1u0HrK81EQATcKZBNXXLFkhNNZmE1QNMFbVJMUC5Vsl5RlzSNl/1xUcpVYAMxREm2lnjZ351R66dm4YOpYLG7+56VbuKvk8GXK2jOPqsf4HZMmD+6NKLdYhAjuJoKZU5E0YTtzjEuDjgmhAvZW5pOFfDdVwWg3Js8qG2uQdtHGp9ta5mHp7mVx5stmjrI0CW8jRm3PM9tnAFltWiVXJiNkIK7rMuPyLYn86EkIDb/QGmfDVGVDragkgso0tVHRJt+pI5KwGNEMS/GwtM0QyY1yWa3wVmpeWRIVTyTc2hJPrZBIfpWFZyUDsg7RM1W2IqvpGypGPJqjTNfVtRVoVGVRhUztyyjGymW2CbN58FqhfQDQlIUBi3cEO1GsovCd2c0k86ECh62kSVKevyWkOzlIDvafWna0R5ebXbXAwImWalx5yJZamu17LeW2dslbeyaDiSjbW1dxiyeo3m0SgJfIttCIXYl2tgVV2kfl2DdXWklcxQOwwd7BN1kM5PJfXUMQTENQUJp0Ncu2IJHBafCMQ4sOdkDtwmW5UeV9iCbGy0GdHWWNUmymZn/Yckh0AvZCIoyn61UkZb5CsiwpW4CYEteFD7Ch0pzqlh3OgmL255HAgC4IWABXQ0pUorazQKrVNFIsi5wjdZMm4iSCpje0NN8im2qDBPxOcoFZYi5ueqWsKKC7EmaCDAowsgCy1RAEwgsITwpGDpifwDJ3hQdNDTTmiQcNyKoa7DF4CdqRyAk6LOb0XsngcCgGkVTOWAhw1jR/4KG0eYQcGcFPGw5tuAC6fiE3ZDRPbbuIeFVhWO42FVJHrWDOj7cJIkuC6a9gfFObU7QeRoVbh3iNcHTU1nqsVaekNNCVTom/0XHcx28dD8A1rvRl1qFBL1dET44b1KElvVYgfQeQFlIltfU0IBzOI9kYdUypTqztPDIzmMdN4Eu3cZsNau3EdfbaWBjwruuhDnZVUaak9OrrXimHt3raG3WVF4LZURNHSM13TSCSaWCwZjKSS7vtFyWEX7k8Jb+0/8r2ZczvZ62MB0HwFyeB0BVNtT74FBgvl5GO1UyTWEstKJYh11tKeTaiodgvuh31ByjaT3YdePeSRIxdSUzGIhKTNMUbtESc90o1TZnR30FDHdNAN413dc0cdpqp/K3tOSuKgedP0kTn2U1lmGbhtF7YjWUS1Mb23s9RKXtBc9ULRU4PND7aCUeS/rRFI00X2Tyj7tD2fD2yd5ksJKfCTRRyCi0U6qMrMwltk6aY97kfkHkQjprB2HRRPQh2ltWHRiVQAntZ+krJliTlB0GftaoWB1e9cNUH1MQEfXKKldWfWfw6vf/yn5fPXkAC99vIRq9iflWRDf0svXiXWa/tXZV+ukpos5JAGnqG0BanGOIpYyejpxi54PPWDX11rEOcFWtrBcElGWVRFYBYgzgHN2VRO7dbYd+QnVhDHiN3g8X94YRZJ30cuUn4gBsOlYVJyd1Kc3HrqlFQwT5QywolLYiy2vDxHSOnVpA/t+nf+CGdSgMZ3Vdpnf60WdWwqcLWdMrugBiJUQA51FYznSyqudJHEfqS9/mrql4OvnUjwngVxjPjBdoXdQCeC6tG8L/AHwi6pxd88gz34QSXVSaTafpQ12Zd4/tl1AiopOkX
5dZpPe2hsxXWGald0ZOV0r8lXdVKtOZHR+k0dQuYgLgmrXTr2+tHXQc4QdttRpydOp3RI05WUjWN1w5yHUskFmXfRTi99C3XsmB0uNarj7xi9dnqbdekjvW01iGfmVLgcA2l3+lKoLwM99//CCy4VWZeOD+g+kfYJe5aONP1biMg7GXNlF9MoO0Qqg/N0p+YmYKUbybXcyWtpgbXxoIwz0Baivkm0NtAjZf0OwBcAVAMDBd63jZDAqAagDDCPQ8MAYDODf0A7CxEiAE2QlQZUJ2o0IRYMtBhDq0JABAgtAECAkAAAAwAgYILECZDmQwACsBQ5kMAAHAUMAgaANkMggpICCAOmpIHkMFDAgImAMAIIAwCxAiYEUMFDDpnDCGAzg0CBAg/RJkPVDAgLEBAgpIKSAtDpICUMAAbKZCxApILQCJgaAAUOkguQyQAFDQINMNoAAILQCZDQINsNAgPQ8kMQA5ggwAAgiYCCCLDmYNMMgg0w20MFD0wwIAzDhQwUOtDJQyCCJgFw6oCkgtkKSA7DYICQAggRw84PNDQIJ8OJgJQ+8NAgBQyUMXDDAJkMEwIIE8NggQoGCAIjtAACACA5w9CMMAzw16DAjKQ2CBbDiYCiOxAmI7QAggG2tMMkAqwwsM3DzQ9MMFDQoOUMYjpIBkMlDmQ4LBggAgASMnDJI2gCPDQoJmAlDWI9CxAgmQ4mCkgHQ0iPTDtAGUPjDDAAIASjYIPUMUjmQwIAFDvI1ABxYYIAwBsjKsACBDDlwxKPijTQ7EC7DQIGCC1DYw7EBggaAFkPUjjIyUPUYWo3eAwjhoyQDTD1IyUN/DJAACBIjAowCAAg0w2UMhjeUDMO3D/Q3cNxYhIDCBPQKQyCAAgXI8GP7D4IATBEjsQHlBxsqgHkOxAiY1MMajmQyQBggaQ/0RKwSQ+ENOykQ+6wxDpUJKSdq60PoBAAA=== -->

<!-- internal state end -->

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🔭 Outside diff range comments (14)
deepmd/dpmodel/descriptor/repflows.py (1)

1835-1901: 🛠️ Refactor suggestion

Missing trainable parameter in serialization.

The serialize() method doesn't include the trainable parameter, which means this setting won't be preserved during model save/load operations. This is inconsistent with the pattern in dpa2.py where the trainable parameter is properly serialized.

Add the trainable parameter to both DescrptBlockRepflows and RepFlowLayer serialization:

# In DescrptBlockRepflows.serialize() method (around line 732):
return {
    "e_rcut": self.e_rcut,
    "e_rcut_smth": self.e_rcut_smth,
+   "trainable": self.trainable,
    # ... other parameters
}

# In RepFlowLayer.serialize() method:
data = {
    "@class": "RepFlowLayer",
    "@version": 2,
    "e_rcut": self.e_rcut,
+   "trainable": self.trainable,
    # ... other parameters
}

Also ensure the deserialize() methods handle the trainable parameter appropriately.
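
As a minimal sketch of the round-trip pattern the suggestion describes (the `ToyLayer` class and its fields are hypothetical, not the actual deepmd classes): serialize the `trainable` flag, and on deserialization fall back to `True` so checkpoints written before the flag existed still load.

```python
# Hypothetical sketch: persisting a `trainable` flag through
# serialize()/deserialize() with a backward-compatible default.

class ToyLayer:
    def __init__(self, width: int, trainable: bool = True) -> None:
        self.width = width
        self.trainable = trainable

    def serialize(self) -> dict:
        return {
            "@class": "ToyLayer",
            "width": self.width,
            # Persist the flag so save/load preserves trainability.
            "trainable": self.trainable,
        }

    @classmethod
    def deserialize(cls, data: dict) -> "ToyLayer":
        data = data.copy()
        data.pop("@class")
        # Old checkpoints lack the key; default to True for compatibility.
        trainable = data.pop("trainable", True)
        return cls(width=data.pop("width"), trainable=trainable)


frozen = ToyLayer(64, trainable=False)
restored = ToyLayer.deserialize(frozen.serialize())
legacy = ToyLayer.deserialize({"@class": "ToyLayer", "width": 32})
```

With this default, `restored.trainable` stays `False` after the round trip, while the legacy dictionary (no `trainable` key) deserializes as trainable.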

deepmd/pt/model/descriptor/se_atten.py (3)

771-795: ⚠️ Potential issue

Critical: Missing trainable parameter in serialization.

The serialize() method doesn't include the trainable parameter in the returned dictionary. This will cause the parameter to be lost during save/load cycles, potentially leading to models being loaded with incorrect trainability settings.

Apply this fix to include the trainable parameter in serialization:

         return {
             "@class": "NeighborGatedAttention",
             "@version": 1,
             "layer_num": self.layer_num,
             "nnei": self.nnei,
             "embed_dim": self.embed_dim,
             "hidden_dim": self.hidden_dim,
             "dotr": self.dotr,
             "do_mask": self.do_mask,
             "scaling_factor": self.scaling_factor,
             "normalize": self.normalize,
             "temperature": self.temperature,
             "trainable_ln": self.trainable_ln,
             "ln_eps": self.ln_eps,
             "precision": self.precision,
+            "trainable": self.trainable,
             "attention_layers": [layer.serialize() for layer in self.attention_layers],
         }

Also add self.trainable = trainable in the constructor to store the parameter.


883-922: ⚠️ Potential issue

Critical: Missing trainable parameter in serialization.

Similar to the parent class, the NeighborGatedAttentionLayer.serialize() method doesn't include the trainable parameter.

Apply this fix:

         return {
             "nnei": self.nnei,
             "embed_dim": self.embed_dim,
             "hidden_dim": self.hidden_dim,
             "dotr": self.dotr,
             "do_mask": self.do_mask,
             "scaling_factor": self.scaling_factor,
             "normalize": self.normalize,
             "temperature": self.temperature,
             "trainable_ln": self.trainable_ln,
             "ln_eps": self.ln_eps,
             "precision": self.precision,
+            "trainable": self.trainable,
             "attention_layer": self.attention_layer.serialize(),
             "attn_layer_norm": self.attn_layer_norm.serialize(),
         }

Also add self.trainable = trainable in the constructor.


1073-1113: ⚠️ Potential issue

Critical: Missing trainable parameter in serialization.

The GatedAttentionLayer.serialize() method also lacks the trainable parameter.

Apply this fix:

         return {
             "nnei": self.nnei,
             "embed_dim": self.embed_dim,
             "hidden_dim": self.hidden_dim,
             "num_heads": self.num_heads,
             "dotr": self.dotr,
             "do_mask": self.do_mask,
             "scaling_factor": self.scaling_factor,
             "normalize": self.normalize,
             "temperature": self.temperature,
             "bias": self.bias,
             "smooth": self.smooth,
             "precision": self.precision,
+            "trainable": self.trainable,
             "in_proj": self.in_proj.serialize(),
             "out_proj": self.out_proj.serialize(),
         }

Also add self.trainable = trainable in the constructor.

deepmd/pd/model/descriptor/repformer_layer.py (5)

247-283: ⚠️ Potential issue

Critical: Missing trainable parameter in serialization.

The Atten2Map.serialize() method doesn't include the trainable parameter, causing state loss during save/load cycles.

Apply this fix:

         return {
             "@class": "Atten2Map",
             "@version": 1,
             "input_dim": self.input_dim,
             "hidden_dim": self.hidden_dim,
             "head_num": self.head_num,
             "has_gate": self.has_gate,
             "smooth": self.smooth,
             "attnw_shift": self.attnw_shift,
             "precision": self.precision,
+            "trainable": self.trainable,
             "mapqk": self.mapqk.serialize(),
         }

Also add self.trainable = trainable in the constructor.


338-373: ⚠️ Potential issue

Critical: Missing trainable parameter in serialization.

The Atten2MultiHeadApply.serialize() method lacks the trainable parameter.

Apply this fix:

         return {
             "@class": "Atten2MultiHeadApply",
             "@version": 1,
             "input_dim": self.input_dim,
             "head_num": self.head_num,
             "precision": self.precision,
+            "trainable": self.trainable,
             "mapv": self.mapv.serialize(),
             "head_map": self.head_map.serialize(),
         }

Also add self.trainable = trainable in the constructor.


417-449: ⚠️ Potential issue

Critical: Missing trainable parameter in serialization.

The Atten2EquiVarApply.serialize() method also lacks the trainable parameter.

Apply this fix:

         return {
             "@class": "Atten2EquiVarApply",
             "@version": 1,
             "input_dim": self.input_dim,
             "head_num": self.head_num,
             "precision": self.precision,
+            "trainable": self.trainable,
             "head_map": self.head_map.serialize(),
         }

Also add self.trainable = trainable in the constructor.


551-592: ⚠️ Potential issue

Critical: Missing trainable parameter in serialization.

The LocalAtten.serialize() method doesn't include the trainable parameter.

Apply this fix:

         return {
             "@class": "LocalAtten",
             "@version": 1,
             "input_dim": self.input_dim,
             "hidden_dim": self.hidden_dim,
             "head_num": self.head_num,
             "smooth": self.smooth,
             "attnw_shift": self.attnw_shift,
             "precision": self.precision,
+            "trainable": self.trainable,
             "mapq": self.mapq.serialize(),
             "mapkv": self.mapkv.serialize(),
             "head_map": self.head_map.serialize(),
         }

Also add self.trainable = trainable in the constructor.


1348-1450: ⚠️ Potential issue

Critical: Missing trainable parameter in serialization.

The RepformerLayer.serialize() method doesn't include the trainable parameter, which is particularly critical given the complexity of this class.

Apply this fix by adding the trainable parameter to the data dictionary:

         data = {
             "@class": "RepformerLayer",
             "@version": 2,
             "rcut": self.rcut,
             "rcut_smth": self.rcut_smth,
             "sel": self.sel,
             "ntypes": self.ntypes,
             "g1_dim": self.g1_dim,
             "g2_dim": self.g2_dim,
             "axis_neuron": self.axis_neuron,
             "update_chnnl_2": self.update_chnnl_2,
             "update_g1_has_conv": self.update_g1_has_conv,
             "update_g1_has_drrd": self.update_g1_has_drrd,
             "update_g1_has_grrg": self.update_g1_has_grrg,
             "update_g1_has_attn": self.update_g1_has_attn,
             "update_g2_has_g1g1": self.update_g2_has_g1g1,
             "update_g2_has_attn": self.update_g2_has_attn,
             "update_h2": self.update_h2,
             "attn1_hidden": self.attn1_hidden,
             "attn1_nhead": self.attn1_nhead,
             "attn2_hidden": self.attn2_hidden,
             "attn2_nhead": self.attn2_nhead,
             "attn2_has_gate": self.attn2_has_gate,
             "activation_function": self.activation_function,
             "update_style": self.update_style,
             "smooth": self.smooth,
             "precision": self.precision,
             "trainable_ln": self.trainable_ln,
             "use_sqrt_nnei": self.use_sqrt_nnei,
             "g1_out_conv": self.g1_out_conv,
             "g1_out_mlp": self.g1_out_mlp,
             "ln_eps": self.ln_eps,
+            "trainable": self.trainable,
             "linear1": self.linear1.serialize(),
         }

Also add self.trainable = trainable in the constructor.

deepmd/pt/model/descriptor/repformer_layer.py (5)

244-263: 🛠️ Refactor suggestion

Missing trainable parameter in serialization

The serialize() method should include the trainable parameter to ensure it's preserved during model save/load operations.

        return {
            "@class": "Atten2Map",
            "@version": 1,
            "input_dim": self.input_dim,
            "hidden_dim": self.hidden_dim,
            "head_num": self.head_num,
            "has_gate": self.has_gate,
            "smooth": self.smooth,
            "attnw_shift": self.attnw_shift,
            "precision": self.precision,
+           "trainable": self.trainable,
            "mapqk": self.mapqk.serialize(),
        }

Note: You'll also need to store the trainable parameter as an instance variable in the constructor.


333-349: 🛠️ Refactor suggestion

Missing trainable parameter in serialization

The serialize() method lacks the trainable parameter, which should be included for proper model persistence.

        return {
            "@class": "Atten2MultiHeadApply",
            "@version": 1,
            "input_dim": self.input_dim,
            "head_num": self.head_num,
            "precision": self.precision,
+           "trainable": self.trainable,
            "mapv": self.mapv.serialize(),
            "head_map": self.head_map.serialize(),
        }

412-427: 🛠️ Refactor suggestion

Missing trainable parameter in serialization

The serialize() method should include the trainable parameter to maintain consistency with other serialization implementations.

        return {
            "@class": "Atten2EquiVarApply",
            "@version": 1,
            "input_dim": self.input_dim,
            "head_num": self.head_num,
            "precision": self.precision,
+           "trainable": self.trainable,
            "head_map": self.head_map.serialize(),
        }

541-561: 🛠️ Refactor suggestion

Missing trainable parameter in serialization

The serialize() method should include the trainable parameter for consistency and proper model persistence.

        return {
            "@class": "LocalAtten",
            "@version": 1,
            "input_dim": self.input_dim,
            "hidden_dim": self.hidden_dim,
            "head_num": self.head_num,
            "smooth": self.smooth,
            "attnw_shift": self.attnw_shift,
            "precision": self.precision,
+           "trainable": self.trainable,
            "mapq": self.mapq.serialize(),
            "mapkv": self.mapkv.serialize(),
            "head_map": self.head_map.serialize(),
        }

1328-1430: 🛠️ Refactor suggestion

Critical: Missing trainable parameter in serialization

The serialize() method is missing the trainable parameter, which is essential for proper model persistence. This is particularly important for the main RepformerLayer class as it coordinates all sub-components.

        data = {
            "@class": "RepformerLayer",
            "@version": 2,
            "rcut": self.rcut,
            "rcut_smth": self.rcut_smth,
            "sel": self.sel,
            "ntypes": self.ntypes,
            "g1_dim": self.g1_dim,
            "g2_dim": self.g2_dim,
            "axis_neuron": self.axis_neuron,
            "update_chnnl_2": self.update_chnnl_2,
            "update_g1_has_conv": self.update_g1_has_conv,
            "update_g1_has_drrd": self.update_g1_has_drrd,
            "update_g1_has_grrg": self.update_g1_has_grrg,
            "update_g1_has_attn": self.update_g1_has_attn,
            "update_g2_has_g1g1": self.update_g2_has_g1g1,
            "update_g2_has_attn": self.update_g2_has_attn,
            "update_h2": self.update_h2,
            "attn1_hidden": self.attn1_hidden,
            "attn1_nhead": self.attn1_nhead,
            "attn2_hidden": self.attn2_hidden,
            "attn2_nhead": self.attn2_nhead,
            "attn2_has_gate": self.attn2_has_gate,
            "activation_function": self.activation_function,
            "update_style": self.update_style,
            "smooth": self.smooth,
            "precision": self.precision,
            "trainable_ln": self.trainable_ln,
            "use_sqrt_nnei": self.use_sqrt_nnei,
            "g1_out_conv": self.g1_out_conv,
            "g1_out_mlp": self.g1_out_mlp,
            "ln_eps": self.ln_eps,
+           "trainable": self.trainable,
            "linear1": self.linear1.serialize(),
        }

Also ensure self.trainable = trainable is added to the constructor to store the parameter as an instance variable.

🧹 Nitpick comments (4)
deepmd/pt/model/descriptor/repflow_layer.py (1)

67-67: Add documentation for the trainable parameter.

The new trainable parameter should be documented in the class docstring to help users understand its purpose and usage.

deepmd/pd/model/descriptor/repflow_layer.py (1)

64-64: Add documentation for the trainable parameter.

The new trainable parameter should be documented in the class docstring to help users understand its purpose and usage, consistent with the PyTorch implementation.

deepmd/pd/model/descriptor/repflows.py (1)

170-170: Add documentation for the trainable parameter.

The trainable parameter should be documented in the comprehensive class docstring (lines 51-134) to maintain the high documentation standards of this class.

deepmd/pd/model/descriptor/se_atten.py (1)

84-84: Document the new trainable parameter in the class docstring.

The trainable parameter should be documented in the class docstring along with the other parameters for consistency and clarity.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c46dc7d and 7d7e043.

📒 Files selected for processing (47)
  • deepmd/dpmodel/descriptor/dpa1.py (13 hunks)
  • deepmd/dpmodel/descriptor/dpa2.py (6 hunks)
  • deepmd/dpmodel/descriptor/dpa3.py (2 hunks)
  • deepmd/dpmodel/descriptor/repflows.py (18 hunks)
  • deepmd/dpmodel/descriptor/repformers.py (28 hunks)
  • deepmd/dpmodel/descriptor/se_e2_a.py (1 hunks)
  • deepmd/dpmodel/descriptor/se_r.py (1 hunks)
  • deepmd/dpmodel/descriptor/se_t.py (1 hunks)
  • deepmd/dpmodel/descriptor/se_t_tebd.py (5 hunks)
  • deepmd/dpmodel/fitting/general_fitting.py (1 hunks)
  • deepmd/dpmodel/utils/network.py (11 hunks)
  • deepmd/dpmodel/utils/type_embed.py (1 hunks)
  • deepmd/jax/utils/network.py (2 hunks)
  • deepmd/pd/model/descriptor/dpa1.py (2 hunks)
  • deepmd/pd/model/descriptor/dpa2.py (6 hunks)
  • deepmd/pd/model/descriptor/dpa3.py (2 hunks)
  • deepmd/pd/model/descriptor/repflow_layer.py (14 hunks)
  • deepmd/pd/model/descriptor/repflows.py (3 hunks)
  • deepmd/pd/model/descriptor/repformer_layer.py (24 hunks)
  • deepmd/pd/model/descriptor/repformers.py (3 hunks)
  • deepmd/pd/model/descriptor/se_a.py (1 hunks)
  • deepmd/pd/model/descriptor/se_atten.py (11 hunks)
  • deepmd/pd/model/descriptor/se_t_tebd.py (5 hunks)
  • deepmd/pd/model/network/mlp.py (2 hunks)
  • deepmd/pd/model/network/network.py (2 hunks)
  • deepmd/pt/model/descriptor/dpa1.py (2 hunks)
  • deepmd/pt/model/descriptor/dpa2.py (6 hunks)
  • deepmd/pt/model/descriptor/dpa3.py (2 hunks)
  • deepmd/pt/model/descriptor/repflow_layer.py (14 hunks)
  • deepmd/pt/model/descriptor/repflows.py (3 hunks)
  • deepmd/pt/model/descriptor/repformer_layer.py (24 hunks)
  • deepmd/pt/model/descriptor/repformers.py (4 hunks)
  • deepmd/pt/model/descriptor/se_a.py (1 hunks)
  • deepmd/pt/model/descriptor/se_atten.py (11 hunks)
  • deepmd/pt/model/descriptor/se_r.py (1 hunks)
  • deepmd/pt/model/descriptor/se_t.py (1 hunks)
  • deepmd/pt/model/descriptor/se_t_tebd.py (5 hunks)
  • deepmd/pt/model/network/mlp.py (3 hunks)
  • deepmd/pt/model/network/network.py (2 hunks)
  • deepmd/pt/model/task/fitting.py (1 hunks)
  • deepmd/tf/descriptor/se.py (4 hunks)
  • deepmd/tf/descriptor/se_atten.py (5 hunks)
  • deepmd/tf/descriptor/se_t.py (4 hunks)
  • deepmd/tf/fit/fitting.py (2 hunks)
  • source/tests/consistent/descriptor/test_dpa1.py (1 hunks)
  • source/tests/consistent/descriptor/test_dpa2.py (1 hunks)
  • source/tests/consistent/descriptor/test_dpa3.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (7)
deepmd/pd/model/descriptor/repflow_layer.py (1)
deepmd/pd/model/network/mlp.py (1)
  • MLPLayer (74-298)
deepmd/jax/utils/network.py (1)
deepmd/jax/common.py (1)
  • ArrayAPIVariable (86-97)
deepmd/pt/model/descriptor/repflow_layer.py (1)
deepmd/pd/model/network/mlp.py (1)
  • MLPLayer (74-298)
deepmd/pt/model/descriptor/repformers.py (1)
deepmd/dpmodel/utils/seed.py (3)
  • child_seed (10-10)
  • child_seed (14-14)
  • child_seed (17-40)
deepmd/dpmodel/utils/network.py (4)
deepmd/utils/version.py (1)
  • check_version_compatibility (2-27)
source/tests/consistent/descriptor/test_dpa2.py (1)
  • data (92-189)
source/tests/consistent/descriptor/test_dpa1.py (1)
  • data (79-131)
source/tests/consistent/descriptor/test_dpa3.py (1)
  • data (78-134)
deepmd/pt/model/descriptor/repflows.py (1)
deepmd/pt/model/network/mlp.py (1)
  • MLPLayer (72-279)
deepmd/pt/model/descriptor/repformer_layer.py (3)
deepmd/pd/model/network/mlp.py (1)
  • MLPLayer (74-298)
deepmd/dpmodel/utils/seed.py (3)
  • child_seed (10-10)
  • child_seed (14-14)
  • child_seed (17-40)
deepmd/pd/model/descriptor/repformer_layer.py (2)
  • Atten2MultiHeadApply (286-373)
  • Atten2EquiVarApply (376-449)
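
The JAX entry above (`ArrayAPIVariable` in `deepmd/jax/common.py`) concerns the conditional parameter wrapping this PR introduces. As a generic, hypothetical sketch of the underlying idea, independent of the actual deepmd wrapper types: a JAX layer can honor a `trainable=False` flag by stopping gradients through its parameters, so they receive zero gradient and are never updated.

```python
# Generic sketch (not the deepmd implementation): freezing parameters
# in JAX via jax.lax.stop_gradient when trainable=False.
import jax
import jax.numpy as jnp


def make_linear(w, b, trainable: bool = True):
    def apply(params, x):
        w, b = params
        if not trainable:
            # Gradients through frozen parameters are zeroed out.
            w = jax.lax.stop_gradient(w)
            b = jax.lax.stop_gradient(b)
        return x @ w + b

    return apply, (w, b)


w = jnp.ones((2, 2))
b = jnp.zeros((2,))
apply_frozen, params = make_linear(w, b, trainable=False)


def loss(params, x):
    return jnp.sum(apply_frozen(params, x))


grads = jax.grad(loss)(params, jnp.ones((1, 2)))
```

Here every entry of `grads` is zero, so an optimizer step leaves the frozen layer unchanged.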
⏰ Context from checks skipped due to timeout of 90000ms (28)
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Build C++ (cuda, cuda)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Test C++ (true)
🔇 Additional comments (109)
source/tests/consistent/descriptor/test_dpa1.py (1)

130-130: LGTM! Test configuration updated appropriately.

Adding "trainable": False to the test configuration provides valuable test coverage for the non-trainable parameter setting, ensuring the DPA1 descriptor works correctly in both trainable and non-trainable modes.

source/tests/consistent/descriptor/test_dpa2.py (1)

184-184: LGTM! Consistent test configuration update.

The addition of "trainable": False maintains consistency with other descriptor tests and ensures proper validation of the non-trainable configuration for DPA2 descriptors.

source/tests/consistent/descriptor/test_dpa3.py (1)

133-133: LGTM! Completes consistent test pattern.

Adding "trainable": False to the DPA3 test configuration completes the consistent pattern across all DPA descriptor tests, ensuring comprehensive validation of the trainable parameter feature.

deepmd/pd/model/network/network.py (1)

48-48: LGTM! Well-implemented trainable parameter addition.

The trainable parameter is properly added with appropriate default value (True) for backward compatibility and correctly propagated to the internal TypeEmbedNetConsistent instance, enabling trainability control at the embedding network level.

Also applies to: 69-69

deepmd/pt/model/descriptor/se_a.py (1)

528-528: LGTM! Correct trainable parameter propagation.

The trainable parameter is properly passed to the EmbeddingNet constructor, ensuring that the trainability setting propagates correctly from the descriptor block to its internal embedding networks.

deepmd/pt/model/descriptor/dpa3.py (1)

172-172: LGTM: Proper trainable parameter propagation.

The trainable parameter is correctly propagated to both DescrptBlockRepflows and TypeEmbedNet submodules, enabling consistent trainability control throughout the descriptor hierarchy.

Also applies to: 188-188

deepmd/dpmodel/descriptor/se_e2_a.py (1)

210-210: LGTM: Consistent trainable parameter propagation to embedding networks.

The trainable parameter is properly passed to each EmbeddingNet instance within the embeddings collection, ensuring uniform trainability control across all embedding networks.

deepmd/dpmodel/utils/type_embed.py (1)

96-96: LGTM: Trainable parameter correctly forwarded to embedding network.

The trainable parameter is appropriately passed to the internal EmbeddingNet, ensuring the type embedding network's trainability is controlled consistently.

deepmd/pt/model/task/fitting.py (1)

323-323: LGTM: Trainable parameter properly propagated to fitting networks.

The trainable parameter is correctly passed to each FittingNet instance, enabling consistent trainability control across all fitting networks in the collection.

deepmd/pd/model/descriptor/dpa3.py (1)

170-170: LGTM: Consistent trainable parameter propagation across Paddle backend.

The trainable parameter is correctly propagated to both DescrptBlockRepflows and TypeEmbedNet submodules in the Paddle implementation, maintaining consistency with the PyTorch backend.

Also applies to: 186-186

deepmd/pt/model/descriptor/se_r.py (1)

145-145: LGTM! Correct implementation of trainable parameter propagation.

The change properly forwards the trainable parameter to the EmbeddingNet constructor, enabling fine-grained control over parameter trainability. The implementation is consistent with the existing PyTorch parameter handling logic (lines 149-152) that sets requires_grad appropriately.

deepmd/pd/model/descriptor/se_a.py (1)

484-484: LGTM! Proper PaddlePaddle implementation of trainable parameter.

The change correctly propagates the trainable parameter to the EmbeddingNet constructor. The implementation properly uses PaddlePaddle's stop_gradient mechanism (lines 488-491) to control parameter trainability, which is the correct equivalent to PyTorch's requires_grad.

deepmd/dpmodel/descriptor/se_t.py (1)

150-150: LGTM! Correct framework-agnostic implementation.

The change properly forwards the trainable parameter to the EmbeddingNet constructor. This framework-agnostic implementation correctly omits backend-specific parameter handling while maintaining the trainable parameter for serialization and configuration purposes (visible in the serialize method at line 431).

deepmd/dpmodel/fitting/general_fitting.py (1)

201-201: LGTM! Well-designed trainable parameter handling for fitting networks.

The change correctly propagates the trainable parameter to the FittingNet constructor. The implementation elegantly handles multiple trainable parameter formats (lines 136-139): None (defaulting to all trainable), single boolean (applied to all layers), or list of booleans (per-layer control). This provides flexible control over fitting network trainability.
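The three accepted forms can be sketched as a small normalization helper (the name `normalize_trainable` is hypothetical; the real logic lives inline around lines 136-139 of general_fitting.py):

```python
def normalize_trainable(trainable, n_layers):
    """Expand a `trainable` spec into one boolean per layer.

    Sketch of the handling described above: None means every layer is
    trainable, a single bool is broadcast to all layers, and a list is
    validated and used as-is for per-layer control.
    """
    if trainable is None:
        return [True] * n_layers
    if isinstance(trainable, bool):
        return [trainable] * n_layers
    if len(trainable) != n_layers:
        raise ValueError(
            f"expected {n_layers} per-layer flags, got {len(trainable)}"
        )
    return list(trainable)
```

Each resulting boolean can then be handed to the corresponding FittingNet layer.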

deepmd/pt/model/descriptor/se_t.py (1)

578-578: LGTM: Correct propagation of trainable parameter.

The trainable parameter is properly passed to the EmbeddingNet initialization, enabling control over parameter trainability in filter layers.

deepmd/pd/model/descriptor/repformers.py (3)

90-90: LGTM: Trainable parameter added to constructor.

The trainable parameter is correctly added with a default value of True, maintaining backward compatibility.


227-232: LGTM: Trainable parameter propagated to MLPLayer.

The trainable parameter is properly passed to the MLPLayer initialization, enabling control over the g2 embedding layer's trainability.


269-269: LGTM: Trainable parameter propagated to RepformerLayer.

The trainable parameter is correctly passed to each RepformerLayer instance, ensuring consistent trainability control across all layers.

deepmd/pt/model/network/network.py (2)

256-256: LGTM: Trainable parameter added to TypeEmbedNet constructor.

The trainable parameter is correctly added with appropriate default value, maintaining backward compatibility.


277-277: LGTM: Trainable parameter propagated to TypeEmbedNetConsistent.

The trainable parameter is properly passed to the internal TypeEmbedNetConsistent instance, enabling trainability control for type embedding networks.

deepmd/pt/model/descriptor/repformers.py (4)

114-114: LGTM: Trainable parameter added to constructor.

The trainable parameter is correctly added with a default value of True, maintaining backward compatibility.


201-202: LGTM: Well-documented trainable parameter.

Good documentation that clearly explains the purpose of the trainable parameter.


252-258: LGTM: Trainable parameter propagated to MLPLayer.

The trainable parameter is properly passed to the MLPLayer initialization along with other required parameters including the child seed.


295-295: LGTM: Trainable parameter propagated to RepformerLayer.

The trainable parameter is correctly passed to each RepformerLayer instance in the loop, ensuring consistent trainability control.

deepmd/dpmodel/descriptor/se_r.py (1)

169-169: LGTM: Trainable parameter propagated to EmbeddingNet.

The trainable parameter is correctly passed to each type-specific EmbeddingNet instance, enabling control over embedding network trainability for all atom types.

deepmd/pd/model/network/mlp.py (2)

88-88: LGTM: Trainable parameter added with appropriate default.

The addition of the trainable parameter with a default value of True maintains backward compatibility while enabling explicit control over parameter trainability.


281-281: LGTM: Proper handling of trainable parameter in deserialization.

The trainable parameter is correctly extracted from the NativeLayer during deserialization and passed to the constructor.

deepmd/pt/model/descriptor/dpa1.py (2)

301-301: LGTM: Trainable parameter properly propagated to attention block.

The trainable parameter is correctly passed to the DescrptBlockSeAtten component, ensuring consistent trainability control throughout the descriptor hierarchy.


315-315: LGTM: Trainable parameter properly propagated to type embedding.

The trainable parameter is correctly passed to the TypeEmbedNet component, maintaining consistent trainability control across all subcomponents.

deepmd/pt/model/descriptor/repflows.py (4)

222-222: LGTM: Trainable parameter added with appropriate default.

The trainable parameter is properly added to the constructor with a default value of True, maintaining backward compatibility while enabling trainability control.


286-292: LGTM: Trainable parameter propagated to edge embedding layer.

The trainable parameter is correctly passed to the edge embedding MLPLayer, ensuring consistent trainability control for edge representations.


293-300: LGTM: Trainable parameter propagated to angle embedding layer.

The trainable parameter is correctly passed to the angle embedding MLPLayer, ensuring consistent trainability control for angle representations.


331-331: LGTM: Trainable parameter propagated to RepFlow layers.

The trainable parameter is correctly passed to each RepFlowLayer instance in the layer stack, ensuring consistent trainability control throughout the entire descriptor block.

deepmd/pt/model/descriptor/repflow_layer.py (1)

125-131: LGTM: Consistent trainable parameter propagation.

The trainable parameter is correctly propagated to all MLPLayer instances and get_residual calls throughout the constructor. This ensures consistent control over parameter trainability across all subcomponents.

Also applies to: 143-152, 163-172, 184-193, 222-232, 233-240, 243-252, 250-259, 270-279

deepmd/pd/model/descriptor/repflow_layer.py (1)

122-128: LGTM: Consistent backend implementation.

The PaddlePaddle implementation correctly mirrors the PyTorch version, with the trainable parameter properly propagated to all MLPLayer instances and get_residual calls. This ensures consistent behavior across different backends.

Also applies to: 143-149, 163-169, 184-190, 222-229, 230-237, 243-249, 250-256, 270-276

deepmd/pd/model/descriptor/repflows.py (1)

226-232: LGTM: Complete trainable parameter integration.

The trainable parameter is correctly propagated to both the MLPLayer instances (edge_embd and angle_embd) and all RepFlowLayer instances in the layer list. This completes the integration chain from the descriptor block level down to individual layer components.

Also applies to: 233-240, 243-272

deepmd/dpmodel/descriptor/dpa3.py (1)

360-360: LGTM! Proper trainable parameter propagation.

The trainable parameter is correctly passed to both DescrptBlockRepflows and TypeEmbedNet subcomponents, enabling consistent trainability control throughout the DPA3 descriptor.

Also applies to: 378-378

deepmd/dpmodel/descriptor/se_t_tebd.py (2)

160-160: LGTM! Consistent trainable parameter propagation in DescrptSeTTebd.

The trainable parameter is properly passed to both DescrptBlockSeTTebd and TypeEmbedNet instances, maintaining consistency with the broader codebase pattern.

Also applies to: 175-175


502-502: LGTM! Proper trainable parameter integration in DescrptBlockSeTTebd.

The trainable parameter is correctly added to the constructor with a sensible default value (True) and properly propagated to all EmbeddingNet instances, enabling fine-grained control over parameter trainability.

Also applies to: 548-548, 564-564

deepmd/jax/utils/network.py (2)

19-19: LGTM! Required import for non-trainable parameter support.

The ArrayAPIVariable import is necessary for wrapping non-trainable parameters in the JAX backend.


48-51: LGTM! Correct JAX parameter wrapping logic.

The conditional wrapping logic properly distinguishes between trainable and non-trainable parameters by using ArrayAPIParam for trainable parameters (which support gradients) and ArrayAPIVariable for non-trainable parameters (which do not).
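A framework-free sketch of that conditional wrapping; `Param` and `Variable` here are stand-ins for the real `ArrayAPIParam`/`ArrayAPIVariable` classes, whose exact APIs are not shown in this diff:

```python
class Param:
    """Stand-in for ArrayAPIParam: seen by the optimizer, receives gradients."""
    def __init__(self, value):
        self.value = value

class Variable:
    """Stand-in for ArrayAPIVariable: carried in the model state but frozen."""
    def __init__(self, value):
        self.value = value

def wrap_tensor(value, trainable):
    # The JAX backend chooses the wrapper type per tensor, so a gradient
    # transform only differentiates the Param leaves and leaves the
    # Variable leaves untouched.
    return Param(value) if trainable else Variable(value)
```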

deepmd/pt/model/network/mlp.py (2)

86-86: LGTM! Proper trainable parameter integration.

The trainable parameter is correctly added to the constructor with a sensible default value (True) and stored as an instance attribute.

Also applies to: 89-89


238-238: LGTM! Complete serialization/deserialization support.

The trainable attribute is properly included in both serialization and deserialization methods, ensuring the trainability state is preserved across model save/load cycles.

Also applies to: 265-265

deepmd/pd/model/descriptor/dpa1.py (1)

295-295: LGTM! Consistent trainable parameter propagation in DPA1.

The trainable parameter is properly passed to both DescrptBlockSeAtten and TypeEmbedNet instances, maintaining consistency with other descriptor implementations and enabling uniform trainability control across the DPA1 architecture.

Also applies to: 309-309

deepmd/pd/model/descriptor/dpa2.py (4)

92-92: LGTM: Well-designed parameter addition with appropriate default.

The trainable parameter addition with a default value of True maintains backward compatibility while enabling explicit control over parameter trainability.


187-187: LGTM: Comprehensive parameter propagation to all sub-components.

The trainable parameter is correctly passed to all relevant sub-components including descriptor blocks, type embedding networks, and MLP layers, ensuring consistent trainability control throughout the model hierarchy.

Also applies to: 207-207, 248-248, 276-276, 302-302, 312-312


323-324: LGTM: Correct gradient control implementation for PaddlePaddle.

The gradient control logic correctly uses PaddlePaddle's stop_gradient attribute with inverse logic (not trainable) to control parameter trainability.


558-558: LGTM: Proper serialization inclusion.

The trainable parameter is correctly included in the serialization dictionary to maintain state consistency during model save/load operations.

deepmd/pt/model/descriptor/se_t_tebd.py (4)

138-138: LGTM: Consistent parameter addition across descriptor classes.

The trainable parameter is appropriately added to both the main descriptor class and the descriptor block class with proper default values.

Also applies to: 530-530


163-163: LGTM: Complete parameter propagation to embedding networks.

The trainable parameter is correctly passed to all embedding network instances, including both main and strip embedding networks when applicable.

Also applies to: 174-174, 583-583, 597-597


184-185: LGTM: Correct gradient control implementation for PyTorch.

The gradient control logic correctly uses PyTorch's requires_grad attribute to control parameter trainability. This is the appropriate approach for the PyTorch framework.
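A minimal PyTorch sketch of that pattern (not the actual deepmd code; `set_trainable` is a hypothetical helper):

```python
import torch

def set_trainable(module: torch.nn.Module, trainable: bool) -> None:
    # After constructing a submodule, the PyTorch backends walk its
    # parameters and toggle requires_grad, as noted in the review.
    for p in module.parameters():
        p.requires_grad = trainable

net = torch.nn.Linear(4, 3)
set_trainable(net, False)
assert all(not p.requires_grad for p in net.parameters())
```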


370-370: LGTM: Proper serialization inclusion.

The trainable parameter is correctly included in the serialization dictionary for state persistence.

deepmd/pd/model/descriptor/se_t_tebd.py (4)

138-138: LGTM: Consistent parameter addition matching PyTorch implementation.

The trainable parameter is appropriately added to both descriptor classes with the same signature as the PyTorch version, ensuring cross-framework consistency.

Also applies to: 534-534


163-163: LGTM: Complete parameter propagation to all embedding components.

The trainable parameter is correctly passed to all embedding network instances, maintaining consistency with the PyTorch implementation while using PaddlePaddle-specific components.

Also applies to: 177-177, 591-591, 605-605


184-185: LGTM: Correct gradient control implementation for PaddlePaddle.

The gradient control logic correctly uses PaddlePaddle's stop_gradient attribute with inverse logic (not trainable), which is the appropriate approach for this framework.
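The two conventions are inverses of each other; a tiny helper (hypothetical, for illustration only) makes the mapping explicit:

```python
def gradient_flags(backend: str, trainable: bool) -> dict:
    """Translate the unified `trainable` flag into each backend's
    native convention, per the review notes above."""
    if backend == "pytorch":
        return {"requires_grad": trainable}      # direct logic
    if backend == "paddle":
        return {"stop_gradient": not trainable}  # inverse logic
    raise ValueError(f"unknown backend: {backend}")
```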


374-374: LGTM: Proper serialization inclusion maintaining consistency.

The trainable parameter is correctly included in the serialization dictionary, ensuring state consistency across framework implementations.

deepmd/dpmodel/descriptor/dpa2.py (1)

384-384: Excellent implementation of trainable parameter propagation.

The trainable parameter is correctly added to the constructor and systematically propagated to all key subcomponents (repinit blocks, repformers, type embedding, and transform layers), ensuring consistent control over parameter trainability throughout the DPA2 descriptor hierarchy.

Also applies to: 477-477, 497-497, 538-538, 568-568, 592-592, 602-602

deepmd/tf/fit/fitting.py (1)

138-138: LGTM: Proper trainable parameter integration in fitting serialization.

The trainable parameter is correctly added to the serialize_network method and properly propagated to the FittingNet constructor, enabling trainability control during network serialization.

Also applies to: 203-203

deepmd/tf/descriptor/se.py (1)

195-195: Well-implemented trainable parameter for SE descriptor serialization.

The trainable parameter is properly added with documentation and correctly propagated to all EmbeddingNet instances, including both excluded type networks and those initialized from variables, ensuring comprehensive trainability control.

Also applies to: 218-219, 242-242, 250-250, 286-286

deepmd/tf/descriptor/se_t.py (1)

729-729: Consistent trainable parameter implementation for SE-T descriptor.

The trainable parameter follows the same well-established pattern as other descriptor modules, with proper documentation and correct propagation to all EmbeddingNet constructors in both the helper function and main initialization logic.

Also applies to: 752-753, 777-777, 812-812

deepmd/pt/model/descriptor/dpa2.py (6)

96-96: LGTM! Trainable parameter properly added.

The trainable parameter is correctly added with proper typing and a backward-compatible default value.


191-191: LGTM! Consistent parameter propagation to repinit modules.

The trainable parameter is correctly propagated to both DescrptBlockSeAtten and DescrptBlockSeTTebd instances.

Also applies to: 211-211


252-252: LGTM! Parameter propagation to core modules.

The trainable parameter is properly passed to DescrptBlockRepformers and TypeEmbedNet instances, maintaining consistency.

Also applies to: 280-280


306-306: LGTM! MLPLayer instances receive trainable parameter.

The trainable parameter is correctly propagated to both the g1_shape_tranform and conditional tebd_transform MLPLayer instances.

Also applies to: 316-316


288-288: LGTM! Proper parameter storage and gradient control.

Storing the trainable parameter and explicitly setting requires_grad on all parameters ensures consistent behavior across the descriptor.

Also applies to: 327-328


556-556: LGTM! Trainable parameter properly serialized.

Including the trainable parameter in serialization ensures the setting is preserved during model save/load operations.

deepmd/dpmodel/descriptor/repflows.py (3)

170-171: LGTM! Trainable parameter properly added to DescrptBlockRepflows.

The parameter is correctly added with proper documentation, typing, and backward-compatible default value.

Also applies to: 210-210


274-288: LGTM! Consistent parameter propagation in DescrptBlockRepflows.

The trainable parameter is properly propagated to all NativeLayer instances and RepFlowLayer instances in the constructor.

Also applies to: 319-319


876-876: LGTM! Comprehensive trainable parameter support in RepFlowLayer.

The RepFlowLayer class properly receives the trainable parameter and systematically propagates it to all internal NativeLayer instances and residual components.

Also applies to: 934-1097

deepmd/pd/model/descriptor/se_atten.py (6)

209-209: LGTM!

The trainable parameter is correctly passed to the NeighborGatedAttention constructor.


234-234: LGTM!

The trainable parameter is correctly passed to both EmbeddingNet constructors for main and strip mode filter layers.

Also applies to: 248-248


698-698: LGTM!

The trainable parameter is correctly passed to each NeighborGatedAttentionLayer in the attention layers loop.


834-834: LGTM!

The trainable parameter is correctly passed to the GatedAttentionLayer constructor.


948-948: LGTM!

The trainable parameter is correctly passed to both in_proj and out_proj MLPLayer constructors.

Also applies to: 959-959


84-84: Overall implementation is consistent and well-structured.

The trainable parameter has been correctly added to all relevant classes and properly propagated through the initialization chain. The implementation aligns well with the PR objectives. However, please address the serialization and documentation issues identified in the previous comments to ensure complete functionality.

Also applies to: 209-209, 234-234, 248-248, 662-662, 698-698, 806-806, 834-834, 915-915, 948-948, 959-959

deepmd/tf/descriptor/se_atten.py (5)

1596-1596: LGTM! Correctly propagates trainable flag to attention layer.

The addition of trainable=self.trainable ensures that the in_proj NativeLayer inherits the trainable property from the parent descriptor, maintaining consistency in the attention mechanism serialization.


1615-1615: LGTM! Maintains consistency in trainable flag propagation.

The out_proj NativeLayer now correctly inherits the trainable property, ensuring both in_proj and out_proj layers have consistent trainable settings.


1659-1659: LGTM! Well-designed parameter addition.

The new trainable parameter has proper type annotation, sensible default value, and follows consistent naming conventions. This enables control over trainability in network serialization.


1685-1686: LGTM! Clear and properly formatted parameter documentation.

The documentation for the trainable parameter follows the existing style and clearly describes its purpose, maintaining consistency with the rest of the method's docstring.


1727-1727: LGTM! Completes trainable flag propagation chain.

The trainable parameter is correctly passed from the method parameter to the EmbeddingNet constructor, ensuring that the network respects the specified trainable setting during serialization.

deepmd/pt/model/descriptor/se_atten.py (2)

103-103: Parameter addition looks good.

The trainable parameter is correctly added with appropriate default value for backward compatibility.


228-228: Consistent parameter propagation to child components.

The trainable parameter is correctly passed to all child components (NeighborGatedAttention, EmbeddingNet instances).

Also applies to: 253-253, 267-267

deepmd/pd/model/descriptor/repformer_layer.py (3)

45-45: Correct implementation for residual tensor trainability.

The logic residual.stop_gradient = not trainable correctly controls whether the residual tensor participates in gradient computation.

Also applies to: 75-75


166-166: Parameter addition and propagation look good.

The trainable parameter is correctly added and passed to the MLPLayer.

Also applies to: 179-179


630-630: Extensive and consistent parameter propagation.

The trainable parameter is correctly propagated to all child components and residual tensors throughout the RepformerLayer class. The implementation properly handles all the various layer types and update styles.

Also applies to: 691-691, 701-701, 718-718, 728-728, 737-737, 747-747, 760-760, 769-769, 779-779, 789-789, 799-799, 811-811, 819-820, 836-836, 846-847, 856-856, 867-867, 877-877

deepmd/dpmodel/descriptor/dpa1.py (4)

322-323: LGTM: Proper trainable parameter propagation

The trainable parameter is correctly passed to both DescrptBlockSeAtten and TypeEmbedNet components, and properly stored as an instance variable for later use in serialization.

Also applies to: 337-338, 341-341


696-696: LGTM: Consistent trainable parameter implementation

The trainable parameter is properly added to the DescrptBlockSeAtten constructor and correctly propagated to all embedding networks and attention components.

Also applies to: 747-747, 763-763, 782-782


1195-1195: LGTM: Attention layer trainable parameter support

The trainable parameter is correctly implemented in both NeighborGatedAttention and NeighborGatedAttentionLayer classes, maintaining consistency with the overall design pattern.

Also applies to: 1229-1229, 1325-1325, 1352-1352


1433-1433: LGTM: Complete trainable parameter chain

The GatedAttentionLayer properly implements trainable parameter support and correctly passes it to the underlying NativeLayer components, completing the parameter propagation chain.

Also applies to: 1463-1463, 1472-1472

deepmd/pt/model/descriptor/repformer_layer.py (6)

44-44: LGTM: Correct trainable parameter implementation

The get_residual function properly uses the trainable parameter to control the requires_grad attribute of the residual tensor, which is the correct approach for PyTorch parameter trainability.

Also applies to: 71-71


163-163: Good: Trainable parameter properly added to constructor

The trainable parameter is correctly added to the Atten2Map constructor and properly passed to the MLPLayer.

Also applies to: 176-176


290-290: LGTM: Proper trainable parameter propagation

The trainable parameter is correctly added to the constructor and properly passed to both mapv and head_map MLPLayer instances.

Also applies to: 301-301, 308-308


378-378: LGTM: Trainable parameter correctly implemented

The trainable parameter is properly added to the constructor and correctly passed to the head_map MLPLayer.

Also applies to: 384-390


457-457: LGTM: Comprehensive trainable parameter implementation

The trainable parameter is correctly added to the LocalAtten constructor and properly propagated to all three MLPLayer instances (mapq, mapkv, and head_map).

Also applies to: 469-469, 477-477, 484-484


620-620: Excellent: Comprehensive trainable parameter implementation

The RepformerLayer class demonstrates excellent implementation of the trainable parameter, correctly adding it to the constructor and systematically propagating it to all sub-components including MLPLayers, residual parameters, and attention modules. This ensures complete control over parameter trainability throughout the layer hierarchy.

Also applies to: 681-681, 691-691, 708-708, 718-718, 727-727, 737-737, 750-750, 759-759, 769-769, 779-779, 789-789, 801-801, 809-809, 826-826, 836-836, 846-846, 857-857, 867-867

deepmd/dpmodel/utils/network.py (6)

90-91: LGTM: Clear documentation for the new trainable parameter.

The documentation properly describes the new parameter with appropriate formatting and default value indication.


104-107: Good practice: Setting trainable before array attributes.

The comment and early assignment of self.trainable is well-placed, ensuring the trainable state is established before any array operations that might depend on it.


168-168: Proper version compatibility handling.

The version compatibility check correctly allows both version 1 and 2, ensuring backward compatibility.


249-250: Complete the getter interface.

The __getitem__ method now properly supports the trainable key, maintaining consistency with other layer attributes.


440-441: Consistent trainable parameter propagation.

The trainable parameter is properly threaded through the class hierarchy from LayerNorm to EmbeddingNet to FittingNet, ensuring all layers in the network respect the trainability setting.

Also applies to: 693-694, 709-710, 800-801, 809-810, 823-824


147-154:

✅ Verification successful

Verify version compatibility across the codebase.

The serialization version has been incremented from 1 to 2 to accommodate the new trainable field. This is a breaking change that requires careful coordination.


🏁 Script executed:

#!/bin/bash
# Description: Verify that all deserialization code can handle the version bump and new trainable field
# Expected: All deserialize methods should handle both version 1 and 2, with proper defaults for trainable

# Check for other classes that might serialize/deserialize NativeLayer
rg -A 10 -B 5 "deserialize.*NativeLayer|NativeLayer.*deserialize"

# Check for hardcoded version references that might need updating
rg -A 3 -B 3 "version.*[\"']1[\"']|[\"']1[\"'].*version" --type py

Length of output: 47637


NativeLayer Deserialization Handles Version 1 Default Trainable

The version bump to 2 is safe: NativeLayer.deserialize uses data.pop("@version", 1) and check_version_compatibility, so when deserializing a version-1 payload it defaults to version 1. Since trainable only exists in version 2, any code that does

trainable = data.pop("trainable", True)

(or equivalent default logic) will automatically behave as before for version 1. I’ve confirmed:

@version is popped with default 1 in deepmd/dpmodel/utils/network.py
check_version_compatibility allows versions 1–2
• No hard-coded “1” version checks elsewhere in the repo

No follow-up changes are needed.
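The backward-compatible pattern confirmed above can be sketched as follows (a simplified stand-in for `NativeLayer.deserialize`, not the real method):

```python
def deserialize_layer_meta(data: dict) -> dict:
    """Version-1 payloads predate `trainable`, so pop() defaults keep
    their behavior unchanged while version-2 payloads carry the flag."""
    version = data.pop("@version", 1)
    if not 1 <= version <= 2:
        raise ValueError(f"unsupported version {version}")
    trainable = data.pop("trainable", True)
    return {"version": version, "trainable": trainable}
```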

deepmd/dpmodel/descriptor/repformers.py (8)

167-169: Comprehensive trainable parameter integration.

The DescrptBlockRepformers class properly adds the trainable parameter with appropriate documentation and default value.

Also applies to: 209-210


257-263: Consistent parameter passing to embedded layers.

The g2_embd layer properly receives the trainable parameter, maintaining consistency with the overall design.


300-301: Proper propagation to RepformerLayer instances.

The trainable parameter is correctly passed to each RepformerLayer instance in the loop, ensuring all layers in the stack respect the trainability setting.


858-859: Attention mechanism trainability support.

The Atten2Map class properly integrates the trainable parameter and passes it to its internal mapqk layer.

Also applies to: 871-872


983-984: Multi-head attention trainability.

The Atten2MultiHeadApply class correctly propagates the trainable parameter to both its mapv and head_map layers.

Also applies to: 994-995, 1001-1002


1074-1086: Clean multi-line parameter formatting.

The Atten2EquiVarApply class uses clean multi-line formatting for the NativeLayer constructor call while properly passing the trainable parameter.


1155-1156: Local attention trainability.

The LocalAtten class systematically passes the trainable parameter to all its internal layers (mapq, mapkv, head_map), ensuring complete coverage.

Also applies to: 1167-1168, 1175-1176, 1182-1183


1321-1322: Comprehensive RepformerLayer trainability integration.

The RepformerLayer class extensively integrates the trainable parameter:

  • Passes it to all NativeLayer instances
  • Includes it in get_residual calls for residual connections
  • Propagates it to attention mechanism components (Atten2Map, Atten2MultiHeadApply, Atten2EquiVarApply, LocalAtten)
  • Maintains consistency across all conditional layer instantiations

This ensures that the entire repformer architecture respects the trainability setting at every level.

Also applies to: 1381-1382, 1391-1392, 1408-1409, 1418-1419, 1427-1428, 1437-1438, 1450-1451, 1459-1460, 1469-1470, 1479-1480, 1489-1490, 1501-1502, 1509-1510, 1526-1527, 1536-1537, 1546-1547, 1557-1558, 1567-1568

njzjz added 6 commits June 10, 2025 20:49
Signed-off-by: Jinzhe Zeng <[email protected]>
@codecov
Copy link

codecov bot commented Jun 13, 2025

Codecov Report

❌ Patch coverage is 87.17949% with 5 lines in your changes missing coverage. Please review.
✅ Project coverage is 84.57%. Comparing base (ab6e300) to head (94ce346).
⚠️ Report is 105 commits behind head on devel.

Files with missing lines Patch % Lines
deepmd/pd/model/descriptor/repflows.py 77.77% 2 Missing ⚠️
deepmd/tf/fit/dipole.py 83.33% 1 Missing ⚠️
deepmd/tf/fit/fitting.py 50.00% 1 Missing ⚠️
deepmd/tf/fit/polar.py 83.33% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4793      +/-   ##
==========================================
- Coverage   84.80%   84.57%   -0.24%     
==========================================
  Files         698      699       +1     
  Lines       67798    68070     +272     
  Branches     3542     3541       -1     
==========================================
+ Hits        57494    57567      +73     
- Misses       9171     9369     +198     
- Partials     1133     1134       +1     

☔ View full report in Codecov by Sentry.

@njzjz njzjz changed the title fix(dpmodel/pt/pd/jax): pass trainable to layer & support JAX trainable fix(dpmodel/pt/pd/jax): pass trainable to layer & support JAX trainable & support TF tensor fitting trainable Jun 13, 2025
@njzjz njzjz requested review from iProzd and wanghan-iapcm June 13, 2025 06:36
Co-authored-by: Duo <[email protected]>
Signed-off-by: Jinzhe Zeng <[email protected]>
@njzjz njzjz requested a review from iProzd June 27, 2025 12:42
@njzjz njzjz enabled auto-merge July 8, 2025 08:58
@njzjz njzjz added this pull request to the merge queue Jul 8, 2025
Merged via the queue into deepmodeling:devel with commit c151e04 Jul 8, 2025
60 checks passed