pd: support paddle backend and water/se_e2_a #4302
Conversation
CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.
📝 Walkthrough
The pull request introduces significant updates across multiple files to enhance the integration of PaddlePaddle within the DeepMD framework. Key changes include updates to workflow configurations for testing, the addition of new classes and methods for Paddle-specific functionality, and enhancements to existing tests to accommodate Paddle. The updates also include new JSON configuration files for models, improvements in error handling, and the introduction of new utility functions, all aimed at expanding the framework's capabilities and ensuring compatibility with PaddlePaddle.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CI as CI Workflow
    participant Test as Test Runner
    participant Paddle as Paddle Backend
    participant Model as DeepMD Model
    CI->>Test: Trigger Tests
    Test->>Paddle: Check Paddle Installation
    alt Paddle Installed
        Test->>Model: Load Model
        Model->>Paddle: Run Model Evaluation
        Paddle-->>Model: Return Results
        Test-->>CI: Report Success
    else Paddle Not Installed
        Test-->>CI: Report Skipped Tests
    end
```
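As a concrete illustration of the "skip when Paddle is missing" branch in the diagram above, here is a minimal sketch of how such a guard is commonly written with the standard unittest module. The class and test names below are illustrative, not taken from this PR:

```python
# Illustrative sketch only (not code from this PR): skip Paddle-specific
# tests when the paddle package is unavailable, so CI environments without
# Paddle report skipped tests instead of import failures.
import importlib.util
import unittest

PADDLE_AVAILABLE = importlib.util.find_spec("paddle") is not None


@unittest.skipIf(not PADDLE_AVAILABLE, "PaddlePaddle is not installed")
class TestPaddleBackendSmoke(unittest.TestCase):
    def test_tensor_shape(self) -> None:
        import paddle  # imported lazily so test collection works without paddle

        x = paddle.ones([2, 3])
        self.assertEqual(tuple(x.shape), (2, 3))


if __name__ == "__main__":
    unittest.main()
```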
Hello @njzjz, I have a question about the naming of the functions
water/se_e2_a
Do you mean deepmd-kit/deepmd/backend/backend.py, lines 179 to 201 (at 38815b3)?
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files

```diff
@@            Coverage Diff             @@
##            devel    #4302      +/-   ##
==========================================
- Coverage   84.50%   83.19%    -1.31%
==========================================
  Files         596      649       +53
  Lines       56665    60967     +4302
  Branches     3459     3461        +2
==========================================
+ Hits        47884    50721     +2837
- Misses       7654     9118     +1464
- Partials     1127     1128        +1
```

☔ View full report in Codecov by Sentry.
About 2000+ lines have yet to be tested in the CI. Could you take a look at the coverage report?
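If it helps, coverage for the newly added Paddle modules can usually be reproduced locally with something along the lines of `pytest --cov=deepmd.pd --cov-report=term-missing source/tests/pd` (the `--cov` flags are standard pytest-cov options and the paths are taken from this PR's test layout; the exact invocation used by the CI may differ).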
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (4)
source/tests/pd/model/test_model.py (4)
65-85: Add input validation and improve documentation for `paddle2tf`
The function would benefit from:
- Input validation for `paddle_name`
- Type hints
- A docstring explaining the name conversion logic
```diff
-def paddle2tf(paddle_name, last_layer_id=None):
+def paddle2tf(paddle_name: str, last_layer_id: int | None = None) -> str | None:
+    """Convert PaddlePaddle parameter names to TensorFlow format.
+
+    Args:
+        paddle_name: Parameter name in PaddlePaddle format
+        last_layer_id: ID of the last layer for fitting net conversion
+
+    Returns:
+        Converted name in TensorFlow format, or None if name should be skipped
+
+    Raises:
+        ValueError: If paddle_name is invalid
+    """
+    if not isinstance(paddle_name, str):
+        raise ValueError(f"paddle_name must be a string, got {type(paddle_name)}")
```
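A side note on the suggested signature (not part of the bot's diff): the `int | None` and `str | None` union syntax in annotations requires Python 3.10+, or a `from __future__ import annotations` import (or `typing.Optional`) on older interpreters the project may still support.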
89-91: Consider making the configuration file path configurable
Hard-coding the configuration file path makes the code less flexible and harder to test with different configurations.
Consider accepting the config file path as a parameter:
```diff
-    def __init__(self) -> None:
+    def __init__(self, config_path: str | None = None) -> None:
+        config_path = config_path or str(Path(__file__).parent / "water/se_e2_a.json")
-        with open(str(Path(__file__).parent / "water/se_e2_a.json")) as fin:
+        with open(config_path) as fin:
```
236-239: Simplify dictionary key iteration
The dictionary key iteration can be simplified by directly iterating over the dictionary.

```diff
-    for kk in data_dict.keys():
+    for kk in data_dict:
         if kk == "type":
             continue
-    for kk in batch.keys():
+    for kk in batch:
         if kk == "find_type" or kk == "type":
             continue
```

Also applies to: 254-257
🧰 Tools
🪛 Ruff
236-236: Use `key in dict` instead of `key in dict.keys()`. Remove `.keys()` (SIM118)
386-390: Extract magic numbers into named constants
The tolerance values used in assertions should be defined as class constants for better maintainability.

```diff
+    # Tolerance values for numerical comparisons
+    RELATIVE_TOLERANCE = 1e-5
+    ABSOLUTE_TOLERANCE = 1e-8
+
     def test_consistency(self) -> None:
-        rtol = 1e-5
-        atol = 1e-8
         np.testing.assert_allclose(
             head_dict["loss"],
             loss.cpu().detach().numpy(),
-            rtol=rtol,
-            atol=atol
+            rtol=self.RELATIVE_TOLERANCE,
+            atol=self.ABSOLUTE_TOLERANCE
         )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- pyproject.toml (6 hunks)
- source/tests/pd/model/test_model.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pyproject.toml
🧰 Additional context used
🪛 Ruff
source/tests/pd/model/test_model.py
236-236: Use `key in dict` instead of `key in dict.keys()`. Remove `.keys()` (SIM118)
254-254: Use `key in dict` instead of `key in dict.keys()`. Remove `.keys()` (SIM118)
415-415: Local variable `bdata` is assigned to but never used. Remove assignment to unused variable `bdata` (F841)
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)
source/tests/pd/model/test_model.py (2)
65-84: Add docstring and improve error handling in `paddle2tf`
The function would benefit from:
- A docstring explaining the purpose, parameters, and return value
- A more descriptive error message including examples of the expected format
Apply this diff:
```diff
 def paddle2tf(paddle_name, last_layer_id=None):
+    """Convert PaddlePaddle parameter names to TensorFlow format.
+
+    Args:
+        paddle_name: Parameter name in PaddlePaddle format (e.g., "descriptor.networks.0.compress_1")
+        last_layer_id: ID of the last layer for special handling
+
+    Returns:
+        Equivalent parameter name in TensorFlow format or None if name should be skipped
+    """
     fields = paddle_name.split(".")
     offset = int(fields[3] == "networks") + 1
     element_id = int(fields[2 + offset])
     if fields[1] == "descriptor":
         if fields[2].startswith("compress_"):
             return None
         layer_id = int(fields[4 + offset]) + 1
         weight_type = fields[5 + offset]
         ret = "filter_type_all/%s_%d_%d:0" % (weight_type, layer_id, element_id)
     elif fields[1] == "fitting_net":
         layer_id = int(fields[4 + offset])
         weight_type = fields[5 + offset]
         if layer_id != last_layer_id:
             ret = "layer_%d_type_%d/%s:0" % (layer_id, element_id, weight_type)
         else:
             ret = "final_layer_type_%d/%s:0" % (element_id, weight_type)
     else:
-        raise RuntimeError(f"Unexpected parameter name: {paddle_name}")
+        raise ValueError(
+            f"Unexpected parameter name: {paddle_name}. "
+            "Expected format: 'descriptor.<...>' or 'fitting_net.<...>'"
+        )
     return ret
```
236-237: Optimize dictionary iteration performance
Using `.keys()` in dictionary iteration is unnecessary and less efficient. Apply this diff:

```diff
-    for kk in data_dict.keys():
+    for kk in data_dict:
         if kk == "type":
             continue
-    for kk in batch.keys():
+    for kk in batch:
         if kk == "find_type" or kk == "type":
             continue
```

Also applies to: 254-255
🧰 Tools
🪛 Ruff
236-236: Use `key in dict` instead of `key in dict.keys()`. Remove `.keys()` (SIM118)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- pyproject.toml (6 hunks)
- source/tests/pd/model/test_model.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pyproject.toml
🧰 Additional context used
🪛 Ruff
source/tests/pd/model/test_model.py
236-236: Use `key in dict` instead of `key in dict.keys()`. Remove `.keys()` (SIM118)
254-254: Use `key in dict` instead of `key in dict.keys()`. Remove `.keys()` (SIM118)
415-415: Local variable `bdata` is assigned to but never used. Remove assignment to unused variable `bdata` (F841)
Thanks for the comments; the redundant files and code were removed.
Copilot reviewed 121 out of 136 changed files in this pull request and generated no suggestions.
Files not reviewed (15)
- deepmd/pd/infer/__init__.py: Evaluated as low risk
- deepmd/pd/entrypoints/__init__.py: Evaluated as low risk
- deepmd/dpmodel/model/make_model.py: Evaluated as low risk
- deepmd/main.py: Evaluated as low risk
- .github/workflows/test_python.yml: Evaluated as low risk
- deepmd/pd/model/__init__.py: Evaluated as low risk
- .github/workflows/test_cuda.yml: Evaluated as low risk
- deepmd/pd/model/atomic_model/dp_atomic_model.py: Evaluated as low risk
- deepmd/pd/model/atomic_model/energy_atomic_model.py: Evaluated as low risk
- deepmd/pd/model/descriptor/env_mat.py: Evaluated as low risk
- deepmd/pd/model/descriptor/base_descriptor.py: Evaluated as low risk
- deepmd/pd/loss/loss.py: Evaluated as low risk
- deepmd/pd/model/descriptor/descriptor.py: Evaluated as low risk
- deepmd/pd/model/descriptor/__init__.py: Evaluated as low risk
- deepmd/pd/model/atomic_model/__init__.py: Evaluated as low risk
@coderabbitai resolve
@coderabbitai resolve
... It doesn't seem to be working; the comments may need to be resolved manually.
@njzjz: Sorry that the resolve command didn't work earlier. I have now resolved the comments for you.
✅ Actions performed
Comments resolved.
@HydrogenSulfate A segfault appeared in the GPU test: https://github.com/deepmodeling/deepmd-kit/actions/runs/12005580706/job/33462440790
Thanks for the report, I will check it out.
I have made a new environment with CUDA 11.8 and installed the nightly-build Paddle:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```

It seems a weird issue; is there any way to get the detailed error log from Test CUDA?
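One general way to get more detail out of a segfaulting Python test job (a standard-library technique, not something this thread confirms the CI already does) is to enable faulthandler, so the interpreter prints the Python-level traceback of every thread when it receives SIGSEGV. For pytest runs this can be done via a conftest.py, by exporting `PYTHONFAULTHANDLER=1`, or by running `python -X faulthandler -m pytest ...`:

```python
# conftest.py — illustrative sketch, not part of this PR.
# On SIGSEGV/SIGFPE/SIGABRT, faulthandler dumps the Python traceback of all
# threads to stderr, which usually pinpoints which test and call triggered
# the native crash.
import sys
import faulthandler

faulthandler.enable(file=sys.stderr, all_threads=True)
```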
It's run in a Docker container, so I think it may be reproducible. BTW, the image is nvidia/cuda:12.6.2-cudnn-devel-ubuntu22.04.
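For local reproduction under similar conditions, one option is to start an interactive container from that image, e.g. `docker run --gpus all -it nvidia/cuda:12.6.2-cudnn-devel-ubuntu22.04 bash`, and then install deepmd-kit plus the nightly Paddle wheel inside it (the `--gpus all` flag assumes the NVIDIA Container Toolkit is available on the host; the exact install steps are not spelled out in this thread).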
A bit strange, but the segfault disappeared.
If we can add
Split #4157 into several pull requests.
Support the Paddle backend core modules (deepmd.pd.*) and related backend module unit tests.
Related PR to be merged:
Accuracy test: pytorch vs. paddle (comparison results not reproduced here).
Summary by CodeRabbit (Release Notes):
- New Features
- Bug Fixes
- Tests
- Documentation