
Add parameter to DiceMetric and DiceHelper classes#8774

Open
VijayVignesh1 wants to merge 12 commits into Project-MONAI:dev from VijayVignesh1:8733-per-component-dice-metric

Conversation


VijayVignesh1 commented Mar 13, 2026

Fixes #8733

Description

This PR adds support for connected component-based Dice metric calculation to the existing DiceMetric and DiceHelper classes.

Changes

  • Added per_component: bool = False to both DiceMetric and DiceHelper constructors
  • Implemented compute_cc_dice method that calculates Dice scores for each connected component individually
  • Voronoi regions: Added compute_voronoi_regions_fast method for efficient connected component assignment without external cc3d dependency
  • Added input shape validation requiring 5D binary segmentation with 2 channels (background + foreground) when per_component=True
  • Updated first_ch calculation to properly exclude background channel when using per-component mode
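For intuition, the per-component idea described above (decompose the ground-truth foreground into connected components, partition prediction voxels among them by nearest component, then score each component separately) can be sketched in plain Python. This is an illustrative toy on 2D lists with brute-force nearest-component assignment, not the PR's implementation, which uses an EDT-based Voronoi assignment on tensors:

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a 2D binary grid (list of lists)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    n_labels = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                n_labels += 1
                labels[i][j] = n_labels
                queue = deque([(i, j)])
                while queue:  # BFS flood fill of one component
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = n_labels
                            queue.append((ny, nx))
    return labels, n_labels

def per_component_dice(pred, gt):
    """Dice per ground-truth component; prediction pixels are credited to the
    nearest component (brute force here; the PR uses an EDT-based Voronoi)."""
    h, w = len(gt), len(gt[0])
    labels, num = label_components(gt)
    pixels = {k: [] for k in range(1, num + 1)}
    for i in range(h):
        for j in range(w):
            if labels[i][j]:
                pixels[labels[i][j]].append((i, j))

    def nearest_component(i, j):
        return min(pixels, key=lambda k: min((i - y) ** 2 + (j - x) ** 2 for y, x in pixels[k]))

    scores = []
    for k in range(1, num + 1):
        inter = gt_sum = pred_sum = 0
        for i in range(h):
            for j in range(w):
                g = labels[i][j] == k
                p = bool(pred[i][j]) and nearest_component(i, j) == k
                inter += g and p
                gt_sum += g
                pred_sum += p
        # an empty component with no assigned prediction scores 1.0 here,
        # mirroring an ignore_empty=False convention
        scores.append(2 * inter / (gt_sum + pred_sum) if gt_sum + pred_sum else 1.0)
    return scores
```

With two ground-truth components and a prediction that covers only the first, this returns `[1.0, 0.0]` rather than a single pooled score, which is the behavior the feature is after.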

Reference

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

Signed-off-by: Vijay Vignesh Prasad Rao <vijayvigneshp02@gmail.com>

coderabbitai bot commented Mar 13, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a per_component mode to DiceMetric, DiceHelper, and compute_dice to compute Dice scores for each connected ground-truth component. Ground-truth foreground is decomposed into connected components, Voronoi regions are computed to assign voxels to components (SciPy / optional cupy ndimage paths), and per-component Dice is computed via new DiceHelper methods compute_voronoi_regions_fast and compute_cc_dice. The per_component flag is propagated through initializers and call sites; input-shape validation and CPU/GPU code paths are added. Tests for 2D/3D per-component behavior and input-dimension errors were added and gated on ndimage availability.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Title check (❓ Inconclusive): the title is vague and generic, naming only "parameter" without specifying which parameter or feature was added. Resolution: revise the title to mention per_component, e.g. "Add per_component parameter to DiceMetric for connected component-wise Dice evaluation."
✅ Passed checks (3 passed)
  • Description check (✅ Passed): description covers most requirements, with clear changes, test coverage, and documentation updates indicated; all checkboxes properly marked.
  • Linked Issues check (✅ Passed): PR implements the core requirements from #8733: per_component parameter, connected component decomposition via Voronoi regions, component-wise Dice calculation, input validation, and CPU/GPU support.
  • Out of Scope Changes check (✅ Passed): all changes directly support per_component Dice functionality; new methods, signature updates, tests, and validation align with the linked issue objectives.


coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (3)
monai/metrics/meandice.py (1)

418-426: Wasted computation when per_component=True.

Lines 420-423 compute channel Dice, then lines 424-425 discard it and overwrite c_list. Move the branch earlier.

Proposed fix
         for b in range(y_pred.shape[0]):
-            c_list = []
-            for c in range(first_ch, n_pred_ch) if n_pred_ch > 1 else [1]:
-                x_pred = (y_pred[b, 0] == c) if (y_pred.shape[1] == 1) else y_pred[b, c].bool()
-                x = (y[b, 0] == c) if (y.shape[1] == 1) else y[b, c]
-                c_list.append(self.compute_channel(x_pred, x))
             if self.per_component:
                 c_list = [self.compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0))]
+            else:
+                c_list = []
+                for c in range(first_ch, n_pred_ch) if n_pred_ch > 1 else [1]:
+                    x_pred = (y_pred[b, 0] == c) if (y_pred.shape[1] == 1) else y_pred[b, c].bool()
+                    x = (y[b, 0] == c) if (y.shape[1] == 1) else y[b, c]
+                    c_list.append(self.compute_channel(x_pred, x))
             data.append(torch.stack(c_list))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `monai/metrics/meandice.py` around lines 418 - 426, the loop is doing wasted
work: it always computes per-channel Dice via compute_channel for each c and
only when self.per_component is True it discards those results and replaces
c_list with a compute_cc_dice call. Change the logic inside the for b in
range(...) loop to check self.per_component before computing channels; if
self.per_component is True, directly set c_list =
[self.compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0))] and
skip the per-channel compute_channel loop and related x_pred/x extraction,
otherwise run the existing per-channel path that builds c_list with
compute_channel as before. Ensure references to y_pred, y, compute_channel,
compute_cc_dice, c_list and per_component are used so the branch correctly
short-circuits the expensive channel computations.
tests/metrics/test_compute_meandice.py (2)

253-276: Test data construction is hard to follow; expected value undocumented.

The lambda-walrus pattern obscures setup. Consider a helper function. Also document how 0.5120 was derived for maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `tests/metrics/test_compute_meandice.py` around lines 253 - 276, TEST_CASE_16
uses a lambda-walrus pattern (variables y and y_pred inside TEST_CASE_16) that
makes the test data setup hard to read and omits explanation of the expected
0.5120 value; extract the tensor construction into a small descriptive helper
(e.g., build_test_case_16_tensors or make_meandice_case_16) and replace the
inline lambdas with calls to that helper, and add a short comment next to the
expected value explaining how 0.5120 was computed (e.g., describe overlapping
voxel counts and Dice formula for the two shifted cubes) so the test is readable
and the expected number is documented.

337-339: Shape mismatch may obscure test intent.

Both tensors are 4D (not 5D) and have 3 channels (not 2). The spatial mismatch (144 vs 145) is irrelevant to the validation. Use matching shapes to clarify:

-            DiceMetric(per_component=True)(torch.ones([3, 3, 144, 144]), torch.ones([3, 3, 145, 145]))
+            DiceMetric(per_component=True)(torch.ones([3, 3, 64, 64]), torch.ones([3, 3, 64, 64]))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `tests/metrics/test_compute_meandice.py` around lines 337 - 339, the test
currently uses two 4D tensors with mismatched spatial sizes and 3 channels,
which obscures the intent to validate dimensionality; update
test_input_dimensions so both tensors have identical shapes but still 4D to
trigger the ValueError (e.g., use torch.ones([3, 2, 144, 144]) for both),
ensuring the failure comes from incorrect dimensionality for DiceMetric rather
than a spatial-size mismatch or wrong channel count.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `monai/metrics/meandice.py`:
- Around line 14-17: The module imports SciPy unconditionally causing CI
failures when SciPy is not installed; change the top-level imports to use
MONAI's optional_import pattern to import distance_transform_edt,
generate_binary_structure and label (sn_label) and expose a has_scipy flag, then
in compute_voronoi_regions_fast check has_scipy and raise a clear RuntimeError
if False; update references to
sn_label/distance_transform_edt/generate_binary_structure in the file to use the
optionally imported symbols so runtime usage is guarded.
- Line 416: The code currently sets first_ch based on a combined condition which
silently ignores include_background when per_component is True; update the logic
in the MeanDice/meandice implementation to detect the conflicting flags
(self.per_component True and self.include_background False) and emit a clear
warning (e.g., warnings.warn or using the module logger) that include_background
will be ignored when per_component is enabled, then keep the existing behavior
for first_ch (set first_ch=1) to preserve compatibility; reference the
attributes self.include_background, self.per_component and the local variable
first_ch so reviewers can locate and adjust the check and add the warning.
- Around line 300-321: The compute_voronoi_regions_fast function's docstring
lacks a Returns section and the function always returns a CPU tensor
(torch.from_numpy) even if the original input was a CUDA tensor; update the
docstring to include a Returns: description and type (torch.Tensor on same
device as input) and change the implementation to preserve input type/device:
accept numpy array or torch.Tensor for labels, record the original device and
dtype (if torch.Tensor), convert input to CPU numpy for EDT processing, then
convert the resulting voronoi numpy array back to a torch.Tensor and move it to
the original device and appropriate dtype before returning; reference
compute_voronoi_regions_fast, labels, edt_input, indices, and voronoi when
locating where to apply these changes.
- Around line 323-364: The compute_cc_dice method's docstring and
empty-ground-truth handling are incorrect: update the docstring for
compute_cc_dice to state the actual expected input shapes (e.g., tensors that
may include batch and channel dims such as (1, C, D, H, W) or
per-channel/per-item spatial tensors) and then change the empty-GT branch (the
y_idx[0].sum() == 0 case) to consult self.ignore_empty (return
torch.tensor(0.0/1.0 or skip/ignore according to class semantics) instead of
always appending 1.0/0.0), and move the inf/nan replacement logic (the two
torch.where lines that sanitize values) out of the else block so they run for
both empty and non-empty cases; refer to symbols compute_cc_dice, y_idx,
y_pred_idx, self.ignore_empty, cc_assignment, uniq/inv/hist/dice_scores to
locate and update the logic and docstring.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f8c5c98e-0bb3-413d-9471-3bef41a45cfa

📥 Commits

Reviewing files that changed from the base of the PR and between daaedaa and 41e52c1.

📒 Files selected for processing (2)
  • monai/metrics/meandice.py
  • tests/metrics/test_compute_meandice.py

VijayVignesh1 marked this pull request as draft March 13, 2026 15:40
…itai - docstring issues, ignore_empty bug

Signed-off-by: Vijay Vignesh Prasad Rao <vijayvigneshp02@gmail.com>
VijayVignesh1 marked this pull request as ready for review March 13, 2026 20:22
coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
monai/metrics/meandice.py (2)

427-427: ⚠️ Potential issue | 🟡 Minor

include_background is still silently ignored with per_component=True.

Line 427 forces foreground-only behavior without signaling it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `monai/metrics/meandice.py` at line 427, the current assignment to first_ch in
MeanDice (meandice.py) ignores include_background when per_component is True;
change the logic so include_background is honored regardless of per_component by
setting first_ch based solely on self.include_background (e.g., first_ch = 0 if
self.include_background else 1) instead of conditioning on not
self.per_component; update any related comments/tests that assumed the previous
behavior.

318-328: ⚠️ Potential issue | 🟠 Major

Per-component Dice is not CUDA-safe.

Line 318 uses np.asarray(labels), which breaks for CUDA tensors; Line 328 returns a CPU tensor, and Line 360 then mixes devices.

Proposed fix
-    def compute_voronoi_regions_fast(self, labels, connectivity=26, sampling=None):
+    def compute_voronoi_regions_fast(self, labels, connectivity=26, sampling=None):
@@
-        x = np.asarray(labels)
+        labels_t = labels if isinstance(labels, torch.Tensor) else torch.as_tensor(labels)
+        in_device = labels_t.device
+        x = labels_t.detach().cpu().numpy()
@@
-        if num == 0:
-            return torch.zeros_like(torch.from_numpy(x), dtype=torch.int32)
+        if num == 0:
+            return torch.zeros_like(labels_t, dtype=torch.int32, device=in_device)
@@
-        return torch.from_numpy(voronoi)
+        return torch.from_numpy(voronoi).to(device=in_device, dtype=torch.int32)
#!/bin/bash
set -euo pipefail

# Verify current implementation uses numpy conversion without explicit CPU transfer
rg -n -C2 'def compute_voronoi_regions_fast|np\.asarray\(labels\)|torch\.from_numpy\(voronoi\)|compute_voronoi_regions_fast\(y_idx\[0\]\)' monai/metrics/meandice.py

# Confirm there is no explicit detach+cpu numpy conversion in this function body
python - <<'PY'
from pathlib import Path
text = Path("monai/metrics/meandice.py").read_text()
start = text.index("def compute_voronoi_regions_fast")
end = text.index("def compute_cc_dice")
chunk = text[start:end]
print("contains_detach_cpu_numpy:", ".detach().cpu().numpy()" in chunk or ".cpu().numpy()" in chunk)
PY

Also applies to: 356-361

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `monai/metrics/meandice.py` around lines 318 - 328, the function
compute_voronoi_regions_fast currently uses np.asarray(labels) and
torch.from_numpy(voronoi) which breaks for CUDA tensors; capture the input
tensor's device and dtype first (e.g., orig_device = labels.device if
isinstance(labels, torch.Tensor) else torch.device('cpu')), convert safely to
CPU numpy via labels = labels.detach().cpu().numpy() (or leave numpy arrays
unchanged), run the existing numpy logic, then convert the result back using
torch.from_numpy(voronoi).to(device=orig_device, dtype=torch.int32) or
torch.as_tensor(voronoi, device=orig_device, dtype=torch.int32) so the returned
tensor is on the same device as the input; update both
compute_voronoi_regions_fast and the similar code at the other location (lines
~356-361 / compute_cc_dice caller) to follow this pattern.
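The device-preservation pattern this comment asks for can be sketched generically. `run_on_cpu_numpy` is a hypothetical helper name, not part of MONAI: it records the input tensor's device, round-trips through CPU NumPy for the SciPy-style work, and restores device and a stable dtype on the way back:

```python
import torch

def run_on_cpu_numpy(fn, labels: torch.Tensor) -> torch.Tensor:
    """Round-trip a tensor through CPU NumPy for NumPy/SciPy processing,
    then restore the input's device (hypothetical helper, not MONAI API)."""
    orig_device = labels.device
    # .detach().cpu().numpy() is safe for CUDA tensors; np.asarray() is not
    result = fn(labels.detach().cpu().numpy())
    # move the result back to wherever the input lived, with a stable dtype
    return torch.as_tensor(result, dtype=torch.int32, device=orig_device)

# stand-in for the EDT/Voronoi computation the review discusses
doubled = run_on_cpu_numpy(lambda x: x * 2, torch.tensor([[1, 2], [3, 4]]))
```

The key point is that the numeric work always happens on CPU NumPy, while callers see a tensor on the same device they passed in.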
🧹 Nitpick comments (2)
tests/metrics/test_compute_meandice.py (1)

334-337: Add per-component validation tests for invalid y shape/channel.

test_input_dimensions covers only one invalid input pattern. Add cases for y not being (B, 2, D, H, W) and for y_pred/y channel mismatch.

As per coding guidelines, "Ensure new or modified definitions will be covered by existing or new unit tests."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `tests/metrics/test_compute_meandice.py` around lines 334 - 337, extend the
test_input_dimensions in tests/metrics/test_compute_meandice.py to add two more
invalid-shape cases: (1) verify DiceMetric(per_component=True) raises ValueError
when y has wrong channel count (e.g., y shape not (B,2,D,H,W) such as
torch.ones([3,1,144,144]) or torch.ones([3,3,144,144]) depending on 2D/3D
expectation), and (2) verify DiceMetric(per_component=True) raises ValueError
when y_pred and y have mismatched channel counts (call
DiceMetric(per_component=True)(y_pred, y) where y_pred has 2 channels and y has
1 or vice versa). Reference the DiceMetric class and the existing
test_input_dimensions to add these assertions so coverage includes invalid y
shapes and channel mismatches.
monai/metrics/meandice.py (1)

429-437: Skip per-channel Dice work when per_component=True.

Line 431-434 computes channel Dice, then Line 436 overwrites c_list. This is unnecessary work on every batch item.

Proposed refactor
         data = []
         for b in range(y_pred.shape[0]):
+            if self.per_component:
+                data.append(self.compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0)).unsqueeze(0))
+                continue
             c_list = []
             for c in range(first_ch, n_pred_ch) if n_pred_ch > 1 else [1]:
                 x_pred = (y_pred[b, 0] == c) if (y_pred.shape[1] == 1) else y_pred[b, c].bool()
                 x = (y[b, 0] == c) if (y.shape[1] == 1) else y[b, c]
                 c_list.append(self.compute_channel(x_pred, x))
-            if self.per_component:
-                c_list = [self.compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0))]
             data.append(torch.stack(c_list))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `monai/metrics/meandice.py` around lines 429 - 437, the loop currently always
computes per-channel Dice via compute_channel for each class then overwrites
c_list when self.per_component is True, causing unnecessary work; update the
logic in the batch loop (the block using variables b, c_list, first_ch,
n_pred_ch and calling compute_channel and compute_cc_dice) to short-circuit when
self.per_component is True—i.e., if self.per_component is True, skip the inner
per-channel loop entirely and directly set c_list =
[self.compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0))];
otherwise run the existing per-channel computation using compute_channel. Ensure
you preserve behavior for n_pred_ch == 1 and that
data.append(torch.stack(c_list)) still executes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `monai/metrics/meandice.py`:
- Around line 421-425: The per_component branch only validates y_pred; add the
same validation for y so incorrect shapes or channel counts on the ground truth
produce an immediate error. In the function/method that contains the existing
check (the block referencing self.per_component and y_pred), validate that y
also has 5 dimensions and y.shape[1] == 2 (or that y.shape matches y_pred) and
raise a ValueError with a message parallel to the existing one (e.g.,
"per_component requires 5D binary segmentation with 2 channels... Got shape
{y.shape}"). Ensure you reference the same symbol names (self.per_component,
y_pred, y) so the check runs before any computation that assumes the 5D
two-channel layout.

In `tests/metrics/test_compute_meandice.py`:
- Around line 275-276: Remove the class-level `@unittest.skipUnless(has_ndimage, ...)` on TestComputeMeanDice and instead apply that skip only to the tests that
exercise the per_component code-path; identify and decorate the specific methods
(e.g., any test methods named test_*per_component* or those that call
compute_mean_dice(..., per_component=True) such as test_mean_dice_per_component)
with `@unittest.skipUnless(has_ndimage, "Requires scipy.ndimage.")`; keep other
tests in TestComputeMeanDice unskipped so non-ndimage paths still run.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2469c914-d7e5-4549-930b-3212056a1266

📥 Commits

Reviewing files that changed from the base of the PR and between 41e52c1 and ba2e0b3.

📒 Files selected for processing (2)
  • monai/metrics/meandice.py
  • tests/metrics/test_compute_meandice.py

…eck bug

Signed-off-by: Vijay Vignesh Prasad Rao <vijayvigneshp02@gmail.com>
coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
tests/metrics/test_compute_meandice.py (1)

256-272: Test data note: batches 1-4 have all-zero tensors.

Valid for testing empty GT handling (ignore_empty=False returns 1.0), but technically not proper one-hot encoding. Consider adding a comment explaining intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/metrics/test_compute_meandice.py` around lines 256 - 272, TEST_CASE_16
uses y and y_hat where batches 1-4 are all-zero (not proper one-hot), so clarify
intent: update the test comment near TEST_CASE_16 to state that y and y_hat
intentionally include all-zero batches to validate per_component DiceMetric
behavior with ignore_empty=False (expecting 1.0), reference the variables y,
y_hat and the test case name TEST_CASE_16; do not change data values, only add a
concise comment explaining that these batches are intentionally empty and used
to test empty-GT handling.
monai/metrics/meandice.py (2)

430-438: Wasteful computation when per_component=True.

Lines 432-435 compute per-channel Dice, but when per_component=True, line 437 replaces c_list entirely, discarding that work.

♻️ Suggested optimization
         for b in range(y_pred.shape[0]):
             c_list = []
-            for c in range(first_ch, n_pred_ch) if n_pred_ch > 1 else [1]:
-                x_pred = (y_pred[b, 0] == c) if (y_pred.shape[1] == 1) else y_pred[b, c].bool()
-                x = (y[b, 0] == c) if (y.shape[1] == 1) else y[b, c]
-                c_list.append(self.compute_channel(x_pred, x))
             if self.per_component:
                 c_list = [self.compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0))]
+            else:
+                for c in range(first_ch, n_pred_ch) if n_pred_ch > 1 else [1]:
+                    x_pred = (y_pred[b, 0] == c) if (y_pred.shape[1] == 1) else y_pred[b, c].bool()
+                    x = (y[b, 0] == c) if (y.shape[1] == 1) else y[b, c]
+                    c_list.append(self.compute_channel(x_pred, x))
             data.append(torch.stack(c_list))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `monai/metrics/meandice.py` around lines 430 - 438, the loop currently
computes per-channel Dice via compute_channel for every class into c_list and
then, if self.per_component is True, throws that work away by replacing c_list
with compute_cc_dice; to fix, short-circuit the per-component path: inside the
outer loop over b, check self.per_component first and only call
self.compute_cc_dice for that batch (y_pred[b].unsqueeze(0), y[b].unsqueeze(0))
to create c_list, otherwise run the existing per-channel computation using
compute_channel; this avoids wasted compute and ensures c_list is only populated
by the needed branch.

322-328: Minor: dtype inconsistency and allocation inefficiency.

Line 323 creates an intermediate tensor unnecessarily. Also, return dtype depends on platform (sn_label may return int32 or int64), but docstring promises int32.

♻️ Suggested fix
         if num == 0:
-            return torch.zeros_like(torch.from_numpy(x), dtype=torch.int32)
+            return torch.zeros(x.shape, dtype=torch.int32)
         edt_input = np.ones(cc.shape, dtype=np.uint8)
         edt_input[cc > 0] = 0
         indices = distance_transform_edt(edt_input, sampling=sampling, return_distances=False, return_indices=True)
         voronoi = cc[tuple(indices)]
-        return torch.from_numpy(voronoi)
+        return torch.from_numpy(voronoi.astype(np.int32))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `monai/metrics/meandice.py` around lines 322 - 328, the early-return creates
an unnecessary tensor from x and the final return dtype can vary; change the
num==0 branch to directly return a torch tensor of zeros with the same shape as
cc and dtype=torch.int32 (avoid torch.from_numpy(x)). After computing voronoi =
cc[tuple(indices)], ensure voronoi is cast to a stable 32-bit integer numpy type
(e.g., voronoi = voronoi.astype(np.int32)) before converting with
torch.from_numpy so the returned tensor is always int32; update the code around
variables num, cc, indices, edt_input and voronoi in meandice.py accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `monai/metrics/meandice.py`:
- Line 428: The code currently forces first_ch = 1 when per_component is True,
silently ignoring include_background; update the logic in the MeanDice class
(where first_ch is computed) to detect the conflicting flags
(include_background=True and per_component=True) and emit a clear warning (using
warnings.warn or the module logger) stating that include_background will be
ignored in per_component mode and that first_ch is set to 1; keep the existing
behavior unless you intend to change semantics, but ensure the warning is raised
at construction or first use (e.g., in __init__ or the method computing
first_ch) so users are informed.

---

Nitpick comments:

In `@tests/metrics/test_compute_meandice.py`:
- Around line 256-272: TEST_CASE_16 uses y and y_hat where batches 1-4 are
all-zero (not proper one-hot), so clarify intent: update the test comment near
TEST_CASE_16 to state that y and y_hat intentionally include all-zero batches to
validate per_component DiceMetric behavior with ignore_empty=False (expecting
1.0), reference the variables y, y_hat and the test case name TEST_CASE_16; do
not change data values, only add a concise comment explaining that these batches
are intentionally empty and used to test empty-GT handling.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 623f6db3-2ab1-4bdf-8c74-90860dc9678d

📥 Commits

Reviewing files that changed from the base of the PR and between ba2e0b3 and d9bfb5d.

📒 Files selected for processing (2)
  • monai/metrics/meandice.py
  • tests/metrics/test_compute_meandice.py

@aymuos15
Contributor

Would be nice to extend this to both 2D and 3D within this PR itself.

Signed-off-by: Vijay Vignesh Prasad Rao <vijayvigneshp02@gmail.com>
@aymuos15
Contributor

MONAI also supports cupy, e.g. https://github.com/Project-MONAI/MONAI/blob/dev/monai/data/image_reader.py, so having the option to do the connected-component step through it would be cleaner on GPUs.
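
The connected-component step could dispatch between the two backends roughly like this; a hypothetical helper, not the PR's actual code, assuming cupy may or may not be installed:

```python
import numpy as np

def connected_components(mask):
    """Label connected components, preferring cupy for GPU-resident arrays.

    Hypothetical dispatch sketch: falls back to scipy.ndimage.label when
    cupy is unavailable or the input is a host array.
    """
    try:
        import cupy as cp
        from cupyx.scipy.ndimage import label as gpu_label
        if isinstance(mask, cp.ndarray):
            return gpu_label(mask)  # result stays on the GPU
    except ImportError:
        pass
    from scipy.ndimage import label as cpu_label
    return cpu_label(np.asarray(mask))
```

Both `scipy.ndimage.label` and `cupyx.scipy.ndimage.label` return a `(labels, num_features)` pair, so callers do not need to care which backend ran.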

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
monai/metrics/meandice.py (1)

319-339: ⚠️ Potential issue | 🟠 Major

Per-component path can break on CUDA tensors.

compute_voronoi_regions_fast force-converts through NumPy and returns CPU tensors. In compute_cc_dice, this can cause device mismatch against CUDA y_idx/y_pred_idx.

Suggested fix
 def compute_voronoi_regions_fast(self, labels):
@@
-        x = np.asarray(labels)
+        labels_device = labels.device if isinstance(labels, torch.Tensor) else None
+        x = labels.detach().cpu().numpy() if isinstance(labels, torch.Tensor) else np.asarray(labels)
@@
-        if num == 0:
-            return torch.zeros_like(torch.from_numpy(x), dtype=torch.int32)
+        if num == 0:
+            out = torch.zeros_like(torch.from_numpy(x), dtype=torch.int32)
+            return out.to(labels_device) if labels_device is not None else out
@@
-        return torch.from_numpy(voronoi)
+        out = torch.from_numpy(voronoi)
+        return out.to(labels_device) if labels_device is not None else out
#!/bin/bash
set -euo pipefail
FILE="$(fd '^meandice\.py$' monai | head -n1)"
rg -n "np\.asarray\(labels\)|torch\.from_numpy\(voronoi\)|compute_voronoi_regions_fast\(y_idx\[0\]\)" "$FILE" -C2
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/metrics/meandice.py` around lines 319 - 339,
compute_voronoi_regions_fast currently uses np.asarray(labels) and
torch.from_numpy(voronoi) which breaks on CUDA tensors; in
compute_voronoi_regions_fast (and callers like compute_cc_dice) detect the
original tensor device and avoid calling np.asarray on a CUDA tensor by doing
labels_cpu = labels.detach().cpu().numpy() (or if input is already ndarray keep
it), run the NumPy/scipy operations on labels_cpu, then convert the result back
to the original device with torch.from_numpy(voronoi).to(device,
dtype=torch.int32) (or torch.tensor(..., device=device, dtype=torch.int32)) so
returned voronoi matches the device of inputs (refer to symbols
compute_voronoi_regions_fast, compute_cc_dice, labels, x, voronoi, y_idx,
y_pred_idx). Ensure dtype and shape are preserved when moving the tensor back.
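
The CPU round-trip pattern this comment describes can be sketched as follows; the names are illustrative, and the doubling operation stands in for the real NumPy-only connected-component work:

```python
import numpy as np
import torch

def numpy_roundtrip(labels):
    """Run NumPy-only work on a CPU copy, then restore the input's device.

    Sketch of the device-handling pattern suggested above.
    """
    device = labels.device if isinstance(labels, torch.Tensor) else None
    x = labels.detach().cpu().numpy() if isinstance(labels, torch.Tensor) else np.asarray(labels)
    result = (x * 2).astype(np.int32)  # placeholder for the NumPy computation
    out = torch.from_numpy(result)
    # move back so callers never see a device mismatch against CUDA inputs
    return out.to(device) if device is not None else out
```

On a CUDA tensor this returns a CUDA tensor; on a CPU tensor or plain ndarray it is a no-op move, so the same code path works everywhere.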
🧹 Nitpick comments (2)
monai/metrics/meandice.py (1)

440-447: Skip per-channel loop when per_component=True.

The channel loop work is discarded immediately after, so this is avoidable compute.

Suggested refactor
         data = []
         for b in range(y_pred.shape[0]):
+            if self.per_component:
+                data.append(torch.stack([self.compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0))]))
+                continue
             c_list = []
             for c in range(first_ch, n_pred_ch) if n_pred_ch > 1 else [1]:
                 x_pred = (y_pred[b, 0] == c) if (y_pred.shape[1] == 1) else y_pred[b, c].bool()
                 x = (y[b, 0] == c) if (y.shape[1] == 1) else y[b, c]
                 c_list.append(self.compute_channel(x_pred, x))
-            if self.per_component:
-                c_list = [self.compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0))]
             data.append(torch.stack(c_list))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/metrics/meandice.py` around lines 440 - 447, The current implementation
always iterates per-channel (loop over b and c using compute_channel) even when
self.per_component is True, then discards that work by replacing c_list with
compute_cc_dice; to fix, short-circuit the per-channel loop when
self.per_component is True (e.g., check self.per_component before entering the
per-channel c loop or branch at the start of the b-loop) and directly call
self.compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0)) instead
of building c_list via compute_channel, thus avoiding unnecessary
compute_channel calls for y_pred, y and redundant boolean conversions.
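
The short-circuit the reviewer suggests amounts to branching before any per-channel work; a minimal stand-alone sketch with stub compute functions in place of the real ones:

```python
import torch

def per_item_scores(y_pred, y, per_component, compute_channel, compute_cc_dice):
    """Branch per batch item before the channel loop (illustrative sketch)."""
    data = []
    for b in range(y_pred.shape[0]):
        if per_component:
            # per-component path: no per-channel Dice is computed at all
            data.append(torch.stack([compute_cc_dice(y_pred[b : b + 1], y[b : b + 1])]))
            continue
        c_list = [
            compute_channel(y_pred[b, c].bool(), y[b, c])
            for c in range(1, y_pred.shape[1])
        ]
        data.append(torch.stack(c_list))
    return data
```

With this structure the channel loop is never entered in per-component mode, so no work is computed and then discarded.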
tests/metrics/test_compute_meandice.py (1)

353-357: Add invalid-shape coverage beyond channel-count mismatch.

Please add cases for mismatched spatial sizes and mixed 2D/3D rank under per_component=True.

Suggested additions
     @unittest.skipUnless(has_ndimage, "Requires scipy.ndimage.")
     def test_input_dimensions(self):
         with self.assertRaises(ValueError):
             DiceMetric(per_component=True)(torch.ones([3, 3, 144, 144]), torch.ones([3, 3, 145, 145]))
+        with self.assertRaises(ValueError):
+            DiceMetric(per_component=True)(torch.ones([3, 2, 144, 144]), torch.ones([3, 2, 145, 145]))
+        with self.assertRaises(ValueError):
+            DiceMetric(per_component=True)(torch.ones([3, 2, 16, 144, 144]), torch.ones([3, 2, 144, 144]))

As per coding guidelines, "Ensure new or modified definitions will be covered by existing or new unit tests."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/metrics/test_compute_meandice.py` around lines 353 - 357, Extend the
test_input_dimensions case to cover spatial-size mismatches and mixed 2D/3D rank
when using DiceMetric(per_component=True): add self.assertRaises(ValueError)
checks for (a) inputs with same batch/channel but differing spatial dimensions
(e.g., torch.ones([3,3,144,144]) vs torch.ones([3,3,144,145])) and (b) inputs
with different spatial rank (e.g., a 2D tensor torch.ones([3,3,144,144]) vs a 3D
tensor torch.ones([3,3,10,144,144])); keep these in the same
test_input_dimensions method and use DiceMetric(per_component=True)(...) to
trigger the ValueError.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/metrics/meandice.py`:
- Around line 306-350: Update the docstrings for compute_voronoi_regions_fast
and compute_cc_dice to include a "Raises:" section that documents all exceptions
the functions can throw: for compute_voronoi_regions_fast, document RuntimeError
when scipy.ndimage is not available and ValueError when input rank is not 2 or 3
(and any other potential errors like unexpected failures from
distance_transform_edt or sn_label if desired); for compute_cc_dice, add any
raised exceptions (e.g., shape/type validation errors) describing the condition
that triggers them. Use the Google-style "Raises:" format, reference the
function names and the specific conditions that cause each exception.
- Around line 431-436: When self.per_component is true, require y_pred and y to
have identical ranks and spatial dimensions as well as exactly 2 channels: check
that y_pred.ndim == y.ndim, that ndim is in (4,5), that y_pred.shape[1] ==
y.shape[1] == 2, and that all spatial dimensions y_pred.shape[2:] ==
y.shape[2:]; if any check fails raise a ValueError with a clear message
including both shapes; update the validation in meandice (the block using
self.per_component, y_pred and y) to perform these exact compatibility checks so
mixed ranks or mismatched spatial sizes are rejected early with a helpful error.

In `@tests/metrics/test_compute_meandice.py`:
- Line 350: Remove the debug print statement that prints test output by deleting
the call print(result.cpu().numpy()) in the test (e.g., inside the
test_compute_meandice test function or helper where result is computed); ensure
no other stray prints remain so the test only uses assertions to validate
behavior and produces no noisy CI output.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2dfdbc09-add0-4153-ba11-1f31da42b748

📥 Commits

Reviewing files that changed from the base of the PR and between d9bfb5d and 6f2155c.

📒 Files selected for processing (2)
  • monai/metrics/meandice.py
  • tests/metrics/test_compute_meandice.py

Signed-off-by: Vijay Vignesh Prasad Rao <vijayvigneshp02@gmail.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
monai/metrics/meandice.py (1)

466-476: ⚠️ Potential issue | 🟠 Major

The same_shape check never runs for valid 2-channel inputs.

This block (line 467) is only entered when the rank/channel checks already fail, so a (B, 2, ...) spatial mismatch skips validation here and blows up later with a less clear tensor-shape error.

Suggested fix
         if self.per_component:
-            if y_pred.ndim not in (4, 5) or y.ndim not in (4, 5) or y_pred.shape[1] != 2 or y.shape[1] != 2:
-                same_rank = y_pred.ndim == y.ndim and y_pred.ndim in (4, 5)
-                binary_channels = y_pred.shape[1] == 2 and y.shape[1] == 2
-                same_shape = y_pred.shape == y.shape
-                if not (same_rank and binary_channels and same_shape):
-                    raise ValueError(
-                        "per_component requires matching 4D/5D binary tensors "
-                        "(B, 2, H, W) or (B, 2, D, H, W). "
-                        f"Got y_pred={tuple(y_pred.shape)}, y={tuple(y.shape)}."
-                    )
+            same_rank = y_pred.ndim == y.ndim and y_pred.ndim in (4, 5)
+            binary_channels = same_rank and y_pred.shape[1] == 2 and y.shape[1] == 2
+            same_shape = y_pred.shape == y.shape
+            if not (same_rank and binary_channels and same_shape):
+                raise ValueError(
+                    "per_component requires matching 4D/5D binary tensors "
+                    "(B, 2, H, W) or (B, 2, D, H, W). "
+                    f"Got y_pred={tuple(y_pred.shape)}, y={tuple(y.shape)}."
+                )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/metrics/meandice.py` around lines 466 - 476, The current conditional
computes same_shape only in the branch where ndim/channel checks already failed,
so spatial mismatches for valid 2-channel inputs slip through; modify the
per_component validation in the Meandice logic to compute same_rank,
binary_channels, and same_shape unconditionally (regardless of the initial
ndim/channel check) and then raise the ValueError if not (same_rank and
binary_channels and same_shape), referencing the existing variables same_rank,
binary_channels, same_shape and the inputs y_pred and y so that (B,2,...)
spatial mismatches are caught with the clear error message.
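
Pulled out as a stand-alone function, the unconditional validation the fix proposes looks like this; the function name is illustrative, while the checks and message follow the suggested diff:

```python
import torch

def validate_per_component_inputs(y_pred: torch.Tensor, y: torch.Tensor) -> None:
    """Validate inputs for per_component mode (sketch of the suggested fix)."""
    # compute all three predicates unconditionally, then fail on any violation
    same_rank = y_pred.ndim == y.ndim and y_pred.ndim in (4, 5)
    binary_channels = same_rank and y_pred.shape[1] == 2 and y.shape[1] == 2
    same_shape = y_pred.shape == y.shape
    if not (same_rank and binary_channels and same_shape):
        raise ValueError(
            "per_component requires matching 4D/5D binary tensors "
            "(B, 2, H, W) or (B, 2, D, H, W). "
            f"Got y_pred={tuple(y_pred.shape)}, y={tuple(y.shape)}."
        )
```

Because same_shape is evaluated regardless of the rank/channel result, a (B, 2, ...) pair with mismatched spatial sizes is now rejected early with the clear message.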
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/metrics/meandice.py`:
- Around line 480-488: Inside the per-batch loop the branch that handles
self.per_component incorrectly wraps the tensor returned by compute_cc_dice(...)
in a list, adding an extra singleton axis and doing needless per-channel work;
instead call compute_cc_dice(y_pred=y_pred[b].unsqueeze(0), y=y[b].unsqueeze(0))
and use its returned per-item tensor directly (or remove its leading batch dim
with .squeeze(0) / index [0] as needed) so that you append a (C,) or (1,)
channel tensor to data rather than creating a (1,1) singleton; update the c_list
assignment to use the compute_cc_dice result without additional wrapping and
remove the prior per-channel compute_channel calls when per_component is true.

In `@tests/metrics/test_compute_meandice.py`:
- Around line 345-358: The two tests mutate the shared input_data
(test_cc_dice_value_nogpu moves tensors to CPU) causing order-dependent GPU
coverage; update both test_cc_dice_value_nogpu and test_cc_dice_value_gpu to
work on deep copies of the parameter/input dicts (e.g., deepcopy input_data)
instead of mutating the shared object, and explicitly move the copies to the
desired device inside each test (CPU for test_cc_dice_value_nogpu, CUDA for
test_cc_dice_value_gpu) before calling DiceMetric(**params) and
dice_metric(...); this ensures TEST_CASE_16/TEST_CASE_17 remain unchanged and
the GPU path is exercised reliably.
- Around line 364-367: The test currently fails for the wrong reason (channel
count 3), so add an additional assertRaises case that uses the expected
per-component channel size (2) but has mismatched dimensions to exercise the
per-component validator: in test_input_dimensions, keep the existing check and
add a new with DiceMetric(per_component=True) called on tensors like
torch.ones([3, 2, 144, 144]) and torch.ones([3, 2, 145, 145]) (and optionally
another with batch mismatch, e.g., torch.ones([3, 2, 144, 144]) vs
torch.ones([4, 2, 144, 144])) so the ValueError is triggered for spatial or
batch shape mismatch rather than wrong channel count.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 088ad292-b334-4046-8ade-8c1cde80d3a0

📥 Commits

Reviewing files that changed from the base of the PR and between 6f2155c and ba05438.

📒 Files selected for processing (2)
  • monai/metrics/meandice.py
  • tests/metrics/test_compute_meandice.py

Signed-off-by: Vijay Vignesh Prasad Rao <vijayvigneshp02@gmail.com>
@VijayVignesh1
Author

MONAI also supports cupy, e.g. https://github.com/Project-MONAI/MONAI/blob/dev/monai/data/image_reader.py, so having the option to do the connected-component step through it would be cleaner on GPUs.

I've added both 2D support and a cupy option.



Development

Successfully merging this pull request may close these issues.

Feature Request: Evaluation of Semantic Segmentation Metrics on a per-component basis
