2 changes: 1 addition & 1 deletion models/vista2d/configs/inference.json
@@ -11,7 +11,7 @@
256,
256
],
"input_dict": "${'image': '/home/venn/Desktop/data/medical/cellpose_dataset/test/001_img.png'}",
"input_dict": "${'image': '/cellpose_dataset/test/001_img.png'}",
"device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
"sam_ckpt_path": "$@ckpt_dir + '/sam_vit_b_01ec64.pth'",
"pretrained_ckpt_path": "$@ckpt_dir + '/model.pt'",
10 changes: 10 additions & 0 deletions models/vista2d/configs/inference_trt.json
@@ -0,0 +1,10 @@
{
"imports": [
"$import numpy",
"$from monai.networks import trt_compile"
],
"trt_args": {
"dynamic_batchsize": "$[1, @inferer#sw_batch_size, @inferer#sw_batch_size]"
},
"network": "$trt_compile(@network_def.to(@device), @pretrained_ckpt_path, args=@trt_args)"
}
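Read as plain Python, the config above amounts to roughly the following (a sketch only: the UNet stand-in, checkpoint path, and `sw_batch_size` value are assumptions for what `configs/inference.json` actually supplies):

```python
# Rough sketch of what the bundle evaluates the config above to.
# The UNet is only a stand-in for the bundle's real network_def,
# and "models/model.pt" is an assumed checkpoint path.
import torch
from monai.networks import trt_compile
from monai.networks.nets import UNet

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
network_def = UNet(spatial_dims=2, in_channels=3, out_channels=3,
                   channels=(32, 64, 128), strides=(2, 2))

sw_batch_size = 1  # assumed value of @inferer#sw_batch_size
trt_args = {"dynamic_batchsize": [1, sw_batch_size, sw_batch_size]}

# Wrap the model; the TensorRT engine is built lazily on first use and
# cached alongside the given checkpoint path.
network = trt_compile(network_def.to(device), "models/model.pt", args=trt_args)
```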
8 changes: 7 additions & 1 deletion models/vista2d/docs/README.md
@@ -66,7 +66,13 @@ torchrun --nproc_per_node=gpu -m monai.bundle run_workflow "scripts.workflow.Vis
python -m monai.bundle run --config_file configs/inference.json
```

Please note that the data used in the config file is: "/cellpose_dataset/test/001_img.png", if the dataset path is different or you want to do inference on another file, please modify in `configs/inference.json` accordingly.
Please note that the data path used in this config file is "/cellpose_dataset/test/001_img.png". If your dataset is stored elsewhere, or you want to run inference on another file, update the `image` entry of `input_dict` in `configs/inference.json` accordingly.

#### Execute inference with the TensorRT model

```bash
python -m monai.bundle run --config_file "['configs/inference.json', 'configs/inference_trt.json']"
```
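The same combined run can also be launched from Python; a minimal sketch, assuming `monai.bundle.run` is invoked from the bundle root so the relative config paths resolve:

```python
# Sketch: programmatic equivalent of the CLI call above, merging the
# base inference config with the TensorRT override config.
from monai.bundle import run

run(config_file=["configs/inference.json", "configs/inference_trt.json"])
```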

### Execute multi-GPU inference
```bash
9 changes: 9 additions & 0 deletions models/vista3d/configs/inference_trt.json
@@ -0,0 +1,9 @@
{
"+imports": [
"$from monai.networks import trt_compile"
],
"trt_args": {
"dynamic_batchsize": "$[1, @inferer#sw_batch_size, @inferer#sw_batch_size]"
},
"network": "$trt_compile(@network_def.to(@device), @bundle_root + '/models/model.pt', args=@trt_args, submodule=['image_encoder.encoder', 'class_head'])"
}
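The vista3d config differs mainly in compiling selected submodules rather than the whole network. A rough, self-contained sketch of that call (the toy module and checkpoint path below are placeholders for the bundle's real network and `@bundle_root` path):

```python
# Sketch: compile only named submodules; a toy module stands in for
# the bundle's real vista3d network, and the checkpoint path is assumed.
from torch import nn
from monai.networks import trt_compile

model = nn.Module()
model.image_encoder = nn.Module()
model.image_encoder.encoder = nn.Conv3d(1, 8, 3, padding=1)
model.class_head = nn.Linear(8, 2)

sw_batch_size = 1  # assumed value of @inferer#sw_batch_size
trt_args = {"dynamic_batchsize": [1, sw_batch_size, sw_batch_size]}

# Only the listed submodules are wrapped for TensorRT execution;
# the rest of the model runs unchanged.
model = trt_compile(model, "models/model.pt", args=trt_args,
                    submodule=["image_encoder.encoder", "class_head"])
```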
7 changes: 7 additions & 0 deletions models/vista3d/docs/README.md
@@ -172,6 +172,13 @@ This default is overridable by changing the input folder `input_dir`, or the inp

Set `"postprocessing#transforms#0#_disabled_": false` to move the postprocessing to cpu to reduce the GPU memory footprint.

#### Execute inference with the TensorRT model

```bash
python -m monai.bundle run --config_file "['configs/inference.json', 'configs/inference_trt.json']"
```


## Automatic segmentation label prompts
The mapping between organ names and label prompts is in the [json file](labels.json).
