Description
Describe the bug
I'm trying to run the GANomaly model on my own dataset. I followed the basic instructions to validate the model and used the code provided in the documentation to verify that execution goes as expected. However, the following error occurs:
AttributeError: 'F1AdaptiveThreshold' object has no attribute 'update_called'. Did you mean: '_update_called'?
Dataset
Folder
Model
GANomaly
Steps to reproduce the behavior
- Install

```shell
git clone https://github.com/openvinotoolkit/anomalib.git
cd anomalib
pip install -e .
```

- Code
```python
from pathlib import Path

from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import Ganomaly


def main():
    root_path = Path("D:/AI/anomalib/dataset")
    datamodule = Folder(
        name="mydataset",
        root=root_path,
        normal_dir=root_path / "train/class_x",
        abnormal_dir=root_path / "val/class_y",
        normal_test_dir=root_path / "val/class_x",
    )
    model = Ganomaly()
    engine = Engine()
    engine.fit(model, datamodule=datamodule)
    predictions = engine.predict(model, datamodule=datamodule)


if __name__ == "__main__":
    main()
```
- Error
AttributeError: 'F1AdaptiveThreshold' object has no attribute 'update_called'. Did you mean: '_update_called'?
OS information:
- OS: Windows 10
- Python version: 3.11.9
- Anomalib version: 2.0.0
- PyTorch version: 2.7.1
- CUDA/cuDNN version: 11.8
- GPU models and configuration: RTX 4000 ada
- Any other relevant information: I'm using a custom dataset
Expected behavior
I expected the model to train successfully and the results to be visualized.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
main
Configuration YAML
```yaml
pre_processor: true
post_processor: true
evaluator: true
visualizer: true
batch_size: 32
n_features: 64
latent_vec_size: 100
extra_layers: 0
add_final_conv_layer: true
wadv: 1
wcon: 50
wenc: 1
lr: 0.0002
beta1: 0.5
beta2: 0.999
```
Logs
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\loops\utilities.py:73: `max_epochs` was not set. Setting it to 1000 epochs. To train without an epoch limit, set `max_epochs=-1`.
You are using a CUDA device ('NVIDIA RTX A4000') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params | Mode
-----------------------------------------------------------------
0 | pre_processor | PreProcessor | 0 | train
1 | post_processor | PostProcessor | 0 | train
2 | evaluator | Evaluator | 0 | train
3 | model | GanomalyModel | 188 M | train
4 | generator_loss | GeneratorLoss | 0 | train
5 | discriminator_loss | DiscriminatorLoss | 0 | train
-----------------------------------------------------------------
188 M Trainable params
0 Non-trainable params
188 M Total params
754.758 Total estimated model params size (MB)
112 Modules in train mode
0 Modules in eval mode
D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:420: Consider setting `persistent_workers=True` in 'train_dataloader' to speed up the dataloader worker initialization.
Epoch 0: 0%| | 0/71 [00:00<?, ?it/s] D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:420: Consider setting `persistent_workers=True` in 'val_dataloader' to speed up the dataloader worker initialization.
D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\core\module.py:512: You called `self.log('generator_loss', ..., logger=True)` but have no logger configured. You can enable one by doing `Trainer(logger=ALogger(...))`
D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\utilities\data.py:79: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 3. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\core\module.py:512: You called `self.log('discriminator_loss', ..., logger=True)` but have no logger configured. You can enable one by doing `Trainer(logger=ALogger(...))`
Epoch 0: 100%|██████████| 71/71 [00:24<00:00, 2.89it/s, generator_loss_step=59.30, discriminator_loss_step=3.92e-6]
Validation DataLoader 0: 100%|██████████| 88/88 [00:04<00:00, 18.69it/s]
Traceback (most recent call last):
File "D:\AI\anomalib\main.py", line 25, in <module>
main()
File "D:\AI\anomalib\main.py", line 21, in main
engine.fit(model, datamodule=datamodule)
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\anomalib\engine\engine.py", line 416, in fit
self.trainer.fit(model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\trainer\trainer.py", line 561, in fit
call._call_and_handle_interrupt(
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\trainer\call.py", line 48, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\trainer\trainer.py", line 599, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\trainer\trainer.py", line 1012, in _run
results = self._run_stage()
^^^^^^^^^^^^^^^^^
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\trainer\trainer.py", line 1056, in _run_stage
self.fit_loop.run()
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 216, in run
self.advance()
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 455, in advance
self.epoch_loop.run(self._data_fetcher)
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\loops\training_epoch_loop.py", line 153, in run
self.on_advance_end(data_fetcher)
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\loops\training_epoch_loop.py", line 394, in on_advance_end
self.val_loop.run()
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\loops\utilities.py", line 179, in _decorator
return loop_run(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\loops\evaluation_loop.py", line 152, in run
return self.on_run_end()
^^^^^^^^^^^^^^^^^
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\loops\evaluation_loop.py", line 295, in on_run_end
self._on_evaluation_epoch_end()
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\loops\evaluation_loop.py", line 374, in _on_evaluation_epoch_end
call._call_callback_hooks(trainer, hook_name)
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\lightning\pytorch\trainer\call.py", line 227, in _call_callback_hooks
fn(trainer, trainer.lightning_module, *args, **kwargs)
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\anomalib\post_processing\post_processor.py", line 138, in on_validation_epoch_end
if self._image_threshold_metric.update_called:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\anomalib\.venvAlib\Lib\site-packages\torch\nn\modules\module.py", line 1940, in __getattr__
raise AttributeError(
AttributeError: 'F1AdaptiveThreshold' object has no attribute 'update_called'. Did you mean: '_update_called'?
Epoch 0: 100%|██████████| 71/71 [01:31<00:00, 0.77it/s, generator_loss_step=59.30, discriminator_loss_step=3.92e-6]
Code of Conduct
- I agree to follow this project's Code of Conduct