using predictor.model/trainer.model in inference_on_dataset #4133
              
Unanswered
andreaceruti asked this question in Q&A
What is the right way of evaluating my model?
If I use predictor.model I get lower results than with trainer.model; could you explain the reason?
In my case I train a Mask R-CNN model, saving the best weights with a hook according to the validation loss (I have implemented validation loss + early stopping). At the end of training I create another cfg and point it at the best weights found before. Then I run the evaluation on my test set, but the problem is that the evaluation done with predictor.model is lower than the one done with trainer.model. A sketch of my evaluation setup is below.
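
For reference, this is roughly how I evaluate the best checkpoint; a minimal sketch assuming a standard detectron2 setup. The checkpoint path `./output/model_best.pth`, the dataset name `my_dataset_test`, and the number of classes are placeholders from my project, and the `COCOEvaluator` constructor may take extra arguments in older detectron2 versions:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.modeling import build_model

# Fresh cfg pointing at the best checkpoint saved by my hook.
# "model_best.pth" and "my_dataset_test" are placeholders.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # same value used during training
cfg.MODEL.WEIGHTS = "./output/model_best.pth"
cfg.DATASETS.TEST = ("my_dataset_test",)

# Build a model and load the best weights explicitly, so I know
# exactly which checkpoint is being evaluated.
model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)

# inference_on_dataset switches the model to eval mode internally.
evaluator = COCOEvaluator("my_dataset_test", output_dir="./output")
test_loader = build_detection_test_loader(cfg, "my_dataset_test")
results = inference_on_dataset(model, test_loader, evaluator)
print(results)
```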
So what could be the reason? Maybe having the model with the smallest validation loss does not mean I also have the model with the best evaluation metrics?
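
For completeness, here is my understanding of what the two objects actually evaluate; a sketch using the same placeholder names as above, with my assumptions spelled out in the comments:

```python
from detectron2.engine import DefaultPredictor, DefaultTrainer
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

evaluator = COCOEvaluator("my_dataset_test", output_dir="./output")
loader = build_detection_test_loader(cfg, "my_dataset_test")

# Path A: trainer.model holds whatever weights are in memory when
# training stops, i.e. the LAST iteration, which is not necessarily
# the best checkpoint my hook saved to disk.
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
results_last = inference_on_dataset(trainer.model, loader, evaluator)

# Path B: DefaultPredictor builds a new model and loads
# cfg.MODEL.WEIGHTS, so it evaluates exactly the checkpoint on disk
# (the best-validation-loss weights in my case).
predictor = DefaultPredictor(cfg)
results_best = inference_on_dataset(predictor.model, loader, evaluator)
```

If that reading is correct, the two numbers come from different checkpoints, and the gap would just mean that the lowest validation loss does not coincide with the best test metrics.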