Description
Checklist
- I have searched for similar issues.
- I have tested with the latest development wheel.
- I have checked the release documentation and the latest documentation (for the main branch).
My Question
I'm training my model on a dataset containing labels [4, 12, 14], but at inference I get back [3, 11, 13] with accuracies in the high 90%s. Checking internally, that's because the code excludes unlabeled:0 from its set of predictable labels (which makes sense), leaving 19 possible labels indexed 0-18, as opposed to the original 20 indexed 0-19 (including unlabeled). So I sort of get it. I just find the return values unintuitive, since they aren't indexed the same as my input values but are offset by -1, which you have to know implicitly from the outside.
The fix in the external code is pretty trivial: just increment the predicted labels by 1 (see the sketch below). But again, at least to me, this seems like an upstream issue.
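For reference, here is a minimal sketch of the workaround I mean, assuming the predictions come back as a numpy array of class indices (the variable names are mine, not from the library):

```python
import numpy as np

# Hypothetical example: the model returns labels indexed over the
# 19 trainable classes (unlabeled:0 removed), i.e. 0-18.
pred_labels = np.array([3, 11, 13])

# Shift back into the original 20-class label space (0-19, where 0
# is unlabeled) so the values match the dataset's annotations.
original_labels = pred_labels + 1

print(original_labels)  # [ 4 12 14]
```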
Thanks for your help. :)