Hello, I have been trying to replicate the training for your model, but I have not quite succeeded, so I want to confirm a few things:
- Is the BERT tokenizer used bert-base-uncased?
- During training, is the pre-trained model passed to DocumentBertCombineWordDocumentLinear and DocumentBertSentenceChunkAttentionLSTM also bert-base-uncased, with the first 11 layers frozen? (My freezing step is sketched below.)
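For reference, this is roughly how I am freezing the layers on my side; it is a minimal sketch using the HuggingFace transformers API, and whether the embeddings should also be frozen is my own assumption, not something I found in the repo:

```python
from transformers import BertTokenizer, BertModel

# Assumption: the standard bert-base-uncased checkpoint from HuggingFace.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

# Freeze the embeddings (my assumption) and the first 11 of the 12 encoder
# layers, leaving only the last encoder layer and the pooler trainable.
for param in bert.embeddings.parameters():
    param.requires_grad = False
for layer in bert.encoder.layer[:11]:
    for param in layer.parameters():
        param.requires_grad = False
```

Is this consistent with what you did in your training runs?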