Lydia Nishimwe
Your Fairseq-trained model might have more embedding parameters than it should.
How a bug in reading SentencePiece vocabulary files causes some Fairseq-trained models to have up to 3k extra parameters in the embedding layer.
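To see why a vocabulary-reading bug shows up as extra parameters: an embedding layer has `vocab_size × embed_dim` weights, so every spurious vocabulary entry adds a full row. A hypothetical illustration (the dimensions and the number of spurious entries are assumptions, not fairseq's actual values):

```python
# Embedding parameter count is vocab_size * embed_dim, so any spurious
# entries picked up when reading a SentencePiece vocab file inflate it.
embed_dim = 512          # assumed model embedding dimension
true_vocab = 32_000      # assumed SentencePiece vocabulary size
spurious_entries = 6     # hypothetical extra tokens from a parsing bug

correct_params = true_vocab * embed_dim
inflated_params = (true_vocab + spurious_entries) * embed_dim
print(inflated_params - correct_params)  # 6 * 512 = 3072, i.e. ~3k extra
```

With a 512-dimensional embedding, only a handful of mis-read vocabulary entries is enough to account for roughly 3k extra parameters.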
Lydia Nishimwe, posted on Mar 16, 2024. Last updated on Nov 27, 2024.