Artūrs Znotiņš, Guntis Barzdins
This paper presents LVBERT – the first publicly available monolingual language model pre-trained for Latvian. We show that LVBERT improves the state-of-the-art for three Latvian NLP tasks: Part-of-Speech tagging, Named Entity Recognition, and Universal Dependency parsing. We release LVBERT to facilitate future research and downstream applications for Latvian NLP.