Publications
RobBERTje: A Distilled Dutch BERT Model (KU Leuven)
Pre-trained large-scale language models such as BERT have attracted considerable attention thanks to their outstanding performance on a wide range of natural language tasks. However, due to their large number of parameters, they are resource-intensive both to deploy and to fine-tune. Researchers have therefore developed several methods for distilling language models into smaller ones, trading a small amount of performance for greater efficiency. In this paper, we ...
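The distillation the abstract refers to is, in its generic form, a weighted mix of the ordinary hard-label loss and a soft-target term that matches the teacher's output distribution. A minimal PyTorch sketch of that generic objective (not the paper's exact recipe; all names and hyperparameter values are illustrative):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Hard-label term: ordinary cross-entropy against the gold labels.
        hard = F.cross_entropy(student_logits, labels)
        # Soft-target term: KL divergence between the temperature-softened
        # teacher and student distributions (Hinton-style distillation).
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2  # conventional T^2 scaling of the soft term
        return alpha * hard + (1 - alpha) * soft

The temperature softens both distributions so the student also learns from the teacher's relative probabilities over incorrect classes, which is where much of the distilled knowledge resides.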
Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models (KU Leuven, University of Antwerp)
RobBERT: a Dutch RoBERTa-based Language Model (KU Leuven)
Pre-trained language models have dominated the field of natural language processing in recent years and have led to significant performance gains on a variety of complex natural language tasks. One of the most prominent pre-trained language models is BERT, which was released in both an English-only and a multilingual version. Although multilingual BERT performs well on many tasks, recent studies show that BERT models trained on a single language ...
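For context, RobBERT is distributed through the Hugging Face model hub and can be used like any other RoBERTa checkpoint. A minimal fill-mask sketch (the identifier pdelobelle/robbert-v2-dutch-base is an assumption; check the hub for the current model name):

    from transformers import AutoTokenizer, AutoModelForMaskedLM

    # Model identifier assumed; verify on the Hugging Face hub.
    name = "pdelobelle/robbert-v2-dutch-base"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForMaskedLM.from_pretrained(name)

    # Score candidates for the masked Dutch word.
    inputs = tokenizer("Er staat een <mask> in mijn tuin.", return_tensors="pt")
    logits = model(**inputs).logits

    # Print the top-5 tokens predicted for the masked position.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    top5 = logits[0, mask_pos].topk(5).indices.tolist()
    print(tokenizer.convert_ids_to_tokens(top5))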