Publications
RobBERTje: A Distilled Dutch BERT Model (KU Leuven)
Pre-trained large-scale language models such as BERT have gained a lot of attention thanks to their outstanding performance on a wide range of natural language tasks. However, due to their large number of parameters, they are resource-intensive both to deploy and to fine-tune. Researchers have created several methods for distilling language models into smaller ones to increase efficiency, with a small performance trade-off. In this paper, we ...
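As illustration only (not the procedure used for RobBERTje), a minimal sketch of the standard knowledge-distillation objective in PyTorch: the student is trained on a weighted sum of the usual cross-entropy on gold labels and the KL divergence to the temperature-softened teacher distribution. The temperature `T` and weight `alpha` are generic hyperparameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic knowledge-distillation loss (hard labels + softened teacher)."""
    # Supervised term: cross-entropy against the gold labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # Distillation term: KL divergence between temperature-softened
    # teacher and student output distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to be comparable across temperatures
    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```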
Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models (KU Leuven, Universiteit Antwerpen)
The CNAME of the Game: Large-scale Analysis of DNS-based Tracking Evasion (KU Leuven)
Online tracking is a whack-a-mole game between trackers, who build and monetize behavioral user profiles through intrusive data collection, and anti-tracking mechanisms, deployed as browser extensions, built into the browser, or as DNS resolvers. In response to pervasive and opaque online tracking, more and more users adopt anti-tracking tools to preserve their privacy. Consequently, as the information that trackers can gather on users is ...
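As illustration only (not the paper's measurement pipeline), a minimal sketch of how DNS-based (CNAME) tracking can be surfaced: follow the CNAME chain of a seemingly first-party subdomain and check whether it ultimately aliases to a known tracker domain. The dnspython library, the example hostname, and the placeholder blocklist are assumptions for the sketch.

```python
import dns.resolver  # third-party: dnspython

def cname_chain(hostname: str) -> list[str]:
    """Follow CNAME records starting from a (seemingly) first-party hostname."""
    chain, current = [], hostname
    for _ in range(10):  # cap the chain length to avoid loops
        try:
            answer = dns.resolver.resolve(current, "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            break
        current = str(answer[0].target).rstrip(".")
        chain.append(current)
    return chain

# Hypothetical example: a first-party-looking subdomain that aliases to a tracker.
tracker_domains = {"tracker.example"}  # placeholder blocklist, not a real list
chain = cname_chain("metrics.news-site.example")
cloaked = any(host.endswith(tuple(tracker_domains)) for host in chain)
```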