INTERSPEECH 2024

As Biased as You Measure: Methodological Pitfalls of Bias Evaluations in Speaker Verification Research

Abstract:

Detecting and mitigating bias in speaker verification systems is important, as datasets, processing choices and algorithms can lead to performance differences that systematically favour some groups of people while disadvantaging others. Prior studies have thus measured performance differences across groups to evaluate bias. However, when comparing results across studies, it becomes apparent that they draw contradictory conclusions, hindering progress in this area. In this paper we investigate how measurement impacts the outcomes of bias evaluations. We show empirically that bias evaluations are strongly influenced by base metrics that measure performance, by the choice of ratio or difference-based bias measure, and by the aggregation of bias measures into meta-measures. Based on our findings, we recommend the use of ratio-based bias measures, in particular when the values of base metrics are small, or when base metrics with different orders of magnitude need to be compared.
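As a rough illustration of the distinction the abstract draws (not taken from the paper), the sketch below assumes equal error rate (EER) as the base metric and hypothetical per-group values. When base metric values are small, a difference-based bias measure can look negligible while a ratio-based measure exposes the same relative disparity:

# Hypothetical example (not from the paper): difference-based vs
# ratio-based bias measures computed from per-group base metrics.

def difference_bias(group_metric: float, reference_metric: float) -> float:
    # Absolute gap between a group's metric and a reference value.
    return group_metric - reference_metric

def ratio_bias(group_metric: float, reference_metric: float) -> float:
    # Relative disparity; insensitive to the metric's order of magnitude.
    return group_metric / reference_metric

# Assumed EER values for two demographic groups (illustrative only).
eer_group_a = 0.010   # 1.0% EER
eer_group_b = 0.020   # 2.0% EER

print(difference_bias(eer_group_b, eer_group_a))  # 0.01 -> looks negligible
print(ratio_bias(eer_group_b, eer_group_a))       # 2.0  -> twice the error rate

The same contrast holds when comparing base metrics of different orders of magnitude (e.g., false accept vs false reject rates): the ratio remains comparable across metrics, whereas the raw difference does not.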


BibTeX:
@inproceedings{Hutiri:interspeech2024,
 author = {Hutiri, Wiebke and Patel, Tanvina and Ding, Aaron Yi and Scharenborg, Odette},
 title = {As Biased as You Measure: Methodological Pitfalls of Bias Evaluations in Speaker Verification Research},
 booktitle = {Proceedings of the 25th INTERSPEECH Conference},
 series = {INTERSPEECH '24},
 year = {2024},
 publisher = {ISCA}
}
How to cite:

Wiebke Hutiri, Tanvina Patel, Aaron Yi Ding, Odette Scharenborg. 2024. "As Biased as You Measure: Methodological Pitfalls of Bias Evaluations in Speaker Verification Research". In Proceedings of the 25th INTERSPEECH Conference (INTERSPEECH '24).