INTERSPEECH 2022

Design Guidelines for Inclusive Speaker Verification Evaluation Datasets

Abstract:

Speaker verification (SV) provides billions of voice-enabled 
devices with access control and ensures the security of 
voice-driven technologies. As a form of biometrics, SV must be 
unbiased, delivering consistent and reliable performance across 
speakers irrespective of their demographic, social and economic 
attributes. Current SV evaluation practices are insufficient for 
evaluating bias: they are over-simplified, aggregate across users, 
are not representative of real-life usage scenarios, and do not 
account for the consequences of errors. This paper proposes design 
guidelines for constructing SV evaluation datasets that address 
these shortcomings. We propose a schema for grading the difficulty 
of utterance pairs, and present an algorithm for generating 
inclusive SV datasets. We empirically validate our proposed 
method in a set of experiments on the VoxCeleb1 dataset. 
Our results confirm that the number of utterance pairs per speaker 
and the difficulty grading of utterance pairs have a 
significant effect on evaluation performance and variability. 
Our work contributes to the development of SV evaluation 
practices that are inclusive and fair.
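To make the idea of graded utterance-pair difficulty concrete, here is a minimal toy sketch. This is a hypothetical heuristic for illustration only, not the grading schema proposed in the paper: it assumes that same-speaker (target) pairs become harder across recording sessions, and that different-speaker (non-target) pairs become harder as demographic attributes of the impostor match the enrolled speaker. The `Utterance` fields and grade values are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker_id: str
    session_id: str
    gender: str
    nationality: str

def pair_difficulty(a: Utterance, b: Utterance) -> int:
    """Toy difficulty grade for an SV trial pair (0 = easiest).

    Hypothetical heuristic, not the paper's schema: target trials
    get harder across sessions; non-target trials get harder as
    demographic attributes of the two speakers match.
    """
    if a.speaker_id == b.speaker_id:
        # Target trial: cross-session pairs are harder than same-session.
        return 0 if a.session_id == b.session_id else 1
    # Non-target trial: each shared attribute makes the impostor
    # harder to reject, so it raises the grade.
    grade = 2
    grade += int(a.gender == b.gender)
    grade += int(a.nationality == b.nationality)
    return grade
```

A dataset-generation procedure in the spirit of the paper could then sample a controlled number of pairs per speaker from each difficulty grade, rather than pooling all pairs indiscriminately.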


Pre-camera PDF 

ISCA Library

BibTeX:
@inproceedings{Toussaint:interspeech2022,
 author = {Toussaint, Wiebke and Gorce, Lauriane and Ding, Aaron Yi},
 title = {Design Guidelines for Inclusive Speaker Verification Evaluation Datasets},
 booktitle = {Proceedings of the 23rd INTERSPEECH Conference},
 series = {INTERSPEECH '22},
 year = {2022},
 publisher = {ISCA}
}
How to cite:

Wiebke Toussaint, Lauriane Gorce, Aaron Yi Ding. 2022. "Design Guidelines for Inclusive Speaker Verification Evaluation Datasets". In Proceedings of the 23rd INTERSPEECH Conference (INTERSPEECH '22).