Towards cross-speaker articulation-to-speech synthesis using dynamic time warping alignment on speech signals
Keywords: ultrasound tongue imaging, silent speech interfaces
- https://doi.org/10.3311/WINS2024-002
Abstract
Silent Speech Interfaces (SSI) aim to provide a non-intrusive means of communication by decoding articulatory information directly from the speaker's silent gestures, such as tongue movements. However, existing SSI methods often face challenges related to speaker dependency, arising from substantial variation across individuals in articulatory organ structure and articulation speed. This paper explores the integration of Dynamic Time Warping (DTW) alignment into cross-speaker articulation-to-speech synthesis. DTW is performed on the speech signals, which are recorded in synchrony with the ultrasound tongue images (UTI); the UTI frames are then aligned according to the calculated DTW distance. We tested cross-speaker articulation-to-speech synthesis with four subjects from the UltraSuite-TaL dataset. Using the aligned ultrasound data, we trained convolutional neural networks to predict mel-spectrograms from the UTI input, and finally synthesized speech for each speaker pair. The results underline the potential of DTW as a valuable tool for enhancing the applicability of SSI.
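The alignment step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a classic DTW over per-frame speech features of two speakers, after which the resulting warping path is reused to pair their ultrasound frames (possible because audio and UTI are recorded in synchrony). The feature arrays here are random stand-ins for real speech parameterizations.

```python
import numpy as np

def dtw_path(x, y):
    """Classic DTW between two feature sequences (frames x dims).
    Returns the accumulated distance and the warping path as index pairs."""
    n, m = len(x), len(y)
    # Pairwise Euclidean distances between all frame pairs.
    dist = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack from the end to recover the optimal path.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return acc[n, m], path[::-1]

# Toy stand-ins for per-frame speech features (e.g. mel-spectra) of two
# speakers uttering the same sentence at different speeds.
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(50, 13))
feats_b = rng.normal(size=(60, 13))

distance, path = dtw_path(feats_a, feats_b)

# Because the UTI streams are synchronous with the audio, the speech warp
# transfers directly: UTI frame i of speaker A is paired with UTI frame j
# of speaker B for every (i, j) on the path.
uti_pairs = list(path)
print(len(uti_pairs), uti_pairs[0], uti_pairs[-1])
```

The paired frame indices can then be used to assemble aligned cross-speaker training examples for the UTI-to-mel-spectrogram network.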