VISinger2+: End-to-End Singing Voice Synthesis Augmented by Self-Supervised Learning Representation
Published Jun 13, 2024 · Yifeng Yu, Jiatong Shi, Yuning Wu
2024 IEEE Spoken Language Technology Workshop (SLT)
Abstract
Singing Voice Synthesis (SVS) has witnessed significant advancements with the advent of deep learning techniques. However, a significant challenge in SVS is the scarcity of labeled singing voice data, which limits the effectiveness of supervised learning methods. In response to this challenge, this paper introduces a novel approach to enhance the quality of SVS by leveraging unlabeled data through pre-trained self-supervised learning models. Building upon the existing VISinger2 framework, this study integrates additional spectral feature information into the system to enhance its performance. The integration aims to harness the rich acoustic features from the pre-trained models, thereby enriching the synthesis and yielding a more natural and expressive singing voice. Experimental results on various corpora demonstrate the efficacy of this approach in improving the overall quality of synthesized singing voices on both objective and subjective metrics.
Using unlabeled data from pre-trained self-supervised learning models in the VISinger2 framework improves the quality of singing voice synthesis, yielding more natural and expressive results.
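The abstract describes conditioning the VISinger2 synthesizer on features extracted by a pre-trained self-supervised model alongside its existing spectral features. A minimal sketch of that fusion step is shown below, assuming frame-aligned SSL features (e.g. from a wav2vec 2.0-style encoder) and a mel-spectrogram at the same frame rate; all shapes, names, and the simple linear projection are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def fuse_features(mel, ssl, proj_dim=192, seed=0):
    """Project SSL features to proj_dim and concatenate with mel frames.

    mel: (T, n_mels) spectral features from the acoustic front end.
    ssl: (T, d_ssl) frame-level features from a pre-trained SSL model,
         assumed already resampled to the mel frame rate.
    Returns a (T, n_mels + proj_dim) conditioning sequence.
    """
    rng = np.random.default_rng(seed)
    # Stand-in for a learned linear projection layer (hypothetical;
    # in a real system this would be a trainable nn.Linear).
    W = rng.standard_normal((ssl.shape[1], proj_dim)) / np.sqrt(ssl.shape[1])
    ssl_proj = ssl @ W
    return np.concatenate([mel, ssl_proj], axis=-1)

# Example with typical dimensions: 80 mel bins, 768-dim SSL features.
T, n_mels, d_ssl = 100, 80, 768
mel = np.zeros((T, n_mels))
ssl = np.zeros((T, d_ssl))
cond = fuse_features(mel, ssl)
print(cond.shape)  # (100, 272)
```

The design intuition, per the abstract, is that the SSL branch supplies acoustic information learned from unlabeled audio, enriching the conditioning signal beyond what labeled singing data alone provides.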