ASRRL-TTS: Agile Speaker Representation Reinforcement Learning for Text-to-Speech Speaker Adaptation

Ruibo Fu†,∗, Xin Qi†,∗, Zhengqi Wen, Jianhua Tao, Tao Wang, Chunyu Qiang, Zhiyong Wang, Yi Lu, Xiaopeng Wang, Shuchen Shi, Yukun Liu, Xuefei Liu, Shuai Zhang

Abstract

Speaker adaptation, which clones the voices of unseen speakers in the text-to-speech (TTS) task, has garnered significant interest due to its numerous applications in multimedia fields. Despite recent advancements, existing methods often struggle with inadequate speaker representation accuracy and overfitting, particularly when reference speech is limited. To address these challenges, we propose an Agile Speaker Representation Reinforcement Learning (ASRRL) strategy to enhance speaker similarity in speaker adaptation tasks. ASRRL is the first work to apply reinforcement learning (RL) to improve the modeling accuracy of speaker embeddings in speaker adaptation, addressing the challenge of decoupling voice content and timbre. Our approach introduces two action strategies tailored to different reference-speech scenarios. In the single-sentence (SS) scenario, a knowledge-oriented optimal-routine-searching RL method expedites the exploration and retrieval of refinement information on the fringe of speaker representations. In the few-sentence (FS) scenario, a dynamic RL method adaptively fuses the reference speeches, enhancing the robustness and accuracy of speaker modeling. To achieve optimal results in the target domain, we propose a reward model based on a multi-scale fusion scoring mechanism that evaluates speaker similarity, speech quality, and intelligibility across three dimensions, ensuring that improvements in speaker similarity do not compromise speech quality or intelligibility. Experimental results on the LibriTTS and VCTK datasets within mainstream TTS frameworks demonstrate the extensibility and generalization capabilities of the proposed ASRRL method: it significantly outperforms traditional fine-tuning approaches, achieving higher speaker similarity and better overall speech quality with limited reference speech.
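The abstract describes a reward model that fuses three scores (speaker similarity, speech quality, intelligibility) so that similarity gains cannot come at the expense of the other two. A minimal sketch of such a gated fusion is below; the function names, weights, and floor thresholds are illustrative assumptions, not the paper's actual implementation.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two speaker-embedding vectors
    (a stand-in for the paper's speaker-similarity score)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def fused_reward(speaker_sim, quality, intelligibility,
                 weights=(0.5, 0.25, 0.25),
                 quality_floor=0.6, intelligibility_floor=0.6):
    """Hypothetical multi-scale fusion reward: a weighted sum of the
    three scores, gated so that a sample falling below a minimum
    quality or intelligibility threshold receives zero reward,
    regardless of how high its speaker similarity is."""
    if quality < quality_floor or intelligibility < intelligibility_floor:
        return 0.0
    w_sim, w_q, w_i = weights
    return w_sim * speaker_sim + w_q * quality + w_i * intelligibility
```

For example, `fused_reward(0.9, 0.8, 0.7)` yields 0.825, while `fused_reward(0.95, 0.5, 0.9)` yields 0.0 because the quality score falls below the floor, reflecting the constraint that higher similarity must not compromise quality.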

ASRRL Framework

The overall framework of ASRRL.

Comparison of Zero-shot TTS Results

Sample | Scenario | Dataset | Text
(Audio for the Reference Speech, Fine-tune, ASRRL, and Ground-Truth columns is playable on the demo page.)
1 | vits-SS | vctk | The whole process is a vicious circle at the moment.
2 | vits-SS | libritts-clean | This is especially true during the later, peaceable economic stage.
3 | vits-SS | libritts-other | And, Donovan, take a friend's advice and don't be too free with that watch."
4 | diff-SS | vctk | Any change would be subject to the Scottish Parliament's approval.
5 | diff-SS | libritts-clean | The effects are pleasing to us chiefly because we have been taught to find them pleasing.
6 | diff-SS | libritts-other | Whether he was the creator of yourself and myself?
7 | vits-FS | vctk | I don't think it would make any difference.
8 | vits-FS | libritts-clean | "That's a poor saying," said Emil, stooping over to wipe his hands in the wet grass.
9 | vits-FS | libritts-other | The little guy knew Mars as few others did, apparently, from all sides.
10 | diff-FS | vctk | Being captain of this club is fantastic.
11 | diff-FS | libritts-clean | And yet that something must be playful in its nature.
12 | diff-FS | libritts-other | It was a trap, and the midshipman understood it now.