
Retrieval-based-Voice-Conversion-WebUI

An easy-to-use Voice Conversion framework based on VITS.


Changelog | FAQ (Frequently Asked Questions)

English | 中文简体 | 日本語 | 한국어 (韓國語) | Français | Türkçe | Português

Check out our Demo Video here!

Training and inference WebUI: go-web.bat
Real-time voice changing GUI: go-realtime-gui.bat
You can freely choose the action you want to perform. We have achieved an end-to-end latency of 170 ms; with ASIO input and output devices, latency drops to 90 ms, though this depends heavily on hardware driver support.

The dataset for the pre-training model uses nearly 50 hours of high quality audio from the VCTK open source dataset.

High-quality licensed song datasets will be added to the training set continually, so you can use them without worrying about copyright infringement.

Please look forward to the RVCv3 pretrained base model: larger parameters, more training data, better results, unchanged inference speed, and a lower training-data requirement.

Features:

  • Reduces tone leakage by replacing source features with training-set features via top-1 retrieval (see the sketch after this list);
  • Easy and fast training, even on weak graphics cards;
  • Trains with a small amount of data (>=10 min of low-noise speech recommended);
  • Model fusion to change timbres (via the ckpt processing tab -> ckpt merge);
  • Easy-to-use WebUI;
  • UVR5 model to quickly separate vocals and instruments;
  • The InterSpeech2023-RMVPE high-pitch voice extraction algorithm, which prevents the muted-sound problem, gives significantly better results than Crepe_full, and runs faster with lower resource consumption;
  • AMD/Intel graphics card acceleration supported;
  • Intel ARC graphics card acceleration with IPEX supported.
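
The top-1 retrieval mentioned in the first feature swaps each frame of the source speaker's HuBERT features for its nearest neighbour in the training set, blended by an index rate. Below is a minimal sketch of the idea using faiss; the toy shapes and the retrieve/index_rate names are illustrative assumptions, not RVC's actual code (RVC builds a prebuilt .index file per model during training):

# Minimal sketch of top-1 feature retrieval with faiss (not RVC's actual implementation).
import numpy as np
import faiss

dim = 256                                   # v1 models use 256-dimensional HuBERT features
train_feats = np.random.rand(10000, dim).astype("float32")  # stand-in for training-set features

index = faiss.IndexFlatL2(dim)              # exact L2 nearest-neighbour search
index.add(train_feats)

def retrieve(source_feats, index_rate=0.75):
    # Replace each source frame with its nearest training-set frame, blended by index_rate.
    _, ids = index.search(source_feats.astype("float32"), 1)   # top-1 neighbour per frame
    return index_rate * train_feats[ids[:, 0]] + (1.0 - index_rate) * source_feats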

Preparing the environment

The following commands need to be executed with Python 3.8 or higher.

(Windows/Linux) First install the main dependencies through pip:

# Install PyTorch-related core dependencies, skip if installed
# Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio

#For Windows + Nvidia Ampere architecture (RTX 30xx), you may need to specify the CUDA version matching your PyTorch build; see https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/21
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

#For Linux + AMD cards, you need to use the following PyTorch versions:
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2
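
Before moving on, it can be worth checking that the PyTorch build you just installed actually sees your GPU (the same check works for CUDA and ROCm builds):

# Quick check that the installed PyTorch build detects a GPU.
import torch

print(torch.__version__)
print(torch.cuda.is_available())        # True on a working CUDA (or ROCm) build
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))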

Then you can use Poetry to install the other dependencies:

# Install the Poetry dependency management tool, skip if installed
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -

# Install the project dependencies
poetry install

You can also use pip to install them:


# for Nvidia graphics cards
pip install -r requirements.txt

# for AMD/Intel graphics cards on Windows (DirectML)
pip install -r requirements-dml.txt

# for Intel ARC graphics cards on Linux / WSL using Python 3.10
pip install -r requirements-ipex.txt

# for AMD graphics cards on Linux (ROCm)
pip install -r requirements-amd.txt

Mac users can install dependencies via run.sh:

sh ./run.sh

Preparing other pre-models

RVC requires other pre-models for inference and training.

#Download all needed models from https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/
python tools/download_models.py

Or just download them by yourself from our Huggingface space.
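
If you would rather fetch individual files programmatically, here is a sketch using huggingface_hub. The repo id comes from the links above; treat the exact filename and its location at the repo root as assumptions to verify:

# Sketch: download a single pre-model file from the Huggingface space linked above.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="lj1995/VoiceConversionWebUI",
    filename="hubert_base.pt",          # assumed to sit at the repo root
    local_dir="assets/hubert",
)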

Here's a list of Pre-models and other files that RVC needs:

./assets/hubert/hubert_base.pt

./assets/pretrained 

./assets/uvr5_weights

Additional downloads are required if you want to test the v2 version of the model. The v2 model changes the input from the 256-dimensional features of 9-layer HuBERT + final_proj to the 768-dimensional features of 12-layer HuBERT, and adds 3 period discriminators:

./assets/pretrained_v2
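
Before launching, a quick sanity check can confirm the files landed where RVC expects them. This is a minimal sketch; the paths are taken from the list above:

# Minimal sketch: verify the pre-model paths listed above exist.
from pathlib import Path

required = [
    "assets/hubert/hubert_base.pt",
    "assets/pretrained",
    "assets/uvr5_weights",
    "assets/pretrained_v2",  # only needed for v2 models
]
for p in required:
    print(("ok      " if Path(p).exists() else "MISSING ") + p)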

If you want to use the latest SOTA RMVPE vocal pitch extraction algorithm, you need to download the RMVPE weights and place them in the RVC root directory

https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/rmvpe.pt

    For AMD/Intel graphics card users, you need to download:

    https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/rmvpe.onnx
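
Note that the links above are Huggingface /blob/ viewer pages; replacing /blob/ with /resolve/ yields a direct file download. A minimal sketch fetching the weights into the RVC root:

# Minimal sketch: download rmvpe.pt into the RVC root directory.
# The /resolve/ form of the URL returns the raw file, unlike the /blob/ viewer page.
import urllib.request

url = "https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt"
urllib.request.urlretrieve(url, "rmvpe.pt")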

Install FFmpeg

If you have FFmpeg and FFprobe installed on your computer, you can skip this step.

For Ubuntu/Debian users

sudo apt install ffmpeg

For MacOS users

brew install ffmpeg

For Windows users

Download ffmpeg.exe and ffprobe.exe from the Huggingface space linked above and place them in the root folder.

ROCm Support for AMD graphics cards (Linux only)

To use ROCm on Linux, install all required drivers as described here.

On Arch, use pacman to install the driver:

pacman -S rocm-hip-sdk rocm-opencl-sdk

You might also need to set these environment variables (e.g. on an RX6700XT):

export ROCM_PATH=/opt/rocm
export HSA_OVERRIDE_GFX_VERSION=10.3.0

Make sure your user is part of the render and video groups:

sudo usermod -aG render $USERNAME
sudo usermod -aG video $USERNAME
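
After installing the driver, exporting the variables, and logging back in so the group changes take effect, you can confirm the ROCm build of PyTorch sees the card. ROCm builds expose the device through the torch.cuda API:

# Confirm the ROCm build is active: torch.version.hip is set on ROCm builds (None on CUDA builds).
import torch

print(torch.version.hip)
print(torch.cuda.get_device_name(0))    # should name the AMD card, e.g. the RX6700XT above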

Get started

Start up directly

Use the following command to start WebUI:

python infer-web.py

Use the integration package

Download and extract file RVC-beta.7z, then follow the steps below according to your system:

For Windows users

Double-click go-web.bat

For MacOS users

sh ./run.sh

For Intel IPEX users (Linux only)

source /opt/intel/oneapi/setvars.sh

Credits

Thanks to all contributors for their efforts