From 1d86fb7a873c61278537227a32e2d59e8812cb7e Mon Sep 17 00:00:00 2001
From: Ftps <63702646+Tps-F@users.noreply.github.com>
Date: Mon, 18 Sep 2023 23:50:30 +0900
Subject: [PATCH] Revert "Update README.md (#1257)" (#1274)
This reverts commit aa2ed8e646c9b2e059f8eb5c7bb31dbf9c5ee303.
---
README.md | 146 ++++++++++++++++++++----------------------------------
1 file changed, 53 insertions(+), 93 deletions(-)
diff --git a/README.md b/README.md
index 1e52b81..f221877 100644
--- a/README.md
+++ b/README.md
@@ -1,105 +1,93 @@
Retrieval-based-Voice-Conversion-WebUI
-An easy-to-use Voice Conversion framework based on VITS.
+An easy-to-use voice conversion (voice changer) framework based on VITS
[![madewithlove](https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&labelColor=orange
)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
-
+
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
-[![Licence](https://img.shields.io/github/license/RVC-Project/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/LICENSE)
+[![Licence](https://img.shields.io/badge/LICENSE-MIT-green.svg?style=for-the-badge)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/LICENSE)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
+[**Changelog**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/Changelog_CN.md) | [**FAQ**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98%E8%A7%A3%E7%AD%94) | [**AutoDL: train an AI singer for 0.5 RMB**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/Autodl%E8%AE%AD%E7%BB%83RVC%C2%B7AI%E6%AD%8C%E6%89%8B%E6%95%99%E7%A8%8B) | [**Comparative experiment records**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/%E5%AF%B9%E7%85%A7%E5%AE%9E%E9%AA%8C%C2%B7%E5%AE%9E%E9%AA%8C%E8%AE%B0%E5%BD%95) | [**Online demo**](https://huggingface.co/spaces/Ricecake123/RVC-demo)
+
------
-[**Changelog**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/Changelog_EN.md) | [**FAQ (Frequently Asked Questions)**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/FAQ-(Frequently-Asked-Questions))
-[**English**](../en/README.en.md) | [**中文简体**](../../README.md) | [**日本語**](../jp/README.ja.md) | [**한국어**](../kr/README.ko.md) ([**韓國語**](../kr/README.ko.han.md)) | [**Türkçe**](../tr/README.tr.md)
+[**English**](./docs/en/README.en.md) | [**中文简体**](./README.md) | [**日本語**](./docs/jp/README.ja.md) | [**한국어**](./docs/kr/README.ko.md) ([**韓國語**](./docs/kr/README.ko.han.md)) | [**Français**](./docs/fr/README.fr.md) | [**Türkçe**](./docs/tr/README.tr.md)
+Check out our [demo video](https://www.bilibili.com/video/BV1pm4y1z7Gm/) here!
-Check our [Demo Video](https://www.bilibili.com/video/BV1pm4y1z7Gm/) here!
+> Realtime voice conversion software using RVC: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
-Realtime Voice Conversion Software using RVC : [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
+> The base model is trained on nearly 50 hours of the high-quality, open-source VCTK dataset, so there are no copyright concerns; feel free to use it.
+> Look forward to the RVCv3 base model: larger parameters, more training data, better results, roughly the same inference speed, and less training data required.
-> The dataset for the pre-training model uses nearly 50 hours of high quality VCTK open source dataset.
+## Introduction
+This repository has the following features:
++ Replaces the input source features with training-set features via top-1 retrieval to eliminate timbre leakage
++ Trains quickly even on relatively weak graphics cards
++ Produces fairly good results even with a small amount of training data (collecting at least 10 minutes of low-noise speech is recommended)
++ Supports changing timbre through model fusion (via ckpt-merge in the ckpt processing tab)
++ Simple and easy-to-use web interface
++ Can invoke the UVR5 model to quickly separate vocals and accompaniment
++ Uses the state-of-the-art [InterSpeech2023-RMVPE vocal pitch extraction algorithm](#credits) to eliminate the muted-sound problem, delivering (significantly) the best results while being faster and lighter on resources than crepe_full
++ AMD/Intel graphics card acceleration supported
-> High quality licensed song datasets will be added to training-set one after another for your use, without worrying about copyright infringement.
+## Environment Setup
+The following commands must be run in an environment with Python 3.8 or higher.
-> Please look forward to the pretrained base model of RVCv3, which has larger parameters, more training data, better results, unchanged inference speed, and requires less training data for training.
-
-## Summary
-This repository has the following features:
-+ Reduce tone leakage by replacing the source feature to training-set feature using top1 retrieval;
-+ Easy and fast training, even on relatively poor graphics cards;
-+ Training with a small amount of data also obtains relatively good results (>=10min low noise speech recommended);
-+ Supporting model fusion to change timbres (using ckpt processing tab->ckpt merge);
-+ Easy-to-use Webui interface;
-+ Use the UVR5 model to quickly separate vocals and instruments.
-+ Use the most powerful High-pitch Voice Extraction Algorithm [InterSpeech2023-RMVPE](#Credits) to prevent the muted sound problem. Provides the best results (significantly) and is faster, with even lower resource consumption than Crepe_full.
-+ AMD/Intel graphics cards acceleration supported.
-+ Intel ARC graphics cards acceleration with IPEX supported.
-
-## Preparing the environment
-The following commands need to be executed in the environment of Python version 3.8 or higher.
-
-(Windows/Linux)
-First install the main dependencies through pip:
+(Windows/Linux)
+First install the main dependencies through pip:
```bash
-# Install PyTorch-related core dependencies, skip if installed
-# Reference: https://pytorch.org/get-started/locally/
+# Install PyTorch and its core dependencies; skip if already installed
+# Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
-#For Windows + Nvidia Ampere Architecture(RTX30xx), you need to specify the cuda version corresponding to pytorch according to the experience of https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/21
+# For Windows + Nvidia Ampere architecture (RTX30xx), per the experience in #21 you need to specify the CUDA version matching your PyTorch build
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
-
-#For Linux + AMD Cards, you need to use the following pytorch versions:
-#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2
```
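If you installed a CUDA build on an Nvidia card, a quick sanity check (optional, not part of the original instructions) confirms that PyTorch can actually see the GPU before continuing:
```bash
# Should print the torch version and True on a working CUDA setup
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```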
-Then can use poetry to install the other dependencies:
+You can use Poetry to install the dependencies:
```bash
-# Install the Poetry dependency management tool, skip if installed
-# Reference: https://python-poetry.org/docs/#installation
+# Install the Poetry dependency management tool; skip if already installed
+# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
-# Install the project dependencies
+# Install the dependencies with Poetry
poetry install
```
-You can also use pip to install them:
+You can also install the dependencies with pip:
```bash
+Nvidia cards:
-for Nvidia graphics cards
- pip install -r requirements.txt
+pip install -r requirements.txt
-for AMD/Intel graphics cards on Windows (DirectML):
- pip install -r requirements-dml.txt
+AMD/Intel cards:
+pip install -r requirements-dml.txt
-for Intel ARC graphics cards on Linux / WSL using Python 3.10:
- pip install -r requirements-ipex.txt
-
-for AMD graphics cards on Linux (ROCm):
- pip install -r requirements-amd.txt
```
------
-Mac users can install dependencies via `run.sh`:
+Mac users can install the dependencies via `run.sh`:
```bash
sh ./run.sh
```
-## Preparation of other Pre-models
-RVC requires other pre-models to infer and train.
+## Preparing Other Pre-models
+RVC requires several other pre-models for inference and training.
-You need to download them from our [Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/).
+You can download these models from our [Hugging Face space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/).
-Here's a list of Pre-models and other files that RVC needs:
+Here is a list of the names of all pre-models and other files RVC needs:
```bash
./assets/hubert/hubert_base.pt
@@ -107,66 +95,39 @@ Here's a list of Pre-models and other files that RVC needs:
./assets/uvr5_weights
-Additional downloads are required if you want to test the v2 version of the model.
+If you want to test the v2 model, an additional download is required
./assets/pretrained_v2
-If you want to test the v2 version model (the v2 version model has changed the input from the 256 dimensional feature of 9-layer Hubert+final_proj to the 768 dimensional feature of 12-layer Hubert, and has added 3 period discriminators), you will need to download additional features
+If you are using Windows, you may also need these files; skip them if ffmpeg and ffprobe are already installed. Ubuntu/Debian users can install both with apt install ffmpeg, and Mac users can install them with brew install ffmpeg (brew must be installed first)
-./assets/pretrained_v2
-
-#If you are using Windows, you may also need these two files, skip if FFmpeg and FFprobe are installed
-ffmpeg.exe
+./ffmpeg
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe
-ffprobe.exe
+./ffprobe
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe
-If you want to use the latest SOTA RMVPE vocal pitch extraction algorithm, you need to download the RMVPE weights and place them in the RVC root directory
+If you want to use the latest RMVPE vocal pitch extraction algorithm, you need to download the pitch extraction model weights and place them in the RVC root directory
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/rmvpe.pt
- For AMD/Intel graphics cards users you need download:
+ AMD/Intel graphics card users who need the DML environment should download:
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/rmvpe.onnx
```
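As a convenience, here is a minimal sketch of fetching the RMVPE weights from the command line; it assumes wget is available and uses Hugging Face's /resolve/ raw-file paths in place of the /blob/ page URLs listed above:
```bash
# Download the RMVPE pitch extraction weights into the RVC root directory
wget https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt -O ./rmvpe.pt
# DML (AMD/Intel) users also need the ONNX variant
wget https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.onnx -O ./rmvpe.onnx
```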
-
-Intel ARC graphics cards users needs to run `source /opt/intel/oneapi/setvars.sh` command before starting Webui.
-
-Then use this command to start Webui:
+Then start the WebUI with the following command:
```bash
python infer-web.py
```
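If you installed the dependencies with Poetry rather than pip, the same entry point can be launched inside Poetry's managed environment (an equivalent invocation, not shown above):
```bash
# Run the WebUI inside the virtual environment Poetry manages
poetry run python infer-web.py
```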
-If you are using Windows or macOS, you can download and extract `RVC-beta.7z` to use RVC directly by using `go-web.bat` on windows or `sh ./run.sh` on macOS to start Webui.
+If you are using Windows or macOS, you can simply download and extract `RVC-beta.7z`; on Windows run `go-web.bat` to start the WebUI, and on macOS run `sh ./run.sh`.
-## ROCm Support for AMD graphic cards (Linux only)
-To use ROCm on Linux install all required drivers as described [here](https://rocm.docs.amd.com/en/latest/deploy/linux/os-native/install.html).
+The repository also includes a `小白简易教程.doc` (a simple tutorial for beginners) for reference.
-On Arch use pacman to install the driver:
-````
-pacman -S rocm-hip-sdk rocm-opencl-sdk
-````
-
-You might also need to set these environment variables (e.g. on a RX6700XT):
-````
-export ROCM_PATH=/opt/rocm
-export HSA_OVERRIDE_GFX_VERSION=10.3.0
-````
-Also make sure your user is part of the `render` and `video` group:
-````
-sudo usermod -aG render $USERNAME
-sudo usermod -aG video $USERNAME
-````
-After that you can run the WebUI:
-```bash
-python infer-web.py
-```
-
-## Credits
+## Credits
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
@@ -176,9 +137,8 @@ python infer-web.py
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
+ [Vocal pitch extraction: RMVPE](https://github.com/Dream-High/RMVPE)
+ The pretrained model is trained and tested by [yxlllc](https://github.com/yxlllc/RMVPE) and [RVC-Boss](https://github.com/RVC-Boss).
-
-## Thanks to all contributors for their efforts
+
+## Thanks to all contributors for their efforts
-