"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.":"This software is open source under the MIT license. The author has no control over the software, and those who use the software or distribute audio exported by it assume full responsibility. <br>If you do not agree with these terms, you may not use or reference any code or files in the software package. See <b>Agreement-LICENSE.txt</b> in the root directory for details.",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ":"+12 key is recommended for male-to-female conversion, and -12 key for female-to-male conversion. If the pitch range is exceeded and the voice is distorted, you can also adjust it to a suitable range yourself. ",
"特征检索库文件路径,为空则使用下拉的选择结果":"Path to the feature index file. Leave blank to use the result selected in the dropdown:",
"自动检测index路径,下拉式选择(dropdown)":"Auto-detected paths to '.index' files under the 'logs' directory. Pick the matching file from the dropdown:",
"特征文件路径":"Path to feature file:",
"检索特征占比":"Search feature ratio:",
"后处理重采样至最终采样率,0为不进行重采样":"Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling:",
"输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络":"Mix ratio for replacing the output volume envelope with the input volume envelope. The closer to 1, the more the output envelope is used. (Default 1: do not mix in the input envelope):",
"保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果":"Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in the output. Set to 0.5 to disable. Lower values increase protection but may reduce indexing accuracy:",
"输出音频(右下角三个点,点了可以下载)":"Export audio (click the three dots in the lower-right corner to download)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ":"Batch conversion: enter the folder containing the audio files to be converted, or upload multiple audio files; the converted audio is output to the specified folder ('opt' by default). ",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ":"step1: Fill in the experiment configuration. Experiment data is stored under 'logs', with one folder per experiment. Enter the experiment name manually; its folder contains the experiment configuration, logs, and model files obtained from training. ",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ":"step2a: Automatically traverse all files in the training folder that can be decoded into audio, perform slicing and normalization, and generate 2 wav folders in the experiment directory. Only single-singer/speaker training is supported for now. ",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)":"step2b: Use the CPU to extract pitch (if the model includes pitch) and the GPU to extract features (select the GPU index)",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速":"Cache all training sets to GPU memory. Caching small datasets (under 10 minutes) can speed up training, but caching large datasets will exhaust GPU memory without much speed gain:",
"是否在每次保存时间点将最终小模型保存至weights文件夹":"Save a small final model to the 'weights' folder at each save point:",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况":"Model extraction (enter the path of the large model file under the 'logs' folder). This is useful if you want to stop training halfway and the small model has not been automatically extracted and saved, or if you want to test an intermediate model",