Home
Welcome to the so-vits-svc-fork wiki!
Windows:

```shell
py -3.10 -m venv venv
venv\Scripts\activate
```

Linux/MacOS:

```shell
python3.10 -m venv venv
source venv/bin/activate
```

Anaconda:

```shell
conda create -n so-vits-svc-fork python=3.10 pip
conda activate so-vits-svc-fork
```

Installing without creating a virtual environment may cause a `PermissionError` if Python is installed in a system directory such as Program Files.
Most parameters are unchanged from those in VITS and appear to be "empirical" (or simply left alone).
Changing train parameters generally does not require resetting the model, and tuning the learning rate etc. may improve training speed. The segment_size parameter is the length of the audio tensor passed to the decoder; increasing it may speed up decoder training, but the gain may be small because VRAM usage also increases.
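As a rough sketch of what segment_size controls (a hypothetical slicing helper for illustration, not the project's actual code), training passes only a random window of each waveform to the decoder:

```python
import numpy as np

def random_slice(audio: np.ndarray, segment_size: int, rng=None) -> np.ndarray:
    """Return a random window of `segment_size` samples to feed the decoder."""
    rng = rng or np.random.default_rng(0)
    start = rng.integers(0, len(audio) - segment_size + 1)
    return audio[start:start + segment_size]

waveform = np.zeros(16384)           # one training utterance (samples)
segment = random_slice(waveform, 8192)
assert segment.shape == (8192,)      # larger segment_size -> more decoder work (and VRAM) per step
```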
Changing model parameters may reset some or all of the weights. The current model may or may not be too large for a single speaker; simply reducing the number of channels does not seem to be effective. Changing the decoder to MS-iSTFT-Generator or similar appears to roughly double the inference speed.
The ssl_dim is the number of input channels. The officially trained ContentVec model outputs 768 channels, which become 256 after applying final_proj.
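A minimal sketch of that channel flow (random weights for illustration; the real final_proj is a learned linear layer inside the model):

```python
import numpy as np

ssl_dim, proj_dim = 768, 256                           # ContentVec output -> after final_proj
rng = np.random.default_rng(0)
frames = rng.standard_normal((100, ssl_dim))           # 100 frames of ContentVec features
final_proj = rng.standard_normal((ssl_dim, proj_dim))  # stand-in for the learned projection weight
projected = frames @ final_proj
assert projected.shape == (100, proj_dim)              # 768-channel features reduced to 256
```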
I still lack understanding of this model and need to study more of the basics... I shouldn't waste my time at the computer working on such unimportant bugs...
Note that these trained models differ in some training hyperparameters, so this is not an exact comparison. Training was started from the same initial weights; weights whose sizes did not match were reset.
NSF-Hifi-GAN Generator, ~1.3?k epochs, 192/768 channels
individualAudio.4.mp4
MS-iSTFT Generator, 29.4k epochs (somehow has noise)
individualAudio.2.mp4
NSF-Hifi-GAN Generator, 17.4k epochs, 32/64 channels
individualAudio.1.mp4
| Name | fp32 (TFLOPS) | fp16 (TFLOPS) | VRAM (GB) |
| --- | --- | --- | --- |
| T4 | 8.141 | 65.13 | 16 |
| P100 | 9.626 | 19.05 | 16 |
| P4000 | 5.304 | 0.08288 | 8 |
| P5000 | 8.873 | 0.1386 | 16 |
| P6000 | 12.63 | 0.1974 | 24 |
| V100 PCIe | 14.13 | 28.26 | 16/32 |
| V100 SXM2 | 15.67 | 31.33 | 16/32 |
| RTX4000 | 7.119 | 14.24 | 8 |
| RTX5000 | 11.15 | 22.3 | 16 |
| A4000 | 19.17 | 19.17 | 16 |
| A5000 | 27.77 | 27.77 | 24 |
| A6000 | 38.71 | 38.71 | 48 |
| A100 PCIe/SXM4 | 19.49 | 77.97 | 40/80 |
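One practical read of the table is the fp16/fp32 ratio: the Pascal cards (P4000/P5000/P6000) effectively lack usable fp16, while T4, V100, and A100 gain large speedups from mixed precision. A quick check using the values above:

```python
# (fp32, fp16) TFLOPS taken from the table above
gpus = {
    "T4": (8.141, 65.13),
    "P6000": (12.63, 0.1974),
    "V100 SXM2": (15.67, 31.33),
    "A100 PCIe/SXM4": (19.49, 77.97),
}
for name, (fp32, fp16) in gpus.items():
    print(f"{name}: fp16/fp32 = {fp16 / fp32:.2f}x")
# T4 comes out around 8x, A100 around 4x, while P6000 is ~0.02x (fp16 is a downgrade there)
```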
Citing [The Best GPUs for Deep Learning in 2023 — An In-depth Analysis](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/#), viewed 2023/03/28.
[wookayin/gpustat](https://github.com/wookayin/gpustat): 📊 A simple command-line utility for querying and monitoring GPU status.
- Windows:
  - Speakers: NVIDIA RTX Voice
  - RTX Voice Input: (Default Device)
  - RTX Voice Output: `CABLE Input (VB-Audio Virtual Cable)`
  - so-vits-svc-fork GUI Input: `CABLE Output (VB-Audio Virtual Cable)`
  - so-vits-svc-fork GUI Output: (your device)