https://ngc.nvidia.com/catalog/model-scripts/nvidia:tacotron_2_and_waveglow_for_pytorch
https://ngc.nvidia.com/catalog/model-scripts/nvidia:tacotron_2_and_waveglow_for_pytorch/quickStartGuide
https://www.jianshu.com/p/4905bf8e06e5
import torch

model = TheModelClass(*args, **kwargs)    # placeholder model class
model.load_state_dict(torch.load(PATH))   # PATH points to the saved state_dict
model.eval()                              # switch dropout/batchnorm to eval mode

Remember: you must call model.eval() to set the dropout and batch-normalization layers to evaluation mode before running inference. Failing to do so will produce inconsistent inference results.
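Wrapping the forward pass in torch.no_grad() is also standard at inference time, since it skips autograd bookkeeping; a minimal sketch building on the snippet above (example_input is a hypothetical tensor, not from the original):

with torch.no_grad():                 # no gradients tracked during inference
    output = model(example_input)     # example_input is a hypothetical tensor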
https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# Running nvidia-docker isolating specific GPUs by index
NV_GPU='0,1' nvidia-docker <docker-options> <docker-command> <docker-args>

#!/bin/bash
nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
    -it --rm --ipc=host -v $PWD:/workspace/tacotron2/ tacotron2 bash
The script above launches the Docker container.
Synthesis command:
python inference.py --tacotron2 output/checkpoint_Tacotron2_750 --waveglow output/checkpoint_WaveGlow_1000 -o output/ -i phrases/phrase.txt --amp-run

The code structure could be streamlined by drawing on the standalone Tacotron2 code; the distributed code structure here is somewhat dated.
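phrases/phrase.txt simply lists the text to synthesize, one sentence per line; the lines below are illustrative, not the file shipped with the repo:

Speech synthesis with Tacotron 2 and WaveGlow.
The quick brown fox jumps over the lazy dog.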
The main issue is the intermediate saving of checkpoints. The standalone Tacotron2 code can be ported over almost directly, but a few questions remain: how the learning_rate should be stored in the optimizer, how the optimizer's learning rate should be decayed,
and, most importantly, how to restore the learning_rate saved in a checkpoint.
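A minimal sketch of one way to keep the learning rate consistent across a checkpoint save/restore cycle; save_checkpoint, load_checkpoint, and ckpt_path are illustrative names, not the repo's actual API:

import torch

def save_checkpoint(model, optimizer, epoch, ckpt_path):
    # optimizer.state_dict() includes the current lr in its param_groups,
    # so saving it preserves the learning rate alongside the weights.
    torch.save({
        "epoch": epoch,
        "state_dict": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }, ckpt_path)

def load_checkpoint(model, optimizer, ckpt_path):
    ckpt = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(ckpt["state_dict"])
    optimizer.load_state_dict(ckpt["optimizer"])  # also restores the lr
    return ckpt["epoch"]

# If the lr is annealed manually per epoch, recompute it after loading
# and write it back into the optimizer:
# for group in optimizer.param_groups:
#     group["lr"] = compute_lr(start_epoch)   # compute_lr is hypothetical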
Also worth thinking about: how to use distributed execution during inference.
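Since inference needs no gradient synchronization, one simple approach is to shard the input phrases across ranks; a hypothetical sketch (synthesize stands in for the actual Tacotron 2 + WaveGlow pipeline and is not part of the repo), launched e.g. with torchrun --nproc_per_node=2 infer_dist.py:

import os
import torch
import torch.distributed as dist

def synthesize(text):
    # Placeholder for the real Tacotron 2 + WaveGlow synthesis call.
    print(f"[rank {dist.get_rank()}] would synthesize: {text}")

def main():
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))  # set by torchrun

    with open("phrases/phrase.txt") as f:
        phrases = [line.strip() for line in f if line.strip()]

    # Round-robin sharding: rank r handles phrases r, r + world_size, ...
    for text in phrases[dist.get_rank()::dist.get_world_size()]:
        synthesize(text)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()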
https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2