Tacotron-1: Strictly Reproducing the CUHK Mix-Language Paper


We clip gradients when their global norm exceeds 1 and use parallel-mode monotonic attention with the initial energy function scalar bias set to -1.
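A minimal TF 1.15 sketch of these two settings. Only the clip norm of 1, the parallel mode, and the score bias of -1 come from the sentence above; the attention depth, encoder shapes, loss, and optimizer are placeholder assumptions, not values from the paper.

```python
import tensorflow as tf  # TensorFlow 1.15 (tf.contrib still available)

# Illustrative encoder memory; batch/time dims and depth are placeholders.
encoder_outputs = tf.placeholder(tf.float32, [None, None, 256])
input_lengths = tf.placeholder(tf.int32, [None])

# Parallel-mode monotonic attention with the energy-function scalar bias
# initialized to -1, matching the quoted configuration.
attention_mechanism = tf.contrib.seq2seq.BahdanauMonotonicAttention(
    num_units=128,                 # attention depth: an assumed value
    memory=encoder_outputs,
    memory_sequence_length=input_lengths,
    score_bias_init=-1.0,          # initial energy-function scalar bias
    mode='parallel')               # parallel-mode monotonic attention

# Gradient clipping at global norm 1; the loss here is a stand-in,
# not the paper's actual objective.
dummy_target = tf.layers.dense(encoder_outputs, 80)
loss = tf.reduce_mean(tf.square(dummy_target))
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
grads, variables = zip(*optimizer.compute_gradients(loss))
clipped_grads, _ = tf.clip_by_global_norm(grads, clip_norm=1.0)
train_op = optimizer.apply_gradients(zip(clipped_grads, variables))
```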

References:
- TF 1.15 monotonic attention API (see the sketch below): https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/seq2seq/monotonic_attention
- Raffel et al., 2017, "Online and Linear-Time Attention by Enforcing Monotonic Alignments": https://arxiv.org/pdf/1704.00784.pdf

Notes:
- Text-processing script for the BiaoBei (BZNSYP) dataset: /home/hujk17/data_BZNSYP/ProsodyLabeling
- LJSpeech data processing script: tacotron/preprocessor.py. It deletes a large number of entries (13000 => 7980); this needs to be corrected.
- Their mixed-language phoneme G2P demo: http://219.223.184.100:8880/notebooks/Mix_phoneme_G2P_demo.ipynb
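The linked `tf.contrib.seq2seq.monotonic_attention` op can also be exercised directly; a small sketch with toy probabilities (the numbers are arbitrary, chosen only to show the call):

```python
import tensorflow as tf  # TensorFlow 1.15

# Toy inputs: per-timestep "choose" probabilities and the previous
# alignment for a single batch entry (values are arbitrary).
p_choose_i = tf.constant([[0.1, 0.5, 0.9]])
previous_attention = tf.constant([[1.0, 0.0, 0.0]])

# 'parallel' mode evaluates the expected-alignment recurrence of
# Raffel et al. (2017) in closed form via cumulative sums/products.
alignment = tf.contrib.seq2seq.monotonic_attention(
    p_choose_i, previous_attention, mode='parallel')

with tf.Session() as sess:
    print(sess.run(alignment))  # soft monotonic alignment over 3 steps
```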
