When running `vq_post_emb_a, vq_id_a, _, quantized, spk_embs_a = fa_decoder_v2(enc_out_a, prosody_a, eval_vq=False, vq=True)`, the quantized phone features come out one frame longer than the prosody features along the last dimension, so `outs += out` raises a shape-mismatch error. Is this a bug?
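A minimal reproduction of the failure, with illustrative shapes (the actual batch/channel sizes come from the model config; the one-frame offset is the point):

```python
import torch

# The accumulated prosody-branch output and the quantized phone-branch
# output disagree by one frame on the last (time) dimension.
outs = torch.zeros(2, 256, 100)  # prosody branch: T frames
out = torch.zeros(2, 256, 101)   # quantized phone branch: T + 1 frames

mismatch_caught = False
try:
    outs += out  # in-place add cannot broadcast 100 vs 101
except RuntimeError:
    mismatch_caught = True

print("shape mismatch caught:", mismatch_caught)
```

In-place addition requires the right-hand side to broadcast to the left-hand side's shape, so the extra frame triggers a `RuntimeError` rather than being silently truncated.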
I ran into the same problem, hoping for an answer too.
I found that some files trigger this problem. Adding padding in `forward` fixed it for me:

```diff
+ pads = torch.zeros([prosody_feature.shape[0], prosody_feature.shape[1], x.shape[-1] - prosody_feature.shape[-1]])
+ prosody_feature = torch.cat([prosody_feature, pads], dim=2)
  x_timbre = x
  outs, qs, commit_loss, quantized_buf = self.quantize(
      x, prosody_feature, n_quantizers=n_quantizers
  )
```
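In isolation, the padding workaround can be sketched like this. Shapes are illustrative, and the `device`/`dtype` arguments are an assumption added so the pad tensor matches the input when running on GPU (the snippet in the comment above creates the pads on CPU):

```python
import torch

x = torch.randn(2, 256, 101)                # quantized phone branch, T + 1 frames
prosody_feature = torch.randn(2, 256, 100)  # prosody branch, T frames

# Zero-pad prosody_feature on the time axis to match x's length.
pads = torch.zeros(
    prosody_feature.shape[0],
    prosody_feature.shape[1],
    x.shape[-1] - prosody_feature.shape[-1],
    device=prosody_feature.device,
    dtype=prosody_feature.dtype,
)
prosody_feature = torch.cat([prosody_feature, pads], dim=2)

print(prosody_feature.shape)  # now aligned with x
```

An equivalent one-liner is `torch.nn.functional.pad(prosody_feature, (0, x.shape[-1] - prosody_feature.shape[-1]))`, which pads the last dimension on the right. Whether zero-padding prosody is the intended fix, or the upstream feature extraction should produce matching lengths, is a question for the maintainers.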