Just like CONSISTENT TRAINING AND DECODING FOR END-TO-END SPEECH RECOGNITION USING LATTICE-FREE MMI.
That is an interesting paper,
In principle, as long as we are not constraining things with some kind of graph, the LF-MMI denominator probability would exactly equal 1.0, since all paths sum to 1 and all paths are included in the LF-MMI denominator. So it should generate no derivative. But I can imagine that, by representing it as a graph of some kind (after pruning), we are focusing on the more probable paths that could actually compete as hypotheses, so it might have a positive effect.
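To see why the unconstrained denominator contributes no gradient, here is a small numeric sketch (not tied to any particular toolkit): with a locally normalized (per-frame softmax) model and no graph constraint, the total probability summed over every possible label sequence is exactly 1, so its log is 0 and the derivative vanishes. The frame count `T` and label-set size `V` below are arbitrary illustrative choices.

```python
import itertools
import numpy as np

np.random.seed(0)
T, V = 4, 3  # illustrative: 4 frames, 3 output labels

# Per-frame softmax: each row of `probs` sums to 1.
logits = np.random.randn(T, V)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Brute-force "denominator" over ALL V**T label sequences (no graph pruning):
# sum over every path of the product of per-frame label probabilities.
total = sum(
    np.prod([probs[t, labels[t]] for t in range(T)])
    for labels in itertools.product(range(V), repeat=T)
)

print(total)  # ~1.0, so log(total) ~ 0 and it contributes no gradient
```

Once the denominator is restricted to a pruned graph, some paths are excluded, the sum drops below 1, and the term does produce a gradient that pushes probability away from the competing hypotheses the graph retains.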
I'm interested in this kind of thing, but I doubt that the core team will have time to get to it in the next few months.