
Questions about loss #8

Open
Kiljoon opened this issue Aug 28, 2023 · 0 comments
Kiljoon commented Aug 28, 2023

Thanks for the great work.

The "fixed-point correction" appears to be applied densely in the code, i.e., at every iterate, as in RAFT and similar methods. The paper, however, says it is applied sparsely. How do the results differ between these two approaches?

Furthermore, it seems that the intermediate hidden states would need to be stored to compute the fixed-point correction, yet changing f_thres does not change memory usage at all. Can you explain why that is?
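To make the dense-vs-sparse distinction concrete, here is a minimal, self-contained sketch (pure Python; all names such as `solve` and `correct_every` are hypothetical, not from this repo). It contrasts dense correction, which keeps every iterate for the loss, with sparse correction, which keeps only every k-th iterate, using a toy contraction whose fixed point is known:

```python
def f(z, x):
    # Toy contraction: z <- 0.5*z + x has the fixed point z* = 2*x.
    return 0.5 * z + x

def solve(x, f_thres, correct_every=None):
    """Run f_thres fixed-point iterations.

    correct_every=None -> store no intermediate states (plain forward pass);
    correct_every=k    -> keep every k-th iterate (sparse correction);
    correct_every=1    -> keep every iterate (dense, RAFT-style correction).
    """
    z, stored = 0.0, []
    for t in range(1, f_thres + 1):
        z = f(z, x)  # the running iterate is overwritten in place
        if correct_every and t % correct_every == 0:
            stored.append(z)  # kept only to compute a correction loss on it
    return z, stored

z_star, dense = solve(x=1.0, f_thres=40, correct_every=1)
_, sparse = solve(x=1.0, f_thres=40, correct_every=10)
print(len(dense), len(sparse), round(z_star, 6))  # 40 4 2.0
```

Under this reading, the number of states held for the loss is f_thres for dense correction but only f_thres/k for sparse correction, while the solver itself overwrites its iterate and so needs constant memory in f_thres; whether that matches what the repo actually does is exactly the question above.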
