Not compatible with pytorch-cuda on windows platform #49
Thanks for reporting. This is going to be difficult because I don't have a similar platform to troubleshoot on. I think the issue may be that the console mode is not getting set correctly. Can you add this code and let me know if there are any differences between the CUDA version and the non-CUDA version?

```python
# Add after manager = enlighten.get_manager()
import sys
from msvcrt import get_osfhandle

import jinxed
from jinxed.win32 import get_console_mode

# Check if stdout is redirected
print(sys.stdout is sys.__stdout__)

# Check for errors in blessed
print(manager.term.errors)

# See which module jinxed is using for terminfo
print(jinxed._terminal.TERM.terminfo.__name__)

# Check console mode
print(get_console_mode(get_osfhandle(sys.stdout.fileno())))
```
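For reference, here is a stdlib-only sketch of roughly what `jinxed.win32.get_console_mode` does under the hood (this is an illustration built on the Windows `GetConsoleMode` API via `ctypes`, not jinxed's actual code). It returns `None` off Windows or when the handle is not a real console, which is itself a useful signal that stdout has been redirected or wrapped:

```python
import sys

def console_mode(fd=1):
    """Return the Windows console mode for a file descriptor, or None
    when not on Windows or when the handle is not a real console
    (e.g. stdout has been redirected or wrapped)."""
    if sys.platform != "win32":
        return None
    import ctypes
    import msvcrt
    handle = msvcrt.get_osfhandle(fd)
    mode = ctypes.c_uint32()
    ok = ctypes.windll.kernel32.GetConsoleMode(handle, ctypes.byref(mode))
    return mode.value if ok else None

# Bit 0x0004 (ENABLE_VIRTUAL_TERMINAL_PROCESSING) in the returned mode is
# what lets ANSI escape sequences work in modern Windows terminals.
print(console_mode())
```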
Thanks for your reply!

non-CUDA version output:

CUDA version output:
What that tells us is that stdout is getting wrapped by colorama. Try specifying the real stdout when creating the manager:

```python
manager = enlighten.get_manager(stream=sys.__stdout__)
```
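If you want to guard against this in general, a small helper can fall back to the original stream whenever `sys.stdout` has been replaced by a wrapper. This is my own sketch, not part of enlighten's API; it relies only on the fact that `sys.__stdout__` always holds the stream the interpreter started with:

```python
import sys

def real_stdout():
    # Another library (here, colorama) may have replaced sys.stdout with
    # a wrapping stream. sys.__stdout__ keeps the original, so compare
    # identities and fall back to the original when stdout is wrapped.
    if sys.stdout is sys.__stdout__:
        return sys.stdout
    return sys.__stdout__

# Hypothetical usage:
# manager = enlighten.get_manager(stream=real_stdout())
```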
It works! Thank you very much avylove!
Glad to hear it! I'm wondering if I should make ...
Changed default from ...
Describe the bug
As far as I've tested, this is a bug that only occurs on the Windows platform.

If a PyTorch (with CUDA) module is imported, all print calls made between counter.update() calls will malfunction. In detail, if only one line is printed between update() calls, the last printed text is replaced by the new one; if multiple lines are printed, the progress bar is overwritten and then redrawn on a new line when update() is called.

What confused me is that the bug only occurs with the CUDA version of PyTorch and only on Windows. In other words, a CPU version of PyTorch does not encounter it. I guess the CUDA initialization process may conflict with enlighten.
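For context on why a wrapped stream can break the display: terminal progress bars redraw themselves in place using a carriage return (or cursor-movement escape codes), and a wrapper that mishandles those control characters shifts where the next write lands. A minimal illustration of the in-place redraw technique (my own sketch, not enlighten's actual drawing code):

```python
import sys

def draw_bar(done, total, width=20, stream=sys.stdout):
    # "\r" moves the cursor back to column 0 without starting a new
    # line, so the next call overdraws the bar in place.
    filled = int(width * done / total)
    bar = "#" * filled + "-" * (width - filled)
    stream.write(f"\r[{bar}] {done}/{total}")
    stream.flush()
```

If the stream sitting between this code and the console drops or translates the `\r`, each redraw lands on a fresh line instead of overwriting the previous one, which matches the symptoms described above.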
To Reproduce
No torch, no bug.
code:
output:
Torch (cuda version), BUG!
output:
Torch (cpu version), no bug
(Same code as the previous case.)
Environment (please complete the following information):
Additional notes
I've tested it on my remote Linux server, and it works well under all conditions.