Refactor code according to keyedModelHandler changes #21819
Conversation
Run Python 3.7 PostCommit
Run Python 3.9 PostCommit
PytorchModelHandler(
    state_dict_path=known_args.model_state_dict_path,
    model_class=model_class,
    model_params=model_params))
Do we still need the type hints on line 139?
Yes, I still get the error (expecting Union) if I remove the output type hint.
Do you think it's because RunInference is imported from api.py instead of base.py, and the RunInference in api.py has a Union[...] type?
OK. I thought splitting the model handlers would let us get the type deterministically instead of having to use a union. We can look into it separately.
cc: @robertwb @yeandy @rezarokni
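For context, here is a minimal, hedged sketch of the kind of explicit output type hint being discussed; it is not the PR's code. The model class, state dict path, and model_params are hypothetical placeholders, and the import locations are assumptions (the example under review imports RunInference from api.py).

import apache_beam as beam
import torch
from apache_beam.ml.inference.base import PredictionResult, RunInference
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandler


class LinearModel(torch.nn.Module):
  """Hypothetical stand-in for the example's model_class."""

  def __init__(self):
    super().__init__()
    self.linear = torch.nn.Linear(1, 1)

  def forward(self, x):
    return self.linear(x)


model_handler = PytorchModelHandler(
    state_dict_path='path/to/model_state_dict.pth',  # hypothetical path
    model_class=LinearModel,
    model_params={})

with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create([torch.tensor([1.0])])
      # Without an explicit hint, the inferred output type of RunInference is a
      # Union (per the discussion above), so the output type is stated here.
      | RunInference(model_handler).with_output_types(PredictionResult))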
    state_dict_path=known_args.model_state_dict_path,
    model_class=model_class,
    model_params=model_params)
# the input to RunInference transform is keyed. Wrap
This sounds like the input is always keyed. Isn't it the user's choice? If so, would this be more appropriate instead:
In this example we pass keyed inputs to the RunInference transform. Therefore, we use the KeyedModelHandler wrapper over PytorchModelHandler.
Done
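Below is a minimal, hedged sketch of the wrapping the revised comment describes: keyed (filename, tensor) inputs handled via KeyedModelHandler over PytorchModelHandler. The keys, tensors, model class, path, and model_params are hypothetical placeholders, not the example's actual values.

from typing import Tuple

import apache_beam as beam
import torch
from apache_beam.ml.inference.base import (
    KeyedModelHandler, PredictionResult, RunInference)
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandler


class LinearModel(torch.nn.Module):
  """Hypothetical stand-in for the example's model_class."""

  def __init__(self):
    super().__init__()
    self.linear = torch.nn.Linear(1, 1)

  def forward(self, x):
    return self.linear(x)


# In this example we pass keyed inputs to the RunInference transform,
# so we wrap PytorchModelHandler in KeyedModelHandler.
keyed_model_handler = KeyedModelHandler(
    PytorchModelHandler(
        state_dict_path='path/to/model_state_dict.pth',  # hypothetical path
        model_class=LinearModel,
        model_params={}))

with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      # Each element is a (key, tensor) pair, e.g. (filename, image tensor).
      | beam.Create([('image_0.jpg', torch.tensor([1.0]))])
      | RunInference(keyed_model_handler).with_output_types(
          Tuple[str, PredictionResult]))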
sdks/python/apache_beam/examples/inference/pytorch_image_classification.py
@AnandInguva can you fix the input and state_dict path names too? Or maybe in a separate PR.
Ran the IT test that uses the above example offline and it passes. All the unit tests for torch RunInference also pass locally.
Will look into it in a separate PR. Thanks.
* Refactor code according to keyedModelHandler changes
* Add comments on why keyedModelHandler is used.
Resolve tests failing due to #21777.
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
- Choose reviewer(s) and mention them in a comment (R: @username).
- Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
- Update CHANGES.md with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI.