fix ToTensor to handle numpy #4
Merged
Conversation
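The fix in this PR makes `ToTensor` accept NumPy arrays in addition to PIL images. A minimal sketch of the conversion that branch performs (an HWC `uint8` array becomes a CHW float array scaled to [0, 1]); the helper name `to_tensor_like` is hypothetical, and plain NumPy stands in for torch tensors here:

```python
import numpy as np

def to_tensor_like(pic: np.ndarray) -> np.ndarray:
    """Sketch of ToTensor's numpy branch: HWC uint8 -> CHW float32 in [0, 1]."""
    if pic.ndim == 2:
        pic = pic[:, :, None]          # add a channel axis for grayscale input
    chw = pic.transpose((2, 0, 1))     # HWC -> CHW, matching torch's tensor layout
    return chw.astype(np.float32) / 255.0

img = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
out = to_tensor_like(img)              # shape (3, 2, 2), values in [0, 1]
```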
sprt added a commit to sprt/vision that referenced this pull request on Dec 11, 2019:

pytorch#1019 introduced a change that removes empty boxes in postprocessing, on the basis that empty boxes could previously reach this postprocessing step and the model could therefore output them (even though NMS would most likely have filtered them out). It is essentially a safety check that would seldom be needed. However, that filtering causes dynamicity on the TPU (because the number of empty boxes, if any, is unknown). In any case, PR pytorch#4 recently introduced a change that purposefully pads the boxes tensor with empty boxes to avoid dynamicity, so there is no point trying to remove boxes we use as padding; we simply filter them out of the output on the CPU.
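The commit message above describes padding the boxes tensor with empty (zero-area) boxes on the TPU and stripping them out later on the CPU. A minimal sketch of that CPU-side filtering step, using NumPy in place of torch tensors; the function name `strip_padding_boxes` is hypothetical:

```python
import numpy as np

def strip_padding_boxes(boxes: np.ndarray) -> np.ndarray:
    """Drop zero-area [x1, y1, x2, y2] boxes that were added as TPU padding."""
    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]
    keep = (widths > 0) & (heights > 0)   # a box used as padding has no area
    return boxes[keep]

padded = np.array([[0, 0, 10, 10],        # real detection
                   [0, 0, 0, 0],          # padding box
                   [5, 5, 8, 9]],         # real detection
                  dtype=np.float32)
real = strip_padding_boxes(padded)        # keeps the two non-empty boxes
```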
sprt added a commit to sprt/vision that referenced this pull request on Dec 17, 2019.
zhiqwang pushed a commit to zhiqwang/vision that referenced this pull request on Jan 19, 2022: Apply ufmt format
This was referenced Feb 22, 2022
facebook-github-bot pushed a commit that referenced this pull request on Jun 7, 2022:

… to conform with non-quantized counterpart filenames (#77037)

Summary: X-link: pytorch/pytorch#77037

Names of analogous files in the quantized directory (previously snake case) were inconsistent with their non-quantized filename counterparts (pascal case). This is the first of a series of PRs that changes all files in the quantized directory (and sub-directories) to pascal case. `aten/src/ATen/native/quantized/qconv_unpack.cpp` has not been renamed yet because, for reasons currently unknown, `import torch` produces the error below after the rename (`qlinear_unpack.cpp` renaming also seems to fail some Phabricator CI tests for similar reasons). We suspect that these may be undefined errors and will revisit renaming these files in a future PR.

```
terminate called after throwing an instance of 'c10::Error'
what(): Type c10::intrusive_ptr<ConvPackedParamsBase<2> > could not be converted to any of the known types.
Exception raised from operator() at ../aten/src/ATen/core/jit_type.h:1735 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x55 (0x7f26745c0c65 in /data/users/dzdang/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xb1 (0x7f26745bdcd1 in /data/users/dzdang/pytorch/torch/lib/libc10.so)
frame #2: <unknown function> + 0x1494e24 (0x7f2663b14e24 in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0xfed0bc (0x7f266366d0bc in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #4: c10::detail::infer_schema::make_function_schema(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>) + 0x5a (0x7f266366d71a in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #5: c10::detail::infer_schema::make_function_schema(c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>) + 0x7b (0x7f266366e06b in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0x1493f32 (0x7f2663b13f32 in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xe227dd (0x7f26634a27dd in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x14e0a (0x7f268c934e0a in /lib64/ld-linux-x86-64.so.2)
..........................truncated.............
```

Reviewed By: malfet
Differential Revision: D36862332
Pulled By: dzdang
fbshipit-source-id: 598c36656b4e71f906d940e7ff19ecf82d43031d
datumbox added a commit that referenced this pull request on Jun 8, 2022:

…zed directory… (#6133)

[quant][core][better-engineering] Rename files in quantized directory to conform with non-quantized counterpart filenames (#77037)

Co-authored-by: dzdang <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>
rajveerb pushed a commit to rajveerb/vision that referenced this pull request on Nov 30, 2023:

* RNN-T reference update for MLPerf Training v1.0
* Switch to stable DALI release
* Transcript tensor building: index with np array instead of torch tensor
* Fix multi-GPU bucketing
* Eval every epoch, logging improvements
* User can adjust optimizer betas
* Gradient clipping
* Missing config file
* [README] Add driver disclaimer
* Right path to sentencepieces
* Bind all GPUs in docker/launch.sh script
* Move speed perturbation out of evaluation
* Remove unrelated code; update logging; default arguments with LAMB
* Add evaluation when every sample is seen once
* Add run_and_time.sh
* Update logging
* Missing augmentation logs
* Revert unwanted dropout removal from first two encoder layers
* Scaling weights initialization
* Limit number of symbols produced by the greedy decoder
* Simplification: remove old eval pipeline
* dev_ema in tb_logginer
* Loading from checkpoint restores optimizer state
* RNN-T logging update (pytorch#4)
* Logging uses constants instead of raw strings
* Missing log entries
* Add weights initialization logging according to mlcommons/logging#80
* 0.5 weights initialization scale gives more stable convergence
* Fix typo, update logging lib to include new constant
* README update
* Apply review suggestions
* [README] Fix model diagram: 2x time stacking after 2nd encoder layer, not 3x
* Transcript tensor padding comment
* DALI output doesn't need extra zeroing of padding
* Update README.md: links to code sources, fix LSTM weight and bias initialization description
* [README] Model diagram fix: adjust to 1023 sentencepieces
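Among the training changes listed above is gradient clipping. A minimal sketch of clipping by global norm (the behavior of torch's `clip_grad_norm_`), written here in plain NumPy; the function name and the `max_norm` value are illustrative assumptions, not taken from the referenced commit:

```python
import numpy as np

def clip_grads_by_global_norm(grads, max_norm: float):
    """Scale all gradients down uniformly if their combined L2 norm exceeds max_norm."""
    total_norm = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / (total_norm + 1e-6)  # small epsilon avoids division issues
        grads = [g * scale for g in grads]
    return grads, total_norm

grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]  # global L2 norm is 13
clipped, norm = clip_grads_by_global_norm(grads, max_norm=1.0)
```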