cifar 10 and 100 #3

Merged

soumith merged 1 commit into master from cifar on Nov 10, 2016
Conversation

@soumith (Member) commented Nov 10, 2016

No description provided.

soumith merged commit 63dabca into master on Nov 10, 2016
soumith deleted the cifar branch on Nov 10, 2016 at 21:43
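
This PR adds the CIFAR-10 and CIFAR-100 dataset classes to torchvision.datasets. As a quick orientation, below is a minimal usage sketch written against the current torchvision API; the exact constructor arguments at the time of this merge may have differed.

```python
# Minimal usage sketch of the datasets added by this PR (current torchvision API;
# argument names and defaults in the 2016 version may have differed).
import torch
from torchvision import datasets, transforms

transform = transforms.ToTensor()  # 32x32 HWC uint8 images -> CHW float tensors in [0, 1]

# download=True fetches and verifies the CIFAR archives under ./data on first use.
train_set = datasets.CIFAR10("./data", train=True, transform=transform, download=True)
test_set = datasets.CIFAR100("./data", train=False, transform=transform, download=True)

loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
images, labels = next(iter(loader))
print(images.shape)  # torch.Size([64, 3, 32, 32]); CIFAR-100 has 100 label classes instead of 10
```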
sprt added a commit to sprt/vision that referenced this pull request Dec 11, 2019
zhiqwang pushed a commit to zhiqwang/vision that referenced this pull request Jan 19, 2022
Fixing python lint, docstrings, and adding typing annotations
facebook-github-bot pushed a commit that referenced this pull request Jun 7, 2022
… to conform with non-quantized counterpart filenames (#77037)

Summary:
X-link: pytorch/pytorch#77037

Names of analogous files in the quantized directory (previously snake case) were inconsistent with their non-quantized filename counterparts (Pascal case). This is the first of a series of PRs that renames all files in the quantized directory (and its sub-directories) to Pascal case.

`aten/src/ATen/native/quantized/qconv_unpack.cpp` has not been renamed yet because, for reasons currently unknown, `import torch` produces the error below after that rename (renaming `qlinear_unpack.cpp` also seems to fail some Phabricator CI tests for similar reasons). We suspect these may be undefined errors and will revisit renaming these files in a future PR.

```
terminate called after throwing an instance of 'c10::Error'
  what():  Type c10::intrusive_ptr<ConvPackedParamsBase<2> > could not be converted to any of the known types.
Exception raised from operator() at ../aten/src/ATen/core/jit_type.h:1735 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x55 (0x7f26745c0c65 in /data/users/dzdang/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xb1 (0x7f26745bdcd1 in /data/users/dzdang/pytorch/torch/lib/libc10.so)
frame #2: <unknown function> + 0x1494e24 (0x7f2663b14e24 in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0xfed0bc (0x7f266366d0bc in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #4: c10::detail::infer_schema::make_function_schema(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>) + 0x5a (0x7f266366d71a in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #5: c10::detail::infer_schema::make_function_schema(c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>) + 0x7b (0x7f266366e06b in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0x1493f32 (0x7f2663b13f32 in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xe227dd (0x7f26634a27dd in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x14e0a (0x7f268c934e0a in /lib64/ld-linux-x86-64.so.2)
..........................truncated.............
```

Reviewed By: malfet

Differential Revision: D36862332

Pulled By: dzdang

fbshipit-source-id: 598c36656b4e71f906d940e7ff19ecf82d43031d
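
To make the naming convention in the commit message above concrete, here is a small, hypothetical illustration of the snake_case to PascalCase mapping it describes; the helper and the example file name are illustrative only and are not taken from the actual rename list.

```python
# Hypothetical sketch of the snake_case -> PascalCase rename convention described above;
# the file name is only an example, not part of the actual rename list.
def snake_to_pascal(filename: str) -> str:
    stem, _, ext = filename.rpartition(".")
    return "".join(part.capitalize() for part in stem.split("_")) + "." + ext

print(snake_to_pascal("qconv_unpack.cpp"))  # -> "QconvUnpack.cpp"
```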
datumbox added a commit that referenced this pull request Jun 8, 2022
…zed directory… (#6133)

* [quant][core][better-engineering] Rename files in quantized directory to conform with non-quantized counterpart filenames (#77037)

* empty commit

* empty commit

* empty commit

Co-authored-by: dzdang <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>
rajveerb pushed a commit to rajveerb/vision that referenced this pull request Nov 30, 2023
* [llm] Init draft NVIDIA reference

* [LLM] Add exact HPs used to match NVIDIA's convergence curves

* [LLM] Add data preprocessing steps and remove dropout

* [LLM] fix eval, add ckpt load util, remove unnecessary files

* [LLM] Update data preprocessing stage in README

* Full validation and google settings

* Apply review comments

* Anmolgupt/nvidia llm reference update (pytorch#3)

* Update Nvidia LLM reference code version

Co-authored-by: Anmol Gupta <[email protected]>

* fixes to imports (pytorch#5)

Co-authored-by: Anmol Gupta <[email protected]>

* distributed checkpoint and mlperf logger support (pytorch#6)

* readme and mllogger keywords update (pytorch#7)

Co-authored-by: Anmol Gupta <[email protected]>

* Update fp32_checkpoint_checksum.log

* Update README.md

* Update README.md

* Update README.md

* mlperf logger keywords update (pytorch#8)

Co-authored-by: Anmol Gupta <[email protected]>

* [LLM] Create framework folder

* [LLM] Update README to follow reference template

* Describe LLM checkpoint format in README (pytorch#9)

Describe LLM checkpoint format in README

* [LLM] Readme updates, small fixes

* readme update and run script eval update (pytorch#10)

Co-authored-by: Anmol Gupta <[email protected]>

---------

Co-authored-by: Mikołaj Błaż <[email protected]>
Co-authored-by: anmolgupt <[email protected]>
Co-authored-by: Anmol Gupta <[email protected]>
Co-authored-by: mikolajblaz <[email protected]>