MacOS test falsely uses MPS, fails and is misreported as passing #1416

Open
mikekgfb opened this issue Dec 11, 2024 · 0 comments
Labels
bug (Something isn't working) · CI Infra (Issues related to CI infrastructure and setup)

Comments

@mikekgfb
Contributor

mikekgfb commented Dec 11, 2024

🐛 Describe the bug

#1415, #1404, and all other PRs fail on test-readme-macos when torchchat apparently (and incorrectly) tries to load the model onto MPS.
Due to #1315, the test doesn't report as failed, so things get committed anyway.

I don't know why test-readme-macos would try to load into MPS. There's a multi-layered story here: a virtualized Mac does not support MPS (most likely because there's no MMU for MPS, so MPS can't be virtualized). MPS is, however, still reported as available by the OS, so a simple check of torch.backends.mps.is_available() is not sufficient: PyTorch thinks MPS is available, but any and all memory allocations on it fail.

We're trying to handle this by allocating a tensor in MPS memory and checking whether that succeeds or fails in is_mps_available() here => https://github.com/pytorch/torchchat/blob/main/torchchat/utils/build_utils.py#L269, which get_device_str() uses to decide whether MPS is available and should be returned as the fastest device.

Ideally, this should be fixed by expanding the get_device_str() and is_mps_available() functions, together with #1315.
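
As a hedged sketch (not the actual torchchat code), the allocation-probe idea described above could look like the following: report MPS only if PyTorch both advertises it and a tiny allocation on it actually succeeds. The helper names here are illustrative, not torchchat's real ones.

```python
import torch

def is_mps_actually_usable() -> bool:
    # torch.backends.mps.is_available() alone is not enough on virtualized
    # macOS runners: the backend is advertised, but every allocation fails.
    if not torch.backends.mps.is_available():
        return False
    try:
        # Probe with a tiny allocation; on a virtualized Mac without real
        # MPS support this raises RuntimeError ("MPS backend out of memory").
        torch.zeros(1, device="mps")
        return True
    except RuntimeError:
        return False

def get_fast_device() -> str:
    # Hypothetical mirror of what get_device_str("fast") is meant to do.
    if torch.cuda.is_available():
        return "cuda"
    if is_mps_actually_usable():
        return "mps"
    return "cpu"
```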

A failing example is here => https://github.com/pytorch/torchchat/actions/runs/12243820522/job/34154220414?pr=1404

```
## Running via PyTorch 
  Downloading https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt...
  Downloading https://github.com/karpathy/llama2.c/raw/master/tokenizer.model...
  NumExpr defaulting to 6 threads.
  PyTorch version 2.6.0.dev20241013 available.
  Moving model to /Users/runner/.torchchat/model-cache/stories15M.
  
  Downloading builder script:   0%|          | 0.00/5.67k [00:00<?, ?B/s]
  Downloading builder script: 100%|██████████| 5.67k/5.67k [00:00<00:00, 5.30MB/s]
  Traceback (most recent call last):
    File "/Users/runner/work/torchchat/torchchat/torchchat.py", line 96, in <module>
  Using device=mps 
  Loading model...
      generate_main(args)
    File "/Users/runner/work/torchchat/torchchat/torchchat/generate.py", line 1235, in main
      gen = Generator(
    File "/Users/runner/work/torchchat/torchchat/torchchat/generate.py", line 293, in __init__
      self.model = _initialize_model(self.builder_args, self.quantize, self.tokenizer)
    File "/Users/runner/work/torchchat/torchchat/torchchat/cli/builder.py", line 603, in _initialize_model
      model = _load_model(builder_args)
    File "/Users/runner/work/torchchat/torchchat/torchchat/cli/builder.py", line 465, in _load_model
      model = _load_model_default(builder_args)
    File "/Users/runner/work/torchchat/torchchat/torchchat/cli/builder.py", line 427, in _load_model_default
      checkpoint = _load_checkpoint(builder_args)
    File "/Users/runner/work/torchchat/torchchat/torchchat/cli/builder.py", line 412, in _load_checkpoint
      checkpoint = torch.load(
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1359, in load
      return _load(
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1856, in _load
      result = unpickler.load()
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/_weights_only_unpickler.py", line 388, in load
      self.append(self.persistent_load(pid))
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1820, in persistent_load
  Time to load model: 0.10 seconds
      typed_storage = load_tensor(
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1792, in load_tensor
      wrap_storage=restore_location(storage, location),
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1693, in restore_location
      return default_restore_location(storage, map_location)
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 601, in default_restore_location
      result = fn(storage, location)
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 467, in _mps_deserialize
      return obj.mps()
    File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/storage.py", line 260, in mps
      return torch.UntypedStorage(self.size(), device="mps").copy_(self, False)
  RuntimeError: MPS backend out of memory (MPS allocated: 1.02 GB, other allocations: 0 bytes, max allowed: 15.87 GB). Tried to allocate 256 bytes on shared pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
+ echo ::group::Completion
```

Versions

The problem occurs in GitHub CI/CD.

mikekgfb added a commit to mikekgfb/torchchat-1 that referenced this issue Dec 11, 2024
As per pytorch#1416, torchchat on hosts without MPS (which includes all GitHub-hosted runners, since they use KVM to virtualize macOS but not MPS) should choose CPU as the "fast" device. The logic is present (see the discussion in pytorch#1416), but it is either not fully functional (that would be the easier case to fix: just print the result of get_device_str and fix the code!) or specifically ignored on load in torch/serialization.py (if that is the case, we're effectively looking at a core PyTorch issue).

In the meantime, this bandaid just forces macOS tests to run on CPU, albeit short-circuiting testing/execution of the "fast" device logic. Not ideal, but some testing beats no testing.
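
For illustration only, here is a minimal sketch of what a CPU fallback could look like at checkpoint-load time, assuming the caller has already resolved the effective device to "cpu". Remapping storages with torch.load's map_location keeps MPS-tagged storages from going through the _mps_deserialize path that fails in the log above. The helper name and path are hypothetical, not torchchat's _load_checkpoint.

```python
import torch

def load_checkpoint_on(device: str, checkpoint_path: str):
    # Hypothetical helper: remapping all storages to the resolved device
    # avoids restoring MPS-tagged storages onto an MPS backend that cannot
    # actually allocate memory (as on virtualized macOS runners).
    return torch.load(
        checkpoint_path,
        map_location=device,   # e.g. "cpu" on a virtualized macOS runner
        weights_only=True,     # matches the weights-only load in the traceback
    )

# Usage (illustrative path):
# checkpoint = load_checkpoint_on("cpu", "stories15M.pt")
```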
@Jack-Khuu added the bug and CI Infra labels Dec 12, 2024