
[Bug]: Does not work on MacOS #7430

Open
devlux76 opened this issue Aug 12, 2024 · 9 comments
Labels
bug Something isn't working unstale

Comments

@devlux76

### Your current environment

```text
username@My-MacBook-Pro ~ % python3 collect_env.py
Collecting environment information...
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.5 (x86_64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.6 (default, Feb 3 2024, 15:58:28) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-14.5-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz

Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.2.2
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: N/A
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect
```

```text
pip install vllm
Defaulting to user installation because normal site-packages is not writeable
Collecting vllm
  Downloading vllm-0.5.4.tar.gz (958 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 958.6/958.6 kB 2.8 MB/s eta 0:00:00
  Installing build dependencies ... error
  error: subprocess-exited-with-error

  × pip subprocess to install build dependencies did not run successfully.
  │ exit code: 1
  ╰─> [10 lines of output]
      Collecting cmake>=3.21
        Downloading cmake-3.30.2-py3-none-macosx_11_0_universal2.macosx_10_10_x86_64.macosx_11_0_arm64.whl.metadata (6.1 kB)
      Collecting ninja
        Downloading ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl.metadata (5.3 kB)
      Collecting packaging
        Using cached packaging-24.1-py3-none-any.whl.metadata (3.2 kB)
      Collecting setuptools>=49.4.0
        Downloading setuptools-72.1.0-py3-none-any.whl.metadata (6.6 kB)
      ERROR: Could not find a version that satisfies the requirement torch==2.4.0 (from versions: 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2)
      ERROR: No matching distribution found for torch==2.4.0
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
```


### 🐛 Describe the bug

The installation instructions on the front page don't seem to work on macOS. I'm at a loss for what to do next.

`pip install vllm`


@devlux76 devlux76 added the bug Something isn't working label Aug 12, 2024
@devlux76

I'm not sure what's causing the mess above either. Sorry.

@servient-ashwin

servient-ashwin commented Aug 12, 2024

This is both a vLLM and a PyTorch issue.

These are the available PyTorch CPU versions on an Intel Mac:

```text
pip index versions torch
WARNING: pip index is currently an experimental command. It may be removed/changed in a future release without prior warning.
torch (2.2.2)
Available versions: 2.2.2, 2.2.1, 2.2.0
```

and these are the available PyTorch versions on a GPU machine:

```text
pip index versions torch
WARNING: pip index is currently an experimental command. It may be removed/changed in a future release without prior warning.
WARNING: Ignoring invalid distribution -m-eval (/opt/conda/lib/python3.10/site-packages)
torch (2.4.0)
Available versions: 2.4.0, 2.3.1, 2.3.0, 2.2.2, 2.2.1, 2.2.0, 2.1.2, 2.1.1, 2.1.0, 2.0.1, 2.0.0, 1.13.1, 1.13.0, 1.12.1, 1.12.0, 1.11.0

  INSTALLED: 2.3.1
  LATEST:    2.4.0
```

My assumption is that in a recent bump they simply replaced the version number and appended `+cpu`, but the binary doesn't seem to have been made available yet (although this could be incorrect). I came across this same issue while starting my contribution PR and couldn't build vllm locally.

I believe #6931 may be the cause, although the wheel does seem to be listed at https://download.pytorch.org/whl/torch/.

@girishponkiya

girishponkiya commented Aug 26, 2024

The latest version of vLLM needs torch v2.4.0, but PyTorch stopped building macOS x86_64 binaries as of torch v2.3.0.
RFC: pytorch/pytorch#114602, and my comment on it
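A quick way to see whether your interpreter is affected is to check the platform before installing. This is a minimal sketch based on the versions reported in this thread; `last_torch_for` is a hypothetical helper, not part of vLLM or PyTorch:

```python
import platform
import sys
from typing import Optional

def last_torch_for(sys_platform: str, machine: str) -> Optional[str]:
    """Newest torch wheel version installable on the given platform,
    or None if current wheels are still published there.

    The 2.2.2 cutoff for Intel Macs comes from the comments above:
    PyTorch dropped macOS x86_64 wheels starting with v2.3.0.
    """
    if sys_platform == "darwin" and machine == "x86_64":
        return "2.2.2"
    return None  # other platforms still receive current wheels

capped = last_torch_for(sys.platform, platform.machine())
if capped is not None:
    print(f"Intel Mac detected: newest installable torch wheel is {capped}, "
          "so vLLM's torch==2.4.0 pin cannot resolve here.")
```

On an Intel Mac this explains the `No matching distribution found for torch==2.4.0` error above: the resolver only sees wheels up to 2.2.2.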

@neviaumi

I managed to force the install with PDM by overriding the resolutions. I'm not sure whether something similar can be done with pip.

My configuration is below (note: the PEP 508 marker for macOS is `sys_platform == 'darwin'`, not `'macos'`):

```toml
[project]
name = "experimental-vllm"
version = "0.1.0"
description = "Default template for PDM package"
authors = [
    { name = "David Ng", email = "david.ng.dev@gmail.com" },
]
dependencies = [
    "huggingface-hub[cli]>=0.26.2",
    "vllm>=0.6.3.post1",
    # sys.platform reports "darwin" on macOS
    "torch==2.2.2; sys_platform == 'darwin'",
    "torchvision==0.17.2; sys_platform == 'darwin'",
]
requires-python = "==3.12.*"
readme = "README.md"
license = { text = "MIT" }

[tool.pdm]
distribution = false

[tool.pdm.resolution.overrides]
torch = "2.2.2; sys_platform == 'darwin'"
torchvision = "0.17.2; sys_platform == 'darwin'"
```
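pip has no direct equivalent of PDM's resolution overrides, but as an untested sketch, a constraints file exported via the `PIP_CONSTRAINT` environment variable should also be seen by pip's isolated build environment, which is where the `torch==2.4.0` pin fails above. The version pins here are taken from the thread; even if resolution succeeds, the vLLM build itself may still fail on an Intel Mac:

```shell
# Pin torch to the last wheels published for macOS x86_64
# (2.2.2 / 0.17.2, per the comments above).
cat > constraints.txt <<'EOF'
torch==2.2.2
torchvision==0.17.2
EOF

# Unlike `pip install -c`, the PIP_CONSTRAINT environment variable
# also applies inside pip's isolated build environment.
export PIP_CONSTRAINT="$PWD/constraints.txt"

# pip install vllm   # uncomment to attempt the install
```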


This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

@github-actions github-actions bot added the stale label Jan 28, 2025
@NomiJ

NomiJ commented Jan 29, 2025

Can we reopen this? I came here looking for a solution and have no idea where to go from here.

@github-actions github-actions bot added unstale and removed stale labels Jan 30, 2025
@ammaraslam10

:( unfair

@mohanajuhi166

I have the same issue. Can we reopen this?

@qdrddr

qdrddr commented Feb 20, 2025

Would it be of any help that LM Studio has implemented MLX? And here is Anemll, an ANE library for working with MLX; it is MIT licensed.
