
Commit

Merge branch 'deepspeedai:master' into avoid-graph-break-caused-by-inner-classes

deepcharm authored Feb 26, 2025
2 parents d9ec2e7 + 729dfaf commit 561bd6e
Showing 21 changed files with 665 additions and 54 deletions.
3 changes: 2 additions & 1 deletion .github/workflows/no-torch.yml
@@ -32,11 +32,12 @@ jobs:
         run: |
           pip uninstall torch --yes
           pip install setuptools
+          pip install build
           pip list
       - name: Build deepspeed
         run: |
-          DS_BUILD_STRING=" " python setup.py sdist
+          DS_BUILD_STRING=" " python -m build --sdist
       - name: Open GitHub issue if nightly CI fails
         if: ${{ failure() && (github.event_name == 'schedule') }}
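Both this workflow and release.yml below make the same swap: the direct python setup.py sdist call, which setuptools now discourages, is replaced by the PyPA build front end. A minimal sketch of running the new sdist step locally, assuming pip install build has already been done and the current directory is the DeepSpeed repository root:

# Sketch: reproduce the workflow's sdist step locally.
# DS_BUILD_STRING=" " mirrors the blank build string the workflow exports.
import os
import subprocess

env = dict(os.environ, DS_BUILD_STRING=" ")
subprocess.run(["python", "-m", "build", "--sdist"], check=True, env=env)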
3 changes: 2 additions & 1 deletion .github/workflows/release.yml
@@ -26,7 +26,8 @@ jobs:
       - name: Build DeepSpeed
         run: |
           pip install setuptools
-          DS_BUILD_STRING=" " python setup.py sdist
+          pip install build
+          DS_BUILD_STRING=" " python -m build --sdist
       - name: Publish to PyPI
         uses: pypa/gh-action-pypi-publish@release/v1
         with:
1 change: 1 addition & 0 deletions README.md
@@ -172,6 +172,7 @@ dynamically link them at runtime.
 | Intel | Intel(R) Gaudi(R) 2 AI accelerator | hpu | Yes | Yes |
 | Intel | Intel(R) Xeon(R) Processors | cpu | Yes | Yes |
 | Intel | Intel(R) Data Center GPU Max series | xpu | Yes | Yes |
+| Tecorigin | Scalable Data Analytics Accelerator | sdaa | Yes | No |

 ## PyPI
 We regularly push releases to [PyPI](https://pypi.org/project/deepspeed/) and encourage users to install from there in most cases.
19 changes: 18 additions & 1 deletion accelerator/real_accelerator.py
@@ -20,7 +20,7 @@
 except ImportError as e:
     dsa2 = None

-SUPPORTED_ACCELERATOR_LIST = ['cuda', 'cpu', 'xpu', 'xpu.external', 'npu', 'mps', 'hpu', 'mlu']
+SUPPORTED_ACCELERATOR_LIST = ['cuda', 'cpu', 'xpu', 'xpu.external', 'npu', 'mps', 'hpu', 'mlu', 'sdaa']

 ds_accelerator = None
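
Adding 'sdaa' to this list lets the new backend be requested by name. A simplified sketch of how such a request might be validated against the list (illustrative only, assuming the DS_ACCELERATOR environment-variable override that real_accelerator.py supports; not the verbatim DeepSpeed source):

# Illustrative sketch: reject an explicit accelerator request that is not in the
# supported list. real_accelerator.py performs its own, more complete checks.
import os

SUPPORTED_ACCELERATOR_LIST = ['cuda', 'cpu', 'xpu', 'xpu.external', 'npu', 'mps', 'hpu', 'mlu', 'sdaa']

accelerator_name = os.environ.get("DS_ACCELERATOR")
if accelerator_name is not None and accelerator_name not in SUPPORTED_ACCELERATOR_LIST:
    raise ValueError(f"DS_ACCELERATOR must be one of {SUPPORTED_ACCELERATOR_LIST}, got '{accelerator_name}'")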

@@ -80,6 +80,12 @@ def get_accelerator():
             except ImportError as e:
                 raise ValueError(f"NPU_Accelerator requires torch_npu, which is not installed on this system.")
             pass
+        elif accelerator_name == "sdaa":
+            try:
+                import torch_sdaa  # noqa: F401 # type: ignore
+            except ImportError as e:
+                raise ValueError(f"SDAA_Accelerator requires torch_sdaa, which is not installed on this system.")
+            pass
         elif accelerator_name == "mps":
             try:
                 import torch.mps
@@ -137,6 +143,13 @@ def get_accelerator():
                 accelerator_name = "npu"
             except ImportError as e:
                 pass
+        if accelerator_name is None:
+            try:
+                import torch_sdaa  # noqa: F401,F811 # type: ignore
+
+                accelerator_name = "sdaa"
+            except ImportError as e:
+                pass
         if accelerator_name is None:
             try:
                 import torch.mps
@@ -205,6 +218,10 @@ def get_accelerator():
         from .npu_accelerator import NPU_Accelerator

         ds_accelerator = NPU_Accelerator()
+    elif accelerator_name == "sdaa":
+        from .sdaa_accelerator import SDAA_Accelerator
+
+        ds_accelerator = SDAA_Accelerator()
     elif accelerator_name == "mps":
         from .mps_accelerator import MPS_Accelerator

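Taken together, the hunks above let SDAA flow through the existing selection path: an explicit request verifies that torch_sdaa imports, auto-detection probes for torch_sdaa when no override is given, and the matching SDAA_Accelerator is instantiated. A minimal usage sketch, assuming torch_sdaa and a Tecorigin device are present and that device_name() and is_available() from DeepSpeed's common accelerator interface behave as they do for the other backends:

# Sketch: select and inspect the SDAA backend. Set the override before the first
# get_accelerator() call, since the chosen accelerator is cached in ds_accelerator.
import os

os.environ["DS_ACCELERATOR"] = "sdaa"  # optional; auto-detection also probes torch_sdaa

from deepspeed.accelerator import get_accelerator

acc = get_accelerator()
print(acc.device_name())   # expected: 'sdaa'
print(acc.is_available())  # True when the SDAA runtime can see a device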