
Fallback from GPU gen to shared for non-tensor #1476

Merged · 4 commits merged into main · Feb 20, 2024

Conversation

jeremylt
Member

Minor PR to automatically fall back to the shared backends when the GPU gen backends encounter an operator with non-tensor bases

@jeremylt
Member Author

When this is approved, I'll make a matching PR to do the same to SYCL

@nbeams
Contributor

nbeams commented Feb 20, 2024

*-shared doesn't have any non-tensor kernels; only *-ref does (and magma). So if you create a shared fallback, it will actually be calling ref for everything through delegation. I think the fallback (and the message) might as well be to ref, to avoid confusion.

@jeremylt
Member Author

Good point, I'll update that, since it's really the same core operator handling between the two

@jeremylt jeremylt force-pushed the jeremy/gpu-fallback branch from a704be0 to 4535e69 Compare February 20, 2024 22:51
@jeremylt jeremylt merged commit 814bef8 into main Feb 20, 2024
23 of 24 checks passed
@jeremylt jeremylt deleted the jeremy/gpu-fallback branch February 20, 2024 23:21