
Paged attention support for multi gpu #1059

Merged 1 commit into master from paged_attn_multi_gpu on Jan 16, 2025
Conversation

EricLBuehler (Owner)

  • The maximum paged-attention GPU allocation now equals the model's maximum sequence length (should this also account for multiple concurrent requests?)
  • Handle GPUs with differing memory availability, which lowers the model's maximum KV cache size accordingly
  • Measure memory more accurately on Metal devices
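The second bullet implies that, under tensor parallelism, every rank must hold the same number of KV-cache blocks, so the GPU with the least free memory bounds the cache for the whole model. A minimal sketch of that min-across-devices budget calculation — the function names, the block-size formula, and all constants below are illustrative assumptions, not the actual mistral.rs implementation:

```rust
// Hypothetical sketch: picking a uniform paged-attention KV-cache block
// budget across GPUs with different free memory. Not the mistral.rs API.

/// Bytes needed for one KV block on one rank:
/// 2 (K and V) * block_size * kv_heads * head_dim * dtype_size * layers.
fn block_bytes(
    block_size: usize,
    kv_heads: usize,
    head_dim: usize,
    dtype_size: usize,
    layers: usize,
) -> usize {
    2 * block_size * kv_heads * head_dim * dtype_size * layers
}

/// Blocks a single GPU can host, given the fraction of its free memory
/// reserved for the KV cache.
fn blocks_for_gpu(free_bytes: usize, mem_fraction: f64, per_block: usize) -> usize {
    ((free_bytes as f64 * mem_fraction) as usize) / per_block
}

/// With tensor parallelism every rank needs the same block count, so the
/// smallest GPU determines the uniform budget.
fn uniform_block_budget(free_per_gpu: &[usize], mem_fraction: f64, per_block: usize) -> usize {
    free_per_gpu
        .iter()
        .map(|&free| blocks_for_gpu(free, mem_fraction, per_block))
        .min()
        .unwrap_or(0)
}

fn main() {
    // Illustrative llama-like shard: block_size 16, 8 KV heads,
    // head_dim 128, f16 (2 bytes), 32 layers => 2 MiB per block.
    let per_block = block_bytes(16, 8, 128, 2, 32);
    // Two GPUs with 24 GiB and 10 GiB free: the 10 GiB card wins.
    let budget = uniform_block_budget(&[24 << 30, 10 << 30], 0.9, per_block);
    println!("per-block bytes: {per_block}, uniform budget: {budget} blocks");
}
```

Taking the minimum (rather than giving each GPU its own block count) keeps the block tables identical across ranks, at the cost of leaving memory idle on the larger GPUs.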

Code Metrics Report
===============================================================================
 Language            Files        Lines         Code     Comments       Blanks
===============================================================================
 C Header                2           35           28            0            7
 Dockerfile              1           41           22           10            9
 JSON                   12          105          104            0            1
 Python                 64         2729         2359           71          299
 Shell                   1           57           22           18           17
 Plain Text              3         3723            0         2413         1310
 TOML                   18          611          544            2           65
 YAML                    2           21           19            2            0
-------------------------------------------------------------------------------
 Jupyter Notebooks       4            0            0            0            0
 |- Markdown             2           77           32           31           14
 |- Python               2          205          178            1           26
 (Total)                            282          210           32           40
-------------------------------------------------------------------------------
 Markdown               44         3460            0         2625          835
 |- BASH                 6          103          100            0            3
 |- JSON                 1           12           12            0            0
 |- Python               7          121          109            0           12
 |- Rust                13          440          373            0           67
 |- TOML                 2           75           63            0           12
 (Total)                           4211          657         2625          929
-------------------------------------------------------------------------------
 Rust                  299        94304        84616         1921         7767
 |- Markdown           145         1617           25         1469          123
 (Total)                          95921        84641         3390         7890
===============================================================================
 Total                 450       105086        87714         7062        10310
===============================================================================
  

@EricLBuehler EricLBuehler merged commit 333717b into master Jan 16, 2025
12 checks passed
@EricLBuehler EricLBuehler deleted the paged_attn_multi_gpu branch January 16, 2025 11:40