This repository has been archived by the owner on Nov 1, 2024. It is now read-only.

[wip] Adding flash attention for sequence parallel #511

Closed
wants to merge 25 commits

Conversation

stephenroller
Contributor

Patch Description
Since we're doing manual activation checkpointing, we need a custom backward pass for MHA. This patch leverages the flash attention implementation in xformers.
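
As a rough illustration of the approach (not the code from this PR), the attention block can be wrapped in a custom `torch.autograd.Function` that saves only its inputs and recomputes the attention during `backward()`, calling `xformers.ops.memory_efficient_attention` for the flash kernel. The tensor layout and the `flash_mha` wrapper name below are assumptions made for the sketch:

```python
# Hypothetical sketch only -- not the code from this PR.
import torch
import xformers.ops as xops


class FlashMHAFunction(torch.autograd.Function):
    """Flash attention with a custom backward that recomputes the forward,
    so no attention intermediates need to be kept alive between the manual
    activation-checkpointing boundaries."""

    @staticmethod
    def forward(ctx, q, k, v):
        # Save only the inputs; everything else is recomputed in backward.
        ctx.save_for_backward(q, k, v)
        with torch.no_grad():
            # xformers dispatches to its flash/memory-efficient kernels here.
            out = xops.memory_efficient_attention(q, k, v)
        return out

    @staticmethod
    def backward(ctx, grad_out):
        q, k, v = ctx.saved_tensors
        with torch.enable_grad():
            q_, k_, v_ = (t.detach().requires_grad_(True) for t in (q, k, v))
            out = xops.memory_efficient_attention(q_, k_, v_)
        # Recompute gradients w.r.t. the saved inputs.
        return torch.autograd.grad(out, (q_, k_, v_), grad_out)


def flash_mha(q, k, v):
    """q, k, v: (batch, seq_len, num_heads, head_dim) tensors."""
    return FlashMHAFunction.apply(q, k, v)
```

In the actual patch this path would be gated behind the existing flag so the vanilla implementation remains available, per the TODO list below.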

TODO:

  • Gate behind the appropriate (existing) flag, and allow vanilla implementation to still exist
  • Add a test

Testing steps
At very large scale, this was a ~0.5-1% speedup. Probably not worth it at the largest scales given the risk of numeric changes, but maybe still worth it for medium scales.

@dianaml0 dianaml0 force-pushed the flash_attention_seqpar branch 3 times, most recently from 711ab36 to 5fa1a21 on December 5, 2022 18:49
@dianaml0 dianaml0 force-pushed the flash_attention_seqpar branch from d9b126e to 8e5969f on December 16, 2022 18:40
@facebook-github-bot

Hi @stephenroller!

Thank you for your pull request.

We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but the CLA is no longer valid, and will need to be resubmitted.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
