[ALST/Ulysses] Added ALST/Ulysses documentation #4420
base: main
Conversation
- Simplify ALST/Ulysses section to match style of other docs sections
- Condense from ~495 lines to ~164 lines while keeping essential info
- Add practical example using trl/scripts/sft.py with 4 GPUs
- Update requirements to recommend Flash Attention 2 over SDPA
- Add concise 2D parallelism explanation with code snippet
- Simplify best practices to 5 key points
- Streamline troubleshooting section
- Fix alst_ulysses_4gpu.yaml config (remove 'auto' values)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
docs/source/distributing_training.md (outdated)

> ### 2D Parallelism
>
> The 4 GPU configuration above automatically enables 2D parallelism by combining Data Parallelism (DP) with Context Parallelism (CP). The `dp_shard_size` is automatically calculated to distribute across available GPUs:
We should add experiments here, as in the FSDP example.
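For context on what the quoted paragraph describes: on 4 GPUs the 2D layout is a 2x2 device mesh, 2-way sharded data parallelism by 2-way context parallelism (Ulysses). Below is a minimal sketch of that layout using PyTorch's `DeviceMesh`, purely to visualize the grouping; accelerate builds the mesh itself from the config, and the dimension names here are assumptions, not TRL/accelerate internals.

```python
# Illustrative only: the 2x2 mesh implied by the 4-GPU example
# (2-way sharded DP x 2-way context parallelism).
from torch.distributed.device_mesh import init_device_mesh

# Run under a 4-process launcher, e.g. `torchrun --nproc-per-node 4 mesh_demo.py`.
mesh = init_device_mesh("cuda", (2, 2), mesh_dim_names=("dp_shard", "cp"))
print(mesh)  # shows which ranks belong to the DP-shard and CP groups
```

With this ordering, ranks {0, 1} and {2, 3} form the two CP groups, while ranks {0, 2} and {1, 3} form the DP-shard groups.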
docs/source/distributing_training.md (outdated)

> cp_size = 2
> dp_shard_size = num_gpus // cp_size  # Automatically calculated
>
> parallelism_config = ParallelismConfig(
Do it via .yaml as in the other example to follow the same style?
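For reference, a minimal sketch of how the truncated snippet above could be completed programmatically. The variable and keyword names follow the diff, but the exact import path and accepted arguments of `ParallelismConfig` depend on the installed accelerate version, so treat this as an assumption rather than the documented API.

```python
# Sketch of the 2D (DP x CP) setup shown in the diff. Assumes accelerate's
# ParallelismConfig accepts dp_shard_size and cp_size keyword arguments.
import torch
from accelerate import ParallelismConfig  # import path assumed

num_gpus = torch.cuda.device_count()   # 4 in the docs example
cp_size = 2                            # Ulysses/context-parallel degree
dp_shard_size = num_gpus // cp_size    # automatically calculated: 4 // 2 = 2

parallelism_config = ParallelismConfig(
    dp_shard_size=dp_shard_size,  # FSDP-style sharded data parallelism
    cp_size=cp_size,              # sequence/context parallelism (Ulysses)
)
```

The reviewer's suggestion above is to express the same layout through the accelerate .yaml config instead, matching the style of the other examples in the doc.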
> Here's how to run ALST/Ulysses training using the built-in [`sft.py`](https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py) script with 4 GPUs:
>
> ```bash
> accelerate launch --config_file examples/accelerate_configs/alst_ulysses_4gpu.yaml \
> ```
nice!
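For readers unfamiliar with the script being launched: `trl/scripts/sft.py` is essentially a thin CLI wrapper around `SFTTrainer`. A minimal stand-in sketch is below; the model and dataset names are placeholders, and ALST/Ulysses itself is configured by the accelerate YAML passed to `accelerate launch`, not by this code.

```python
# Minimal stand-in for the kind of training script the launch command drives.
# Not trl/scripts/sft.py itself; the model and dataset below are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",                # placeholder model checkpoint
    args=SFTConfig(output_dir="sft-output"),  # standard SFT training arguments
    train_dataset=dataset,
)
trainer.train()  # DP/CP parallelism is handled by accelerate via the YAML config
```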
cc @stas00
NouamaneTazi left a comment
nice 🙌
Co-authored-by: Sergio Paniego Blanco <[email protected]>
sergiopaniego left a comment
LGTM!
What does this PR do?
Added an ALST/Ulysses section to the distributed training doc, with a YAML config.
Needs:
Fixes # (issue)
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.