Merge OpenAI Triton commit 40dd0c4
#5466
Merged
Conversation
- Enable `cp.async.bulk.tensor.2d.tile::gather4.shared` on sm_120 and sm_121.
- Skip the TMA scatter4 test on sm_120, since scatter4 is unsupported by the hardware.

Note: all other TMA features except the cluster-related ones are supported on sm_120.
This PR exposes the internal layout utilities `chooseScaledMfmaScaleLayout` and `chooseScaledWmmaScaleLayout` to Gluon, to help generate a linear layout for the scale operand used in `mfma_scaled`/`wmma_scaled`. It also allows Gluon kernels to specify a scalar scale value or leave it as None.
Without resetting opt_flags, the following snippet fails with
`AssertionError: opt_flags already set; please reset to None first`:
```
import torch
from triton_kernels.matmul_ogs import matmul_ogs, PrecisionConfig
from triton_kernels.matmul_ogs_details.opt_flags import (
make_opt_flags,
set_opt_flags,
)
from triton_kernels.routing import RoutingData
m = 64
n = 128
k = 32
BATCH_SIZE = 1000
dtype = torch.float16
x = torch.randn((BATCH_SIZE, m, k), device="cuda", dtype=dtype)
w = torch.randn((BATCH_SIZE, k, n), device="cuda", dtype=dtype)
bias = None
opt_flags = make_opt_flags(
dtype,
dtype,
dtype,
PrecisionConfig(),
m,
n,
k,
RoutingData(None, None, BATCH_SIZE, 1),
True,
False,
False,
)
set_opt_flags(opt_flags)
tri_y = matmul_ogs(x, w, bias)
opt_flags.num_warps = 2
set_opt_flags(opt_flags)
tri_y = matmul_ogs(x, w, bias)
```
After adding a `reset_opt_flags()` call before the second invocation of
`set_opt_flags`, everything works as expected.
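A minimal sketch of the fix, assuming `reset_opt_flags` is importable from the same `opt_flags` module as the other helpers (the import path is an assumption; the function name itself comes from the report above):

```
from triton_kernels.matmul_ogs_details.opt_flags import reset_opt_flags

set_opt_flags(opt_flags)
tri_y = matmul_ogs(x, w, bias)
opt_flags.num_warps = 2
reset_opt_flags()  # clear the previously set flags before setting new ones
set_opt_flags(opt_flags)
tri_y = matmul_ogs(x, w, bias)
```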
Functions and their individual arguments are passed as a flat array: all arguments are simply appended together in MLIR, and the `WarpSpecializeOp::canonicalize` method then cleans up duplicate arguments.
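To make the cleanup concrete, here is a plain-Python illustration of the deduplication that canonicalization performs (names here are hypothetical; the real logic is the C++ `WarpSpecializeOp::canonicalize` pattern): duplicate captures are folded into a single operand while uses are remapped.

```
def dedup_operands(operands):
    seen = {}      # value -> index in the deduplicated list
    mapping = []   # old operand index -> new operand index
    unique = []
    for v in operands:
        if v not in seen:
            seen[v] = len(unique)
            unique.append(v)
        mapping.append(seen[v])
    return unique, mapping

unique, mapping = dedup_operands(["x", "y", "x", "z", "y"])
assert unique == ["x", "y", "z"]
assert mapping == [0, 1, 0, 2, 1]
```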
This is in preparation for adding more examples and makes the name consistent with the other directory names.
…8531) `warp_specialize` ops currently get an unknown location in the TTGIR due to a quirk in the code emission in `_semantic.py`: for `warp_specialize` we need to save and then restore the insert point. The location is inferred from the insert point; however, if the insert point happens to land somewhere without an assigned location (the end of a block), we set an unknown loc. This change is a minimal fix that adds a helper which takes the location from the block's parent in that case. Alternatively, we could save the location along with the insert point and restore it accordingly; however, this approach is simpler and should cover most cases I could think of. This change matters for the consan work I am doing, since creating instrumentation function calls with an unknown location inferred from the warp_specialize op breaks the LLVM backend.
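A hedged sketch of the fallback (all names below are hypothetical; the real helper lives in Triton's code emission): prefer the location at the insert point, and fall back to the enclosing op's location when the end of the block carries none.

```
class Block:
    def __init__(self, parent_loc, insert_loc=None):
        self.parent_loc = parent_loc  # location of the block's parent op
        self.insert_loc = insert_loc  # location at the insert point, if any

def loc_for_insert_point(block):
    # The end of a block has no location assigned; use the parent op's loc.
    if block.insert_loc is None:
        return block.parent_loc
    return block.insert_loc

assert loc_for_insert_point(Block("ws.loc")) == "ws.loc"
assert loc_for_insert_point(Block("ws.loc", "stmt.loc")) == "stmt.loc"
```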
…c (#8529) During SWP, we check twice whether a given `LoadOp` should be lowered to `AsyncCopyGlobalToLocalOp`: first in `AssignLatency` and then in `LowerLoops`. The two checks duplicate non-trivial conditions such as `copyVecBytes >= 4` or `op.getResultTypes()[0].getIntOrFloatBitWidth() >= 32`. I moved the `isPipeliningBeneficial` function from `AssignLatency` into the utilities so that it can also be used by `LowerLoops`. This will also be used by WS to determine whether a `LoadOp` should be lowered to cpasync and assigned to the load partition.
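A hypothetical Python rendering of the shared predicate; the two conditions are quoted from the commit, but how they combine with the rest of the real (C++) `isPipeliningBeneficial` is an assumption for illustration.

```
def is_pipelining_beneficial(copy_vec_bytes, result_bit_width):
    # cp.async only pays off for sufficiently wide contiguous copies
    # (assumed combination of the two conditions named above).
    return copy_vec_bytes >= 4 and result_bit_width >= 32
```

With both `AssignLatency` and `LowerLoops` calling one helper, the conditions can no longer silently drift apart.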
Expose `buffer_load` and `buffer_store`, inherited from CDNA3, to gfx1250.
…528) Each aggregate class tracks its callable members, and when the aggregate is referenced by name, the cache keys of all its members are computed. This does require `def __init__` to be marked as `@constexpr_function`.
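A plain-Python illustration of the caching rule this describes (the real mechanism lives in Triton's JIT cache; `aggregate_cache_key` is a hypothetical stand-in): referencing an aggregate by name folds the cache keys of all its callable members into one key, so editing any member invalidates dependent kernels.

```
import hashlib
import inspect

def aggregate_cache_key(cls):
    # Hash the source of every callable member, in a stable order.
    parts = []
    for name, member in sorted(vars(cls).items()):
        if callable(member):
            parts.append(name + inspect.getsource(member))
    return hashlib.sha256("".join(parts).encode()).hexdigest()

class Counter:
    def __init__(self, start):  # in Triton this must be @constexpr_function
        self.value = start

    def bump(self):
        self.value += 1

print(aggregate_cache_key(Counter))  # changes whenever any member changes
```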
Signed-off-by: Whitney Tsang <[email protected]>
chengjunlu approved these changes on Nov 13, 2025.
This PR changes the Triton base from 4734af3 to 40dd0c4 (Oct 24).
Pass rate: 94.59% → 94.91%