reverting attn exp calculations to before 3n #511

Open
ncylich wants to merge 1 commit into main from revert-attn

Conversation


@ncylich ncylich commented Mar 9, 2026

You can see there are no remaining diffs between this and the pre-3n commit's kernel_attention.cpp:
d5cd5c0...d0ad24a

Signed-off-by: Noah Cylich <noahcylich@gmail.com>
Copilot AI review requested due to automatic review settings on March 9, 2026 04:43

Copilot AI left a comment


Pull request overview

This PR reverts the vectorized exp/exp2 approximation used inside the FP16 attention softmax path back to the pre-“3n” polynomial form, aligning kernel_attention.cpp with the earlier implementation referenced in the PR description.

Changes:

  • Replaced the higher-order fused-multiply-add polynomial for the fractional exponent with a 2nd-order Taylor approximation using ln(2) and ln(2)^2/2 (see the sketch after this list).
  • Applied the same change in both the Apple Accelerate-based attention path and the general NEON vectorized attention path.
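
To make the reverted form concrete, here is a minimal scalar sketch of the approximation the review describes, assuming the usual range reduction exp(x) = 2^n · 2^f with integer n and fractional f. The function name, constant layout, and use of std::ldexp are illustrative assumptions; the repository's actual code is vectorized FP16/NEON and may differ in structure.

```cpp
// Sketch only: scalar second-order Taylor exp() in the style the review
// describes. Not the repository's code; names and structure are assumed.
#include <cmath>
#include <cstdio>

static inline float exp_taylor2(float x) {
    // exp(x) = 2^(x / ln2) = 2^n * 2^f, with integer n and f in [0, 1).
    const float inv_ln2  = 1.4426950408889634f;  // 1 / ln(2)
    const float ln2      = 0.6931471805599453f;  // ln(2)
    const float ln2_sq_2 = 0.2402265069591007f;  // ln(2)^2 / 2

    float y = x * inv_ln2;
    float n = std::floor(y);
    float f = y - n;  // fractional exponent in [0, 1)

    // Second-order Taylor expansion of 2^f around f = 0:
    //   2^f ~= 1 + ln(2)*f + (ln(2)^2 / 2)*f^2
    float poly = 1.0f + ln2 * f + ln2_sq_2 * f * f;

    // Scale by 2^n; vectorized code would typically add n to the
    // IEEE-754 exponent bits instead of calling ldexp.
    return std::ldexp(poly, static_cast<int>(n));
}

int main() {
    for (float x : {-4.0f, -1.0f, 0.0f, 0.5f, 2.0f}) {
        std::printf("x=%5.2f  approx=%10.6f  exact=%10.6f\n",
                    x, exp_taylor2(x), std::exp(x));
    }
    return 0;
}
```

A second-order Taylor expansion of 2^f is cheaper but less accurate over [0, 1) than the higher-order fused-multiply-add polynomial it replaces, which is presumably the trade-off this revert accepts.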
