Conversation

SmirkingKitsune
In rare cases, an overly long prompt in SDXL causes a tensor shape mismatch, as mentioned in #2540. This PR adds padding logic in `get_learned_conditioning` to ensure equal token lengths before concatenating the local and global conditionings in SDXL, preventing shape mismatches during inference: when the local and global conditioning tensors have different lengths, the shorter one is padded with zeros before concatenation.
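A minimal sketch of the padding step, assuming PyTorch tensors shaped (batch, tokens, dim). The helper name `pad_to_match` and the example shapes are illustrative, not the PR's actual code:

```python
import torch
import torch.nn.functional as F

def pad_to_match(local: torch.Tensor, global_: torch.Tensor):
    """Zero-pad the shorter tensor along the token axis (dim=1) so the
    two conditionings can be concatenated along the feature axis."""
    diff = local.shape[1] - global_.shape[1]
    if diff > 0:
        # global_ is shorter: pad its token axis on the right with zeros
        global_ = F.pad(global_, (0, 0, 0, diff))
    elif diff < 0:
        # local is shorter: pad its token axis instead
        local = F.pad(local, (0, 0, 0, -diff))
    return local, global_

# Illustrative shapes: a long prompt chunked to 154 tokens by one text
# encoder but only 77 by the other would previously crash torch.cat.
local = torch.randn(1, 77, 768)      # e.g. CLIP-L hidden states
global_ = torch.randn(1, 154, 1280)  # e.g. OpenCLIP-G hidden states
local, global_ = pad_to_match(local, global_)
cond = torch.cat([local, global_], dim=-1)  # shape (1, 154, 2048)
```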

Pad SDXL conditioning tensors before concatenation when shape differs.
Panchovix added a commit to Panchovix/stable-diffusion-webui-reForge that referenced this pull request Aug 8, 2025