Remove conv1d weight cast in Qwen3-Next forward #2084
                
     Merged
            
            
          
On G2, the Conv1D in Qwen3-Next must keep its compute precision in float, so the Conv1D weight has to be cast to fp32.
This PR removes the unnecessary bf16->fp32 cast that previously ran on every forward call in the linear attention path. Instead, the cast is performed only once, the first time it is needed (normally during the profile run), and the original bf16 conv1d weight is then removed since it is no longer used.
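Below is a minimal sketch of the one-time lazy-cast pattern described above; it is not the actual PR diff. The module and attribute names (`LinearAttentionBlock`, `conv1d`, `conv1d_weight_fp32`) are illustrative assumptions, not names from the model code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttentionBlock(nn.Module):
    """Illustrative module whose conv1d weight is loaded in bf16."""

    def __init__(self, channels: int, kernel_size: int = 4):
        super().__init__()
        self.conv1d = nn.Conv1d(
            channels, channels, kernel_size,
            groups=channels, bias=False, dtype=torch.bfloat16,
        )
        # Filled in lazily on the first forward call.
        self.conv1d_weight_fp32: torch.Tensor | None = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.conv1d_weight_fp32 is None:
            # Cast once (normally hit during the profile run) ...
            self.conv1d_weight_fp32 = self.conv1d.weight.data.to(torch.float32)
            # ... and drop the bf16 copy so it no longer takes up memory.
            self.conv1d.weight.data = torch.empty(0)
        # Run the convolution in fp32 using the cached weight,
        # then return the result in the input dtype.
        out = F.conv1d(
            x.to(torch.float32),
            self.conv1d_weight_fp32,
            groups=self.conv1d.groups,
        )
        return out.to(x.dtype)
```

With this pattern only one copy of the weight stays resident after the first call, and the per-step forward no longer pays for a dtype conversion.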