
Conversation

computervisionlearner

The MoE implementation has been refactored. The original implementation contained Python flow control, which caused ONNX export to fail. The refactored MoE implementation uses a mask, so PyTorch can trace the expert selection dynamically.
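To illustrate the idea (this is a hedged, standalone sketch, not the actual PR diff), a mask-based MoE forward can run every expert on every token and combine the results with a dense routing-weight matrix built via `scatter_` from the top-k gate outputs, so no data-dependent Python branching needs to be traced. The class and parameter names here (`MaskMoELayer`, `d_ff`, `n_activated`) are placeholders:

```python
import torch
import torch.nn as nn


class MaskMoELayer(nn.Module):
    """Illustrative mask-based MoE forward with no data-dependent Python control flow."""

    def __init__(self, d_model: int, d_ff: int, n_expert: int, n_activated: int):
        super().__init__()
        self.gate = nn.Linear(d_model, n_expert, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_expert))
        self.n_activated = n_activated

    def forward(self, xs: torch.Tensor) -> torch.Tensor:
        B, T, D = xs.shape
        xs_flat = xs.reshape(-1, D)                                # (B*T, D)
        router = self.gate(xs_flat)                                # (B*T, n_expert)
        logits, selected = torch.topk(router, self.n_activated, dim=-1)
        weights = torch.softmax(logits, dim=-1)                    # (B*T, k)
        # Dense per-expert weights: zero for experts outside the top-k.
        dense_w = torch.zeros_like(router).scatter_(1, selected, weights)
        # Every expert processes every token; the mask decides the combination.
        expert_out = torch.stack([e(xs_flat) for e in self.experts], dim=1)  # (B*T, E, D)
        out = (dense_w.unsqueeze(-1) * expert_out).sum(dim=1)
        return out.reshape(B, T, D)
```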

@computervisionlearner computervisionlearner force-pushed the main_fix_moe branch 4 times, most recently from cccd69f to ce4798f Compare April 26, 2025 08:39
@xingchensong
Member

Is there a small unit test that can verify the two implementations produce the same output for the same input? The test code could be posted in this PR.
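As a hedged illustration of the kind of check being asked for (not code from the PR itself), a tiny standalone test could share one gate and one expert list between a loop-based forward and a mask-based forward and compare the outputs; all function names below are hypothetical:

```python
import torch
import torch.nn as nn


def moe_forward_loop(xs_flat, gate, experts, k):
    """Reference forward with Python flow control: gather only routed tokens per expert."""
    router = gate(xs_flat)
    logits, selected = torch.topk(router, k, dim=-1)
    weights = torch.softmax(logits, dim=-1)
    out = torch.zeros_like(xs_flat)
    for i, expert in enumerate(experts):
        token_ids, kth = torch.where(selected == i)
        out[token_ids] += weights[token_ids, kth, None] * expert(xs_flat[token_ids])
    return out


def moe_forward_mask(xs_flat, gate, experts, k):
    """Mask-based forward: all experts run, combination via dense routing weights."""
    router = gate(xs_flat)
    logits, selected = torch.topk(router, k, dim=-1)
    weights = torch.softmax(logits, dim=-1)
    dense_w = torch.zeros_like(router).scatter_(1, selected, weights)
    expert_out = torch.stack([e(xs_flat) for e in experts], dim=1)
    return (dense_w.unsqueeze(-1) * expert_out).sum(dim=1)


if __name__ == "__main__":
    torch.manual_seed(0)
    d_model, n_expert, k = 16, 4, 2
    gate = nn.Linear(d_model, n_expert, bias=False)
    experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_expert))
    xs = torch.randn(8, d_model)
    out_loop = moe_forward_loop(xs, gate, experts, k)
    out_mask = moe_forward_mask(xs, gate, experts, k)
    assert torch.allclose(out_loop, out_mask, atol=1e-6)
    print("loop-based and mask-based forwards match")
```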

@robin1001 robin1001 requested a review from cdliang11 April 27, 2025 02:43
@SwingSoulF

I happen to have been looking into this recently as well. This approach makes every expert participate in inference, which slows down ONNX inference, so I would not recommend it.
[expert(xs_flat) for expert in self.experts]

@computervisionlearner
Author

computervisionlearner commented May 5, 2025 via email

@cdliang11
Collaborator

I happen to have been looking into this recently as well. This approach makes every expert participate in inference, which slows down ONNX inference, so I would not recommend it. [expert(xs_flat) for expert in self.experts]

Agreed. libtorch supports flow control, so this change would also slow down libtorch inference.

@Mddct
Collaborator

Mddct commented May 6, 2025

Thanks, I'll look at this part soon.

There is a standard way of handling this on the LLM MoE side; the rough idea is to use one-hot instead of the for loop.
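One possible reading of that suggestion (sketched loosely after the Mixtral-style routing used in LLM MoE code, and not necessarily what is meant here) is to build a one-hot expert mask from the top-k indices so that each expert only processes the tokens routed to it, and the only remaining loop is over the fixed-size expert list:

```python
import torch
import torch.nn.functional as F


def moe_one_hot_dispatch(xs_flat, gate, experts, k):
    """Sketch: one-hot routing mask so each expert only processes its routed tokens."""
    n_expert = len(experts)
    router = gate(xs_flat)                               # (T, E)
    logits, selected = torch.topk(router, k, dim=-1)     # (T, k)
    weights = torch.softmax(logits, dim=-1)              # (T, k)
    # expert_mask[e, j, t] == 1 iff expert e is token t's j-th choice
    expert_mask = F.one_hot(selected, num_classes=n_expert).permute(2, 1, 0)
    out = torch.zeros_like(xs_flat)
    for e_idx, expert in enumerate(experts):
        kth, token_ids = torch.where(expert_mask[e_idx])  # tokens routed to this expert
        out.index_add_(0, token_ids,
                       expert(xs_flat[token_ids]) * weights[token_ids, kth, None])
    return out
```

Whether this exports cleanly to ONNX depends on dynamic-shape support, since the number of tokens per expert varies with the input.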
