
refactor moe of positionwise_feed_forward to let us convert onnx success #2726

Open

computervisionlearner wants to merge 1 commit into wenet-e2e:main from computervisionlearner:main_fix_moe

Conversation

@computervisionlearner

The MoE implementation has been refactored. The original implementation relied on Python control flow, which caused the ONNX export to fail. The refactored MoE uses a mask instead, which lets PyTorch trace the dynamic expert selection properly.
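A minimal sketch of the mask-based idea described above (class and parameter names are illustrative, not the exact wenet code): every expert runs on the flattened input, and a mask derived from the router's top-k selection weights the contributions, so the traced graph contains no data-dependent Python branching.

```python
import torch
import torch.nn as nn


class MaskedMoE(nn.Module):
    """Illustrative mask-based MoE feed-forward (hypothetical names)."""

    def __init__(self, idim: int, hidden: int, n_expert: int = 4, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(idim, n_expert, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(idim, hidden), nn.ReLU(),
                          nn.Linear(hidden, idim))
            for _ in range(n_expert))
        self.k = k

    def forward(self, xs: torch.Tensor) -> torch.Tensor:
        b, t, d = xs.shape
        xs_flat = xs.reshape(-1, d)                        # (B*T, D)
        probs = torch.softmax(self.gate(xs_flat), dim=-1)  # (B*T, E)
        weights, idx = probs.topk(self.k, dim=-1)          # (B*T, K)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(xs_flat)
        # Every expert runs on every token; the mask zeroes out the
        # contributions of unselected experts. No data-dependent Python
        # control flow, so torch.onnx.export can trace the graph.
        for e, expert in enumerate(self.experts):
            w = (weights * (idx == e).to(weights.dtype)).sum(dim=-1,
                                                             keepdim=True)
            out = out + w * expert(xs_flat)
        return out.reshape(b, t, d)
```

The loop bound here is the static expert count, not the input data, so ONNX tracing simply unrolls it.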

@computervisionlearner computervisionlearner force-pushed the main_fix_moe branch 4 times, most recently from cccd69f to ce4798f Compare April 26, 2025 08:39
@xingchensong
Member

Is there a small unit test that can verify the two implementations produce the same output for the same input? Please paste the test code into this PR.
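A minimal equivalence check along the lines requested could look like the following sketch (`moe_old` / `moe_new` are placeholders for the control-flow and mask-based variants; their weights must be shared or copied beforehand):

```python
import torch


def check_equivalence(moe_old: torch.nn.Module,
                      moe_new: torch.nn.Module,
                      atol: float = 1e-6) -> bool:
    """Feed identical random input to both MoE variants and compare outputs."""
    torch.manual_seed(0)
    xs = torch.randn(2, 5, 16)  # (batch, time, dim); dims are arbitrary
    with torch.no_grad():
        return bool(torch.allclose(moe_old(xs), moe_new(xs), atol=atol))
```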

@robin1001 robin1001 requested a review from cdliang11 April 27, 2025 02:43
@SwingSoulF

I happen to be studying this as well. This approach makes every expert participate in inference, which slows down ONNX inference, so I would not recommend it:
[expert(xs_flat) for expert in self.experts]
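To make the concern concrete, a small (hypothetical) counting wrapper shows the difference: with the mask-based forward, every wrapped expert's counter increases on every pass, even for tokens that never route to it, whereas Python control-flow dispatch leaves unselected experts at zero.

```python
import torch


class CountingExpert(torch.nn.Module):
    """Hypothetical wrapper that counts how often an expert is invoked."""

    def __init__(self, expert: torch.nn.Module):
        super().__init__()
        self.expert = expert
        self.calls = 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.calls += 1
        return self.expert(x)
```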

@computervisionlearner
Author

computervisionlearner commented May 5, 2025 via email

@cdliang11
Collaborator

I happen to be studying this as well. This approach makes every expert participate in inference, which slows down ONNX inference, so I would not recommend it: [expert(xs_flat) for expert in self.experts]

Agreed. libtorch supports control flow, so this change would also slow down libtorch inference.

@Mddct
Collaborator

Mddct commented May 6, 2025

Thanks, I'll look into this part soon.

The LLM MoE side has a standard way of handling this; the rough idea is to use one-hot instead of the for loop.
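A sketch of that one-hot idea (function and parameter names are illustrative, not the wenet API): a one-hot selection tensor replaces data-dependent Python branching, and the only remaining loop runs over the static expert count, which tracing can unroll.

```python
import torch
import torch.nn.functional as F


def one_hot_dispatch(xs_flat: torch.Tensor,
                     router_logits: torch.Tensor,
                     experts: list,
                     k: int = 2) -> torch.Tensor:
    """Weight each expert's output by a one-hot-masked router weight.

    xs_flat:       (N, D) flattened tokens
    router_logits: (N, E) gate logits, E = number of experts
    """
    n_expert = router_logits.size(-1)
    probs = torch.softmax(router_logits, dim=-1)
    weights, idx = probs.topk(k, dim=-1)                   # (N, K)
    weights = weights / weights.sum(dim=-1, keepdim=True)
    sel = F.one_hot(idx, n_expert).to(xs_flat.dtype)       # (N, K, E)
    out = torch.zeros_like(xs_flat)
    # Loop bound is the static expert count, so ONNX tracing unrolls it;
    # unselected experts contribute with weight zero.
    for e in range(n_expert):
        w = (weights * sel[..., e]).sum(dim=-1, keepdim=True)
        out = out + w * experts[e](xs_flat)
    return out
```

Because the normalized weights of the selected experts sum to one per token, passing identity experts returns the input unchanged, which makes a convenient sanity check.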

@HyacinthJingjing

Thanks, I'll look into this part soon.

The LLM MoE side has a standard way of handling this; the rough idea is to use one-hot instead of the for loop.

@Mddct Could you share some reference articles on converting MoE models to ONNX? Looking forward to your reply.
