Inference: enable flex dispatcher (DeepEP/HybridEP) for prefill#4321

Open
mathemakitten wants to merge 1 commit into NVIDIA:main from mathemakitten:helenn-deepep-prefill
Conversation

@mathemakitten
Contributor

What does this PR do?

Enables the MoE dispatcher backend to use the fused kernels from HybridEP for comms.

Previously, `_setup_inference_mode()` only accepted the alltoall dispatcher. The flex dispatcher with DeepEP/HybridEP was already functional in training but was blocked from inference by an assertion. The existing dispatcher swap mechanism (flex for prefill, `InferenceCUDAGraphTokenDispatcher` for decode) works without modification. The decode path is unaffected and still uses the A2A dispatcher.

At a prefill sequence length of 32k this yields a ~25% speedup. Enable with `--moe-token-dispatcher-type flex --moe-flex-dispatcher-backend hybridep --moe-hybridep-num-sms 8`. The number of SMs should be tuned for the target workload and hardware.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as drafts. If you open a non-draft PR, it will be automatically converted to a draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark the PR as ready once merge conflicts are resolved and CI is passing.
Final Review may be declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@mathemakitten mathemakitten requested review from a team as code owners April 15, 2026 15:38
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft April 15, 2026 15:38
@github-actions
Contributor

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@copy-pr-bot

copy-pr-bot bot commented Apr 15, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Apr 15, 2026
@mathemakitten mathemakitten marked this pull request as ready for review April 15, 2026 15:45
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 15, 2026 15:45