Fix multitask_model.pt loading (ModuleNotFoundError on import) #98
Merged
RobbinBouwmeester merged 1 commit into 4.0_MT on Apr 24, 2026
Conversation
… to _architecture and register legacy module shim

The bundled multitask_model.pt was serialised when MultitaskDeepLCModel and BatchedHeads lived in a top-level module called `multitask_model`. That module no longer exists, so torch.load raised ModuleNotFoundError for any user who tried to load the default model.

Fix:
- Add BatchedHeads and MultitaskDeepLCModel to deeplc/_architecture.py, where the rest of the model architecture already lives.
- Add _patch_legacy_multitask_module() to deeplc/_model_ops.py, which registers a sys.modules shim mapping the old import path to the new classes before torch.load is called. The shim is a no-op if the module is already registered.
- Add test_load_multitask_model_without_prior_shim to tests/test_model_ops.py to prevent regression.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Problem
`multitask_model.pt` was serialised when `MultitaskDeepLCModel` and `BatchedHeads` lived in a top-level module called `multitask_model`. That module no longer exists in the package, so any call that loads the default model (including plain `deeplc.predict(...)`) raised `ModuleNotFoundError`, or, with a partial shim in place, a related loading error.
Fix
- `deeplc/_architecture.py` — Add the two missing classes:
  - `BatchedHeads`: parallel output heads (batched linear projection + per-head dot product). Includes a proper `__init__` so future checkpoints can be constructed from scratch.
  - `MultitaskDeepLCModel`: multi-task backbone (same 4 input branches as `DeepLCModel`, plus a shared trunk feeding into `BatchedHeads`). Checkpoint-load-only for now; `forward` is fully implemented.
- `deeplc/_model_ops.py` — Add `_patch_legacy_multitask_module()`, called inside `load_model()` before `torch.load` when the argument is a file path. It registers a `sys.modules` shim that maps the old `multitask_model.*` import paths to the new classes in `deeplc._architecture`. The shim is a no-op if the key is already present.
- `tests/test_model_ops.py` — Add `test_load_multitask_model_without_prior_shim` to catch regressions: it explicitly removes any pre-existing shim, loads the bundled checkpoint, and asserts the output has the expected multi-head shape.

Verification
All 45 tests pass. End-to-end, `deeplc.predict(peptides, model=MODEL_MULTITASK)` works without any user-side workaround, returning shape `(n_peptides, 1020)`.

Performance on PXD002549 (real LC-MS/MS data, 500-peptide held-out set):
Evaluated modes (the metric values from the original table did not survive extraction):

- `predict` — multitask (best head)
- `predict_and_calibrate` — multitask
- `finetune_and_predict` — multitask

🤖 Generated with Claude Code