
ProtT3 Stage2 v13

This export packages the deployable checkpoint and minimal inference code for the ProtT3 stage2 model variant referred to in this repo as v13.

What v13 means in this repo

The evaluation tag v13 used throughout this repository resolves to the checkpoint:

  • all_checkpoints/stage2_sft_protswiss_v1/epoch=00-v3_bf16.ckpt

That checkpoint is included here as epoch=00-v3_bf16.ckpt.

Base models required at load time

This checkpoint is not a fully merged standalone language model. It should be loaded on top of these base Hugging Face models:

  • Protein encoder: facebook/esm2_t30_150M_UR50D
  • Q-former init: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract
  • Text decoder: facebook/galactica-1.3b

The stage2 checkpoint also contains LoRA-style decoder deltas. The exported modeling_prott3.py loader merges those deltas into the base Galactica weights at load time.
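As a rough sketch of what merging a LoRA-style delta means (the actual loader in modeling_prott3.py may differ in names and scaling; everything here is illustrative, not the exported implementation):

```python
import torch

def merge_lora_delta(base_weight, lora_A, lora_B, scaling=1.0):
    # LoRA stores a low-rank update: delta = scaling * (B @ A).
    # Merging adds that delta into the frozen base weight once, so
    # later inference runs at the cost of the plain base layer.
    return base_weight + scaling * (lora_B @ lora_A)

# Toy shapes: a 6x4 linear layer with rank-2 LoRA factors.
W = torch.zeros(6, 4)
A = torch.ones(2, 4)   # down-projection: (rank, in_features)
B = torch.ones(6, 2)   # up-projection:   (out_features, rank)
merged = merge_lora_delta(W, A, B, scaling=0.5)
print(merged.shape)  # torch.Size([6, 4])
```

Because the merge happens once at load time, no adapter weights or extra matmuls are needed during generation.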

Repo contents

  • epoch=00-v3_bf16.ckpt: stage2 checkpoint used for the repo's v13 runs
  • prott3_config.json: model and generation defaults
  • modeling_prott3.py: minimal loader and generation implementation
  • inference.py: small CLI entrypoint
  • requirements.txt: inference-time dependencies

Quickstart

pip install -r requirements.txt

python inference.py \
  --protein "MNNRWLF..." \
  --prompt "Describe the catalytic activity of this protein." \
  --dtype float32

Python usage

import torch

from modeling_prott3 import ProtT3ForConditionalGeneration

model = ProtT3ForConditionalGeneration.from_export_dir(".", dtype=torch.float32)
text = model.generate(
    proteins="MNNRWLF...",
    prompts="Describe the catalytic activity of this protein.",
)[0]
print(text)

Notes

  • The original training run metadata is in all_checkpoints/stage2_sft_protswiss_v1/lightning_logs/version_3/hparams.yaml.
  • The stage2_sft_protswiss_v1_eval_00-v13* directories in the source repo are evaluation outputs, not separate checkpoints.
  • Default inference dtype is float32 for robustness across environments. You can override it with --dtype bfloat16 if your hardware and software stack handle bfloat16 reliably.
  • For best parity with the repo's evaluations, use prompts consistent with the task instructions in the SwissProt split CSVs.
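As a concrete sketch of the dtype note above, this is the kind of string-to-dtype mapping the quickstart's --dtype flag implies (the helper and its error handling are illustrative assumptions, not the actual inference.py code; the two dtype names are the ones this export documents):

```python
import torch

# Dtypes this export documents: float32 by default, bfloat16 as an opt-in.
DTYPES = {"float32": torch.float32, "bfloat16": torch.bfloat16}

def resolve_dtype(name: str) -> torch.dtype:
    # Map a CLI string like "bfloat16" to the corresponding torch dtype,
    # rejecting anything outside the documented choices.
    try:
        return DTYPES[name]
    except KeyError:
        raise ValueError(f"unsupported dtype {name!r}; choose from {sorted(DTYPES)}")

print(resolve_dtype("bfloat16"))  # torch.bfloat16
```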