MIMIC-IV Tabular-Infused PEFT
Collection: Predictive models of hospitalization outcomes from the MIMIC-IV dataset (24 models). All models were finetuned using our proposed tabular-infused PEFT methods.
This model predicts whether a patient will be discharged within 12 hours, based on the patient's prior hospital records. It is trained on clinical notes from prior hospitalizations in MIMIC-IV. The model was trained with a novel tabular-infused LoRA, in which pre-operative tabular features (e.g., patient demographics and insurance information) are used to initialize the newly introduced LoRA parameters.
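The tabular-infused initialization can be sketched roughly as follows. This is an illustration only, not the authors' implementation: the `TabularLoRALinear` class and the projection used to seed the A factor are hypothetical. The idea is that the low-rank A matrix starts from a projection of the patient's tabular feature vector, while B starts at zero so the pretrained layer is unchanged before finetuning.

```python
import torch
import torch.nn as nn

class TabularLoRALinear(nn.Module):
    """Illustrative sketch: a frozen linear layer plus a low-rank (LoRA)
    update whose A factor is seeded from pre-operative tabular features."""

    def __init__(self, base: nn.Linear, tabular: torch.Tensor, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights

        in_f, out_f = base.in_features, base.out_features
        # Hypothetical projection from the tabular vector to the (rank x in_f) A matrix.
        tab_proj = nn.Linear(tabular.numel(), rank * in_f, bias=False)
        with torch.no_grad():
            a_init = tab_proj(tabular.flatten()).reshape(rank, in_f)
        self.lora_A = nn.Parameter(a_init)
        # B starts at zero, so the layer initially matches the base model exactly.
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = 1.0 / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T
```

Because B is zero-initialized, the wrapped layer reproduces the base layer's output before any finetuning; only the small LoRA factors receive gradients.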
The model outputs a binary label (True/False). To load the tokenizer and model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cja5553/biogpt_MIMIC_IV_discharge_within_12_hours_prediction_lora_ti")
model = AutoModelForSequenceClassification.from_pretrained("cja5553/biogpt_MIMIC_IV_discharge_within_12_hours_prediction_lora_ti")
```
You can then use the function below to obtain a prediction for a single test point:
```python
import torch

def get_outcome(tokenizer, model, text, device="cuda:0", max_length=512):
    """Return the predicted probability of each label as {"False": p0, "True": p1}."""
    device = torch.device(device)
    model = model.to(device)
    model.eval()
    inputs = tokenizer(
        text,
        return_tensors="pt",
        max_length=max_length,
        truncation=True,
        padding="max_length",
    ).to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    probs = torch.softmax(outputs.logits, dim=-1)[0]  # shape (2,)
    probs = probs.detach().cpu().numpy()
    return {
        "False": float(probs[0]),
        "True": float(probs[1]),
    }
```
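The returned dict maps each label to its softmax probability. A minimal illustration of the output format follows; the logits below are made up for demonstration, not produced by the model, and with the real tokenizer and model you would simply call `get_outcome(tokenizer, model, note_text)`.

```python
import torch

# Stand-in logits to illustrate get_outcome's post-processing step.
logits = torch.tensor([2.0, 0.5])  # illustrative values only
probs = torch.softmax(logits, dim=-1)
result = {"False": float(probs[0]), "True": float(probs[1])}
# The two probabilities sum to 1; a higher "True" value means the model
# considers discharge within 12 hours more likely.
```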
Contact: alba@wustl.edu
Base model: microsoft/biogpt