MIMIC-IV Tabular-Infused PEFT Collection
Predictive models of hospitalization outcomes from the MIMIC-IV dataset. All models were finetuned using our proposed tabular-infused PEFT methods.
This model predicts 7-day mortality upon hospital discharge. It is trained on discharge notes from the MIMIC-IV dataset, which comprises open-source Electronic Health Records (EHRs). The model was trained with a novel tabular-infused LoRA, whereby pre-operative tabular features (e.g., patient demographics and insurance information) were used to initialize the newly introduced LoRA parameters, instead of initializing them randomly.
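The initialization idea can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the exact scheme used to train this model: the feature encoding, projection matrix, and scaling below are illustrative assumptions. Standard LoRA initializes the low-rank matrix A randomly and B to zero; the tabular-infused variant instead derives A from the patient's pre-operative tabular features, while keeping B zero so fine-tuning still starts from the base weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): BERT hidden size, LoRA rank, and the
# number of encoded pre-operative tabular features per patient.
d_model, rank, n_tab = 768, 8, 12

# Encoded tabular features (e.g., demographics, insurance), hypothetical values.
tabular = rng.normal(size=(n_tab,))
# Hypothetical projection from tabular features to the LoRA parameter space.
proj = rng.normal(size=(d_model * rank, n_tab)) / np.sqrt(n_tab)

# Standard LoRA: A is small random Gaussian noise.
A_random = rng.normal(size=(rank, d_model)) * 0.01
# Tabular-infused variant: seed A from the projected tabular features instead.
A_tabular = (proj @ tabular).reshape(rank, d_model) * 0.01
# B starts at zero in both variants, so delta_W = B @ A is zero at step 0.
B = np.zeros((d_model, rank))

delta_W = B @ A_tabular
assert np.allclose(delta_W, 0.0)  # fine-tuning starts exactly at the base weights
```

Either way the initial weight update is zero; the difference is only where gradient descent starts within the low-rank subspace.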
The model outputs a binary label (True/False). It can be loaded as follows:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("cja5553/Bio_ClinicalBERT_MIMIC_IV_death_in_7_prediction_lora_ti")
model = AutoModelForSequenceClassification.from_pretrained("cja5553/Bio_ClinicalBERT_MIMIC_IV_death_in_7_prediction_lora_ti")
Then, you can use the function below to obtain a prediction for a single test point:
import torch

def get_outcome(tokenizer, model, text, device="cuda:0", max_length=512):
    device = torch.device(device)
    model = model.to(device)
    model.eval()
    # Tokenize the discharge note, truncating/padding to max_length tokens
    inputs = tokenizer(
        text,
        return_tensors="pt",
        max_length=max_length,
        truncation=True,
        padding="max_length"
    ).to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    probs = torch.softmax(outputs.logits, dim=-1)[0]  # shape (2,)
    probs = probs.detach().cpu().numpy()
    result = {
        "False": float(probs[0]),  # probability of surviving past 7 days
        "True": float(probs[1])    # probability of death within 7 days
    }
    return result
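To turn these probabilities into a hard True/False prediction, one can threshold the "True" probability. The 0.5 default below is a hypothetical choice; the appropriate operating point depends on the desired sensitivity/specificity trade-off for the clinical application.

```python
def to_label(result, threshold=0.5):
    # Hypothetical post-processing: flag predicted 7-day mortality when
    # the "True" probability meets or exceeds the threshold.
    return result["True"] >= threshold

print(to_label({"False": 0.97, "True": 0.03}))  # → False
print(to_label({"False": 0.20, "True": 0.80}))  # → True
```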
Contact me at alba@wustl.edu
Base model: emilyalsentzer/Bio_ClinicalBERT