# Meridian.AI

Meridian.AI is an experimental finance-focused sparse Mixture-of-Experts (MoE) causal language model trained via continual updates.
## Intended use

Use this model for:
- Financial Q&A style prompting
- Basic quantitative finance explanations
- Summarization/classification-style finance text tasks
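For Q&A-style prompting, inputs are expected in the Instruction/Response template shown in the usage example below. A small helper (hypothetical, not part of the repo) can build that template from a plain question:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a question in the Instruction/Response template used by this card's example."""
    return f"### Instruction:\n{instruction}\n### Response:\n"

prompt = build_prompt("Explain what a P/E ratio is and how it is used.")
```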
## Base model + tokenizer

- Base model weights: `hpcai-tech/openmoe-base`
- Tokenizer: `google/umt5-small` (256k vocab, SentencePiece/umT5)

This repo includes a working umT5 tokenizer at the root, so `AutoTokenizer.from_pretrained("MeridianAlgo/MeridianAI")` works.
## How to use (Transformers)

The model weights are stored under the `checkpoint/` subfolder.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MeridianAlgo/MeridianAI"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    subfolder="checkpoint",
    trust_remote_code=True,
    torch_dtype=torch.float32,
    low_cpu_mem_usage=True,
    ignore_mismatched_sizes=True,
)
model.eval()

prompt = """### Instruction:
Explain what a P/E ratio is and how it is used.
### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.85,
    top_p=0.92,
    repetition_penalty=1.25,
    no_repeat_ngram_size=3,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
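The decoded text includes the original prompt. A small post-processing helper (hypothetical, not part of the repo) can strip everything up to and including the `### Response:` marker:

```python
def extract_response(decoded: str, marker: str = "### Response:") -> str:
    """Return only the model's answer: the text after the last response marker."""
    idx = decoded.rfind(marker)
    if idx == -1:
        # Marker not found; return the whole decoded string, trimmed.
        return decoded.strip()
    return decoded[idx + len(marker):].strip()
```

Applied to the `generate` output above, this yields just the answer text, which is usually what you want to log or display.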
## Training data

Training uses streaming mixes of finance datasets (the FinanceMTEB family) plus optional larger corpora, depending on environment configuration.
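The exact dataset mix is configured per environment and is not specified here. As a rough illustration of weighted stream interleaving (plain Python generators standing in for streaming dataset objects; the sources and weights below are made up):

```python
import random

def interleave(streams, weights, seed=0):
    """Yield items from several iterators, choosing each source with the given probability.

    When a source is exhausted, it is dropped and sampling continues over the rest.
    """
    rng = random.Random(seed)
    iters = [iter(s) for s in streams]
    weights = list(weights)
    while iters:
        i = rng.choices(range(len(iters)), weights=weights, k=1)[0]
        try:
            yield next(iters[i])
        except StopIteration:
            del iters[i]
            del weights[i]

# Hypothetical sources: a finance-QA stream weighted more heavily than a news stream.
finance_qa = (f"qa-{n}" for n in range(3))
news = (f"news-{n}" for n in range(3))
mixed = list(interleave([finance_qa, news], [0.7, 0.3]))
```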
## Limitations

- This is a continually trained experimental model and may exhibit repetition.
- Nothing it produces is financial advice.
- Outputs may be incorrect or outdated.
## Source code

The training pipeline and scripts live in the GitHub repo: https://github.com/MeridianAlgo/FinAI