AISquare-Instruct-llama2-koen-13b-v0.9.24

Model Details

Developed by Inswave Systems UI Platform Team

Method
Trained with the SFT (supervised fine-tuning) and DPO (Direct Preference Optimization) methods.
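As a rough illustration of the DPO method mentioned above, the sketch below computes the standard DPO loss for a single preference pair. This is a minimal, self-contained example of the general technique, not the team's actual training code; the function and variable names are assumptions for illustration.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair (illustrative sketch).

    Each argument is the summed log-probability of the chosen or
    rejected response under the policy or the frozen reference model.
    """
    # Reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference
    # model, scaled by beta.
    margin = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): small when the policy clearly prefers
    # the chosen response, large when it prefers the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With a zero margin the loss is log(2) ~= 0.693; training pushes
# the margin up, driving the loss toward zero.
```

In practice this loss is computed in batches over token-level log-probabilities from the policy and a frozen reference model, but the per-pair arithmetic is exactly the above.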

Hardware
The model was trained on a single node with 4x NVIDIA A100 GPUs.

Base Model
beomi/llama2-koen-13b

Implementation Code

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "inswave/AISquare-Instruct-llama2-koen-13b-v0.9.24"

# Load the weights in half precision and shard them automatically
# across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)

Model size
13B params (Safetensors)

Tensor type
BF16
