
L3.3-70B-Animus-V12.5

Wings_of_Fire

Send me your support to help me feed the data beast! Also taking commissions for universe-specific models.

Support on Ko-fi

Important: Chat Template

This model uses the Llama 3 instruction template. Ensure your client is configured correctly to avoid degraded performance.

Human-Readable Format:

<|start_header_id|>system<|end_header_id|>\n\n[SYSTEM_PROMPT]<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n[USER_MESSAGE]<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n

Jinja Template:

{{- bos_token }}{%- if custom_tools is defined %}{%- set tools = custom_tools %}{%- endif %}{%- if not tools_in_user_message is defined %}{%- set tools_in_user_message = true %}{%- endif %}{%- if not date_string is defined %}{%- set date_string = "26 Jul 2024" %}{%- endif %}{%- if not tools is defined %}{%- set tools = none %}{%- endif %}{%- if messages[0]['role'] == 'system' %}{%- set system_message = messages[0]['content']|trim %}{%- set messages = messages[1:] %}{%- else %}{%- set system_message = "" %}{%- endif %}{{- "<|start_header_id|>system<|end_header_id|>\n\n" }}{%- if builtin_tools is defined or tools is not none %}{{- "Environment: ipython\n" }}{%- endif %}{%- if builtin_tools is defined %}{{- "Tools: " + builtin_tools | reject('equalto', 'code_interpreter') | join(", ") + "\n\n"}}{%- endif %}{{- "Cutting Knowledge Date: December 2023\n" }}{{- "Today Date: " + date_string + "\n\n" }}{%- if tools is not none and not tools_in_user_message %}{{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }}{{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}{{- "Do not use variables.\n\n" }}{%- for t in tools %}{{- t | tojson(indent=4) }}{{- "\n\n" }}{%- endfor %}{%- endif %}{{- system_message }}{{- "<|eot_id|>" }}{%- if tools_in_user_message and not tools is none %}{%- if messages | length != 0 %}{%- set first_user_message = messages[0]['content']|trim %}{%- set messages = messages[1:] %}{%- else %}{{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}{%- endif %}{{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}{{- "Given the following functions, please respond with a JSON for a function call " }}{{- "with its proper arguments that best answers the given prompt.\n\n" }}{{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' 
}}{{- "Do not use variables.\n\n" }}{%- for t in tools %}{{- t | tojson(indent=4) }}{{- "\n\n" }}{%- endfor %}{{- first_user_message + "<|eot_id|>"}}{%- endif %}{%- for message in messages %}{%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}{{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' }}{%- elif 'tool_calls' in message %}{%- if not message.tool_calls|length == 1 %}{{- raise_exception("This model only supports single tool-calls at once!") }}{%- endif %}{%- set tool_call = message.tool_calls[0].function %}{%- if builtin_tools is defined and tool_call.name in builtin_tools %}{{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}{{- "<|python_tag|>" + tool_call.name + ".call(" }}{%- for arg_name, arg_val in tool_call.arguments | items %}{{- arg_name + '="' + arg_val + '"' }}{%- if not loop.last %}{{- ", " }}{%- endif %}{%- endfor %}{{- ")" }}{%- else %}{{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}{{- '{"name": "' + tool_call.name + '", ' }}{{- '"parameters": ' }}{{- tool_call.arguments | tojson }}{{- "}" }}{%- endif %}{%- if builtin_tools is defined %}{{- "<|eom_id|>" }}{%- else %}{{- "<|eot_id|>" }}{%- endif %}{%- elif message.role == "tool" or message.role == "ipython" %}{{- "<|start_header_id|>ipython<|end_header_id|>\n\n" }}{%- if message.content is mapping or message.content is iterable %}{{- message.content | tojson }}{%- else %}{{- message.content }}{%- endif %}{{- "<|eot_id|>" }}{%- endif %}{%- endfor %}{%- if add_generation_prompt %}{{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{%- endif %}
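For clients that build the prompt string manually, the human-readable format above can be assembled with a small helper. This is a minimal single-turn sketch (the function name is illustrative, and it omits the BOS token, which most backends prepend automatically):

```python
def build_llama3_prompt(system_prompt: str, user_message: str) -> str:
    """Assemble a single-turn Llama 3 prompt matching the
    human-readable template above (BOS token not included)."""
    return (
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are Tsunami of the SeaWings.",
    "Who guards the academy?",
)
print(prompt)
```

For multi-turn chats or tool use, prefer letting your client apply the full Jinja template above rather than hand-assembling the string.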

Quantized Models

The quantized model files are available for download. Click the buttons below to view the files.

Download EXL3 Files → Download GGUF Files → Download compressed-tensors for vLLM →

Character Card & Lore Book

For the best roleplaying experience, it is highly recommended to use the provided character card and lore book. These files help guide the model's persona and provide rich, in-universe context.

Download Files →

Sampler Presets

For a seamless setup in SillyTavern, you can download pre-configured sampler presets. These are tuned to provide an optimal balance between creativity and narrative coherence for this model.

Simply download the .json file below and import it into SillyTavern's sampler presets menu.

Download SillyTavern Presets →

  • For those who don't use SillyTavern, the sampler settings are:
    • Temp: 1
    • Min P: 0.02
    • DRY: 0.8, 1.75, 4
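For API-based clients, these settings map onto a request payload. The sketch below assumes a llama.cpp-style server, whose parameter names (`min_p`, `dry_multiplier`, `dry_base`, `dry_allowed_length`) are taken from its API; it also assumes the three DRY values above are multiplier, base, and allowed length, in SillyTavern's order. Verify the names against your backend before use:

```python
import json

# Sampler settings from the list above, expressed as a
# llama.cpp-style /completion payload (parameter names and
# DRY value ordering are assumptions; check your backend).
payload = {
    "temperature": 1.0,
    "min_p": 0.02,
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 4,
}
print(json.dumps(payload, indent=2))
```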

    Roleplay Format Guide

    For the best results, use this structured format. This helps the AI clearly distinguish between actions, inner thoughts, and dialogue.

    Actions / Descriptions
    *He walked across the room and stared out the window.*
    Inner Thoughts
    *-I wonder what she's thinking.-*
    Dialogue
    Alex (Curious): "What do you see out there?"

    Standard novel-style formatting is also understood, but this structured format is preferred for clarity.
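Because the format is regular, downstream tooling can classify model output line by line. A minimal sketch (the tag names "action", "thought", and "dialogue" are illustrative labels, not part of the model's output contract):

```python
import re

# Patterns for the structured roleplay format above.
# Thought must be checked before action, since *-...-* also
# matches the looser *...* action pattern.
PATTERNS = [
    ("thought", re.compile(r"\*-(.+?)-\*")),
    ("action", re.compile(r"\*(.+?)\*")),
    ("dialogue", re.compile(r'^(\w+)(?: \((\w+)\))?: "(.+?)"$')),
]

def classify(line: str):
    """Return (kind, captured groups) for one line of output."""
    for kind, pattern in PATTERNS:
        match = pattern.search(line)
        if match:
            return kind, match.groups()
    return "narration", (line,)

print(classify("*-I wonder what she's thinking.-*"))
print(classify('Alex (Curious): "What do you see out there?"'))
```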

    Roleplay Example

    Click the button below to view a full, unedited chatlog demonstrating the model's narrative style and character portrayal.

    View Chatlog Example →

    Model Description

    This is Version 12.5 in the Animus series. V12.5 is a direct fine-tune of kldzj/Llama-3.3-70B-Instruct-heretic.

    V12.5's strength comes from a novel dataset designed to teach the model the why behind the lore, not just the what. The training data is a mix of:

    • A 3,000-example Q&A dataset: This data is framed as an in-character study session, like a student at Jade Mountain Academy learning about the history, relationships, and politics of Pyrrhia's tribes. This provides a deep, contextual understanding of the universe.
    • A 3,000-example uncensored roleplay dataset: The same high-quality, mature roleplay scenarios used in previous versions, ensuring the model maintains its engaging and dynamic narrative capabilities.

    The result is a model with exceptionally strong prose and a deep grasp of in-universe lore, making for a highly immersive and accurate roleplaying experience.

    Note: in roleplay, the model closely mirrors the system prompt and the first message. If the first assistant message is short, subsequent messages will tend to be short as well.

    Training Details

    V12.5 Training Process

    V12.5 marks a shift from model merging to a focused, direct fine-tuning approach. This allows for greater control over the final model's characteristics.

    • Base Model: kldzj/Llama-3.3-70B-Instruct-heretic
    • Hardware: 1x NVIDIA B200
    • Training Time: 20 hours
    • Epochs: 3

    Training Dataset

    The V12.5 dataset consists of 6,000 high-quality examples, a combination of two distinct types:

    • In-Character Q&A (3,000 examples): This new dataset simulates a student at Jade Mountain Academy studying the world's lore. It's composed of roleplay-style questions and answers covering tribe history, family dynamics, and political relationships. This method builds a foundational, interconnected understanding of the lore.
    • Uncensored Roleplay (3,000 examples): This is the same mature, canon-centric dataset refined for previous versions. It explores pivotal "what-if" scenarios from the books using only canon characters, ensuring the model can handle complex and dramatic narratives.

    Both datasets underwent a rigorous cleaning process to remove formatting artifacts, such as **scene transitions**, resulting in a cleaner and more natural narrative style.

    Intended Use & Limitations

    • Intended Use: The primary purpose of this model is creative writing and roleplay within the Wings of Fire universe. However, user feedback indicates it is also highly effective for general-purpose roleplay.
    • Limitations & Quirks:
      • Performance on tasks outside of its training domain (general knowledge, coding, etc.) is not guaranteed and will likely be poor.
      • Versatility: Although it is tuned specifically for Wings of Fire, users have reported it is very capable of normal roleplay with other settings and characters.
      • The model may "hallucinate" or generate plausible but non-canonical information, especially when pushed outside the established "what-if" scenarios.
      • Content: The training data includes mature and darker themes from the Wings of Fire series, such as conflict, character death, and moral ambiguity. The model is capable of generating content reflecting these themes. As always, it is up to the user what they do with it.
      • Formatting: Training data was cleaned to remove narrative artifacts like **scene transitions**. The model should now produce cleaner prose.
      • Safety: This model has not undergone additional safety alignment beyond what was included in its base model. Standard responsible AI practices should be followed.

    Acknowledgements

    • Credit to kldzj for the powerful Llama-3.3-70B-Instruct-heretic model.
    • Credit to Google for the Gemini Pro model, used in dataset generation.
    Safetensors: 71B params, BF16