---
license: apache-2.0
datasets:
- bigcode/the-stack
- bigcode/the-stack-v2
- bigcode/starcoderdata
- bigcode/commitpack
library_name: mlx
tags:
- code
- mlx
base_model: JetBrains/Mellum-4b-sft-python
pipeline_tag: text-generation
model-index:
- name: Mellum-4b-sft-python
  results:
  - task:
      type: text-generation
    dataset:
      name: RepoBench 1.1 (Python)
      type: tianyang/repobench_python_v1.1
    metrics:
    - type: exact_match
      value: 0.2837
      name: EM (avg)
      verified: false
    - type: exact_match
      value: 0.2987
      name: EM ≤ 8k
      verified: false
    - type: exact_match
      value: 0.2924
      name: EM (2k)
      verified: false
    - type: exact_match
      value: 0.306
      name: EM (4k)
      verified: false
    - type: exact_match
      value: 0.2977
      name: EM (8k)
      verified: false
    - type: exact_match
      value: 0.268
      name: EM (12k)
      verified: false
    - type: exact_match
      value: 0.2543
      name: EM (16k)
      verified: false
  - task:
      type: text-generation
    dataset:
      name: SAFIM
      type: gonglinyuan/safim
    metrics:
    - type: pass@1
      value: 0.4212
      name: pass@1 (avg)
      verified: false
    - type: pass@1
      value: 0.3316
      name: pass@1 (algorithmic)
      verified: false
    - type: pass@1
      value: 0.3611
      name: pass@1 (control)
      verified: false
    - type: pass@1
      value: 0.571
      name: pass@1 (API)
      verified: false
  - task:
      type: text-generation
    dataset:
      name: HumanEval Infilling
      type: loubnabnl/humaneval_infilling
    metrics:
    - type: pass@1
      value: 0.8045
      name: pass@1 (single-line)
      verified: false
    - type: pass@1
      value: 0.4819
      name: pass@1 (multi-line)
      verified: false
    - type: pass@1
      value: 0.3768
      name: pass@1 (random-span)
      verified: false
---

# mlx-community/Mellum-4b-sft-python-8bit

This model [mlx-community/Mellum-4b-sft-python-8bit](https://huggingface.co/mlx-community/Mellum-4b-sft-python-8bit) was converted to MLX format from [JetBrains/Mellum-4b-sft-python](https://huggingface.co/JetBrains/Mellum-4b-sft-python) using mlx-lm version **0.25.2**.
## Use with mlx

```bash
pip install mlx-lm
```
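
mlx-lm also installs a small command-line tool, so you can sanity-check the converted model without writing any code. A minimal sketch (the prompt text and token budget are illustrative):

```bash
mlx_lm.generate --model mlx-community/Mellum-4b-sft-python-8bit \
  --prompt "def fibonacci(n):" \
  --max-tokens 64
```

For programmatic use, the Python snippet below does the same through the `mlx_lm` API.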

```python
from mlx_lm import load, generate

# Load the 8-bit quantized model and its tokenizer from the Hub.
model, tokenizer = load("mlx-community/Mellum-4b-sft-python-8bit")

prompt = "hello"

# Apply the chat template if the tokenizer defines one;
# otherwise the raw prompt is passed through unchanged.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
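
Mellum is a code-completion model, so the most natural prompt is a raw Python prefix for the model to continue. A minimal sketch of that usage, assuming the standard `mlx_lm` API; the snippet and the `max_tokens` value are illustrative, not from the original card:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mellum-4b-sft-python-8bit")

# An unfinished function; the model generates the continuation.
# (Illustrative example prompt.)
code_prompt = (
    "def binary_search(items, target):\n"
    '    """Return the index of target in sorted items, or -1."""\n'
)

completion = generate(
    model,
    tokenizer,
    prompt=code_prompt,
    max_tokens=128,  # assumed generation budget; adjust as needed
    verbose=True,
)
```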