Original model link: https://huggingface.co/Xenova/sweep-next-edit-1.5B

name: sweep-next-edit-1.5B-MLX-BF16
base_model: sweepai/sweep-next-edit-1.5B
license: apache-2.0
pipeline_tag: text-generation
tasks: text-generation
language: en
get_started_code: uvx --from mlx-lm mlx_lm.generate --model mlx-community/sweep-next-edit-1.5B-MLX-BF16 --prompt 'Test Prompt'

sweep-next-edit-1.5B-MLX-BF16

sweep-next-edit-1.5B is a code completion model based on Qwen2.5-Coder that can run in under 500 ms on consumer hardware. This repository contains the model converted to MLX format in BF16 precision.

MLX is an array framework for machine learning on Apple silicon (M-series processors: M1/M2/M3/M4/M5), built on the Metal GPU API.

Generation using uv (https://docs.astral.sh/uv/):

uvx --from mlx-lm mlx_lm.generate --model mlx-community/sweep-next-edit-1.5B-MLX-BF16 --prompt 'Test Prompt'

Generation using pip:

pip install mlx-lm
mlx_lm.generate --model mlx-community/sweep-next-edit-1.5B-MLX-BF16 --prompt 'Test Prompt'
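For programmatic use, the same model can be loaded through the mlx-lm Python API. A minimal sketch (mlx-lm requires Apple silicon, and the first call to `load` downloads the weights from the Hub):

```python
from mlx_lm import load, generate

# Download (on first use) and load the MLX weights and tokenizer
model, tokenizer = load("mlx-community/sweep-next-edit-1.5B-MLX-BF16")

prompt = "Test Prompt"

# If the tokenizer ships a chat template, wrap the prompt with it
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a completion for the prompt
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

The chat-template branch mirrors the standard mlx-lm usage pattern; for raw code-completion prompts it can be skipped.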
