Active filters: vLLM
QuantTrio/Qwen3-235B-A22B-Thinking-2507-AWQ • Text Generation • 235B • Updated • 3.7k downloads • 6 likes
QuantTrio/Qwen3-VL-32B-Thinking-AWQ • Image-Text-to-Text • 33B • Updated • 881 downloads • 5 likes
QuantTrio/DeepSeek-V3.2-AWQ • Text Generation • 685B • Updated • 28.9k downloads • 10 likes
(model name not captured) • Text Generation • 358B • Updated • 22.3k downloads • 23 likes
QuantTrio/GLM-4.7-Flash-AWQ • Text Generation • 31B • Updated • 76.9k downloads • 4 likes
model-scope/glm-4-9b-chat-GPTQ-Int4 • Text Generation • 9B • Updated • 63 downloads • 6 likes
model-scope/glm-4-9b-chat-GPTQ-Int8 • Text Generation • 9B • Updated • 2 downloads • 2 likes
tclf90/qwen2.5-72b-instruct-gptq-int4 • Text Generation • 73B • Updated • 34 downloads • 2 likes
tclf90/qwen2.5-72b-instruct-gptq-int3 • Text Generation • 69B • Updated • 38 downloads
prithivMLmods/Nu2-Lupi-Qwen-14B • Text Generation • 15B • Updated • 1 download • 2 likes
mradermacher/Nu2-Lupi-Qwen-14B-GGUF • 15B • Updated • 99 downloads • 1 like
mradermacher/Nu2-Lupi-Qwen-14B-i1-GGUF • 15B • Updated • 5.25k downloads • 1 like
JunHowie/Qwen3-0.6B-GPTQ-Int4 • Text Generation • 0.6B • Updated • 350 downloads • 1 like
JunHowie/Qwen3-0.6B-GPTQ-Int8 • Text Generation • 0.6B • Updated • 31 downloads
JunHowie/Qwen3-1.7B-GPTQ-Int4 • Text Generation • 2B • Updated • 1.36k downloads • 1 like
JunHowie/Qwen3-1.7B-GPTQ-Int8 • Text Generation • 2B • Updated • 13 downloads
JunHowie/Qwen3-32B-GPTQ-Int4 • Text Generation • 33B • Updated • 758 downloads • 4 likes
JunHowie/Qwen3-32B-GPTQ-Int8 • Text Generation • 33B • Updated • 274 downloads • 3 likes
JunHowie/Qwen3-30B-A3B-GPTQ-Int4 • Text Generation • 5B • Updated • 40 downloads • 1 like
JunHowie/Qwen3-14B-GPTQ-Int8 • Text Generation • 15B • Updated • 104 downloads • 1 like
JunHowie/Qwen3-14B-GPTQ-Int4 • Text Generation • 15B • Updated • 485 downloads • 4 likes
JunHowie/Qwen3-8B-GPTQ-Int8 • Text Generation • 8B • Updated • 63 downloads
JunHowie/Qwen3-8B-GPTQ-Int4 • Text Generation • 8B • Updated • 2.64k downloads • 4 likes
JunHowie/Qwen3-4B-GPTQ-Int4 • Text Generation • 4B • Updated • 800 downloads • 1 like
JunHowie/Qwen3-4B-GPTQ-Int8 • Text Generation • 4B • Updated • 21 downloads
JunHowie/Qwen3-30B-A3B-GPTQ-Int8 • Text Generation • 8B • Updated • 494 downloads
QuantTrio/Qwen3-235B-A22B-GPTQ-Int8 • Text Generation • 235B • Updated • 8 downloads
BeastyZ/Qwen2.5-3B-ConvSearch-R1-TopiOCQA • 3B • Updated • 44 downloads
QuantTrio/DeepSeek-R1-0528-Qwen3-8B-GPTQ-Int4-Int8Mix • Text Generation • 11B • Updated • 320 downloads • 4 likes
QuantTrio/DeepSeek-R1-0528-GPTQ-Int4-Int8Mix-Lite • Text Generation • 721B • Updated • 11 downloads • 2 likes
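Since the listing is filtered to models usable with vLLM, the sketch below shows one way such a quantized checkpoint might be loaded for offline inference. It is a minimal example, not an official recipe: it assumes vLLM is installed (pip install vllm) and that a CUDA GPU with enough memory for the chosen checkpoint is available; JunHowie/Qwen3-8B-GPTQ-Int4 is picked from the list purely for illustration, and the prompt and sampling settings are arbitrary.

```python
from vllm import LLM, SamplingParams

# Load a GPTQ-quantized checkpoint from the listing above.
# vLLM normally auto-detects the quantization scheme from the model config,
# so the explicit quantization argument is optional.
llm = LLM(
    model="JunHowie/Qwen3-8B-GPTQ-Int4",  # example model from the list
    quantization="gptq",
    max_model_len=8192,                   # cap context length to fit memory
)

# Arbitrary sampling settings for a quick smoke test.
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(
    ["Explain GPTQ quantization in one paragraph."],
    params,
)
for out in outputs:
    print(out.outputs[0].text)
```

For serving instead of offline generation, the same model ID can typically be passed to vLLM's OpenAI-compatible server (vllm serve JunHowie/Qwen3-8B-GPTQ-Int4), and AWQ checkpoints from the list work the same way with their own quantization scheme.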