Hugging Face Hub model search • active filter: chatglm
zai-org/androidgen-glm-4-9b • 9B • Updated • 19 • 2
wutZYH/chatglm2-bearing-lora • 9B • Updated • 10
zyy0923/lujiazui-spatiotemporal-glm4 • Updated • 34
radariscs/finetuned-model • 9B • Updated • 1
mlx-community/glm-4-9b-chat-1m-4bit • Text Generation • Updated • 75
mlx-community/glm-4-9b-chat-1m-6bit • Text Generation • Updated • 66 • 1
mlx-community/glm-4-9b-chat-1m-8bit • Text Generation • Updated • 53
mlx-community/glm-4-9b-chat-1m-bf16 • Text Generation • Updated • 39 • 1
mlx-community/LongWriter-llama3.1-8b-6bit • Text Generation • 8B • Updated • 32 • 2
amd/chatglm3-6b-onnx-ryzenai-hybrid
amd/chatglm3-6b-onnx-ryzenai-npu • Updated • 56
mlx-community/LongWriter-llama3.1-8b-8bit • Text Generation • 8B • Updated • 16 • 1
Emanresu/GLM-4-9b-AutoRound-4bits • 1B • Updated • 1
optimum-intel-internal-testing/tiny-random-chatglm • 19.3M • Updated • 4.34k
optimum-intel-internal-testing/tiny-random-chatglm4 • 9.76M • Updated • 4.29k
Pujitha30/zai-org_chatglm2-6b_8bitqunatize • Feature Extraction • 6B • Updated • 1
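The listing above is the result of a free-text "chatglm" search on the Hub. A minimal sketch of reproducing such a search programmatically, using only the Python standard library against the Hub's public REST endpoint (`https://huggingface.co/api/models`); the `search` and `limit` query parameters and the `modelId`/`downloads`/`likes` response fields come from the public Hub API, and the network call is kept behind a main guard so the URL-building helper can be used on its own.

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://huggingface.co/api/models"


def build_search_url(query: str, limit: int = 20) -> str:
    """Build the Hub model-search URL for a free-text query."""
    params = urllib.parse.urlencode({"search": query, "limit": limit})
    return f"{API_BASE}?{params}"


def search_models(query: str, limit: int = 20) -> list[dict]:
    """Fetch matching model cards as a list of dicts (requires network access)."""
    with urllib.request.urlopen(build_search_url(query, limit)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print repo id, download count, and like count for the top matches.
    for model in search_models("chatglm", limit=5):
        print(model["modelId"], model.get("downloads"), model.get("likes"))
```

The same query can also be issued through the `huggingface_hub` client library (`HfApi.list_models(search="chatglm")`) when taking an extra dependency is acceptable.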