Mirek
mirekphd
·
AI & ML interests
ML, MLOps, AIOps, LLM, VLM, SSM
Organizations
None yet
Error when loading Qwen3-VL-30B-A3B-Instruct-AWQ with transformers
2
#3 opened 6 months ago
by
dfg543
Any chance of a smaller coding model in the 30-70b range?
❤️ 50
4
#6 opened 9 months ago
by
smcleod
Inference problems for all Qwen2.5 VL models in transformers above 4.49.0
2
#26 opened about 1 year ago
by
mirekphd
Inference problems in transformers==4.50.0
1
#3 opened about 1 year ago
by
mirekphd
Can you distill qwen-2.5-72b?
1
1
#30 opened about 1 year ago
by
xldistance
Failed to load model (with the latest version, 17 hours ago)
9
#3 opened over 1 year ago
by
omnibookxp
Will there be an AWQ quant for 26B or 40B?
1
6
#7 opened almost 2 years ago
by
SilentAntagonist