Benjamin Marie (PRO), bnjmnmarie
AI & ML interests: None yet
Recent Activity
- updated a collection 9 days ago: Quantized Qwen3.5
- updated a model 9 days ago: kaitchup/Qwen3.5-27B-autoround-NVFP4-linearattn-BF16
- published a model 17 days ago: kaitchup/Qwen3.5-27B-autoround-NVFP4-linearattn-BF16
- Error running vllm 12 · 1 · #1 opened 3 months ago by evetsagg
- Reasoning not parsing correctly within VLLM · 11 · #4 opened 4 months ago by SuperbEmphasis
- works with vLLM, with FLASHINFER_MOE_FP4 · 1 · #2 opened 4 months ago by bnjmnmarie
- Update README.md · #1 opened 4 months ago by bmarie
- Question about Llama 3.1 license compliance · 😎👍 6 · 2 · #3 opened 5 months ago by tokinasin
- qwen3moe instead of qwen3_moe? · 3 · #5 opened 8 months ago by bnjmnmarie
- How can I get the access? · 1 · #1 opened 12 months ago by minjunsz
- how to run this model · 4 · #1 opened 12 months ago by cicdatopea
- Qwen-32B overflow issue · 8 · #1 opened about 1 year ago by cicdatopea
- Your quants are not listed in the base model · 2 · #2 opened about 1 year ago by dazipe
- Any plan for 8 bit? · 1 · #1 opened about 1 year ago by jm4n21
- Request for Mistral Large 2 Instruct 2407 3bit with Autoround GPTQ · 4 · #1 opened about 1 year ago by MLDataScientist
- Update README.md · #1 opened over 1 year ago by bmarie
- Mistral-large 2bit? · 1 · #1 opened over 1 year ago by KnutJaegersberg
- SFT notebooks · 2 · #1 opened over 1 year ago by silvacarl
- Librarian Bot: Add language metadata for dataset · #2 opened over 1 year ago by librarian-bot
- Good, but it doesn't stop · 3 · #1 opened almost 2 years ago by FM-1976
- Does this quantized model adequately work? · 5 · #1 opened about 2 years ago by Dtree07
- Loading the Model · 2 · #1 opened about 2 years ago by deleted