#12 · Multiple function_tool calls needed · 1 comment · opened 13 days ago by kq
#10 · I love GLM! But please consider a longer context length · 1 comment · opened 15 days ago by mahmood36
#9 · Please consider making different model sizes and multimodality · opened 16 days ago by Dampfinchen
#8 · </think> tokenization in the training set · opened 16 days ago by L29Ah
#7 · Quantize to 4 bits with BitsAndBytesConfig · opened 21 days ago by Day1Kim
#6 · Good hardware for this model? · 1 comment · opened 24 days ago by CoffeeBliss
#5 · When will GLM-4.5 Flash be released? · 5 comments · opened 26 days ago by WilliamKing9
#4 · I have a draft PR up to llama.cpp, keen for your input · ❤️ 7 · opened 26 days ago by smcleod
#3 · Disable thinking mode? · ❤️ 1 · 7 comments · opened 27 days ago by daaain
#2 · Finetuning · 2 comments · opened 27 days ago by AlexWortega
#1 · Add AWQ quant? · ➕ 4 · 3 comments · opened 27 days ago by Foggierlucky