200 Downloads Updated 1 year ago
b250317a0f84 · 7.7GB
q4_K_M, q6_K, and q8_0 quantized versions of mlabonne/NeuralBeagle14-7B.
The model supports up to 8K context; the Modelfile is configured for 4K by default.
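To use the full 8K window, the context length can be raised in a custom Modelfile. A minimal sketch, assuming the model is pulled under the tag `neuralbeagle14` (substitute the actual tag for this listing):

```
# Hypothetical Modelfile: raise the context window from the default 4K to 8K.
# "neuralbeagle14" is a placeholder tag; replace it with the real model name.
FROM neuralbeagle14
PARAMETER num_ctx 8192
```

Build a new local model from it with `ollama create neuralbeagle14-8k -f Modelfile`, then run it as usual with `ollama run neuralbeagle14-8k`.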