
Meta-Llama-3-12B-Instruct is a depth-upscaled merge of Llama 3 8B by M. Labonne

Models


22 models

llama3-12b:latest · 6.6GB · 8K context window · Text · 1 year ago
llama3-12b:q2_k · 4.5GB · 8K context window · Text · 1 year ago
llama3-12b:q3_k_s · 5.2GB · 8K context window · Text · 1 year ago
llama3-12b:q3_k_m · 5.7GB · 8K context window · Text · 1 year ago
llama3-12b:q3_k_l · 6.2GB · 8K context window · Text · 1 year ago
llama3-12b:q4_0 · 6.6GB · 8K context window · Text · 1 year ago
llama3-12b:q4_1 · 7.3GB · 8K context window · Text · 1 year ago
llama3-12b:q4_k_s · 6.7GB · 8K context window · Text · 1 year ago
llama3-12b:q4_k_m · 7.0GB · 8K context window · Text · 1 year ago
llama3-12b:q5_0 · 8.0GB · 8K context window · Text · 1 year ago
llama3-12b:q5_1 · 8.7GB · 8K context window · Text · 1 year ago
llama3-12b:q5_k_s · 8.0GB · 8K context window · Text · 1 year ago
llama3-12b:q5_k_m · 8.2GB · 8K context window · Text · 1 year ago
llama3-12b:q6_k · 9.5GB · 8K context window · Text · 1 year ago
llama3-12b:q8_0 · 12GB · 8K context window · Text · 1 year ago
llama3-12b:iq2_xxs · 3.3GB · 8K context window · Text · 1 year ago
llama3-12b:iq2_xs · 3.6GB · 8K context window · Text · 1 year ago
llama3-12b:iq2_s · 3.8GB · 8K context window · Text · 1 year ago
llama3-12b:iq3_xxs · 4.6GB · 8K context window · Text · 1 year ago
llama3-12b:iq3_s · 5.2GB · 8K context window · Text · 1 year ago
llama3-12b:iq4_xs · 6.3GB · 8K context window · Text · 1 year ago
llama3-12b:iq4_nl · 6.6GB · 8K context window · Text · 1 year ago

Readme

Quantized with an importance matrix (imatrix) computed on groups_merged.txt
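For reference, imatrix quantization of this kind is typically done with llama.cpp's tools: first compute the importance matrix over a calibration file (here groups_merged.txt), then pass it to the quantizer. A hedged sketch, with the GGUF file names as placeholder assumptions (only groups_merged.txt comes from the note above):

```shell
# Compute an importance matrix from the calibration text
# (binary is named "imatrix"/"llama-imatrix" depending on llama.cpp version)
llama-imatrix -m Meta-Llama-3-12B-Instruct-f16.gguf \
  -f groups_merged.txt -o imatrix.dat

# Quantize the f16 model to q4_k_m, guided by the importance matrix
llama-quantize --imatrix imatrix.dat \
  Meta-Llama-3-12B-Instruct-f16.gguf \
  Meta-Llama-3-12B-Instruct-q4_k_m.gguf q4_k_m
```

The imatrix step is what distinguishes these quants from plain k-quants: weights that matter most on the calibration data are preserved more precisely, which mainly benefits the aggressive iq2/iq3 tags listed above.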

https://huggingface.co/mlabonne/Meta-Llama-3-12B-Instruct