

Models


22 models

| Tag | Size | Context window | Input | Updated |
| --- | --- | --- | --- | --- |
| smaug-llama3-8b:latest | 4.7GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q2_k | 3.2GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q3_k_s | 3.7GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q3_k_m | 4.0GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q3_k_l | 4.3GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q4_0 | 4.7GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q4_1 | 5.1GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q4_k_s | 4.7GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q4_k_m | 4.9GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q5_0 | 5.6GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q5_1 | 6.1GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q5_k_s | 5.6GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q5_k_m | 5.7GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q6_k | 6.6GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:q8_0 | 8.5GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:iq2_xxs | 2.4GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:iq2_xs | 2.6GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:iq2_s | 2.8GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:iq3_xxs | 3.3GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:iq3_s | 3.7GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:iq4_xs | 4.4GB | 8K | Text | 1 year ago |
| smaug-llama3-8b:iq4_nl | 4.7GB | 8K | Text | 1 year ago |

Readme

Llama-3-Smaug-8B

Quantized using an importance matrix (imatrix) generated from groups_merged.txt; the original safetensors weights were converted to fp32 first.

Built with Meta Llama 3


This model was built by applying the Smaug recipe, which improves performance on real-world multi-turn conversations, to meta-llama/Meta-Llama-3-8B-Instruct.

Model Description

Evaluation

MT-Bench

```
########## First turn ##########
                                score
model                    turn
Llama-3-Smaug-8B         1    8.77500
Meta-Llama-3-8B-Instruct 1    8.31250
########## Second turn ##########
                                score
model                    turn
Meta-Llama-3-8B-Instruct 2    7.88750
Llama-3-Smaug-8B         2    7.88750
########## Average ##########
                              score
model
Llama-3-Smaug-8B           8.331250
Meta-Llama-3-8B-Instruct   8.100000
```

| Model | First turn | Second turn | Average |
| --- | --- | --- | --- |
| Llama-3-Smaug-8B | 8.78 | 7.89 | 8.33 |
| Llama-3-8B-Instruct | 8.31 | 7.89 | 8.10 |
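The reported averages are simply the mean of the two per-turn scores; a quick arithmetic check using the per-turn values above:

```python
# Recompute the MT-Bench averages from the per-turn scores reported above.
scores = {
    "Llama-3-Smaug-8B": (8.775, 7.8875),
    "Meta-Llama-3-8B-Instruct": (8.3125, 7.8875),
}
for model, (turn1, turn2) in scores.items():
    print(f"{model}: {(turn1 + turn2) / 2:.6f}")
# Llama-3-Smaug-8B: 8.331250
# Meta-Llama-3-8B-Instruct: 8.100000
```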

This version of Smaug uses new techniques and new data compared to Smaug-72B; more information will be released later. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.