21 models (updated 1 year ago):

| Tag | Size | Context window | Input |
|---|---|---|---|
| llama3-sppo-iter3:latest | 4.7GB | 8K | Text |
| llama3-sppo-iter3:q2_k | 3.2GB | 8K | Text |
| llama3-sppo-iter3:q3_k_s | 3.7GB | 8K | Text |
| llama3-sppo-iter3:q3_k_m | 4.0GB | 8K | Text |
| llama3-sppo-iter3:q3_k_l | 4.3GB | 8K | Text |
| llama3-sppo-iter3:q4_0 | 4.7GB | 8K | Text |
| llama3-sppo-iter3:q4_1 | 5.1GB | 8K | Text |
| llama3-sppo-iter3:q4_k_s | 4.7GB | 8K | Text |
| llama3-sppo-iter3:q4_k_m | 4.9GB | 8K | Text |
| llama3-sppo-iter3:q5_0 | 5.6GB | 8K | Text |
| llama3-sppo-iter3:q5_1 | 6.1GB | 8K | Text |
| llama3-sppo-iter3:q5_k_s | 5.6GB | 8K | Text |
| llama3-sppo-iter3:q5_k_m | 5.7GB | 8K | Text |
| llama3-sppo-iter3:q6_k | 6.6GB | 8K | Text |
| llama3-sppo-iter3:q8_0 | 8.5GB | 8K | Text |
| llama3-sppo-iter3:iq2_xs | 2.6GB | 8K | Text |
| llama3-sppo-iter3:iq2_s | 2.8GB | 8K | Text |
| llama3-sppo-iter3:iq3_xxs | 3.3GB | 8K | Text |
| llama3-sppo-iter3:iq3_s | 3.7GB | 8K | Text |
| llama3-sppo-iter3:iq4_xs | 4.4GB | 8K | Text |
| llama3-sppo-iter3:iq4_nl | 4.7GB | 8K | Text |
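Any tag above can be queried once pulled locally. Below is a minimal sketch using the official `ollama` Python client (`pip install ollama`); the q4_k_m tag is chosen only as a common size/quality middle ground, and the exact tag namespace depends on the publishing account on ollama.com.

```python
# Minimal sketch, assuming `pip install ollama` and that the tag below has
# already been pulled (the exact namespace depends on the publishing account).
import ollama

response = ollama.chat(
    model="llama3-sppo-iter3:q4_k_m",  # 4.9GB: a common size/quality middle ground
    messages=[{"role": "user", "content": "Summarize SPPO in two sentences."}],
)
print(response["message"]["content"])
```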
Self-Play Preference Optimization for Language Model Alignment (https://arxiv.org/abs/2405.00675)
This model was developed with Self-Play Preference Optimization (SPPO) at iteration 3, using meta-llama/Meta-Llama-3-8B-Instruct as the starting point. Training used the prompt sets from the openbmb/UltraFeedback dataset, split into three parts for the three iterations following snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset. All responses used in training are synthetic.
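For intuition, SPPO's practical objective regresses the policy's log density ratio against a scaled win-probability target, so responses rated as likely winners are pushed up and likely losers are pushed down. The sketch below is schematic, assuming sequence-level log-probabilities are already computed; the variable names and the `eta` default are illustrative and not taken from the authors' training code.

```python
# Schematic sketch of the SPPO square loss; not the authors' implementation.
import torch

def sppo_loss(policy_logps: torch.Tensor,  # log pi_theta(y|x), shape (batch,)
              ref_logps: torch.Tensor,     # log pi_t(y|x), frozen previous iterate
              win_prob: torch.Tensor,      # estimated P(y beats pi_t | x), e.g. via PairRM
              eta: float = 1.0) -> torch.Tensor:
    # Regress log(pi_theta / pi_t) onto eta * (P(win) - 1/2): the minimizer of
    # this square loss approximates the paper's exponential-weights update.
    log_ratio = policy_logps - ref_logps
    target = eta * (win_prob - 0.5)
    return ((log_ratio - target) ** 2).mean()
```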
AlpacaEval 2.0 results across the SPPO iterations (LC = length-controlled win rate, in %):

| Model | LC Win Rate | Win Rate | Avg. Length |
|---|---|---|---|
| Llama-3-8B-SPPO Iter1 | 31.73 | 31.74 | 1962 |
| Llama-3-8B-SPPO Iter2 | 35.15 | 35.98 | 2021 |
| Llama-3-8B-SPPO Iter3 | 38.77 | 39.85 | 2066 |
The results below were obtained with lm-evaluation-harness v0.4.1:
| Model | arc_challenge | truthfulqa_mc2 | winogrande | gsm8k | hellaswag | mmlu | Average |
|---|---|---|---|---|---|---|---|
| Llama-3-8B-SPPO Iter1 | 63.82 | 54.96 | 76.40 | 75.44 | 79.80 | 65.65 | 69.35 |
| Llama-3-8B-SPPO Iter2 | 64.93 | 56.48 | 76.87 | 75.13 | 80.39 | 65.67 | 69.91 |
| Llama-3-8B-SPPO Iter3 | 65.19 | 58.04 | 77.11 | 74.91 | 80.86 | 65.60 | 70.29 |
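A run in this style can be reproduced through the harness's Python entry point. This is a hedged sketch: the Hugging Face repo id UCLA-AGI/Llama-3-8B-SPPO-Iter3 is inferred from the model name, and the card does not state few-shot counts, so default task settings are used and per-task scores may differ slightly from the table.

```python
# Sketch of re-running the table with lm-evaluation-harness (pip install lm-eval).
# The Hugging Face repo id is an assumption inferred from the model name.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=UCLA-AGI/Llama-3-8B-SPPO-Iter3",
    tasks=["arc_challenge", "truthfulqa_mc2", "winogrande",
           "gsm8k", "hellaswag", "mmlu"],
)
print(results["results"])
```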
To cite the underlying work:

```
@misc{wu2024self,
  title={Self-Play Preference Optimization for Language Model Alignment},
  author={Wu, Yue and Sun, Zhiqing and Yuan, Huizhuo and Ji, Kaixuan and Yang, Yiming and Gu, Quanquan},
  year={2024},
  eprint={2405.00675},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```