
A high-quality Mixture of Experts (MoE) model with open weights by Mistral AI.


Digest: c50d8f6bf633
Size: 8.9 GB
Architecture: llama
Parameters: 12.9B
Quantization: Q5_K_S
Params: { "stop": [ "[INST]", "[/INST]" ] }
Template: "[INST] {{ .System }} {{ .Prompt }} [/INST]"

Readme

The Mixtral-7Bx2 Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model.

@HuggingFace https://huggingface.co/ManniX-ITA/Mixtral_7Bx2_MoE-GGUF