298 Downloads · Updated 1 year ago

11 models (quantization variants):
| Name | Size | Context window | Input | Updated |
|------|------|----------------|-------|---------|
| mixtral_7bx2_moe:latest | 7.3GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q2_K | 4.8GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q3_K_S | 5.6GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q3_K_M | 6.2GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q3_K_L | 6.7GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q4_0 | 7.3GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q4_K_S | 7.3GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q4_K_M | 7.8GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q5_K_S | 8.9GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q5_K_M | 9.1GB | 32K | Text | 1 year ago |
| mixtral_7bx2_moe:q6_K | 11GB | 32K | Text | 1 year ago |
Mixtral_7Bx2_MoE is a pretrained, generative Sparse Mixture of Experts (MoE) large language model, distributed here as GGUF quantizations.

Source on Hugging Face: https://huggingface.co/ManniX-ITA/Mixtral_7Bx2_MoE-GGUF
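Below is a minimal sketch of querying one of the tags above through Ollama's local REST API. It assumes the Ollama server is running on its default port (11434) and that the chosen tag has already been pulled, e.g. with `ollama pull mixtral_7bx2_moe:q4_K_M`; the prompt text is only an illustration.

```python
import requests

# Hypothetical choice of tag; any tag from the table above works the same way.
MODEL = "mixtral_7bx2_moe:q4_K_M"

# Ollama's /api/generate endpoint returns a single JSON object when stream=False.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": MODEL,
        "prompt": "Explain what a sparse mixture-of-experts model is in two sentences.",
        "stream": False,
    },
    timeout=300,
)
response.raise_for_status()

# The completed text is returned under the "response" key.
print(response.json()["response"])
```

The same tags can also be used interactively from the command line with `ollama run mixtral_7bx2_moe:<tag>`; the larger quantizations (q5_K_M, q6_K) generally trade more memory for better output quality.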