597 Downloads · Updated 1 year ago
2 models

Name                        Size     Context   Input   Updated
vikhr_llama3.1_8b:latest    16GB     128K      Text    1 year ago
vikhr_llama3.1_8b:Q4_K_M    4.9GB    128K      Text    1 year ago
Russian Vikhr model, fine-tuned from Llama-3.1-8B-Instruct. Authors: https://t.me/vikhrmodels

FP16 (latest): HF (Hugging Face) weights
Q4_K_M: quantized using Ollama