
Kumru-2B is the lightweight, open-source version of Kumru LLM, developed for Turkish from scratch by VNGRS.
Try the demo of the 7B version here.
Both Kumru-7B and Kumru-2B are evaluated on the Cetvel benchmark.

Overall, we observe that Kumru surpasses significantly larger models such as LLaMA-3.3-70B, Gemma-3-27B, Qwen-2-72B, and Aya-32B. It excels at tasks tied to the nuances of Turkish, such as grammatical error correction and text summarization.
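Below is a minimal inference sketch, assuming the model loads with the standard transformers causal-LM classes and uses the chat template described in the next section; the grammar-correction prompt is purely illustrative, not an official prompt format.

```python
# Minimal inference sketch for Kumru-2B via transformers (assumed standard API).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vngrs-ai/Kumru-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative grammar-correction request in Turkish.
messages = [
    {"role": "user", "content": "Şu cümledeki yazım hatalarını düzelt: 'Yarın okula gidicem.'"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```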
The Kumru tokenizer is a modern BPE tokenizer with a vocabulary size of 50,176, a pre-tokenization regex, and a chat template.
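A quick sketch of inspecting these tokenizer properties through the standard transformers API; the example sentence is illustrative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vngrs-ai/Kumru-2B")
print(len(tokenizer))  # vocabulary size, expected to be around 50,176

# Tokenize an illustrative Turkish sentence and inspect the token count.
text = "Kumru, Türkçe için sıfırdan geliştirilmiş bir dil modelidir."
ids = tokenizer(text)["input_ids"]
print(len(ids), tokenizer.convert_ids_to_tokens(ids))
```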

Other open-source models spend between 38% and 98% more tokens than Kumru while still having larger vocabulary sizes. This means Kumru can represent more text within its context length and process it faster and more cheaply. Although the native context length of Kumru is 8,192 tokens, its effective context length relative to other multilingual models is roughly 11,300 to 16,200 tokens (8,192 scaled by their 38% to 98% token overhead). This shows the efficiency of a native Turkish tokenizer in terms of representation power, speed, and cost.
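To make the effective-context argument concrete, here is the arithmetic behind it; the only inputs are the 8,192-token native context and the 38% to 98% overhead figures quoted above.

```python
# Effective context length: if another model spends X% more tokens on the same
# Turkish text, Kumru's 8,192 native tokens cover as much text as
# 8,192 * (1 + X/100) tokens would in that model.
native_context = 8192
for overhead in (0.38, 0.98):  # 38% and 98% more tokens, per the comparison above
    effective = native_context * (1 + overhead)
    print(f"{overhead:.0%} overhead -> ~{effective:,.0f} effective tokens")
# 38% overhead -> ~11,305 effective tokens
# 98% overhead -> ~16,220 effective tokens
```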
@misc{turker2025kumru,
  title={Kumru},
  author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
  year={2025},
  url={https://huggingface.co/vngrs-ai/Kumru-2B}
}