Model files: https://huggingface.co/MaziyarPanahi/calme-2.4-rys-78b
This model is a fine-tuned version of dnhkng/RYS-XLarge, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
This model is suitable for a wide range of general-purpose applications.
GGUF quantizations are available thanks to @mradermacher:

- https://huggingface.co/mradermacher/calme-2.4-rys-78b-GGUF
- https://huggingface.co/mradermacher/calme-2.4-rys-78b-i1-GGUF
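If you want to run one of these GGUF quantizations locally, the sketch below uses llama-cpp-python. The quant filename pattern is a hypothetical choice; substitute whichever file from the repo fits your hardware.

```python
# A minimal sketch of running a GGUF quantization with llama-cpp-python.
# The filename glob is a hypothetical choice; pick any quant in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/calme-2.4-rys-78b-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; adjust to your hardware
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(out["choices"][0]["message"]["content"])
```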
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 50.26 |
| IFEval (0-Shot) | 80.11 |
| BBH (3-Shot) | 62.16 |
| MATH Lvl 5 (4-Shot) | 37.69 |
| GPQA (0-Shot) | 20.36 |
| MuSR (0-Shot) | 34.57 |
| MMLU-PRO (5-Shot) | 66.69 |
This model uses the ChatML prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
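Rather than assembling this template by hand, you can let the tokenizer render it. A minimal sketch, assuming the repository's tokenizer config ships this ChatML template:

```python
# Render the ChatML prompt via the tokenizer's built-in chat template
# (assumes the repo's tokenizer_config.json defines it, as the card implies).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.4-rys-78b")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

# add_generation_prompt=True appends the opening <|im_start|>assistant tag
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```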
# How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.4-rys-78b")

messages = [
    {"role": "user", "content": "Who are you?"},
]
print(pipe(messages))
```

```python
# Load the tokenizer and model directly
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.4-rys-78b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.4-rys-78b")
```
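To actually generate text with the directly loaded model, a hedged sketch follows; the `torch_dtype` and `device_map` values are assumptions for fitting a ~78B-parameter model across available GPUs, not settings prescribed by this card.

```python
# A sketch of end-to-end generation; dtype/device settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.4-rys-78b")
model = AutoModelForCausalLM.from_pretrained(
    "MaziyarPanahi/calme-2.4-rys-78b",
    torch_dtype=torch.bfloat16,  # assumed precision
    device_map="auto",           # shard across available GPUs
)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who are you?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```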
As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.