645.1K Downloads Updated 1 year ago
28e06621edf0 · 29GB
Orca Mini is a family of Llama and Llama 2 models trained on Orca-style datasets created using the approaches defined in the paper, Orca: Progressive Learning from Complex Explanation Traces of GPT-4. There are two variations available: the original Orca Mini, based on Llama, in 3, 7, and 13 billion parameter sizes, and v3, based on Llama 2, in 7, 13, and 70 billion parameter sizes.
Open the terminal and run:

```
ollama run orca-mini
```
Example:
```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "orca-mini",
  "prompt": "Why is the sky blue?"
}'
```
- 3b parameters original source: Pankaj Mathur
- 7b parameters original source: Pankaj Mathur
- 13b parameters original source: Pankaj Mathur

Orca Mini v3 source on Ollama

- 13b parameters original source: Pankaj Mathur
- 70b parameters source: Pankaj Mathur
Orca: Progressive Learning from Complex Explanation Traces of GPT-4