642.2K Downloads · Updated 1 year ago
119 models
orca-mini:latest
2.0GB · 2K context window · Text · 1 year ago
orca-mini:3b
latest · 2.0GB · 2K context window · Text · 1 year ago
orca-mini:7b
3.8GB · 4K context window · Text · 1 year ago
orca-mini:13b
7.4GB · 4K context window · Text · 1 year ago
orca-mini:70b
39GB · 4K context window · Text · 1 year ago
Orca Mini is a Llama and Llama 2 model trained on Orca-style datasets created using the approaches defined in the paper Orca: Progressive Learning from Complex Explanation Traces of GPT-4. Two variations are available: the original Orca Mini, based on Llama, in 3, 7, and 13 billion parameter sizes, and v3, based on Llama 2, in 7, 13, and 70 billion parameter sizes.
CLI: Open the terminal and run ollama run orca-mini
API example:
curl -X POST http://localhost:11434/api/generate -d '{
"model": "orca-mini",
"prompt":"Why is the sky blue?"
}'
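By default, the /api/generate endpoint streams its answer back as one JSON object per line, each carrying a fragment of the reply in a "response" field until "done" is true. Below is a minimal Python sketch of the same request as the curl call above, consuming that stream with the requests library; the model name, prompt, host, and port are taken from the example, and error handling is kept to a minimum.

import json
import requests

# Same request as the curl example above, sent to the local Ollama server.
payload = {
    "model": "orca-mini",
    "prompt": "Why is the sky blue?",
}

with requests.post("http://localhost:11434/api/generate",
                   json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        # Each streamed chunk holds a piece of the generated text.
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            print()
            break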
Orca Mini (original) source on Ollama
3b parameters original source: Pankaj Mathur
7b parameters original source: Pankaj Mathur
13b parameters original source: Pankaj Mathur
Orca Mini v3 source on Ollama
13b parameters original source: Pankaj Mathur
70b parameters source: Pankaj Mathur
Reference: Orca: Progressive Learning from Complex Explanation Traces of GPT-4