240.1K Downloads · Updated 1 year ago
49 models
- wizard-vicuna-uncensored:latest — 3.8GB · 2K context window · Text · 1 year ago
- wizard-vicuna-uncensored:7b (latest) — 3.8GB · 2K context window · Text · 1 year ago
- wizard-vicuna-uncensored:13b — 7.4GB · 2K context window · Text · 1 year ago
- wizard-vicuna-uncensored:30b — 18GB · 2K context window · Text · 1 year ago
Wizard Vicuna Uncensored is a family of 7B, 13B, and 30B parameter models based on Llama 2, uncensored by Eric Hartford. The models were trained against LLaMA-7B with a subset of the dataset; responses that contained alignment / moralizing were removed.
The example below uses the 7B-parameter Wizard Vicuna Uncensored model, which is a general-use model.
ollama serve
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "wizard-vicuna-uncensored",
  "prompt": "Who made Rose promise that she would never let go?"
}'
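By default, the /api/generate endpoint streams its answer back as newline-delimited JSON objects, each carrying a "response" fragment and a "done" flag. A minimal sketch of collecting that stream into one string (the sample lines below are illustrative of the stream shape, not real model output):

```python
import json

def collect_response(ndjson_lines):
    """Concatenate the 'response' fragments from Ollama's streaming
    newline-delimited JSON output into a single string."""
    text = []
    for line in ndjson_lines:
        line = line.strip()
        if not line:
            continue
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(text)

# Illustrative sample of the stream shape (not actual model output):
sample = [
    '{"model":"wizard-vicuna-uncensored","response":"Jack","done":false}',
    '{"model":"wizard-vicuna-uncensored","response":" Dawson.","done":true}',
]
print(collect_response(sample))
```

In a real client, `ndjson_lines` would be the lines read from the HTTP response body of the curl request above.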
ollama run wizard-vicuna-uncensored
Note: The ollama run command performs an ollama pull if the model is not already downloaded. To download the model without running it, use ollama pull wizard-vicuna-uncensored.
If you run into issues with higher quantization levels, try using the q4 model or shut down any other programs that are using a lot of memory.
By default, Ollama uses 4-bit quantization. To try other quantization levels, please try the other tags. The number after the q represents the number of bits used for quantization (e.g. q4 means 4-bit quantization). The higher the number, the more accurate the model is, but the slower it runs, and the more memory it requires.
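Selecting a quantization level is done by appending the tag to the model name. For example, to run the 13B model at the default 4-bit quantization (the only level listed in the aliases below; higher-bit tags would follow the same q-suffix pattern if published):

```shell
# Run the 13B model at 4-bit quantization (equivalent to the plain 13b tag)
ollama run wizard-vicuna-uncensored:13b-q4_0

# A hypothetical 8-bit tag would look like this, if the repository published one:
# ollama run wizard-vicuna-uncensored:13b-q8_0
```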
| Aliases |
|---|
| latest, 7b, 7b-q4_0 |
| 13b, 13b-q4_0 |
| 30b, 30b-q4_0 |
Wizard Vicuna Uncensored source on Ollama
7b parameters original source: Eric Hartford
13b parameters original source: Eric Hartford
30b parameters original source: Eric Hartford