deepseek-r1 · Ollama
  • deepseek-r1

    DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models such as o3 and Gemini 2.5 Pro (see the chat sketch after this list).

    tools thinking 1.5b 7b 8b 14b 32b 70b 671b

    83.3M Pulls · 35 Tags · Updated 9 months ago

  • lsm03624/deepseek-r1

    DeepSeek-R1-0528 still uses the DeepSeek V3 Base model released in December 2024 as its foundation, but invests more compute in post-training, significantly improving the model's depth of thinking and reasoning ability. This 8B distilled version has off-the-charts coding ability!

    thinking

    1,011 Pulls · 1 Tag · Updated 10 months ago

  • alsdjfalsdjfs/DeepSeek-R1-0528-Q2_K_XL

    (251GB) unsloth/DeepSeek-R1-0528-GGUF:Q2_K_XL

    thinking 671b

    234 Pulls · 7 Tags · Updated 10 months ago

  • Huzderu/deepseek-r1-671b-2.51bit

    Merged GGUF of Unsloth's DeepSeek-R1 671B 2.51-bit dynamic quant.

    60.5K Pulls · 1 Tag · Updated 1 year ago

  • Huzderu/deepseek-r1-671b-2.22bit

    Merged GGUF of Unsloth's DeepSeek-R1 671B 2.22-bit dynamic quant.

    5,876 Pulls · 1 Tag · Updated 1 year ago

  • huihui_ai/deepseek-r1-pruned

    DeepSeek-R1-Pruned-Coder-411B is a pruned version of DeepSeek-R1, reduced from 256 experts to 160. The pruned model is mainly used for code generation.

    411b

    87 Pulls · 3 Tags · Updated 1 year ago

  • rockn/DeepSeek-R1-0528-Qwen3-8B-IQ4_NL

    DeepSeek-R1-0528-Qwen3-8B-IQ4_NL

    3,635 Pulls · 1 Tag · Updated 10 months ago

  • okamototk/deepseek-r1

    DeepSeek R1 0528 Qwen3 8B with tool calling / MCP support (see the tool-calling sketch after this list).

    tools thinking 8b

    2,781 Pulls · 1 Tag · Updated 9 months ago

  • lucasmg/deepseek-r1-8b-0528-qwen3-q4_K_M-tool-true

    DeepSeek R1 0528 Qwen3 8B Q4 with tool calling

    tools thinking

    1,860 Pulls · 1 Tag · Updated 10 months ago

  • mychen76/deepseek_r1_cline_roocode

    Quantized version of DeepSeek-R1-32B optimized for tool usage with Cline / Roo Code and complex problem solving.

    tools 32b

    1,715 Pulls · 1 Tag · Updated 11 months ago

  • dengcao/DeepSeek-R1-0528-Qwen3-8B

    DeepSeek-R1-0528-Qwen3-8B, including two quantized models: Q5_K_M and Q8_0.

    1,427 Pulls · 2 Tags · Updated 10 months ago

  • alsdjfalsdjfs/DeepSeek-R1-0528-IQ1_S

    (168GB) unsloth/DeepSeek-R1-0528-GGUF:IQ1_S

    thinking 671b

    246 Pulls · 3 Tags · Updated 10 months ago

  • wao/DeepSeek-R1-Distill-Qwen-32B-Japanese

    Japanese instruction-tuned LLM by CyberAgent, fine-tuned from DeepSeek-R1-Distill-Qwen-32B.

    250 Pulls · 1 Tag · Updated 11 months ago

  • hengwen/DeepSeek-R1-Distill-Qwen-32B

    DeepSeek-R1-Distill models are fine-tuned from open-source base models using samples generated by DeepSeek-R1. Their configs and tokenizers have been slightly changed; please use the provided settings to run these models.

    147.5K Pulls · 2 Tags · Updated 1 year ago

  • nvjob/DeepSeek-R1-32B-Cline

    Adapted to work with Cline (Claude Dev) in Visual Studio Code.

    tools

    3,126 Pulls · 1 Tag · Updated 1 year ago

  • aia/DeepSeek-R1-Distill-Qwen-32B-Uncensored-i1

    2,816 Pulls · 1 Tag · Updated 1 year ago

  • huihui_ai/deepseek-r1-Fusion

    DeepSeek-R1-Distill-Qwen-Coder-32B-Fusion-9010 is a merged model that combines the strengths of two powerful 32B models: huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated and huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated.

    32b

    2,367 Pulls · 6 Tags · Updated 1 year ago

  • secfa/DeepSeek-R1-UD-Q2_K_XL

    Unsloth's DeepSeek-R1, merged and uploaded here; this is the full 671B model. MoE bits: 2.51 · Type: UD-Q2_K_XL · Disk size: 212GB · Accuracy: best · Details: MoE layers all 2.5-bit, with down_proj in MoE a 3.5/2.5-bit mixture.

    2,062 Pulls · 2 Tags · Updated 1 year ago

  • huihui_ai/deepseek-v3-pruned

    DeepSeek-V3-Pruned-Coder-411B is a pruned version of DeepSeek-V3, reduced from 256 experts to 160. The pruned model is mainly used for code generation.

    411b

    1,358 Pulls · 5 Tags · Updated 1 year ago

  • secfa/DeepSeek-R1-UD-IQ2_XXS

    Unsloth's DeepSeek-R1, merged and uploaded here; this is the full 671B model. MoE bits: 2.22 · Type: UD-IQ2_XXS · Disk size: 183GB · Accuracy: better · Details: MoE layers all 2.06-bit, with down_proj in MoE a 2.5/2.06-bit mixture.

    1,401 Pulls · 2 Tags · Updated 1 year ago
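
The flagship deepseek-r1 entry above is pulled by size tag (1.5b through 671b) and supports thinking output. Below is a minimal chat sketch, assuming the official ollama Python client (`pip install ollama`), a running local Ollama server, and a tag that has already been pulled, e.g. `ollama pull deepseek-r1:8b`; the 8b tag and the prompt are illustrative only, and the `think` flag needs a recent client and server.

```python
# Minimal chat sketch for deepseek-r1 via the ollama Python client.
# Assumptions: a local Ollama server and `ollama pull deepseek-r1:8b`
# already done; any size tag from the entry above works the same way.
from ollama import chat

response = chat(
    model="deepseek-r1:8b",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    think=True,  # expose the reasoning trace; requires a recent client/server
)

print(response.message.thinking)  # chain of thought, when the model returns one
print(response.message.content)   # final answer
```

The smaller tags correspond to the distilled variants, while 671b is the full MoE model; the quantized 671b uploads listed above (Q2_K_XL, IQ2_XXS, IQ1_S) exist to trade accuracy for disk size (251GB down to 168GB).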
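
okamototk/deepseek-r1 advertises tool calling / MCP support. A sketch of one tool-call round trip with the same ollama Python client follows; the add_numbers function is hypothetical, the 8b tag is assumed from the entry's tags, and the exact fields of the tool-result message can vary across client and server versions.

```python
# Sketch of a tool-calling round trip against okamototk/deepseek-r1.
# The add_numbers tool is hypothetical; the 8b tag is an assumption.
from ollama import chat

def add_numbers(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

messages = [{"role": "user", "content": "What is 17 + 25? Use the tool."}]

# Recent ollama clients can derive the tool schema from a plain function.
response = chat(model="okamototk/deepseek-r1:8b", messages=messages, tools=[add_numbers])

for call in response.message.tool_calls or []:
    if call.function.name == "add_numbers":
        result = add_numbers(**call.function.arguments)
        # Return the tool output so the model can compose its final answer;
        # the tool-message shape ("tool_name" vs "name") varies by version.
        messages.append(response.message)
        messages.append({"role": "tool", "content": str(result), "tool_name": "add_numbers"})

final = chat(model="okamototk/deepseek-r1:8b", messages=messages)
print(final.message.content)
```

The same pattern should apply to the other tools-tagged entries above (lucasmg's Q4 build and mychen76's Cline/Roo Code quant), since the tools tag indicates the model's template supports tool calls.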
