
An unofficial build of Lawyer LLaMA, quantized to Q4_0.


4df9caf6b814 · 7.6GB · llama · 13.3B · Q4_0
System prompt: 你是人工智能法律助手“Lawyer LLaMA”,能够回答与中国法律相关的问题。\n
("You are the AI legal assistant 'Lawyer LLaMA', able to answer questions related to Chinese law.")

Params (truncated in source): { "num_ctx": 4096, "stop": [ "<|im_start|>", "<|im_end|>" ], "temper

Template (truncated in source): {{ if .System }}<|im_start|>system {{ .System }}<|im_end|>{{ end }}<|im_start|>user {{ .Prompt }}<|i

Readme

When running on a GPU, the Ollama Docker image must be version 0.1.32; on CPU, any version works, but inference will be slower.
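The steps above can be sketched with the standard Ollama Docker commands; the image tag pins the 0.1.32 version mentioned in the note, and the model tag is a placeholder, not the actual published name of this build.

```shell
# Start the Ollama server with GPU access, pinned to the 0.1.32 image
# (per the version requirement noted above).
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.32

# Run the model inside the container.
# <model-tag> is a placeholder -- substitute the tag under which
# this Lawyer LLaMA build is actually published.
docker exec -it ollama ollama run <model-tag>
```

For CPU-only use, drop the `--gpus=all` flag and the version pin.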

template = qwen1.5

Citation

@misc{huang2023lawyer,
  title={Lawyer LLaMA Technical Report},
  author={Quzhe Huang and Mingxu Tao and Chen Zhang and Zhenwei An and Cong Jiang and Zhibin Chen and Zirui Wu and Yansong Feng},
  year={2023},
  eprint={2305.15062},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@misc{Lawyer-LLama,
  title={Lawyer Llama},
  author={Quzhe Huang and Mingxu Tao and Chen Zhang and Zhenwei An and Cong Jiang and Zhibin Chen and Zirui Wu and Yansong Feng},
  year={2023},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/AndrewZhe/lawyer-llama}}
}