121 Downloads · Updated 10 months ago

4 models:

| Tag | Size | Context window | Input |
|-----|------|----------------|-------|
| UIGEN-T1-Qwen-14:latest | 30GB | 32K | Text |
| UIGEN-T1-Qwen-14:q4_K_S | 8.6GB | 32K | Text |
| UIGEN-T1-Qwen-14:q6_K | 12GB | 32K | Text |
| UIGEN-T1-Qwen-14:q8_0 | 16GB | 32K | Text |
---
license: apache-2.0
datasets:
  - smirki/UI_Reasoning_Dataset
language:
  - en
pipeline_tag: text-generation
tags:
  - code
base_model:
  - Qwen/Qwen2.5-Coder-14B-Instruct
new_version: smirki/UIGEN-T1.1-Qwen-14B
---

New and improved reasoning traces, better UI generation, smarter design decisions, and better code generation. Trained on a dataset of 700+ examples.

Use budget forcing: append the word `think` at the end of the assistant turn to keep the model generating more reasoning, and append `answer` to make it write the final code.

SFT was run on a single RTX 4090 for 4 hours; a minimal sketch of a comparable setup follows.
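The exact training recipe is not published, so the sketch below is only an illustration of how a comparable SFT run could look, using TRL with QLoRA (an assumption, since fully fine-tuning a 14B model does not fit in a 4090's 24GB of VRAM). The `text` field name and all hyperparameters are illustrative, not the author's settings.

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

base = "Qwen/Qwen2.5-Coder-14B-Instruct"
dataset = load_dataset("smirki/UI_Reasoning_Dataset", split="train")

# Load the base model in 4-bit so it fits on a single 24GB card (QLoRA assumption).
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
    ),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,  # assumes each row has a "text" field with the full trace
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    args=SFTConfig(
        output_dir="uigen-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,  # illustrative; tuned for 24GB of VRAM
        num_train_epochs=3,
    ),
)
trainer.train()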
UIGEN-T1.1 is a 14-billion-parameter transformer model fine-tuned from Qwen2.5-Coder-14B-Instruct. It is designed for reasoning-based UI generation, using an explicit chain-of-thought process to produce robust HTML- and CSS-based UI components.

The model generates HTML/CSS layouts by reasoning through design principles before writing code. While its chain-of-thought is strong, it is currently limited to text-based UI elements and simpler frontend applications: it excels at dashboards, landing pages, and sign-up forms, but lacks advanced interactivity (e.g., JavaScript-heavy functionality).

Prompt template:

<|im_start|>user
{question}<|im_end|>
<|im_start|>assistant
<|im_start|>think
{reasoning}<|im_end|>
<|im_start|>answer
Example usage:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "smirki/UIGEN-T1.1-Qwen-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")

# End the prompt at <|im_start|>think so the model begins with its reasoning trace.
prompt = """<|im_start|>user
Make a dark-themed dashboard for an oil rig.<|im_end|>
<|im_start|>assistant
<|im_start|>think
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# The reasoning trace plus the final code can be very long,
# so the generation budget must be greater than 12k tokens.
outputs = model.generate(**inputs, max_new_tokens=12012, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
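Budget forcing can also be applied explicitly in two stages: first let the model think, then append `<|im_start|>answer` and continue generating so it writes the code. The sketch below continues from the snippet above; the token budgets and the re-tokenization approach are illustrative assumptions, not a documented API.

# Stage 1: generate the reasoning trace (continues from `model`, `tokenizer`,
# and `inputs` defined above; budgets are illustrative).
think_ids = model.generate(**inputs, max_new_tokens=8192, do_sample=True, temperature=0.7)
think_text = tokenizer.decode(think_ids[0], skip_special_tokens=False)

# Stage 2: force the answer phase by appending the answer tag, then continue.
forced = think_text + "<|im_start|>answer\n"
answer_inputs = tokenizer(forced, return_tensors="pt").to("cuda")
answer_ids = model.generate(**answer_inputs, max_new_tokens=4096, do_sample=True, temperature=0.7)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))

The same trick works in the other direction: appending `<|im_start|>think` again instead of the answer tag keeps the model reasoning for longer before it commits to code.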
If using this model, please cite:

BibTeX:

@misc{smirki_UIGEN-T1.1,
  title={UIGEN-T1.1: Chain-of-Thought UI Generation Model},
  author={smirki},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/smirki/UIGEN-T1.1-Qwen-14B}
}