Model documentation: MedGemma
Author: Google
This section describes the MedGemma model and how to use it.
MedGemma is a collection of Gemma 3 variants that are trained for performance on medical text and image comprehension. Developers can use MedGemma to accelerate building healthcare-based AI applications. MedGemma currently comes in two variants: a 4B multimodal version and a 27B text-only version.
MedGemma 4B utilizes a SigLIP image encoder that has been specifically pre-trained on a variety of de-identified medical data, including chest X-rays, dermatology images, ophthalmology images, and histopathology slides. Its LLM component is trained on a diverse set of medical data, including radiology images, histopathology patches, ophthalmology images, and dermatology images.
MedGemma 4B is available in both pre-trained (suffix: -pt) and instruction-tuned (suffix: -it) versions. The instruction-tuned version is a better starting point for most applications. The pre-trained version is available for those who want to experiment more deeply with the models.
MedGemma 27B has been trained exclusively on medical text and optimized for inference-time computation. MedGemma 27B is only available as an instruction-tuned model.
MedGemma variants have been evaluated on a range of clinically relevant benchmarks to illustrate their baseline performance. These include both open benchmark datasets and curated datasets. Developers can fine-tune MedGemma variants for improved performance. Consult the Intended Use section below for more details.
A full technical report will be available soon.
Below are some example code snippets to help you quickly get started running the model locally on GPU. If you want to use the model at scale, we recommend that you create a production version using Model Garden.
First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.
$ pip install -U transformers
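Optionally, you can confirm that the installed version meets this minimum before running the snippets below. This is a minimal sanity-check sketch; it assumes the packaging helper, which ships as a transformers dependency, is available:

# Optional: confirm transformers >= 4.50.0 (minimal sketch, not required to run the model)
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.50.0"), (
    f"transformers {transformers.__version__} is too old; run `pip install -U transformers`"
)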
Run the model with the pipeline API
from transformers import pipeline
from PIL import Image
import requests
import torch

pipe = pipeline(
    "image-text-to-text",
    model="google/medgemma-4b-it",
    torch_dtype=torch.bfloat16,
    device="cuda",
)

# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are an expert radiologist."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this X-ray"},
            {"type": "image", "image": image},
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
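The text-only 27B variant can be used with the same workflow via the standard text-generation pipeline. The sketch below is an illustrative adaptation; it assumes the instruction-tuned 27B checkpoint is published as google/medgemma-27b-text-it, so adjust the model ID to the checkpoint you actually have access to:

from transformers import pipeline
import torch

# Text-only sketch; the 27B repository name below is an assumption
pipe = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "How do you differentiate bacterial from viral pneumonia?"},
]

output = pipe(messages, max_new_tokens=300)
print(output[0]["generated_text"][-1]["content"])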
Run the model directly
# pip install accelerate
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
import requests
import torch

model_id = "google/medgemma-4b-it"

model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are an expert radiologist."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this X-ray"},
            {"type": "image", "image": image}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
See the following Colab notebooks for examples of how to use MedGemma:
To give the model a quick try, running it locally with weights from Hugging Face, see the Quick start notebook in Colab. Note that you will need to use Colab Enterprise to run the 27B model without quantization (a quantization sketch follows below).
For an example of fine-tuning the model, see the Fine-tuning notebook in Colab.
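If you want to experiment with the 27B model on more limited hardware, one common option is to load it with 4-bit quantization through bitsandbytes. The following is a minimal sketch using the standard transformers quantization API; the 27B repository name is an assumption, and quantization may affect output quality:

# pip install bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "google/medgemma-27b-text-it"  # assumed repository name for the 27B variant

# Standard 4-bit NF4 configuration from the transformers quantization API
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)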
MedGemma is built on Gemma 3 and uses the same decoder-only transformer architecture. To read more about the architecture, consult the Gemma 3 model card.
A technical report is coming soon. In the meantime, if you publish using this model, please cite the Hugging Face model page:
@misc{medgemma-hf,
    author = {Google},
    title = {MedGemma Hugging Face},
    howpublished = {\url{https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4}},
    year = {2025},
    note = {Accessed: [Insert Date Accessed, e.g., 2025-05-20]}
}
Input:
Output:
MedGemma was evaluated across a range of different multimodal classification, report generation, visual question answering, and text-based tasks.
The multimodal performance of MedGemma 4B was evaluated across a range of benchmarks, focusing on radiology, dermatology, histopathology, ophthalmology, and multimodal clinical reasoning.
MedGemma 4B outperforms the base Gemma 3 4B model across all tested multimodal health benchmarks.
Task and metric | MedGemma 4B | Gemma 3 4B |
---|---|---|
Medical image classification | | |
MIMIC CXR - Average F1 for top 5 conditions | 88.9 | 81.1 |
CheXpert CXR - Average F1 for top 5 conditions | 48.1 | 31.2 |
DermMCQA* - Accuracy | 71.8 | 42.6 |
Visual question answering | | |
SlakeVQA (radiology) - Tokenized F1 | 62.3 | 38.6 |
VQA-Rad** (radiology) - Tokenized F1 | 49.9 | 38.6 |
PathMCQA (histopathology, internal***) - Accuracy | 69.8 | 37.1 |
Knowledge and reasoning | | |
MedXpertQA (text + multimodal questions) - Accuracy | 18.8 | 16.4 |
*Described in Liu (2020, Nature Medicine), presented as a 4-way MCQ per example for skin condition classification.
**Based on “balanced split,” described in Yang (2024, arXiv).
***Based on multiple datasets, presented as 3-9 way MCQ per example for identification, grading, and subtype for breast, cervical, and prostate cancer.
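The "Tokenized F1" metric reported for the visual question answering rows above measures token overlap between the model's answer and the reference answer. The benchmark's exact normalization and tokenizer are not specified here; the sketch below shows the common SQuAD-style formulation with simple whitespace tokenization:

from collections import Counter

def tokenized_f1(prediction: str, reference: str) -> float:
    # Whitespace-tokenized overlap F1; the benchmark's exact tokenizer may differ
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(tokenized_f1("left lower lobe pneumonia", "pneumonia in the left lower lobe"))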
MedGemma chest X-ray (CXR) report generation performance was evaluated on MIMIC-CXR using the RadGraph F1 metric. We compare the MedGemma pre-trained checkpoint with our previous best model for CXR report generation, PaliGemma 2.
Metric | MedGemma 4B (pre-trained) | PaliGemma 2 3B (tuned for CXR) | PaliGemma 2 10B (tuned for CXR) |
---|---|---|---|
Chest X-ray report generation | | | |
MIMIC CXR - RadGraph F1 | 29.5 | 28.8 | 29.5 |
The instruction-tuned versions of MedGemma 4B and Gemma 3 4B achieve lower scores (0.22 and 0.12, respectively) due to the differences in reporting style compared to the MIMIC ground truth reports. Further fine-tuning on MIMIC reports will enable users to achieve improved performance.
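The fine-tuning notebook linked above is the reference workflow for this kind of adaptation. For orientation only, the sketch below shows the general shape of a parameter-efficient (LoRA) setup with the peft library; the rank, alpha, and target modules are illustrative assumptions rather than recommended values, and data loading and the training loop are omitted:

# pip install peft
from transformers import AutoModelForImageTextToText
from peft import LoraConfig, get_peft_model
import torch

model = AutoModelForImageTextToText.from_pretrained(
    "google/medgemma-4b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative LoRA configuration; values are assumptions, not tuned recommendations
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()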
MedGemma 4B and text-only MedGemma 27B were evaluated across a range of text-only benchmarks for medical knowledge and reasoning.
The MedGemma models outperform their respective base Gemma models across all tested text-only health benchmarks.
Metric | MedGemma 27B | Gemma 3 27B | MedGemma 4B | Gemma 3 4B |
---|---|---|---|---|
MedQA (4-op) | 89.8 (best-of-5), 87.7 (0-shot) | 74.9 | 64.4 | 50.7 |
MedMCQA | 74.2 | 62.6 | 55.7 | 45.4 |
PubMedQA | 76.8 | 73.4 | 73.4 | 68.4 |
MMLU Med (text only) | 87.0 | 83.3 | 70.0 | 67.2 |
MedXpertQA (text only) | 26.7 | 15.7 | 14.2 | 11.6 |
AfriMed-QA | 84.0 | 72.0 | 52.0 | 48.0 |
For all MedGemma 27B results, test-time scaling is used to improve performance.
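The exact test-time scaling recipe is not described here, but the MedQA row above reports a best-of-5 score. A generic sketch of that idea, sampling several candidates and majority-voting the extracted answer, is shown below, assuming a text-generation pipeline like the one sketched earlier; the extract_option heuristic and sampling settings are illustrative assumptions:

from collections import Counter
import re

def extract_option(text):
    # Illustrative heuristic: take the first standalone A-D option letter in the answer
    match = re.search(r"\b([A-D])\b", text)
    return match.group(1) if match else None

def best_of_n_answer(pipe, messages, n=5):
    # Sample n completions and majority-vote the extracted option letter
    votes = []
    for _ in range(n):
        output = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
        answer = extract_option(output[0]["generated_text"][-1]["content"])
        if answer is not None:
            votes.append(answer)
    return Counter(votes).most_common(1)[0][0] if votes else None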
Our evaluation methods include structured evaluations and internal red-teaming of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including child safety, content safety, and representational harms.
In addition to development level evaluations, we conduct “assurance evaluations” which are our “arms-length” internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High-level findings are fed back to the model team, but prompt sets are held out to prevent overfitting and preserve the results’ ability to inform decision making. Notable assurance evaluation results are reported to our Responsibility & Safety Council as part of release review.
For all areas of safety testing, we saw safe levels of performance across the categories of child safety, content safety, and representational harms. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For text-to-text, image-to-text, and audio-to-text use cases, and across both MedGemma model sizes, the models produced minimal policy violations. A limitation of our evaluations was that they included primarily English-language prompts.
The base Gemma models are pre-trained on a large corpus of text and code data. MedGemma 4B utilizes a SigLIP image encoder that has been specifically pre-trained on a variety of de-identified medical data, including radiology images, histopathology images, ophthalmology images, and dermatology images. Its LLM component is trained on a diverse set of medical data, including medical text relevant to radiology images, chest X-rays, histopathology patches, ophthalmology images, and dermatology images.
MedGemma models have been evaluated on a comprehensive set of clinically relevant benchmarks, including over 22 datasets across 5 different tasks and 6 medical image modalities. These include both open benchmark datasets and curated datasets, with a focus on expert human evaluations for tasks like CXR report generation and radiology VQA.
MedGemma utilizes a combination of public and private datasets.
This model was trained on diverse public datasets including MIMIC-CXR (chest X-rays and reports), Slake-VQA (multimodal medical images and questions), PAD-UFES-20 (skin lesion images and data), SCIN (dermatology images), TCGA (cancer genomics data), CAMELYON (lymph node histopathology images), PMC-OA (biomedical literature with images), and Mendeley Digital Knee X-Ray (knee X-rays).
Additionally, multiple diverse proprietary datasets were licensed and incorporated (described next).
In addition to the public datasets listed above, MedGemma was also trained on de-identified datasets licensed for research or collected internally at Google from consented participants.
MIMIC-CXR Johnson, A., Pollard, T., Mark, R., Berkowitz, S., & Horng, S. (2024). MIMIC-CXR Database (version 2.1.0). PhysioNet. https://physionet.org/content/mimic-cxr/2.1.0/ and Johnson, Alistair E. W., Tom J. Pollard, Seth J. Berkowitz, Nathaniel R. Greenbaum, Matthew P. Lungren, Chih-Ying Deng, Roger G. Mark, and Steven Horng. 2019. “MIMIC-CXR, a de-Identified Publicly Available Database of Chest Radiographs with Free-Text Reports.” Scientific Data 6 (1): 1–8.
SLAKE Liu, Bo, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, and Xiao-Ming Wu. 2021. “SLAKE: A Semantically-Labeled Knowledge-Enhanced Dataset for Medical Visual Question Answering.” http://arxiv.org/abs/2102.09542.
PAD-UFES-20 Pacheco, A. G. C., Lima, G. R., Salomao, A., Krohling, B., Biral, I. P., de Angelo, G. G., et al. (2020). PAD-UFES-20: A skin lesion dataset composed of patient data and clinical images collected from smartphones. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (pp. 1551-1558). IEEE. https://doi.org/10.1109/BIBM49941.2020.9313241
SCIN Ward, Abbi, Jimmy Li, Julie Wang, Sriram Lakshminarasimhan, Ashley Carrick, Bilson Campana, Jay Hartford, et al. 2024. “Creating an Empirical Dermatology Dataset Through Crowdsourcing With Web Search Advertisements.” JAMA Network Open 7 (11): e2446615–e2446615.
TCGA The results shown here are in whole or part based upon data generated by the TCGA Research Network: https://www.cancer.gov/tcga.
CAMELYON16 Ehteshami Bejnordi, Babak, Mitko Veta, Paul Johannes van Diest, Bram van Ginneken, Nico Karssemeijer, Geert Litjens, Jeroen A. W. M. van der Laak, et al. 2017. “Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer.” JAMA 318 (22): 2199–2210.
MedQA Jin, Di, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2020. “What Disease Does This Patient Have? A Large-Scale Open Domain Question Answering Dataset from Medical Exams.” http://arxiv.org/abs/2009.13081.
Mendeley Digital Knee X-Ray Gornale, Shivanand; Patravali, Pooja (2020), “Digital Knee X-ray Images”, Mendeley Data, V1, doi: 10.17632/t9ndx37v5h.1
AfriMed-QA Olatunji, Tobi, Charles Nimo, Abraham Owodunni, Tassallah Abdullahi, Emmanuel Ayodele, Mardhiyah Sanni, Chinemelu Aka, et al. 2024. “AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering Benchmark Dataset.” http://arxiv.org/abs/2411.15640.
VQA-RAD Lau, Jason J., Soumya Gayen, Asma Ben Abacha, and Dina Demner-Fushman. 2018. “A Dataset of Clinically Generated Visual Questions and Answers about Radiology Images.” Scientific Data 5 (1): 1–10.
MedExpQA Alonso, I., Oronoz, M., & Agerri, R. (2024). MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering. arXiv preprint arXiv:2404.05590. Retrieved from https://arxiv.org/abs/2404.05590
MedXpertQA Zuo, Yuxin, Shang Qu, Yifei Li, Zhangren Chen, Xuekai Zhu, Ermo Hua, Kaiyan Zhang, Ning Ding, and Bowen Zhou. 2025. “MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding.” http://arxiv.org/abs/2501.18362.
Google and its partners use datasets that have been rigorously anonymized or de-identified to protect individual research participants and patient privacy.
Details about the model internals.
Training was done using JAX.
JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
MedGemma is an open multimodal generative AI model intended to be used as a starting point that enables more efficient development of downstream healthcare applications involving medical text and images. MedGemma is intended for developers in the life sciences and healthcare space. Developers are responsible for training, adapting and making meaningful changes to MedGemma to accomplish their specific intended use. MedGemma models can be fine-tuned by developers using their own proprietary data for their specific tasks or solutions.
MedGemma is based on Gemma 3 and has been further trained on medical images and text. MedGemma enables further development in any medical context (image and textual); however, the model was pre-trained using chest X-ray, pathology, dermatology, and fundus images. Examples of tasks within MedGemma’s training include visual question answering pertaining to medical images, such as radiographs, or providing answers to textual medical questions. Full details of all the tasks MedGemma has been evaluated on can be found in an upcoming technical report.
MedGemma is not intended to be used without appropriate validation, adaptation and/or making meaningful modification by developers for their specific use case. The outputs generated by MedGemma are not intended to directly inform clinical diagnosis, patient management decisions, treatment recommendations, or any other direct clinical practice applications. Performance benchmarks highlight baseline capabilities on relevant benchmarks, but even for image and text domains that constitute a substantial portion of training data, inaccurate model output is possible. All outputs from MedGemma should be considered preliminary and require independent verification, clinical correlation, and further investigation through established research and development methodologies.
MedGemma’s multimodal capabilities have been primarily evaluated on single-image tasks. MedGemma has not been evaluated in use cases that involve comprehension of multiple images.
MedGemma has not been evaluated or optimized for multi-turn applications.
MedGemma’s training may make it more sensitive to the specific prompt used than Gemma 3.
When adapting MedGemma, developers should consider the following: