Llama 3 × Gemma 2 Multilingual — Deep SLERP
A cross-family SLERP merge between Llama-3-8B-Instruct and Gemma-2-9B-IT, specifically tuned for multilingual output quality. The Gemma lineage brings Google's multilingual training signal; Llama brings instruction robustness.
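SLERP interpolates weights along the arc between the two parent models rather than along the straight chord, which preserves parameter norms better than plain linear averaging. A toy per-tensor sketch with NumPy (illustrative only — mergekit's real implementation handles per-layer granularity and degenerate cases differently):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the two flattened weight vectors.
    cos_theta = np.dot(a_flat, b_flat) / (
        np.linalg.norm(a_flat) * np.linalg.norm(b_flat)
    )
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        # Nearly colinear tensors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    # Interpolate along the great-circle arc between the tensors.
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

# t = 0.45 keeps the result slightly closer to the Llama base weights.
merged = slerp(np.random.randn(4, 4), np.random.randn(4, 4), t=0.45)
```

At `t = 0` the result is exactly the first tensor, at `t = 1` exactly the second; the recipe's `t: 0.45` sits just short of the midpoint, on the Llama side of the arc.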
- **Author:** mergekit-community
- **Published:** March 10, 2026
- **Last updated:** March 10, 2026
- **Versions:** 1
- **Best score:** 69.5
- **Stars:** 119
Merge Lineage

Source models (2):
- meta-llama/Meta-Llama-3-8B-Instruct
- google/gemma-2-9b-it

Output: llama3-gemma2-multilingual (9B, SLERP)
Config YAML

```yaml
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
  - model: google/gemma-2-9b-it
parameters:
  t: 0.45
dtype: bfloat16
```

Benchmark Scores
| Benchmark | Merged | Gemma-2-9B | Llama-3-8B | Δ Best |
|---|---|---|---|---|
| MMLU | 69.5 | 68.8 | 66.9 | +0.7 |
| MT-Bench | 7.6 | 7.5 | 7.4 | +0.1 |
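The Δ Best column is simply the merged score minus the stronger parent's score on the same benchmark. A one-line helper (hypothetical, for illustration):

```python
def delta_best(merged: float, parent_scores: list[float]) -> float:
    """Improvement of the merge over its strongest parent, rounded to 2 dp."""
    return round(merged - max(parent_scores), 2)

# MMLU row from the table above: merged 69.5 vs parents 68.8 and 66.9.
mmlu_delta = delta_best(69.5, [68.8, 66.9])
```

A positive Δ Best on both rows is the interesting property here: the merge edges out whichever parent was better, rather than landing between them.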
Blend Ratio (t = 0.45)
How I Built This
Embed Badge

Add this to your Hugging Face model card to link back to this recipe:

[](https://www.mergekit.com/recipes/llama3-gemma-slerp-multilingual)

Use this Model
Run, deploy, or interact with Llama 3 × Gemma 2 Multilingual — Deep SLERP directly.
MergeKit Cloud
Run this model on serverless GPU infrastructure — zero setup, pay-per-second.
Serverless GPU · Powered by RunPod
Reproduce Locally

Run this exact merge on your own machine in three steps:

1. Install mergekit:

```shell
pip install mergekit
```

2. Save the recipe as `llama3-gemma-slerp-multilingual.yaml`:

```yaml
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
  - model: google/gemma-2-9b-it
parameters:
  t: 0.45
dtype: bfloat16
```

3. Run the merge:

```shell
mergekit-yaml llama3-gemma-slerp-multilingual.yaml ./output
```

Want to build your own merge?
Use the MergeKit config generator to build a YAML recipe visually — no code required.
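If you prefer scripting to the visual generator, a recipe file like the one above can also be emitted programmatically. A minimal stdlib-only sketch — the helper name and its defaults are illustrative, not part of mergekit's API:

```python
from textwrap import dedent

def slerp_recipe(base: str, other: str, t: float = 0.45) -> str:
    """Render a minimal mergekit SLERP recipe as YAML text."""
    return dedent(f"""\
        merge_method: slerp
        base_model: {base}
        models:
          - model: {base}
          - model: {other}
        parameters:
          t: {t}
        dtype: bfloat16
        """)

yaml_text = slerp_recipe(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    "google/gemma-2-9b-it",
)
# Save it, then run: mergekit-yaml recipe.yaml ./output
```

Sweeping `t` over a few values (e.g. 0.3, 0.45, 0.6) and re-running the benchmarks is a common way to find a blend ratio like the 0.45 used in this recipe.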