FrankenLlama 120B — Passthrough Layer Stack
A passthrough (frankenmerge) recipe that stacks layers from Llama-3-70B and Llama-3-8B into a 120B-class architecture, demonstrating how layer-level composition can produce a model larger than any single input. Layers 0-39 come from the 70B; a second slice of layers 0-39 is spliced in from the 8B.
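Assuming mergekit's half-open `layer_range` convention ([start, end)), the two slices yield a 78-layer stack. A minimal sketch of that arithmetic (the `stacked_depth` helper is hypothetical, not part of mergekit):

```python
# Sketch: count the transformer layers a passthrough stack produces.
# Slice ranges copied from the recipe; mergekit treats layer_range as
# half-open, i.e. [0, 39] selects layers 0..38 (39 layers).
slices = [
    {"model": "meta-llama/Meta-Llama-3-70B-Instruct", "layer_range": (0, 39)},
    {"model": "meta-llama/Meta-Llama-3-8B-Instruct", "layer_range": (0, 39)},
]

def stacked_depth(slices):
    """Total layer count of the merged model: sum of slice widths."""
    return sum(end - start for s in slices for start, end in [s["layer_range"]])

print(stacked_depth(slices))  # → 78
```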
Author: chargoddard
Published: January 5, 2026
Last updated: January 5, 2026
Versions: 1
Best Score: ↑ 79.4
Stars: 244
Merge Lineage
2 source models · Layer composition
Config YAML

```yaml
merge_method: passthrough
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-70B-Instruct
        layer_range: [0, 39]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 39]
dtype: bfloat16
```

Benchmark Scores
| Benchmark | Merged | Llama-3-70B | Llama-3-8B | Δ Best |
|---|---|---|---|---|
| MMLU | 79.4 | 78.9 | 66.9 | +0.5 |
| MT-Bench | 8.9 | 8.7 | 8.0 | +0.2 |
| HumanEval | 72.0 | 71.5 | 61.0 | +0.5 |
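The Δ Best column is the merged score minus the stronger parent's score on each benchmark. A quick check (hypothetical `delta_best` helper; scores copied from the table above):

```python
def delta_best(merged, parents):
    # Δ Best = merged score minus the best single parent score.
    return round(merged - max(parents), 2)

# Scores from the benchmark table.
print(delta_best(79.4, [78.9, 66.9]))  # MMLU: 0.5
print(delta_best(8.9, [8.7, 8.0]))     # MT-Bench: 0.2
print(delta_best(72.0, [71.5, 61.0]))  # HumanEval: 0.5
```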
How I Built This
Embed Badge
Add this to your Hugging Face model card to link back to this recipe.
[](https://www.mergekit.com/recipes/frankenllama-120b-passthrough)

Use this Model
Run, deploy, or interact with FrankenLlama 120B — Passthrough Layer Stack directly.
MergeKit Cloud
Run this model on serverless GPU infrastructure — zero setup, pay-per-second.
Serverless GPU · Powered by RunPod
Reproduce Locally
Run this exact merge on your own machine in three steps:

1. Install mergekit:

```shell
pip install mergekit
```

2. Save the recipe as `frankenllama-120b-passthrough.yaml`:

```yaml
merge_method: passthrough
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-70B-Instruct
        layer_range: [0, 39]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 39]
dtype: bfloat16
```

3. Run the merge:

```shell
mergekit-yaml frankenllama-120b-passthrough.yaml ./output
```

Want to build your own merge?
Use the MergeKit config generator to build a YAML recipe visually — no code required.