SLERP · Featured

Tags: slerp, general, instruct, llama3, mistral, popular

Dolphin 2.9 — Llama 3 × Mistral SLERP Blend

A balanced SLERP merge of Dolphin Llama-3-8B-Instruct and Mistral-7B-Instruct-v0.3, producing a strong general-purpose assistant with excellent instruction following and broad world knowledge. Popular community recipe from r/LocalLLaMA.

Author: cognitivecomputations
Published: December 1, 2025
Last updated: February 14, 2026
Versions: 2
Best Score: 8.2
Stars: 312

Meta-Llama-3-8B-Instruct · 8B
Mistral-7B-Instruct-v0.3 · 7B

Merge Lineage

2 source models

Config YAML

dolphin-llama3-mistral-slerp-v2.0.yaml
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
  - model: mistralai/Mistral-7B-Instruct-v0.3
parameters:
  t: 0.6
dtype: bfloat16

Benchmark Scores

| Benchmark | Merged | Llama-3-8B | Mistral-7B | Δ Best |
|-----------|--------|------------|------------|--------|
| MT-Bench  | 8.2    | 8.0        | 7.8        | +0.2   |
| MMLU      | 68.4   | 66.9       | 64.2       | +1.5   |
| HumanEval | 62.1   | 61.0       | 58.3       | +1.1   |
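The Δ Best column is consistent with "merged score minus the better of the two source-model scores" (e.g. MMLU: 68.4 − max(66.9, 64.2) = +1.5). A quick sketch recomputing the column from the table values:

```python
# Recompute the table's Δ Best column: merged score minus the
# better of the two source-model scores, rounded to one decimal.
scores = {
    "MT-Bench":  (8.2, 8.0, 7.8),    # merged, Llama-3-8B, Mistral-7B
    "MMLU":      (68.4, 66.9, 64.2),
    "HumanEval": (62.1, 61.0, 58.3),
}
deltas = {name: round(merged - max(llama, mistral), 1)
          for name, (merged, llama, mistral) in scores.items()}
```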

Blend Ratio (t = 0.6)

Meta-Llama-3-8B-Instruct: 60%
Mistral-7B-Instruct-v0.3: 40%
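The t = 0.6 setting places the merged weights 60/40 between the two parents along a spherical path rather than a straight line: SLERP interpolates each pair of weight tensors along the great-circle arc between their directions, which preserves magnitude structure that plain linear averaging can wash out. A minimal pure-Python sketch of the per-tensor operation (mergekit's actual implementation works on torch tensors and differs in details):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    t = 0 returns v0, t = 1 returns v1; intermediate t values follow
    the great-circle arc between the directions of v0 and v1.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    # Cosine of the angle between the two tensors, clamped for safety
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly colinear tensors: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```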

Embed Badge

Add this to your Hugging Face model card to link back to this recipe.

markdown
[![MergeKit Recipe](https://img.shields.io/badge/MergeKit-Recipe-10b981?style=flat-square)](https://www.mergekit.com/recipes/dolphin-llama3-mistral-slerp)

Version History

  1. v2.0 (latest) · 68.4

     February 14, 2026

     Rebalanced the blend ratio to t = 0.6 after MT-Bench evaluation; improved reasoning scores.

  2. v1.0 · 67.1

     December 1, 2025

     Initial release with a t = 0.5 blend.

Use this Model

Run, deploy, or interact with Dolphin 2.9 — Llama 3 × Mistral SLERP Blend directly.

MergeKit Cloud

Run this model on serverless GPU infrastructure — zero setup, pay-per-second.

RunPod Serverless

Available GPUs:

- A10G · 24 GB · ~45s cold start
- A100 · 80 GB · ~30s cold start
- H100 · 80 GB · ~20s cold start

Est. ~$0.001 / inference · billed per second
Coming Soon

Serverless GPU · Powered by RunPod

Reproduce Locally

Run this exact merge on your own machine in three steps:

1. Install mergekit:

bash
pip install mergekit

2. Save the config as dolphin-llama3-mistral-slerp.yaml:

yaml
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
  - model: mistralai/Mistral-7B-Instruct-v0.3
parameters:
  t: 0.6
dtype: bfloat16

3. Run the merge:

bash
mergekit-yaml dolphin-llama3-mistral-slerp.yaml ./output

Want to build your own merge?

Use the MergeKit config generator to build a YAML recipe visually — no code required.