Solar DARE — Llama 3 × Mistral DARE-TIES
A DARE-TIES merge that uses random delta dropping to regularize a three-model combination of a Llama-3 instruct model and two Mistral fine-tunes on a Mistral-7B base. Dropping and rescaling deltas reduces interference between the specialist models, producing one of the cleanest multi-model blends for general instruction following.
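To make the mechanism concrete, here is a toy NumPy sketch of the DARE drop-and-rescale step: each fine-tune's delta from the base is randomly sparsified (each parameter survives with probability equal to its density, survivors rescaled by 1/density so the expected delta is unchanged), then the sparsified deltas are combined as a weighted sum. This is an illustration only, not mergekit's implementation, and it omits the TIES sign-election step for brevity.

```python
import numpy as np

def dare_merge(base, tuned, weights, densities, seed=0):
    """Toy sketch of DARE delta dropping (TIES sign election omitted).

    base      : base model weights as an ndarray
    tuned     : list of fine-tuned weight ndarrays (same shape as base)
    weights   : per-model mixing weights
    densities : per-model keep probabilities for delta parameters
    """
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base)
    for t, w, d in zip(tuned, weights, densities):
        delta = t - base
        keep = rng.random(delta.shape) < d            # keep with prob = density
        merged_delta += w * np.where(keep, delta / d, 0.0)  # rescale survivors
    return base + merged_delta
```

With density 1.0 nothing is dropped and this reduces to a plain weighted task-arithmetic merge; lower densities sparsify each delta while keeping its expectation unchanged.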
Merge Lineage
Source Models (4, including the base)
- mistralai/Mistral-7B-v0.1 (base)
- meta-llama/Meta-Llama-3-8B-Instruct
- mistralai/Mistral-7B-Instruct-v0.3
- codestral/Mistral-Coder-7B

Output: the merged Solar DARE model
Config YAML
```yaml
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 0.4
      density: 0.6
  - model: mistralai/Mistral-7B-Instruct-v0.3
    parameters:
      weight: 0.3
      density: 0.6
  - model: codestral/Mistral-Coder-7B
    parameters:
      weight: 0.3
      density: 0.5
parameters:
  normalize: true
dtype: bfloat16
```

Benchmark Scores
| Benchmark | Merged | Llama-3-8B | Mistral-7B | Mistral-Coder | Δ Best |
|---|---|---|---|---|---|
| MMLU | 73.8 | 71.9 | 70.1 | 65.3 | +1.9 |
| HumanEval | 67.0 | 61.0 | 60.2 | 63.0 | +4.0 |
| MT-Bench | 8.1 | 8.0 | 7.7 | 7.4 | +0.1 |
| ARC-C | 68.4 | 65.2 | 63.1 | 61.5 | +3.2 |
Model Weights & Density — DARE
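An interactive chart normally renders here; the underlying numbers come straight from the config above. As a plain-Python sketch of what the two knobs mean (interpretations follow DARE's drop-and-rescale scheme, not mergekit internals): density is the probability each delta parameter survives the random drop, survivors are rescaled by 1/density so the expected delta is unchanged, and the per-model weights set each model's share of the merged delta.

```python
# Values copied from the recipe's YAML config above.
weights = {
    "Meta-Llama-3-8B-Instruct": 0.4,
    "Mistral-7B-Instruct-v0.3": 0.3,
    "Mistral-Coder-7B": 0.3,
}
densities = {
    "Meta-Llama-3-8B-Instruct": 0.6,
    "Mistral-7B-Instruct-v0.3": 0.6,
    "Mistral-Coder-7B": 0.5,
}

total = sum(weights.values())
# 0.4 + 0.3 + 0.3 already sums to 1, so `normalize: true` leaves weights as-is.
assert abs(total - 1.0) < 1e-9

for name, d in densities.items():
    # density = probability a delta parameter is kept; survivors are
    # rescaled by 1/density so the expected delta stays unbiased.
    print(f"{name}: keeps ~{d:.0%} of deltas, rescales survivors by {1 / d:.2f}x")
```

Because the weights already sum to 1.0, `normalize: true` is a no-op for this recipe; it only matters if the weights are edited later.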
How I Built This
Embed Badge
Add this to your Hugging Face model card to link back to this recipe.
[](https://www.mergekit.com/recipes/solar-dare-llama-mistral)

Version History
- v1.1 (latest) · top score 73.8
  March 1, 2026
  Added a third model (a Mistral coding fine-tune) at weight 0.3, improving HumanEval by 4 points.
- v1.0 · top score 71.2
  October 15, 2025
  Initial two-model DARE-TIES release.
Use this Model
Run, deploy, or interact with Solar DARE — Llama 3 × Mistral DARE-TIES directly.
MergeKit Cloud
Run this model on serverless GPU infrastructure — zero setup, pay-per-second.
Serverless GPU · Powered by RunPod
Reproduce Locally
Run this exact merge on your own machine in three steps:

1. Install MergeKit:

```shell
pip install mergekit
```

2. Save the following config as `solar-dare-llama-mistral.yaml`:

```yaml
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 0.4
      density: 0.6
  - model: mistralai/Mistral-7B-Instruct-v0.3
    parameters:
      weight: 0.3
      density: 0.6
  - model: codestral/Mistral-Coder-7B
    parameters:
      weight: 0.3
      density: 0.5
parameters:
  normalize: true
dtype: bfloat16
```

3. Run the merge:

```shell
mergekit-yaml solar-dare-llama-mistral.yaml ./output
```

Want to build your own merge?
Use the MergeKit config generator to build a YAML recipe visually — no code required.