TIES · Featured

Tags: ties, multilingual, coding, mistral, qwen, instruct

OpenHermes Fusion — Mistral × Qwen TIES Multi-Model

A TIES merge of OpenHermes-2.5-Mistral-7B and Qwen2.5-7B-Instruct, targeting improved multilingual and coding capability while preserving Hermes instruction tuning. Sign conflicts are resolved after trimming each task vector (density 0.7 for OpenHermes, 0.75 for Qwen).

Author

teknium

Published

November 10, 2025

Last updated

January 20, 2026

Versions

2

Best Score

71.3

Stars

187

Mistral-7B-v0.1 · 7B
Qwen2.5-7B-Instruct · 7B

Merge Lineage

2 source models

Source Models: teknium/OpenHermes-2.5-Mistral-7B · Qwen/Qwen2.5-7B-Instruct

Output: openhermes-qwen-fusion-7b · 7B · TIES

Config YAML

openhermes-mistral-qwen-ties-v1.2.yaml
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      weight: 0.6
      density: 0.7
  - model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      weight: 0.4
      density: 0.75
parameters:
  normalize: true
dtype: bfloat16
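The TIES procedure behind this config can be sketched in a few lines. This is a simplified per-parameter illustration of the three steps (trim each task vector by density, elect a sign per parameter, merge the agreeing values), not mergekit's actual implementation; the toy task vectors and the weighted-mean tie-breaking below are assumptions for illustration only.

```python
# Toy TIES sketch on per-parameter "task vectors" (fine-tuned model - base).
# Real merges operate on full weight tensors; this shows the three steps:
# trim by density, elect a sign per parameter, average agreeing values.

def trim(vec, density):
    """Zero out all but the top `density` fraction of entries by magnitude."""
    k = max(1, round(len(vec) * density))
    keep = set(sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

def ties_merge(task_vectors, weights, densities):
    trimmed = [trim(v, d) for v, d in zip(task_vectors, densities)]
    merged = []
    for entries in zip(*trimmed):
        # Elect the dominant sign by weighted mass, then keep only
        # contributions that agree with it (sign-conflict resolution).
        mass = sum(w * e for w, e in zip(weights, entries))
        sign = 1.0 if mass >= 0 else -1.0
        agree = [(w, e) for w, e in zip(weights, entries) if e * sign > 0]
        total_w = sum(w for w, _ in agree)
        merged.append(sum(w * e for w, e in agree) / total_w if total_w else 0.0)
    return merged

hermes_tv = [0.8, -0.1, 0.5, -0.6]   # hypothetical deltas vs. the base model
qwen_tv   = [0.7,  0.3, -0.4, -0.5]
delta = ties_merge([hermes_tv, qwen_tv], weights=[0.6, 0.4], densities=[0.7, 0.75])
print(delta)
```

Note how the third parameter keeps only the OpenHermes contribution: the two models disagree on its sign, and the weighted election (0.6 vs. 0.4) favors OpenHermes, which is exactly the conflict resolution the recipe description refers to.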

Benchmark Scores

| Benchmark | Merged | OpenHermes-2.5 | Qwen2.5-7B | Δ Best |
|-----------|--------|----------------|------------|--------|
| MMLU      | 71.3   | 69.8           | 69.1       | +1.5   |
| HumanEval | 65.4   | 61.2           | 63.4       | +2.0   |
| MT-Bench  | 7.8    | 7.6            | 7.4        | +0.2   |
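The Δ Best column is the merged model's score minus the stronger parent's score on each benchmark, which can be verified directly from the table:

```python
# Δ Best = merged score minus the better of the two parent scores.
scores = {
    "MMLU":      (71.3, 69.8, 69.1),   # (merged, OpenHermes-2.5, Qwen2.5-7B)
    "HumanEval": (65.4, 61.2, 63.4),
    "MT-Bench":  (7.8,  7.6,  7.4),
}
for name, (merged, hermes, qwen) in scores.items():
    delta = round(merged - max(hermes, qwen), 1)
    print(f"{name}: +{delta}")
```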

Model Weights & Density — TIES

OpenHermes-2.5-Mistral-7B (7B): weight 0.60 · density 0.70
Qwen2.5-7B-Instruct (7B): weight 0.40 · density 0.75

Embed Badge

Add this to your Hugging Face model card to link back to this recipe.

```markdown
[![MergeKit Recipe](https://img.shields.io/badge/MergeKit-Recipe-10b981?style=flat-square)](https://www.mergekit.com/recipes/openhermes-mistral-qwen-ties)
```

Version History

  1. v1.2 (latest) · 71.3 · January 20, 2026

     Increased Qwen density to 0.75: better multilingual performance without degrading English.

  2. v1.1 · 70.1 · November 10, 2025

     Reduced Qwen weight to 0.4 to prevent instruction-format drift.

Use this Model

Run, deploy, or interact with OpenHermes Fusion — Mistral × Qwen TIES Multi-Model directly.

MergeKit Cloud

Run this model on serverless GPU infrastructure — zero setup, pay-per-second.

RunPod Serverless

Available GPUs:

- A10G · 24 GB · ~45 s cold start
- A100 · 80 GB · ~30 s cold start
- H100 · 80 GB · ~20 s cold start

Est. ~$0.001 / inference · billed per second

Coming Soon · Serverless GPU · Powered by RunPod

Reproduce Locally

Run this exact merge on your own machine in three steps:

1. Install MergeKit:

```bash
pip install mergekit
```

2. Save the config as openhermes-mistral-qwen-ties.yaml:

```yaml
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      weight: 0.6
      density: 0.7
  - model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      weight: 0.4
      density: 0.75
parameters:
  normalize: true
dtype: bfloat16
```

3. Run the merge:

```bash
mergekit-yaml openhermes-mistral-qwen-ties.yaml ./output
```
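Once the merge finishes, ./output should contain a standard Hugging Face checkpoint. A minimal sanity check, assuming the typical file layout (a config.json plus safetensors or .bin weight shards; exact names can vary):

```python
from pathlib import Path

# Quick sanity check of a merge output directory. The file names below are
# the usual Hugging Face checkpoint layout; adjust if your output differs.
REQUIRED = ["config.json"]
WEIGHT_PATTERNS = ["*.safetensors", "pytorch_model*.bin"]

def check_merge_output(out_dir):
    out = Path(out_dir)
    missing = [f for f in REQUIRED if not (out / f).exists()]
    if not any(any(out.glob(p)) for p in WEIGHT_PATTERNS):
        missing.append("model weights (*.safetensors or *.bin)")
    return missing  # empty list means the checkpoint looks complete

if __name__ == "__main__":
    print(check_merge_output("./output"))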

Want to build your own merge?

Use the MergeKit config generator to build a YAML recipe visually — no code required.