NeuralChat × DeepSeek — SLERP Instruct Blend
A SLERP blend of Intel's NeuralChat-7B-v3.3 and DeepSeek-7B-Chat at t = 0.55, combining NeuralChat's strong instruction adherence with DeepSeek's reasoning depth. The merge scores above both parents on MT-Bench and MMLU (see the benchmark table below).
Author
bevangelista
Published
September 12, 2025
Last updated
September 12, 2025
Versions
1
Best Score
↑ 7.7
Stars
98
Merge Lineage
Source Models (2)
Output
NeuralDeepSeek-7B
7B
SLERP
Config YAML

merge_method: slerp
base_model: Intel/neural-chat-7b-v3-3
models:
  - model: Intel/neural-chat-7b-v3-3
  - model: deepseek-ai/deepseek-llm-7b-chat
parameters:
  t: 0.55
dtype: bfloat16

Benchmark Scores
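For intuition, SLERP interpolates each pair of parent weight tensors along the arc between them on the unit hypersphere rather than along a straight line, which better preserves weight magnitudes. A minimal NumPy sketch of the per-tensor operation (illustrative only, not mergekit's internal implementation):

```python
import numpy as np

def slerp(t, w0, w1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns w0, t=1 returns w1; t=0.55 (as in this recipe)
    leans slightly toward the second model.
    """
    v0 = w0 / (np.linalg.norm(w0) + eps)
    v1 = w1 / (np.linalg.norm(w1) + eps)
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two tensors
    if theta < eps:                 # nearly colinear: fall back to lerp
        return (1 - t) * w0 + t * w1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * w0 + s1 * w1
```

In practice mergekit applies this interpolation layer by layer across the two checkpoints, with `base_model` supplying tokenizer and architecture metadata.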
| Benchmark | Merged | NeuralChat-7B | DeepSeek-7B | Δ Best |
|---|---|---|---|---|
| MT-Bench | 7.7 | 7.5 | 7.2 | +0.2 |
| MMLU | 65.2 | 64.8 | 63.1 | +0.4 |
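The Δ Best column is the merged score minus the better parent's score on each benchmark; a quick sketch of that arithmetic:

```python
# Δ Best = merged score minus the best parent score (values from the table above).
scores = {
    "MT-Bench": {"merged": 7.7, "parents": [7.5, 7.2]},
    "MMLU":     {"merged": 65.2, "parents": [64.8, 63.1]},
}
deltas = {
    name: round(s["merged"] - max(s["parents"]), 1)
    for name, s in scores.items()
}
print(deltas)  # both deltas are positive, i.e. the merge beats each parent's best
```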
Blend Ratio (t = 0.55)
How I Built This
Embed Badge
Add this to your Hugging Face model card to link back to this recipe.
[](https://www.mergekit.com/recipes/neural-chat-deepseek-slerp-instruct)

Use this Model
Run, deploy, or interact with NeuralChat × DeepSeek — SLERP Instruct Blend directly.
MergeKit Cloud
Run this model on serverless GPU infrastructure — zero setup, pay-per-second.
Serverless GPU · Powered by RunPod
Reproduce Locally
Run this exact merge on your own machine in three steps:

1. Install mergekit:

   pip install mergekit

2. Save the recipe as neural-chat-deepseek-slerp-instruct.yaml:

   merge_method: slerp
   base_model: Intel/neural-chat-7b-v3-3
   models:
     - model: Intel/neural-chat-7b-v3-3
     - model: deepseek-ai/deepseek-llm-7b-chat
   parameters:
     t: 0.55
   dtype: bfloat16

3. Run the merge:

   mergekit-yaml neural-chat-deepseek-slerp-instruct.yaml ./output

Want to build your own merge?
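If you want to sweep over blend ratios rather than hand-edit the file, the recipe can also be generated programmatically. A small sketch (the filename and helper are illustrative; the fields mirror the config above):

```python
# Generate the SLERP recipe above for an arbitrary t, e.g. to sweep blend ratios.
import yaml  # PyYAML

def make_recipe(t: float) -> dict:
    """Return the merge recipe as a dict with the same fields as the YAML above."""
    return {
        "merge_method": "slerp",
        "base_model": "Intel/neural-chat-7b-v3-3",
        "models": [
            {"model": "Intel/neural-chat-7b-v3-3"},
            {"model": "deepseek-ai/deepseek-llm-7b-chat"},
        ],
        "parameters": {"t": t},
        "dtype": "bfloat16",
    }

with open("neural-chat-deepseek-slerp-instruct.yaml", "w") as f:
    yaml.safe_dump(make_recipe(0.55), f, sort_keys=False)
```

Each generated file can then be passed to mergekit-yaml exactly as in step 3.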
Use the MergeKit config generator to build a YAML recipe visually — no code required.