Mistral Medium 3.5 (mistral-medium-3-5) is Mistral AI's frontier-class mid-tier multimodal model, released April 29, 2026. It is positioned as Mistral's primary play for agentic and coding workloads, with adjustable reasoning depth via a reasoning_effort parameter that lets developers trade latency and cost against capability on the same model. Medium 3.5 is released as open weights under a Modified MIT license — preserving Mistral's open-source ecosystem positioning while adding terms specific to certain commercial deployments.
The headline result is coding performance: Medium 3.5 scores 77.6% on SWE-Bench Verified, ahead of Devstral 2 and Alibaba's Qwen3.5 397B A17B model. At $1.50/$7.50 per million input/output tokens, Medium 3.5 is materially cheaper than the closed-frontier models (GPT-5.5, Opus 4.7) it competes with on coding-task quality, making it the most aggressively priced open-weight model at this capability tier.
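The reasoning_effort knob described above could be exercised through an ordinary chat-completions request body. A minimal sketch, assuming the parameter is a top-level request field taking low/medium/high values (the field placement and accepted values here are illustrative assumptions, not confirmed API details):

```python
# Sketch: build a chat-completions request body for Mistral Medium 3.5
# with adjustable reasoning depth. Everything beyond "model" and
# "messages" is an assumed field shape for illustration only.

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Return a request body; effort trades latency/cost for capability."""
    if effort not in {"low", "medium", "high"}:  # assumed accepted values
        raise ValueError(f"unknown reasoning_effort: {effort}")
    return {
        "model": "mistral-medium-3-5",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,  # assumed top-level field name
    }

# Low effort for a quick lookup, high effort for a hard coding task:
fast = build_request("What does HTTP 409 mean?", effort="low")
deep = build_request("Refactor this module to remove the cyclic import.",
                     effort="high")
```

The point of the design is that both requests hit the same weights; only the deliberation budget changes, so routing logic can pick an effort level per task rather than per model.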
Coding (headline result): 77.6% on SWE-Bench Verified — ahead of Devstral 2 and Qwen3.5 397B A17B. Positions Medium 3.5 as the highest-quality open-weight coding model in its release window.
Adjustable Reasoning: reasoning_effort parameter lets developers choose between low-latency answers and deeper deliberation on the same model — analogous to OpenAI's reasoning_effort and Anthropic's thinking modes.
Multimodal: Frontier-class capability across text and image input.
Agentic Workflows: Mistral's positioning emphasizes agentic and coding use cases as the primary deployment target — with the SWE-Bench result as the headline validation.
Open-Weight Distribution: Modified MIT license allows commercial deployment with some terms — more permissive than Llama's custom license but less permissive than the pure-MIT approach DeepSeek uses for V4.
Mid-Tier Pricing: $1.50/$7.50 per million input/output tokens — substantially below GPT-5.5's $5/$30 and Opus 4.7's $5/$25, while delivering competitive coding-task quality.
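To make the pricing gap concrete, a small cost estimator using the per-million-token rates quoted above (the token counts in the example are arbitrary; the rates are those stated in this section):

```python
# Estimate per-request cost from the per-million-token rates quoted above.
RATES = {  # model: (input $/M tokens, output $/M tokens)
    "mistral-medium-3-5": (1.50, 7.50),
    "gpt-5.5": (5.00, 30.00),
    "opus-4.7": (5.00, 25.00),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# A representative agentic coding turn: 20K tokens in, 4K tokens out.
for model in RATES:
    print(f"{model}: ${cost_usd(model, 20_000, 4_000):.4f}")
```

At those rates the example turn costs $0.06 on Medium 3.5 versus $0.22 on GPT-5.5 and $0.20 on Opus 4.7 — roughly the 3–4x gap the pricing comparison implies, before any quality adjustment.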
Not the Frontier on Hardest Tasks: Medium 3.5 is designed to compete on coding and agentic benchmarks at mid-tier price, not to lead frontier benchmarks like ARC-AGI-2 or GPQA Diamond against Gemini 3.1 Ultra, GPT-5.5 Pro, or Opus 4.7. For the hardest reasoning tasks, frontier closed models retain a meaningful edge.
License Complexity: Modified MIT introduces commercial-use terms beyond standard MIT — enterprises should review the specific terms before production deployment, particularly for services that resell or redistribute the model.
Mixed Independent Coverage: Some reviewers (e.g., revolutioninai.com) raised pricing concerns about Mistral Medium 3.5's $7.50/M output cost relative to its benchmark improvements over Medium 3 — Mistral's positioning is that the SWE-Bench result is the differentiator, but the value calculus depends on the workload.
Medium 3.5 joins Mistral Small (mistral-small-2603, hybrid instruct/reasoning/coding, 256K context), Mistral Large 3 (the December 2025 flagship), Codestral, and Voxtral/Forge — completing Mistral's 2026 portfolio refresh.

May 7, 2026