Muse Spark is the first flagship LLM from Meta Superintelligence Labs, released April 8, 2026 under CAIO Alexandr Wang's leadership. It accepts voice, text, and image inputs (text-only output) and marks Meta's strategic shift from Llama's open-weight default to a closed-source flagship with a separate open variant planned.
Muse Spark is the first flagship large language model from Meta Superintelligence Labs, released April 8, 2026, under the leadership of Chief AI Officer Alexandr Wang (Scale AI co-founder, recruited via Meta's $14.3B investment for a 49% stake in Scale AI). Originally code-named "Avocado," it is the first model in the new Muse series and represents Meta's most consequential strategic shift in AI in years: a deliberate move away from the open-source-by-default approach of the Llama family toward a closed-source flagship, with a distinct open-source variant planned to follow.
Meta describes Muse Spark as competitive with frontier proprietary models on multimodal perception, reasoning, health, and agentic benchmarks, at a fraction of the compute cost of Meta's older mid-size Llama 4 variant. The model accepts voice, text, and image inputs but produces text-only output. It now powers the Meta AI assistant in the standalone Meta AI app and on the desktop website, with rollout planned across Facebook, Instagram, WhatsApp, Messenger, and the Ray-Ban Meta AI glasses.
Multimodal Input: Voice, text, and image inputs in a single model — designed for the conversational, voice-first interaction patterns that define Meta's consumer AI surfaces (especially Ray-Ban Meta and WhatsApp voice).
Frontier Benchmark Performance: Meta positions Muse Spark as competitive with frontier proprietary models across multimodal perception, reasoning, health, and agentic benchmarks.
Compute Efficiency: Operates at a fraction of the compute cost of the mid-size Llama 4 variant; Meta emphasizes that Muse Spark is competitive with frontier models without the training-cost profile of GPT-5.5 or Opus 4.7.
Consumer-Scale Distribution: Designed from the start for the largest distribution surface in AI — Meta's apps reach roughly 3 billion daily users across Facebook, Instagram, WhatsApp, and Messenger, plus the Ray-Ban Meta glasses installed base.
Text-only output: Despite multimodal input, Muse Spark produces text only — not images, video, or audio. Meta's image and video generation capabilities continue to live in the separately developed "Mango" multimodal generation model (planned for H1 2026 release).
Closed-source default: A philosophical and practical break from the Llama strategy. The promised open-source variant has not yet been released, and Meta has not committed to a date. Enterprises that built workflows around Llama's permissive open-weight licensing may need to re-evaluate.
Limited public benchmarks: Meta's benchmark claims are framed in self-reported terms ("competitive with frontier models," "fraction of compute cost") rather than published head-to-head results against named competitors.
May 7, 2026