Cerebras Launches Qwen3-235B: World's Fastest Frontier AI Model with Full 131K Context Support


World's fastest frontier AI reasoning model now available on Cerebras Inference Cloud

Delivers production-grade code generation at 30x the speed and 1/10th the cost of closed-source alternatives

PARIS, July 08, 2025--(BUSINESS WIRE)--Cerebras Systems today announced the launch of Qwen3-235B with full 131K context support on its inference cloud platform. This milestone represents a breakthrough in AI model performance, combining frontier-level intelligence with unprecedented speed at one-tenth the cost of closed-source models, fundamentally transforming enterprise AI deployment.
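For developers, access would follow the usual pattern for hosted inference services. The sketch below is illustrative only: the base URL, model id, and environment-variable name are assumptions, not confirmed by this announcement, and it presumes an OpenAI-compatible chat-completions endpoint; consult the Cerebras Inference documentation for the actual values.

```python
# Hypothetical sketch of calling Qwen3-235B on a hosted inference endpoint.
# The URL, model id, and env-var name are illustrative assumptions.
import json
import os
import urllib.request

payload = {
    "model": "qwen-3-235b",  # assumed model id
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Refactor this function for clarity: ..."}
    ],
}

api_key = os.environ.get("CEREBRAS_API_KEY")  # assumed env-var name
if api_key:
    req = urllib.request.Request(
        "https://api.cerebras.ai/v1/chat/completions",  # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```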

Frontier Intelligence on Cerebras

Alibaba’s Qwen3-235B delivers model intelligence that rivals frontier models such as Claude 4 Sonnet, Gemini 2.5 Flash, and DeepSeek R1 across a range of science, coding, and general-knowledge benchmarks, according to independent testing by Artificial Analysis.

Qwen3-235B uses an efficient mixture-of-experts architecture that delivers exceptional compute efficiency, enabling Cerebras to offer the model at $0.60 per million input tokens and $1.20 per million output tokens—less than one-tenth the cost of comparable closed-source models.
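At those rates, per-request cost is straightforward arithmetic. A minimal sketch using only the prices quoted above (the token counts in the example are illustrative):

```python
# Cost of one request at the quoted rates:
# $0.60 per million input tokens, $1.20 per million output tokens.
INPUT_PRICE_PER_M = 0.60   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.20  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted per-token prices."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a large-context coding request filling most of the 131K window.
print(round(request_cost(120_000, 8_000), 4))  # → 0.0816
```

Even a request consuming most of the context window comes in under a cent at these prices.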

Cut Reasoning Time from Minutes to Seconds

Reasoning models are notoriously slow, often taking minutes to answer a simple question. By leveraging the Wafer Scale Engine, Cerebras accelerates Qwen3-235B to an unprecedented 1,500 tokens per second, cutting response times from one to two minutes down to 0.6 seconds and making coding, reasoning, and deep-RAG workflows nearly instantaneous.

Based on Artificial Analysis measurements, Cerebras is the only company globally offering a frontier AI model capable of generating output at over 1,000 tokens per second, setting a new standard for real-time AI performance.
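The relationship between throughput and wall-clock latency is simple division. A quick sketch using the 1,500 tokens-per-second figure quoted above; the 25 tokens-per-second comparison rate is an assumed ballpark for conventional GPU serving, not a figure from this announcement:

```python
def generation_time(output_tokens: int, tokens_per_second: float) -> float:
    """Seconds of wall-clock time to generate a response of a given length."""
    return output_tokens / tokens_per_second

# A ~900-token reasoning answer at the quoted 1,500 tok/s:
print(generation_time(900, 1500))  # → 0.6 seconds
# The same answer at an assumed ~25 tok/s conventional serving rate:
print(generation_time(900, 25))    # → 36.0 seconds
```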

131K Context Enables Production-grade Code Generation

Concurrent with this launch, Cerebras has quadrupled its context length support from 32K to 131K tokens—the maximum supported by Qwen3-235B. This expansion directly impacts the model's ability to reason over large codebases and complex documents. While 32K context is sufficient for simple code generation use cases, 131K context allows the model to process dozens of files and tens of thousands of lines of code simultaneously, enabling production-grade application development.
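Whether a given codebase fits in the 131K window can be estimated before sending anything. The sketch below uses the common rough heuristic of about four characters per token; actual counts depend on the tokenizer, and the file sizes in the example are illustrative:

```python
# Back-of-the-envelope check of whether a set of source files fits in the
# 131K-token context window. ~4 chars/token is a rough heuristic, not an
# exact tokenizer count.
CONTEXT_TOKENS = 131_072
CHARS_PER_TOKEN = 4  # rough heuristic

def estimated_tokens(total_chars: int) -> int:
    return total_chars // CHARS_PER_TOKEN

# e.g. 40 files averaging 10,000 characters (~250 lines) each:
total_chars = 40 * 10_000
tokens = estimated_tokens(total_chars)
print(tokens, tokens <= CONTEXT_TOKENS)  # → 100000 True
```

By this estimate, dozens of typical source files fit comfortably, whereas the same workload would overflow a 32K window several times over.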

This enhanced context length means Cerebras now directly addresses the enterprise code generation market, which is one of the largest and fastest-growing segments for generative AI.

Strategic Partnership with Cline

To showcase these new capabilities, Cerebras has partnered with Cline, the leading agentic coding extension for Microsoft VS Code, with over 1.8 million installations. Cline users can now access Cerebras Qwen models directly within the editor—starting with Qwen3-32B at 64K context on the free tier. This rollout will expand to include Qwen3-235B with 131K context, delivering 10–20x faster code generation speeds compared to alternatives like DeepSeek R1.
