Grok 4: xAI succeeds in going from 0 to new SOTA LLM in 2 years

xAI launched Grok 4 and Grok 4 Heavy, large language models rumored to have 2.4 trillion parameters and trained on 100k H100 GPUs with 100x more compute than Grok 2. Grok 4 achieved new state-of-the-art results on benchmarks such as ARC-AGI-2 (15.9%), HLE (50.7%), and Vending-Bench, outperforming models including Claude 4 Opus. The model supports a 256K context window and is priced at $3.00/M input tokens and $15.00/M output tokens. It is integrated into platforms such as Cursor, Cline, LangChain, and Perplexity Pro/Max. The launch was accompanied by a controversial voice mode and sparked industry discussion about xAI's rapid development pace, with endorsements from figures like Elon Musk and Aravind Srinivas.
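Given the listed rates ($3.00 per million input tokens, $15.00 per million output tokens), a minimal sketch of per-request cost arithmetic looks like this; the function name and example token counts are illustrative, not from any official SDK:

```python
# Illustrative cost estimate at the published Grok 4 rates:
# $3.00 per 1M input tokens, $15.00 per 1M output tokens.
INPUT_RATE = 3.00 / 1_000_000   # dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

def grok4_request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10k-token prompt with a 2k-token completion.
print(f"${grok4_request_cost(10_000, 2_000):.2f}")  # → $0.06
```

Note that a prompt filling most of the 256K context window would cost roughly $0.77 in input tokens alone at these rates.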