Introducing Grok-1.5, xAI's latest model, capable of long-context understanding and advanced reasoning. It will be available to early testers and existing Grok users on the 𝕏 platform in the coming days.

Grok-1.5 comes with improved reasoning capabilities and a context length of 128,000 tokens.

Advanced LLM research on large GPU clusters requires robust, adaptable infrastructure. Grok-1.5 is built on a custom distributed training framework based on JAX, Rust, and Kubernetes.
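To make the idea of a JAX-based training framework concrete, here is a minimal, purely illustrative sketch of a JIT-compiled gradient-descent training step in JAX. xAI has not published Grok-1.5's actual training code; the toy linear model and hyperparameters below are assumptions for demonstration only, and a real cluster run would additionally shard data and parameters across devices.

```python
# Illustrative sketch only -- NOT xAI's actual framework.
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model standing in for a real transformer.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit  # compile the whole step with XLA
def train_step(params, x, y, lr=0.1):
    grads = jax.grad(loss_fn)(params, x, y)
    # Apply plain gradient descent to every parameter leaf.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros((4, 1)), "b": jnp.zeros((1,))}
x = jnp.ones((8, 4))
y = jnp.ones((8, 1))
params = train_step(params, x, y)
```

In a multi-GPU setting, the same step function would typically be wrapped with JAX's parallelization transforms so each device processes a shard of the batch; that is the kind of scaling a distributed framework automates.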

Grok-1.5 can now process long contexts of up to 128K tokens within its context window.

Grok can now retain information from much longer documents, up to 16 times the context length of the previous version.

As its context window grows, Grok-1.5 can follow instructions even in longer and more complex prompts.
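To get a feel for what a 128K-token window holds, here is a rough back-of-the-envelope check for whether a document fits. The ~4 characters-per-token ratio is a common rule of thumb for English text, not Grok's actual tokenizer, and the headroom figure is an arbitrary assumption.

```python
# Rough sketch: does a document fit in a 128K-token context window?
CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # heuristic assumption, not Grok's real tokenizer

def estimated_tokens(text: str) -> int:
    # Crude character-count estimate of token usage.
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, reserve_for_reply: int = 4_000) -> bool:
    # Leave headroom so the model still has room to generate a response.
    return estimated_tokens(text) + reserve_for_reply <= CONTEXT_WINDOW

doc = "word " * 90_000  # ~450,000 characters, roughly 112,500 tokens
print(fits_in_context(doc))  # → True: fits within 128K with headroom
```

By this estimate, a 128K window comfortably holds a document of several hundred pages, which is what makes the 16× jump over the previous context length practically significant.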

Grok-1.5 has made significant progress in a number of areas, including math and coding skills:

1. 50.6% on the MATH benchmark
2. 90% on the GSM8K benchmark
3. 74.1% on the HumanEval benchmark

As xAI gradually rolls out Grok-1.5 to a wider audience, it will be exciting to see the new features that arrive over the coming days.

Looking to develop an AI chatbot like Grok for innovative solutions? We are here to build it for you!