xAI Grok Provider

The xAI Grok provider gives you access to xAI language models through the xAI API.


Setup

The xAI Grok provider is available in the @ai-sdk/xai module. To get started, install it with the following command:
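For example, with npm:

```shell
npm install @ai-sdk/xai
```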



Provider Instance

Bring the default provider instance, xai, into your project by importing it from @ai-sdk/xai:
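The import looks like this:

```typescript
import { xai } from '@ai-sdk/xai';
```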

To set up a personalized configuration, simply import createXai from @ai-sdk/xai and create a provider instance with the options you need.

You can tailor the xAI provider instance by adjusting these optional configuration fields:

baseURL: string

Specify an alternative base URL for all API requests—for example, when routing through a proxy. If not provided, it defaults to https://api.x.ai/v1.

apiKey: string

Sets the API key used in the Authorization header. When omitted, the provider will automatically use the XAI_API_KEY environment variable.

headers: Record<string, string>

Allows you to attach additional custom headers to every outgoing request.

fetch: (input: RequestInfo, init?: RequestInit) => Promise<Response>

Provides a custom fetch function. By default, the global fetch is used, but you can override it to intercept requests, apply middleware, or supply a mock implementation—such as when writing tests.
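Putting these options together, a customized instance might look like the following sketch (the header name and value are placeholders):

```typescript
import { createXai } from '@ai-sdk/xai';

const xai = createXai({
  // baseURL defaults to https://api.x.ai/v1 when omitted
  apiKey: process.env.XAI_API_KEY, // falls back to XAI_API_KEY when omitted
  headers: { 'x-custom-header': 'value' }, // sent with every request
});
```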


Language Models

To create a model, call the provider instance with the model ID (such as grok-3) as the first argument.

By default, xai(modelId) operates through the Chat API. If you want to work with the Responses API—especially when using server-side agentic tools—be sure to call xai.responses(modelId) instead.
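For example:

```typescript
import { xai } from '@ai-sdk/xai';

// Chat API (default)
const chatModel = xai('grok-3');

// Responses API, e.g. for server-side agentic tools
const responsesModel = xai.responses('grok-3');
```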

Example

You can generate text with xAI language models by using the generateText function:
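A minimal example:

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text } = await generateText({
  model: xai('grok-3'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

console.log(text);
```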

xAI language models are also compatible with the streamText, generateObject, and streamObject functions available in the AI SDK Core.


Provider Options

xAI chat models support the following provider-specific options in addition to the standard call settings. Pass them via the providerOptions field:

parallel_function_calling: boolean

Enables or disables the ability to execute function calls in parallel when tools are used.

  • true: the model may invoke multiple functions at the same time
  • false: function calls will be executed one after another
  • Default: true

reasoningEffort: 'low' | 'high'

Specifies the level of reasoning depth for models that support extended reasoning.
This option is available only for grok-3-mini and grok-3-mini-fast.
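For example, a sketch that raises the reasoning effort on a reasoning model (assuming the option is passed through providerOptions.xai with the name shown above):

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text } = await generateText({
  model: xai('grok-3-mini'),
  prompt: 'Solve step by step: what is 27 * 43?',
  providerOptions: {
    xai: {
      // only supported on grok-3-mini and grok-3-mini-fast
      reasoningEffort: 'high',
    },
  },
});
```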

Responses API (Agentic Tools)

The xAI Responses API lets the model call tools directly on the server via the xai.responses(modelId) method. While handling your request, the model can search the web, look things up on X, and run code on its own.


With the Responses API, the model can automatically use the following server-side tools:

  • web_search: searches the internet in real time and can browse pages as needed.
  • x_search: searches posts, users, and threads on X (Twitter).
  • code_execution: runs Python code for calculations and data analysis.


Web Search Tool

The web search tool lets the model search and browse the web autonomously. You can restrict the domains it searches to keep it focused, or let it analyze images it finds along the way.

Here’s what using it looks like:
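A sketch of what this might look like, assuming the tool is exposed as xai.tools.webSearch (check the current @ai-sdk/xai reference for the exact helper name):

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text, sources } = await generateText({
  model: xai.responses('grok-3'),
  prompt: "Summarize this week's AI research news.",
  tools: {
    web_search: xai.tools.webSearch({
      // max 5 domains; cannot be combined with excludedDomains
      allowedDomains: ['arxiv.org', 'nature.com'],
    }),
  },
});
```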

When this runs, the model searches the specified sites, gathers what it finds, and returns it along with sources so you can see where the information came from.


Web Search Parameters

allowedDomains: string[]

A list of domains to restrict searches to (maximum 5). Cannot be used together with excludedDomains.

excludedDomains: string[]

A list of domains to exclude from searches (maximum 5). Cannot be used together with allowedDomains.

enableImageUnderstanding: boolean

Lets the model analyze images it finds while searching. Note that this consumes additional tokens.


X Search Tool

The X search tool searches posts on X (formerly Twitter). You can narrow results to specific accounts and date ranges, and optionally enable image and video understanding.

Example:
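A sketch, assuming an xai.tools.xSearch helper that mirrors the web search tool (handle and dates are illustrative):

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text } = await generateText({
  model: xai.responses('grok-3'),
  prompt: 'What were AI researchers discussing on X in January 2025?',
  tools: {
    x_search: xai.tools.xSearch({
      allowedXHandles: ['karpathy'], // max 10; exclusive with excludedXHandles
      fromDate: '2025-01-01',
      toDate: '2025-01-31',
    }),
  },
});
```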

X Search Parameters

allowedXHandles: string[]

Restrict the search to specific X accounts (maximum 10). Cannot be used together with excludedXHandles.

excludedXHandles: string[]

Exclude posts from specific accounts (maximum 10). Cannot be used together with allowedXHandles.

fromDate: string

The earliest date to search from, in YYYY-MM-DD format.

toDate: string

The latest date to search up to, in YYYY-MM-DD format.

enableImageUnderstanding: boolean

Lets the model analyze images in posts.

enableVideoUnderstanding: boolean

Lets the model analyze videos in posts, useful when the discussion happens in clips rather than text.


Code Execution Tool

The code execution tool lets the model run Python code for calculations, problem solving, and data analysis.

For example, to find out how much $10,000 grows at 5% annual interest over 10 years, you can ask the model to compute it:
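A sketch, assuming an xai.tools.codeExecution helper:

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text } = await generateText({
  model: xai.responses('grok-3'),
  prompt:
    'How much does $10,000 grow at 5% annual interest, compounded yearly, over 10 years?',
  tools: {
    // the model writes and runs Python to do the math
    code_execution: xai.tools.codeExecution(),
  },
});
```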


Multiple Tools

Sometimes one tool is not enough. The following setup lets the model use several tools together in a single request: searching the web, checking X posts, and running code.
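A sketch, assuming the provider exposes tool helpers under xai.tools:

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text, sources } = await generateText({
  model: xai.responses('grok-3'),
  prompt: 'Research recent AI safety developments and quantify the trend.',
  tools: {
    web_search: xai.tools.webSearch(),
    x_search: xai.tools.xSearch(),
    code_execution: xai.tools.codeExecution(),
  },
});
```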

Provider Options

The Responses API supports provider options that control how much effort the model puts into each response.

reasoningEffort: 'low' | 'high'

Controls how much reasoning the model applies. With 'low', responses are fast and lightweight; with 'high', the model reasons more deeply, which can produce richer, more detailed answers at the cost of latency and token usage.
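A sketch of setting it through providerOptions:

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text } = await generateText({
  model: xai.responses('grok-3'),
  prompt: 'Explain the trade-offs between consistency and availability.',
  providerOptions: {
    xai: { reasoningEffort: 'high' }, // slower, but more thorough
  },
});
```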

The Responses API only supports server-side tools. You cannot mix server-side tools with client-side function tools in the same request.


Live Search

xAI models support Live Search, which lets them retrieve fresh, real-time information from a variety of sources while answering your questions, with citations showing where the information came from.


Basic Search

To enable search in your app, add a few search settings to your request. The example below tells the model to decide automatically when to search, return up to five results, and include citations:
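A sketch, assuming search settings are passed as searchParameters under providerOptions.xai:

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text, sources } = await generateText({
  model: xai('grok-3'),
  prompt: "What's new in AI lately?",
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'auto',          // let the model decide when to search
        returnCitations: true, // include sources in the result
        maxSearchResults: 5,
      },
    },
  },
});
```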

With this setup, a prompt like "What's new in AI lately?" triggers a live search and returns up-to-date answers.

Search Parameters

The following options control how search behaves:

mode: 'auto' | 'on' | 'off'
Controls when the model searches.

  • auto (default): the model decides whether a search is needed.
  • on: always search.
  • off: never search; the model answers from its own knowledge.

returnCitations: boolean
Whether to include source citations in the response. Defaults to true.

fromDate / toDate: string
Restrict results to a date range, in YYYY-MM-DD format.

maxSearchResults: number
The maximum number of search results the model considers. Defaults to 20; the maximum is 50.

sources: Array<SearchSource>
The data sources to search. Defaults to web and X (Twitter) when not specified.


Search Sources

You can choose different places for the model to search. Here’s how each one works:

Web Search

Web search settings you can customize:

  • country: The country to search in (ISO alpha-2 code).
  • allowedWebsites: Sites you want to include (max 5).
  • excludedWebsites: Sites you want the search to ignore (max 5).
  • safeSearch: Keeps the results clean and safe. Default is true.


X (Twitter) Search

X (Twitter) search options:

  • includedXHandles: The handles you want to search (no @ needed).
  • excludedXHandles: Handles you don’t want in your results.
  • postFavoriteCount: Only show posts with at least this many likes.
  • postViewCount: Only show posts with at least this many views.

News Search

News search settings:

  • country: The two-letter ISO country code (for example, "US").
  • excludedWebsites: Up to 5 sites to exclude.
  • safeSearch: Filters out unsafe results. Defaults to true.

RSS Feed Search

You can also pull updates directly from an RSS feed.

RSS settings:

  • links: An array of RSS URLs (only one supported for now).


Using Multiple Sources

Sometimes you want a fuller picture: a mix of news, websites, and social updates. You can combine multiple sources in a single request.
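A sketch, assuming search settings are passed as searchParameters under providerOptions.xai (the filter values are illustrative):

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text, sources } = await generateText({
  model: xai('grok-3'),
  prompt: "Give me a broad overview of today's AI news.",
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'on',
        returnCitations: true,
        sources: [
          { type: 'web', country: 'US', safeSearch: true },
          { type: 'news', country: 'US' },
          { type: 'x', postFavoriteCount: 100 }, // posts with at least 100 likes
        ],
      },
    },
  },
});
```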

Sources and Citations

When search is enabled with returnCitations: true, the response includes the sources the model used, so you can verify where the answer came from rather than taking the model's word for it.

Here’s an example of how it works:
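A sketch, assuming the same searchParameters shape and that citations come back on the result's sources array:

```typescript
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';

const { text, sources } = await generateText({
  model: xai('grok-3'),
  prompt: 'What were the highlights of the latest major AI conference?',
  providerOptions: {
    xai: {
      searchParameters: { mode: 'auto', returnCitations: true },
    },
  },
});

console.log(text);
// Print the URLs the model relied on.
for (const source of sources ?? []) {
  if (source.sourceType === 'url') console.log(source.url);
}
```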



Streaming with Search

You can also stream responses while using search. The text arrives incrementally, and the citations become available once the stream finishes.

Here’s what that looks like:
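A sketch, assuming the same searchParameters shape and that streamText exposes citations via a sources promise:

```typescript
import { streamText } from 'ai';
import { xai } from '@ai-sdk/xai';

const result = streamText({
  model: xai('grok-3'),
  prompt: "Summarize today's tech headlines.",
  providerOptions: {
    xai: {
      searchParameters: { mode: 'auto', returnCitations: true },
    },
  },
});

// Text arrives incrementally...
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// ...and citations are available once the stream completes.
console.log(await result.sources);
```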



Model Capabilities



Image Models

You can create xAI image models with the .image() factory method and generate images with the generateImage() function.

The xAI image model currently does not support aspect ratio or size parameters; every image is generated at the default 1024×768.
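A minimal sketch, assuming generateImage is exported as experimental_generateImage from the ai package:

```typescript
import { experimental_generateImage as generateImage } from 'ai';
import { xai } from '@ai-sdk/xai';

const { image } = await generateImage({
  model: xai.image('grok-2-image'),
  prompt: 'A city skyline at sunset, in watercolor',
});
// image.base64 contains the generated image data
```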


Model-Specific Options

You can also shape generation behavior, for example by choosing how many images to produce in one call or capping how many the model is allowed to generate per request.
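A sketch using the AI SDK's usual knobs for this (n on the call, maxImagesPerCall on the model; treat the exact names as assumptions):

```typescript
import { experimental_generateImage as generateImage } from 'ai';
import { xai } from '@ai-sdk/xai';

const { images } = await generateImage({
  model: xai.image('grok-2-image', {
    maxImagesPerCall: 5, // cap per underlying API request
  }),
  prompt: 'A futuristic cityscape',
  n: 2, // total number of images to generate
});
```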


Model Capabilities

| Model | Sizes | Notes |
| --- | --- | --- |
| grok-2-image | 1024×768 (default) | A text-to-image model from xAI, trained on a wide range of images, producing detailed results across many subjects and styles. |