# Perplexity Sonar

## 1. Overview

Perplexity AI is an AI-powered conversational search engine that gives users direct, accurate, citation-backed answers through natural language processing.

{% hint style="success" %}
This API is compatible with the OpenAI interface format.
{% endhint %}

**Model List:**

* `sonar`
* `sonar-pro`
* `sonar-reasoning-pro`

## 2. Request Description

* **Request Method**: `POST`
* **Request URL**:

  > `https://gateway.theturbo.ai/v1/chat/completions`

{% hint style="info" %}
To ensure concurrent resource availability, the backend uses multi-account load balancing. To improve cache hit rates in multi-turn conversation mode, include the HTTP request header `X-Conversation-Id` with a random string in your request. The platform will preferentially route requests to the same backend account. [Reference Documentation](/api-reference/compute/aig/gateway-features/cache-optimization.md)
{% endhint %}
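As a sketch of the routing note above: generate one conversation ID per chat session and send it with every turn. The `X-Conversation-Id` header name comes from the note; using a UUID for the random string is just one reasonable choice.

```python
import uuid

# A stable per-conversation ID lets the gateway route every turn of one
# chat to the same backend account, improving cache hit rates.
conversation_id = str(uuid.uuid4())

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "Authorization": "Bearer $YOUR_API_KEY",  # replace with your real key
    "X-Conversation-Id": conversation_id,     # reuse across turns of the same chat
}
```

Reuse the same `conversation_id` for follow-up requests in the same conversation; generate a fresh one when a new conversation starts.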

***

## 3. Request Parameters

### 3.1 Header Parameters

| Parameter Name  | Type   | Required | Description                                                         | Example                |
| --------------- | ------ | -------- | ------------------------------------------------------------------- | ---------------------- |
| `Content-Type`  | string | Yes      | Set the request header type, must be `application/json`             | `application/json`     |
| `Accept`        | string | Yes      | Set the response type, recommended to use `application/json`        | `application/json`     |
| `Authorization` | string | Yes      | API\_KEY required for authentication, format `Bearer $YOUR_API_KEY` | `Bearer $YOUR_API_KEY` |

***

### 3.2 Body Parameters (application/json)

| Parameter Name          | Type    | Required | Description                                                                                                                                       | Example                                 |
| ----------------------- | ------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- |
| **model**               | string  | Yes      | The model ID to use. See [Overview](#id-1.-overview) for available versions, e.g. `sonar`.                                                        | `sonar`                                 |
| **messages**            | array   | Yes      | Chat message list, compatible with OpenAI format. Each object in the array contains `role` and `content`.                                         | `[{"role": "user","content": "Hello"}]` |
| role                    | string  | Yes      | Message role within each `messages` object. Possible values: `system`, `user`, `assistant`.                                                       | `user`                                  |
| content                 | string  | Yes      | The text content of the message within each `messages` object.                                                                                    | `Hello, please tell me a joke.`         |
| temperature             | number  | No       | Sampling temperature, ranging from `0` to `2`. Higher values make the output more random; lower values make it more focused and deterministic.    | `0.7`                                   |
| top\_p                  | number  | No       | Nucleus sampling threshold, ranging from `0` to `1`. It is recommended to set either this or `temperature`, not both.                             | `0.9`                                   |
| n                       | number  | No       | Number of replies to generate for each input message.                                                                                             | `1`                                     |
| stream                  | boolean | No       | Whether to enable streaming output. When set to `true`, the response is returned as a stream of chunks, similar to ChatGPT.                       | `false`                                 |
| max\_tokens             | number  | No       | Maximum number of tokens that can be generated in a single reply, limited by the model's context length.                                          | `1024`                                  |
| presence\_penalty       | number  | No       | Ranges from `-2.0` to `2.0`. Positive values encourage the model to introduce new topics; negative values reduce the probability of new topics.   | `0`                                     |
| frequency\_penalty      | number  | No       | Ranges from `-2.0` to `2.0`. Positive values reduce verbatim repetition; negative values increase the probability of repetition.                  | `0`                                     |
| search\_recency\_filter | string  | No       | Restricts search results to the specified time interval. Possible values: `month`, `week`, `day`, `hour`.                                         | `month`                                 |

***

## 4. Request Examples

{% tabs %}
{% tab title="HTTP" %}

```http
POST /v1/chat/completions HTTP/1.1
Host: gateway.theturbo.ai
Content-Type: application/json
Accept: application/json
Authorization: Bearer $YOUR_API_KEY

{
	"model": "sonar",
	"messages": [
		{
			"role": "system",
			"content": "Be precise and concise."
		},
		{
			"role": "user",
			"content": "How many stars are there in our galaxy?"
		}
	]
}
```

{% endtab %}

{% tab title="Shell" %}

```sh
curl https://gateway.theturbo.ai/v1/chat/completions \
	-H "Content-Type: application/json" \
	-H "Accept: application/json" \
	-H "Authorization: Bearer $YOUR_API_KEY" \
	-d '{
	"model": "sonar",
	"messages": [
		{
			"role": "system",
			"content": "Be precise and concise."
		},
		{
			"role": "user",
			"content": "How many stars are there in our galaxy?"
		}
	]
}'
```

{% endtab %}

{% tab title="Go" %}

```go
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	apiKey := "sk-123456789012345678901234567890123456789012345678"

	client := openai.NewClient(
		option.WithAPIKey(apiKey),
		option.WithBaseURL("https://gateway.theturbo.ai/v1"),
	)

	resp, err := client.Chat.Completions.New(
		context.Background(),
		openai.ChatCompletionNewParams{
			Model: "sonar",
			Messages: []openai.ChatCompletionMessageParamUnion{
				openai.SystemMessage("Be precise and concise."),
				openai.UserMessage("How many stars are there in our galaxy?"),
			},
		},
	)

	if err != nil {
		fmt.Println("error:", err)
		return
	}

	fmt.Println(resp.Choices[0].Message.Content)
}

```

{% endtab %}

{% tab title="Python" %}

```python
#!/usr/bin/env python3

from openai import OpenAI

def main():
    api_key = "sk-123456789012345678901234567890123456789012345678"

    client = OpenAI(
        api_key=api_key,
        base_url="https://gateway.theturbo.ai/v1"
    )

    response = client.chat.completions.create(
        model="sonar",
        messages=[
            {"role": "system", "content": "Be precise and concise."},
            {"role": "user", "content": "How many stars are there in our galaxy?"}
        ]
    )

    print(response.choices[0].message.content)

if __name__ == "__main__":
    main()

```

{% endtab %}
{% endtabs %}
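When `stream` is set to `true`, the reply arrives as a series of chunks whose `delta.content` fragments concatenate into the full message. A minimal sketch of that assembly, using simulated chunks in the shape shown by the response example rather than a live stream:

```python
# Simulated streaming chunks: each one carries a partial "delta".
# A real stream from the API yields objects of the same shape.
chunks = [
    {"choices": [{"delta": {"role": "assistant", "content": ""}}]},
    {"choices": [{"delta": {"content": "Roughly 100-400 "}}]},
    {"choices": [{"delta": {"content": "billion stars."}}]},
]

# The full reply is the concatenation of every delta's content.
reply = "".join(c["choices"][0]["delta"].get("content", "") for c in chunks)
```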

## 5. Response Example

```json
{
	"id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
	"model": "sonar",
	"object": "chat.completion",
	"created": 1724369245,
	"citations": [
		"https://www.astronomy.com/science/astro-for-kids-how-many-stars-are-there-in-space/",
		"https://www.esa.int/Science_Exploration/Space_Science/Herschel/How_many_stars_are_there_in_the_Universe",
		"https://www.space.com/25959-how-many-stars-are-in-the-milky-way.html",
		"https://www.space.com/26078-how-many-stars-are-there.html",
		"https://en.wikipedia.org/wiki/Milky_Way",
		"https://www.littlepassports.com/blog/space/how-many-stars-are-in-the-universe/?srsltid=AfmBOoqWVymRloolU4KZBI9-LotDIoTnzhKYKCw7vVkaIifhjrEU66_5"
	],
	"choices": [
		{
			"index": 0,
			"finish_reason": "stop",
			"message": {
				"role": "assistant",
				"content": "The number of stars in the Milky Way galaxy is estimated to be between 100 billion and 400 billion stars. The most recent estimates from the Gaia mission suggest that there are approximately 100 to 400 billion stars in the Milky Way, with significant uncertainties remaining due to the difficulty in detecting faint red dwarfs and brown dwarfs."
			},
			"delta": {
				"role": "assistant",
				"content": ""
			}
		}
	],
	"usage": {
		"prompt_tokens": 14,
		"completion_tokens": 70,
		"total_tokens": 84
	}
}
```
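The answer text and its source links can be read straight out of this structure. A minimal sketch, parsing a trimmed-down copy of the response above:

```python
import json

# Abbreviated version of the response body shown above.
response_json = '''{
  "choices": [
    {"message": {"role": "assistant", "content": "Between 100 and 400 billion stars."}}
  ],
  "citations": ["https://en.wikipedia.org/wiki/Milky_Way"]
}'''

resp = json.loads(response_json)
answer = resp["choices"][0]["message"]["content"]  # the assistant's reply text
sources = resp.get("citations", [])                # URLs backing the answer
```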

