# Perplexity Sonar

## 1. Overview

Perplexity AI is an AI-powered conversational search engine that uses natural-language processing to give users direct, accurate answers.

{% hint style="success" %}
This API is compatible with the OpenAI interface format.
{% endhint %}

**Model list:**

* `sonar`
* `sonar-pro`
* `sonar-reasoning-pro`

## 2. Request Details

* **Method**: `POST`
* **Endpoint**:

  > `https://gateway.theturbo.ai/v1/chat/completions`

{% hint style="info" %}
To guarantee concurrency capacity, the platform load-balances requests across multiple backend accounts. To improve the cache hit rate in multi-turn conversations, send an `X-Conversation-Id` HTTP header set to a random string; the platform will then preferentially route requests carrying the same ID to the same backend account. [Reference](/api/cn/compute/aig/gateway-features/cache-optimization.md)
{% endhint %}

***

## 3. Request Parameters

### 3.1 Header Parameters

| Parameter       | Type   | Required | Description                                                         | Example                |
| --------------- | ------ | -------- | ------------------------------------------------------------------- | ---------------------- |
| `Content-Type`  | string | Yes      | Request content type; must be `application/json`                    | `application/json`     |
| `Accept`        | string | Yes      | Response content type; `application/json` is recommended            | `application/json`     |
| `Authorization` | string | Yes      | API key used for authentication, in the form `Bearer $YOUR_API_KEY` | `Bearer $YOUR_API_KEY` |

***

### 3.2 Body Parameters (application/json)

| Parameter                | Type    | Required | Description                                                                                                                | Example                                  |
| ------------------------ | ------- | -------- | -------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------- |
| **model**                | string  | Yes      | ID of the model to use. See the model list in the Overview section, e.g. `sonar`.                                           | `sonar`                                  |
| **messages**             | array   | Yes      | List of chat messages in the OpenAI-compatible format. Each object in the array contains a `role` and `content`.            | `[{"role": "user","content": "Hello"}]`  |
| role                     | string  | Yes      | Role of the message: `system`, `user`, or `assistant`. Required inside each `messages` element.                             | `user`                                   |
| content                  | string  | Yes      | Text content of the message. Required inside each `messages` element.                                                       | `Hello, please tell me a joke.`          |
| temperature              | number  | No       | Sampling temperature, `0`–`2`. Higher values make output more random; lower values make it more focused and deterministic. | `0.7`                                    |
| top\_p                   | number  | No       | Nucleus-sampling alternative, `0`–`1`. Usually set either this or `temperature`, not both.                                 | `0.9`                                    |
| n                        | number  | No       | Number of completions to generate for each input.                                                                           | `1`                                      |
| stream                   | boolean | No       | Whether to stream the output. When `true`, data is returned incrementally, ChatGPT-style.                                   | `false`                                  |
| max\_tokens              | number  | No       | Maximum number of tokens in a single completion, bounded by the model's context length.                                     | `1024`                                   |
| presence\_penalty        | number  | No       | `-2.0` to `2.0`. Positive values encourage the model to introduce new topics; negative values discourage them.              | `0`                                      |
| frequency\_penalty       | number  | No       | `-2.0` to `2.0`. Positive values reduce verbatim repetition; negative values make it more likely.                           | `0`                                      |
| search\_recency\_filter  | string  | No       | Restrict search results to a recent time window. Allowed values: `month`, `week`, `day`, `hour`.                            | `month`                                  |
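
Taken together, a request body combining several of the optional parameters above might look like this sketch (the question text is illustrative):

```python
import json

# Illustrative request body: restrict web search to the past week and cap the
# completion length; stream is left off, so a single JSON response is returned.
payload = {
    "model": "sonar",
    "messages": [
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "What happened in AI research this week?"},
    ],
    "temperature": 0.7,
    "max_tokens": 1024,
    "stream": False,
    "search_recency_filter": "week",
}

body = json.dumps(payload)  # serialized JSON for the POST body
```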

***

## 4. Request Examples

{% tabs %}
{% tab title="HTTP" %}

```http
POST /v1/chat/completions HTTP/1.1
Host: gateway.theturbo.ai
Content-Type: application/json
Accept: application/json
Authorization: Bearer $YOUR_API_KEY

{
	"model": "sonar",
	"messages": [
		{
			"role": "system",
			"content": "Be precise and concise."
		},
		{
			"role": "user",
			"content": "How many stars are there in our galaxy?"
		}
	]
}
```

{% endtab %}

{% tab title="Shell" %}

```sh
curl https://gateway.theturbo.ai/v1/chat/completions \
	-H "Content-Type: application/json" \
	-H "Accept: application/json" \
	-H "Authorization: Bearer $YOUR_API_KEY" \
	-d '{
		"model": "sonar",
		"messages": [
			{
				"role": "system",
				"content": "Be precise and concise."
			},
			{
				"role": "user",
				"content": "How many stars are there in our galaxy?"
			}
		]
	}'
```

{% endtab %}

{% tab title="Go" %}

```go
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	apiKey := "sk-123456789012345678901234567890123456789012345678"

	client := openai.NewClient(
		option.WithAPIKey(apiKey),
		option.WithBaseURL("https://gateway.theturbo.ai/v1"),
	)

	resp, err := client.Chat.Completions.New(
		context.Background(),
		openai.ChatCompletionNewParams{
			Model: "sonar",
			Messages: []openai.ChatCompletionMessageParamUnion{
				openai.SystemMessage("Be precise and concise."),
				openai.UserMessage("How many stars are there in our galaxy?"),
			},
		},
	)

	if err != nil {
		fmt.Println("error:", err)
		return
	}

	fmt.Println(resp.Choices[0].Message.Content)
}

```

{% endtab %}

{% tab title="Python" %}

```python
#!/usr/bin/env python3

from openai import OpenAI

def main():
    api_key = "sk-123456789012345678901234567890123456789012345678"

    client = OpenAI(
        api_key=api_key,
        base_url="https://gateway.theturbo.ai/v1"
    )

    response = client.chat.completions.create(
        model="sonar",
        messages=[
            {"role": "system", "content": "Be precise and concise."},
            {"role": "user", "content": "How many stars are there in our galaxy?"}
        ]
    )

    print(response.choices[0].message.content)

if __name__ == "__main__":
    main()

```

{% endtab %}
{% endtabs %}

## 5. Response Example

```json
{
	"id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
	"model": "sonar",
	"object": "chat.completion",
	"created": 1724369245,
	"citations": [
		"https://www.astronomy.com/science/astro-for-kids-how-many-stars-are-there-in-space/",
		"https://www.esa.int/Science_Exploration/Space_Science/Herschel/How_many_stars_are_there_in_the_Universe",
		"https://www.space.com/25959-how-many-stars-are-in-the-milky-way.html",
		"https://www.space.com/26078-how-many-stars-are-there.html",
		"https://en.wikipedia.org/wiki/Milky_Way",
		"https://www.littlepassports.com/blog/space/how-many-stars-are-in-the-universe/?srsltid=AfmBOoqWVymRloolU4KZBI9-LotDIoTnzhKYKCw7vVkaIifhjrEU66_5"
	],
	"choices": [
		{
			"index": 0,
			"finish_reason": "stop",
			"message": {
				"role": "assistant",
				"content": "The number of stars in the Milky Way galaxy is estimated to be between 100 billion and 400 billion stars. The most recent estimates from the Gaia mission suggest that there are approximately 100 to 400 billion stars in the Milky Way, with significant uncertainties remaining due to the difficulty in detecting faint red dwarfs and brown dwarfs."
			},
			"delta": {
				"role": "assistant",
				"content": ""
			}
		}
	],
	"usage": {
		"prompt_tokens": 14,
		"completion_tokens": 70,
		"total_tokens": 84
	}
}
```
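
Reading the fields of a response shaped like the example above can be sketched as follows (the dict below is a truncated copy of that example):

```python
# Truncated copy of the response example above.
response = {
    "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
    "model": "sonar",
    "citations": [
        "https://www.space.com/25959-how-many-stars-are-in-the-milky-way.html",
        "https://en.wikipedia.org/wiki/Milky_Way",
    ],
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {"role": "assistant", "content": "Between 100 and 400 billion stars."},
        }
    ],
    "usage": {"prompt_tokens": 14, "completion_tokens": 70, "total_tokens": 84},
}

# The answer lives in choices[0].message.content; the sources are in the
# top-level `citations` array, which the answer may reference by index.
answer = response["choices"][0]["message"]["content"]
sources = [f"[{i + 1}] {url}" for i, url in enumerate(response["citations"])]
```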


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.console.zenlayer.com/api/cn/compute/aig/chat-completion/perplexity-chat-completion.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
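
As a sketch, the query URL can be built with standard URL encoding (the example question is illustrative):

```python
from urllib.parse import urlencode

PAGE_URL = (
    "https://docs.console.zenlayer.com/api/cn/compute/aig/"
    "chat-completion/perplexity-chat-completion.md"
)

def ask_url(question: str) -> str:
    # URL-encode the natural-language question into the `ask` query parameter.
    return f"{PAGE_URL}?{urlencode({'ask': question})}"

url = ask_url("Which values does search_recency_filter accept?")
```

Issuing an HTTP GET on the resulting URL returns the answer with supporting excerpts, as described above.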
