# Chat Completion

## 1. Overview

A chat completion API for ByteDance's Doubao models that conforms to the OpenAI interface standard.

{% hint style="success" %}
This API is compatible with the OpenAI interface format.
{% endhint %}

**Model List:**

* `Doubao-1.5-pro-32k`
* `Doubao-1.5-pro-256k`
* `Doubao-1.5-lite-32k`
* `Doubao-pro-32k`

## 2. Request Description

* **Request Method**: `POST`
* **Request URL**:

  > `https://gateway.theturbo.ai/v1/chat/completions`

{% hint style="info" %}
To ensure concurrent resource availability, the backend uses multi-account load balancing. To improve cache hit rates in multi-turn conversations, include the HTTP request header `X-Conversation-Id` with a random string in your request; the platform will then preferentially route requests carrying the same ID to the same backend account. [Reference Documentation](https://docs.console.zenlayer.com/api/compute/aig/gateway-features/cache-optimization)
{% endhint %}
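For example, the header can be populated with an ID generated once per conversation and reused on every turn. The sketch below only builds the header set; the header name comes from the hint above, while the ID format (a UUID hex string) is an assumption:

```python
import uuid

# Generate one ID per conversation and reuse it for every turn, so the
# gateway can route all turns of the conversation to the same backend account.
conversation_id = uuid.uuid4().hex

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "Authorization": "Bearer $YOUR_API_KEY",  # replace with your real API key
    "X-Conversation-Id": conversation_id,
}
```

Pass these headers on every request in the same conversation (for example via `default_headers` when constructing an OpenAI-compatible client).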

***

## 3. Request Parameters

### 3.1 Header Parameters

| Parameter Name  | Type   | Required | Description                                                         | Example                |
| --------------- | ------ | -------- | ------------------------------------------------------------------- | ---------------------- |
| `Content-Type`  | string | Yes      | Set the request header type, must be `application/json`             | `application/json`     |
| `Accept`        | string | Yes      | Set the response type, recommended to use `application/json`        | `application/json`     |
| `Authorization` | string | Yes      | API\_KEY required for authentication, format `Bearer $YOUR_API_KEY` | `Bearer $YOUR_API_KEY` |

***

### 3.2 Body Parameters (application/json)

| Parameter Name     | Type    | Required | Description                                                                                                                                       | Example                                 |
| ------------------ | ------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- |
| **model**          | string  | Yes      | The model ID to use. See [Overview](#id-1.-overview) for available versions, e.g. `Doubao-1.5-pro-32k`.                                           | `Doubao-1.5-pro-32k`                    |
| **messages**       | array   | Yes      | Chat message list, compatible with OpenAI format. Each object in the array contains `role` and `content`.                                         | `[{"role": "user","content": "Hello"}]` |
| role               | string  | Yes      | Role of the message; one of `system`, `user`, or `assistant`. Required within each `messages` object.                                             | `user`                                  |
| content            | string  | Yes      | The text content of the message. Required within each `messages` object.                                                                          | `Hello, please tell me a joke.`         |
| temperature        | number  | No       | Sampling temperature, in the range `0~2`. Higher values make the output more random; lower values make it more focused and deterministic.         | `0.7`                                   |
| top\_p             | number  | No       | Nucleus-sampling alternative for adjusting the sampling distribution, in the range `0~1`. Set either this or `temperature`, not usually both.     | `0.9`                                   |
| n                  | integer | No       | Number of replies to generate for each input message.                                                                                             | `1`                                     |
| stream             | boolean | No       | Whether to enable streaming output. When set to `true`, the response is returned incrementally as streaming data, as in the OpenAI API.           | `false`                                 |
| stop               | string / array | No | Up to 4 stop strings. Token generation stops as soon as any of them appears in the generated content.                                       | `"\n"`                                  |
| max\_tokens        | integer | No       | Maximum number of tokens that can be generated in a single reply, limited by the model's context length.                                          | `1024`                                  |
| presence\_penalty  | number  | No       | `-2.0~2.0`. Positive values encourage the model to introduce new topics; negative values reduce the probability of new topics.                    | `0`                                     |
| frequency\_penalty | number  | No       | `-2.0~2.0`. Positive values reduce the model's tendency to repeat phrases; negative values increase it.                                           | `0`                                     |

***

## 4. Request Examples

{% tabs %}
{% tab title="HTTP" %}

```http
POST /v1/chat/completions HTTP/1.1
Host: gateway.theturbo.ai
Content-Type: application/json
Accept: application/json
Authorization: Bearer $YOUR_API_KEY

{
	"model": "Doubao-1.5-pro-32k",
	"messages": [
		{
			"role": "user",
			"content": "Hello, can you explain quantum mechanics to me?"
		}
	],
	"temperature": 0.7,
	"max_tokens": 1024
}
```

{% endtab %}

{% tab title="Shell" %}

```sh
curl https://gateway.theturbo.ai/v1/chat/completions \
	-H "Content-Type: application/json" \
	-H "Accept: application/json" \
	-H "Authorization: Bearer $YOUR_API_KEY" \
	-d "{
	\"model\": \"Doubao-1.5-pro-32k\",
	\"messages\": [{
		\"role\": \"user\",
		\"content\": \"Hello, can you explain quantum mechanics to me?\"
	}]
}"
```

{% endtab %}

{% tab title="Go" %}

```go
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	apiKey := "sk-123456789012345678901234567890123456789012345678"

	client := openai.NewClient(
		option.WithAPIKey(apiKey),
		option.WithBaseURL("https://gateway.theturbo.ai/v1"),
	)

	resp, err := client.Chat.Completions.New(
		context.Background(),
		openai.ChatCompletionNewParams{
			Model: "Doubao-1.5-pro-32k",
			Messages: []openai.ChatCompletionMessageParamUnion{
				openai.UserMessage("Hello, can you explain quantum mechanics to me?"),
			},
		},
	)

	if err != nil {
		fmt.Println("error:", err)
		return
	}

	fmt.Println(resp.Choices[0].Message.Content)
}

```

{% endtab %}

{% tab title="Python" %}

```python
#!/usr/bin/env python3

from openai import OpenAI

def main():
    api_key = "sk-123456789012345678901234567890123456789012345678"

    client = OpenAI(
        api_key=api_key,
        base_url="https://gateway.theturbo.ai/v1"
    )

    response = client.chat.completions.create(
        model="Doubao-1.5-pro-32k",
        messages=[
            {"role": "user", "content": "Hello, can you explain quantum mechanics to me?"}
        ]
    )

    print(response.choices[0].message.content)

if __name__ == "__main__":
    main()

```

{% endtab %}
{% endtabs %}
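When `stream` is set to `true`, the gateway returns OpenAI-style streaming output: each `data:` line carries a JSON chunk whose choices contain a `delta` fragment instead of a full `message`, with a `[DONE]` sentinel at the end. The sketch below parses example chunks assembled by hand; the chunk contents are illustrative, and a real client would read the lines incrementally from the HTTP response:

```python
import json

# Illustrative SSE lines in the OpenAI-compatible streaming format.
sse_lines = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": " there"}}]}',
    "data: [DONE]",
]

parts = []
for line in sse_lines:
    payload = line[len("data: "):]
    if payload == "[DONE]":  # sentinel marking the end of the stream
        break
    chunk = json.loads(payload)
    # Each chunk's delta holds the next fragment of the assistant reply.
    parts.append(chunk["choices"][0]["delta"].get("content", ""))

reply = "".join(parts)
```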

## 5. Response Example

```json
{
	"id": "chatcmpl-1234567890",
	"object": "chat.completion",
	"created": 1699999999,
	"model": "Doubao-1.5-pro-32k",
	"choices": [
		{
			"index": 0,
			"message": {
				"role": "assistant",
				"content": "Quantum mechanics is a branch of physics that studies the microscopic world..."
			},
			"finish_reason": "stop"
		}
	],
	"usage": {
		"prompt_tokens": 10,
		"completion_tokens": 30,
		"total_tokens": 40
	}
}
```
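A client typically needs three things from this payload: the reply text, the `finish_reason` (a value of `length` means the reply was truncated by `max_tokens`), and the token usage. A minimal sketch over a trimmed copy of the example response:

```python
import json

# A trimmed chat.completion payload, matching the shape of the example above.
raw_json = """
{
  "choices": [
    {"message": {"role": "assistant",
                 "content": "Quantum mechanics is a branch of physics that studies the microscopic world..."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 10, "completion_tokens": 30, "total_tokens": 40}
}
"""

response = json.loads(raw_json)
reply = response["choices"][0]["message"]["content"]
finish = response["choices"][0]["finish_reason"]   # "length" => cut off by max_tokens
total_tokens = response["usage"]["total_tokens"]   # useful for usage tracking
```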

