# Doubao

## 1. Overview

Doubao, developed by ByteDance, is a chat completion API that is compatible with the OpenAI interface format.

**Available model list:**

* `Doubao-1.5-pro-32k`
* `Doubao-1.5-pro-256k`
* `Doubao-1.5-lite-32k`
* `Doubao-pro-32k`

{% hint style="info" %} <mark style="color:blue;">**Note**</mark>

<mark style="color:blue;">This API is compatible with the OpenAI interface format.</mark>
{% endhint %}

## 2. Request Description

* **Request method**: `POST`
* **Request address**: `https://gateway.theturbo.ai/v1/chat/completions`

## 3. Input Parameters

### 3.1 Header Parameters

<table><thead><tr><th width="188">Parameter Name</th><th width="85">Type</th><th width="101">Required</th><th width="217">Description</th><th>Example Value</th></tr></thead><tbody><tr><td><code>Content-Type</code></td><td>string</td><td>Yes</td><td>Set the request header type, which must be <code>application/json</code></td><td><code>application/json</code></td></tr><tr><td><code>Accept</code></td><td>string</td><td>Yes</td><td>Set the response type, which is recommended to be unified as <code>application/json</code></td><td><code>application/json</code></td></tr><tr><td><code>Authorization</code></td><td>string</td><td>Yes</td><td>API_KEY required for authentication. Format: <code>Bearer $YOUR_API_KEY</code></td><td><code>Bearer $YOUR_API_KEY</code></td></tr></tbody></table>

### 3.2 Body Parameters (application/json)

<table><thead><tr><th width="185">Parameter Name</th><th width="96">Type</th><th width="97">Required</th><th width="210">Description</th><th>Example</th></tr></thead><tbody><tr><td>model</td><td>string</td><td>Yes</td><td>The model ID to use. See available models listed in the <a href="#id-1.-overview">Overview</a> for details, such as <code>Doubao-1.5-pro-32k</code>.</td><td><code>Doubao-1.5-pro-32k</code></td></tr><tr><td>messages</td><td>array</td><td>Yes</td><td>Chat message list, compatible with the OpenAI interface format. Each object in the array contains <code>role</code> and <code>content</code>.</td><td><code>[{"role": "user","content": "hello"}]</code></td></tr><tr><td>role</td><td>string</td><td>No</td><td>Message role. Optional values: <code>system</code>, <code>user</code>, <code>assistant</code>.</td><td><code>user</code></td></tr><tr><td>content</td><td>string</td><td>No</td><td>The specific content of the message.</td><td><code>Hello, please tell me a joke.</code></td></tr><tr><td>temperature</td><td>number</td><td>No</td><td>Sampling temperature, taking a value between <code>0</code> and <code>2</code>. Larger values make the output more random; smaller values make it more focused and deterministic.</td><td><code>0.7</code></td></tr><tr><td>top_p</td><td>number</td><td>No</td><td>An alternative way to adjust the sampling distribution, taking a value between <code>0</code> and <code>1</code>. It is usually adjusted instead of <code>temperature</code>, not together with it.</td><td><code>0.9</code></td></tr><tr><td>n</td><td>number</td><td>No</td><td>How many replies to generate for each input message.</td><td><code>1</code></td></tr><tr><td>stream</td><td>boolean</td><td>No</td><td>Whether to enable streaming output. When set to <code>true</code>, returns streaming data similar to ChatGPT.</td><td><code>false</code></td></tr><tr><td>stop</td><td>string</td><td>No</td><td>Up to 4 stop sequences can be specified. Generation stops as soon as one of them appears in the output.</td><td><code>"\n"</code></td></tr><tr><td>max_tokens</td><td>number</td><td>No</td><td>The maximum number of tokens that can be generated in a single reply, subject to the model's context length limit.</td><td><code>1024</code></td></tr><tr><td>presence_penalty</td><td>number</td><td>No</td><td>A value between <code>-2.0</code> and <code>2.0</code>. A positive value encourages the model to introduce new topics; a negative value makes new topics less likely.</td><td><code>0</code></td></tr><tr><td>frequency_penalty</td><td>number</td><td>No</td><td>A value between <code>-2.0</code> and <code>2.0</code>. A positive value reduces the model's tendency to repeat the same phrases; a negative value increases it.</td><td><code>0</code></td></tr></tbody></table>
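The optional sampling parameters can be combined freely in one request body. As a minimal sketch in Go (the same language as the request example below), the snippet marshals a hypothetical payload; the struct names `ChatRequest` and `Message` are illustrative, not part of any SDK, and fields left at their zero value are simply omitted from the JSON, mirroring how optional parameters may be left out of the request:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Message mirrors one element of the "messages" array.
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// ChatRequest mirrors the body parameters above; optional fields
// carry `omitempty` so unset values are dropped from the JSON.
type ChatRequest struct {
	Model       string    `json:"model"`
	Messages    []Message `json:"messages"`
	Temperature float64   `json:"temperature,omitempty"`
	TopP        float64   `json:"top_p,omitempty"`
	Stop        []string  `json:"stop,omitempty"`
	MaxTokens   int       `json:"max_tokens,omitempty"`
}

// buildRequest assembles a sample body with only temperature and
// max_tokens set; top_p and stop are omitted automatically.
func buildRequest() string {
	req := ChatRequest{
		Model:       "Doubao-1.5-pro-32k",
		Messages:    []Message{{Role: "user", Content: "Hello"}},
		Temperature: 0.7,
		MaxTokens:   1024,
	}
	body, _ := json.Marshal(req) // marshaling this struct cannot fail
	return string(body)
}

func main() {
	fmt.Println(buildRequest())
}
```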

## 4. Request Example

{% tabs %}
{% tab title="HTTP" %}

```http
POST /v1/chat/completions
Content-Type: application/json
Accept: application/json
Authorization: Bearer $YOUR_API_KEY

{
	"model": "Doubao-1.5-pro-32k",
	"messages": [
		{
			"role": "user",
			"content": "Hello, can you explain quantum mechanics to me?"
		}
	],
	"temperature": 0.7,
	"max_tokens": 1024
}
```

{% endtab %}

{% tab title="Shell" %}

```sh
curl https://gateway.theturbo.ai/v1/chat/completions \
	-H "Content-Type: application/json" \
	-H "Accept: application/json" \
	-H "Authorization: Bearer $YOUR_API_KEY" \
	-d "{
	\"model\": \"Doubao-1.5-pro-32k\",
	\"messages\": [{
		\"role\": \"user\",
		\"content\": \"Hello, can you explain quantum mechanics to me?\"
	}]
}"
```

{% endtab %}

{% tab title="Go" %}

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

const (
	YOUR_API_KEY    = "sk-123456789012345678901234567890123456789012345678"
	REQUEST_PAYLOAD = `{
	"model": "Doubao-1.5-pro-32k",
	"messages": [{
		"role": "user",
		"content": "Hello, can you explain quantum mechanics to me?"
	}],
	"temperature": 0.7,
	"max_tokens": 1024
}`
)

func main() {
	requestURL := "https://gateway.theturbo.ai/v1/chat/completions"
	requestMethod := "POST"
	requestPayload := strings.NewReader(REQUEST_PAYLOAD)

	req, err := http.NewRequest(requestMethod, requestURL, requestPayload)
	if err != nil {
		fmt.Println("Create request failed, err:", err)
		return
	}

	req.Header.Add("Content-Type", "application/json")
	req.Header.Add("Accept", "application/json")
	req.Header.Add("Authorization", "Bearer "+YOUR_API_KEY)

	client := &http.Client{}

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("Do request failed, err:", err)
		return
	}
	defer resp.Body.Close()

	respBodyBytes, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("Read response body failed, err:", err)
		return
	}
	fmt.Println(string(respBodyBytes))
}
```

{% endtab %}
{% endtabs %}
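When `stream` is set to `true`, the response arrives incrementally rather than as a single JSON object. Assuming the endpoint follows the OpenAI-style server-sent events convention (each chunk on a `data: ` line, terminated by `data: [DONE]`), the Go sketch below splits such a body into its JSON chunks. It parses a canned sample string rather than a live connection, and the sample chunk shapes are illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// extractData pulls the JSON payloads out of an OpenAI-style SSE body,
// skipping blank lines and stopping at the [DONE] sentinel.
func extractData(body string) []string {
	var chunks []string
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue // blank separator lines between events
		}
		payload := strings.TrimPrefix(line, "data: ")
		if payload == "[DONE]" {
			break
		}
		chunks = append(chunks, payload)
	}
	return chunks
}

func main() {
	// Hypothetical two-chunk stream, abridged for illustration.
	sample := "data: {\"choices\":[{\"delta\":{\"content\":\"Hel\"}}]}\n\n" +
		"data: {\"choices\":[{\"delta\":{\"content\":\"lo\"}}]}\n\n" +
		"data: [DONE]\n"
	for _, chunk := range extractData(sample) {
		fmt.Println(chunk)
	}
}
```

In a real client the same loop would run over `bufio.NewScanner(resp.Body)` while the connection stays open.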

## 5. Response Example

```json
{
	"id": "chatcmpl-1234567890",
	"object": "chat.completion",
	"created": 1699999999,
	"model": "Doubao-1.5-pro-32k",
	"choices": [
		{
			"message": {
				"role": "assistant",
				"content": "Quantum mechanics is a branch of physics that studies the microscopic world..."
			},
			"finish_reason": "stop"
		}
	],
	"usage": {
		"prompt_tokens": 10,
		"completion_tokens": 30,
		"total_tokens": 40
	}
}
```
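To consume the response programmatically, the fields shown above can be mapped onto Go structs and extracted with `encoding/json`. A minimal sketch, covering only the fields from the example (the struct names are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ChatResponse covers the fields shown in the response example.
type ChatResponse struct {
	ID      string `json:"id"`
	Model   string `json:"model"`
	Choices []struct {
		Message struct {
			Role    string `json:"role"`
			Content string `json:"content"`
		} `json:"message"`
		FinishReason string `json:"finish_reason"`
	} `json:"choices"`
	Usage struct {
		PromptTokens     int `json:"prompt_tokens"`
		CompletionTokens int `json:"completion_tokens"`
		TotalTokens      int `json:"total_tokens"`
	} `json:"usage"`
}

// parseResponse decodes a raw response body into ChatResponse.
func parseResponse(raw []byte) (ChatResponse, error) {
	var r ChatResponse
	err := json.Unmarshal(raw, &r)
	return r, err
}

func main() {
	raw := []byte(`{"id":"chatcmpl-1234567890","model":"Doubao-1.5-pro-32k",` +
		`"choices":[{"message":{"role":"assistant","content":"Quantum mechanics is..."},` +
		`"finish_reason":"stop"}],"usage":{"prompt_tokens":10,"completion_tokens":30,"total_tokens":40}}`)
	resp, err := parseResponse(raw)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	// The assistant's reply and the token accounting.
	fmt.Println(resp.Choices[0].Message.Content)
	fmt.Println(resp.Usage.TotalTokens)
}
```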
