# Chat Completion (Anthropic Protocol)

## 1. Overview

Claude is a family of large language models developed by Anthropic with strong conversational and writing capabilities. It understands context, generates coherent text, writes code, and excels at logical reasoning and analysis. It follows safety and ethical guidelines, clearly identifies itself as an AI assistant, supports multilingual communication, and can handle complex tasks and long conversations.

{% hint style="success" %}
This API conforms to the Anthropic Claude interface format specification and supports all official parameters.
{% endhint %}

{% hint style="info" %}
This document only lists a subset of parameters. For the full parameter list, refer to the [official documentation](https://platform.claude.com/docs/en/api/messages/create).
{% endhint %}

**Model List:**

* `claude-sonnet-4-20250514`
* `claude-sonnet-4-5-20250929`
* `claude-haiku-4-5-20251001`
* `claude-opus-4-5-20251101`
* `claude-opus-4-6`
* `claude-sonnet-4-6`
* `claude-opus-4-7`

## 2. Request Description

* **Request Method**: `POST`
* **Request URL**:

  > `https://gateway.theturbo.ai/v1/messages`

{% hint style="info" %}
To ensure concurrent resource availability, the backend uses multi-account load balancing. To improve cache hit rates in multi-turn conversation mode, include the HTTP request header `X-Conversation-Id` with a random string in your request. The platform will preferentially route requests to the same backend account. [Reference Documentation](/api/compute/aig/gateway-features/cache-optimization.md)
{% endhint %}
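A conversation ID only needs to be generated once per conversation and reused on every request in that conversation. A minimal sketch of building the request headers (the header name comes from the hint above; `uuid4` is just one convenient way to get a sufficiently random string):

```python
import uuid

# Generate the ID once when the conversation starts, then reuse it
# for every subsequent request in the same conversation.
conversation_id = uuid.uuid4().hex

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "x-api-key": "$YOUR_API_KEY",
    # Routes follow-up turns to the same backend account to improve cache hits.
    "X-Conversation-Id": conversation_id,
}
```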

***

## 3. Request Parameters

### 3.1 Header Parameters

| Parameter Name | Type   | Required | Description                                                   | Example            |
| -------------- | ------ | -------- | ------------------------------------------------------------- | ------------------ |
| `Content-Type` | string | Yes      | Request body type; must be `application/json`                 | `application/json` |
| `Accept`       | string | Yes      | Expected response type; `application/json` is recommended     | `application/json` |
| `x-api-key`    | string | Yes      | API key used for authentication                               | `$YOUR_API_KEY`    |

***

### 3.2 Body Parameters (application/json)

| Parameter Name  | Type    | Required | Description                                                                                                                                    | Example                                                                           |
| --------------- | ------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- |
| **model**       | string  | Yes      | The model ID to use. See the available versions listed in [Overview](#id-1.-overview), e.g. `claude-haiku-4-5-20251001`.                       | `claude-haiku-4-5-20251001`                                                       |
| **messages**    | array   | Yes      | List of chat messages in Anthropic format. Each object in the array contains `role` and `content`.                                             | `[{"role": "user","content": [{"type":"text","text":"Hello, tell me a joke."}]}]` |
| message.role    | string  | Yes      | Message role; one of `user` or `assistant`.                                                                                                    | `user`                                                                            |
| message.content | string \| array | Yes | The content of the message: either a plain string or an array of content blocks.                                                         | `[{"type":"text","text":"Hello, tell me a joke."}]`                               |
| system          | array   | No       | System prompt.                                                                                                                                 | `[{"type":"text","text":"You are a friendly AI assistant"}]`                      |
| temperature     | number  | No       | Sampling temperature, range `0–1` (defaults to `1.0`). Higher values produce more random output; lower values produce more focused and deterministic output. | `0.7`                                                             |
| top\_p          | number  | No       | Nucleus sampling, range `0–1`. Typically used as an alternative to `temperature`, not together with it.                                        | `0.9`                                                                             |
| stream          | boolean | No       | Whether to enable streaming output. When set to `true`, the response is returned as a stream of server-sent events.                            | `false`                                                                           |
| max\_tokens     | number  | Yes      | Maximum number of tokens to generate in a single response, limited by the model's context length.                                              | `8192`                                                                            |

***
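When `stream` is `true`, the response body is a server-sent-event stream in which incremental text arrives in `content_block_delta` events carrying `text_delta` payloads. A minimal sketch of reassembling the full reply from such events (the sample lines below are illustrative, not captured output):

```python
import json

def collect_text(sse_lines):
    """Concatenate text deltas from Anthropic-style streaming events."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip "event: ..." lines and keep-alives
        event = json.loads(line[len("data: "):])
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                parts.append(delta["text"])
    return "".join(parts)

sample = [
    'event: content_block_delta',
    'data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "Hello"}}',
    'data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": ", world"}}',
    'data: {"type": "message_stop"}',
]
print(collect_text(sample))  # Hello, world
```

In practice the official SDKs handle this parsing for you; the sketch is only meant to show the event shapes on the wire.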

## 4. Request Examples

### 4.1 Chat Conversation

{% tabs %}
{% tab title="HTTP" %}

```http
POST /v1/messages
Content-Type: application/json
Accept: application/json
x-api-key: $YOUR_API_KEY

{
	"model": "claude-haiku-4-5-20251001",
	"max_tokens": 4096,
	"system": [{
		"type": "text",
		"text": "You are a friendly AI assistant"
	}],
	"messages": [{
		"role": "user",
		"content": [{
			"type": "text",
			"text": "Hello, please give me an introduction to quantum mechanics"
		}]
	}]
}
```

{% endtab %}

{% tab title="Shell" %}

```sh
curl https://gateway.theturbo.ai/v1/messages \
	-H "Content-Type: application/json" \
	-H "Accept: application/json" \
	-H "x-api-key: $YOUR_API_KEY" \
	-d '{
	"model": "claude-haiku-4-5-20251001",
	"max_tokens": 4096,
	"system": [{
		"type": "text",
		"text": "You are a friendly AI assistant"
	}],
	"messages": [{
		"role": "user",
		"content": [{
			"type": "text",
			"text": "Hello, please give me an introduction to quantum mechanics"
		}]
	}]
}'
```

{% endtab %}

{% tab title="Go" %}

```go
package main

import (
	"context"
	"fmt"

	"github.com/anthropics/anthropic-sdk-go"
	"github.com/anthropics/anthropic-sdk-go/option"
)

func main() {
	apiKey := "sk-123456789012345678901234567890123456789012345678"

	client := anthropic.NewClient(
		option.WithAPIKey(apiKey),
		option.WithBaseURL("https://gateway.theturbo.ai"),
	)

	resp, err := client.Messages.New(
		context.Background(),
		anthropic.MessageNewParams{
			Model:     "claude-haiku-4-5-20251001",
			MaxTokens: 4096,
			System: []anthropic.TextBlockParam{
				{
					Type: "text",
					Text: "You are a friendly AI assistant",
				},
			},
			Messages: []anthropic.MessageParam{
				anthropic.NewUserMessage(anthropic.NewTextBlock("Hello, please give me an introduction to quantum mechanics")),
			},
		},
	)

	if err != nil {
		fmt.Println("error:", err)
		return
	}

	for _, block := range resp.Content {
		if block.Type == "text" {
			fmt.Println("💬 Assistant reply:")
			fmt.Println(block.Text)
		}
	}

	fmt.Println("\n📊 Token usage:")
	fmt.Printf("  - Input tokens: %d\n", resp.Usage.InputTokens)
	fmt.Printf("  - Output tokens: %d\n", resp.Usage.OutputTokens)
}

```

{% endtab %}

{% tab title="Python" %}

```python
#!/usr/bin/env python3

from anthropic import Anthropic

def main():
    api_key = "sk-123456789012345678901234567890123456789012345678"

    client = Anthropic(
        api_key=api_key,
        base_url="https://gateway.theturbo.ai"
    )

    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=4096,
        system=[{
            "type": "text",
            "text": "You are a friendly AI assistant"
        }],
        messages=[{
            "role": "user",
            "content": "Hello, please give me an introduction to quantum mechanics"
        }]
    )

    for block in response.content:
        if block.type == "text":
            print("💬 Assistant reply:")
            print(block.text)

    print("\n📊 Token usage:")
    print(f"  - Input tokens: {response.usage.input_tokens}")
    print(f"  - Output tokens: {response.usage.output_tokens}")

if __name__ == "__main__":
    main()

```

{% endtab %}
{% endtabs %}

## 5. Response Example

```json
{
	"model": "claude-haiku-4-5-20251001",
	"id": "msg_bdrk_01AZpXbu4crT5R6gsYJwk6KD",
	"type": "message",
	"role": "assistant",
	"content": [{
		"type": "text",
		"text": "Quantum mechanics is a branch of physics that studies the microscopic world..."
	}],
	"stop_reason": "end_turn",
	"stop_sequence": null,
	"usage": {
		"input_tokens": 36,
		"cache_creation_input_tokens": 0,
		"cache_read_input_tokens": 0,
		"output_tokens": 366
	}
}
```
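Client code typically checks `stop_reason` to detect truncation and reads `usage` for token accounting. A minimal sketch against a response shaped like the example above (the JSON literal is an inlined sample, not a live response):

```python
import json

raw = '''{
  "model": "claude-haiku-4-5-20251001",
  "id": "msg_0123456789",
  "type": "message",
  "role": "assistant",
  "content": [{"type": "text", "text": "Quantum mechanics is a branch of physics..."}],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {"input_tokens": 36, "output_tokens": 366}
}'''

response = json.loads(raw)

# Join all text blocks; a response may contain more than one content block.
reply = "".join(b["text"] for b in response["content"] if b["type"] == "text")

# stop_reason "max_tokens" means the reply was cut off by the max_tokens limit.
truncated = response["stop_reason"] == "max_tokens"

usage = response["usage"]
total_tokens = usage["input_tokens"] + usage["output_tokens"]
```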
