# Content Generation (Gemini Native Protocol)

## 1. Overview

Gemini is Google's family of multimodal AI models, designed to handle multiple data types including text, images, audio, video, and code.

{% hint style="success" %}
This API conforms to the Google Gemini interface format and supports all official parameters.
{% endhint %}

{% hint style="info" %}
This document lists only a subset of the parameters; see the [official documentation](https://ai.google.dev/api/generate-content) for the full parameter list.
{% endhint %}

**Model list:**

* `gemini-2.5-flash`
* `gemini-2.5-pro`
* `gemini-2.5-flash-lite`
* `gemini-2.5-flash-lite-preview-06-17`
* `gemini-3-pro-preview`
* `gemini-3-flash-preview`
* `gemini-3.1-pro-preview`
* `gemini-3.1-flash-lite-preview`

## 2. Request Details

* **Method**: `POST`
* **Endpoint**:

  > `https://gateway.theturbo.ai/v1/v1beta/models/{model}:generateContent`
* **Endpoint (streaming)**:

  > `https://gateway.theturbo.ai/v1/v1beta/models/{model}:streamGenerateContent`

{% hint style="info" %}
To guarantee concurrency capacity, the platform load-balances requests across multiple backend accounts. To improve cache hit rates in multi-turn conversations, include the HTTP header `X-Conversation-Id` set to a random string with each request; the platform will then preferentially route requests carrying the same ID to the same backend account. [Reference documentation](/api-reference/cn/compute/aig/gateway-features/cache-optimization.md)
{% endhint %}
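
For illustration, the cache-affinity header can be attached to a raw HTTP request as follows. This is a minimal sketch using only the Python standard library; the request is built but not sent, and `$YOUR_API_KEY` is a placeholder:

```python
import json
import urllib.request
import uuid

# Generate one random ID per conversation and reuse it on every turn,
# so the gateway routes all turns of this chat to the same backend account.
conversation_id = uuid.uuid4().hex

def build_request(model: str, text: str) -> urllib.request.Request:
    """Build a generateContent request carrying X-Conversation-Id."""
    body = json.dumps(
        {"contents": [{"role": "user", "parts": [{"text": text}]}]}
    ).encode("utf-8")
    return urllib.request.Request(
        f"https://gateway.theturbo.ai/v1/v1beta/models/{model}:generateContent",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "x-goog-api-key": "$YOUR_API_KEY",  # placeholder, not a real key
            "X-Conversation-Id": conversation_id,
        },
    )

req = build_request("gemini-2.5-flash", "hello")
# Send with: urllib.request.urlopen(req)
```

Reusing the same `conversation_id` value across turns is what allows the gateway to route follow-up requests to the same backend account.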

***

## 3. Request Parameters

### 3.1 Header Parameters

| Parameter        | Type   | Required | Description                                                  | Example            |
| ---------------- | ------ | -------- | ------------------------------------------------------------ | ------------------ |
| `Content-Type`   | string | Yes      | Request content type; must be `application/json`             | `application/json` |
| `Accept`         | string | Yes      | Response type; `application/json` is recommended             | `application/json` |
| `x-goog-api-key` | string | Yes      | API key used for authentication, in the form `$YOUR_API_KEY` | `$YOUR_API_KEY`    |

***

### 3.2 Body Parameters (application/json)

| Parameter                         | Type   | Required | Description                                                  | Example                                                        |
| --------------------------------- | ------ | -------- | ------------------------------------------------------------ | -------------------------------------------------------------- |
| **contents**                      | array  | Yes      | The content of the current conversation with the model. For single-turn queries this is a single instance; for multi-turn queries (such as chat) it is a repeated field containing the conversation history and the latest request. | `[{"role":"user","parts":[{"text":"A cute baby sea otter"}]}]` |
| content.role                      | string | Yes      | The message role. Must be `user` or `model`.                 | `user`                                                         |
| content.parts                     | array  | No       | The ordered parts that make up a single message. Parts may have different MIME types. | `[{"text":"A cute baby sea otter"}]`                           |
| content.parts.text                | string | No       | Inline text.                                                 | `A cute baby sea otter`                                        |
| content.parts.inlineData          | struct | No       | Inline media bytes.                                          |                                                                |
| content.parts.inlineData.mimeType | string | Yes      | The IANA-standard MIME type of the source data.              | `image/png`                                                    |
| content.parts.inlineData.data     | string | Yes      | The raw bytes of the media, as a base64-encoded string.      |                                                                |
| generationConfig                  | struct | No       | Configuration options for model generation and output.       |                                                                |
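
Combining the fields above, a multi-turn request body alternates `user` and `model` turns, and an image can be attached through `inlineData`. A sketch (the base64 payload below is a stand-in, not real image bytes):

```python
import base64
import json

# Multi-turn conversation: the history plus the latest user turn.
body = {
    "contents": [
        {"role": "user", "parts": [
            {"text": "What animal is this?"},
            {"inlineData": {
                "mimeType": "image/png",
                # Stand-in bytes; supply real image bytes in practice.
                "data": base64.b64encode(b"<png bytes>").decode("ascii"),
            }},
        ]},
        {"role": "model", "parts": [{"text": "It looks like a sea otter."}]},
        {"role": "user", "parts": [{"text": "Describe it in one sentence."}]},
    ],
    "generationConfig": {"temperature": 0.7, "maxOutputTokens": 1024},
}

payload = json.dumps(body)  # ready to POST as the request body
```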

***

## 4. Request Examples

### 4.1 Chat Conversation

{% tabs %}
{% tab title="HTTP" %}

```http
POST /v1/v1beta/models/gemini-2.5-flash:generateContent
Content-Type: application/json
Accept: application/json
x-goog-api-key: $YOUR_API_KEY

{
	"contents": [{
		"role": "user",
		"parts": [{
			"text": "Hello, please give me a quick introduction to quantum mechanics"
		}]
	}],
	"generationConfig": {
		"temperature": 0.7,
		"maxOutputTokens": 1024
	}
}
```

{% endtab %}

{% tab title="Shell" %}

```sh
curl https://gateway.theturbo.ai/v1/v1beta/models/gemini-2.5-flash:generateContent \
	-H "Content-Type: application/json" \
	-H "Accept: application/json" \
	-H "x-goog-api-key: $YOUR_API_KEY" \
	-d "{
	\"contents\": [{
		\"role\": \"user\",
		\"parts\": [{
			\"text\": \"Hello, please give me a quick introduction to quantum mechanics\"
		}]
	}],
	\"generationConfig\": {
		\"temperature\": 0.7,
		\"maxOutputTokens\": 1024
	}
}"
```

{% endtab %}

{% tab title="Go" %}

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/genai"
)

func main() {

	apiKey := "sk-123456789012345678901234567890123456789012345678"

	client, err := genai.NewClient(
		context.Background(),
		&genai.ClientConfig{
			APIKey:  apiKey,
			Backend: genai.BackendGeminiAPI,
			HTTPOptions: genai.HTTPOptions{
				BaseURL: "https://gateway.theturbo.ai/v1",
			},
		})
	if err != nil {
		fmt.Println("error creating client:", err)
		return
	}

	resp, err := client.Models.GenerateContent(
		context.Background(),
		"gemini-2.5-flash",
		[]*genai.Content{
			{
				Role: "user",
				Parts: []*genai.Part{
					{Text: "Hello, please give me a quick introduction to quantum mechanics"},
				},
			},
		},
		&genai.GenerateContentConfig{
			Temperature:     genai.Ptr(float32(0.7)),
			MaxOutputTokens: 1024,
		},
	)
	if err != nil {
		fmt.Println("error:", err)
		return
	}

	if len(resp.Candidates) > 0 && len(resp.Candidates[0].Content.Parts) > 0 {
		fmt.Println("💬 Assistant reply:")
		for _, part := range resp.Candidates[0].Content.Parts {
			if part.Text != "" {
				fmt.Println(part.Text)
			}
		}
	}

	if resp.UsageMetadata != nil {
		fmt.Println("\n📊 Token usage:")
		fmt.Printf("  - Prompt tokens: %d\n", resp.UsageMetadata.PromptTokenCount)
		fmt.Printf("  - Completion tokens: %d\n", resp.UsageMetadata.CandidatesTokenCount)
		fmt.Printf("  - Total tokens: %d\n", resp.UsageMetadata.TotalTokenCount)
	}
}

```

{% endtab %}

{% tab title="Python" %}

```python
#!/usr/bin/env python3

from google import genai

def main():
    api_key = "sk-123456789012345678901234567890123456789012345678"

    client = genai.Client(
        api_key=api_key,
        http_options={
            "base_url": "https://gateway.theturbo.ai/v1"
        }
    )

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="Hello, please give me a quick introduction to quantum mechanics",
        config=genai.types.GenerateContentConfig(
            temperature=0.7,
            max_output_tokens=1024
        )
    )

    print("💬 Assistant reply:")
    print(response.text)

    if response.usage_metadata:
        print("\n📊 Token usage:")
        print(f"  - Prompt tokens: {response.usage_metadata.prompt_token_count}")
        print(f"  - Completion tokens: {response.usage_metadata.candidates_token_count}")
        print(f"  - Total tokens: {response.usage_metadata.total_token_count}")

if __name__ == "__main__":
    main()

```

{% endtab %}
{% endtabs %}
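
The streaming endpoint (`:streamGenerateContent`) returns the response incrementally. Against the official Gemini API, appending `?alt=sse` makes each chunk arrive as a server-sent-event `data:` line; assuming the gateway behaves the same way, each line can be decoded as below (the helper name and sample chunk are illustrative):

```python
import json

def parse_sse_line(line: str):
    """Return the concatenated text parts from one 'data:' SSE line of a
    streamGenerateContent response, or None for non-data lines."""
    if not line.startswith("data:"):
        return None
    chunk = json.loads(line[len("data:"):].strip())
    candidates = chunk.get("candidates") or [{}]
    parts = candidates[0].get("content", {}).get("parts", [])
    return "".join(p.get("text", "") for p in parts)

# A chunk as it might appear on the wire (illustrative):
sample = 'data: {"candidates":[{"content":{"role":"model","parts":[{"text":"Quantum"}]}}]}'
print(parse_sse_line(sample))  # → Quantum
```

With the `requests` library, you would iterate `resp.iter_lines()` on a `stream=True` request and feed each decoded line through this parser; keep-alive and blank lines simply return `None`.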

## 5. Response Example

```json
{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          {
            "text": "Quantum mechanics is the branch of physics that studies the microscopic world…"
          }
        ]
      },
      "finishReason": "MAX_TOKENS",
      "avgLogprobs": -2.1121876037198732
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 5,
    "candidatesTokenCount": 153,
    "totalTokenCount": 1027,
    "trafficType": "ON_DEMAND",
    "promptTokensDetails": [
      {
        "modality": "TEXT",
        "tokenCount": 5
      }
    ],
    "candidatesTokensDetails": [
      {
        "modality": "TEXT",
        "tokenCount": 153
      }
    ],
    "thoughtsTokenCount": 869
  },
  "modelVersion": "gemini-2.5-flash",
  "createTime": "2025-12-11T10:01:58.402576Z",
  "responseId": "lpY6aZDJGOTCgeAP1J-e4Qg"
}
```
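
One detail worth noting in the usage block: `totalTokenCount` includes the model's hidden reasoning tokens (`thoughtsTokenCount`), so it can be much larger than prompt plus visible output. Checking with the numbers from the example above:

```python
prompt_tokens = 5        # promptTokenCount
candidate_tokens = 153   # candidatesTokenCount
thought_tokens = 869     # thoughtsTokenCount

# totalTokenCount = prompt + visible candidates + hidden thoughts
total = prompt_tokens + candidate_tokens + thought_tokens
print(total)  # → 1027, matching totalTokenCount in the response
```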


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.console.zenlayer.com/api-reference/cn/compute/aig/chat-completion/google-gemini/google-gemini-generate-content.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
