DeepSeek
1. Overview
DeepSeek's most affordable large models offer low prompt and generation costs, making them well suited to Chinese-English translation workloads.
Available models:
- deepseek-chat
- deepseek-coder
- deepseek-ai/DeepSeek-V2.5 (open-source model)
- deepseek-ai/DeepSeek-V3 (open-source model)
2. Request Description
Request method: POST
Request address: https://gateway.theturbo.ai/v1/chat/completions
3. Input Parameters
3.1 Header Parameters
| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| Content-Type | string | Yes | Request header type; must be application/json. | application/json |
| Accept | string | Yes | Response type; recommended to keep as application/json. | application/json |
| Authorization | string | Yes | API key used for authentication. Format: Bearer $YOUR_API_KEY. | Bearer $YOUR_API_KEY |
3.2 Body Parameters (application/json)
| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| model | string | Yes | ID of the model to use; see the model list in the Overview, e.g. deepseek-chat. | deepseek-chat |
| messages | array | Yes | Chat message list, compatible with the OpenAI interface format. Each object in the array contains role and content. | [{"role": "user", "content": "hello"}] |
| messages[].role | string | No | Message role. Allowed values: system, user, assistant. | user |
| messages[].content | string | No | The text content of the message. | Hello, please tell me a joke. |
| temperature | number | No | Sampling temperature, between 0 and 2. Higher values make the output more random; lower values make it more focused and deterministic. | 0.7 |
| top_p | number | No | Nucleus-sampling parameter, between 0 and 1. Usually set as an alternative to temperature. | 0.9 |
| n | number | No | Number of replies to generate for each input message. | 1 |
| stream | boolean | No | Whether to enable streaming output. When set to true, returns streaming data similar to ChatGPT. | false |
| stop | string or array | No | Up to 4 strings. Generation stops as soon as one of them appears in the output. | "\n" |
| max_tokens | number | No | Maximum number of tokens to generate in a single reply, subject to the model's context-length limit. | 1024 |
| presence_penalty | number | No | -2.0 to 2.0. Positive values encourage the model to introduce new topics; negative values make new topics less likely. | 0 |
| frequency_penalty | number | No | -2.0 to 2.0. Positive values reduce verbatim repetition; negative values make repetition more likely. | 0 |
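When stream is set to true, OpenAI-compatible endpoints typically return server-sent events: one `data: {...}` line per chunk, with incremental text under choices[].delta.content, terminated by `data: [DONE]`. The exact chunk shape for this gateway is an assumption here (the table only says the stream is "similar to ChatGPT"); the sketch below shows how such a stream could be reassembled into the full reply:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// streamChunk mirrors the assumed OpenAI-compatible streaming chunk shape.
type streamChunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

// collectStream reads "data: {...}" lines from an SSE-style body and
// concatenates the delta contents until the "[DONE]" sentinel.
func collectStream(body string) string {
	var sb strings.Builder
	scanner := bufio.NewScanner(strings.NewReader(body))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if !strings.HasPrefix(line, "data:") {
			continue
		}
		payload := strings.TrimSpace(strings.TrimPrefix(line, "data:"))
		if payload == "[DONE]" {
			break
		}
		var chunk streamChunk
		if err := json.Unmarshal([]byte(payload), &chunk); err != nil {
			continue // skip malformed chunks
		}
		if len(chunk.Choices) > 0 {
			sb.WriteString(chunk.Choices[0].Delta.Content)
		}
	}
	return sb.String()
}

func main() {
	// A made-up two-chunk stream for illustration.
	body := "data: {\"choices\":[{\"delta\":{\"content\":\"Hello\"}}]}\n" +
		"data: {\"choices\":[{\"delta\":{\"content\":\", world\"}}]}\n" +
		"data: [DONE]\n"
	fmt.Println(collectStream(body)) // Hello, world
}
```

In a real client the same loop would run over the HTTP response body instead of a string.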
4. Request Example
POST /v1/chat/completions
Content-Type: application/json
Accept: application/json
Authorization: Bearer $YOUR_API_KEY
Request body (JSON):

```json
{
	"model": "deepseek-chat",
	"messages": [
		{
			"role": "user",
			"content": "Hello, can you explain quantum mechanics to me?"
		}
	],
	"temperature": 0.7,
	"max_tokens": 1024
}
```

curl:

```shell
curl https://gateway.theturbo.ai/v1/chat/completions \
	-H "Content-Type: application/json" \
	-H "Accept: application/json" \
	-H "Authorization: Bearer $YOUR_API_KEY" \
	-d '{
		"model": "deepseek-chat",
		"messages": [{
			"role": "user",
			"content": "Hello, can you explain quantum mechanics to me?"
		}]
	}'
```

Go:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

const (
	YOUR_API_KEY    = "sk-123456789012345678901234567890123456789012345678"
	REQUEST_PAYLOAD = `{
	"model": "deepseek-chat",
	"messages": [{
		"role": "user",
		"content": "Hello, can you explain quantum mechanics to me?"
	}],
	"temperature": 0.7,
	"max_tokens": 1024
}`
)

func main() {
	requestURL := "https://gateway.theturbo.ai/v1/chat/completions"
	req, err := http.NewRequest(http.MethodPost, requestURL, strings.NewReader(REQUEST_PAYLOAD))
	if err != nil {
		fmt.Println("Create request failed, err:", err)
		return
	}
	req.Header.Add("Content-Type", "application/json")
	req.Header.Add("Accept", "application/json")
	req.Header.Add("Authorization", "Bearer "+YOUR_API_KEY)

	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("Do request failed, err:", err)
		return
	}
	defer resp.Body.Close()

	respBodyBytes, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("Read response body failed, err:", err)
		return
	}
	fmt.Println(string(respBodyBytes))
}
```

5. Response Example
```json
{
	"id": "chatcmpl-1234567890",
	"object": "chat.completion",
	"created": 1699999999,
	"model": "deepseek-chat",
	"choices": [
		{
			"message": {
				"role": "assistant",
				"content": "Quantum mechanics is a branch of physics that studies the microscopic world..."
			},
			"finish_reason": "stop"
		}
	],
	"usage": {
		"prompt_tokens": 10,
		"completion_tokens": 30,
		"total_tokens": 40
	}
}
```
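To consume the response programmatically, the JSON above can be decoded into Go structs. This is a minimal sketch that declares only the fields it reads; the field names mirror the response example:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chatResponse declares only the response fields this sketch uses.
type chatResponse struct {
	Model   string `json:"model"`
	Choices []struct {
		Message struct {
			Role    string `json:"role"`
			Content string `json:"content"`
		} `json:"message"`
		FinishReason string `json:"finish_reason"`
	} `json:"choices"`
	Usage struct {
		PromptTokens     int `json:"prompt_tokens"`
		CompletionTokens int `json:"completion_tokens"`
		TotalTokens      int `json:"total_tokens"`
	} `json:"usage"`
}

// extractReply returns the first choice's message content and the
// total token usage from a chat-completion response body.
func extractReply(body []byte) (string, int, error) {
	var r chatResponse
	if err := json.Unmarshal(body, &r); err != nil {
		return "", 0, err
	}
	if len(r.Choices) == 0 {
		return "", r.Usage.TotalTokens, fmt.Errorf("no choices in response")
	}
	return r.Choices[0].Message.Content, r.Usage.TotalTokens, nil
}

func main() {
	// Abbreviated version of the response example above.
	body := []byte(`{
		"model": "deepseek-chat",
		"choices": [{"message": {"role": "assistant", "content": "Hello!"}, "finish_reason": "stop"}],
		"usage": {"prompt_tokens": 10, "completion_tokens": 30, "total_tokens": 40}
	}`)
	reply, tokens, err := extractReply(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(reply, tokens) // Hello! 40
}
```

In the Go request example above, `respBodyBytes` could be passed directly to a function like this instead of being printed.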