Claude
1. Overview
Claude is a large language model developed by Anthropic with strong dialogue and writing capabilities. It understands context, generates coherent text, writes code, and excels at logical reasoning and analysis. It is designed with a focus on safety and ethics and clearly identifies itself as an AI assistant. It supports multilingual communication and can handle complex tasks and long conversations.
Available model list:
- claude-3-opus-20240229
- claude-3-haiku-20240307
- claude-3-5-haiku-20241022
- claude-3-5-sonnet-20240620
- claude-3-5-sonnet-20241022
Note: This API is compatible with the OpenAI interface format.
2. Request Description
Request method: POST
Request URL: https://gateway.theturbo.ai/v1/chat/completions
3. Input Parameters
3.1 Header Parameters
| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| Content-Type | string | Yes | Request content type; must be `application/json`. | `application/json` |
| Accept | string | Yes | Response type; `application/json` is recommended. | `application/json` |
| Authorization | string | Yes | API key used for authentication. Format: `Bearer $YOUR_API_KEY`. | `Bearer $YOUR_API_KEY` |
3.2 Body Parameters (application/json)
| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| model | string | Yes | ID of the model to use (see the model list above). | `claude-3-5-haiku-20241022` |
| messages | array | Yes | Chat message list, compatible with the OpenAI interface format. Each object in the array contains `role` and `content`. | `[{"role": "user", "content": "hello"}]` |
| messages[].role | string | No | Message role. Allowed values: `system`, `user`, `assistant`. | `user` |
| messages[].content | string | No | The text content of the message. | `Hello, please tell me a joke.` |
| temperature | number | No | Sampling temperature, between `0` and `2`. Higher values make the output more random; lower values make it more focused and deterministic. | `0.7` |
| top_p | number | No | Nucleus sampling, between `0` and `1`; an alternative way to adjust the sampling distribution, usually set instead of `temperature`. | `0.9` |
| stream | boolean | No | Whether to enable streaming output. When set to `true`, returns streaming data similar to ChatGPT. | `false` |
| max_tokens | number | No | The maximum number of tokens to generate in a single reply, subject to the model's context length limit. | `1024` |
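When `stream` is set to `true`, the reply arrives as a sequence of server-sent `data:` chunks rather than a single JSON body. The sketch below parses a few illustrative chunk lines in Python; the exact delta field layout is an assumption based on the OpenAI-compatible format noted above, and the chunk contents are made up for the example:

```python
import json

# Illustrative SSE lines as they might arrive when stream=true
# (OpenAI-style delta chunks; values are invented for this sketch).
sample_stream = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    "data: [DONE]",
]

pieces = []
for line in sample_stream:
    if not line.startswith("data: "):
        continue  # skip keep-alives and blank lines
    payload = line[len("data: "):]
    if payload == "[DONE]":  # sentinel marking the end of the stream
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        pieces.append(delta["content"])

print("".join(pieces))  # → Hello!
```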
4. Request Example
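A minimal request can be sketched in Python using only the headers and body parameters documented above. `YOUR_API_KEY` is a placeholder, and the commented-out send requires the third-party `requests` package and a valid key; the block itself only assembles and prints the payload:

```python
import json

API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real API key

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

body = {
    "model": "claude-3-5-haiku-20241022",
    "messages": [{"role": "user", "content": "Hello, please tell me a joke."}],
    "temperature": 0.7,
    "stream": False,
    "max_tokens": 1024,
}

# To actually send the request (requires `pip install requests`):
# import requests
# resp = requests.post(
#     "https://gateway.theturbo.ai/v1/chat/completions",
#     headers=headers,
#     json=body,
# )
# print(resp.json()["choices"][0]["message"]["content"])

print(json.dumps(body, indent=2))
```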
5. Response Example
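Because the API is compatible with the OpenAI interface format, a successful non-streaming response should resemble the standard chat-completion shape below. All field values here are illustrative, not actual output:

```json
{
  "id": "chatcmpl-xxxxxxxx",
  "object": "chat.completion",
  "created": 1719000000,
  "model": "claude-3-5-haiku-20241022",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}
```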