claude-3-7-sonnet-20250219

This model supports prompt caching.
Price

Input: $2.85 per million tokens (list price $3)
Cached writes (5-minute TTL): $3.5625 per million tokens (list price $3.75)
Cached reads: $0.285 per million tokens (list price $0.30)
Output: $14.25 per million tokens (list price $15)
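As a quick sanity check on the rates above, here is a minimal sketch of the cost arithmetic (the `cost` helper and the token counts are illustrative, not part of the API):

```python
# Per-million-token rates from the price table above (discounted prices).
RATES = {
    "input": 2.85,
    "cached_write": 3.5625,
    "cached_read": 0.285,
    "output": 14.25,
}

def cost(tokens: int, kind: str) -> float:
    """Cost in USD for `tokens` tokens billed at the given rate."""
    return tokens / 1_000_000 * RATES[kind]

# Example: a 200,000-token prompt read from cache vs. sent fresh.
fresh = cost(200_000, "input")         # ~0.57 USD
cached = cost(200_000, "cached_read")  # ~0.057 USD
print(f"fresh: ${fresh:.4f}, cached: ${cached:.4f}")
```

Cached reads cost 10% of the fresh input rate, so repeatedly reusing a long cached prefix cuts input cost by roughly 90%.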

Use the following code example to integrate our API:

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.jiekou.ai/openai"
)

response = client.chat.completions.create(
    model="claude-3-7-sonnet-20250219",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=64000,  # up to the model's 64,000-token output limit
    temperature=0.7
)

print(response.choices[0].message.content)
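The model also supports function calling. Below is a minimal sketch of a tool definition in the OpenAI-compatible `tools` format; the `get_weather` tool and its schema are hypothetical, and this snippet only builds the request arguments rather than calling the API:

```python
# Hypothetical tool schema in the OpenAI-compatible "tools" format.
# get_weather and its parameters are illustrative, not a real service.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

# These kwargs would be passed to client.chat.completions.create(...);
# the model may then respond with a tool_calls entry instead of text.
request_kwargs = {
    "model": "claude-3-7-sonnet-20250219",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "max_tokens": 1024,
}
print(request_kwargs["tools"][0]["function"]["name"])
```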

Information

Provider: -
Quantization: -

Supported Features

Context length: 200,000 tokens
Maximum output: 64,000 tokens
Function calling: Supported
Structured output: Supported
Serverless: Supported
Input capabilities: text, image
Output capabilities: text
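Since the model accepts image input, a user message can carry an image alongside text. Here is a minimal sketch of the OpenAI-compatible multimodal message shape (the image URL is a placeholder; this only constructs the payload and does not call the API):

```python
# Multimodal user message in the OpenAI-compatible content-parts format.
# The image URL is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/photo.jpg"},
            },
        ],
    }
]
# Passed as messages=messages to client.chat.completions.create(...).
print(len(messages[0]["content"]))  # two content parts: text + image
```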