OpenAI Compatible API
Use the OpenAI SDK to call any of the models available on Kouri Ai
Kouri Ai is fully compatible with the OpenAI API format. You can use the official OpenAI SDK directly: simply change the base_url and api_key to call any supported model.
Endpoints
| Endpoint Type | URL | Description |
|---|---|---|
| Chat Completions | https://api.kourichat.com/v1/chat/completions | Chat completion API |
| Responses | https://api.kourichat.com/v1/responses | Response API |
| Standard Endpoint | https://api.kourichat.com/v1 | Recommended for SDKs (with /v1) |
| Base Endpoint | https://api.kourichat.com | For clients that append the /v1 path themselves |
When configuring base_url, make sure to add the /v1 suffix, otherwise you may get 404 errors.
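If your application reads the base URL from configuration, a small guard can prevent this class of 404s. A minimal sketch (the helper name normalize_base_url is ours, not part of any SDK):

```python
def normalize_base_url(url: str) -> str:
    # Strip trailing slashes, then append /v1 if it is not already present.
    url = url.rstrip("/")
    return url if url.endswith("/v1") else url + "/v1"

# Both calls yield https://api.kourichat.com/v1
print(normalize_base_url("https://api.kourichat.com"))
print(normalize_base_url("https://api.kourichat.com/v1"))
```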
API Selection Guide
OpenAI provides two main chat APIs. Choose based on what the model requires:
| API | Path | Supported Models | Max Timeout |
|---|---|---|---|
| Chat Completions | /v1/chat/completions | GPT-4o, GPT-4, most models | ~5 min |
| Responses | /v1/responses | gpt-5.2-pro, o3-pro, etc. | ~20 min |
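The routing rule in the table above can be captured in a few lines. A sketch (the set of Response-only models below is illustrative, taken from this page; consult the platform's model list for the authoritative set):

```python
# Models that, per the table above, only accept the Response API.
# Illustrative, not exhaustive -- check the platform's model list.
RESPONSES_ONLY = {"gpt-5.2-pro", "o3-pro"}

def endpoint_for(model: str) -> str:
    """Return the API path to use for a given model name."""
    return "/v1/responses" if model in RESPONSES_ONLY else "/v1/chat/completions"

print(endpoint_for("gpt-4o"))       # /v1/chat/completions
print(endpoint_for("gpt-5.2-pro"))  # /v1/responses
```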
Chat Completions API
Suitable for most scenarios, supports regular chat models:
```
POST /v1/chat/completions
```

Response API (Required for Reasoning Models)
Important: Some advanced models like gpt-5.2-pro and o3-pro only support the Response API, not Chat Completions.
Advantages of the Response API:
- Longer timeout: Up to ~20 minutes, suitable for complex reasoning
- Better reasoning support: Designed for o-series reasoning models
- More advanced features: Such as reasoning token budget
```
POST /v1/responses
```

For models that support both APIs (such as o1 and o3), Kouri Ai handles compatibility automatically: you can call Chat Completions and the platform will convert the request. For models that only support the Response API, however, you must call the Response API directly.
Sora Video Generation API
This API differs from the endpoints above; please refer to the official documentation or Apifox for usage details.
Quick Start
cURL Request
Chat Completions API:
```bash
curl https://api.kourichat.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxx" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Response API (required for gpt-5.2-pro, etc.):
```bash
curl https://api.kourichat.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxx" \
  -d '{
    "model": "gpt-5.2-pro",
    "input": "Explain the basic principles of quantum computing"
  }'
```

Python
Using the official OpenAI Python SDK:
Chat Completions API:
```python
from openai import OpenAI

client = OpenAI(
    base_url='https://api.kourichat.com/v1',
    api_key='sk-xxxxxxxx',  # Replace with your Kouri Ai token
)

# Standard request
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```

Response API (for gpt-5.2-pro, etc.):
```python
from openai import OpenAI

client = OpenAI(
    base_url='https://api.kourichat.com/v1',
    api_key='sk-xxxxxxxx',
)

# Use the Response API for gpt-5.2-pro
response = client.responses.create(
    model="gpt-5.2-pro",
    input="Explain the basic principles of quantum computing"
)

print(response.output_text)
```

Response API - with reasoning parameters:
```python
from openai import OpenAI

client = OpenAI(
    base_url='https://api.kourichat.com/v1',
    api_key='sk-xxxxxxxx',
)

response = client.responses.create(
    model="gpt-5.2-pro",
    input="How do I solve this math problem?",
    reasoning={
        "effort": "high"  # Reasoning depth: low, medium, high
    }
)

print(response.output_text)
```

Response API - Streaming:
```python
from openai import OpenAI

client = OpenAI(
    base_url='https://api.kourichat.com/v1',
    api_key='sk-xxxxxxxx',
)

# Streaming with the Response API
stream = client.responses.create(
    model="gpt-5.2-pro",
    input="Tell me a story",
    stream=True
)

for event in stream:
    if hasattr(event, 'type') and event.type == 'response.output_text.delta':
        print(event.delta, end="", flush=True)
```

Streaming
```python
from openai import OpenAI

client = OpenAI(
    base_url='https://api.kourichat.com/v1',
    api_key='sk-xxxxxxxx',
)

# Streaming request
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Tell me a story"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

JavaScript / Node.js
Using the official OpenAI Node.js SDK:
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.kourichat.com/v1',
  apiKey: 'sk-xxxxxxxx', // Replace with your Kouri Ai token
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'Hello!' }
    ]
  });
  console.log(response.choices[0].message.content);
}

main();
```

Streaming
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.kourichat.com/v1',
  apiKey: 'sk-xxxxxxxx',
});

async function main() {
  const stream = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Tell me a story' }],
    stream: true,
  });
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content || '';
    process.stdout.write(content);
  }
}

main();
```

LangChain Integration
Configure via environment variables:
```python
import os

os.environ["OPENAI_API_BASE"] = "https://api.kourichat.com/v1"
os.environ["OPENAI_API_KEY"] = "sk-xxxxxxxx"
```

Or configure directly in code:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    base_url="https://api.kourichat.com/v1",
    api_key="sk-xxxxxxxx",
)

response = llm.invoke("Hello!")
print(response.content)
```

LangChain's OPENAI_API_BASE environment variable also requires the /v1 suffix.
Multimodal Models
Image Understanding
Supports image URLs or Base64-encoded images:
```python
from openai import OpenAI

client = OpenAI(
    base_url='https://api.kourichat.com/v1',
    api_key='sk-xxxxxxxx',
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/image.jpg"
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
```

Using Base64 Images
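For local files, the image bytes must be wrapped in a data: URL before they go into the image_url field. A minimal helper sketch, defaulting to JPEG input (the function name to_data_url is ours, not part of the SDK):

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    # Base64-encode raw image bytes and wrap them in a data URL
    # suitable for the image_url.url field.
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"
```

The full example below does the same encoding inline.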
```python
import base64

from openai import OpenAI

client = OpenAI(
    base_url='https://api.kourichat.com/v1',
    api_key='sk-xxxxxxxx',
)

# Read and encode a local image
with open("image.jpg", "rb") as image_file:
    base64_image = base64.b64encode(image_file.read()).decode('utf-8')

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}"
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
```

Common Issues
404 Error
Make sure base_url has the /v1 suffix:
```python
# Correct
base_url='https://api.kourichat.com/v1'

# Wrong
base_url='https://api.kourichat.com'
```

Legacy SDK Compatibility
If you are using an older OpenAI SDK (< 1.0), the configuration differs slightly:
```python
import openai

openai.api_base = "https://api.kourichat.com/v1"
openai.api_key = "sk-xxxxxxxx"
```

We recommend upgrading to OpenAI SDK 1.0+ for a better experience.