Gemini API
Call Gemini models using the native Gemini protocol or the OpenAI-compatible protocol
Kouri Ai's Gemini models support both the native Google Gemini SDK protocol and an OpenAI-compatible protocol. We recommend the native protocol for better stability and richer features.
Protocol Selection
| Protocol Type | Endpoint URL | Description |
|---|---|---|
| Gemini Protocol | https://api.kourichat.com/v1beta | Native protocol, recommended, supports all models |
| OpenAI Protocol | https://api.kourichat.com/v1 | Compatible protocol for simple scenarios |
Native Gemini protocol recommended: Major applications like Dify and Chatbox support the native protocol. The OpenAI-compatible protocol is recommended only for applications that support nothing but the OpenAI format.
cURL Request (Native Protocol)
```shell
curl "https://api.kourichat.com/v1beta/models/gemini-2.5-pro:generateContent" \
  -H "x-goog-api-key: sk-xxxxxxxx" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "contents": [
      {
        "parts": [
          {
            "text": "Hello!"
          }
        ]
      }
    ]
  }'
```
Native Gemini Protocol
Python SDK (New google-genai)
Using the latest google-genai SDK:
```python
from google import genai
from google.genai import types

client = genai.Client(
    api_key="sk-xxxxxxxx",  # Replace with your Kouri Ai token
    http_options=types.HttpOptions(
        api_version="v1beta",
        base_url="https://api.kourichat.com"
    ),
)

response = client.models.generate_content(
    model='gemini-2.5-pro',
    contents="Hello!",
    config=types.GenerateContentConfig()
)
print(response.text)
```
When using the new SDK, set api_version="v1beta" in HttpOptions so requests target the correct API version.
Python SDK (Legacy google-generativeai)
If you're using the legacy google-generativeai SDK:
```python
import google.generativeai as genai

# Must explicitly specify the REST transport; gRPC is not supported
genai.configure(
    api_key='sk-xxxxxxxx',  # Replace with your Kouri Ai token
    transport="rest",  # Important: must be "rest"
    client_options={"api_endpoint": "https://api.kourichat.com/v1beta"},
)

model = genai.GenerativeModel('gemini-2.5-pro')
response = model.generate_content("Hello!")
print(response.text)
```
Important: You must explicitly set transport="rest"; otherwise the SDK defaults to the gRPC protocol and requests will fail.
Streaming
```python
import google.generativeai as genai

genai.configure(
    api_key='sk-xxxxxxxx',
    transport="rest",
    client_options={"api_endpoint": "https://api.kourichat.com/v1beta"},
)

model = genai.GenerativeModel('gemini-2.5-pro')
response = model.generate_content(
    "Tell me a story",
    stream=True
)
for chunk in response:
    print(chunk.text, end="", flush=True)
```
Multi-turn Conversation
```python
import google.generativeai as genai

genai.configure(
    api_key='sk-xxxxxxxx',
    transport="rest",
    client_options={"api_endpoint": "https://api.kourichat.com/v1beta"},
)

model = genai.GenerativeModel('gemini-2.5-pro')
chat = model.start_chat(history=[])

response = chat.send_message("Hi, my name is John")
print(response.text)

response = chat.send_message("What's my name?")
print(response.text)
```
Image Understanding
```python
import google.generativeai as genai
from PIL import Image

genai.configure(
    api_key='sk-xxxxxxxx',
    transport="rest",
    client_options={"api_endpoint": "https://api.kourichat.com/v1beta"},
)

model = genai.GenerativeModel('gemini-2.5-pro')

# Using a local image
image = Image.open("image.jpg")
response = model.generate_content(["Describe this image", image])
print(response.text)
```
Image Generation
```python
from google import genai
from google.genai import types

prompt = "A popular anime game screenshot"
aspect_ratio = "16:9"
resolution = "4K"

client = genai.Client(
    api_key="sk-xxxxxxxx",  # Replace with your Kouri Ai token
    http_options=types.HttpOptions(
        base_url="https://api.kourichat.com"
    ),
)

response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents=prompt,
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE'],
        image_config=types.ImageConfig(
            aspect_ratio=aspect_ratio,
            image_size=resolution
        ),
    )
)

for part in response.parts:
    if part.text is not None:
        print(part.text)
    elif image := part.as_image():
        image.save("test.png")
```
Common Parameters
Native Protocol Parameters
| Parameter | Type | Description |
|---|---|---|
| model | string | Model name |
| contents | string / list | Input content |
| config | GenerateContentConfig | Generation config |
GenerateContentConfig Options
| Parameter | Type | Description |
|---|---|---|
temperature | float | Randomness, 0-2 |
top_p | float | Nucleus sampling |
top_k | int | Top-K sampling |
max_output_tokens | int | Maximum output tokens |
stop_sequences | list | Stop sequences |
Common Issues
gRPC Protocol Error
If you encounter gRPC-related errors:
- Use the transport="rest" parameter (legacy SDK)
- Or set http_options correctly (new SDK)
Response Timeout
Gemini models may take longer for complex tasks. If you encounter timeouts:
- Try reducing input length
- Use streaming mode
- Increase client timeout settings