
Gemini API

Call Gemini models using native Gemini protocol or OpenAI-compatible protocol

Kouri Ai's Gemini models support both the native Google Gemini SDK protocol and an OpenAI SDK-compatible protocol. We recommend the native protocol for better stability and richer features.

Protocol Selection

| Protocol Type | Endpoint URL | Description |
| --- | --- | --- |
| Gemini protocol | https://api.kourichat.com/v1beta | Native protocol, recommended, supports all models |
| OpenAI protocol | https://api.kourichat.com/v1 | Compatible protocol for simple scenarios |

Native Gemini protocol recommended: major applications such as Dify and Chatbox support the native protocol. Use the OpenAI-compatible protocol only for applications that support nothing but the OpenAI format.
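For applications locked to the OpenAI format, the compatible endpoint can be called with the standard openai Python SDK. A minimal sketch, assuming the endpoint accepts standard chat-completions requests (replace the key with your Kouri Ai token):

```python
from openai import OpenAI

# Point the OpenAI SDK at the OpenAI-compatible endpoint
client = OpenAI(
    api_key="sk-xxxxxxxx",  # Replace with your Kouri Ai token
    base_url="https://api.kourichat.com/v1",
)

response = client.chat.completions.create(
    model="gemini-2.5-pro",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```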

cURL Request (Native Protocol)

curl "https://api.kourichat.com/v1beta/models/gemini-2.5-pro:generateContent" \
  -H "x-goog-api-key: sk-xxxxxxxx" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "contents": [
      {
        "parts": [
          {
            "text": "Hello!"
          }
        ]
      }
    ]
  }'
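Streaming over raw HTTP uses the streamGenerateContent method with alt=sse, following the standard Gemini REST convention (shown here as a sketch against the Kouri Ai endpoint; the request body is unchanged):

```shell
curl "https://api.kourichat.com/v1beta/models/gemini-2.5-pro:streamGenerateContent?alt=sse" \
  -H "x-goog-api-key: sk-xxxxxxxx" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "contents": [
      {
        "parts": [
          {
            "text": "Hello!"
          }
        ]
      }
    ]
  }'
```

Each chunk arrives as a server-sent event (data: {...} lines) rather than a single JSON response.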

Native Gemini Protocol

Python SDK (New google-genai)

Using the latest google-genai SDK:

from google import genai
from google.genai import types

client = genai.Client(
    api_key="sk-xxxxxxxx",  # Replace with your Kouri Ai token
    http_options=types.HttpOptions(
        api_version="v1beta",
        base_url="https://api.kourichat.com"
    ),
)

response = client.models.generate_content(
    model='gemini-2.5-pro',
    contents="Hello!",
    config=types.GenerateContentConfig()
)

print(response.text)

When using the new SDK, set api_version="v1beta" in HttpOptions so requests are sent to the correct API version.
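The new SDK also supports streaming via generate_content_stream (the method name is from current google-genai releases; check your installed version). A minimal sketch:

```python
from google import genai
from google.genai import types

client = genai.Client(
    api_key="sk-xxxxxxxx",  # Replace with your Kouri Ai token
    http_options=types.HttpOptions(
        api_version="v1beta",
        base_url="https://api.kourichat.com",
    ),
)

# Iterate over chunks as they arrive instead of waiting for the full reply
for chunk in client.models.generate_content_stream(
    model="gemini-2.5-pro",
    contents="Tell me a story",
):
    print(chunk.text, end="", flush=True)
```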

Python SDK (Legacy google-generativeai)

If you're using the legacy google-generativeai SDK:

import google.generativeai as genai

# Must explicitly specify rest protocol, grpc is not supported
genai.configure(
    api_key='sk-xxxxxxxx',  # Replace with your Kouri Ai token
    transport="rest",  # Important: must specify rest protocol
    client_options={"api_endpoint": "https://api.kourichat.com/v1beta"},
)

model = genai.GenerativeModel('gemini-2.5-pro')
response = model.generate_content("Hello!")
print(response.text)

Important: You must explicitly specify transport="rest", otherwise it will default to gRPC protocol and cause errors.

Streaming

import google.generativeai as genai

genai.configure(
    api_key='sk-xxxxxxxx',
    transport="rest",
    client_options={"api_endpoint": "https://api.kourichat.com/v1beta"},
)

model = genai.GenerativeModel('gemini-2.5-pro')

response = model.generate_content(
    "Tell me a story",
    stream=True
)

for chunk in response:
    print(chunk.text, end="", flush=True)

Multi-turn Conversation

import google.generativeai as genai

genai.configure(
    api_key='sk-xxxxxxxx',
    transport="rest",
    client_options={"api_endpoint": "https://api.kourichat.com/v1beta"},
)

model = genai.GenerativeModel('gemini-2.5-pro')
chat = model.start_chat(history=[])

response = chat.send_message("Hi, my name is John")
print(response.text)

response = chat.send_message("What's my name?")
print(response.text)
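The equivalent multi-turn flow in the new google-genai SDK goes through client.chats.create, which keeps the conversation history for you (a sketch, using the same endpoint settings as above):

```python
from google import genai
from google.genai import types

client = genai.Client(
    api_key="sk-xxxxxxxx",  # Replace with your Kouri Ai token
    http_options=types.HttpOptions(
        api_version="v1beta",
        base_url="https://api.kourichat.com",
    ),
)

# The chat object accumulates history across send_message calls
chat = client.chats.create(model="gemini-2.5-pro")

print(chat.send_message("Hi, my name is John").text)
print(chat.send_message("What's my name?").text)
```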

Image Understanding

import google.generativeai as genai
from PIL import Image

genai.configure(
    api_key='sk-xxxxxxxx',
    transport="rest",
    client_options={"api_endpoint": "https://api.kourichat.com/v1beta"},
)

model = genai.GenerativeModel('gemini-2.5-pro')

# Using local image
image = Image.open("image.jpg")
response = model.generate_content(["Describe this image", image])
print(response.text)
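With the new google-genai SDK, local images are passed as raw bytes via types.Part.from_bytes instead of a PIL object. A sketch, assuming a JPEG file on disk:

```python
from google import genai
from google.genai import types

client = genai.Client(
    api_key="sk-xxxxxxxx",  # Replace with your Kouri Ai token
    http_options=types.HttpOptions(
        api_version="v1beta",
        base_url="https://api.kourichat.com",
    ),
)

# Read the image and wrap it as an inline part with its MIME type
with open("image.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Describe this image",
    ],
)
print(response.text)
```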

Image Generation

from google import genai
from google.genai import types

prompt = "A screenshot of a popular anime game"
aspect_ratio = "16:9"
resolution = "4K"

client = genai.Client(
    api_key="sk-xxxxxxxx",  # Replace with your Kouri Ai token
    http_options=types.HttpOptions(
        base_url="https://api.kourichat.com"
    ),
)

response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents=prompt,
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE'],
        image_config=types.ImageConfig(
            aspect_ratio=aspect_ratio,
            image_size=resolution
        ),
    )
)

for part in response.parts:
    if part.text is not None:
        print(part.text)
    elif image := part.as_image():
        image.save("test.png")

Common Parameters

Native Protocol Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| model | string | Model name |
| contents | string/list | Input content |
| config | GenerateContentConfig | Generation config |

GenerateContentConfig Options

| Parameter | Type | Description |
| --- | --- | --- |
| temperature | float | Sampling randomness, 0-2 |
| top_p | float | Nucleus sampling threshold |
| top_k | int | Top-K sampling |
| max_output_tokens | int | Maximum output tokens |
| stop_sequences | list | Stop sequences |
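On the wire, these options map to a camelCase generationConfig object in the native REST request body. A sketch of the full JSON payload the cURL example would send with these options set (values here are illustrative):

```python
import json

# generationConfig mirrors the SDK options above, in camelCase
generation_config = {
    "temperature": 0.7,         # sampling randomness, 0-2
    "topP": 0.95,               # nucleus sampling threshold
    "topK": 40,                 # top-K sampling
    "maxOutputTokens": 1024,    # cap on generated tokens
    "stopSequences": ["\n\n"],  # stop generation at these strings
}

payload = {
    "contents": [{"parts": [{"text": "Hello!"}]}],
    "generationConfig": generation_config,
}

print(json.dumps(payload, indent=2))
```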

Common Issues

gRPC Protocol Error

If you encounter gRPC-related errors:

  1. Use transport="rest" parameter (legacy SDK)
  2. Or properly set http_options (new SDK)

Response Timeout

Gemini models may take longer for complex tasks. If you encounter timeouts:

  1. Try reducing input length
  2. Use streaming mode
  3. Increase client timeout settings
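In the new SDK, the client timeout can be raised via HttpOptions; in current google-genai releases the timeout field is interpreted in milliseconds (check your SDK version). A sketch:

```python
from google import genai
from google.genai import types

client = genai.Client(
    api_key="sk-xxxxxxxx",  # Replace with your Kouri Ai token
    http_options=types.HttpOptions(
        api_version="v1beta",
        base_url="https://api.kourichat.com",
        timeout=120_000,  # assumption: milliseconds, i.e. 2 minutes
    ),
)
```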
