CherryIN
New API Guides

Code Tutorial

Step-by-step guide to calling the CherryIN API with code

Introduction

This tutorial will guide you step-by-step on how to call the CherryIN API using code. Even if you're new to programming, you can easily get started!

Before You Begin

Before starting, make sure you have completed the steps in Getting Started and obtained your API Key.

Security Warning

Never hardcode your API Key in code and commit it to a Git repository! The examples in this tutorial are for demonstration and local testing only. In production environments, always manage your API Key through:

  • Environment Variables: read the key with os.getenv("CHERRYIN_API_KEY")
  • Configuration Files: Use .env files with the python-dotenv library (remember to add .env to .gitignore)
  • Secret Management Services: Such as AWS Secrets Manager, HashiCorp Vault, etc.

Leaking your API Key may result in unauthorized usage of your account and unexpected charges.
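As a minimal sketch of the environment-variable approach (the variable name CHERRYIN_API_KEY is just a convention here; any name works as long as you read the same one back):

```python
import os

def load_api_key(env_var: str = "CHERRYIN_API_KEY") -> str:
    """Read the API Key from the environment instead of hardcoding it."""
    key = os.getenv(env_var)
    if key is None:
        raise RuntimeError(
            f"{env_var} is not set - export it in your shell, or put it in a "
            ".env file loaded with python-dotenv (and add .env to .gitignore)"
        )
    return key

# Then create the client with it, e.g.:
# client = OpenAI(api_key=load_api_key(), base_url="https://open.cherryin.net/v1")
```

This keeps the secret out of your source files entirely, so committing the code to Git is safe.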

Core Concepts

Before writing code, let's understand a few key concepts:

| Concept  | Description                                      | Example |
| -------- | ------------------------------------------------ | ------- |
| API Key  | Your credentials for authenticating requests     | sk-xxxxxxxx |
| Base URL | The API service address                          | https://open.cherryin.net/v1 |
| Model    | The AI model you want to use                     | openai/gpt-5-chat, anthropic/claude-sonnet-4.5, google/gemini-2.5-flash |
| Messages | The conversation content between you and the AI  | [{"role": "user", "content": "Hello"}] |

About Model Names

CherryIN model names follow the format vendor/model-name, e.g., openai/gpt-5-chat. Check the model list in the console for available models.
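To illustrate the format, here is a small helper that splits a name into its vendor and model parts (purely illustrative; the API itself simply takes the full string as-is):

```python
def parse_model_name(name: str) -> tuple[str, str]:
    """Split a CherryIN-style model name of the form vendor/model-name."""
    vendor, sep, model = name.partition("/")
    if not sep or not vendor or not model:
        raise ValueError(f"expected 'vendor/model-name', got {name!r}")
    return vendor, model
```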


Python Tutorial

We recommend using UV to manage Python environments and run code. UV is a modern Python package manager that is easy to install, fast, and perfect for beginners.

Step 1: Install UV

On Windows, open PowerShell and run the following command:

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

After installation, reopen your terminal and type uv --version to verify the installation.

On macOS or Linux, open a terminal and run the following command:

curl -LsSf https://astral.sh/uv/install.sh | sh

After installation, reopen your terminal and type uv --version to verify the installation.

More Installation Options

For more installation options, please refer to the UV Documentation.

Step 2: Initialize Project and Install Dependencies

Create a new folder as your project directory, then navigate to it in your terminal and run:

uv init
uv add openai

This will automatically create the project configuration and install the openai library.

Step 3: Write Your First Program

Create a new file hello_ai.py and copy the following code:

hello_ai.py
from openai import OpenAI

# ===== Configuration (replace with your own info) =====
API_KEY = "sk-xxxxxxxx"  # Replace with your API Key
BASE_URL = "https://open.cherryin.net/v1"
MODEL = "openai/gpt-5-chat"  # The model you want to use
# ======================================================

# Create client
client = OpenAI(
    api_key=API_KEY,
    base_url=BASE_URL
)

# Send request
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": "Hello, please introduce yourself in one sentence"}
    ]
)

# Print AI's response
print(response.choices[0].message.content)

Step 4: Run the Program

Run in terminal:

uv run hello_ai.py

If everything goes well, you'll see the AI's response!


Advanced Usage

After mastering the basics, let's explore more practical techniques.

Multi-turn Conversations

The model needs the full conversation history to hold a coherent multi-turn conversation:

multi_turn.py
from openai import OpenAI

client = OpenAI(
    api_key="sk-xxxxxxxx",  # Replace with your API Key
    base_url="https://open.cherryin.net/v1"
)

# Conversation history
messages = []

def chat(user_input):
    # Add user message
    messages.append({"role": "user", "content": user_input})

    # Send request
    response = client.chat.completions.create(
        model="openai/gpt-5-chat",
        messages=messages
    )

    # Get AI reply
    ai_reply = response.choices[0].message.content

    # Add AI reply to history
    messages.append({"role": "assistant", "content": ai_reply})

    return ai_reply

# Have a multi-turn conversation
print("AI:", chat("My name is John"))
print("AI:", chat("What did I just say my name was?"))  # AI will remember your name

Streaming Output (Typewriter Effect)

Make the AI's response appear character by character, like typing:

streaming.py
from openai import OpenAI

client = OpenAI(
    api_key="sk-xxxxxxxx",  # Replace with your API Key
    base_url="https://open.cherryin.net/v1"
)

# Enable streaming
stream = client.chat.completions.create(
    model="openai/gpt-5-chat",
    messages=[
        {"role": "user", "content": "Tell a short story in about 100 words"}
    ],
    stream=True  # The key parameter
)

# print chunk by chunk
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

Setting System Prompts

Define AI's behavior through the system role:

system_prompt.py
from openai import OpenAI

client = OpenAI(
    api_key="sk-xxxxxxxx",  # Replace with your API Key
    base_url="https://open.cherryin.net/v1"
)

response = client.chat.completions.create(
    model="openai/gpt-5-chat",
    messages=[
        {
            "role": "system",
            "content": "You are a professional English teacher who explains grammar in simple terms."
        },
        {
            "role": "user",
            "content": "What is the present perfect tense?"
        }
    ]
)

print(response.choices[0].message.content)

Controlling Output Parameters

Adjust AI's creativity and output length:

parameters.py
from openai import OpenAI

client = OpenAI(
    api_key="sk-xxxxxxxx",  # Replace with your API Key
    base_url="https://open.cherryin.net/v1"
)

response = client.chat.completions.create(
    model="openai/gpt-5-chat",
    messages=[
        {"role": "user", "content": "Write a poem about spring"}
    ],
    temperature=0.8,  # Creativity (0-2, higher = more creative, default 1)
    max_tokens=500,   # Maximum output tokens
    top_p=0.9         # Sampling range (0-1, lower = more deterministic)
)

print(response.choices[0].message.content)

Other Language Examples

JavaScript (Node.js)

First install the dependency:

npm install openai

Then create index.js:

index.js
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-xxxxxxxx',  // Replace with your API Key
  baseURL: 'https://open.cherryin.net/v1'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'openai/gpt-5-chat',
    messages: [
      { role: 'user', content: 'Hello, please introduce yourself' }
    ]
  });

  console.log(response.choices[0].message.content);
}

main();

cURL

Send a request directly from the terminal:

curl https://open.cherryin.net/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxx" \
  -d '{
    "model": "openai/gpt-5-chat",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'

If you're using another programming language, you can send an HTTP POST request directly:

Endpoint: POST https://open.cherryin.net/v1/chat/completions

Headers:

Content-Type: application/json
Authorization: Bearer sk-xxxxxxxx

Request Body:

{
  "model": "openai/gpt-5-chat",
  "messages": [
    {"role": "user", "content": "Hello"}
  ]
}
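For example, in Python without the openai library, the request above can be assembled with the standard library alone (a sketch; urllib here stands in for whatever HTTP client your language provides):

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, messages: list,
                       url: str = "https://open.cherryin.net/v1/chat/completions"):
    """Build (but don't send) the raw POST request described above."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# To actually send it:
#   resp = urllib.request.urlopen(build_chat_request(...))
#   data = json.load(resp)  # then data["choices"][0]["message"]["content"]
```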

FAQ

Error: AuthenticationError

Cause: Invalid API Key or not properly configured

Solution: Check that the API Key was copied correctly and contains no extra spaces or line breaks

Error: RateLimitError

Cause: Too many requests in a short time

Solution: Wait a moment and retry, or add delays in your code
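One common way to add those delays is exponential backoff. Here is a sketch (the exact exception type to catch depends on your client library, so a broad Exception is used purely for illustration):

```python
import time

def with_retry(call, max_attempts: int = 3, base_delay: float = 1.0):
    """Call `call()` and retry on failure with exponential backoff:
    wait base_delay seconds, then 2x, then 4x, ... up to max_attempts tries."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, re-raise the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage (hypothetical):
# reply = with_retry(lambda: client.chat.completions.create(model=MODEL, messages=messages))
```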

Error: InsufficientQuotaError

Cause: Insufficient account balance or token quota exhausted

Solution: Go to the console to top up your balance or adjust your token quota

How to View Available Models?

Log in to the CherryIN console and check the "Model List" page to see all supported models and their names.


Complete Example: Simple Chatbot

Let's combine everything we've learned to create a chatbot that runs in the terminal:

chatbot.py
from openai import OpenAI

# Configuration
API_KEY = "sk-xxxxxxxx"  # Replace with your API Key
BASE_URL = "https://open.cherryin.net/v1"
MODEL = "openai/gpt-5-chat"

# Create client
client = OpenAI(api_key=API_KEY, base_url=BASE_URL)

# Conversation history with system prompt
messages = [
    {"role": "system", "content": "You are a friendly AI assistant. Answer questions concisely."}
]

print("=" * 50)
print("Welcome to CherryIN Chatbot!")
print("Type 'quit' or 'exit' to exit")
print("=" * 50)

while True:
    # Get user input
    user_input = input("\nYou: ").strip()

    # Check exit condition
    if user_input.lower() in ['quit', 'exit', 'q']:
        print("Goodbye!")
        break

    if not user_input:
        continue

    # Add user message
    messages.append({"role": "user", "content": user_input})

    try:
        # Streaming output
        print("\nAI: ", end="", flush=True)

        stream = client.chat.completions.create(
            model=MODEL,
            messages=messages,
            stream=True
        )

        # Collect full reply
        full_reply = ""
        for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                content = chunk.choices[0].delta.content
                print(content, end="", flush=True)
                full_reply += content

        print()  # New line

        # Save AI reply to history
        messages.append({"role": "assistant", "content": full_reply})

    except Exception as e:
        print(f"\nError occurred: {e}")

After running, you'll have an AI chatbot that can hold continuous conversations!


Next Steps

  • Check the FAQ for solutions to common issues
  • Explore more models and try different AI capabilities
  • Integrate the API into your own projects