
Text API

ModelGate provides a powerful text generation API with support for many mainstream large language models. Several API styles are available to suit different scenarios.

API Overview

ModelGate supports the following API styles:

  • OpenAI Style: the most widely compatible style, able to reach almost every model on the market
  • Anthropic Style: Claude's official style, with full support for Claude model data
  • Google Style: Google Gemini's official style, supporting all Gemini model features and parameters
  • OpenAI Response: OpenAI's native response format, returning the complete response including usage statistics

API Call Examples

Note

If you use the ModelGate web app, the Host is: https://mg.aid.pub
If you use the ModelGate desktop client, the Host is: http://localhost:13148

OpenAI Style

The most widely compatible style, able to reach almost every model on the market. Parameters and the response structure follow the official text generation guide, so you can reuse the OpenAI SDK and plain REST requests directly.

Request Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | - | Model name, e.g. gpt-5-nano, gpt-4o, claude-3-5-sonnet-20241022 |
| input | string/array | - | Accepts plain text, a multi-turn conversation array, or { role, content } objects |
| temperature | number | 1 | Sampling temperature, range 0-2. Higher values (e.g. 0.8) make output more random; lower values (e.g. 0.2) make it more deterministic |
| max_output_tokens | number | model limit | Maximum number of tokens to generate; limits vary by model |
| top_p | number | 1 | Nucleus sampling parameter, range 0-1. Use either this or temperature, not both |
| n | integer | 1 | Number of responses to return |
| stream | boolean | false | Whether to stream the response. When enabled, results are returned via Server-Sent Events |
| stop | string/array | null | Up to 4 sequences at which the API stops generating further tokens. The returned text does not contain the stop sequence |
| logit_bias | object | null | Adjusts the sampling probability of specific tokens, given as {"token_id": weight} |
| functions | array | - | Array of function definitions enabling tool calls; each function contains name, description, parameters |
| function_call | string/object | "auto" | Controls function-calling behavior ("auto", "none", or {"name": "<function name>"}) |
| user | string | - | A unique identifier for the end user; helps monitor and detect abuse |

To let the model call custom tools, define the functions via functions and control when they trigger via function_call; OpenAI calls a function automatically or as explicitly directed based on the conversation context. The exact behavior follows the function calling guide.

Besides a string, input can also be a conversation history such as [{ role: 'user', content: '...' }, ...], which is how multi-turn context is built.

python
# Install the SDK
# pip install openai

from openai import OpenAI

client = OpenAI(
    api_key="your-modelgate-key",
    base_url="https://mg.aid.pub/v1"
)

completion = client.responses.create(
    model="gpt-5-nano",
    input="Explain quantum computing to a five-year-old"
)

text = completion.output[0].content[0].text
print("Content:", text)
print("Input tokens:", completion.usage.prompt_tokens)
print("Output tokens:", completion.usage.completion_tokens)
print("Total tokens:", completion.usage.total_tokens)
typescript
// Install the SDK
// npm install openai

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-modelgate-key',
  baseURL: 'https://mg.aid.pub/v1'
});

const completion = await client.responses.create({
  model: 'gpt-5-nano',
  input: 'Explain quantum computing to a five-year-old'
});

const text = completion.output[0]?.content[0]?.text;
console.log('Content:', text);
console.log('Input tokens:', completion.usage?.prompt_tokens);
console.log('Output tokens:', completion.usage?.completion_tokens);
console.log('Total tokens:', completion.usage?.total_tokens);
javascript
// Using the fetch API
const response = await fetch('https://mg.aid.pub/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer your-modelgate-key'
  },
  body: JSON.stringify({
    model: 'gpt-5-nano',
    input: 'Explain quantum computing to a five-year-old'
  })
});

const data = await response.json();
const text = data.output?.[0]?.content?.[0]?.text;
console.log('Content:', text);
console.log('Input tokens:', data.usage.prompt_tokens);
console.log('Output tokens:', data.usage.completion_tokens);
console.log('Total tokens:', data.usage.total_tokens);
go
// Call the Responses-style endpoint with net/http
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

func main() {
    url := "https://mg.aid.pub/v1/responses"

    payload := map[string]interface{}{
        "model": "gpt-5-nano",
        "input": "Explain quantum computing to a five-year-old",
    }

    jsonData, _ := json.Marshal(payload)

    req, _ := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer your-modelgate-key")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}
bash
curl https://mg.aid.pub/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_MODELGATE_API_KEY" \
  -d '{
    "model": "gpt-5-nano",
    "input": "Explain quantum computing to a five-year-old"
  }'
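The multi-turn input format mentioned above boils down to plain request-body construction; in this sketch the conversation content is illustrative:

```python
import json

# Sketch: a Responses-style request body with multi-turn input.
# Each turn follows the { role, content } shape from the parameter table.
payload = {
    "model": "gpt-5-nano",
    "input": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing to a five-year-old"},
        {"role": "assistant", "content": "Imagine a magic coin that spins..."},
        {"role": "user", "content": "Now explain it to a ten-year-old"},
    ],
}

body = json.dumps(payload)  # ready to POST with any HTTP client
print(len(body))
```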

Function Calling

typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-modelgate-key',
  baseURL: 'https://mg.aid.pub/v1'
});

const response = await client.responses.create({
  model: 'gpt-5-nano',
  input: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: "What's the weather in Shanghai?" }
  ],
  functions: [
    {
      name: 'getWeather',
      description: 'Get real-time weather for a given city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string' },
          unit: { type: 'string', enum: ['celsius', 'fahrenheit'] }
        },
        required: ['city']
      }
    }
  ],
  function_call: 'auto'
});

const output = response.output[0];
const functionCall = output?.content?.find(item => item.function_call)?.function_call;
if (functionCall) {
  console.log('Function name:', functionCall.name);
  console.log('Arguments:', functionCall.arguments);
}

Response example:

json
{
  "id": "resp-xyz123",
  "object": "response",
  "model": "gpt-5-nano",
  "output": [
    {
      "id": "msg-1",
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "I have called getWeather with arguments {...}"
        },
        {
          "function_call": {
            "name": "getWeather",
            "arguments": "{\"city\": \"Shanghai\", \"unit\": \"celsius\"}"
          }
        }
      ]
    }
  ],
  "usage": {
    "prompt_tokens": 24,
    "completion_tokens": 30,
    "total_tokens": 54
  }
}
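When a response carries a function_call like the one above, the client decodes the JSON-encoded arguments and runs the function itself. A minimal local sketch (the getWeather implementation is a hypothetical stub):

```python
import json

# The function_call block, shaped like the response example above.
function_call = {
    "name": "getWeather",
    "arguments": '{"city": "Shanghai", "unit": "celsius"}',
}

# Hypothetical local implementation of the tool.
def get_weather(city, unit="celsius"):
    return f"Weather for {city} in {unit}"

# Dispatch table mapping function names to local handlers.
handlers = {"getWeather": get_weather}

args = json.loads(function_call["arguments"])  # arguments arrive as a JSON string
result = handlers[function_call["name"]](**args)
print(result)
```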

Anthropic Style

Claude's official style, with full support for Claude model data; suited to scenarios that need Claude-specific features. Official docs

Request Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | - | Claude model name, e.g. claude-3-5-sonnet-20241022, claude-sonnet-4-5-20250929 |
| messages | array | - | Array of conversation messages, each with role and content fields. Note: the system role is not supported here; use the separate system parameter instead |
| max_tokens | integer | - | Maximum number of tokens to generate; required |
| system | string/array | - | System prompt that sets Claude's behavior and role. May be a string or an array of objects |
| temperature | number | 1 | Sampling temperature, range 0-1. Controls output randomness |
| top_p | number | - | Nucleus sampling parameter, range 0-1. For advanced use only; prefer temperature |
| top_k | integer | - | Sample only from the top K options. For advanced use only; prefer temperature |
| stream | boolean | false | Whether to stream the response |
| stop_sequences | array | [] | Custom stop sequences; Claude stops generating when one is encountered |
| metadata | object | - | Object containing a user ID for abuse monitoring. Format: {"user_id": "string"} |
python
# Install the SDK
# pip install anthropic

from anthropic import Anthropic

client = Anthropic(
    api_key="your-api-key",
    base_url="https://mg.aid.pub/anthropic/v1"
)

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, please introduce yourself"}
    ]
)

print(message.content[0].text)
typescript
// Install the SDK
// npm install @anthropic-ai/sdk

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'your-api-key',
  baseURL: 'https://mg.aid.pub/anthropic/v1'
});

const message = await client.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Hello, please introduce yourself' }
  ]
});

console.log(message.content[0].text);
javascript
// Using the fetch API
const response = await fetch('https://mg.aid.pub/anthropic/v1/messages', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': 'YOUR_API_KEY',
    'anthropic-version': '2023-06-01'
  },
  body: JSON.stringify({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Hello, please introduce yourself' }
    ]
  })
});

const data = await response.json();
console.log(data.content[0].text);
go
// Call the Anthropic API from Go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

func main() {
    url := "https://mg.aid.pub/anthropic/v1/messages"

    payload := map[string]interface{}{
        "model":      "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "messages": []map[string]string{
            {"role": "user", "content": "Hello, please introduce yourself"},
        },
    }

    jsonData, _ := json.Marshal(payload)

    req, _ := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("x-api-key", "YOUR_API_KEY")
    req.Header.Set("anthropic-version", "2023-06-01")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}
bash
curl https://mg.aid.pub/anthropic/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Hello, please introduce yourself"
      }
    ]
  }'

Response example:

json
{
  "id": "msg_123",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! I'm Claude, an AI assistant."
    }
  ],
  "model": "claude-3-5-sonnet-20241022",
  "stop_reason": "end_turn"
}
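The system and stop_sequences parameters from the table combine with the required max_tokens as in this request-body sketch, which only builds and checks the payload locally without sending it:

```python
import json

# Sketch: an Anthropic-style request body using system and stop_sequences.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,  # required in this style
    "system": "You are a concise assistant.",
    "stop_sequences": ["END_OF_ANSWER"],
    "messages": [
        {"role": "user", "content": "Hello, please introduce yourself"},
    ],
}

# max_tokens is mandatory, and system must not appear inside messages.
assert "max_tokens" in payload
assert all(m["role"] != "system" for m in payload["messages"])

print(json.dumps(payload)[:20])
```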

Google Style

Google Gemini's official style, supporting all Gemini model features and parameters; suited to scenarios that need Gemini-specific features. Official docs

Request Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| contents | array | - | Array of conversation content objects, each with role and parts fields. parts is an array containing text |
| generationConfig | object | - | Generation config object with the following optional fields |
| ↳ temperature | number | 1.0 | Sampling temperature, range 0-2. Controls output randomness |
| ↳ topP | number | 0.95 | Nucleus sampling parameter, range 0-1 |
| ↳ topK | integer | 40 | Top-K sampling parameter |
| ↳ maxOutputTokens | integer | 8192 | Maximum number of tokens to generate |
| ↳ stopSequences | array | [] | Array of stop sequences; generation stops when one is encountered |
| ↳ candidateCount | integer | 1 | Number of candidate responses to generate |
| safetySettings | array | - | Array of safety settings for filtering harmful content; each object contains category and threshold |
| systemInstruction | object | - | System instruction, format: {"parts": [{"text": "string"}]} |
python
# Install the SDK
# pip install google-generativeai

import google.generativeai as genai

genai.configure(
    api_key="your-api-key",
    transport="rest",
    client_options={"api_endpoint": "https://mg.aid.pub/gemini/v1"}
)

model = genai.GenerativeModel("gemini-2.5-flash")

response = model.generate_content("Hello, please introduce yourself")

print(response.text)
typescript
// Install the SDK
// npm install @google/generative-ai

import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI('your-api-key');

// Configure a custom endpoint
const model = genAI.getGenerativeModel(
  { model: 'gemini-2.5-flash' },
  { baseUrl: 'https://mg.aid.pub/gemini/v1' }
);

const result = await model.generateContent('Hello, please introduce yourself');
const response = await result.response;

console.log(response.text());
javascript
// Using the fetch API
const response = await fetch('https://mg.aid.pub/gemini/v1/models/gemini-2.5-flash:generateContent', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-goog-api-key': 'YOUR_API_KEY'
  },
  body: JSON.stringify({
    contents: [
      {
        parts: [
          { text: 'Hello, please introduce yourself' }
        ]
      }
    ]
  })
});

const data = await response.json();
console.log(data.candidates[0].content.parts[0].text);
go
// Call the Google Gemini API from Go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

func main() {
    url := "https://mg.aid.pub/gemini/v1/models/gemini-2.5-flash:generateContent"

    payload := map[string]interface{}{
        "contents": []map[string]interface{}{
            {
                "parts": []map[string]string{
                    {"text": "Hello, please introduce yourself"},
                },
            },
        },
    }

    jsonData, _ := json.Marshal(payload)

    req, _ := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("x-goog-api-key", "YOUR_API_KEY")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}
bash
curl https://mg.aid.pub/gemini/v1/models/gemini-2.5-flash:generateContent \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: YOUR_API_KEY" \
  -d '{
    "contents": [
      {
        "parts": [
          {
            "text": "Hello, please introduce yourself"
          }
        ]
      }
    ]
  }'

Response example:

json
{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "Hello! I'm Gemini, nice to meet you."
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP"
    }
  ]
}
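Pulling the generated text out of the candidates structure above takes a few nested lookups; a small helper, shown here against sample data shaped like that response, keeps call sites tidy:

```python
# Sketch: extracting text from a generateContent-style response dict.
def extract_text(response: dict) -> str:
    """Concatenate all text parts of the first candidate."""
    candidate = response["candidates"][0]
    parts = candidate["content"]["parts"]
    return "".join(p.get("text", "") for p in parts)

sample = {
    "candidates": [
        {
            "content": {
                "parts": [{"text": "Hello! I'm Gemini, nice to meet you."}],
                "role": "model",
            },
            "finishReason": "STOP",
        }
    ]
}

print(extract_text(sample))
```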

OpenAI Response

OpenAI's native Response API returns the complete response structure along with token usage statistics, making it a good fit when you need unified output monitoring and metering. Official docs

Request Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | - | Model name; any model compatible with the OpenAI Response API |
| input | string/array | - | Accepts a string, an array, or { role, content } objects to build multi-turn conversations. Plain text works for simple one-shot calls |
| temperature | number | 1 | Sampling temperature, range 0-2. Affects randomness and creativity |
| max_output_tokens | number | model limit | Maximum number of output tokens |
| top_p | number | 1 | Nucleus sampling parameter, range 0-1. Use either this or temperature, not both |
| n | integer | 1 | Number of responses to return |
| stream | boolean | false | Whether to stream the response |
| stop | string/array | null | Stop sequences, up to 4 |
| logit_bias | object | null | Modifies the likelihood of specific tokens appearing |
| user | string | - | Unique identifier for the end user |

Response fields:

| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier of the response |
| object | string | Object type; always response |
| created | integer | Creation timestamp |
| model | string | Name of the model used |
| input | array/string | The original request input |
| output | array | Array of generated output segments; each item contains a content field |
| usage | object | Token usage statistics, containing prompt_tokens, completion_tokens, and total_tokens |
| ↳ prompt_tokens | integer | Tokens consumed by the input prompt |
| ↳ completion_tokens | integer | Tokens consumed by the generated content |
| ↳ total_tokens | integer | Total tokens consumed |
python
# Install the SDK
# pip install openai

from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://mg.aid.pub/responses/v1"
)

completion = client.responses.create(
    model="gpt-5-nano",
    input="Write a one-sentence bedtime story about a unicorn."
)

# Access the full response, including usage statistics
output_text = completion.output[0].content[0].text
print(f"Content: {output_text}")
print(f"Input tokens: {completion.usage.prompt_tokens}")
print(f"Output tokens: {completion.usage.completion_tokens}")
print(f"Total tokens: {completion.usage.total_tokens}")
typescript
// Install the SDK
// npm install openai

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://mg.aid.pub/responses/v1'
});

const completion = await client.responses.create({
  model: 'gpt-5-nano',
  input: 'Write a one-sentence bedtime story about a unicorn.'
});

// Access the full response, including usage statistics
const outputText = completion.output[0]?.content[0]?.text;
console.log('Content:', outputText);
console.log('Input tokens:', completion.usage?.prompt_tokens);
console.log('Output tokens:', completion.usage?.completion_tokens);
console.log('Total tokens:', completion.usage?.total_tokens);
javascript
// Using the fetch API
const response = await fetch('https://mg.aid.pub/responses/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    model: 'gpt-5-nano',
    input: 'Write a one-sentence bedtime story about a unicorn.'
  })
});

const data = await response.json();
const outputText = data.output?.[0]?.content?.[0]?.text;
console.log('Content:', outputText);
console.log('Input tokens:', data.usage.prompt_tokens);
console.log('Output tokens:', data.usage.completion_tokens);
console.log('Total tokens:', data.usage.total_tokens);
go
// Call the OpenAI Response API with net/http
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

func main() {
    url := "https://mg.aid.pub/responses/v1/responses"

    payload := map[string]interface{}{
        "model": "gpt-4o",
        "input": "Write a one-sentence bedtime story about a unicorn.",
    }

    jsonData, _ := json.Marshal(payload)

    req, _ := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer your-api-key")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}
bash
curl https://mg.aid.pub/responses/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-5-nano",
    "input": "Write a one-sentence bedtime story about a unicorn."
  }'

The examples above use a simple text input; for multi-turn conversations, pass an array such as [{"role": "user", "content": ...}].

Response example:

json
{
  "id": "resp-abc123",
  "object": "response",
  "created": 1677652288,
  "model": "gpt-4o",
  "output": [
    {
      "id": "msg-1",
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Hello! I'm an AI assistant, happy to help."
        }
      ],
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 12,
    "total_tokens": 32
  }
}
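The usage object makes client-side accounting straightforward; a sketch with a consistency check and a cost estimate (the per-token prices are made up for illustration, not ModelGate pricing):

```python
# Usage statistics shaped like the response example above.
usage = {"prompt_tokens": 20, "completion_tokens": 12, "total_tokens": 32}

# total_tokens should equal the sum of the two parts.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]

# Illustrative prices in USD per million tokens (hypothetical values).
INPUT_PRICE, OUTPUT_PRICE = 0.05, 0.40

cost = (usage["prompt_tokens"] * INPUT_PRICE
        + usage["completion_tokens"] * OUTPUT_PRICE) / 1_000_000
print(f"Estimated cost: ${cost:.8f}")
```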

Streaming Responses

All API styles support streaming. Set stream: true to enable it:

python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://mg.aid.pub/v1"
)

# Enable streaming
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

# Process the stream
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://mg.aid.pub/v1'
});

// Enable streaming
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});

// Process the stream
for await (const chunk of stream) {
  if (chunk.choices[0]?.delta?.content) {
    process.stdout.write(chunk.choices[0].delta.content);
  }
}
javascript
const response = await fetch('https://mg.aid.pub/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Tell me a story' }],
    stream: true
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const chunk = decoder.decode(value);
  const lines = chunk.split('\n').filter(line => line.trim() !== '');

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = line.slice(6);
      if (data === '[DONE]') break;

      try {
        const parsed = JSON.parse(data);
        const content = parsed.choices[0]?.delta?.content;
        if (content) {
          process.stdout.write(content);
        }
      } catch (e) {
        // Ignore parse errors from partial chunks
      }
    }
  }
}
go
package main

import (
    "context"
    "fmt"
    "io"
    "github.com/sashabaranov/go-openai"
)

func main() {
    config := openai.DefaultConfig("your-api-key")
    config.BaseURL = "https://mg.aid.pub/v1"
    client := openai.NewClientWithConfig(config)

    // Enable streaming
    stream, err := client.CreateChatCompletionStream(
        context.Background(),
        openai.ChatCompletionRequest{
            Model: "gpt-4o",
            Messages: []openai.ChatCompletionMessage{
                {
                    Role:    "user",
                    Content: "Tell me a story",
                },
            },
            Stream: true,
        },
    )
    if err != nil {
        panic(err)
    }
    defer stream.Close()

    // Process the stream
    for {
        response, err := stream.Recv()
        if err == io.EOF {
            break
        }
        if err != nil {
            panic(err)
        }

        content := response.Choices[0].Delta.Content
        if content != "" {
            fmt.Print(content)
        }
    }
}
bash
curl https://mg.aid.pub/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Tell me a story"}],
    "stream": true
  }' \
  --no-buffer

# Response format (Server-Sent Events):
# data: {"id":"chatcmpl-123","choices":[{"delta":{"content":"Once"}}]}
# data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" upon"}}]}
# data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" a"}}]}
# ...
# data: [DONE]
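The data: lines in the comment above can be parsed with a little string handling, mirroring the fetch example; this sketch runs against sample lines rather than a live stream:

```python
import json

def parse_sse_lines(lines):
    """Yield delta content strings from 'data: ...' Server-Sent Events lines."""
    for line in lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":  # terminal sentinel
            return
        chunk = json.loads(data)
        content = chunk["choices"][0].get("delta", {}).get("content")
        if content:
            yield content

sample = [
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":"Once"}}]}',
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" upon"}}]}',
    "data: [DONE]",
]

print("".join(parse_sse_lines(sample)))
```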

Fetching the Available Model List

Before calling the text generation API, you can first fetch the list of currently available models.

Request details:

  • Endpoint: https://mg.aid.pub/v1/models
  • Method: GET
  • Header: Authorization: Bearer YOUR_MODELGATE_API_KEY
python
# Using the requests library
import requests

response = requests.get(
    'https://mg.aid.pub/v1/models',
    headers={
        'Authorization': 'Bearer YOUR_MODELGATE_API_KEY'
    }
)

models = response.json()
for model in models['data']:
    print(f"Model ID: {model['id']}, owned by: {model['owned_by']}")
typescript
// Using the fetch API
const response = await fetch('https://mg.aid.pub/v1/models', {
  headers: {
    'Authorization': 'Bearer YOUR_MODELGATE_API_KEY'
  }
});

const models = await response.json();
models.data.forEach(model => {
  console.log(`Model ID: ${model.id}, owned by: ${model.owned_by}`);
});
javascript
// Using the fetch API
fetch('https://mg.aid.pub/v1/models', {
  headers: {
    'Authorization': 'Bearer YOUR_MODELGATE_API_KEY'
  }
})
  .then(response => response.json())
  .then(models => {
    models.data.forEach(model => {
      console.log(`Model ID: ${model.id}, owned by: ${model.owned_by}`);
    });
  });
go
package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

type ModelsResponse struct {
    Object string `json:"object"`
    Data   []struct {
        ID      string `json:"id"`
        Object  string `json:"object"`
        Created int64  `json:"created"`
        OwnedBy string `json:"owned_by"`
    } `json:"data"`
}

func main() {
    req, _ := http.NewRequest("GET", "https://mg.aid.pub/v1/models", nil)
    req.Header.Set("Authorization", "Bearer YOUR_MODELGATE_API_KEY")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)

    var models ModelsResponse
    json.Unmarshal(body, &models)

    for _, model := range models.Data {
        fmt.Printf("Model ID: %s, owned by: %s\n", model.ID, model.OwnedBy)
    }
}
bash
curl https://mg.aid.pub/v1/models \
  -H "Authorization: Bearer YOUR_MODELGATE_API_KEY"

Response example:

json
{
  "object": "list",
  "data": [
    {
      "id": "GPT-5.1",
      "object": "model",
      "created": 1765951367,
      "owned_by": "system"
    },
    {
      "id": "GPT-5.2",
      "object": "model",
      "created": 1765951367,
      "owned_by": "system"
    },
    {
      "id": "GPT-5.2-High",
      "object": "model",
      "created": 1765951367,
      "owned_by": "system"
    }
  ]
}

Response fields:

| Field | Type | Description |
| --- | --- | --- |
| object | string | Object type; always list |
| data | array | Array of models |
| ↳ id | string | Unique model identifier, used to specify the model in API calls |
| ↳ object | string | Object type; always model |
| ↳ created | integer | Model creation timestamp |
| ↳ owned_by | string | Model owner, usually system |

Error Handling

The API may return the following error codes:

| Code | Description |
| --- | --- |
| 400 | Invalid request parameters |
| 401 | API key missing or invalid |
| 403 | Access denied |
| 429 | Rate limit exceeded |
| 500 | Internal server error |

Error response example:

json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}

Best Practices

  1. Set temperature appropriately: use 0.7-1.0 for creative tasks and 0.1-0.3 for precision tasks
  2. Cap max_tokens: avoid unnecessarily long replies to save cost
  3. Use streaming: improves user experience, especially for long generations
  4. Retry on errors: implement retries with exponential backoff
  5. Monitor usage: regularly review API usage and spend
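Best practice 4 can be sketched as a small wrapper; the delay schedule and the decision to retry on every exception are illustrative choices, not ModelGate requirements:

```python
import time

def with_backoff(call, max_retries=3, base_delay=1.0):
    """Retry call() on any exception, doubling the delay after each attempt."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Demo: a fake call that fails twice with a rate-limit error, then succeeds.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 rate limited")
    return "ok"

result = with_backoff(flaky, base_delay=0.01)
print(result, "after", attempts["n"], "attempts")
```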

ModelGate Product Documentation