Chat with LLM

March 7, 2025

Table of contents

  1. Chat with LLM
    1. Request Headers
    2. Request Body
    3. Responses
    4. Model Content-Type: text/event-stream
      1. SSE model
      2. SSE parser
    5. Model Content-Type: application/json
    6. Examples

Equivalent of chat.minimax.io.

Use your chat.minimax.io account for this endpoint; see Setup MiniMax for details.

Up to five parallel requests are supported per individual chat.minimax.io account. If you need more parallel executions, please add additional chat.minimax.io accounts.

Please visit 🚀 Chat with AI to try a fully-fledged chat demo that features the entire LLM functionality available via our API, complete with full source code.

Feel free to chat with our 🤖 Ask AI support bot, which is powered by this very API.

The following models are currently supported:

Model            Value  Context length  Notes
DeepSeek R1      ds-r1  64K             Reasoning model
MiniMax-Text-01  mm-01  1M              Fast model with concise responses

Both models support file uploads for processing and are capable of executing real-time web searches.

https://api.useapi.net/v1/minimax/llm

Request Headers
Authorization: Bearer {API token}
Content-Type: multipart/form-data
Request Body

Below is a FormData multipart/form-data example using Postman notation:

Key        Type  Value (example)
account    Text  Optional MiniMax API account
prompt     Text  Required text prompt
model      Text  ds-r1
searchWeb  Text  true
stream     Text  true
chatID     Text  123456789
file       File  «Notes.txt»
file       File  «Design.doc»
file       File  «ProjectPlan.jpeg»
file       File  «Invoice.pdf»
  • account is optional when only one LLM chat.minimax.io account is configured. However, if you have multiple accounts configured, this parameter is required.

  • prompt is required.
    Note: We strongly advise that you sanitize your prompts and remove all URLs. Currently, it appears that the AI will attempt to fetch every URL it finds in the request body, and this will ultimately lead to a failure. You can observe similar behavior when posting the same request at chat.minimax.io.

  • model is optional.
    Supported values: ds-r1 (DeepSeek R1) and mm-01 (MiniMax-Text-01).
  • searchWeb is optional; set it to true if you want the LLM to perform real-time web searches for your request. The default value is false.
    This can significantly slow down LLM response time.

  • chatID is optional; to continue a conversation thread, include the chatID from a previous response to retain the full conversation context within the same chat thread.

  • stream is optional; set it to true if you want a real-time Content-Type: text/event-stream response. The default value is false, which returns a Content-Type: application/json response.
    We strongly recommend using the streaming approach when working with DeepSeek R1 or when sending complex queries in general to avoid timeouts.

  • file is optional; provide up to 10 files with the following extensions: txt, docx, doc, pdf, ppt, pptx, xls, xlsx, png, jpeg, jpg, webp, svg, heif, tiff. The maximum file size is 100MB.
    Consider adding one file at a time to ensure each file is accepted. If you are not ready to process a file immediately and still need to add the remaining ones, you may include a simple prompt (e.g., “here’s my file x of y”) along with it. All your files are retained within the chat chatID context. The mm-01 model, with its 1M context length, can process large amounts of data and provide fast, accurate responses. The ds-r1 model, however, may start hallucinating and returning incorrect responses once the context grows beyond its supported 64K, ignoring files in the chatID context.
    Note that you can repeat the file parameter with different values multiple times; this is supported by FormData. A fetch-based sketch of this request follows below.
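
A minimal fetch-based sketch of this multipart request, assuming Node.js 18+ (built-in fetch, FormData, and Blob); the token, prompt, and Notes.txt file are placeholders:

  import { readFile } from "node:fs/promises";

  const token = "API token"; // placeholder

  const form = new FormData();
  form.append("prompt", "Summarize the attached notes"); // placeholder prompt
  form.append("model", "mm-01");
  form.append("stream", "false");
  // Repeat the file key to attach multiple files (up to 10).
  const bytes = await readFile("Notes.txt"); // placeholder file
  form.append("file", new Blob([bytes], { type: "text/plain" }), "Notes.txt");

  const response = await fetch("https://api.useapi.net/v1/minimax/llm", {
    method: "POST",
    // Do not set Content-Type manually; fetch supplies the multipart boundary.
    headers: { "Authorization": `Bearer ${token}` },
    body: form,
  });
  console.log(await response.json());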
Responses
  • 200 OK

    For the responses below, we used a simple request with the prompt tell me a joke and the model mm-01.

    A. stream parameter set to true, response Content-Type: text/event-stream:

      event:send_result
      data:{"type":1,"data":{"sendResult":{"userMsgID":"829104756293847156","chatID":"460283175649302817","systemMsgID":"167982430562198437","msgContent":"tell me a joke","wordSpreadRate":11,"chatTitle":"tell me a joke","canSearch":false,"canEdit":true,"extra":{"longCutCopyWriting":""},"isPreSendResult":false}},"statusInfo":{"code":0,"message":"Success"}}
    
      event:message_result
      data:{"type":2,"data":{"messageResult":{"msgID":"167982430562198437","chatID":"460283175649302817","msgType":"system","content":"Sure, here's a light-hearted joke","requestStatus":6,"isEnd":1,"msgSubType":0,"parentMsgID":"829104756293847156","extra":{"netSearchStatus":{"finalStatus":{"statusCode":0}},"warningText":""},"canRetry":false}},"statusInfo":{"code":0,"message":"success"}}
    
      event:message_result
      data:{"type":2,"data":{"messageResult":{"msgID":"167982430562198437","chatID":"460283175649302817","msgType":"system","content":"Sure, here's a light-hearted joke for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!\n\nI hope that","requestStatus":6,"isEnd":1,"msgSubType":0,"parentMsgID":"829104756293847156","extra":{"netSearchStatus":{"finalStatus":{"statusCode":0}},"warningText":""},"canRetry":false}},"statusInfo":{"code":0,"message":"success"}}
    
      event:message_result
      data:{"type":2,"data":{"messageResult":{"msgID":"167982430562198437","chatID":"460283175649302817","msgType":"system","content":"Sure, here's a light-hearted joke for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!\n\nI hope that brought a smile to your face! If you have any other requests or topics you'd like to","requestStatus":6,"isEnd":1,"msgSubType":0,"parentMsgID":"829104756293847156","extra":{"netSearchStatus":{"finalStatus":{"statusCode":0}},"warningText":""},"canRetry":false}},"statusInfo":{"code":0,"message":"success"}}
    
      event:message_result
      data:{"type":2,"data":{"messageResult":{"msgID":"167982430562198437","chatID":"460283175649302817","msgType":"system","content":"Sure, here's a light-hearted joke for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!\n\nI hope that brought a smile to your face! If you have any other requests or topics you'd like to discuss, feel free to let me know.","requestStatus":1,"isEnd":0,"msgSubType":0,"parentMsgID":"829104756293847156","extra":{"netSearchStatus":{"finalStatus":{"statusCode":0}},"feedbackStatus":{},"replyMsgType":"text","warningText":""},"canRetry":true}},"statusInfo":{"code":0,"message":"success"}}
    
      event:close_chunk
      data:{"type":8}
    
    

    B. stream parameter set to false (default), response Content-Type: application/json:

      {
        "request": {
          "sendResult": {
            "userMsgID": "930471528374615029",
            "chatID": "471639820571230486",
            "systemMsgID": "628940173205471392",
            "msgContent": "tell me a joke",
            "wordSpreadRate": 11,
            "chatTitle": "tell me a joke",
            "canSearch": false,
            "canEdit": true,
            "extra": {
              "longCutCopyWriting": ""
            },
            "isPreSendResult": false
          }
        },
        "response": {
          "messageResult": {
            "msgID": "628940173205471392",
            "chatID": "471639820571230486",
            "msgType": "system",
            "content": "Sure, here's a light-hearted joke for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!\n\nI hope that brought a smile to your face! If you have any other requests or topics you'd like to discuss, feel free to let me know.",
            "requestStatus": 1,
            "isEnd": 0,
            "msgSubType": 0,
            "parentMsgID": "930471528374615029",
            "extra": {
              "netSearchStatus": {
                "finalStatus": {
                  "statusCode": 0
                }
              },
              "feedbackStatus": {},
              "replyMsgType": "text",
              "warningText": ""
            },
            "canRetry": true
          }
        }
      }
    
  • 400 Bad Request

    {
      "error": "<Error message>"
    }
    
  • 401 Unauthorized

    {
      "error": "Unauthorized"
    }
    
  • 422 Unprocessable Content

    LLM is refusing to process your file(s), likely due to content moderation. This applies when the stream parameter is set to false (response Content-Type: application/json).

    {
      "error": "Invalid file. Please try another one"
    }
    
  • 429 Too Many Requests

    You have reached the maximum number of allowed parallel executions. Please wait until the running ones are completed and try again (a retry sketch follows at the end of this section).

    {
        "error": "The previous request has not been completed yet. Please try again later."
    }
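
The retry sketch referenced above: POST a request, retry on 429 with exponential backoff (the delays and attempt count are arbitrary choices, not API requirements), then read content and chatID from the application/json response to continue the thread:

  const token = "API token"; // placeholder

  // Retry on 429 with exponential backoff; any other status is returned as-is.
  async function postWithRetry(url, init, maxAttempts = 5) {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
      const response = await fetch(url, init);
      if (response.status !== 429) return response;
      const delayMs = 1000 * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
    throw new Error("Gave up after repeated 429 responses");
  }

  const response = await postWithRetry("https://api.useapi.net/v1/minimax/llm", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prompt: "tell me a joke", model: "mm-01" }),
  });
  const result = await response.json();

  if (result.error) {
    console.error(`LLM error ${result.error.code}: ${result.error.message}`);
  } else {
    console.log(result.response?.messageResult?.content);
    // Pass this chatID in the next request to stay in the same thread.
    console.log(result.response?.messageResult?.chatID);
  }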
    
Model Content-Type: text/event-stream

See server-sent events to learn more about the event format.

event:<event name>
data:<See SSE model>

event:<event name>
data:<See SSE model>

…
SSE model
{ // TypeScript, all fields are optional
  type: number
  data: {
    messageResult: {
      msgID: string
      chatID: string
      msgType: string
      content: string
      requestStatus: number
      isEnd: number
      msgSubType: number
      parentMsgID: string
      extra: {
        netSearchStatus: {
          finalStatus: {
            statusCode: number
          }
        }
        warningText: string
      }
      canRetry: boolean
    }
  }
  statusInfo: {
    code: number
    message: string
  }
}
SSE parser

When parsing the SSE response, we look for the very last occurrence of event:message_result whose data payload has type set to 2. A non-zero statusInfo.code in any event indicates that an error has occurred.

SSE parser code (JavaScript):
  /**
   * Generic SSE stream parser function.
   * It reads the response stream chunk by chunk, decodes text,
   * and reconstructs complete SSE events. When an event is complete,
   * it calls the provided onEvent callback with the event name and event data.
   *
   * @param {Response} response - Fetch API Response object with a body stream.
   * @param {Function} onEvent - Callback receiving (eventName, eventData) when an SSE event is complete.
   */
  async function parseSSEStream(response, onEvent) {
    const reader = response.body.getReader();
    const decoder = new TextDecoder("utf-8");
    let buffer = "";
    let currentEvent = { event: null, data: "" };

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      const chunk = decoder.decode(value, { stream: true });
      buffer += chunk;
      const lines = buffer.split(/\r?\n/);
      // Preserve the last (possibly incomplete) line in the buffer.
      buffer = lines.pop();
      for (const line of lines) {
        if (line.trim() === "") {
          // Blank line indicates the end of an event.
          if (currentEvent.event && currentEvent.data) {
            onEvent(currentEvent.event, currentEvent.data);
          }
          currentEvent = { event: null, data: "" };
        } else {
          if (line.startsWith("event:")) {
            currentEvent.event = line.slice("event:".length).trim();
          } else if (line.startsWith("data:")) {
            // Append data for events that span multiple lines,
            // joining the lines with a newline as the SSE spec prescribes.
            if (currentEvent.data) currentEvent.data += "\n";
            currentEvent.data += line.slice("data:".length).trim();
          }
        }
      }
    }

    // Process any remaining event data in the buffer.
    if (currentEvent.event && currentEvent.data) {
      onEvent(currentEvent.event, currentEvent.data);
    }

    await reader.closed;
  }

  /**
   * Process an SSE stream from a POST request.
   * This function sends a request to the provided URL with the given payload,
   * then uses the generic SSE parser to process events.
   *
   * The provided eventHandler callback is invoked for each processed event.
   *
   * @param {string} url - The API endpoint URL.
   * @param {object} payload - The JSON payload to be sent in the POST body (set stream: true for SSE).
   * @param {string} token - API token used for the Authorization: Bearer header.
   * @param {Function} eventHandler - Callback receiving (eventName, parsedEventData).
   * @returns {Promise<Headers>} - Resolves with the response headers (in case extra metadata is needed).
   */
  async function processSSE(url, payload, token, eventHandler) {
    try {
      const response = await fetch(url, {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${token}`,
          "Content-Type": "application/json"
        },
        body: JSON.stringify(payload)
      });

      if (!response.ok) {
        const errorText = await response.text();
        throw new Error(`HTTP error! status: ${response.status}\n${errorText}`);
      }

      // Define how to process each SSE event.
      await parseSSEStream(response, (eventName, eventDataStr) => {
        try {
          // Parse the event data as JSON.
          const eventData = JSON.parse(eventDataStr);

          // If the event data contains a statusInfo with a non-zero code, treat it as an error.
          if (eventData.statusInfo && eventData.statusInfo.code !== 0) {
            eventHandler("error", { code: eventData.statusInfo.code, message: eventData.statusInfo.message });
            return;
          }

          // Process specific event types.
          if (
            eventName === "message_result" &&
            eventData.type === 2 &&
            eventData.data &&
            eventData.data.messageResult
          ) {
            // Pass the message result to the event handler.
            eventHandler(eventName, eventData.data.messageResult);
          } else {
            // For any other events, pass the event name and full parsed data.
            eventHandler(eventName, eventData);
          }
        } catch (err) {
          console.error("Failed to parse event data:", err);
        }
      });

      // Return headers if external logic needs to extract additional information.
      return response.headers;
    } catch (error) {
      console.error(`Process SSE error: ${error.message}`);
      throw error;
    }
  }

  // Example usage:
  // Replace the token and the event handling logic with your own.
  processSSE(
    "https://api.useapi.net/v1/minimax/llm",
    { prompt: "tell me a joke", stream: true },
    "API token",
    (eventName, eventData) => {
      // For example, simply log the event.
      console.log(`Event: ${eventName}`, eventData);
    }
  );
Model Content-Type: application/json
{ // TypeScript, all fields are optional
    request: {
        sendResult: {
            userMsgID: string
            chatID: string
            systemMsgID: string
            msgContent: string
            wordSpreadRate: number
            chatTitle: string
            canSearch: boolean
            canEdit: boolean
            extra: {
                form: {
                    formType: number
                    content: string
                    path?: string
                    fileID?: string
                    status?: number
                    fileByte?: number
                }[]
                longCutCopyWriting: string
            }
            isPreSendResult: boolean
        }
    }
    response: {
        messageResult: {
            msgID: string
            chatID: string
            msgType: string
            content: string
            requestStatus: number
            isEnd: number
            msgSubType: number
            parentMsgID: string
            extra: {
                netSearchStatus: {
                    finalStatus: {
                        statusCode: number
                    }
                }
                feedbackStatus: { [key: string]: unknown }
                replyMsgType: string
                warningText: string
            }
            canRetry: boolean
        }
    },
    error: {
        code: number
        httpCode: number
        message: string
        serviceTime: number
        requestID: string
        debugInfo: string
        serverAlert: number
    }
}
Examples
  • curl -H "Accept: application/json" \
         -H "Content-Type: application/json" \
         -H "Authorization: Bearer …" \
         -X POST "https://api.useapi.net/v1/minimax/llm" \
         -d '{"prompt": "…"}'
    
  • const prompt = "text prompt";      
    const apiUrl = `https://api.useapi.net/v1/minimax/llm`; 
    const token = "API token";
    const data = { 
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${token}`,
        'Content-Type': 'application/json' }
    };
    data.body = JSON.stringify({ 
      prompt
    });
    const response = await fetch(apiUrl, data);
    const result = await response.json();
    console.log("response", {response, result});
    
  • import requests
    prompt = "text prompt"      
    apiUrl = f"https://api.useapi.net/v1/minimax/llm" 
    token = "API token"
    headers = {
        "Content-Type": "application/json", 
        "Authorization" : f"Bearer {token}"
    }
    body = {
        "prompt": f"{prompt}"
    }
    response = requests.post(apiUrl, headers=headers, json=body)
    print(response, response.json())
    