Authorizations
Use the following format for authentication: Bearer <your api key>
Headers
Configures the desired response language for HTTP requests.
Allowed value: en-US,en. Default: "en-US,en"
Body
- Text Model
- Vision Model
The model code to be called. GLM-4.6 is the latest flagship model series: foundational models specifically designed for agent applications.
Options: glm-4.6, glm-4.5, glm-4.5-air, glm-4.5-x, glm-4.5-airx, glm-4.5-flash, glm-4-32b-0414-128k. Default: "glm-4.6"
The current conversation message list as the model's prompt input, provided in JSON array format, e.g., {"role": "user", "content": "Hello"}. Possible message types include system messages, user messages, assistant messages, and tool messages. Note: The input must not consist of system messages or assistant messages only.
- User Message
- System Message
- Assistant Message
- Tool Message
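As a minimal sketch, a conversation mixing the message types above might be built like this. The role/content field names follow the JSON example given for this parameter; the assistant reply text is illustrative only.

```python
import json

# A minimal conversation: system + user + assistant + follow-up user.
# Per the docs, the input must not consist of system or assistant messages only.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Summarize our chat."},
]

# Serialize as part of a request body.
payload = json.dumps({"model": "glm-4.6", "messages": messages})
```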
Provided by the caller and must be unique; used to distinguish each request. If not provided, the platform will generate one by default.
When do_sample is true, the sampling strategy is enabled; when do_sample is false, sampling parameters such as temperature and top_p will not take effect. Default value is true.
true
This parameter should be set to false or omitted for synchronous calls; the model then returns all generated content at once. Default value is false. If set to true, the model returns the generated content in chunks via a standard event stream (server-sent events). When the event stream ends, a data: [DONE] message is returned.
false
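A sketch of consuming the event stream described above. The data: line prefix and the data: [DONE] end-of-stream sentinel come from the description; the field names inside each chunk (choices, delta, content) are assumptions based on common chat-completions conventions, not taken verbatim from this page.

```python
import json

def parse_sse_lines(lines):
    """Yield decoded JSON chunks from 'data: ...' lines until 'data: [DONE]'."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives / blank lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":  # end-of-stream sentinel per the docs
            break
        yield json.loads(data)

# Simulated stream; chunk fields are illustrative only.
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = "".join(c["choices"][0]["delta"]["content"] for c in parse_sse_lines(sample))
```

In a real call the lines would come from the HTTP response body rather than a hard-coded list.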
Only supported by GLM-4.5 series and later models. This parameter controls whether the model enables chain-of-thought reasoning.
Sampling temperature, controls the randomness of the output; must be a number within the range [0.0, 1.0]. The GLM-4.6 series default value is 1.0, the GLM-4.5 series default value is 0.6, and the GLM-4-32B-0414-128K default value is 0.75.
0 <= x <= 1
An alternative to temperature sampling (nucleus sampling); value range is (0.0, 1.0]. The GLM-4.6 and GLM-4.5 series default value is 0.95, and the GLM-4-32B-0414-128K default value is 0.9.
0 <= x <= 1
The maximum number of tokens for model output. The GLM-4.6 series supports a 128K maximum output, the GLM-4.5 series supports 96K, the GLM-4.5V series supports 16K, and GLM-4-32B-0414-128K supports 16K.
1 <= x <= 98304
Whether to enable streaming responses for function calls. Default value is false. Only supported by GLM-4.6. Refer to Stream Tool Call.
false
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
- Function Call
- Retrieval
- Web Search
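A hedged sketch of a single function tool entry. The nested name/description/parameters (JSON Schema) layout follows common chat-completions conventions and is an assumption; get_weather is a hypothetical function name.

```python
import json

# One function tool; up to 128 such entries are allowed per the docs.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function name
            "description": "Get the current weather for a city.",
            "parameters": {  # JSON Schema describing the arguments
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

body = json.dumps({"model": "glm-4.6", "tools": tools, "tool_choice": "auto"})
```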
Controls how the model selects which function to call. Only applicable when the tool type is function. The default value is auto, and only auto is supported.
Stop word list. Generation stops when the model encounters any of the specified strings. Currently, only one stop word is supported, in the format ["stop_word1"].
Specifies the response format of the model. Defaults to text. Supports two formats: { "type": "text" } plain text mode, returns natural language text; { "type": "json_object" } JSON mode, returns valid JSON data. When using JSON mode, it's recommended to explicitly request JSON output in the prompt.
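A minimal example of requesting JSON mode, following the recommendation above to also ask for JSON explicitly in the prompt:

```python
import json

# Request JSON mode; the prompt itself also asks for JSON, as recommended.
payload = {
    "model": "glm-4.6",
    "messages": [
        {"role": "user", "content": "Return the capital of France as JSON."}
    ],
    "response_format": {"type": "json_object"},
}
encoded = json.dumps(payload)
```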
Unique ID for the end user, 6-128 characters. Avoid using sensitive information.
6 - 128
Response
Processing successful
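Putting the pieces together, a full request can be sketched with the standard library as below. The endpoint URL is a placeholder assumption (use the one from your account documentation); the Authorization format follows the Authorizations section and the Accept-Language header follows the Headers section. The request is constructed but not sent.

```python
import json
import urllib.request

# Placeholder endpoint; substitute the real chat-completions URL for your account.
URL = "https://api.example.com/v4/chat/completions"

payload = json.dumps({
    "model": "glm-4.6",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode("utf-8")

req = urllib.request.Request(
    URL,
    data=payload,
    headers={
        "Authorization": "Bearer <your api key>",  # format per the Authorizations section
        "Accept-Language": "en-US,en",             # desired response language
        "Content-Type": "application/json",
    },
    method="POST",
)
# Sending is omitted here; call urllib.request.urlopen(req) to execute the request.
```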