
Commit 5967403

docs
1 parent 19e8fd8 commit 5967403

File tree

2 files changed (+0 additions, -12 deletions)


docs/en/latest/plugins/ai-proxy-multi.md

Lines changed: 0 additions & 6 deletions
@@ -63,12 +63,6 @@ Proxying requests to OpenAI is supported now. Other LLM services will be support
 | provider.auth | Yes | object | Authentication details, including headers and query parameters. | |
 | provider.auth.header | No | object | Authentication details sent via headers. Header name must match `^[a-zA-Z0-9._-]+$`. | |
 | provider.auth.query | No | object | Authentication details sent via query parameters. Keys must match `^[a-zA-Z0-9._-]+$`. | |
-| provider.options.max_tokens | No | integer | Defines the maximum tokens for chat or completion models. | 256 |
-| provider.options.input_cost | No | number | Cost per 1M tokens in the input prompt. Minimum is 0. | |
-| provider.options.output_cost | No | number | Cost per 1M tokens in the AI-generated output. Minimum is 0. | |
-| provider.options.temperature | No | number | Defines the model's temperature (0.0 - 5.0) for randomness in responses. | |
-| provider.options.top_p | No | number | Defines the top-p probability mass (0 - 1) for nucleus sampling. | |
-| provider.options.stream | No | boolean | Enables streaming responses via SSE. | |
 | provider.override.endpoint | No | string | Custom host override for the AI provider. | |
 | timeout | No | integer | Request timeout in milliseconds (1-60000). | 30000 |
 | keepalive | No | boolean | Enables keepalive connections. | true |
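For context, the rows removed above are the `provider.options.*` tuning fields; the attributes that remain documented could be sketched in a plugin configuration roughly like this (a hypothetical illustration assembled from the table, not taken from this commit — the header name `Authorization`, the key placeholder, and the endpoint URL are all assumptions):

```json
{
  "ai-proxy-multi": {
    "provider": {
      "auth": {
        "header": {
          "Authorization": "Bearer <your-api-key>"
        }
      },
      "override": {
        "endpoint": "https://openai.internal.example.com/v1/chat/completions"
      }
    },
    "timeout": 30000,
    "keepalive": true
  }
}
```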

docs/en/latest/plugins/ai-proxy.md

Lines changed: 0 additions & 6 deletions
@@ -56,12 +56,6 @@ Proxying requests to OpenAI is supported now. Other LLM services will be support
 | model.provider | Yes | String | Name of the AI service provider (`openai`). |
 | model.name | Yes | String | Model name to execute. |
 | model.options | No | Object | Key/value settings for the model |
-| model.options.max_tokens | No | Integer | Defines the max tokens if using chat or completion models. Default: 256 |
-| model.options.input_cost | No | Number | Cost per 1M tokens in your prompt. Minimum: 0 |
-| model.options.output_cost | No | Number | Cost per 1M tokens in the output of the AI. Minimum: 0 |
-| model.options.temperature | No | Number | Matching temperature for models. Range: 0.0 - 5.0 |
-| model.options.top_p | No | Number | Top-p probability mass. Range: 0 - 1 |
-| model.options.stream | No | Boolean | Stream response by SSE. |
 | override.endpoint | No | String | Override the endpoint of the AI provider |
 | timeout | No | Integer | Timeout in milliseconds for requests to LLM. Range: 1 - 60000. Default: 30000 |
 | keepalive | No | Boolean | Enable keepalive for requests to LLM. Default: true |
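As with ai-proxy-multi, only the `model.options.*` rows are deleted here; the attributes still listed in the table could be combined in a configuration sketch like the following (a hypothetical example — the model name `gpt-4` and the endpoint URL are illustrative assumptions, not from this commit):

```json
{
  "ai-proxy": {
    "model": {
      "provider": "openai",
      "name": "gpt-4"
    },
    "override": {
      "endpoint": "https://openai.internal.example.com/v1/chat/completions"
    },
    "timeout": 30000,
    "keepalive": true
  }
}
```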
