Get a list of batch jobs for your organization and user.
```python
from mistralai import Mistral
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

    res = mistral.batch.jobs.list(page=0, page_size=100, created_by_me=False, order_by="-created")

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `page` | `Optional[int]` | ➖ | N/A |
| `page_size` | `Optional[int]` | ➖ | N/A |
| `model` | `OptionalNullable[str]` | ➖ | N/A |
| `agent_id` | `OptionalNullable[str]` | ➖ | N/A |
| `metadata` | `Dict[str, Any]` | ➖ | N/A |
| `created_after` | `date` | ➖ | N/A |
| `created_by_me` | `Optional[bool]` | ➖ | N/A |
| `status` | `List[models.BatchJobStatus]` | ➖ | N/A |
| `order_by` | `Optional[models.OrderBy]` | ➖ | N/A |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
Response: `models.ListBatchJobsResponse`
| Error Type | Status Code | Content Type |
| ---------- | ----------- | ------------ |
| `errors.SDKError` | 4XX, 5XX | `*/*` |
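To walk through more than one page of results, the `list` call above can be wrapped in a small loop. This is a hedged sketch: the `iter_batch_jobs` helper is illustrative, not part of the SDK, and it assumes the `ListBatchJobsResponse` exposes the jobs under a `data` attribute, which may differ in your SDK version.

```python
def iter_batch_jobs(client, page_size=100, **filters):
    """Yield batch jobs page by page until an empty page is returned.

    `client` is assumed to expose `batch.jobs.list(page=..., page_size=...)`
    returning an object with a `data` list of jobs; adjust the attribute
    name to match your SDK version. Extra keyword arguments (e.g. `status`,
    `created_by_me`) are forwarded to each `list` call.
    """
    page = 0
    while True:
        res = client.batch.jobs.list(page=page, page_size=page_size, **filters)
        jobs = getattr(res, "data", None) or []
        if not jobs:
            return
        yield from jobs
        page += 1
```

An empty page ends the iteration; server-side filters such as `status` or `created_by_me` pass through unchanged.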
Create a new batch job; it will be queued for processing.
```python
from mistralai import Mistral
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

    res = mistral.batch.jobs.create(endpoint="/v1/moderations", model="mistral-small-latest", timeout_hours=24)

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description | Example |
| --------- | ---- | -------- | ----------- | ------- |
| `endpoint` | `models.APIEndpoint` | ✔️ | N/A | |
| `input_files` | `List[str]` | ➖ | The list of input files to be used for batch inference. These should be JSONL files, where each line carries, in a `"body"` field, the input data corresponding to the body of the request for batch inference. For example:<br/>`{"custom_id": "0", "body": {"max_tokens": 100, "messages": [{"role": "user", "content": "What is the best French cheese?"}]}}`<br/>`{"custom_id": "1", "body": {"max_tokens": 100, "messages": [{"role": "user", "content": "What is the best French wine?"}]}}` | |
| `requests` | `List[models.BatchRequest]` | ➖ | N/A | |
| `model` | `OptionalNullable[str]` | ➖ | The model to be used for batch inference. | `mistral-small-latest`, `mistral-medium-latest` |
| `agent_id` | `OptionalNullable[str]` | ➖ | If you want to use a specific agent from the deprecated agents API for batch inference, specify the agent ID here. | |
| `metadata` | `Dict[str, str]` | ➖ | The metadata of your choice to be associated with the batch inference job. | |
| `timeout_hours` | `Optional[int]` | ➖ | The timeout in hours for the batch inference job. | |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. | |
Response: `models.BatchJob`
| Error Type | Status Code | Content Type |
| ---------- | ----------- | ------------ |
| `errors.SDKError` | 4XX, 5XX | `*/*` |
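The JSONL `input_files` format described above can be produced with a small helper. This is a minimal sketch; the `write_batch_input` function and the file name are illustrative, not part of the SDK:

```python
import json

def write_batch_input(path, bodies):
    """Write one {"custom_id": ..., "body": ...} JSON object per line,
    matching the input_files JSONL format described above."""
    with open(path, "w", encoding="utf-8") as f:
        for i, body in enumerate(bodies):
            f.write(json.dumps({"custom_id": str(i), "body": body}) + "\n")

bodies = [
    {"max_tokens": 100, "messages": [{"role": "user", "content": "What is the best French cheese?"}]},
    {"max_tokens": 100, "messages": [{"role": "user", "content": "What is the best French wine?"}]},
]
write_batch_input("batch_input.jsonl", bodies)
```

The resulting file would then typically be uploaded via the files API, with its ID passed in `input_files`.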
Get the details of a batch job by its UUID. Set `inline=True` to return the results inline in the response.
```python
from mistralai import Mistral
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

    res = mistral.batch.jobs.get(job_id="4017dc9f-b629-42f4-9700-8c681b9e7f0f")

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `job_id` | `str` | ✔️ | N/A |
| `inline` | `OptionalNullable[bool]` | ➖ | If True, return the results inline in the response. |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
Response: `models.BatchJob`
| Error Type | Status Code | Content Type |
| ---------- | ----------- | ------------ |
| `errors.SDKError` | 4XX, 5XX | `*/*` |
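Since a freshly created job starts out queued, `get` is typically called in a polling loop until the job finishes. A minimal sketch: the `wait_for_job` helper is illustrative, not part of the SDK, and the terminal status names (`SUCCESS`, `FAILED`, `CANCELLED`, `TIMEOUT_EXCEEDED`) are assumptions about `models.BatchJobStatus` values; check your SDK version.

```python
import time

# Assumed terminal values of models.BatchJobStatus; verify against your SDK.
TERMINAL_STATUSES = {"SUCCESS", "FAILED", "CANCELLED", "TIMEOUT_EXCEEDED"}

def wait_for_job(client, job_id, interval=5.0, sleep=time.sleep):
    """Poll batch.jobs.get(job_id=...) until the job reaches a terminal
    status, sleeping `interval` seconds between calls, then return it."""
    while True:
        job = client.batch.jobs.get(job_id=job_id)
        if str(job.status) in TERMINAL_STATUSES:
            return job
        sleep(interval)
```

The injectable `sleep` argument keeps the helper testable; in production the default `time.sleep` applies.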
Request the deletion of a batch job.
```python
from mistralai import Mistral
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

    res = mistral.batch.jobs.delete(job_id="d9e71426-5791-49ad-b8d1-cf0d90d1b7d0")

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `job_id` | `str` | ✔️ | N/A |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
Response: `models.DeleteBatchJobResponse`
| Error Type | Status Code | Content Type |
| ---------- | ----------- | ------------ |
| `errors.SDKError` | 4XX, 5XX | `*/*` |
Request the cancellation of a batch job.
```python
from mistralai import Mistral
import os

with Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
) as mistral:

    res = mistral.batch.jobs.cancel(job_id="4fb29d1c-535b-4f0a-a1cb-2167f86da569")

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `job_id` | `str` | ✔️ | N/A |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
Response: `models.BatchJob`
| Error Type | Status Code | Content Type |
| ---------- | ----------- | ------------ |
| `errors.SDKError` | 4XX, 5XX | `*/*` |