OpenAI Assistants
Test and compare different OpenAI Assistants
The Assistants API from OpenAI combines their language models with tools like file search and code interpreter. With Empirical, you can test and evaluate your Assistant configurations to improve response quality.
Run configuration
In your config file, set type to assistant, and specify the unique identifier of the OpenAI Assistant in the assistant_id field.
The prompt key specifies the message that the user sends to the assistant. In the config below, we refer to the user_query input from the test dataset.
The run configuration accepts the following fields:
type: Should be "assistant"
assistant_id: Unique identifier of the Assistant object; it must belong to the same account as the OPENAI_API_KEY environment variable
prompt: Incoming message from the user, sent as the first message of the thread
parameters: JSON object of parameters to customize the Assistant (see more below)
name: A custom name or label for this run
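A minimal sketch of such a run configuration, based on the fields above. The assistant_id value, the run name, and the surrounding runs/dataset structure and sample inputs are illustrative placeholders, not values from your account:

```json
{
  "runs": [
    {
      "type": "assistant",
      "assistant_id": "asst_abc123",
      "prompt": "{{user_query}}",
      "parameters": {},
      "name": "assistant-baseline"
    }
  ],
  "dataset": {
    "samples": [
      { "inputs": { "user_query": "What is the refund policy?" } }
    ]
  }
}
```

Here {{user_query}} pulls the user_query input from each dataset sample and sends it as the first message of the thread.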
Example
The Assistants example illustrates a complete configuration.
Parameters
The parameters field in the run configuration allows you to customize the behavior of the OpenAI Assistant. These can be modified in the JSON configuration, or in the web UI to generate run variations.
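As an illustration, assuming the parameters object passes through Assistant run-level options such as model, temperature, and tools (the exact supported keys may differ), the value of parameters could look like:

```json
{
  "model": "gpt-4o",
  "temperature": 0.2,
  "tools": [{ "type": "file_search" }]
}
```

Changing one of these values in a second run lets you compare the two configurations side by side.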
Limitations
Our support for OpenAI Assistants is limited to single-turn behavior (one message from the user).