📘 Get code for this guide

This guide is based on the Flexible Evaluation Recipe.

With Flexible Evaluations, you can evaluate an application variant that has multiple inputs/outputs and multiple steps.

For example, imagine an application that answers math questions using stock data from the last few days, such as “What was the percent change in AAPL over the last three days?”.

An evaluation dataset for this application would have two inputs:

  1. query: the question the user asked the application
  2. stock_prices: a list of the stock prices for AAPL over the last five days

An evaluation dataset for this application will look like the following:
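To make this concrete, here is a minimal sketch of two test cases as Python dictionaries. The schema is simplified and the prices are made-up sample values, but the field names (query, stock_prices, expected_output) match the data_loc paths used later in this guide:

test_cases = [
    {
        "input": {
            "query": "What was the percent change in AAPL over the last three days?",
            # Made-up closing prices for the last five days, oldest first
            "stock_prices": [189.84, 189.87, 191.04, 193.15, 192.58],
        },
        "expected_output": "AAPL changed by +0.81% over the last three days.",
    },
    {
        "input": {
            "query": "What was the highest AAPL price over the last five days?",
            "stock_prices": [189.84, 189.87, 191.04, 193.15, 192.58],
        },
        "expected_output": "The highest AAPL price over the last five days was 193.15.",
    },
]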

We can generate outputs for these test cases (see the Flexible Evaluation Recipe for an example!) and view them in the test case outputs table under our application variant.
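The actual output generation happens through the platform (see the Recipe), but as a purely local illustration of what this variant does, a toy implementation might compute the answer directly from the test case input. Everything below (percent_change, run_variant) is a hypothetical sketch, not part of any SDK:

def percent_change(prices: list[float], days: int) -> float:
    """Percent change across the last `days` prices (oldest in the window to newest)."""
    start, end = prices[-days], prices[-1]
    return (end - start) / start * 100

def run_variant(test_case: dict) -> dict:
    # A toy "application variant": answer the query using the stock prices.
    # (It only handles the percent-change style of query.)
    change = percent_change(test_case["input"]["stock_prices"], days=3)
    return {"output": f"AAPL changed by {change:+.2f}% over the last three days."}

outputs = [run_variant(tc) for tc in test_cases]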

Because the dataset has two different inputs, we need to configure what the annotators can see. As part of the flexible evaluations process, there’s an object, called an Annotation Configuration, that tells the platform how to render the annotation UI. The annotation configuration can pull data from different parts of the test case, test case output, and trace (using the data_loc field). You can see the full details of how annotation configurations work, and how to construct them, here.

annotation_config = {
    "direction": "row", # we could also change this to "col"
    "components": [
        [
            # Let's put the query and the output side by side since that's what we care about most
            { "data_loc": ["test_case_data", "input", "query"], "label": "Query" }, # Pull the "query" from the test case's input
            { "data_loc": ["test_case_output", "output"] } # Pull the entire output from running the application on the test case
        ],
        [
            # Pull the entire expected output from the test case
            { "data_loc": ["test_case_data", "expected_output"] }
        ],
        [
            # Pull the stock price from the test case's input
            { "data_loc": ["test_case_data", "input", "stock_prices"] }
        ],
    ]
}

Because direction is set to "row", the outer array under the components key is rendered as rows, and each inner array is displayed as the columns of that row. This configuration tells the UI to display three rows:

  • The first row will have two boxes side by side: one showing Query as the label with the contents of query from the test case’s input, and one showing the output value of the test_case_output from the variant above.
  • The second row will show expected_output as the label and the contents of expected_output from the dataset above.
  • The third row will show stock_prices as the label and the contents of stock_prices from the test case’s input.
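To make the data_loc mechanism concrete, here is a hedged sketch of how a path like ["test_case_data", "input", "query"] might resolve against the data available to the annotation UI. The resolver below is our own illustration of the idea, not the platform’s actual implementation:

from functools import reduce

def resolve_data_loc(record: dict, data_loc: list[str]):
    # Walk the nested record one key at a time, e.g.
    # ["test_case_data", "input", "query"] -> record["test_case_data"]["input"]["query"]
    return reduce(lambda node, key: node[key], data_loc, record)

record = {
    "test_case_data": test_cases[0],   # from the dataset sketch above
    "test_case_output": outputs[0],    # from the toy variant above
}

resolve_data_loc(record, ["test_case_data", "input", "query"])
# -> "What was the percent change in AAPL over the last three days?"
resolve_data_loc(record, ["test_case_output", "output"])
# -> "AAPL changed by +0.81% over the last three days."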

Here is an example of what the Annotation UI will look like based on this configuration:

Once the evaluation is complete, you can see the annotation results by visiting the evaluations page and clicking “Table”.

Flexible evaluation runs have many more powerful features. For instance, you can attach traces (the intermediate steps of the application’s execution) to each output and examine them while annotating, or use a different annotation configuration for different questions.

To learn more, move on to the full guide to flexible evaluation or read the Flexible Evaluation Recipe.