Question

Use the contributor_evaluation.question task type to ask contributors questions in the annotation UI.

Example Usage

Assume we have a categorical correctness question with ID a0b25d45-7c6c-4480-9871-61ae0fd0b819.
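The question itself is created ahead of time and is referenced by its ID in the task configuration below. Its exact configuration is not shown here, but as a rough, purely illustrative sketch (the label and answer options are assumptions, not part of this example), such a question might look like:

# Illustrative only: the question is configured separately and is only
# referenced by its ID in the task configuration below.
question = {
  "id": "a0b25d45-7c6c-4480-9871-61ae0fd0b819",
  "type": "categorical",                        # a single-choice (categorical) question
  "label": "Is the generated output correct?",  # assumed wording
  "options": ["Correct", "Incorrect"]           # assumed answer options
}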

As part of our evaluation, we can create an annotation queue that asks contributors to answer this question, producing the same correctness result as the guided decoding example:

# Assumes `client` is an already-initialized SDK client.
client.evaluations.create(
  name="Example Correctness Evaluation",
  data=[
    {
      "input": "What color is the sky?",
      "expected_output": "Blue",
      "generated_output": "The sky appears blue during ..."
    },
    ...
  ],
  tasks=[
    {
      "task_type": "contributor_evaluation.question",
      "alias": "correctness",
      "configuration": {
        "question_id": "a0b25d45-7c6c-4480-9871-61ae0fd0b819",
        "layout": {
          "direction": "column",
          "children": [
# Component displaying the user's query:
            { "data": "item.input", "label": "User Query" },
# Nested container placing the two outputs side by side:
            {
              "direction": "row",
              "children": [
                { "data": "item.expected_output", "label": "Expected Output" },
                { "data": "item.generated_output", "label": "Generated Output" }
              ]
            }
          ]
        },
        "queue_id": "default"
      }
    }
  ]
)
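
The layout above stacks its children in a column: the user query is rendered across the top, and the nested container beneath it arranges the expected and generated outputs side by side in a row. The queue_id of "default" routes the resulting items to the default annotation queue, where contributors see the rendered layout alongside the question referenced by question_id.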