Defer evaluation tasks to human contributors.
Use the `contributor_evaluation.question` task type to ask contributors questions in the annotation UI.
For example, the `correctness` question `a0b25d45-7c6c-4480-9871-61ae0fd0b819` is configured as sketched below.
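The snippet below is only a sketch of what such a question configuration might look like, assuming a JSON-style task payload; apart from the task type, the question id, and the `correctness` criterion named above, every field name (`question`, `criterion`, `choices`) is an illustrative assumption rather than the platform's documented schema.

```python
import json

# Sketch of a contributor question task. Only the task type, the id, and the
# correctness criterion come from the example above; the remaining field
# names and values are assumed placeholders.
question_task = {
    "task_type": "contributor_evaluation.question",         # task type described above
    "question_id": "a0b25d45-7c6c-4480-9871-61ae0fd0b819",  # id from the example above
    "criterion": "correctness",                              # criterion named above
    "question": "Is the model response correct?",            # assumed question wording
    "choices": ["yes", "no", "unsure"],                      # assumed answer options
}

# Serialize the payload as it might be submitted to the annotation UI backend.
print(json.dumps(question_task, indent=2))
```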
This produces the same result as the guided decoding example: