Create an autogenerated evaluation dataset
Create an evaluation dataset and autogenerate test cases from a knowledge base. Evaluation datasets contain a set of test cases used to evaluate the performance of your applications.
Follow the instructions in the Quickstart Guide to set up the SGP client.
from scale_gp import SGPClient
client = SGPClient(api_key=api_key)
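In this snippet, api_key (and account_id below) are assumed to already be defined. A common pattern is to read the key from an environment variable; a minimal sketch, using a hypothetical SGP_API_KEY variable name:
import os

from scale_gp import SGPClient

# SGP_API_KEY is a hypothetical variable name; use whatever your environment defines
api_key = os.environ["SGP_API_KEY"]
client = SGPClient(api_key=api_key)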
For autogenerated evaluation datasets, a generation job workflow is created to generate test cases. Test cases are generated from the provided knowledge base and must be approved before the dataset can be published. Once published, an evaluation dataset can be used for application variant runs and report card generation.
autogenerated_evaluation_dataset = client.evaluation_datasets.create(
account_id=account_id,
name="autogenerated_eval_dataset",
schema_type="GENERATION",
type="autogenerated",
knowledge_base_id=knowledge_base_id,
)
dataset = client.evaluation_datasets.retrieve(
    evaluation_dataset_id=autogenerated_evaluation_dataset.id,
)
Start the generation job. The job generates test cases from the chunks present in the specified knowledge base. In this example, we use a knowledge base containing Legend of Zelda playthrough guides.
generation_job = client.evaluation_datasets.generation_jobs.create(
evaluation_dataset_id=autogenerated_evaluation_dataset.id,
num_test_cases=3,
group_by_artifact_id=False,
)
import time

# poll until the generation job leaves the "Pending" state
while True:
    generation_job = client.evaluation_datasets.generation_jobs.retrieve(
        generation_job_id=generation_job.generation_job_id,
        evaluation_dataset_id=autogenerated_evaluation_dataset.id,
    )
    if generation_job.status == "Pending":
        print("generating test cases...")
        time.sleep(5)
    else:
        break
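Once the loop exits, the job is no longer pending, but it may have failed rather than completed (the exact set of terminal status values is an assumption here). Printing the final status makes failures visible:
# surface the terminal status so a failed job is not silently ignored
print(f"generation job finished with status: {generation_job.status}")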
# view autogenerated test cases
test_cases = client.evaluation_datasets.autogenerated_draft_test_cases.list(
evaluation_dataset_id=autogenerated_evaluation_dataset.id
)
Before publishing the dataset, review the autogenerated test cases and approve or decline each one. Publishing is blocked until every test case has been reviewed.
# in this example, every draft test case is approved
for test_case in test_cases.items:
client.evaluation_datasets.autogenerated_draft_test_cases.approve(
evaluation_dataset_id=autogenerated_evaluation_dataset.id,
autogenerated_draft_test_case_id=test_case.id,
)
Publishing the dataset makes it available for use in evaluations.
published_dataset_response = client.evaluation_datasets.publish(
evaluation_dataset_id=autogenerated_evaluation_dataset.id,
)
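As a final check, you can retrieve the dataset again. Whether the response exposes a publication status field depends on the API, so this sketch just prints the returned object:
# confirm the dataset is retrievable after publishing
published_dataset = client.evaluation_datasets.retrieve(
    evaluation_dataset_id=autogenerated_evaluation_dataset.id,
)
print(published_dataset)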