Create a safety evaluation dataset
Create an evaluation dataset and autogenerate test cases based on a list of harms. Harms are negative or undesired topics that the model should not generate and that your application should handle properly. Advanced configs for emotions, moods, methods, and tones can be provided to steer how test cases are generated.
Follow the instructions in the Quickstart Guide to set up the SGP Client.
import time  # used below to poll the generation job

from scale_gp import SGPClient

client = SGPClient(api_key=api_key)  # api_key from your SGP account settings
For safety evaluation datasets, a generation job workflow is created to generate test cases. You must additionally define a harms list, which includes the topics you want to verify your application or model handles properly. Advanced configs for emotions, moods, methods, and tones can also be provided.
safety_evaluation_dataset = client.evaluation_datasets.create(
account_id=account_id,
name="safety_evaluation_dataset_test",
schema_type="GENERATION",
type="safety",
harms_list=["toxicity", "profanity"],
advanced_config={"emotions": ["angry", "sad", "grumpy"]},
)
Start the generation job. This job will generate test cases based on the provided harms list and advanced configs.
generation_job = client.evaluation_datasets.generation_jobs.create(
evaluation_dataset_id=safety_evaluation_dataset.id,
num_test_cases=3,
group_by_artifact_id=False,
harms_list=["toxicity", "profanity"],
advanced_config={
"emotions": ["angry", "sad", "grumpy"],
"moods": ["Indicative", "Imperative", "Interrogative"],
"methods": [
"Asking obliquely with obscure slang or niche cultural references",
"Making a leading statement that tries to trigger completion",
"Framing as part of a fictional story",
],
"tones": ["Formal", "Normal", "Drunk Text"],
},
)
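Poll the generation job until it finishes generating test cases: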
while True:
generation_job = client.evaluation_datasets.generation_jobs.retrieve(
generation_job_id=generation_job.generation_job_id,
evaluation_dataset_id=safety_evaluation_dataset.id
)
if generation_job.status == "Pending":
print("generating test cases...")
time.sleep(5)
else:
break
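The loop above polls indefinitely. As a variant, here is a minimal sketch that adds a timeout so a stalled job cannot block forever (the 300-second budget is an assumption, not an SDK default):

deadline = time.time() + 300  # assumed budget; tune for your dataset size
while generation_job.status == "Pending":
    if time.time() > deadline:
        raise TimeoutError("test case generation did not finish in time")
    print("generating test cases...")
    time.sleep(5)
    generation_job = client.evaluation_datasets.generation_jobs.retrieve(
        generation_job_id=generation_job.generation_job_id,
        evaluation_dataset_id=safety_evaluation_dataset.id,
    )
print(f"generation job status: {generation_job.status}")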
# view autogenerated test cases
test_cases = client.evaluation_datasets.autogenerated_draft_test_cases.list(
evaluation_dataset_id=safety_evaluation_dataset.id
)
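Optionally, print each draft to see what was generated before deciding whether to approve it (the exact fields on a draft object depend on your SDK version, so the default repr is the safest view):

for draft in test_cases.items:
    print(draft)  # shows the draft's id and generated test case data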
Before publishing the dataset, review the auto-generated test cases and approve/decline each test case. Publishing is blocked until all test cases are reviewed.
for test_case in test_cases.items:
client.evaluation_datasets.autogenerated_draft_test_cases.approve(
evaluation_dataset_id=safety_evaluation_dataset.id,
autogenerated_draft_test_case_id=test_case.id,
)
Publishing the dataset makes it available for use in evaluations.
published_dataset_response = client.evaluation_datasets.publish(
evaluation_dataset_id=safety_evaluation_dataset.id,
)
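As a final sanity check, you can print the publish response; the published dataset can now be selected when running evaluations:

print(published_dataset_response)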