POST /v4/evaluation-datasets/{evaluation_dataset_id}/evaluation-dataset-versions/{evaluation_dataset_version_id}/publish
Python
import os
from scale_gp import SGPClient

client = SGPClient(
    api_key=os.environ.get("SGP_API_KEY"),  # This is the default and can be omitted
)
publish_evaluation_dataset_draft_response = client.evaluation_datasets.evaluation_dataset_versions.publish(
    evaluation_dataset_version_id="evaluation_dataset_version_id",
    evaluation_dataset_id="evaluation_dataset_id",
)
print(publish_evaluation_dataset_draft_response.autogenerated_draft_test_cases)

Example response:
{
  "success": true,
  "autogenerated_draft_test_cases": [
    {
      "autogenerated_draft_test_case_id": "<string>",
      "success": true,
      "failed_chunks": [
        {
          "chunk_text": "<string>",
          "artifact_id": "<string>",
          "artifact_name": "<string>",
          "artifact_content_modification_identifier": "<string>"
        }
      ]
    }
  ]
}

Authorizations

x-api-key: string (header, required)

Headers

x-selected-account-id: string | null
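
If requests should be scoped to a specific account, the header can be attached when constructing the client. The default_headers argument below is an assumption based on common generated-SDK conventions; it is not documented on this page, so verify it against the SDK reference.

import os
from scale_gp import SGPClient

client = SGPClient(
    api_key=os.environ.get("SGP_API_KEY"),
    # Assumed client option: attaches x-selected-account-id to every request.
    default_headers={"x-selected-account-id": "account_id"},
)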

Path Parameters

evaluation_dataset_id
string
required
evaluation_dataset_version_id
string
required

Query Parameters

force
boolean
default:false

Force approve an evaluation dataset
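
A minimal sketch of passing the force flag through the SDK. Only the HTTP query parameter is documented here, so treating force as a keyword argument on publish() is an assumption; check the SDK reference for the exact signature.

publish_evaluation_dataset_draft_response = client.evaluation_datasets.evaluation_dataset_versions.publish(
    evaluation_dataset_version_id="evaluation_dataset_version_id",
    evaluation_dataset_id="evaluation_dataset_id",
    force=True,  # assumed keyword mapping to the ?force=true query parameter
)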

Response

Successful Response

success: boolean (required)
Whether or not the evaluation dataset was successfully published.

autogenerated_draft_test_cases: ApproveAutoGeneratedDraftTestCaseResponse · object[] (required)
List of responses for each of the input draft test cases.
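
A short sketch of inspecting the documented response fields. The top-level autogenerated_draft_test_cases attribute is shown in the example above; attribute access on the nested draft test case and failed chunk objects assumes the SDK returns typed models rather than plain dicts.

response = client.evaluation_datasets.evaluation_dataset_versions.publish(
    evaluation_dataset_version_id="evaluation_dataset_version_id",
    evaluation_dataset_id="evaluation_dataset_id",
)

if not response.success:
    for case in response.autogenerated_draft_test_cases:
        if case.success:
            continue
        # Report which source chunks could not be turned into published test cases.
        print(f"Draft test case {case.autogenerated_draft_test_case_id} failed:")
        for chunk in case.failed_chunks:
            print(f"  {chunk.artifact_name} ({chunk.artifact_id}): {chunk.chunk_text}")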