PATCH /v4/evaluations/{evaluation_id}
import SGPClient from 'sgp';

const client = new SGPClient({
  apiKey: process.env['SGP_API_KEY'], // This is the default and can be omitted
});

async function main() {
  // Issues a PATCH to /v4/evaluations/{evaluation_id}; replace
  // 'evaluation_id' with the ID of the evaluation to update.
  const evaluation = await client.evaluations.update('evaluation_id');

  console.log(evaluation.id);
}

main();
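An example 200 response body: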
{
  "name": "<string>",
  "description": "<string>",
  "status": "PENDING",
  "application_spec_id": "<string>",
  "application_variant_id": "<string>",
  "tags": {},
  "evaluation_config": {},
  "evaluation_config_id": "<string>",
  "completed_at": "2023-11-07T05:31:56Z",
  "total_test_case_result_count": 123,
  "completed_test_case_result_count": 123,
  "annotation_config": {
    "annotation_config_type": "generation",
    "components": [
      [
        {
          "optional": true,
          "data_loc": [
            "<string>"
          ],
          "label": "<string>"
        }
      ]
    ],
    "direction": "col",
    "llm_prompt": {
      "variables": [
        {
          "name": "<string>",
          "optional": true,
          "data_loc": [
            "<string>"
          ]
        }
      ],
      "template": "<string>"
    }
  },
  "question_id_to_annotation_config": {},
  "metric_config": {
    "components": [
      {
        "type": "rouge",
        "name": "<string>",
        "mappings": {},
        "params": {}
      }
    ]
  },
  "id": "<string>",
  "created_at": "2023-11-07T05:31:56Z",
  "account_id": "<string>",
  "created_by_user_id": "<string>",
  "archived_at": "2023-11-07T05:31:56Z"
}

Authorizations

x-api-key (string, header, required)

Path Parameters

evaluation_id (string, required)
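
For callers not using the SDK, the authorization header and path parameter above map directly onto a raw request: the key goes in the x-api-key header, and the evaluation ID is interpolated into the path. A minimal sketch with fetch follows; the base URL is an assumption (this page does not state one), so substitute your deployment's host.

// Direct PATCH to /v4/evaluations/{evaluation_id} without the SDK.
// NOTE: https://api.example.com is a placeholder base URL, not taken from
// this page; use the host your SGP deployment actually serves.
async function patchEvaluation(evaluationId: string) {
  const res = await fetch(
    `https://api.example.com/v4/evaluations/${encodeURIComponent(evaluationId)}`,
    {
      method: 'PATCH',
      headers: {
        'x-api-key': process.env['SGP_API_KEY'] ?? '', // required auth header
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ name: 'Renamed evaluation' }), // any Body fields below
    },
  );
  if (!res.ok) throw new Error(`PATCH failed: ${res.status}`);
  return res.json();
}

patchEvaluation('evaluation_id').then((evaluation) => console.log(evaluation.id));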

Body (application/json)

name (string)
description (string)
application_spec_id (string)
application_variant_id (string)
tags (object)
evaluation_config (object)
evaluation_config_id (string)
  The ID of the associated evaluation config.
question_id_to_annotation_config (object)
  Specifies the annotation configuration to use for specific questions.
annotation_config (object)
  Annotation configuration for tasking.
evaluation_type (enum<string>)
  If llm_benchmark is provided, the evaluation will be updated to a hybrid evaluation. This is a no-op on existing hybrid evaluations and is not available for studio evaluations. Available options: llm_benchmark.
restore (boolean, default: false)
  Set to true to restore the entity from the database.
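
A sketch of sending these body parameters through the SDK. The field names come from the schema above, but passing them as a second argument to update is an assumption about the generated client, so check the SDK's typings:

import SGPClient from 'sgp';

const client = new SGPClient({ apiKey: process.env['SGP_API_KEY'] });

async function main() {
  // Hypothetical update: rename the evaluation and restore it if it was
  // archived. The options-object second argument is assumed, not confirmed
  // by this page.
  const evaluation = await client.evaluations.update('evaluation_id', {
    name: 'Nightly regression run',
    description: 'Re-scored with the updated rubric',
    restore: true,
  });

  console.log(evaluation.status);
}

main();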

Response

200 (application/json): Successful Response

name (string, required)
description (string, required)
status (enum<string>, required)
  Available options: PENDING, COMPLETED, FAILED.
application_spec_id (string, required)
total_test_case_result_count (integer, required)
  The total number of test case results for the evaluation.
completed_test_case_result_count (integer, required)
  The number of test case results that have been completed for the evaluation.
id (string, required)
  The unique identifier of the entity.
created_at (string, required)
  The date and time when the entity was created, in ISO format.
account_id (string, required)
  The ID of the account that owns the given entity.
created_by_user_id (string, required)
  The ID of the user who originally created the entity.
application_variant_id (string)
tags (object)
evaluation_config (object)
evaluation_config_id (string)
  The ID of the associated evaluation config.
completed_at (string)
  The date and time when all test case results for the evaluation were completed, in ISO format.
annotation_config (object)
  Annotation configuration for tasking.
question_id_to_annotation_config (object)
  Specifies the annotation configuration to use for specific questions.
metric_config (object)
  Specifies the config for the metrics to be computed.
archived_at (string)
  The date and time when the entity was archived, in ISO format.
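
Transcribed into a type, the 200 response has roughly the following shape. This is a hand-written sketch from the field list above, not the SDK's generated type; the optional properties are the fields not marked required.

// Sketch of the 200 response body; object-valued fields whose inner schema
// this page leaves unspecified are typed as Record<string, unknown>.
interface Evaluation {
  id: string;                                    // unique identifier of the entity
  name: string;
  description: string;
  status: 'PENDING' | 'COMPLETED' | 'FAILED';
  application_spec_id: string;
  total_test_case_result_count: number;
  completed_test_case_result_count: number;
  created_at: string;                            // ISO date-time
  account_id: string;
  created_by_user_id: string;
  application_variant_id?: string;
  tags?: Record<string, unknown>;
  evaluation_config?: Record<string, unknown>;
  evaluation_config_id?: string;
  completed_at?: string;                         // ISO date-time
  annotation_config?: Record<string, unknown>;
  question_id_to_annotation_config?: Record<string, unknown>;
  metric_config?: Record<string, unknown>;
  archived_at?: string;                          // ISO date-time
}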