Overview
The goal of this guide is to help contributors navigate the user interface of the Scale Generative AI platform.

Step-by-Step Guide
Step #1
Upon loading Scale’s Generative AI platform, contributors will see the Contributor Dashboard. If multiple projects are available, they will all appear in this view. Contributors can start labeling a specific project by clicking “Start Labeling.” Under the Contributor Dashboard, contributors will see two cards:
- Evaluations in Progress
- Evaluations Completed

Step #2
Under the Evaluations in Progress card, contributors will be able to view evaluation progress (# completed / # total evaluations). There will also be two buttons:
- Task Dashboard
- Start Labeling

Step #3
If you click the “Task Dashboard” button, you will be able to view some metrics, including:
- Project throughput status (# completed / # total evaluations)
- Number of tasks flagged for review
- Audit statistics: the percentage of tasks unaudited, accepted, or fixed by the auditing team. You can use these figures as a preliminary indication of quality.
- Average time per task

Step #4
If you click the “Start Labeling” button, you will be taken to the UI where you will complete evaluations. To support high-quality evaluations, you will have access to the following information:
- User information
- Generated output vs. Expected output
- Generation context vs. Expected context

Step #5
Once you’ve finished labeling the task, you can also select “Flag for Review” to escalate the task to an admin. You can leave a comment in the free-text box, which will also surface to the admin.