Generate a personal performance dashboard for a participant — degradation alerts, KPIs with score sparkline, evaluator rankings with click-to-filter, trends, heatmap, and a filterable transcription table.
The Participant Analysis page aggregates all transcriptions evaluated for a selected participant and organizes the results into a degradation alert, KPI summaries, an evaluator-based ranking with click-to-filter, trend charts, a weekly quality heatmap, criteria failure breakdowns, and a filterable transcription detail table.
Each analysis generation consumes one unit of your account’s analytics_participant quota. Loading a previously cached result for the same participant ID does not consume quota.
Select a participant from the searchable selector at the top of the page and click Generate Analysis. The selector shows each participant’s name, ID, and evaluated calls count.
Navigating directly to a URL that includes a participant ID (e.g. from a bookmark or shared link) auto-loads the last cached result — no quota consumed.
The Generate Analysis button is disabled when no participant is selected or when your quota is exhausted.
A red alert banner appears directly above the KPI grid when the participant’s recent performance has dropped significantly.
Trigger condition: the average score of the last 20 evaluated records is more than 10 points lower than the average of the previous 20 records.
A Degradation Alert indicates a statistically significant performance decline. Review recent transcriptions to identify contributing factors before it becomes a deeper trend.
This banner is purely informational and has no interactive actions. It is hidden when no significant decline is detected. If an evaluator filter is active (see Evaluators Ranking), the trend comparison also applies only to records from that evaluator.
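The trigger condition above can be sketched in TypeScript. This is a minimal illustration, not the actual implementation: the `EvalRecord` shape, the `isDegraded` name, and the behavior with fewer than 40 records (no alert) are all assumptions.

```typescript
// Hypothetical record shape — field names are illustrative, not the real API.
interface EvalRecord {
  score: number;      // 0–100 quality score
  evaluatedAt: Date;  // evaluation timestamp
}

const WINDOW = 20;    // records per comparison window
const THRESHOLD = 10; // points of decline that trigger the alert

// True when the average of the most recent WINDOW scores is more than
// THRESHOLD points below the average of the WINDOW records before them.
function isDegraded(records: EvalRecord[]): boolean {
  if (records.length < WINDOW * 2) return false; // assumed: not enough history, no alert
  const sorted = [...records].sort(
    (a, b) => a.evaluatedAt.getTime() - b.evaluatedAt.getTime()
  );
  const avg = (xs: EvalRecord[]) =>
    xs.reduce((s, r) => s + r.score, 0) / xs.length;
  const recent = avg(sorted.slice(-WINDOW));
  const previous = avg(sorted.slice(-WINDOW * 2, -WINDOW));
  return previous - recent > THRESHOLD;
}
```

When an evaluator filter is active, the same comparison would simply run on the filtered subset of records.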
The Average Score card includes two extra elements not found in the Evaluator Analysis:

- Sparkline — an inline chart of the last 20 scores plotted in chronological order (left to right). The line color matches the current score color. Shown when at least 2 data points exist; a progress bar is shown otherwise.
- Trend badge — compares the average of the last 20 records against the previous 20.
Clicking any row in the Evaluators Ranking activates a client-side filter for that evaluator:
The clicked row is highlighted; all other rows are dimmed.
An active filter tag (evaluator name + X button) appears in the table header.
Clicking the X button, or clicking the same row again, clears the filter.
While a filter is active, every section below — KPIs, charts, heatmap, top failed criteria, and the transcription table — is instantly recalculated using only records from that evaluator. No new API call is made.
Click-to-filter is a purely client-side operation. Exploring a specific evaluator’s records does not consume any quota — all recalculations happen locally from the already-loaded data.
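The local recalculation can be sketched as a filter plus a reduce over the already-loaded records. The `Transcription` shape and the `computeKpis` name are assumptions for illustration; the point is that no network request is involved.

```typescript
// Assumed record shape — fields are illustrative.
interface Transcription {
  evaluatorId: string;
  score: number;
  passed: boolean;
}

interface Kpis {
  calls: number;
  averageScore: number;
  passRate: number; // percentage, 0–100
}

// Recompute KPIs from records already in memory; passing no evaluatorId
// corresponds to the unfiltered view.
function computeKpis(records: Transcription[], evaluatorId?: string): Kpis {
  const subset = evaluatorId
    ? records.filter((r) => r.evaluatorId === evaluatorId)
    : records;
  if (subset.length === 0) return { calls: 0, averageScore: 0, passRate: 0 };
  const averageScore =
    subset.reduce((s, r) => s + r.score, 0) / subset.length;
  const passRate =
    (subset.filter((r) => r.passed).length / subset.length) * 100;
  return { calls: subset.length, averageScore, passRate };
}
```

Clearing the filter is just a re-render with the full array, which is why toggling it is instant.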
Two side-by-side period cards — This Month and Previous Month — each showing Calls · Average Score · Pass Rate. A delta summary below the cards shows the score change and pass rate change between periods.
| State | Display |
| --- | --- |
| One period has no data | Amber warning on that card |
| Both periods have no data | “Data in both months is needed to show the comparison” |
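A sketch of the period comparison, assuming a simple record shape; the function names (`statsFor`, `deltas`) are hypothetical. Returning `null` for an empty month models the "no data" states above.

```typescript
// Assumed record shape — fields are illustrative.
interface CallRecord { score: number; passed: boolean; evaluatedAt: Date; }
interface PeriodStats { calls: number; avgScore: number; passRate: number; }

// Aggregate one calendar month; null triggers the amber "no data" state.
function statsFor(records: CallRecord[], year: number, month: number): PeriodStats | null {
  const subset = records.filter(
    (r) => r.evaluatedAt.getFullYear() === year && r.evaluatedAt.getMonth() === month
  );
  if (subset.length === 0) return null;
  const avgScore = subset.reduce((s, r) => s + r.score, 0) / subset.length;
  const passRate = (subset.filter((r) => r.passed).length / subset.length) * 100;
  return { calls: subset.length, avgScore, passRate };
}

// Delta summary: positive values mean this month improved on the previous one.
// The comparison needs data in both months, matching the documented message.
function deltas(current: PeriodStats | null, previous: PeriodStats | null) {
  if (!current || !previous) return null;
  return {
    scoreDelta: current.avgScore - previous.avgScore,
    passRateDelta: current.passRate - previous.passRate,
  };
}
```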
Three tiles grouping calls by duration using p25/p75 thresholds:
| Tier | Threshold |
| --- | --- |
| Short | Below p25 |
| Medium | Between p25 and p75 |
| Long | Above p75 |
Each tile shows call count and average score for that tier.
Duration thresholds are dynamically calculated from p25 and p75 percentiles of the dataset — they adapt to the actual distribution of this participant’s calls.
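The tiering logic can be sketched as follows. The nearest-rank percentile method, the boundary handling (values exactly at p25/p75 count as Medium), and the function names are assumptions, not the documented implementation.

```typescript
type Tier = "short" | "medium" | "long";

// Nearest-rank percentile over an ascending-sorted array of call durations.
// Hypothetical helper — the page may use a different percentile method.
function percentile(sorted: number[], p: number): number {
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Classify one call against the dataset's own p25/p75 thresholds.
function tierOf(duration: number, p25: number, p75: number): Tier {
  if (duration < p25) return "short";
  if (duration > p75) return "long";
  return "medium";
}
```

Because the thresholds come from the loaded dataset itself, they shift automatically when an evaluator filter narrows the records.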
A day-of-week × hour-of-day grid showing the average quality score per time slot. This heatmap is built client-side from the raw data — it updates automatically when an evaluator filter is active, without consuming quota.
| Color | Score range |
| --- | --- |
| Grey | No data |
| Emerald | ≥ 80 |
| Green | 60–79 |
| Amber | 40–59 |
| Red | < 40 |
Two auto-generated insights are shown above the grid:
| Insight | Content |
| --- | --- |
| Best Performing Hour | The day and hour with the highest average score |
| Worst Performing Hour | The day and hour with the lowest average score |
Hover over any cell to see the exact average score for that time slot.
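A client-side heatmap like this reduces to a 7 × 24 accumulation pass plus the color bands documented above. The record shape, function names, and the 0 = Sunday day indexing (JavaScript's `Date.getDay()` convention) are assumptions for illustration.

```typescript
// Build a 7×24 grid of average scores; null marks slots with no data.
function buildHeatmap(
  records: { score: number; evaluatedAt: Date }[]
): (number | null)[][] {
  const grid = Array.from({ length: 7 }, () =>
    Array.from({ length: 24 }, () => ({ sum: 0, count: 0 }))
  );
  for (const r of records) {
    const cell = grid[r.evaluatedAt.getDay()][r.evaluatedAt.getHours()];
    cell.sum += r.score;
    cell.count += 1;
  }
  return grid.map((row) => row.map((c) => (c.count ? c.sum / c.count : null)));
}

// Map an average score to the documented color bands.
function cellColor(avg: number | null): string {
  if (avg === null) return "grey";
  if (avg >= 80) return "emerald";
  if (avg >= 60) return "green";
  if (avg >= 40) return "amber";
  return "red";
}

// Best Performing Hour insight: the (day, hour) slot with the highest average.
function bestSlot(
  grid: (number | null)[][]
): { day: number; hour: number; avg: number } | null {
  let best: { day: number; hour: number; avg: number } | null = null;
  grid.forEach((row, day) =>
    row.forEach((avg, hour) => {
      if (avg !== null && (best === null || avg > best.avg)) {
        best = { day, hour, avg };
      }
    })
  );
  return best;
}
```

The Worst Performing Hour insight is the same scan with the comparison inverted.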
A ranked list of the 5 most frequently failing criteria across the participant’s evaluated transcriptions (or the filtered subset if an evaluator filter is active).
| Column | Detail |
| --- | --- |
| Rank | 1–5 with a gradient badge |
| Criterion | Name — truncated, with a tooltip showing the full text |
| Evaluator | Color-coded tag |
| Fail count | Absolute number of failures |
| Fail rate | Percentage with a progress bar |
These are the highest-priority areas for coaching and improvement. When an evaluator filter is active, the list reflects only criteria from that evaluator — useful for targeted, context-specific feedback.
This section is only rendered when criteria failure data is present.
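The ranking reduces to a group-by over per-criterion results. This sketch assumes a result shape and ranks by absolute fail count with fail rate as failures over total evaluations of that criterion; both the shape and the tie-breaking are assumptions.

```typescript
// Assumed per-criterion result shape — fields are illustrative.
interface CriterionResult { criterion: string; evaluator: string; passed: boolean; }

interface FailureRow {
  criterion: string;
  evaluator: string;
  failCount: number; // absolute number of failures
  failRate: number;  // percentage of this criterion's evaluations that failed
}

// Top N criteria by fail count; criteria with zero failures are omitted,
// which is why the section can be absent when no failure data exists.
function topFailedCriteria(results: CriterionResult[], limit = 5): FailureRow[] {
  const byCriterion = new Map<string, { evaluator: string; fails: number; total: number }>();
  for (const r of results) {
    const entry =
      byCriterion.get(r.criterion) ?? { evaluator: r.evaluator, fails: 0, total: 0 };
    entry.total += 1;
    if (!r.passed) entry.fails += 1;
    byCriterion.set(r.criterion, entry);
  }
  return [...byCriterion.entries()]
    .map(([criterion, e]) => ({
      criterion,
      evaluator: e.evaluator,
      failCount: e.fails,
      failRate: (e.fails / e.total) * 100,
    }))
    .filter((row) => row.failCount > 0)
    .sort((a, b) => b.failCount - a.failCount)
    .slice(0, limit);
}
```

When an evaluator filter is active, the same function would simply receive the filtered subset of results.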