The Participant Analysis page aggregates all transcriptions evaluated for a selected participant and organizes the results into a degradation alert, KPI summaries, an evaluator-based ranking with click-to-filter, trend charts, a weekly quality heatmap, criteria failure breakdowns, and a filterable transcription detail table.
Each analysis generation consumes one unit of your account’s analytics_participant quota. Loading a previously cached result for the same participant ID does not consume quota.

Generating an analysis

Select a participant from the searchable selector at the top of the page and click Generate Analysis. The selector shows each participant’s name, ID, and evaluated calls count.
Navigating directly to a URL that includes a participant ID (e.g. from a bookmark or shared link) auto-loads the last cached result — no quota consumed.
The Generate Analysis button is disabled when no participant is selected or when your quota is exhausted.

Dashboard header

Once data is loaded, the dashboard header shows:
| Control | Behavior |
| --- | --- |
| Participant name | Displayed below the title once a participant is active |
| Last updated | Relative time since the last fetch |
| Download PDF | Generates and downloads a structured PDF report. Only visible when data is loaded. |
| Refresh | Re-fetches the analysis for the current participant |

Degradation Banner

A red alert banner appears directly above the KPI grid when the participant’s recent performance has dropped significantly. Trigger condition: The average score of the last 20 evaluated records is more than 10 points lower than the average of the previous 20 records.
A Degradation Alert indicates a statistically significant performance decline. Review recent transcriptions to identify contributing factors before it becomes a deeper trend.
This banner is purely informational and has no interactive actions. It is hidden when no significant decline is detected. If an evaluator filter is active (see Evaluators Ranking), the trend comparison also applies only to records from that evaluator.
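The trigger condition can be sketched as follows. This is an illustrative helper, not the app's code; the behavior when fewer than 40 records exist (no alert) is an assumption.

```python
def degradation_alert(scores: list[float], window: int = 20, threshold: float = 10) -> bool:
    """True when the average of the last `window` scores is more than
    `threshold` points below the average of the preceding `window`.
    `scores` must be in chronological order (oldest first)."""
    if len(scores) < 2 * window:
        return False  # not enough history for a comparison (assumed behavior)
    recent = scores[-window:]
    previous = scores[-2 * window:-window]
    return (sum(previous) - sum(recent)) / window > threshold
```

For example, twenty 80-point calls followed by twenty 65-point calls is a 15-point drop, which exceeds the 10-point threshold and raises the alert.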

KPI Grid

Four metric cards summarize this participant’s overall performance across all their evaluations.
| KPI | Description |
| --- | --- |
| Total Calls | Count of all transcriptions evaluated for this participant |
| Average Score | Score out of 100. Color: Emerald ≥ 91 · Green 71–90 · Amber 51–70 · Orange 31–50 · Red < 31. Includes a sparkline and a trend badge. |
| Pass Rate | Percentage of calls that passed. Badge: Excellent ≥ 80% · Good 60–79% · Fair 40–59% · Low < 40% |
| Criticals | Calls where a Strict criterion failed. Red badge if > 0; "No critical incidents" if 0 |
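The score-color and pass-rate thresholds above can be sketched as simple lookups (the helper names are illustrative, not the app's API):

```python
def score_color(score: float) -> str:
    """Map an average score (0-100) to its badge color per the KPI grid."""
    if score >= 91:
        return "emerald"
    if score >= 71:
        return "green"
    if score >= 51:
        return "amber"
    if score >= 31:
        return "orange"
    return "red"

def pass_rate_badge(rate: float) -> str:
    """Map a pass rate (as a percentage, 0-100) to its badge label."""
    if rate >= 80:
        return "Excellent"
    if rate >= 60:
        return "Good"
    if rate >= 40:
        return "Fair"
    return "Low"
```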

Average Score — sparkline and trend badge

The Average Score card includes two extra elements not found in the Evaluator Analysis:
  • Sparkline — an inline chart of the last 20 scores plotted in chronological order (left to right). The line color matches the current score color. Shown when at least 2 data points exist; a progress bar is shown otherwise.
  • Trend badge — compares the last 20 records against the previous 20:
| Direction | Color | Meaning |
| --- | --- | --- |
| Improving | Green | Recent average is higher |
| Declining | Red | Recent average is lower |
| Stable | Grey | No significant change |
| Insufficient data | Grey | Not enough records for comparison |
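The badge logic can be sketched as below. The tolerance band that separates "Stable" from "Improving"/"Declining" is an assumption; the doc does not state what counts as a significant change.

```python
def trend_badge(scores: list[float], window: int = 20, tolerance: float = 2) -> str:
    """Compare the last `window` scores against the previous `window`.
    `tolerance` (the 'Stable' band) is an assumed value."""
    if len(scores) < 2 * window:
        return "Insufficient data"
    recent_avg = sum(scores[-window:]) / window
    prev_avg = sum(scores[-2 * window:-window]) / window
    delta = recent_avg - prev_avg
    if delta > tolerance:
        return "Improving"
    if delta < -tolerance:
        return "Declining"
    return "Stable"
```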

Critical alert banner

If pending critical calls exist, a clickable amber/orange banner appears above the KPI grid:
“N pending critical calls — Pending review · Click for details”
Clicking it jumps directly to the Transcription Details table with the Pending Critical filter pre-applied.

Evaluators Ranking

A table of the top 8 evaluators this participant has been evaluated under, sorted by average score. Only rendered when evaluator data is present.
| Column | Detail |
| --- | --- |
| # | Rank number. Top 3 show Trophy / Medal / Award icons. Rows where the critical fail rate exceeds 30% have a red-tinted background. |
| Evaluator | Tag + ID (monospace) |
| Calls | Total evaluated calls under this evaluator |
| Score | Average score — color-coded badge |
| Criticals | Critical fail count. Red badge if > 0; "—" if none. |
| % Criticals | Critical fail rate — red if > 30%, amber if > 15%, grey otherwise |
| Consistency | Standard deviation badge |
| Duration | Average call duration (MM:SS) |
| Link | Opens the evaluator's analysis dashboard in a new tab |
Consistency thresholds:
| Badge | Threshold |
| --- | --- |
| Very consistent | σ ≤ 5 |
| Consistent | 5 < σ ≤ 10 |
| Variable | 10 < σ ≤ 15 |
| Very variable | σ > 15 |
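The consistency badge can be sketched as follows. Whether the app uses population or sample standard deviation is an assumption; population σ is used here.

```python
import statistics

def consistency_badge(scores: list[float]) -> str:
    """Label score consistency by standard deviation (population σ assumed)."""
    sigma = statistics.pstdev(scores)
    if sigma <= 5:
        return "Very consistent"
    if sigma <= 10:
        return "Consistent"
    if sigma <= 15:
        return "Variable"
    return "Very variable"
```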

Click-to-filter

Clicking any row in the Evaluators Ranking activates a client-side filter for that evaluator:
  • The clicked row is highlighted; all other rows are dimmed.
  • An active filter tag (evaluator name + X button) appears in the table header.
  • Clicking the X button, or clicking the same row again, clears the filter.
  • While a filter is active, every section below — KPIs, charts, heatmap, top failed criteria, and the transcription table — is instantly recalculated using only records from that evaluator. No new API call is made.
Click-to-filter is a purely client-side operation. Exploring a specific evaluator does not consume any quota — all recalculations happen locally from the already-loaded data.

Charts

When an evaluator filter is active, all chart data is sourced from the filtered dataset.

Monthly Comparison

Two side-by-side period cards — This Month and Previous Month — each showing Calls · Average Score · Pass Rate. A delta summary below the cards shows the score change and pass rate change between periods.
| State | Display |
| --- | --- |
| One period has no data | Amber warning on that card |
| Both periods have no data | "Data in both months is needed to show the comparison" |

Duration Analysis

Three tiles grouping calls by duration using p25/p75 thresholds:
| Tier | Threshold |
| --- | --- |
| Short | Below p25 |
| Medium | Between p25 and p75 |
| Long | Above p75 |
Each tile shows call count and average score for that tier.
Duration thresholds are dynamically calculated from p25 and p75 percentiles of the dataset — they adapt to the actual distribution of this participant’s calls.
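The tiering can be sketched as below; the exact percentile method the app uses is an assumption (Python's default "exclusive" quantiles are used here).

```python
import statistics

def duration_tier(duration: float, all_durations: list[float]) -> str:
    """Classify one call against the dataset's own p25/p75 thresholds."""
    p25, _, p75 = statistics.quantiles(all_durations, n=4)
    if duration < p25:
        return "Short"
    if duration > p75:
        return "Long"
    return "Medium"
```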

Timeline Evolution

A dual-axis chart overlaying:
  • Left Y-axis — Average Score (line)
  • Right Y-axis — Call Volume (bars)
Hover over any point to see both metrics simultaneously.

Score Distribution

A bar chart grouping transcriptions into 20-point score ranges: 0–20 · 20–40 · 40–60 · 60–80 · 80–100.
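The bucketing can be sketched as below. Which bucket owns an exact boundary score (e.g. 40) is an assumption; here the lower edge is inclusive and 100 is folded into the top bucket.

```python
def score_bucket(score: float) -> str:
    """Assign a score (0-100) to one of five 20-point ranges."""
    if score >= 100:
        return "80-100"  # top edge folded into the last bucket (assumption)
    lower = int(score // 20) * 20
    return f"{lower}-{lower + 20}"
```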

Weekly Heatmap and Top Failed Criteria

Weekly Heatmap

A day-of-week × hour-of-day grid showing the average quality score per time slot. This heatmap is built client-side from the raw data — it updates automatically when an evaluator filter is active, without consuming quota.
| Color | Score range |
| --- | --- |
| Grey | No data |
| Emerald | ≥ 80 |
| Green | 60–79 |
| Amber | 40–59 |
| Red | < 40 |
Two auto-generated insights are shown above the grid:
| Insight | Content |
| --- | --- |
| Best Performing Hour | The day and hour with the highest average score |
| Worst Performing Hour | The day and hour with the lowest average score |
Hover over any cell to see the exact average score for that time slot.
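The grid and its two insights amount to grouping records by (weekday, hour) and averaging. A sketch, assuming each record carries a "when" datetime and a "score" (the record shape is illustrative):

```python
from collections import defaultdict
from datetime import datetime

def heatmap_cells(records: list[dict]) -> dict[tuple[int, int], float]:
    """Average score per (weekday, hour) time slot, built client-side."""
    slots: dict[tuple[int, int], list[float]] = defaultdict(list)
    for r in records:
        slots[(r["when"].weekday(), r["when"].hour)].append(r["score"])
    return {slot: sum(v) / len(v) for slot, v in slots.items()}

def best_and_worst(cells: dict[tuple[int, int], float]):
    """The two auto-generated insights: highest and lowest scoring slots."""
    return max(cells, key=cells.get), min(cells, key=cells.get)
```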

Top Failed Criteria

A ranked list of the 5 most frequently failing criteria across the participant’s evaluated transcriptions (or the filtered subset if an evaluator filter is active).
| Column | Detail |
| --- | --- |
| Rank | 1–5 with a gradient badge |
| Criterion | Name — truncated with a tooltip showing the full text |
| Evaluator | Color-coded tag |
| Fail count | Absolute number of failures |
| Fail rate | Percentage with a progress bar |
These are the highest-priority areas for coaching and improvement. When an evaluator filter is active, the list reflects only criteria from that evaluator — useful for targeted, context-specific feedback.
This section is only rendered when criteria failure data is present.

Transcription Details

A full paginated table (10 per page) of every transcription included in the analysis.

Group tabs

Filter by workflow group with a single click:
| Tab | Color |
| --- | --- |
| All | Default |
| No Group | Grey |
| Pending Review | Amber |
| Under Review | Blue |
| Archived | Purple |

Filters

Click Filters to open the filter panel:
| Filter | Type |
| --- | --- |
| Search | Text — matches by transcription ID or evaluator tag |
| From | Date — lower bound for transcription date |
| To | Date — upper bound for transcription date |
| Min Score | Number input |
| Max Score | Number input |
| Critical only | Checkbox — shows only calls where a Strict criterion failed |

Table columns

| Column | Detail |
| --- | --- |
| Date | Full date and time |
| Score | Color-coded badge (same thresholds as KPI grid) |
| Duration | MM:SS formatted |
| Evaluator | Evaluator tag — clickable link opens the evaluator analysis in a new tab |
| Critical | Red alert icon if a Strict criterion failed |
| Failed | Failure count. Tooltip lists the names of all failed criteria. |
| Group | Group badge + pencil icon to update the group inline |
Click any row to open the full transcription detail in a new tab.
Group changes made from this table are applied immediately and reflected without requiring a full page re-fetch.

PDF Export

The Download PDF button in the header is available once analysis data is loaded.
Report filename: Participant_Report_[name]_[date].pdf
Report contents:
  • Participant name and evaluation date
  • KPIs: Total Calls · Average Score · Pass Rate · Critical Fails
  • Critical fails warning (if applicable)
  • Top failed criteria table
  • Evaluators ranking table
  • Transcription detail table (ID · Date · Score · Duration · Evaluator · Critical · Group)
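The filename pattern can be sketched as below; the date format and how spaces in the name are handled are assumptions.

```python
from datetime import date

def report_filename(participant_name: str, on: date) -> str:
    """Build Participant_Report_[name]_[date].pdf (format details assumed)."""
    safe_name = participant_name.replace(" ", "_")
    return f"Participant_Report_{safe_name}_{on.isoformat()}.pdf"
```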
