How Metrics Are Calculated

This page explains how each metric in the Chatarmin CX Dashboard is computed. Use it as a reference when reviewing your numbers or when a specific data point doesn't match what you expect.


When Are Metrics Computed?

Dashboard metrics are computed in the background when a ticket is resolved or closed:

  • When a ticket transitions to "Resolved" or "Closed," all metrics for that ticket are calculated automatically (FRT, ART, AI metrics, reopen tracking, message stats, and topic analysis).

  • There may be a brief delay (typically a few seconds) between the resolution and the metrics appearing in the dashboard.

  • If a ticket is reopened before the computation finishes, the computation is cancelled.

Metrics are stored per ticket. The dashboard then aggregates these values when you load a page, apply filters, or change the date range.


First Response Time (FRT)

What it measures: The working time, in business hours, between the customer's last message and the first reply from a human agent.

How it's calculated:

  1. The system finds the first human agent message on the ticket -- the earliest reply from a real agent (not AI, not a workflow, not an auto-reply).

  2. It then finds the last customer message sent before that first reply.

  3. FRT = the working time (in business hours) between that customer message and the agent's reply.

What does NOT count as a first response:

  • Auto-reply messages

  • AI-generated messages

  • Messages sent by workflows or flows

  • Internal notes

Result: If there is no qualifying first human response or no preceding customer message, FRT is not recorded for that ticket.
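The FRT rule above can be sketched in a few lines of Python. Field names such as "sender", "sent_at", "auto_reply", and "internal_note", and the business_hours_between helper, are illustrative assumptions, not the actual data model:

```python
from datetime import datetime

def first_response_time(messages, business_hours_between):
    """Sketch of the FRT rule: find the first qualifying human agent
    reply, then the last customer message sent before it.

    `messages` is a chronologically sorted list of dicts with
    hypothetical keys: "sender" ("customer", "agent", "ai", "flow"),
    "auto_reply" (bool), "internal_note" (bool), "sent_at" (datetime).
    """
    # First human response: a real agent, not an auto-reply or a note.
    first_human = next(
        (m for m in messages
         if m["sender"] == "agent"
         and not m.get("auto_reply")
         and not m.get("internal_note")),
        None,
    )
    if first_human is None:
        return None  # no qualifying first response -> FRT not recorded

    # Last customer message before that reply.
    preceding_customer = next(
        (m for m in reversed(messages)
         if m["sender"] == "customer"
         and m["sent_at"] < first_human["sent_at"]),
        None,
    )
    if preceding_customer is None:
        return None  # no preceding customer message -> FRT not recorded

    return business_hours_between(preceding_customer["sent_at"],
                                  first_human["sent_at"])
```

Note how AI and flow messages simply never match the "agent" check, so they can never count as the first response.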


Average Resolution Time (ART)

What it measures: The total time from when the ticket was created to when it was resolved, minus the time spent waiting for the customer to reply, counted in business hours.

How it's calculated:

  1. Total time = business hours between when the ticket was created and when it was resolved or closed.

  2. Customer wait time = the sum of all periods where the team was waiting for the customer to reply. Specifically, the time between each agent message and the next customer reply.

  3. ART = Total time - Customer wait time

In practice, this means:

  • After an agent sends a message, the clock pauses

  • When the customer replies, the clock resumes

  • ART therefore reflects the time your team actively spent working on the ticket, not the total calendar time from open to close

Result: If the ticket hasn't been resolved, ART is not recorded. ART cannot go below zero.


Business Hours

Both FRT and ART respect your organization's configured business hours:

  • Weekly schedule: Configured per organization (e.g., Monday-Friday, 9:00-17:00). Time outside these hours is excluded from FRT and ART.

  • Holidays and special days: Your organization can configure special business hours for specific dates. On those days, either the special hours apply, or the day is treated as fully closed (no time counted).

  • No configuration: If your organization has not configured any business hours, all metrics use calendar time (24/7) -- every hour of every day counts.

  • Scope: Business hours are set at the organization level and apply uniformly to all channels and teams.

Example: If a customer sends a message at 18:00 on Friday and an agent replies at 09:30 on Monday (with business hours Mon-Fri 9:00-17:00), the FRT is 30 minutes -- not the entire weekend.
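A minimal sketch of a business-hours counter for a fixed Mon-Fri schedule (no holidays or special days; the real implementation also handles those). It steps in one-minute increments for clarity, not efficiency:

```python
from datetime import datetime, timedelta

def business_hours_between(start, end, open_hour=9, close_hour=17):
    """Working time in hours between two datetimes, counting only
    Mon-Fri between open_hour and close_hour."""
    minutes = 0
    step = timedelta(minutes=1)
    t = start
    while t < end:
        # weekday() is 0-4 for Mon-Fri
        if t.weekday() < 5 and open_hour <= t.hour < close_hour:
            minutes += 1
        t += step
    return minutes / 60
```

Running it on the example above (Friday 18:00 to Monday 09:30) yields 0.5 hours, i.e. 30 minutes.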


CSAT (Customer Satisfaction)

What it measures: Customer satisfaction based on survey responses, rated on a 1-5 scale.

Metrics computed:

  • Average Score: The mean of all survey scores where the customer responded, rounded to one decimal place.

  • Sent: Total number of surveys sent to customers.

  • Responded: Number of surveys where the customer submitted a rating.

  • Pending: Surveys sent but not yet answered (Sent minus Responded).

  • Response Rate: (Responded / Sent) x 100

  • Rating Distribution: Count and percentage of responses for each rating (1 through 5).
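The CSAT roll-up above can be sketched like this, representing each sent survey as its 1-5 rating or None when unanswered (an illustrative input shape, not the actual data model):

```python
def csat_summary(scores_sent):
    """Sketch of the CSAT roll-up. `scores_sent` has one entry per
    survey sent: the 1-5 rating, or None if not yet answered."""
    responded = [s for s in scores_sent if s is not None]
    sent = len(scores_sent)
    return {
        "sent": sent,
        "responded": len(responded),
        "pending": sent - len(responded),
        # mean of answered surveys, rounded to one decimal place
        "average": round(sum(responded) / len(responded), 1) if responded else None,
        "response_rate": len(responded) / sent * 100 if sent else 0.0,
        "distribution": {r: responded.count(r) for r in range(1, 6)},
    }
```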

How the date range affects CSAT depends on the attribution mode:

  • Cohort mode: Shows survey results for tickets created during the selected period.

  • Throughput mode: Shows survey results for surveys sent during the selected period, regardless of when the ticket was created.

Rating labels:

  • 5 = Excellent

  • 4 = Good

  • 3 = Neutral

  • 2 = Poor

  • 1 = Very Poor


Resolved Tickets and Agent Attribution

What it measures: Which agent resolved the ticket and how resolution is credited.

How agent attribution works:

  • When an agent clicks "Resolve" or "Send + Resolve," that agent is recorded as the resolver.

  • If a ticket is resolved by the system (e.g., through a workflow, auto-resolve rule, or flow), no agent is credited -- the ticket counts as resolved but is not attributed to any individual agent.

Own vs. Helped:

  • Resolved Own: The agent who resolved the ticket is the same agent the ticket was assigned to. This means the agent resolved a ticket from their own queue.

  • Resolved Helped: The agent who resolved the ticket is different from the agent it was assigned to. This means the agent helped resolve a ticket from a colleague's queue.

  • Own Rate: (Resolved Own / Total Assigned) x 100

Important: Only the most recent resolution is tracked. If a ticket is reopened and then resolved again by a different agent, only the latest resolver is recorded.
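The own/helped split can be sketched as below. The "assignee" and "resolver" keys are hypothetical field names; a resolver of None stands for a system resolution:

```python
def attribution(tickets):
    """Sketch of own/helped attribution per agent. Each ticket is a
    dict with hypothetical keys "assignee" and "resolver" (agent ids;
    resolver is None when the system resolved the ticket)."""
    stats = {}
    for t in tickets:
        resolver = t.get("resolver")
        if resolver is None:
            continue  # system-resolved: counted as resolved, not attributed
        s = stats.setdefault(resolver, {"own": 0, "helped": 0})
        if resolver == t.get("assignee"):
            s["own"] += 1      # resolved a ticket from their own queue
        else:
            s["helped"] += 1   # resolved a colleague's ticket
    return stats
```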


AI Metrics

These metrics classify how AI and automation contributed to ticket resolution.

AI Handled

  • The ticket was resolved, AND

  • At least one AI-generated message was sent on the ticket, AND

  • No human agent sent a message, AND

  • No workflow was involved

  • Meaning: AI fully resolved the ticket without any human or workflow involvement.

Workflow Automated

  • The ticket was resolved, AND

  • At least one workflow completed on the ticket, AND

  • No human agent sent a message

  • Meaning: A workflow fully resolved the ticket without human involvement.

AI Escalated

  • AI was involved in the ticket (sent messages or generated suggestions), BUT

  • The conversation was escalated to a human agent

  • Meaning: AI tried but the ticket needed human intervention.

AI Involved

  • Any ticket where AI generated suggestions or sent messages, excluding tickets that were only handled by workflows.

  • Meaning: AI played some role in the ticket, whether it resolved it or an agent took over.

What counts as a "human message" for these calculations:

  • A normal message sent by a real agent. Messages from AI agents or flows do not count.

What counts as an "AI message":

  • A normal message sent by an AI agent, excluding messages generated by a workflow.
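The four classifications above can be expressed as boolean rules. The flat boolean fields on the ticket dict are illustrative assumptions; note that a ticket can carry more than one label (e.g. AI Handled tickets are also AI Involved):

```python
def classify(ticket):
    """Sketch of the AI classification rules. `ticket` is a dict of
    hypothetical derived booleans matching the conditions above."""
    resolved = ticket["resolved"]
    # "AI message" already excludes workflow-generated messages;
    # suggestions alone also count as AI involvement.
    has_ai = ticket["ai_message_sent"] or ticket["ai_suggestions_generated"]
    has_human = ticket["human_message_sent"]
    has_workflow = ticket["workflow_completed"]

    labels = set()
    if resolved and ticket["ai_message_sent"] and not has_human and not has_workflow:
        labels.add("ai_handled")
    if resolved and has_workflow and not has_human:
        labels.add("workflow_automated")
    if has_ai and ticket["escalated_to_human"]:
        labels.add("ai_escalated")
    if has_ai:
        labels.add("ai_involved")
    return labels
```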


AI Drafts (Suggestion Metrics)

Suggestions Offered: The total number of AI-generated draft suggestions presented to agents for a ticket.

Suggestions Used: The number of those suggestions that the agent accepted and sent.

Suggestion Acceptance Rate: (Suggestions Used / Suggestions Offered) x 100

  • This is a binary measure -- each suggestion is either used or not.

Suggestion Fidelity: The average percentage of the original AI-suggested text that was retained in the final message sent by the agent.

  • A fidelity of 95% means agents kept most of the AI text. A fidelity of 40% means agents heavily edited the suggestions before sending.
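A sketch of the suggestion roll-up. The input shape is illustrative, and fidelity here is approximated with difflib's similarity ratio; the dashboard's actual retention measure may differ:

```python
import difflib

def suggestion_metrics(suggestions):
    """Sketch of the draft-suggestion roll-up. Each entry is a dict
    with hypothetical keys "used" (bool) and, when used, "draft"
    (the AI text) and "final" (what the agent actually sent)."""
    offered = len(suggestions)
    used = [s for s in suggestions if s["used"]]
    acceptance = len(used) / offered * 100 if offered else 0.0
    # Approximate "text retained" as a similarity ratio per used draft.
    fidelities = [
        difflib.SequenceMatcher(None, s["draft"], s["final"]).ratio() * 100
        for s in used
    ]
    fidelity = sum(fidelities) / len(fidelities) if fidelities else None
    return {"offered": offered, "used": len(used),
            "acceptance_rate": acceptance, "fidelity": fidelity}
```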


Message Counts

Messages are counted by sender type. Only normal messages are included -- events, notes, and deleted messages are excluded.

  • Inbound (Customer) Messages: Messages sent by the customer.

  • Outbound Human Messages: Messages sent by human agents.

  • Outbound AI/Flow Messages: Messages sent by AI agents or automated flows.


Reopen and Reassignment Tracking

Reopen Count: The number of times a ticket was reopened after being resolved. A reopen is counted each time the ticket status changes from "Resolved" or "Closed" back to "Open" or "Need More Info."

Reassignment Count: The number of times a ticket was reassigned from one agent to another. The initial assignment is not counted -- only subsequent changes where the assigned agent is different from the previous one.

What happens when a ticket is reopened:

  • All previously computed metrics (FRT, ART, AI metrics, message stats) are cleared.

  • When the ticket is resolved again, all metrics are recalculated from scratch based on the full ticket history.

  • Only the most recent resolution cycle is reflected in the dashboard.

  • The reopen count is incremented to reflect the total number of reopens.
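The reopen rule reduces to counting status transitions. Sketched over a chronological status history (an illustrative input shape):

```python
def reopen_count(status_history):
    """Sketch: count transitions from a resolved state back to an
    open state in a chronological list of ticket statuses."""
    resolved_states = {"Resolved", "Closed"}
    open_states = {"Open", "Need More Info"}
    count = 0
    # Walk consecutive (previous, current) status pairs.
    for prev, curr in zip(status_history, status_history[1:]):
        if prev in resolved_states and curr in open_states:
            count += 1
    return count
```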


Trend Comparisons

Every KPI card and chart shows a trend comparing the current period to the previous period.

How the previous period is determined:

  • The previous period is the same length as the current period, ending exactly when the current period starts.

  • Example: If the current period is January 8-15 (7 days), the previous period is January 1-8 (7 days).

How the trend percentage is calculated:

  • Formula: ((Current value - Previous value) / Previous value) x 100, rounded to the nearest whole number.

  • Displayed as an up arrow (increase) or down arrow (decrease).

  • If the previous period had no data, the trend shows 100% up (or 0% if the current period also has no data).

Inverted metrics:

  • For metrics where lower is better (like FRT and ART), the direction is flipped: a decrease is shown as a positive (green/up) trend, and an increase is shown as a negative (red/down) trend.
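Putting the trend rules together, including the no-data and inverted-metric cases:

```python
def trend(current, previous, lower_is_better=False):
    """Sketch of the trend calculation: percentage change vs. the
    previous period, plus whether it displays as a positive (up/green)
    or negative (down/red) trend."""
    if previous == 0:
        # No data last period: 100% up, or 0% if this period is also empty.
        pct = 0 if current == 0 else 100
    else:
        pct = round((current - previous) / previous * 100)
    # For lower-is-better metrics (FRT, ART) a decrease is positive.
    positive = pct <= 0 if lower_is_better else pct >= 0
    return pct, "up" if positive else "down"
```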


Excluded Tickets

The following tickets are always excluded from all dashboard analytics:

  • Deleted tickets: Tickets that have been deleted.

  • Side conversations: Internal side conversations linked to a parent ticket.

These exclusions apply globally across all dashboard pages and metrics.


Channel Detection

When the dashboard groups or filters tickets by channel:

  • If a ticket was submitted via a Contact Form, it is categorized as "Contact Form" -- regardless of the underlying channel (e.g., even if the contact form sends via email).

  • Otherwise, the ticket's channel is used (Email, WhatsApp, Voice, Instagram, etc.).

  • Tickets without a channel are grouped as "Unknown."

When filtering by "Email," only pure email tickets are included -- tickets submitted through a contact form are excluded from the "Email" filter.
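The channel rules above amount to a short precedence check. The "via_contact_form" and "channel" keys are illustrative field names:

```python
def channel_of(ticket):
    """Sketch of channel grouping: contact-form tickets are their own
    category regardless of the underlying delivery channel."""
    if ticket.get("via_contact_form"):
        return "Contact Form"
    # Fall back to the ticket's channel, or "Unknown" if it has none.
    return ticket.get("channel") or "Unknown"
```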


Quick Reference

  • FRT = Business hours from last customer message to first human agent reply

  • ART = Business hours from ticket creation to resolution, minus customer wait time

  • CSAT = Average of customer survey scores (1-5 scale)

  • AI Handled = Resolved by AI alone, no human, no workflow

  • Workflow Automated = Resolved by workflow alone, no human

  • AI Escalated = AI involved but escalated to human

  • Suggestion Acceptance Rate = Used suggestions / Offered suggestions

  • Suggestion Fidelity = How much of the AI text the agent kept

  • Reopen Count = Number of times resolved then reopened

  • Trend = Percentage change vs. previous period of equal length

  • Business Hours = Only working hours count for FRT and ART