CQA (Conversational Quality Analysis): User Guide
This document provides a comprehensive overview of, and operational instructions for, the Conversation Quality Analysis (CQA) platform.

1. Introduction

1.1 Purpose
This user guide serves as the primary reference for all users of the Conversation Quality Analysis (CQA) platform. It provides detailed, step-by-step instructions for using the platform's features to analyze, score, and improve customer interactions.

1.2 Intended Audience
This document is intended for all personnel involved in the quality assurance lifecycle, including:
- System administrators: responsible for platform setup, user management, and configuration.
- Agents: whose conversations are being analyzed and who participate in the dispute process.
- Supervisors & quality analysts: responsible for reviewing analyses, managing quality profiles, and resolving disputes.

2. Product Overview
CQA is an intelligent, automated quality assurance platform designed to streamline the evaluation of contact center interactions. It functions as a centralized console that ingests call recordings, analyzes them against defined quality profiles, and provides actionable insights into agent performance and operational trends. The platform is designed to be agnostic, meaning it can potentially integrate with any contact center platform.

2.1 How It Works
In CQA, analysis happens automatically as soon as a recording enters the system. The workflow is driven by three core components:
- Input (metadata): when a recording arrives, it carries "metadata" (e.g., campaign name, agent ID, talk time, region).
- Logic (assignment rules): the system checks this metadata against a set of pre-defined rules. For example, a rule might state: "If campaign is 'Sales' and duration is > 2 minutes, use the 'Sales Quality Profile'."
- Execution (quality profile): the system instantly applies the selected quality profile to the recording, generates a score, and logs the result in the Analysis List.

2.2 Key Concepts & Definitions
To operate CQA effectively, users must understand the following core building blocks.
2.2.1 Metadata: The Engine of Automation
Metadata is the structured information associated with every interaction. It is the most critical component of the CQA platform because it drives all automation and visibility:
- Routing: it determines which quality profile is used to score a call.
- Filtering: it allows users to slice and dice data on the dashboard (e.g., "show me scores for the North region only").
- Security: it controls user access. For instance, a supervisor tagged with "Region A" metadata will only see data relevant to that region.

2.2.2 Quality Profile (QP): The Scoring Rubric
A quality profile is a structured evaluation form used to score an agent. It is organized hierarchically to provide granular reporting:
- Category: high-level grouping (e.g., "Soft Skills," "Compliance").
- Sub-category: specific focus area (e.g., "Greeting," "Closing").
- KPI (Key Performance Indicator): the specific question used to evaluate the agent (e.g., "Did the agent mention the brand name?").
- KPI types: the system supports Yes/No (binary), Selection (multiple choice), Rating (1-5 scale), and Text (data extraction) inputs.
- Criticality: KPIs can be marked as "critical." If a critical KPI fails, it can automatically zero out the score for the entire category, sub-category, or the entire quality profile (based on the criticality level), enforcing strict compliance standards.

2.2.3 Assignment Rules: The Automation Layer
Assignment rules replace the need for manual scheduling. An assignment rule is a conditional logic statement that tells the system when to use a specific quality profile:
- Dynamic logic: you can create rules based on any available metadata field. Example: "Apply the 'Urgent Care Profile' only if the queue name is 'Emergency' and talk time is less than 30 seconds."
- Continuous operation: once a rule is active, it runs continuously in the background, analyzing every matching call upon arrival.

2.2.4 Analysis & Interaction Review
An "analysis" refers to a single recording that has been processed by a quality profile.
- The Analysis List: a central log of all processed interactions, searchable and filterable by score, date, agent, or metadata.
- Interaction View: a detailed view where supervisors can listen to the call, read the transcript, view the specific AI reasoning for every score, and handle disputes raised by agents.
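The metadata-driven routing described in 2.2.1 through 2.2.3 can be sketched conceptually. This is an illustrative sketch, not the CQA implementation; the field names, operators, and rule structure here are assumptions chosen to mirror the "Sales campaign, duration > 2 minutes" example above.

```python
# Illustrative sketch (not the CQA implementation): how incoming metadata
# could be matched against assignment rules to pick a quality profile.

def match_rule(metadata, conditions):
    """Return True if every condition holds for the recording's metadata.
    Conditions are (field, operator, value) triples."""
    ops = {
        "is": lambda a, b: a == b,
        ">": lambda a, b: a > b,
        "<": lambda a, b: a < b,
    }
    return all(ops[op](metadata.get(field), value) for field, op, value in conditions)

def select_profile(metadata, rules):
    """Pick the first quality profile whose assignment rule matches;
    calls matching no rule are skipped (see 7.3)."""
    for conditions, profile_name in rules:
        if match_rule(metadata, conditions):
            return profile_name
    return None

# Rule from the text: campaign is "Sales" and duration is > 2 minutes.
rules = [([("campaign", "is", "Sales"), ("duration_sec", ">", 120)],
          "Sales Quality Profile")]
call = {"campaign": "Sales", "duration_sec": 300, "agent_id": "A42"}
print(select_profile(call, rules))  # -> Sales Quality Profile
```

Once a profile is selected, the platform applies it and logs the result in the Analysis List, as described in 2.1.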
2.2.5 Test Evaluation
The Test Evaluation module enables users to run ad hoc analyses on specific batches of recordings, which is ideal for testing new quality profiles or processing offline files that were not analyzed automatically.
- Setup: users create a test by giving it a name, selecting a target quality profile, and uploading an Excel file containing the recording URLs and any relevant metadata, or direct audio/text files.
- Status tracking: the system indicates the batch status, such as Completed or Partial Success (if some audio links were invalid), allowing users to quickly identify and troubleshoot errors.
- Export: users can export the detailed results of any test evaluation directly to their email for offline review.

2.2.6 User Management
The User Management module provides a centralized view of all individuals authorized to access the CQA platform. Administrators use this section to monitor account details and verify role assignments.
- View user directory: navigate to the Users icon in the sidebar to see a comprehensive list of all registered accounts. The table provides key identity details, including Name, Email ID, and Username.
- Monitor roles: quickly identify the permission level of each user. Admin has full access to system configuration (metadata, quality profiles, assignment rules); Supervisor focuses on operational tasks like reviewing the Analysis List and resolving disputes.
- Check status: verify whether an account is currently Active or Inactive directly from the Status column.
Note on access control: while this view lists roles, granular data visibility (e.g., restricting a supervisor to a specific region) is configured via the Metadata settings in conjunction with these user profiles.
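The access-control behavior noted above (and in 2.2.1) can be sketched as a simple filter. This is a minimal sketch under assumed data shapes, not the platform's actual access-control mechanism.

```python
# Illustrative sketch (not the CQA implementation): metadata-based access
# control restricting which analyses a supervisor can see.

def visible_analyses(analyses, user_access):
    """Keep only analyses whose metadata matches every access-control
    restriction attached to the user, e.g. {"region": "Region A"}.
    An empty restriction dict means unrestricted visibility."""
    return [
        a for a in analyses
        if all(a["metadata"].get(field) == value
               for field, value in user_access.items())
    ]

analyses = [
    {"id": 1, "metadata": {"region": "Region A", "campaign": "Sales"}},
    {"id": 2, "metadata": {"region": "Region B", "campaign": "Sales"}},
]
supervisor_a = {"region": "Region A"}
print([a["id"] for a in visible_analyses(analyses, supervisor_a)])  # -> [1]
```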
2.3 User Roles
- Admin: configures metadata, builds quality profiles, and sets assignment rules.
- Supervisor: reviews the Analysis List, monitors dashboard trends, and resolves agent disputes.
- Agent: logs in to view their own performance scores and can raise disputes on specific KPI results if they disagree with the AI's assessment.

3. Getting Started
This section guides you through accessing the CQA console, navigating the interface, and understanding your initial view.

3.1 Accessing the Platform
To access the CQA console, you will need your organization's unique Tenant ID and user credentials.
1. Navigate to the URL: open your browser and go to the CQA console login page (e.g., http://cqa-console.in.exotel.com).
2. Enter Tenant ID: enter your tenant name/ID to identify your organization.
3. Enter credentials: input your username and password.
4. Log in: click the button to access the platform.

3.2 The Home Page: Analytics Dashboard
Upon successful login, you are immediately directed to the Analytics Dashboard. This is the default home page for all users.
Note: by default, the dashboard shows the analysis for a specific quality profile. Users can, however, de-select the quality profile to see the analysis across all quality profiles.
- Central overview: the main area displays widgets showing high-level metrics like average score, interactions analyzed, and trends.
- Navigation sidebar: located on the left, this bar provides access to all core modules: Home (returns you to the dashboard), Metadata (configure system data fields), Quality Profiles (create and manage quality profiles), Assignment Rules (automate analysis logic), Analysis List (view logs of processed calls), and Test Evaluation (run ad hoc tests on batches of recordings).
- Filter panel: on the dashboard, a panel on the left allows you to filter the visualized data by Quality Profile, Agent, Date, or Test Evaluation.

3.3 User Profile & Logout
In the top-right corner, you will find your user profile menu.
Use this menu to view your account details, or to log out and securely exit the session.

4. Administration & Configuration
This section is designed for administrators and QA leads. It covers the foundational setup required to make the CQA platform operational: defining your data structure (metadata), creating scoring rubrics (quality profiles), and automating the workflow (assignment rules).

4.1 Metadata Management
Goal: define the "vocabulary" the system uses to understand your calls. Metadata consists of the tags attached to a recording (e.g., Agent ID, Campaign Name, Talk Time) which drive filtering, reporting, and automation.

4.1.1 Accessing Metadata
Navigate to Metadata in the sidebar. You will see a list of all data keys currently ingested by the system.

4.1.2 Configuring Fields
Admins can customize how each metadata field behaves using the following options:
- Enable/disable: toggle the "Enabled?" checkbox. Effect: enabled fields are visible throughout the platform; disabled fields are hidden entirely from dashboards and reports.
- Set as filter: check the "Filter option?" box. Effect: this field will now appear as a selectable dropdown in the Assignment Rules and Analysis List, allowing users to slice data by this attribute (e.g., "show me calls from North region").
- Access control: check the "Access control?" box. Effect: this restricts data visibility based on user roles. For example, if "Region" is set for access control, a supervisor assigned to "Region A" will not see data for "Region B".
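The three per-field toggles above can be pictured as a small data model. This is an illustrative sketch only; the class and attribute names are assumptions, not the platform's internal schema, and the `program_id` to "Campaign" rename previews the CQA Mapping feature described next.

```python
# Illustrative sketch (not the CQA data model): the per-field settings
# from 4.1.2 and what each toggle gates.
from dataclasses import dataclass

@dataclass
class MetadataField:
    key: str                      # technical name ingested with the recording
    display_name: str             # user-friendly label shown in the UI
    enabled: bool = True          # visible anywhere in the platform?
    filter_option: bool = False   # offered as a dropdown filter?
    access_control: bool = False  # used to restrict data visibility?

def filterable_fields(fields):
    """Fields that appear in filter dropdowns: they must be both enabled
    and marked as a filter option (a disabled field is hidden entirely)."""
    return [f.display_name for f in fields if f.enabled and f.filter_option]

fields = [
    MetadataField("program_id", "Campaign", enabled=True, filter_option=True),
    MetadataField("internal_ref", "Internal Ref", enabled=False, filter_option=True),
    MetadataField("region", "Region", enabled=True, access_control=True),
]
print(filterable_fields(fields))  # -> ['Campaign']
```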
4.1.3 CQA Mapping (Renaming Fields)
Sometimes the technical name of a field (e.g., program_id) is not intuitive for business users.
- Action: click the Edit icon next to a field.
- Display name: enter a user-friendly name (e.g., "Campaign").
- Result: users will see "Campaign" in reports and filters, while the system continues to read program_id in the background.

4.2 Building Quality Profiles (Scorecards)
Goal: create the structured rubric used to evaluate agent performance. A quality profile represents a specific scorecard (e.g., "Sales Audit" or "Customer Service Review"). There are two ways to build a profile: manually (good for small edits or simple profiles) or via Excel upload (recommended for bulk creation).

4.2.1 Method A: Manual Creation
Use this method to build a profile from scratch directly in the UI.
1. Navigate to Quality Profiles, click Create Profile, and then click Create Quality Profile.
2. Profile details: enter a name (e.g., "Inbound Support") and a description.
3. Upload reference document: attach documents (e.g., SOP documents) to be referred to by the product while answering KPIs.
4. Create structure (categories): click Add Category to create a high-level section (e.g., "Opening & Closing"). Inside the category, click Add Sub-category to create a focused group (e.g., "Greeting").
5. Add KPIs (questions): inside a sub-category, click Add KPI.
- KPI title: enter a short name (e.g., "Warm Welcome") that will appear in reports.
- KPI input type: select one of the following:
  - Yes/No: a binary choice (e.g., "Did they verify the address?"). You can assign points for "Yes" (e.g., 5) and "No" (e.g., 0).
  - Selection: a multiple-choice question. Define scenarios (e.g., "Option A: mentioned both tax and amount"; "Option B: mentioned only amount"). The AI selects the best match.
  - Rating: a 1-5 scale for subjective measures.
  - Text: a data-extractor KPI that specifies what data is to be extracted from the conversation (e.g., "What are the competitor products mentioned by the customer?").
- Criticality: set the criticality level to automatically zero out the score for the entire category, sub-category, or quality profile, regardless of other passing KPIs.
- KPI question: enter the full, detailed prompt for the AI (e.g., "Did the agent introduce themselves by name?").
- Allow scoring: enable or disable scoring for the KPI. Note that scoring is disabled automatically for the Text KPI input type.
6. Save: click Save Profile.

4.2.2 Method B: Excel Upload (Bulk Creation)
Use this method to create large, complex profiles offline and upload them instantly.
1. Download the template: navigate to Quality Profiles, click Create Profile, click Create Quality Profile, choose Excel Upload, and click Download Our Sample Excel.
2. Fill the Excel template:
- Category & Sub-category: define your sections here (e.g., "Soft Skills" > "Empathy").
- KPI: a short identifier for reports (e.g., "Politeness").
- KPI question: the detailed instruction for the AI.
- KPI type: specify "Yes/No," "Selection," "Rating," or "Text."
- KPI option label: specify the expected options to be displayed as outcomes.
- KPI option weightage: for Yes/No, you will see one row for "Yes" and one row for "No"; assign points to each (e.g., Yes = 5, No = 0). For Selection, create one row for every possible option/answer. For Rating, mention the range between which the rating has to be applied.
- Criticality: mark specific KPIs as critical by specifying the criticality level as "Category," "Subcategory," or "Profile" in the designated column, if applicable.
3. Upload and validate: return to the Quality Profiles page, select Upload Excel, and choose your filled file. The system will validate the structure and create the profile. You can now open the profile in the UI to make any final tweaks before activating it.
Note: you can also download an existing profile to make edits in Excel, then re-upload it to update the version.
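The one-row-per-option rule for Yes/No KPIs is easy to get wrong when filling the template by hand. The sketch below builds the paired rows programmatically; the column names are taken from the template description above, but the exact headers in the sample Excel may differ, so treat them as assumptions.

```python
# Illustrative sketch (hypothetical column names, based on the template
# fields described in 4.2.2): building the paired Yes/No rows for one KPI.

def yes_no_rows(category, sub_category, kpi, question, yes_points, no_points,
                criticality=""):
    """A Yes/No KPI needs exactly two template rows, one per option."""
    base = {
        "Category": category,
        "Sub Category": sub_category,
        "KPI": kpi,
        "KPI Question": question,
        "KPI Type": "Yes/No",
        "Criticality": criticality,  # "", "Category", "Subcategory", or "Profile"
    }
    return [
        {**base, "KPI Option Label": "Yes", "KPI Option Weightage": yes_points},
        {**base, "KPI Option Label": "No", "KPI Option Weightage": no_points},
    ]

rows = yes_no_rows(
    "Soft Skills", "Empathy", "Politeness",
    "Was the agent polite throughout the call?", yes_points=5, no_points=0,
)
assert len(rows) == 2 and {r["KPI Option Label"] for r in rows} == {"Yes", "No"}
```

A Selection KPI would follow the same pattern with one row per defined option.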
4.3 Assignment Rules (Automation)
Goal: automate the quality assurance process by linking specific calls to specific quality profiles.

4.3.1 How It Works
Instead of manually selecting calls to audit, you create rules. As recordings stream into the platform, the system checks their metadata; if a call matches a rule, it is immediately analyzed using the assigned profile.

4.3.2 Creating an Assignment Rule
1. Navigate to Assignment Rules.
2. Click Create Assignment.
3. Select profile: choose the quality profile to apply (e.g., "Sales Profile").
4. Add filters (define conditions): build logic statements using your metadata fields. Example: if Queue Name is "Support L1" and Talk Time is greater than 60 seconds, then only calls from the support queue that are longer than a minute will be scored against the "Sales Profile."
5. Activate: save the rule. Automation begins, and all new incoming calls are analyzed against the quality profile assigned by each matching rule.

5. Operational Workflows
This section outlines the daily tasks performed by QA analysts, supervisors, and agents to monitor performance, validate scores, and manage data.

5.1 Workflow A: Monitoring & Reviewing Analyzed Calls
Context: the Analysis List acts as the central log for every call processed by the system. Supervisors use this view to monitor real-time traffic and identify interactions requiring attention.
Steps:
1. Navigate: click Analysis List in the sidebar.
2. Filter & search: use the filter bar to isolate specific interactions. You can filter by Date, Score, Quality Profile, or specific metadata (e.g., "show me calls from Campaign A").
3. Customize view: click the Columns icon to configure which data fields appear in the table. You can show or hide fields like Agent Name, Queue, or Duration to create a view that suits your workflow. This selection is saved for future sessions.
4. Export data: click the Export button to download the current view (including applied filters) as a CSV file containing interaction IDs and scores.
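The filter-then-export behavior in 5.1 can be sketched with the standard library. This is a minimal sketch under assumed record shapes, not the platform's export code.

```python
# Illustrative sketch (not the CQA implementation): filtering an analysis
# list by metadata and exporting the filtered view as CSV, as in 5.1.
import csv
import io

def export_view(analyses, filters):
    """Apply metadata filters, then write interaction IDs and scores as CSV,
    mirroring "the current view (including applied filters)"."""
    matching = [
        a for a in analyses
        if all(a["metadata"].get(k) == v for k, v in filters.items())
    ]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["interaction_id", "score"])
    writer.writeheader()
    for a in matching:
        writer.writerow({"interaction_id": a["id"], "score": a["score"]})
    return buf.getvalue()

analyses = [
    {"id": "INT-001", "score": 85, "metadata": {"campaign": "Campaign A"}},
    {"id": "INT-002", "score": 60, "metadata": {"campaign": "Campaign B"}},
]
print(export_view(analyses, {"campaign": "Campaign A"}))
```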
5.2 Workflow B: Deep-Dive Interaction Review
Context: once a specific call is identified in the Analysis List, supervisors can drill down to understand why a score was given.
Steps:
1. Open interaction: click any Interaction ID in the list to open the detailed Interaction View.
2. Review media:
- Player: use the embedded audio player to listen to the call recording.
- Transcript: switch to the Transcript tab to read the speech-to-text log of the conversation.
3. Analyze scores:
- Scorecard view: the right panel displays the full scorecard.
- Category breakdown: view scores aggregated by category (e.g., "Soft Skills") and sub-category.
- KPI details: expand any category to see individual KPI results.
- AI reasoning: click a specific KPI to see the AI Reason, explaining the logic and evidence from the transcript used to determine the pass/fail result.
- Disputes: click Show Disputes Only to view only the KPIs that have disputes.

5.3 Workflow C: Ad Hoc Testing (Test Evaluations)
Context: use this workflow to test a new quality profile before going live, or to analyze a specific offline batch of recordings that weren't processed automatically.
Steps:
1. Navigate: click Test Evaluation in the sidebar.
2. Create new: click the button to start a new test.
3. Configuration:
- Name: give your test a unique name (e.g., "Oct Sales Audit").
- Select profile: choose the quality profile you wish to test.
4. Upload data:
- Excel upload: upload an Excel file containing the recording URLs or text URLs and any relevant metadata you want to associate with these calls.
- Audio upload: upload MP3 or WAV files directly for audio analysis.
- Text upload: upload TXT or PDF files directly for text analysis.
5. Monitor progress: the system processes files in real time. You do not need to wait for the entire batch to finish; you can click into the evaluation immediately to see row-by-row results as they populate.
6. Review outcomes:
- Completed: all files processed successfully.
- Partial success: some files failed (e.g., invalid audio links). You can filter to view only the successful analyses.
7. Export: enter your email address to receive a full export of the test results, including scores and transcript insights.
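The batch statuses in step 6 can be expressed as a simple rule over per-row outcomes. This is an illustrative sketch, assuming the status is derived purely from whether each row succeeded; the platform may track additional states.

```python
# Illustrative sketch (not the CQA implementation): deriving a test
# evaluation's batch status from per-recording outcomes, as in 5.3.

def batch_status(rows):
    """rows: one boolean per recording, True if it was analyzed
    successfully, False if it failed (e.g., an invalid audio URL)."""
    if rows and all(rows):
        return "Completed"
    if any(rows):
        return "Partial Success"
    return "Failed"

assert batch_status([True, True, True]) == "Completed"
assert batch_status([True, False, True]) == "Partial Success"
assert batch_status([False, False]) == "Failed"
```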
5.4 Workflow D: Dispute Resolution
Context: CQA supports a feedback loop where agents can contest AI scores, and supervisors can review or overturn them.
1. Raising a dispute (agent action):
- Open the Interaction View for your evaluated call.
- Locate the specific KPI you disagree with and click the Raise Dispute icon.
- Form input: Suggested answer (select what you believe the correct answer is, e.g., change "No" to "Yes"); Comment (type your reasoning, e.g., "I mentioned the tax at 02:14"); Pin audio (pin the specific point in the audio relevant to the KPI).
- Submit: the dispute is logged and flagged for supervisor review.
2. Resolving a dispute (supervisor action):
- In the Analysis List, look for interactions flagged with "Open Disputes."
- Open the interaction and toggle the Show Disputes Only switch to filter the scorecard.
- Review: read the agent's comment and compare it with the AI's reasoning and the transcript.
- Action: click Edit to enable dispute resolution. Reject: the score remains as is, and you can add a rebuttal comment. Accept & update: the system automatically recalculates the score (e.g., changing a 0 to a 10) and updates the total.
- Finalize: click Acknowledge to save the changes and close the dispute.

5.5 Workflow E: Manual Review and Edit
Context: while CQA automates scoring, there are instances where a supervisor or QA lead needs to manually intervene to correct an AI evaluation or provide a human-verified score. This workflow allows you to override the system's output and document your reasoning.
Prerequisite: you must have Supervisor access to perform this action.
1. Select interaction: navigate to the Analysis List and click on the Interaction ID of the call you wish to review.
2. Enable edit mode: open the interaction and click the Edit button to enable score modification.
3. Modify answers: scroll to the specific KPI you want to change, and change the AI-generated answer to the correct option (e.g., changing "No" to "Yes").
4. Add justification: upon changing an answer, you must add a comment justifying the reason for the manual update.
5. Finalize: click the Acknowledge button to view a summary of the changes, then approve the changes to save. The system will immediately update the KPI score and recalculate the total quality score for the interaction.

6. Integrations & Data Ingestion
CQA delivers value by analyzing conversations. To do this, it must have access to call recordings and their associated metadata. CQA supports various integration methods to ingest data from different telephony systems, CRMs, and storage solutions. Once an integration is established and the customer is onboarded (tenant/user creation, plus metadata, quality profile, and assignment rule configuration), data ingestion and analysis occur automatically.

6.1 Exotel Ecosystem Integrations
For customers using Exotel's proprietary platforms.

6.1.1 ECC4x Integration (App Server API & SFTP)
In this setup, CQA can either function separately or be embedded directly within the ECC4x interface via an iframe, providing a seamless user experience.
Data flow: CQA fetches recordings automatically on a daily basis, either via App Server APIs or by pulling reports uploaded to the CQA server (S3) via SFTP.
SFTP approach, how it works (backend):
- The VLA module on ECC generates and uploads a report to the CQA server (S3) via SFTP.
- CQA uses the Voice Log API to fetch the actual audio recording using the Call SID available in the SFTP report.
- The system analyzes the call based on configured assignment rules.
On-premise requirements: VLA must be enabled on the ECC4x setup.
App Server API approach, how it works (backend):
- ECC pushes recordings and metadata information into its own table.
- CQA uses the App Server API to fetch the actual audio recording using the Call SID available in the ECC table.
- The system analyzes the call based on configured assignment rules.
On-premise requirements: the customer needs to whitelist the App Server so that the APIs can directly access the tables.

6.1.2 ECC6x Integration (Native Kafka)
This is a deeper, native integration designed for high-throughput environments.
Data flow: bi-directional streaming via a Kafka pipeline.
On-premise requirements: for on-prem ECC deployments, the only network configuration required is whitelisting the CQA IP address (CQA should be able to listen to the customer's Kafka servers).
How it works (backend):
- Ingestion: CQA consumes recordings and metadata directly from the ECC Kafka stream.
- Feedback: after analysis, CQA pushes the results (scores/data) back into the Kafka pipeline for ECC to consume.
- Event-driven: CQA picks up each conversation as soon as a conversation event is generated by ECC, making this near-real-time analysis.

6.1.3 Exolite Platform
For users on the standard Exolite platform, CQA is accessible via a dedicated redirection button within the Exolite dashboard.
Data flow: CQA automatically fetches recordings daily using the Bulk Call API.
How it works: the integration operates entirely on the backend via API connectivity. Users perform standard configuration (quality profiles, rules), and CQA handles the data retrieval automatically.

6.2 Universal & External Integrations
For customers using third-party contact centers (CCaaS), CRMs, or cloud storage.

6.2.1 Standard API (External CCs & CRMs)
CQA provides a standard API designed to make the platform agnostic. This allows any external system to feed data into CQA and retrieve results.
Architecture:
- Standard structure: the API defines a strict schema for how conversations and metadata must be formatted.
- Connector: a connector (middleware) is built to bridge the external component and CQA.
Workflow:
- Input: the connector pushes conversations and metadata from the external source (e.g., Salesforce, Genesys) into CQA via the API.
- Processing: CQA routes the data based on assignment rules and analyzes it.
- Output: analysis results are exposed via the API (while also appearing on the CQA dashboard), allowing the external system to consume scores and insights in its own interface.
Note: the standard API can be used by customers or PS teams to build connectors that ingest data from external components into CQA and fetch data back from CQA to the external components.
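A connector's "input" step can be pictured as assembling one conversation record per the standard schema. Important caveat: the actual standard API schema is defined by CQA and is not reproduced in this guide, so every field name below (`recording_url`, `metadata`, the mandatory-field set) is a hypothetical placeholder chosen for illustration.

```python
# Illustrative connector sketch with hypothetical field names; the real
# standard API schema (section 6.2.1) defines the authoritative format.
import json

def build_push_payload(recording_url, metadata):
    """Assemble one conversation record for the ingestion call, rejecting
    records that lack metadata the assignment rules would need."""
    required = {"agent_id", "campaign"}  # hypothetical mandatory fields
    missing = required - metadata.keys()
    if missing:
        raise ValueError(f"missing metadata: {sorted(missing)}")
    return json.dumps({"recording_url": recording_url, "metadata": metadata})

payload = build_push_payload(
    "https://example.com/recordings/abc.mp3",
    {"agent_id": "A42", "campaign": "Sales", "talk_time_sec": 310},
)
print(json.loads(payload)["metadata"]["campaign"])  # -> Sales
```

On the output side, the same connector would poll or receive the analysis results exposed via the API and map them into the external system's own records.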
6.2.2 AWS S3 Connector (Cloud Storage)
This integration is designed for batch processing from systems that dump recordings to AWS S3.
Data flow: native connector support for pushing and pulling data from S3 buckets.
Security: supports authenticated S3 buckets (protected behind auth).
How it works:
1. Upload: the user or system uploads conversations and metadata files to a designated S3 bucket.
2. Fetch: CQA detects the new data, authenticates, and fetches the files.
3. Analyze: recordings are processed according to the active assignment rules.
4. Push back: CQA pushes the final analysis results back to an S3 bucket (output folder).
Note: internally, this connector utilizes the same standard API infrastructure mentioned in section 6.2.1.

6.3 Summary of Integration Capabilities

| Integration Type | Primary Mechanism | Data Direction (Ingress) | Data Direction (Egress) | Prerequisite | Status |
|---|---|---|---|---|---|
| ECC4x | SFTP report or App Server API | ✅ | ❌ | VLA enabled, IP whitelisted (if on-prem) | Done |
| ECC6x | Kafka pipeline | ✅ | ✅ | Whitelist CQA IP (if on-prem) | In progress |
| Exolite platform | Bulk Call API | ✅ | ❌ | None | In progress |
| External CCs/CRMs | Standard API + connector | ✅ | ✅ | Connector development | In progress |
| S3 storage | Native S3 connector | ✅ | ✅ | S3 bucket access/auth | In progress |

7. Troubleshooting
This guide addresses common issues users may encounter while navigating the CQA platform, configuring settings, or analyzing data.

7.1 Login & Access Issues
Problem: I cannot log in to the platform.
Possible cause: in the C3 version, the login flow requires a unique Tenant ID in addition to your username and password.
Solution: ensure you are entering the correct tenant name/ID provided by your administrator before entering your credentials.
7.2 Dashboard & Data Visibility
Problem: the Analytics Dashboard shows "No data."
Possible cause 1 (filters): your current filter selection (date, agent, or quality profile) may be too restrictive.
Solution: reset filters or expand the date range to include a wider period.
Possible cause 2 (access control): your user account may be restricted to specific metadata (e.g., a specific region or campaign).
Solution: you will only see data relevant to your assigned access rights. Contact your admin to verify your metadata permissions.

Problem: I cannot find a specific metadata field in the filter dropdowns.
Possible cause: the field exists but has not been enabled for filtering by the admin.
Solution: an admin must navigate to the Metadata page and check the "Filter option?" box for that specific field.

7.3 Analysis & Automation
Problem: incoming calls are not being analyzed automatically.
Possible cause: the call's metadata does not match any active assignment rule.
Solution: navigate to Assignment Rules and check the logic conditions (e.g., "Queue Name is Support"). Verify that the incoming call actually contains that specific metadata tag; if the metadata does not match exactly, the system will skip analysis.

Problem: a test evaluation shows a status of "Partial Success" or "Failed."
Possible cause: some rows in your uploaded file may contain invalid audio URLs or missing mandatory data.
Solution: open the specific test evaluation from the list. The system displays a count of successful vs. failed items; filter the list to identify which specific recordings failed and verify their URLs in your source file.

7.4 Disputes & Scoring
Problem: I cannot change a score while resolving a dispute.
Possible cause: the scorecard is in "view" mode.
Solution: you must click the Edit button on the interaction evaluation page to enable score modification and dispute resolution.
Problem: a "critical" KPI failed, but other scores are still high.
Possible cause: the criticality logic depends on the level set (category vs. profile).
Solution: check the quality profile configuration. If criticality is set to "Category," only that specific category's score becomes 0; if set to "Profile," the entire call score becomes 0.
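The category-versus-profile distinction above can be made concrete with a small sketch. This is illustrative only (assumed data shapes, not the platform's scoring code), and it covers just the two levels discussed in this troubleshooting entry.

```python
# Illustrative sketch (not the CQA implementation) of the criticality
# levels: a failed critical KPI zeroes only its own category when set to
# "Category", but the whole score when set to "Profile".

def category_scores(kpis):
    """kpis: dicts with category, points, and optional criticality
    ("Category" or "Profile") that applies when the KPI scores 0."""
    totals = {}
    for k in kpis:
        totals[k["category"]] = totals.get(k["category"], 0) + k["points"]
    failed_critical = [k for k in kpis if k["points"] == 0 and "criticality" in k]
    for k in failed_critical:
        if k["criticality"] == "Category":
            totals[k["category"]] = 0
        elif k["criticality"] == "Profile":
            totals = {c: 0 for c in totals}
    return totals

kpis = [
    {"category": "Compliance", "points": 0, "criticality": "Category"},
    {"category": "Compliance", "points": 5},
    {"category": "Soft Skills", "points": 8},
]
print(category_scores(kpis))  # -> {'Compliance': 0, 'Soft Skills': 8}
```

With the same failed KPI set to "Profile" criticality, every category total would become 0, which is exactly the symptom described in this entry.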
