Transparent AI. Measurable Impact.
Safeguarding results and providing visibility across every step of AI operations.
Modeling and testing
AI modeling and testing are subject to robust AI governance processes and controls. Runtime algorithms operate under strict SLAs, and interaction models are refreshed nightly to ensure the best possible caller/agent pairings.
Runtime algorithms
Runtime algorithms are subject to strict SLAs and fail-safe controls
- Pairing decisions made in < 50ms
- Pre-existing client call center operational constraints are strictly adhered to
- Callers are paired with the client's existing set of agents
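As a hypothetical sketch of how a latency budget and operational constraints might interact in a pairing decision (the function names, the budget handling, and the FIFO fallback are illustrative assumptions, not Afiniti's actual implementation):

```python
# Illustrative only: a constrained pairing decision under a latency
# budget, falling back to first-in-first-out routing if the budget
# is exceeded or no agent satisfies the constraints.
import time

LATENCY_BUDGET_S = 0.050  # pairing decisions target < 50 ms

def pick_agent(caller, available_agents, score, constraints):
    """Return the best-scoring eligible agent, or fall back to FIFO."""
    start = time.monotonic()
    best_agent, best_score = None, float("-inf")
    for agent in available_agents:
        if time.monotonic() - start > LATENCY_BUDGET_S:
            # Budget exceeded: fail safe to FIFO routing.
            return available_agents[0]
        if not all(check(caller, agent) for check in constraints):
            continue  # never violate pre-existing call center rules
        s = score(caller, agent)
        if s > best_score:
            best_agent, best_score = agent, s
    return best_agent if best_agent is not None else available_agents[0]
```

The key property the sketch illustrates is that client constraints act as hard filters, while the score only ranks the agents that remain eligible.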
Interaction model
Historical data is used to determine the expected payoff of different caller and agent pairings
Interaction models and associated variables dynamically evolve to ensure the best possible pairings
- Our models are retrained and refreshed nightly
- An OFF sample allows for analysis and improvement of the pairing models
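A minimal sketch of the idea that historical data determines expected pairing payoffs; the per-(segment, agent) averaging and the data shape are assumptions made for illustration:

```python
# Illustrative only: estimate expected payoff for each
# (caller segment, agent) pairing as a simple historical average.
from collections import defaultdict

def fit_payoff_model(history):
    """history: iterable of (caller_segment, agent, outcome) rows.
    Returns the expected payoff for each (segment, agent) pairing."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for segment, agent, outcome in history:
        totals[(segment, agent)] += outcome
        counts[(segment, agent)] += 1
    return {k: totals[k] / counts[k] for k in totals}

# A nightly refresh would simply re-run fit_payoff_model on the
# latest window of historical data.
```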
Validation
Gated process to validate across all stages
- Pre-deployment, deployment, and post-deployment
In the deployment phase, Afiniti performs a cutover to Monitor Mode and validates every operational aspect before switching ON the solution.
Periodic testing conducted to ensure models and data used don’t introduce bias (in conjunction with our clients, since Afiniti does not access/process customer protected data).
Sample algorithms
- Utilization control
- Wait time thresholds
- Complex incumbent routing
- Call flow adaptively estimated
- Agent availability adaptively estimated
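To make two of the listed controls concrete, here is a hypothetical illustration of a utilization cap and a wait-time threshold; the thresholds and the eligibility rule are invented for the example, not Afiniti's actual logic:

```python
# Illustrative only: an agent utilization cap combined with a caller
# wait-time threshold. Once a caller has waited past the threshold,
# routing proceeds regardless of the utilization control.
def eligible(agent, caller_wait_s, max_utilization=0.9, max_wait_s=30):
    """Return True if this agent may receive an optimized pairing."""
    if caller_wait_s > max_wait_s:
        return True  # wait-time threshold overrides the utilization cap
    return agent["utilization"] < max_utilization
```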
Sample models
- Expectation maximization (EM) clustering
- Maximum likelihood
- Bayesian models (including hierarchical)
- Generalized models (linear and non-linear)
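As a toy sketch of the first model family listed, expectation-maximization clustering, here is EM for a two-component 1-D Gaussian mixture with fixed unit variance and equal mixing weights; the data, initialization, and simplifications are illustrative assumptions:

```python
# Toy EM clustering: fit two cluster means to 1-D data.
import math

def em_two_means(xs, mu=(0.0, 1.0), n_iter=50):
    """Fit two cluster means by EM (unit variance, equal weights)."""
    mu0, mu1 = mu
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            p0 = math.exp(-0.5 * (x - mu0) ** 2)
            p1 = math.exp(-0.5 * (x - mu1) ** 2)
            r.append(p1 / (p0 + p1))
        # M-step: means become responsibility-weighted averages
        w1 = sum(r)
        w0 = len(xs) - w1
        mu0 = sum((1 - ri) * x for ri, x in zip(r, xs)) / w0
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / w1
    return mu0, mu1
```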
Validation techniques
- Control for over-fitting
- Matching strategy validation
- Comparative advantage validation
- Continuous ON/OFF benchmarking
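A minimal sketch of continuous ON/OFF benchmarking, in which calls are split into an ON group (solution active) and an OFF group (default routing) and lift is the relative difference in outcome rates; the rates and group construction are illustrative:

```python
# Illustrative only: relative lift of the ON sample over the OFF baseline.
def conversion_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def on_off_lift(on_outcomes, off_outcomes):
    """Relative lift of ON conversion over the OFF baseline."""
    on_rate = conversion_rate(on_outcomes)
    off_rate = conversion_rate(off_outcomes)
    return (on_rate - off_rate) / off_rate
```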
Afiniti Risk Mitigation by Deployment Stage
Pre-deployment
Global data risk categorization
(low, medium, and high) to deter bias and discrimination (including known proxies)
Process controls to remediate medium and high risk data elements in the collection and organization stages of Afiniti’s data supply chain
Data elements documented
to include purpose, risk level, location, etc.
Dual data architecture
supporting Afiniti’s purpose: Discovery and Consumption
Organizational controls
to implement in each stage of the data supply chain when processing, labeling and preparing data for consumption
Deployment
Phased approach
Monitor Mode > FIFO > Cutover to validate before switching ON the solution
Priority signals
(more than 50) are part of our deployments, e.g., signals for client assurance such as agent utilization and caller wait times
Continuous benchmarking
ON and OFF benchmarking to review the impact of Afiniti across the customer segments available from structured data
Configurable
to constrain potential bias based on metrics
Model re-training
system for mitigating any potential bias
Post-deployment
Access to variables
(and learning ability) for model performance monitoring and logging over time
Signals and controls
in place to raise alarms
Model historical views
(production uptime, revenue impact, operational impact)
Afiniti can
temporarily switch back to Monitor Mode as needed for risk mitigation purposes
Transparent Outputs
Client specific dashboards ensure transparent outputs across performance and operational metrics.
Performance metrics
- Calls optimized
- Conversion rate
- Lift delivered
- Incremental saves
- Incremental revenue
Operational metrics
- Calls handled
- Match rate
- Abandon rate
- Average wait time
- Average handle time (AHT)
Aggregated agent dashboards
- Top scorers
- Highest conversion
- Highest revenue per call
- Lowest handling time
- Best performing region
Client risk & compliance adherence analysis
Using ON and OFF attribution, Afiniti provides the information necessary for the client to conduct an independent analysis to ensure compliance.