Afiniti President Tom Inskip moderated a panel at COP28 with the Sustainable Markets Initiative on the role of digital technology in accelerating sustainability. The panel featured Tony Bates, CEO of Genesys; Kate Kallot, CEO of Amini AI; Natasha Franck, CEO of EON; and Greg Jackson, CEO of Octopus Energy.
Tom and the panellists agreed that decarbonizing technology’s own footprint is essential not only to achieve the global sustainability objectives set by the United Nations, but also to build confidence in technology and AI as drivers of sustainability. Digital technology and AI have a crucial role to play in helping other sectors minimize their carbon footprint, through innovation and data that lead to sustainable action and behaviors throughout the value chain of any product or service.
“At Afiniti, we are committed to taking the necessary steps to ensure that sustainability is at the heart of everything we do,” said Inskip. “As a leading technology company, we need to encourage and support sustainability, and help spread knowledge on what we can do as an industry to achieve sustainable outcomes.”
WASHINGTON – September 27, 2023 – Afiniti, the leading customer experience (CX) artificial intelligence (AI) application and cloud infrastructure provider, announces that Executive Vice President and Chief Commercial Officer, Tom Inskip, has been promoted to President of Afiniti.
In this new position, Mr. Inskip leads all of the company’s customer- and partner-facing activities. The role is chartered with delivering on the company’s growth and marketing opportunities, as well as presenting a unified, best-in-class experience to Afiniti customers and partners worldwide.
“Tom has played an instrumental role in Afiniti’s growth. I’ve had the pleasure of working alongside Tom for over a decade, and consider him an extraordinary growth leader and a business development and marketing talent,” said Hassan Afzal, CEO of Afiniti. “I am confident that his appointment positions us to fulfill our ambition to become the undisputed global leader in CX AI.”
“It’s an honor to work with Hassan and our talented colleagues at Afiniti. We share a passion for making our customers and partners successful, and are excited to play an instrumental role in ushering in a new era of CX AI,” said Mr. Inskip. “More now than ever, our customers are turning to AI to optimize for better outcomes at every turn – this is our expertise.”
Since 2013, Mr. Inskip has been responsible for revenue, helping the company achieve over 40% annual growth across a global customer base, while playing a key role in evolving the product portfolio beyond the AI Optimization solutions (AI Pairing, AI Offers, and AI Commissions) to include the AI Cloud Infrastructure product suite Afiniti Inside™, which incorporates AIRO™, the company’s new “self-serve” offering.
About Afiniti
Afiniti is a leading provider of customer experience (CX) artificial intelligence (AI). Our CX AI optimization services and infrastructure deliver measurably better business outcomes for some of the largest enterprises in the world. Our technology is used globally in the healthcare, telecommunications, hospitality, insurance, and banking industries, and across multiple customer experience channels. To learn more, visit www.afiniti.com.
AI is driving one of the biggest business transformations in history, where companies are moving fast to figure out how to use the technology to deliver more for customers.
The acceleration of AI has also sparked conversations about the effect it’ll have on employment – in particular, whether smart technology is undermining the value of human labour.
Across many industries, this fear is playing out in both the customer and employee experience, with a heavy focus on how generative AI will disrupt traditional ways of working, and how people interact with companies.
This disruption is often painted in stark terms, where employment opportunities will decline as machines move to the centre stage.
But companies that harness this technology to connect and empower people, rather than replace them, will find success with their customers, along with higher revenue and happier employees.
What the customer experience typically looks like
The contact centre at, say, a large health insurance company is the tip of the spear for the customer experience. The phones are ringing every second with customers wanting to know any number of details of their insurance package. What does it include? How quickly can payouts be made? What paperwork is involved?
Each customer is different: one is a first-time caller anxious to be told that they will receive full support; another knows the insurance process well and simply wants clarity on a few details; yet another is frustrated at having to go through the process yet again.
On the customer service side, contact centre agents are often grouped by skill set, as determined by the organisation’s business goals. Customers are typically routed to the first available agent within a group based on what they’re calling about, which leaves an optimal connection to chance – and money on the table.
Improving, not undermining, the human element of CX
There are two false assumptions about the contact centre. The first is that agents are all alike; the second is that AI can now do the same jobs that humans have typically done – or better.
“While generative AI chatbots are great at simple interactions, such as helping a customer get updates on their account, they can’t do everything,” said Syed Adeel Ahmed, VP of AI R&D at Afiniti.
“In higher stakes interactions, like when a customer is filing a claim for a health procedure, the empathy only another person can provide makes a huge difference and can’t be replicated by a chatbot.”
Afiniti’s CX AI technology is designed to enhance, not undermine, the human element of the customer experience.
Just as each customer has a unique set of needs, each agent has their own set of skills and experience. The key challenge is designing an effective customer experience that harnesses this diversity to successfully match the right agent with the right customer.
Afiniti’s technology kicks into gear the second a customer reaches the contact centre. It’s at that first point of connection where the experience optimisation begins.
“Instead of routing the customer to the first available agent, as is often the case, companies use Afiniti’s AI to pair customers with agents based on historical patterns of data – such as how an agent has handled similar interactions and why a customer has contacted the company in the past,” said Ahmed. “This puts the agent in a position to deliver on critical metrics, like closing gaps in care, so the customer leaves the interaction satisfied.”
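The approach Ahmed describes can be pictured with a minimal sketch. Everything here – the `Agent` class, the per-reason success rates, and the greedy scoring – is a hypothetical simplification for illustration, not Afiniti’s actual model:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    id: str
    # hypothetical historical success rate per call reason, e.g. {"claims": 0.82}
    history: dict = field(default_factory=dict)

def pair(customers, available_agents):
    """Greedy pairing sketch: match each waiting customer with the
    available agent whose history best fits the call reason, rather
    than simply the first agent to free up."""
    pairs = []
    agents = list(available_agents)
    for customer_id, reason in customers:
        if not agents:
            break  # no one free; remaining customers keep waiting
        # score each candidate by past performance on this call reason,
        # defaulting to a neutral 0.5 when there is no history
        best = max(agents, key=lambda a: a.history.get(reason, 0.5))
        agents.remove(best)
        pairs.append((customer_id, best.id))
    return pairs

agents = [Agent("a1", {"claims": 0.9, "billing": 0.4}),
          Agent("a2", {"claims": 0.5, "billing": 0.8})]
print(pair([("c1", "billing"), ("c2", "claims")], agents))
# → [('c1', 'a2'), ('c2', 'a1')]
```

In practice a model of this kind would weigh far more signals than a single historical rate, but the contrast with first-available routing – scoring every possible pairing before connecting the call – is the core idea.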
The success of any AI tool rests on how dynamic it can be, and how it can respond to challenging scenarios. Afiniti’s AI engine trains continuously, and on any given day may take into consideration hundreds of variables, such as product launches and agent turnover, to construct optimal models for connecting customers and agents.
The result? A better customer and agent experience and measurable increases in revenue.
Afiniti’s innovations support large enterprises – in insurance, telco, finance and hospitality – that are dealing with thousands to hundreds of thousands of calls each day.
Ensuring fairness with AI
Another concern about AI is the potential for bias to emerge and affect its decision-making. Afiniti has designed its tool to monitor constantly for signs of bias and correct them quickly.
“Measurability is essential when implementing any AI technology to ensure it is both safe and effective,” said Ahmed.
“At Afiniti we turn our technology on and off throughout the day to create a benchmark. This allows us to continuously track whether bias or other unwanted effects are emerging when our AI is active so we can quickly mitigate them. We use this same capability to precisely measure the value our models are delivering against important business metrics.”
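The on/off benchmarking Ahmed describes amounts to a continuous A/B comparison. A minimal sketch of the measurement follows; the interval structure and the metric are illustrative assumptions, not Afiniti’s actual methodology:

```python
def measured_lift(intervals):
    """Sketch of on/off benchmarking: the AI is cycled on and off in
    alternating intervals through the day, and lift is the relative
    difference in a business metric between the two groups.

    intervals: list of (ai_active: bool, metric_value: float)."""
    on = [value for active, value in intervals if active]
    off = [value for active, value in intervals if not active]
    if not on or not off:
        raise ValueError("need both on and off intervals to benchmark")
    on_avg = sum(on) / len(on)
    off_avg = sum(off) / len(off)
    return (on_avg - off_avg) / off_avg

# e.g. hypothetical conversion rates per half-hour interval
print(measured_lift([(True, 0.22), (False, 0.20),
                     (True, 0.24), (False, 0.20)]))  # roughly 0.15, a 15% lift
```

Because the on and off intervals are interleaved throughout the day, the comparison controls for time-of-day effects that a one-off before/after measurement would miss.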
Afiniti has also addressed another unwanted effect that many contact centres face – uneven utilisation of agents. Often, contact centre agents considered high performers are routed to more customers than they can handle, while perceived lower performers aren’t given enough customer interactions to deliver on their KPIs.
Afiniti applies its technology fairly, harnessing the unique abilities of each agent across the contact centre at all times to ensure a consistent distribution and utilisation of agents – meaning no one slips to the bottom of the pile and no one gets burned out.
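One simple way to picture utilisation-aware pairing – purely a hypothetical sketch, not Afiniti’s algorithm – is a score that trades off predicted fit against how busy each agent already is:

```python
def choose_agent(agents, fit_weight=0.7):
    """agents: list of (agent_id, fit_score, utilisation), each value in [0, 1].
    Returns the id of the agent balancing predicted fit against
    current utilisation, so top scorers aren't flooded with calls
    while others sit idle. fit_weight is an illustrative knob."""
    def balanced(agent):
        _, fit, utilisation = agent
        # reward predicted fit, penalise how loaded the agent already is
        return fit_weight * fit - (1 - fit_weight) * utilisation
    return max(agents, key=balanced)[0]

# the busier top performer loses out to a nearly idle colleague
print(choose_agent([("star", 0.9, 0.95), ("steady", 0.8, 0.10)]))  # → steady
```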
The need for human connection isn’t going anywhere
AI is exploding, and as Afiniti shows, human connection complemented by advanced AI will be the foundation of the future customer experience.
After all, customers will always seek the reassurance that comes with human connection – the knowledge that you are being listened to, and that your concerns are being addressed.
AI can facilitate a better experience when it works with, not apart from, humans.
Originally published in Business Reporter on October 4, 2023.
As the AI race heats up, no business wants to be left behind – and doing things properly will yield even bigger benefits
The AI era is upon us, with what seem like new advances every week pushing the technology to new heights. Between Google, OpenAI, Microsoft and a raft of other companies, developments that can improve the way we live and work are more accessible than ever before. It’s little wonder, then, that businesses are starting to consider how best to integrate AI into their processes to reap the benefits.
But thinking before acting is vital in such a fast-moving space. The first-mover advantage that businesses seek out can quickly be negated by the regulatory risks of irresponsible use of AI.
“Lots of companies talk about AI, but only a few of them can talk about responsible AI,” says Vikash Khatri, senior vice-president for artificial intelligence at Afiniti, which provides AI that pairs customers and contact-centre agents based on how well they are likely to interact. “Yet, it’s vital that responsibility be front of mind when considering any deployment of AI – the risks of not considering that are too great.”
Think fast, act slower
In part, the fast-moving, competitive environment often makes the responsible use of AI secondary to gaining market share. The history of AI, says Khatri, has seen companies develop tools that harness the power of big data sets without fully considering the impact they can have on society. Widely used AI tools are trained by trawling the internet and gleaning information from what is found online, which can often replicate and amplify our societal biases. Another problem is that AI-generated content is often ill-suited to the specific needs businesses may have when deploying AI.
“If I’m a broadband provider in the UK, as opposed to a health insurance company in the US, there’s a specific way that I communicate with my customer,” says Khatri. “With respect to the generative AI technology that’s receiving so much attention, it’s important that the AI models being used are trained on the company’s own data, rather than relying solely on generic, third-party data. That way, the organisation remains compliant with global data regulation and the AI models generate content that aligns with the company’s unique approach to its customers.”
Khatri points to how a customer service chatbot trained on the way users interact with one another on social media, for instance, could quickly turn quite poisonous rather than supportive, lobbing insults rather than offering advice.
“At Afiniti, we use responsible AI design to make those moments of human connection more valuable,” says Khatri. “That in turn produces better outcomes for customers, customer service agents and companies alike. One way we do this is by training our AI models only with the data we need, and we continuously monitor them so our customers and their customers get the results they want, while being protected from bias or other discriminatory outcomes.”
It’s not just the risk of alienating customers that should be at the forefront of a business leader’s mind when considering how to roll out AI within their organisation and to their clients. Regulation is on the horizon for AI, and is likely to bring specific requirements for how data is fed into models that are used to give AI its ‘brain’, and how AI is used to handle customer interactions.
Caution avoids consequences
“Before you even start to develop or deploy AI, you must be cognisant of the regulatory landscape,” says Kristin Johnston, associate general counsel for artificial intelligence, privacy and security at Afiniti. “This means examining your governance structure around data compliance to get your house in order first.”
AI regulation is complex and constantly changing, and a patchwork of laws across the globe can make it hard for businesses to comply. For example, businesses operating in Europe have different requirements from those with customers in the US, while the UK’s data protection regulation is likely to soon diverge from the European Union’s.
The magnitude of the task in responsibly deploying AI is something most businesses have yet to fully wrap their heads around, fears Johnston. “A lot of companies haven’t built out a governance process specifically around AI,” she says. To do so properly, Johnston says it’s important to consider, first, the definitions of ‘AI’ and ‘machine learning’, then to identify how AI is being used within the organisation based on those definitions, and to construct your responsible AI programme accordingly so that all employees are aligned.
AI is set to become so ubiquitous that external services feeding into your company may use AI as well. For instance, Google has now introduced generative AI-powered aids to develop documents and slide decks in its cloud-software suite that your employees could soon find themselves using without realising it. And if people in your company aren’t sure what AI is – or whether they’re using it – you can’t be confident your approach to AI is responsible.
Root and branch reform
Johnston stresses that a clearly understood definition of AI within your company is the basis of any AI governance programme. She recommends considering the definition of ‘AI systems’ in the artificial intelligence risk management framework published by the National Institute of Standards and Technology (NIST) in the US as a working definition.
“Making sure everyone is aligned is critical, because you want to check for any use of AI throughout your organisation,” she says. “Any protocol worth its salt needs to be able to categorically define who is using AI tools, when they’re using them, what data they’re using and what the limitations of the tools are. It’s also important to ensure AI tools are being used in a way that respects privacy and intellectual property, given the mounting legal actions against some generative AI tools by those who believe their data was used to train the models that power such platforms.”
Putting in this work to make sure responsibility is front and centre of any AI deployment will avoid headaches in the long run. Not only can the irresponsible use of AI lead to trouble, but generative AI’s tendency to ‘hallucinate’ – in other words, generate untrue responses – could lead to even bigger trouble in the court of public opinion for spreading disinformation. Yet fewer than 20% of executives say their organisation’s actions around AI ethics live up to their stated principles on AI. By putting in place a robust responsible AI programme, companies can avoid the pitfalls that come with leaping headfirst into the promise of AI without considering its drawbacks. “We’re very mindful about the ethical and responsible use of data,” says Johnston. “Responsible AI should be a priority for organisations globally.”
Responsibly transform your business with AI at afiniti.com.
Originally published in The Times Future of Data and AI report on March 22, 2023.
