I've just purchased a product over the phone from a salesperson who was pleasant, responsive and served my interests. I was served by a robot.
The capability to deliver artificially assisted, and even artificially managed, services to the public is closer than ever, with artificial intelligence (AI) software ready to plug into company operations and services traditionally reserved for human execution.
"Is that rapport at risk if a customer realises they have built an emotional connection with a robot? The answer is yes. At best, the experience comes off as lacking authenticity, and at worst, it can be seen as deceitful.”
At the heart of customer interactions - in particular the sales process and brand strategy - is building rapport that may evolve into a longer-term, trust-based relationship. Last month Forbes Magazine asked: 'Can AI ever replace human salespeople?' The answer? We'll get to that.
Let's face it: it's no mean feat for an algorithm (let's call it an AI agent) to replicate what it is to be human. As a long-time designer and observer of technology and data governance processes, I'm always impressed when I see 'humans find a way' around them, when their innate problem-solving comes to the fore.
We are a complex, crafty, resourceful and emotional bunch and we should not underestimate what it means to be human. Understanding human dignity, and the respect for it, is essential for the safe and successful deployment of AI-enabled services to any customer population.
Humans like to be treated as individuals, to be respected, to have autonomy over decisions that impact us. Humans do not like to be manipulated or humiliated, including being made to feel stupid. When humans reciprocate dignity toward each other, we build rapport.
Authentic relationship
The question is, is that rapport at risk if a customer realises they have built an emotional connection with a robot? The answer is yes. At best, the experience comes off as lacking authenticity, and at worst, it can be seen as deceitful.
There is a very good chance the customer will feel deceived and possibly humiliated. It's not about the outcome that may or may not have served their interests (purchasing a product or resolving an issue); it's how the customer was treated that counts.
Humans are not practised at distinguishing between authentic and simulated relationships; up until this point, relationships have been between two living things. Informing the customer up front that they are speaking to an AI agent resolves the question in the customer's mind and allows them to focus on the purpose of the contact.
This also brings a whole new angle to the long-standing principle of transparency in the deployment of artificial intelligence. Transparency in data governance frameworks is typically about disclosing to a customer how decisions are made and what data is used.
In the evolving world of AI agents, transparency exists at the grassroots level of 'who or what am I speaking to?'. Up-front, open disclosure about the use of AI agents empowers the customer and helps to build trust with the brand.
This is not to be confused with the deployment of 'trustworthy AI', which is about responsible practices for implementing AI into processes to produce reliable and accurate outcomes. Evidence points to significant improvements in the speed and accuracy of information processing using AI.
Most AI frameworks focus on this aspect of trust, and deploying trustworthy AI is essential before any brand can be eligible to earn the customer's trust.
This makes the deployment of an AI service to customers a cross-functional challenge that brings together technologists, ethicists and process engineers. Their challenge? To create a trustworthy AI-enabled customer experience that acknowledges the 'human' vectors in each customer-facing process: human vulnerability and outcome certainty.
Uniquely human
When you are being served by a human, they can do the work of maintaining rapport and giving comfort to the customer if and when there is an issue. They can draw from their experience, knowledge, tools and paths of escalation to demonstrate to the customer they’ve been heard and their situation is being addressed.
It's an aspect of personalisation that is just as important as having accurate and up-to-date data about the customer, their preferences and purchasing history.
For a customer to develop and maintain trust in a brand, they need to know their circumstances will be considered (vulnerability) and factored into decisions made about them (certainty).
Care is required, and care is a uniquely human thing. For the time being, this means human connection remains a vital part of customer interactions.
So, to answer the question posed by Forbes Magazine – can AI ever replace human salespeople? The irreplaceable element of human touch remains out of reach for AI. For now.
Michelle Pinheiro is Chief Risk Officer, Data & Technology at ANZ
The views and opinions expressed in this communication are those of the author and may not necessarily state or reflect those of ANZ.