- Newslink Global Insurance Trends - Editor's Weekly Overview
- PRA publishes Solvency II Consultation Paper
- Insurance Europe and other EU industry associations raise concerns regarding application timeline for new EU disclosure rules for sustainable investments and risks
- S&P Global Ratings welcomes latest amendments to IFRS 17
- Charles Taylor board backs cash offer of £261m from LMP Bidco
- NeuralMetrics, an InsurTech data provider using natural language processing (NLP) technologies to power a real-time, alternative data engine for general insurers, announces official launch
- Duck Creek Technologies announces that it has been recognized as a Leader in the 2019 Gartner "Magic Quadrant for P&C Core Insurance Platforms, North America" for the fifth consecutive time
- GlobalData says start-up Found has the potential to reduce claims in contents insurance
- Greenlight Re Innovations invests in US digital MGU Coterie, its tenth investment
- FCA appoints Howard as executive director of Risk and Compliance Oversight (R&CO)
- Lloyd's appoints Sansom as CRO; Chief People Officer Andrews to leave
- Ascent Underwriting appoints Western as CUO
9th June 2019
Pegasystems research indicates consumers lack trust in artificial intelligence (AI) and don’t understand the extent to which it can make their interactions with businesses better and more efficient
Consumers lack trust in artificial intelligence (AI) and don’t understand the extent to which it can make their interactions with businesses better and more efficient, according to new research from Pegasystems, the software company empowering digital transformation at the world’s leading enterprises. The study, which was conducted by research firm Savanta and unveiled at PegaWorld in Las Vegas, surveyed 5,000 consumers around the world on their views around AI, morality, ethical behaviour, and empathy.
Despite AI delivering the types of customized, relevant experiences people demand, many consumers still aren’t sold on the benefits. With many businesses turning to AI to improve the customer experience, it’s important for organisations to understand their customers’ perceptions, concerns, and preferences. Key findings of the study included:
- Consumers are cynical about the companies they do business with: 68% of respondents said that organisations have an obligation to do what is morally right for the customer, beyond what is legally required. Despite this, 65% of respondents don't trust that companies have their best interests at heart, raising significant questions about how much trust they have in the technology businesses use to interact with them. In a world that purports to be customer centric, consumers do not believe businesses actually care about them or show enough empathy for their individual situations.
- There are serious trust issues with AI: Less than half (40%) of respondents agreed that AI has the potential to improve the customer service of businesses they interact with, while less than one third (30%) felt comfortable with businesses using AI to interact with them. Just 9% said they were 'very comfortable' with the idea. At the same time, one third of all respondents said they were concerned about machines taking their jobs, with more than one quarter (27%) also citing the 'rise of the robots and enslavement of humanity' as a concern.
- Many believe that AI is unable to make unbiased decisions: Over half (53%) of respondents said it's possible for AI to show bias in the way it makes decisions. 53% also felt that AI will always make decisions based on the biases of the person who created its initial instructions, regardless of how much time has passed.
- People still prefer the human touch: 70% of respondents still prefer to speak to a human rather than an AI system or a chatbot when dealing with customer service, and 69% of respondents agree they would be more inclined to tell the truth to a human than to an AI system. And when it comes to making life and death decisions, an overwhelming 86% of people said they trust humans more than AI.
- Most believe that AI does not utilize morality or empathy: Only 12% of consumers agreed that AI can tell the difference between good and evil, while over half (56%) of customers don't believe it is possible to develop machines that behave morally. Just 12% believe they have ever interacted with a machine that has shown empathy.
One of the critical ways organisations can increase customer trust and satisfaction is to use all the tools at their disposal and demonstrate more empathy in their interactions. But empathy is not a common corporate trait, especially when trying to maximize profitability. As AI becomes increasingly important in driving customer engagement, companies need to think about how to combine AI-based insights with human-supplied ethical considerations.
To help improve empathy in AI systems, Pega has announced the launch of its Customer Empathy Advisor. For further details on how this feature provides businesses with an ethical framework to operationalize empathy and ethics in all customer interactions, visit www.pega.com/ai-and-empathy.
"Our study found that only 25% of consumers would trust a decision made by an AI system over that of a person regarding their qualification for a bank loan," said Dr Rob Walker, VP of decisioning and analytics at Pega. "Consumers likely prefer speaking to people because they have a greater degree of trust in them and believe it's possible to influence the decision, when that's far from the case. What's needed is the ability for AI systems to help companies make ethical decisions. To use the same example, in addition to a bank following regulatory processes before making an offer of a loan to an individual, it should also be able to determine whether or not it's the right thing to do ethically.
An important part of the evolution of artificial intelligence will be the addition of guidelines that put ethical considerations on top of machine learning. This will allow decisions to be made by AI systems within the context of customer engagement that would be seen as empathetic if made by a person. AI shouldn’t be the sole arbiter of empathy in any organisation and it’s not going to help customers to trust organisations overnight. However, by building a culture of empathy within a business, AI can be used as a powerful tool to help differentiate companies from their competition.”
Pega surveyed 5,000 consumers on their views on artificial intelligence, morality, ethical behaviour, and empathy. The results included responses from the US, the UK, France, Germany and Japan.