Customers tend to cheat chatbots. How can companies prevent it?

Automated customer service systems are becoming more common across industries. Tools such as online forms, chatbots and other digital interfaces have benefits for both companies and customers.

A new study indicates that this convenience also has a downside: people cheat digital systems more than twice as often as they do human interlocutors. Below, we discuss the study's conclusions and its recommendations for eliciting more honest answers.

Imagine that you have just placed an online order on Amazon. What's to stop you from claiming that the package never arrived and demanding a refund, even if the order was delivered exactly as promised? Or let's say you just bought a new phone and immediately dropped it, shattering the screen. You submit a replacement request, and an automated system checks whether the product arrived broken or the damage was your fault. What do you say?

Dishonesty is not a new phenomenon. But as chatbots, online forms, and other digital interfaces become more prevalent in customer service, it’s easier than ever to twist the truth to save money. How can companies encourage customers to be honest while still taking advantage of automated tools?

To investigate this issue, the researchers conducted two simple experiments that unobtrusively measured honest behavior.

In the first, the researcher asked participants to flip a coin ten times and told them that, depending on their results, they would receive a cash prize. Some reported their results via video call or chat, others via an online form or voice bot. Participants flipped their coins in private, so the honesty of any individual could not be verified.

However, the researchers could estimate the propensity to cheat at the group level, since on average only 50% of coin flips should come up in a participant's favor.

What did they discover?

  • On average, when participants reported to a human, they said they got the desired result 54.5% of the time, which equates to an estimated 9% fraud rate.
  • In contrast, when they reported the result to the machine, they cheated 22% of the time.

In other words, some cheating is to be expected anyway, but participants in the experiment who interacted with a digital system cheated more than twice as often as when talking to a human.
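The group-level estimate behind these figures can be reproduced with a little arithmetic. The sketch below assumes, as the article implies, that honest participants report a favorable flip 50% of the time and that cheaters always claim a favorable result; the function name is mine, not the study's:

```python
def estimated_fraud_rate(reported_success_rate: float, honest_rate: float = 0.5) -> float:
    """Infer the fraction of cheaters in a group from its aggregate reports.

    Assumes honest participants report a favorable flip with probability
    `honest_rate` (0.5 for a fair coin), while cheaters always claim one:
        reported = honest_rate + fraud * (1 - honest_rate)
    Solving for `fraud` gives the expression below.
    """
    return (reported_success_rate - honest_rate) / (1.0 - honest_rate)

# Human channel: 54.5% favorable reports -> roughly 9% estimated cheating
human_fraud = estimated_fraud_rate(0.545)
print(round(human_fraud, 3))  # 0.09

# Machine channel: a 22% fraud rate implies about 61% favorable reports
machine_reported = 0.5 + 0.22 * 0.5
print(round(machine_reported, 2))  # 0.61
```

The same inversion works for any self-reported binary outcome with a known honest baseline.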

In addition, when communicating with a machine, people were more than three times more likely to engage in outright deception, claiming that 9 or 10 of their tosses ended in the desired result.

A follow-up survey helped determine that the main psychological mechanism driving this effect was the participants’ level of anxiety about their personal reputation. Those who reported their results to the machine felt much less close to the researcher and, as a result, were much less concerned about their personal reputation than those who interacted with a human.

The researchers therefore hypothesized that making the digital system more human, for example by using a voice rather than a text interface, would make users more concerned about their reputation and less likely to lie. However, participants in that condition cheated just as often.

Apparently, when people know they are interacting with a machine, giving it human traits makes little difference.

It is certainly possible that the development of human-like AI systems will make this strategy more effective in the future. But for now, it’s clear that digital tools are making fraud a lot more common, and there’s simply no obvious quick fix.

A path to a solution emerged in the second experiment. While dishonesty cannot be eliminated, it is possible to predict who is more likely to lie to a robot and then nudge that particular group of users toward a human communication channel.

The researchers first assessed participants’ overall propensity to cheat by asking them to flip a coin ten times and report their results via an online form, then categorized them as “probably cheating” and “probably honest.” In the next part of the experiment, the subjects were asked to choose how to report the results of a coin toss: directly to a person or via an online form.

Overall, about half of the participants preferred the person and the rest the online form, but the "probably cheating" group was significantly more likely to choose the online form, while the "probably honest" group tended to choose a person.

This suggests that people inclined to lie try to avoid situations in which they would have to lie to a person rather than a machine, presumably because they understand, sometimes unconsciously, that deceiving a person would be more unpleasant.

Thus, if dishonest people tend to choose digital communication channels on their own, companies can use that tendency to detect deception and minimize it. One way to identify these customers is to collect data about their preferred communication channel.
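In practice, such a screening rule might look like the following hypothetical sketch. The channel names, the amount threshold, and the routing labels are all illustrative assumptions, not details from the study; the study only supports the underlying heuristic that likely cheaters prefer digital channels:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    customer_id: str
    channel: str     # channel the customer chose: "form", "chatbot", "voice_bot", "human"
    amount: float    # claimed refund or replacement value

# Illustrative values, not taken from the study
DIGITAL_CHANNELS = {"form", "chatbot", "voice_bot"}
REVIEW_THRESHOLD = 100.0

def route_claim(claim: Claim) -> str:
    """Nudge higher-risk claims toward a human agent.

    Heuristic: large claims filed through a self-chosen digital channel
    are escalated to a live agent, where people are likelier to be honest.
    """
    if claim.channel in DIGITAL_CHANNELS and claim.amount >= REVIEW_THRESHOLD:
        return "escalate_to_human"
    return "auto_process"

print(route_claim(Claim("c1", "chatbot", 250.0)))  # escalate_to_human
print(route_claim(Claim("c2", "human", 250.0)))    # auto_process
```

A real system would of course combine channel preference with other risk signals rather than a single threshold.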

Of course, customers could game such a system by insisting on talking to a live agent, but that is effectively a win-win: according to the research, they are much more likely to act honestly when interacting with a person.

Ultimately, there is no cure for digital dishonesty. After all, it seems that fooling a robot isn’t as bad as lying to a real person’s face.

People are wired to protect their reputation, and machines simply do not threaten it the way other people do. By understanding why customers are more or less prone to lying, organizations can build systems that flag likely cases of fraud and, ideally, encourage honesty.


Cover photo: fizkes / Shutterstock

Source: RB.RU
