Accounting, Edition 48

Cognitive Bias and Culture in the Detection of Fraud

by: Yanira Petrides, Instituto Tecnológico Autónomo de México
Esperanza Huerta, University of Texas at El Paso
TerryAnn Glandon, University of Texas at El Paso

Systems that analyze the content of emails are used as a tool for detecting potential fraud.

These systems conduct a linguistic analysis of emails, searching for subtle clues that may suggest the intent to commit fraud. For example, the systems search for phrases that express rationalization, incentives, or collusion. However, linguistic interpretation is highly subjective; therefore, the reports generated by these systems cannot indicate the existence of fraud with complete certainty.

In our previous article “Potential Fraudsters,” we discussed five decisions that managers must make before using these systems: establish policies for email privacy, disclose the use of the system, select in-house or outsourced analysis, determine continuous or one-time assessment, and establish follow-up procedures for reports. In this article we develop the last of these issues. That is, we discuss two challenges managers face when the system generates a report identifying a potential fraudster and managers must follow up on the report.

The first challenge is to understand that the linguistic analysis of emails involves subjectivity and uncertainty and to acknowledge that the system can make mistakes. A fraudster is not likely to explicitly state in an email: “I’m going to commit fraud.” Therefore, these systems look for subtle expressions known as “precursors of fraud.” For example, one of the most common precursors of fraud is the desire to take revenge on the company or on an employee. This desire for revenge is an element of rationalization used by the fraudster to justify fraud.

The subjectivity involved in linguistic analysis can lead systems to produce two types of erroneous results when identifying potential fraudsters. The system can identify a person as a potential fraudster when the person is not, or the system can fail to identify a fraudster. These two types of errors have different implications. “Silent” fraudsters can commit fraud without leaving tracks in their emails. These systems cannot identify silent fraudsters.
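The two types of error can be made concrete with a small sketch. The employee names and labels below are entirely hypothetical, purely to illustrate false positives (an innocent employee flagged) and false negatives (a “silent” fraudster the system misses):

```python
# Hypothetical ground truth vs. system output for five employees.
# True = actually a fraudster, False = innocent.
actual  = {"ana": False, "ben": True, "cho": False, "dee": True, "eli": False}
flagged = {"ana": True,  "ben": True, "cho": False, "dee": False, "eli": False}

# False positive: flagged by the system but actually innocent.
false_positives = [e for e in actual if flagged[e] and not actual[e]]
# False negative: a "silent" fraudster who left no linguistic trace.
false_negatives = [e for e in actual if actual[e] and not flagged[e]]

print(false_positives)  # → ['ana']
print(false_negatives)  # → ['dee']
```

Both lists matter to a manager, but as the article notes, only the first kind of error ever reaches the follow-up stage; the second is invisible to the system by construction.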

The other type of error is more common—when an employee is identified as a potential fraudster even though he or she has committed no wrongdoing. Employees may express in an email their dissatisfaction with the administration of the company only to vent, without any intention of committing fraud. Finally, precursors of fraud identified by the systems are, as the name implies, only potential antecedents of fraud. Precursors may or may not be present when fraud is committed, and detection of a precursor does not imply with absolute certainty the existence of fraud.

The systems that conduct linguistic analysis have the capability to highlight the uncertainty in the results obtained to assist users to make decisions. The reports can include the likelihood that the person identified is in fact a fraudster. For example, the system can indicate that there is an 80% probability that the person has been correctly identified as a potential fraudster. Expressing the results with an estimated probability reminds users that there is a margin of error—that it is not 100% certain that the person identified is indeed a fraudster.

In summary, the first challenge for managers is to take into account that these systems do not ensure that they can identify all fraudsters (they do not detect silent fraudsters) or that all employees identified are indeed fraudsters (incorrect identification). Therefore, managers must evaluate the report generated by the system to determine whether or not to initiate a fraud investigation.

This decision is crucial because it results in the allocation of resources for fraud investigation. If management believes the report has merit, the manager will initiate an investigation. If the manager believes the report has no merit, the report will be discarded as irrelevant and no investigation will be initiated. Due to the limited resources that companies have for fraud investigation, it is important to allocate resources to investigate cases in which it is more likely that fraud has occurred.
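One simple way to operationalize this triage—not a procedure prescribed by the article, just a common expected-value rule with hypothetical report data—is to rank reports by estimated probability times estimated exposure and investigate while budget remains:

```python
# Hypothetical reports: (report id, estimated probability of real fraud,
# estimated loss if the fraud is real). All numbers are illustrative.
reports = [("r1", 0.80, 50_000), ("r2", 0.30, 200_000), ("r3", 0.10, 10_000)]
cost_per_investigation = 20_000
budget = 40_000

# Rank by expected loss averted; investigate while budget allows and
# the expected benefit exceeds the cost of investigating.
ranked = sorted(reports, key=lambda r: r[1] * r[2], reverse=True)
to_investigate = []
for rid, prob, loss in ranked:
    if budget >= cost_per_investigation and prob * loss > cost_per_investigation:
        to_investigate.append(rid)
        budget -= cost_per_investigation

print(to_investigate)  # → ['r2', 'r1']
```

Note that the report with the highest stated probability (r1, at 80%) is not necessarily the first to investigate once the size of the potential loss is taken into account.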

The second challenge for managers is to understand how the uncertainty and the estimated probabilities disclosed in the reports influence the decisions we make. People react differently depending on how percentages are framed. For example, an 80% probability of a correct result is mathematically equivalent to a 20% probability of an incorrect result. However, we react differently when the percentage is framed as a probability of being correct than when it is framed as a probability of being incorrect. That is, we do not make an objective mathematical evaluation; rather, we make a biased evaluation depending on the frame used. This phenomenon was originally studied by Kahneman and Tversky, who developed prospect theory and demonstrated the existence of these biases in decision-making in different contexts [1].
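The mathematical equivalence of the two frames is easy to verify; a minimal sketch (the sentence wording here is ours, not the actual text of any system's report):

```python
p_correct = 0.80                  # probability the identification is correct
p_error = round(1 - p_correct, 2) # the same fact, framed as an error rate

frame_a = f"{p_correct:.0%} probability the identification is correct"
frame_b = f"{p_error:.0%} probability the identification is incorrect"

# Both sentences describe one and the same underlying report...
assert abs(p_correct + p_error - 1.0) < 1e-9
# ...yet prospect theory predicts readers respond to them differently.
print(frame_a)
print(frame_b)
```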

We conducted an experiment to understand the framing effect in the reports generated by linguistic analysis systems [2]. Participants in the experiment read a report stating that a person had been identified as a potential fraudster. Some participants read that there was an 80% probability that the person had been identified correctly, while others read that there was a 20% probability that the person had been incorrectly identified. As suggested by prospect theory, the framing used (correct or incorrect) influenced the decisions people made.

Our results indicate that when the percentage is framed as a probability of correct identification, people are more willing to initiate a fraud investigation. However, when the percentage is framed as a probability of incorrect identification, people are less willing to initiate an investigation. That is, when the report highlights the possibility of the system making an error—when the probability that the subject has been incorrectly identified is stated—people take a more cautious approach to initiating an investigation.

Therefore, the second challenge for managers is to consider that the decisions they make are influenced by the framing effect. If managers want the reports to be evaluated more critically, they should use systems that highlight the probability of incorrect identification. In this way, the user will be reminded that there is a margin of error in the identifications and will act more cautiously.

In conclusion, the framing of the report can help the decision maker become more aware of the fallibility of these systems and of the risk of spending resources on unnecessary fraud investigations. From a practical perspective, framing the report in terms of the probability of incorrect identification, rather than correct identification, can lead people to take a more cautious approach when making decisions. The presence of cognitive biases, such as framing, and the influence of culture underscore the complexity of implementing this type of system in multinational companies.



  1. The original studies are in: Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453-458. An updated and entertaining review of these and other biases in decision-making is in Kahneman (2013), Thinking, Fast and Slow, Farrar, Straus and Giroux, ISBN-13: 978-0374275631.
  2. The original report of the experiment is published in: Huerta, E., Glandon, T., and Petrides, Y. (2012). Framing, decision-aid systems, and culture: Exploring influences on fraud investigations. International Journal of Accounting Information Systems, 13(4), 316-333.
