
June 5, 2025 | Alex Paradies

AI Mistakes: Human Error in Disguise?

Are the Causes of AI Error and Human Error the Same?

The conversation around AI has exploded in the last few years, yet many have been disappointed by the results. If you have been experimenting with AI, you have run into errors: it may have given you false information, or it may have focused on the wrong piece of information.

At a basic level, learning models take experiences from training and then apply their “mental model” to a task in the uncertain real world. This sounds eerily similar to how we, as humans, learn.  

This got me thinking: when we investigate accidents, we commonly look at the different ways people make mistakes to understand how our systems can be improved. In TapRooT® RCA, we have developed a way to categorize human error. So, if AI is going to make the same types of decisions as people in high-risk activities, should we treat its errors the same way we treat human errors?

In this article, we will discuss the similarities and differences between AI Error and human error.

What is Human Error?

So, before we get into AI Error and its causes, we need to look at human error.

I recently did a video on how to classify human error. In it, we cover a few different ways of classifying and understanding human error. For comparison with AI, we are going to look at the 5 types of causal factors that lead to an incident.

The 5 Types of Causal Factors are:

  1. Equipment Failures
  2. Incorrect Actions
  3. Missed Catches
  4. Violated Safeguards
  5. Mistakes that made the outcome worse

To get a more detailed overview of these 5 Types of Causal Factors, check out our video.

How AI Error is Similar

With these 5 Types of Causal Factors, we have a framework for how errors can lead to an incident. But how do AI errors compare?

When AI messes up, it usually falls into one of these five categories:

  1. Equipment Failure – Servers go down, hardware glitches out.
  2. Misinformation – Giving false information.
    • Hallucinations – AI just makes stuff up.
  3. Biased decisions or broken logic
    • Overfitting – The model only performs well on training data.
    • Underfitting – The model is too simplistic to capture the complexity of the data.
    • Mode Collapse – Producing repetitive outputs (same image or text).
  4. Hacked Responses
    • Jailbreaking – Users craft prompts that trick the AI into ignoring its safety or content policies.
    • Instruction Injection – Hidden or embedded prompts are inserted within user input to override previous instructions.
    • Data Leakage Exploits – Prompts are designed to coax the model into revealing training data or confidential outputs.
  5. Incorrect data interpretation
    • Context Loss – AI misunderstands some context and performs poorly.
    • Ambiguity Misinterpretation
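Two of these failure modes, overfitting and underfitting, are easy to see in miniature. The sketch below is a hypothetical illustration using NumPy on toy data (not any specific AI product): an overly complex model memorizes the training noise and scores a near-zero training error, while an overly simple model misses the underlying pattern entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a quadratic trend plus noise (illustrative, made-up data).
x_train = np.linspace(-1, 1, 10)
y_train = x_train**2 + rng.normal(0, 0.1, size=x_train.size)
x_test = np.linspace(-1, 1, 50)
y_test = x_test**2  # noiseless ground truth for evaluation

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

under_train, under_test = fit_and_score(1)  # underfit: too simple for the data
good_train, good_test = fit_and_score(2)    # matches the true model's complexity
over_train, over_test = fit_and_score(9)    # overfit: memorizes the noise

# The overfit model looks best on training data (near-zero error)
# but its test error is far worse than its training error.
```

The same trade-off drives real model failures: a model that "aces" its training data while performing poorly in the field has learned the noise, not the task.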

These are some of the ways AI can make mistakes, and just like human performance, these mistakes point to unique best practices that can be applied to improve performance.

To improve human performance, we have 7 categories of Best Practices to apply.

  • Procedures
  • Training
  • Quality Control
  • Communication
  • Human Engineering
  • Work Direction
  • Management System

Right now, much of the conversation around AI focuses on three of these categories: Training, Management Systems, and Quality Control.

You can see this in the corrective actions taken when AI makes an error. They are similar to the most common corrective actions for human error.

The Most Common Corrective Actions for AI Errors

Just like we’ve seen in human error investigations, organizations often respond to AI mistakes with quick, surface-level solutions. When an AI system produces a faulty output, whether it’s a hallucinated fact, a biased decision, or a misinterpretation of context, the instinct is to patch it fast. These patches usually fall into a few predictable buckets: retrain the model, adjust the data, or impose more rules.

But here’s the problem: these fixes often address the symptom, not the root cause. Without understanding why the AI made the mistake in the first place, what data patterns, design choices, or oversight failures led to it, we risk repeating the same errors in new forms.

Let’s take a closer look at the most common corrective actions applied to AI failures:

Retraining the Model

  • Retrain on higher-quality data
  • Reinforcement learning from human feedback
  • Data augmentation

Adding Rules to the Model

  • Adding rules to steer behavior
  • Adding process filters or constraints
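A "process filter or constraint" of this kind can be as simple as a post-processing rule layered on top of the model's raw output. The sketch below is a hypothetical Python guardrail (the function name and blocked patterns are illustrative assumptions, not a real product's API) that withholds any output matching a blocked pattern:

```python
import re

# Hypothetical guardrail rules, layered on top of a model's raw output.
# The patterns here are illustrative examples, not a complete policy.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)ignore (all )?previous instructions"),  # echoed instruction injection
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # SSN-like data leakage
]

def apply_output_filter(raw_output: str) -> str:
    """Return the model output unchanged, or a refusal if any rule matches."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(raw_output):
            return "[output withheld by policy filter]"
    return raw_output
```

Note that such filters are a weak safeguard on their own, for the same reason rules are weak for human performance: they only catch the failure modes someone thought to write down in advance.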

The 3 Most Common Corrective Actions for Human Error

These common corrective actions for human error might feel like the most straightforward ways to address mistakes, but they’re rarely the most effective. In fact, research and decades of field investigations show that these “go-to” solutions often fail to prevent recurrence. Why? Because they tend to focus on individual behavior, not the systemic conditions that set people—or AI—up to fail.

They are:

  • Discipline
  • Training
  • Procedures (adding new rules)

What we know from human performance is that training and rules are not the strongest corrective actions. There is a high likelihood of recurrence when you rely on training and rules to improve performance. Safeguards are not created equal. Some are stronger than others. People are not the most reliable safeguards.

The next question for us is whether AI will turn out to be more like man or machine regarding reliability.

Perhaps we should investigate and treat AI errors the same way we treat human errors. This is where TapRooT® Root Cause Analysis can help. It is a leading process for analyzing human performance difficulties and understanding how systems fail, and it can show you where your systems set people up for success or failure.

If you want to learn more about creating stronger safeguards for human performance, check out our 5-Day Advanced TapRooT® Root Cause Analysis Training or our Stopping Human Error Course.
