
Artificial intelligence is rapidly becoming embedded in business operations, from supply chains and software development to customer service and procurement. While many organizations see AI as a powerful tool for efficiency, consultant and systems strategist Georg Meyer says leaders should also pay close attention to the risks that come with relying too heavily on automated systems.
According to Meyer, the biggest danger is not everyday technical mistakes, but rare events with major consequences. “The biggest risks are tail risks—‘black swan’ events that are very unlikely to occur but, if they do, would have extremely large consequences,” he says.
With traditional software, engineers can usually predict the kinds of mistakes that might occur. Human-built systems tend to fail in ways that make sense: miscounting something, misplacing a decimal, or making an incorrect assumption in a calculation. AI systems, however, behave differently.
“With AI, because there’s not a mind we can understand, the errors can be far more unpredictable,” Meyer explains.
He offers a practical example. Imagine a company asks an AI system to manage procurement with the goal of reducing inventory costs. In trying to optimize that objective, the system might quietly reduce orders for expensive but critical parts. The decision may appear efficient at first, but the consequences become clear only when production stops because a necessary component is missing.
“Without close monitoring, you may only discover the problem once you run out,” Meyer says.
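To make that failure mode concrete, here is a minimal Python sketch of a naive procurement optimizer that fills a budget with the cheapest parts first. Everything in it, the part names, costs, demand figures, and budget, is hypothetical, invented for illustration rather than taken from Meyer's example:

```python
# Illustrative sketch only: a naive procurement "optimizer" that minimizes
# spend without any notion of part criticality. All data is hypothetical.

from dataclasses import dataclass

@dataclass
class Part:
    name: str
    unit_cost: float      # cost per unit
    weekly_demand: int    # units the production line consumes per week
    critical: bool        # True if a stock-out halts production

def naive_reorder_plan(parts: list[Part], budget: float) -> dict[str, int]:
    """Greedily fill the budget with the cheapest parts first.

    This mirrors the failure mode Meyer describes: expensive parts are
    quietly under-ordered because each unit "costs" the objective more,
    even when they are the ones whose absence stops the line.
    """
    plan = {p.name: 0 for p in parts}
    remaining = budget
    for part in sorted(parts, key=lambda p: p.unit_cost):  # cheapest first
        qty = min(part.weekly_demand, int(remaining // part.unit_cost))
        plan[part.name] = qty
        remaining -= qty * part.unit_cost
    return plan

parts = [
    Part("fastener", unit_cost=0.10, weekly_demand=10_000, critical=False),
    Part("gearbox", unit_cost=450.00, weekly_demand=40, critical=True),
]

plan = naive_reorder_plan(parts, budget=5_000.0)
for p in parts:
    shortfall = p.weekly_demand - plan[p.name]
    if p.critical and shortfall > 0:
        print(f"WARNING: line-down risk, {p.name} short by {shortfall} units")
```

Run as written, the sketch under-orders the expensive gearbox by 32 units while fully stocking cheap fasteners. Nothing in the cost objective itself ever signals the problem; only the external check on critical parts does.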
Efficiency Can Create Fragility
Meyer also believes businesses sometimes make their systems more fragile while trying to make them more efficient.
In many modern warehouses, for example, the warehouse management system is the only place where inventory locations are recorded. If the system goes down, employees may know the parts exist somewhere in the building but have no way to locate them quickly.
“In a large warehouse, if the system is down, it’s functionally the same as not having any inventory at all,” Meyer explains.
For companies serving customers with strict production schedules, this can be extremely costly. Some contracts include “line-down penalties” that can reach tens of thousands of dollars per hour if parts cannot be delivered on time.
Automation failures can happen for many reasons: cyberattacks, software bugs, network outages, or cloud disruptions. When too many processes depend on a single system, that automation becomes a single point of failure.
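One common mitigation, sketched below in Python, is to keep a periodically refreshed offline copy of the data the business cannot operate without. The database path, table, and column names here are assumptions made for illustration, not a description of any particular warehouse system:

```python
# Illustrative sketch only: soften a single point of failure by exporting
# inventory locations to a local file that staff can search even when the
# warehouse management system is unreachable. Schema names are hypothetical.

import csv
import sqlite3

SNAPSHOT_PATH = "inventory_snapshot.csv"  # local disk, readable without the WMS

def export_snapshot(db_path: str) -> None:
    """Dump part locations to a plain CSV as a last-resort lookup."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT part_number, bin_location, quantity FROM inventory"
        ).fetchall()
    finally:
        conn.close()
    with open(SNAPSHOT_PATH, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["part_number", "bin_location", "quantity"])
        writer.writerows(rows)

# Run on a schedule (for example, hourly via cron). The copy is always a
# little stale, but a slightly outdated bin location beats no location at
# all when the system that normally answers the question is down.
```

The trade-off is deliberate: a small amount of storage and staleness buys back some of the resilience that full centralization removes.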
“This is a common trade-off,” Meyer says. “To make things more efficient, you often reduce the friction that makes an organization resilient.”
Why Trust in AI Can Be Misplaced
Another challenge Meyer sees is how leaders think about trust when it comes to AI systems. Because the outputs often appear intelligent, many people treat the technology as if it truly understands what it is doing. In reality, Meyer says, these systems are closer to advanced prediction tools. “They’re treated as though they were intelligent rather than what they really are: very sophisticated auto-completers.”
He compares the situation to using a calculator. A calculator can perform arithmetic quickly, but it doesn’t actually understand mathematics. In most everyday cases the result is good enough, but when the outcome must be precise or the stakes are high, human expertise becomes essential. “If the math has to be right, you need a mathematician, not just a calculator,” Meyer explains.
For that reason, Meyer believes organizations must keep humans actively involved when deploying AI in important systems. Companies need people who understand the technology, its limitations, and its potential failure modes. Teams can build this intuition by testing AI against systems the business already understands and closely reviewing results. Over time, oversight can shift from constant monitoring to checking unusual outputs, but the principle remains the same: automation is powerful, Meyer says, but it works best when humans remain firmly in the loop.
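A minimal sketch of that oversight pattern, assuming a simple historical-average baseline and a hypothetical 30% tolerance, might look like this in Python:

```python
# Illustrative sketch only: keep humans in the loop by comparing AI output
# against a trusted baseline and routing large deviations to a person
# instead of executing them automatically. Tolerance and data are hypothetical.

TOLERANCE = 0.30  # flag suggestions more than 30% away from the baseline

def review_queue(ai_suggestions: dict[str, int],
                 baseline: dict[str, int]) -> list[str]:
    """Return the parts whose AI-suggested quantity needs human sign-off."""
    flagged = []
    for part, suggested in ai_suggestions.items():
        expected = baseline.get(part, 0)
        if expected == 0 or abs(suggested - expected) / expected > TOLERANCE:
            flagged.append(part)
    return flagged

baseline = {"gearbox": 40, "fastener": 10_000}
ai_suggestions = {"gearbox": 8, "fastener": 9_800}

for part in review_queue(ai_suggestions, baseline):
    print(f"Hold for human review: {part}")
# Only "gearbox" is flagged: an 80% cut is exactly the kind of quiet,
# plausible-looking decision that deserves a second pair of eyes.
```

The specific check matters less than the principle behind it: the automated decision is never the last word until someone who understands the business has a chance to disagree.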