AI systems: Protection against biased algorithms is a competitive advantage

Technological development promotes prosperity and well-being far beyond material wealth. Global gross domestic product is 23 times higher than before the first industrial revolution, life expectancy has risen by 245 percent, and working hours have fallen by 49 percent. But technological progress also brings new risks that must be addressed just as seriously.

Artificial intelligence (AI) is under particular scrutiny, for several reasons. When a credit card company uses AI to detect fraud in real time and respond immediately, that helps customers and businesses alike. But what if the algorithm does not work as it should? What if it blocks people with certain names, places of residence, or a particular gender much faster than others? The consequences could be serious.

Neutralizing (un)conscious biases

AI systems have two fundamental weak points: the algorithms themselves and the data from which the programs learn. Both are prone to the same problem: (un)conscious prejudices and imbalances, so-called biases. Organizations must address this challenge with effective AI risk management if they do not want to cede the field to their competitors.

Most executives have little experience of how AI affects the different elements of their organization, from the software side and data assets to the interplay of people and technical systems. As a result, C-level leaders often find it difficult to identify potential risks and weaknesses in their AI systems. This can have serious consequences. Preventing biased algorithms and biases in the data requires one thing above all: an understanding of how complex, nuanced, and interconnected the topic is.

Lots of potential – in both directions

AI systems have the potential to make decision-making fairer and more objective at all levels. This is not just theory: a 2018 study, for example, found that judicial decisions supported by AI systems can reduce racial disparities.

But AI systems do not prevent human error automatically. If they are not free of distortions, AI systems can reproduce human and social prejudices, and faster than humans ever could. To prevent this, strategic risk management addressing prejudices and biases within AI systems is essential.
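One concrete way to surface such distortions is to compare a model's error rates across demographic groups. The sketch below, a minimal illustration with invented groups and toy data rather than any real system, computes per-group false positive rates for a fraud detector, i.e. how often legitimate customers of each group get wrongly blocked:

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute per-group false positive rates.

    records: iterable of (group, predicted_fraud, actually_fraud) tuples.
    Returns {group: FPR}, where FPR = FP / (FP + TN) over legitimate cases.
    """
    fp = defaultdict(int)   # legitimate transactions wrongly flagged as fraud
    tn = defaultdict(int)   # legitimate transactions correctly let through
    for group, predicted, actual in records:
        if not actual:      # only legitimate transactions enter the FPR
            if predicted:
                fp[group] += 1
            else:
                tn[group] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}

# Illustrative toy data: (group, model_flagged, truly_fraudulent)
decisions = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
]

rates = false_positive_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)
```

Here group B's legitimate transactions are blocked twice as often as group A's (0.5 vs. 0.25), exactly the kind of gap a risk review should flag. Which metric matters (false positive rate, approval rate, calibration) is itself a policy decision and differs by use case.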


Principles for AI risk management

CEOs, experts, and other decision-makers should first clarify the risks of biased AI systems and their consequences, and identify foreseeable weaknesses and sources of error. This requires a multidisciplinary approach that incorporates the perspectives of all relevant parts of the organization.

How this can work in practice is shown by the example of a prominent, here anonymized, European bank that uses AI systems to optimize its call center, mortgage lending, and financial management. The bank's COO brought together experts from business, IT, security, and risk to assess and prioritize risks. The result was a structured risk identification that enabled a meaningful, priority-based allocation of time and resources. That is just the beginning.
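A structured risk identification of this kind can start very simply: score each risk by likelihood and impact, then sort. The risks and scores below are invented for illustration; real workshops would use the organization's own risk register and scales:

```python
risks = [
    # (risk, likelihood 1-5, impact 1-5) -- all values invented for illustration
    ("biased training data in mortgage scoring", 4, 5),
    ("call-center model drifts after product change", 3, 3),
    ("no human fallback when model is unavailable", 2, 4),
]

# Rank by likelihood x impact, highest first; time and budget go to the top items.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in prioritized:
    print(f"{likelihood * impact:>2}  {name}")
```

The value is less in the arithmetic than in forcing business, IT, security, and risk experts to agree on the scores, which is where hidden assumptions surface.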

Bring the entire organization along

Once it is clear which urgent issues need to be addressed, the second step is robust control systems throughout the organization. The development of AI systems must be guided along comprehensible and controllable paths, in combination with internal guidelines, test procedures, and contingency plans, as well as suitable training and development opportunities for employees.

The European bank mentioned above has laid down robust decision-making principles with a clear objective: to determine when and how AI systems may be used where they affect the financial situation of bank customers. In certain situations, for example, a human employee must approve a recommendation made by the AI before it is passed on to customers. In addition, the bank subjected its existing risk management to an in-depth review to eliminate possible vulnerabilities and to establish and strengthen the necessary safeguards.
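An approval rule of this kind amounts to a routing gate in front of the customer. The sketch below is a hypothetical illustration, not the bank's actual system; the field names and the confidence threshold are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    customer_id: str
    action: str
    affects_finances: bool   # e.g. credit limit changes, mortgage terms
    model_confidence: float  # 0.0 .. 1.0

def route(rec, confidence_floor=0.9):
    """Decide how an AI recommendation leaves the system.

    Hypothetical policy: anything touching the customer's financial
    situation, or anything the model is unsure about, goes to a human
    for approval before the customer ever sees it.
    """
    if rec.affects_finances or rec.model_confidence < confidence_floor:
        return "human_review"
    return "auto_release"

print(route(Recommendation("c1", "raise_credit_limit", True, 0.97)))  # human_review
print(route(Recommendation("c2", "send_newsletter", False, 0.95)))    # auto_release
```

The design choice worth noting: the gate is deliberately conservative, so a high-confidence model can never bypass human review for financially relevant actions.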

It is important to tailor any precautions against biases in AI systems to the actual risks and to engage with the subject in the necessary depth. How far to go depends on the complexity of the algorithms used, on how extensive and diverse the data are, and on where and how people intervene. The precautions must therefore always be individualized, which again underlines the need to plan such a risk analysis strategically.

Preventing biases becomes a competitive advantage

In the end, there is still much to learn about effective risk management for AI systems, and that will remain true for the time being. The technology is still relatively new and is constantly evolving. One thing is already foreseeable, however: ensuring that the results of AI systems are balanced, fair, and free of biases will be one of the key success factors for 21st-century companies. The following six steps can provide a first, effective impetus:

  • Incorporate the opportunities and risks of AI systems into strategic considerations right from the start.
  • Establish processes and measures to test AI systems for biases and make the necessary adjustments.
  • Stimulate fact-based discussions about biases in human decision-making.
  • Discuss how humans and machines can work together in the best possible way.
  • Invest more in multidisciplinary research on biases and make more data accessible.
  • Encourage diversity in the AI sector so that it reflects social diversity, thereby preventing bias and prejudice more effectively.
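In practice, the second step, testing for biases and making adjustments, can take the form of a regression test: before a model ships, assert that the disparity between groups stays within an agreed tolerance. The metric (approval-rate gap) and the 10-point tolerance below are illustrative policy assumptions:

```python
def selection_rates(outcomes):
    """outcomes: {group: list of 0/1 decisions}. Returns per-group approval rate."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def passes_bias_gate(outcomes, max_gap=0.10):
    """Ship gate: the largest gap in approval rates must stay within tolerance."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values()) <= max_gap

# Toy decisions (1 = approved). A 10-point tolerance is an illustrative policy.
before = {"group_a": [1, 1, 1, 0], "group_b": [1, 0, 0, 0]}  # 0.75 vs 0.25
after  = {"group_a": [1, 1, 0, 0], "group_b": [1, 1, 0, 0]}  # 0.50 vs 0.50

print(passes_bias_gate(before))  # False: adjustment needed before release
print(passes_bias_gate(after))   # True
```

Running such a gate automatically on every model update turns the bullet point into an enforceable process rather than a one-off audit.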

Organizations should not hesitate to devote themselves to this topic. The more costly the consequences of faulty AI systems become, and the more organizations have to grapple with the risks, the more the ability to recognize and prevent biases becomes a new competitive advantage. There is no alternative to approaching this strategically.


Peter Breuer is a senior partner at the management and strategy consultancy McKinsey. An expert in big data analytics, he helps clients achieve excellence along the entire value chain.
