
AI Gone Wrong: Common Neural Network Training Mistakes and How to Avoid Them

Karol Przystalski

01/09/2021 | 8 min read

There's so much hype around AI today that it's hard to believe it sometimes actually fails. Truth is, AI goes wrong more frequently than you'd expect. The AI fail stories range from alarming to pretty embarrassing, but they all open our eyes to the limitations of AI and the process of building AI-driven products. 

In this article, we go over some of the most high-profile AI failures and the lessons we can all learn from them. We show you the most common mistakes companies make when building AI-based products and share expert tips that help to avoid AI failure.

 

Table of contents:

  1. When AI goes wrong: 4 stories of AI failure
  2. 3 traps that lead to AI failure 
  3. How to avoid AI failure?

When AI goes wrong: 4 stories of AI failure

AI fail #1: IBM's Watson for Oncology

Back in 2013, the technology giant IBM teamed up with the University of Texas MD Anderson Cancer Center to build a brand-new, AI-based oncology expert advisor. Sounds amazing, right? It was supposed to become an AI that cures cancer by helping doctors uncover insights from patient and research databases.

Its press release declared: "MD Anderson is using the IBM Watson cognitive computing system for its mission to eradicate cancer."

However, in 2018, StatNews got hold of internal IBM documents and revealed that Watson had been giving cancer treatment advice that verged on dangerous. According to the documents, IBM's engineers were to blame.

What went wrong? 

The software was trained on a very small number of hypothetical cancer patients instead of real patient data. 

As a result, the system gave treatment recommendations that doctors deemed unsafe and incorrect. In one of the most memorable cases, Watson suggested that physicians give patients with severe bleeding a drug that could worsen the condition.

The project was shelved in February 2017. A report from University of Texas auditors revealed that MD Anderson had spent over $62 million, and all it got was an AI handing out dangerous treatment recommendations.

 

AI fail #2: Amazon's AI recruitment solution 

AI and machine learning seem to have one big problem: bias. 

Amazon's AI tool for recruitment encapsulates this issue perfectly. Back in 2018, it emerged that Amazon had been building an engine that could process candidate resumes and, after analyzing qualifications and all other relevant factors, return the top five candidates worth hiring.

Unfortunately, engineers soon realized that their AI engine automatically rated male candidates as better.

What went wrong? 

The problem was that Amazon's engineers trained the AI on resumes submitted by past applicants for engineering jobs. They benchmarked the training data against the company's current engineering employees.

When you think about a typical candidate for a software engineering job, you're likely to picture a white male. Based on this training data, Amazon's AI learned that male candidates with lighter skin were more likely to be a good fit for an engineering job.

This example is slightly embarrassing, but AI bias can pose a huge risk to any organization that uses it in its decision-making processes.
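
What would catching this before training look like? A minimal audit sketch (the column names and toy data below are made up, not Amazon's): compare positive-label rates across a protected attribute and flag the dataset if one group is systematically favored.

```python
import pandas as pd

# Toy stand-in for a labeled resume dataset (hypothetical columns).
resumes = pd.DataFrame({
    "gender": ["male", "male", "female", "male", "female", "female"],
    "hired":  [1, 1, 0, 1, 1, 0],
})

# Positive-label rate per group: this is what a model will learn to mimic.
rates = resumes.groupby("gender")["hired"].mean()
print(rates)

# The "80% rule" heuristic: flag the data if the lowest group's selection
# rate falls below 0.8x the highest group's.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Warning: disparate impact ratio {ratio:.2f} - labels look skewed")
```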

 

AI fail #3: Microsoft's rude chatbot

When Microsoft announced its new chatbot back in 2016, the headlines were full of hope. The chatbot, called Tay, could automatically reply to people on Twitter and engage in casual conversation, often using teenage slang.

The project was the result of Microsoft's efforts to improve conversational language understanding. As more and more people talked with Tay, the chatbot was supposed to learn how to respond more naturally and hold better conversations.

But what happened next was definitely unexpected. 

It didn't even take 24 hours after the launch for Internet trolls to corrupt the chatbot's personality. They flooded the bot with racist, antisemitic, and misogynistic tweets. As a result, Tay turned into a disturbing Twitter account that kept sharing some truly terrifying opinions. After some effort to clean up the chatbot's timeline, Microsoft decided to stop the experiment.

What went wrong?

Naturally, Microsoft never really revealed how the algorithms worked, so we can't tell which part was faulty. However, allowing Tay to interact with and learn from all kinds of conversations without any filters applied to the content was just naïve.
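
To illustrate the missing safeguard, here's a naive sketch: screen user messages before they ever reach the bot's learning loop. The blocklist and helper names are invented for this example; a production system would use a trained toxicity classifier rather than a word list.

```python
# Invented blocklist; a real system would use a toxicity classifier.
BLOCKLIST = {"slur1", "slur2"}

def is_safe(message: str) -> bool:
    """Reject messages containing blocklisted terms."""
    return set(message.lower().split()).isdisjoint(BLOCKLIST)

training_corpus = []

def learn_from(message: str) -> None:
    """Only let messages that pass the filter into the bot's training data."""
    if is_safe(message):
        training_corpus.append(message)
    else:
        print("Quarantined unsafe input:", message)

learn_from("hello, how are you today?")
learn_from("some tweet containing slur1")  # never enters the corpus
print(training_corpus)
```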

 

AI fail #4: Amazon's facial recognition fail

It seems the world is full of AI failures, and image recognition is no exception. In 2018, the American Civil Liberties Union (ACLU) showed how Amazon's AI-based facial recognition system, Rekognition, incorrectly matched 28 members of the US Congress with criminal mugshots.

According to the ACLU, "Nearly 40 percent of Rekognition's false matches in our test were of people of color, even though they make up only 20 percent of Congress."

What went wrong?

The fact that facial recognition is racially biased shouldn't come as a surprise. In one study, researchers from MIT and the University of Toronto revealed that every facial recognition system they tested performed much better on lighter-skinned faces. 

This is about much more than a failure of technology. It reflects the failure of the people, processes, and institutions that stand behind such systems.

Unfortunately, Rekognition is still sold by Amazon, and law enforcement agencies across the United States are experimenting with tools like it to identify suspects.

3 traps that lead to AI failure 

1. Saving money on R&D

The truth is that AI projects require a lot of investment in experimentation and cutting-edge research. Any company looking to build useful AI technology needs to invest in research and development; there's no way around it. Cutting corners here isn't an option, or you risk failure (and potential reputational damage).

2. Getting caught up in the technology bubble

AI technologies can't be developed in a vacuum. We need to consider the social circumstances that make AI necessary in the first place. The context of where our AI engine and algorithms will be deployed is just as important as the technology in question. 

3. Forgetting about human bias 

When rushing to keep up with the technology curve, development teams sometimes forget that they're humans with inherent biases. This is particularly relevant to companies working on data analytics projects for financial services, law enforcement, or healthcare. The Amazon recruitment fail is an excellent example of that.

How to avoid AI failure?

1. Get the right data

Your data matters a lot because you'll be using it to train your algorithms. Your AI's success will be built on it, so make sure you pick the right combination of data, training, and the problem at hand. Be aware of potential bias creeping into your data sets and try to eliminate it as early as possible to limit its impact on your model.
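
As one concrete example, keeping rare classes equally represented in training and test splits is a cheap first step. A minimal sketch using scikit-learn and a synthetic toy dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data with a 9:1 class imbalance, standing in for real data.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# stratify=y keeps the 90/10 ratio in both splits, so the minority class
# isn't accidentally over- or under-represented during evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

print("train positives:", y_train.mean(), "test positives:", y_test.mean())
```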

2. Maintain your AI regularly

As an AI engine becomes increasingly complex, its risks and costs grow as well. The longer you wait to maintain or repair your AI, the more expensive it will get. Solid and regular maintenance is critical to keep the AI delivering value and working in line with your business expectations. 
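
What regular maintenance can look like in practice (a sketch, with a hypothetical file name and metric): re-score the production model on a frozen reference set on a schedule, and alert when accuracy slips below the last accepted baseline.

```python
import json

BASELINE_FILE = "baseline_metrics.json"  # hypothetical path

def evaluate(model, X_ref, y_ref) -> float:
    """Accuracy on a frozen reference set that never changes between runs."""
    return float((model.predict(X_ref) == y_ref).mean())

def maintenance_check(model, X_ref, y_ref, tolerance: float = 0.02) -> bool:
    """Compare today's accuracy against the stored baseline; alert on decay."""
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)["accuracy"]
    current = evaluate(model, X_ref, y_ref)
    if current < baseline - tolerance:
        print(f"ALERT: accuracy fell from {baseline:.3f} to {current:.3f}")
        return False
    return True
```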

3. Have a plan for failure

You need to be prepared for failure in your AI project. Identify potential issues you might face in the future and come up with ways of detecting them. If your system goes down for an hour, what measures will you use to manage it? If your AI model breaks down, will you be able to fall back on a working one quickly? Build a recovery path to stay on the safe side.
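
A recovery path can be as simple as a fallback wrapper: if the primary model fails, serve a simpler, well-tested baseline instead of nothing. The classes below are dummy stand-ins for real models:

```python
def predict_with_fallback(primary_model, baseline_model, features):
    """Try the primary model; on any failure, fall back to the baseline."""
    try:
        return primary_model.predict(features), "primary"
    except Exception as exc:  # model server down, bad input shape, etc.
        print(f"Primary model failed ({exc}); using baseline")
        return baseline_model.predict(features), "baseline"

class Baseline:
    def predict(self, features):
        return [0 for _ in features]  # e.g. always the majority class

class FlakyModel:
    def predict(self, features):
        raise RuntimeError("model server unreachable")

preds, source = predict_with_fallback(FlakyModel(), Baseline(), [[1.0], [2.0]])
print(preds, source)  # -> [0, 0] baseline
```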

4. Improve your reaction times

As your AI scales and becomes more complex, correcting it is going to become more and more difficult. It's one thing when you have one or two models working together. When you're dealing with ten models learning from each other at all times, things get tricky. 

Develop a plan for identifying issues in your AI models and build corrective measures (see the sketch after this list):

  • Are you going to retrain them on your own?
  • How are you going to track data flows to identify the source of problems?
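
One way to make the second question answerable (a sketch with an invented schema and model identifier): stamp every prediction with the model version and a hash of its input, so a bad output can be traced back to the exact model and data that produced it.

```python
import hashlib
import json
import time

MODEL_VERSION = "fraud-v12"  # invented identifier for this example

def log_prediction(features: dict, prediction, log_file: str = "predictions.log"):
    """Append a traceable record for every prediction served."""
    record = {
        "ts": time.time(),
        "model": MODEL_VERSION,
        # A stable hash of the input lets you find the exact payload later.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

log_prediction({"amount": 120.5, "country": "PL"}, prediction=0)
```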

5. Change your AI together with the business

While AI is learning about the world, the world itself might start changing fast. For example, if a business problem that you're solving has radically changed because of a brand-new competitor on the scene or a new regulation, you need to make sure that your AI responds to that. 

After all, your AI project isn't just about technology - it's about many other layers ranging from operational and engineering to business and PR. 

When implementing your AI model, you need to have a plan for monitoring and measuring changes that might affect your outcomes. Ideally, you will also develop a system that makes these adjustments automatically. Building a change management model into the AI will help you keep it up with reality and prepare for changes that are simply bound to happen.
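
A minimal version of such monitoring, sketched here with synthetic data: compare the distribution a feature has in production against the one it had at training time, and trigger retraining when they diverge. The two-sample Kolmogorov-Smirnov test from scipy is one common choice:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # what the model learned on
live_feature = rng.normal(loc=0.6, scale=1.0, size=5000)   # what production sees now

# A tiny p-value means the two distributions are very unlikely to match.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.1e}) - schedule retraining")
```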

Neural network training - wrap up

AI is going strong, and its popularity in the tech landscape isn't going down anytime soon. 

Development teams looking to build cutting-edge products with AI algorithms should treat these massive AI failures as powerful lessons. After all, we're all here to learn from our mistakes and implement industry best practices that prevent them from happening in the first place.

Are you looking for an experienced AI team capable of building models that are bulletproof and in line with current industry standards? Get in touch with us and let's talk about your project. We have experience in delivering AI-based applications to clients across many different sectors and know what it takes to build powerful AI that you can rely on.


Karol Przystalski

CTO at Codete. In 2015, he received his Ph.D. from the Institute of Fundamental Technological Research of the Polish Academy of Sciences. His area of expertise is artificial intelligence.
