April 9, 2024
Written by Ankur Patel

Transforming Risk Management with Automated Decision-Making, with Stephen Taylor, CIO at Vast Bank

Stephen Taylor, CIO of Vast Bank, explores the risks and consequences of AI hallucinations, factors contributing to their occurrence, and strategies like task decomposition and explainability for mitigation.
This is a summary of an episode of Pioneers, an educational podcast on AI led by our founder. Join 2,000+ business leaders and AI enthusiasts and be the first to know when new episodes go live. Subscribe to our newsletter here.

As AI systems become more advanced and ubiquitous, a growing concern has emerged: AI hallucinations. Hallucinations occur when AI systems confidently produce incorrect, nonsensical, or misleading outputs, and they can have serious consequences in high-stakes applications.

But hallucinations are not merely a theoretical problem; they have already manifested in real-world scenarios. In 2016, Microsoft's chatbot Tay began generating racist and offensive tweets within hours of its launch, forcing the company to shut it down. More recently, OpenAI's GPT-3 language model has been shown to generate plausible but factually incorrect information, highlighting the potential for AI to spread misinformation. And who could forget Google Gemini's recent depiction of the founding fathers of the United States as people of color, a much-publicized gaffe for the tech giant.

As AI continues to permeate various industries, from healthcare and finance to legal and transportation, addressing the issue of AI hallucinations is paramount. Building trust and reliability in AI systems is crucial for widespread adoption and safe deployment. In this article, we will explore the risks and consequences of AI hallucinations, examine the factors contributing to their occurrence, and discuss strategies for mitigating their impact. By understanding and addressing this challenge, we can work towards developing AI systems that are not only powerful but also trustworthy and reliable.

Last week we spoke with Stephen Taylor, Chief Information Officer of Vast Bank, about using "decision AI" to automate decision-making processes and improve efficiency in banking operations like contract review and regulatory reporting. He dove into some of the key challenges of integrating AI with legacy systems as well as building trust in AI within the regulated finance industry. Before going deeper into the topic of AI hallucinations, check out our conversation here:

The Risks and Consequences of AI Hallucinations

AI hallucinations pose significant risks, particularly in high-stakes applications where the consequences of incorrect or misleading outputs can be severe. In healthcare, for example, an AI system that hallucinates could lead to misdiagnoses or inappropriate treatment recommendations, potentially jeopardizing patient safety.

A real-life example of this risk is the case of IBM Watson Health, which was designed to assist doctors in making cancer treatment decisions. However, internal documents revealed that the system sometimes gave "unsafe and incorrect" recommendations, such as suggesting that a cancer patient with severe bleeding should be given a drug that could worsen the bleeding.

The impact of AI hallucinations extends beyond individual instances and can have far-reaching consequences. As Stephen Taylor, CIO at Vast Bank, points out on a recent episode of Pioneers, "The biggest issues in mission-critical content creation are making sure that the data is right." Inaccurate or misleading outputs generated by AI systems can erode trust in the technology, hindering its adoption and potential benefits. This erosion of trust can be particularly damaging in sectors such as healthcare, where patient trust is crucial for effective treatment and care.

Real-world examples of AI hallucinations underscore the need for proactive measures to mitigate their risks. In addition to the aforementioned cases of Microsoft's Tay chatbot and OpenAI's GPT-3, there have been instances where AI systems have generated biased or discriminatory outputs. For example, a study by MIT researchers found that facial recognition systems exhibited higher error rates for people with darker skin tones, raising concerns about fairness and bias. Similarly, a ProPublica investigation revealed that an AI system used by U.S. courts to assess the risk of recidivism was biased against black defendants, falsely labeling them as more likely to re-offend than white defendants.

The consequences of AI hallucinations can also extend to the realm of public safety and security. In one alarming example, an AI-powered facial recognition system used by the Detroit Police Department wrongfully identified a man as a suspect in a shoplifting case, leading to his arrest. This case illustrates the potential for AI hallucinations to contribute to wrongful arrests and infringements on civil liberties.

To address the risks and consequences of AI hallucinations, it is crucial to develop robust frameworks and guidelines for AI development and deployment. This includes establishing rigorous testing and validation processes, implementing safeguards against biased or misleading outputs, and ensuring human oversight and intervention in critical applications. By proactively addressing these issues and adhering to ethical principles, we can work towards building AI systems that are reliable, trustworthy, and beneficial to society.

Factors Contributing to AI Hallucinations

Several factors contribute to the occurrence of AI hallucinations, ranging from limitations in training data to the complexity of AI models. One key challenge is the potential for biased or incomplete datasets used to train AI systems. If the training data is not representative of the real-world distribution or lacks diversity, the AI system may generate outputs that are skewed or inaccurate.

Another factor is the difficulty in dealing with unexpected or out-of-distribution inputs. AI systems are typically trained on a specific domain or dataset, and when presented with inputs that deviate significantly from the training data, they may generate erroneous or nonsensical outputs. This phenomenon was demonstrated by a research team at Google Brain, who found that deep learning models trained on a specific dataset could be easily fooled by carefully crafted adversarial examples. These adversarial examples, whose perturbations are imperceptible to the human eye, can cause the AI system to make incorrect predictions with high confidence.
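For readers who want to see the mechanics, below is a minimal sketch of the fast gradient sign method (FGSM), one common way such adversarial examples are constructed. It assumes a generic, differentiable PyTorch image classifier and is purely illustrative; it is not a reproduction of the Google Brain experiments.

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial input with the fast gradient sign method (FGSM).

    `model` is assumed to be any differentiable PyTorch classifier,
    `image` a batched input tensor scaled to [0, 1], and `label` the
    true class indices.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss;
    # the change is invisible to a person but can flip the model's prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```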

The complexity of AI models also plays a role in hallucinations. As models become more sophisticated and deep, the trade-off between performance and interpretability becomes more pronounced. Complex models may achieve higher accuracy on specific tasks but may also be more prone to generating unexpected or misleading outputs. This is particularly evident in the case of large language models, such as GPT-3, which can generate highly coherent and plausible text but may also produce factually incorrect or biased statements.

Another contributing factor to AI hallucinations is the presence of noise or errors in the training data. If the data used to train the AI system contains inaccuracies, mislabeled examples, or other inconsistencies, the system may learn to generate outputs that reflect these errors.

This issue was highlighted by a study that found that popular datasets used for training AI systems in medical imaging contained a significant number of errors and inconsistencies, potentially leading to flawed predictions and diagnoses. Addressing these factors requires a multifaceted approach that includes the development of more diverse and representative datasets, the incorporation of domain expertise and common sense reasoning into AI models, and the promotion of interpretability and robustness as key priorities in AI research and development.

Breaking Down Tasks: A Key Strategy for Mitigating Hallucinations

Stephen Taylor emphasizes the importance of task decomposition in mitigating AI hallucinations. "I find most hallucinations come into the process whenever people are trying to do too much at once." By breaking down complex workflows into discrete, well-defined tasks, he argues, the risk of hallucinations can be reduced, as the AI system can focus on specific sub-tasks rather than attempting to handle the entire process in one go.

Taylor also notes that tool selection matters at this finer granularity: "One thing I found is that different tools are better for different specific sub-parts of that task model." By leveraging different AI tools and techniques for specific sub-tasks, the overall performance and reliability of the system can be improved.
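To make the idea concrete, here is a minimal sketch of a decomposed contract-review flow in Python. The `llm_call` helper, the prompts, and the clause categories are hypothetical placeholders rather than Vast Bank's actual tooling; the point is that each step does one narrow, checkable job whose output can be validated before the next step runs.

```python
def llm_call(instruction: str, text: str) -> str:
    """Hypothetical stand-in for whichever LLM client you actually use."""
    raise NotImplementedError("wire this up to your model provider")

def extract_clauses(contract_text: str) -> list[str]:
    # Step 1: pull clauses out verbatim so nothing gets paraphrased or invented.
    raw = llm_call("List each clause of this contract verbatim, one per line.",
                   contract_text)
    return [line for line in raw.splitlines() if line.strip()]

def classify_clause(clause: str) -> str:
    # Step 2: a narrow labeling task with a fixed set of allowed answers.
    return llm_call("Label this clause as exactly one of: "
                    "payment, liability, termination, other.", clause)

def review_contract(contract_text: str) -> dict[str, list[str]]:
    # Each sub-task is small enough to audit, and each can be swapped out
    # for whichever tool handles that sub-part best.
    grouped: dict[str, list[str]] = {}
    for clause in extract_clauses(contract_text):
        grouped.setdefault(classify_clause(clause), []).append(clause)
    return grouped
```

Because each stage produces an inspectable intermediate result, a reviewer can catch a hallucination where it was introduced instead of auditing one opaque end-to-end answer.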

Case studies have demonstrated the effectiveness of task decomposition in reducing AI hallucinations. For example, a research team from Stanford University developed a modular AI system for medical diagnosis that broke down the task into smaller components, such as image segmentation, feature extraction, and classification. By decomposing the problem and using specialized AI models for each sub-task, the system achieved higher accuracy and reduced the occurrence of hallucinations compared to a monolithic approach.

But it is important to note that task decomposition alone is not sufficient to eliminate AI hallucinations entirely. Human oversight and intervention remain crucial, particularly in critical applications where the consequences of errors can be severe. By combining task decomposition with human expertise and judgment, the risks associated with AI hallucinations can be further mitigated, ensuring the reliability and trustworthiness of the AI system.

Enhancing Explainability and Transparency in AI Systems

Explainability and transparency are key factors in building trust in AI systems and mitigating the impact of hallucinations. When AI systems can provide clear explanations for their outputs and decision-making processes, it becomes easier to identify and address potential errors or biases.

Techniques for improving AI interpretability include feature importance analysis, which identifies the most influential input features in the AI model's decision-making process. By understanding which features contribute the most to the output, developers can identify potential sources of bias or errors and take corrective actions. Counterfactual explanations, which provide insights into how the AI model's output would change if certain input features were different, are another valuable tool for enhancing explainability.
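As one deliberately simple illustration of feature importance analysis, the sketch below uses permutation importance from scikit-learn on a public tabular dataset; the model and data here are stand-ins, and the same call works with any fitted estimator.

```python
# Permutation importance: shuffle one feature at a time and measure how much
# held-out accuracy drops. Large drops flag the features the model leans on
# most heavily, which is a natural place to look for bias or data errors.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```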

Taylor emphasizes the importance of data provenance and transparency, stating, "You go and explain this is where I got the data from. This is the system of record for this system. Then it gives an amount of credibility to that particular resource." By clearly communicating the sources and quality of the data used to train and operate AI systems, organizations can build trust and confidence in the reliability of the outputs.
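One lightweight way to operationalize this is to carry source metadata alongside every piece of data an AI system consumes and to surface it with the output. The structure below is an illustrative sketch under that assumption, not a description of any particular product's API; the example record and system names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SourcedFact:
    text: str
    system_of_record: str  # e.g. the core banking platform or a regulatory filing
    retrieved_at: str      # when the data was pulled, as an ISO date

def answer_with_sources(facts: list[SourcedFact]) -> str:
    """Return the answer body followed by an explicit citation for each fact."""
    body = " ".join(fact.text for fact in facts)
    citations = "\n".join(
        f"- {fact.system_of_record} (retrieved {fact.retrieved_at})"
        for fact in facts
    )
    return f"{body}\n\nSources:\n{citations}"

# Hypothetical usage with a single sourced fact:
print(answer_with_sources([
    SourcedFact("The Q1 exposure figure is within the reporting threshold.",
                "general-ledger system of record", "2024-04-01"),
]))
```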

The development of standardized frameworks and guidelines for AI explainability is crucial for ensuring consistency and accountability across different AI systems and applications. Initiatives such as the IEEE Standards Association's P7001 project on "Transparency of Autonomous Systems" aim to establish best practices and requirements for the transparency and explainability of AI systems. By adhering to these standards, organizations can demonstrate their commitment to responsible AI development and deployment.

Conclusion: Building Reliable and Trustworthy AI Systems

Addressing the challenge of AI hallucinations is crucial for realizing artificial intelligence's full potential and ensuring its safe and reliable deployment across various industries. By adopting strategies such as task decomposition, enhancing explainability and transparency, and ensuring human oversight, organizations can work towards building AI systems that are robust, trustworthy, and aligned with ethical principles.

As Stephen Taylor points out, the development of reliable AI systems also presents new opportunities for job creation and skills development. As AI systems become more prevalent, there will be a growing demand for professionals who can design, develop, and oversee these systems, ensuring their reliability and adherence to ethical standards.

To fully address the challenge of AI hallucinations, collaboration between AI developers, domain experts, and policymakers is essential. By working together, these stakeholders can establish best practices, guidelines, and regulations that promote the responsible development and deployment of AI systems. This collaboration can also foster innovation and knowledge sharing, accelerating progress towards reliable and trustworthy AI.

Want to learn more about AI in banking? Check out our episode on AI-driven lending with the COO @ Forum Credit Union.

Want to learn more about Generative AI Agents for your business? Enter your email and we’ll contact you as soon as possible.