July 30, 2025

AI and Compliance Risks With Maria Vullo

Maria Vullo, financial services expert, shares how she helps AI startups and banks align innovation with regulation.

This is a summary of an episode of Pioneers, an educational podcast on AI led by our founder. Join 3,600+ business leaders and AI enthusiasts and be the first to know when new episodes go live. Subscribe to our newsletter here.

TL;DR:

  • AI tools must be evaluated by use case, not as a one-size-fits-all risk—regulation depends on purpose and potential public impact.
  • Human error and intentional misconduct are still common in compliance, making automation a potential force for safer, fairer systems.
  • Financial institutions hesitate on AI not due to tech, but due to regulatory exposure and fear of creating audit trails they cannot control.
  • SaaS is giving way to private deployments as banks demand tighter control over sensitive data and AI decision-making.
  • To win enterprise trust, AI startups must show domain expertise, explainability, and willingness to customize for complex regulatory environments.

Before we dive into the key takeaways from this episode, be sure to catch the full episode here:

Meet Maria - Financial Services Expert

Maria Vullo, CEO of Vullo Advisory Services, leverages her experience as the former Superintendent of Financial Services for the State of New York to help banks, insurers, and startups navigate complex regulatory frameworks.

During her tenure, she spearheaded New York’s first cybersecurity regulation and led major enforcement actions involving anti-money laundering, OFAC compliance, and risk modeling.

Today, Maria works with fintechs and AI vendors looking to sell into regulated markets, helping them meet high standards around transparency, data governance, and explainability.

She brings a deep understanding of what financial institutions need to feel safe adopting new technologies—offering strategic guidance on everything from vendor contracting to compliance-by-design.

Maria is a strong advocate for innovation that serves both commercial value and public good, ensuring AI is both auditable and aligned with regulatory intent.

AI Regulation Starts With Use Case, Not Just the Technology

Maria Vullo argues that AI should not be regulated as a standalone technology, but based on its use case and potential for harm.

“Regulate AI for what use?” she asks.

For instance, if AI is being used to identify suspicious transactions, then the existing AML framework applies. “We already have a regulation for anti-money laundering,” she explains.

The critical question becomes how the technology is applied and whether it meets the standards already in place.

“You can have the best tech, but if you can’t explain it, you’re not getting in the door.” — Maria Vullo

Vullo urges startups to align with regulatory expectations from the beginning rather than trying to retrofit compliance later. “AI is a tool. But the tool has to serve the purpose,” she says. In regulated markets, success depends on building for context, not just capability.

How Regulation Shapes Risk Tolerance in Financial Institutions

Financial institutions often hesitate to adopt new technologies, not because they doubt the tech, but because of the regulatory risk it introduces.

“Once you start measuring, you create risk,” says Maria. She explains that if AI uncovers previously unknown issues or biases, regulators may expect the institution to take corrective action, even if those issues predated the tool.

This paradox often leads to delayed adoption. “You may be better off not knowing in the first place,” she notes, describing how visibility can become a liability. For many banks, the perceived enforcement exposure of AI-generated insights outweighs its potential benefits. “The last thing they want is an audit trail they cannot defend,” says Maria.

That shapes everything from procurement to pilot testing.

Why AI Could Reduce Fraud and Human Misconduct

Maria brings a nuanced view of risk, arguing that AI may reduce, not increase, certain kinds of harm.

“People make mistakes. Sometimes intentionally,” she says, recalling her experience prosecuting financial crime.

In contrast, machines lack intent. “AI is not going to wake up one day and decide to break the law,” Maria explains. This makes AI a compelling tool for fraud detection, especially in areas like transaction monitoring and claims validation.

“Machines don’t have criminal minds,” she adds. Still, she cautions that models must be designed carefully to avoid introducing new types of harm.

“Financial institutions are not afraid of AI. They’re afraid of liability.” — Ankur Patel

The promise of AI lies in its consistency and auditability. “You can validate a model. You can’t always validate a person,” says Maria. That is a powerful compliance advantage.
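
To make the auditability point concrete, here is a minimal, hypothetical sketch (our illustration, not something discussed in the episode) of a deterministic screening check. Every name, threshold, and rule below is invented; the takeaway is that the same input always produces the same decision, and every decision is written to an append-only log, which is what makes a model validatable in a way an individual reviewer is not.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative values only; real AML thresholds and country lists come
# from the institution's compliance program, not a code file.
LARGE_AMOUNT_USD = 10_000
HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdiction codes

@dataclass
class Transaction:
    tx_id: str
    amount_usd: float
    country: str

def screen(tx: Transaction) -> dict:
    """Deterministic rule check: the same input always yields the same
    decision, so the pipeline can be validated and replayed end to end."""
    reasons = []
    if tx.amount_usd >= LARGE_AMOUNT_USD:
        reasons.append("amount_at_or_above_threshold")
    if tx.country in HIGH_RISK_COUNTRIES:
        reasons.append("high_risk_jurisdiction")
    decision = {
        "tx_id": tx.tx_id,
        "flagged": bool(reasons),
        "reasons": reasons,               # why it was flagged
        "input": asdict(tx),              # exactly what the check saw
        "screened_at": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only audit log: every decision stays traceable after the fact.
    with open("screening_audit.jsonl", "a") as log:
        log.write(json.dumps(decision) + "\n")
    return decision

print(screen(Transaction("tx-001", 12_500.0, "XX")))
```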

Why Transparency and Explainability Are Regulatory Must-Haves

Maria stresses that transparency and explainability are non-negotiable in regulated environments. “We require model validation for AML systems. Why would AI be any different?” she asks.

For her, AI tools must be auditable and tied to an intended purpose, just like traditional statistical models. Explainability helps ensure regulators, auditors, and end users can understand how a decision was made.

“If something goes wrong, you have to be able to trace it,” Maria explains. This is especially critical in areas like credit scoring, fraud detection, and claims processing. “It is not enough for a model to work. It has to be defensible,” she says.

In the eyes of regulators, clarity often matters more than complexity. Explain it or don’t use it.
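
Here is what “explain it or don’t use it” can look like in practice: a minimal sketch of a linear risk score whose decisions decompose into per-feature contributions. The features, weights, and threshold are all invented for illustration; the design point is that a reviewer can answer “why was this flagged?” from the decision record itself.

```python
# Hypothetical linear risk score: score = sum(weight * feature).
# Because the model is linear, each feature's contribution to a
# decision can be reported directly alongside the decision.
WEIGHTS = {
    "amount_zscore": 1.2,     # how unusual the amount is for this account
    "new_counterparty": 0.8,  # 1.0 if the payee has never been seen before
    "velocity_24h": 0.5,      # number of transactions in the last 24 hours
}
THRESHOLD = 2.0  # invented flag threshold

def explain_score(features: dict) -> dict:
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return {
        "score": round(score, 3),
        "flagged": score >= THRESHOLD,
        # The audit answer to "why was this transaction flagged?"
        "contributions": {k: round(v, 3) for k, v in contributions.items()},
    }

print(explain_score(
    {"amount_zscore": 2.1, "new_counterparty": 1.0, "velocity_24h": 1.0}
))
# -> flagged, with amount_zscore contributing 2.52 of the 3.82 score
```

A simple, interpretable model that a regulator can read end to end is often a better fit for these workflows than a more accurate black box, which is the clarity-over-complexity trade-off Maria describes.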

“SaaS is great until a regulator asks where the data went.” — Ankur Patel

The Compliance Trade-Off: Visibility vs. Accountability

Maria points out a central tension in AI adoption: the more you can see, the more you may be held accountable for. “There’s a risk that you discover too much,” she says.

While AI systems offer powerful insights into behavior, patterns, and anomalies, those same insights may create new regulatory obligations.

“If you know something is wrong and you don’t fix it, that’s worse than not knowing at all.” — Maria Vullo

This makes compliance teams wary. The fear is not that AI will malfunction—it is that it will reveal uncomfortable truths.

“You can’t unsee what the system uncovers,” she says. As a result, some institutions delay adoption until they feel confident they can act on what AI shows them.

Would you like to learn more about AI and its development in highly regulated industries? Check out this episode on AI’s reality check inside big finance with Luke Giancarlo.
