Finance AI
September 10, 2025

Top AI Experimentation Tips From NayaOne Managing Director

Scott Sambucci, Managing Director at NayaOne, shares how he helps financial institutions adopt AI faster through structured validation and external testing.

This is a summary of an episode of Pioneers, an educational podcast on AI led by our founder. Join 3,600+ business leaders and AI enthusiasts and be the first to know when new episodes go live. Subscribe to our newsletter here.

TL;DR:

  • AI adoption is slow in financial services due to bureaucracy, not technology gaps. “Committees are where AI goes to die,” says Scott.
  • Enterprises should structure AI validation like experiments: clear KPIs, controlled variables, and measurable outcomes.
  • Instead of “betting” on vendors, test them in parallel using sandbox environments to reduce risk and accelerate decisions.
  • AI requires cultural change. Leaders must give teams the permission and tools to fail fast rather than slowly.
  • Incumbents risk falling behind unless they enable rapid, low-friction experiments. “You can’t afford to wait 9 months to learn.”

Before we dive into the key takeaways from this episode, be sure to catch the full episode here:

Meet Scott - Managing Director at NayaOne

Scott Sambucci, Managing Director at NayaOne, helps financial institutions adopt AI through structured validation, vendor testing, and fail-fast experimentation.

With over two decades at the intersection of fintech, innovation, and enterprise adoption, Scott has seen firsthand how bureaucratic inertia can stall even the most promising AI efforts. At NayaOne, he works with banks and insurers to remove friction from vendor evaluations, often spinning up sandbox environments in hours, not months.

His core belief? Most organizations don’t need more AI options; they need better ways to test, learn, and deploy the right ones. “People don’t know how to buy something they’ve never bought before,” Scott explains. “You have to teach them how to evaluate, not just sell.”

A former startup operator and economics researcher, Scott brings clarity to an AI market full of noise and helps large enterprises move from indecision to intelligent action.

Why AI Adoption Is Stuck in Committee Hell

One of the biggest blockers to AI adoption in financial services is not technical complexity, but institutional inertia. “Committees are where AI goes to die,” says Scott.

Instead of treating AI like an iterative infrastructure capability, many enterprises funnel it through slow-moving governance processes designed for product launches. “You talk to execs and they’ll say the board is asking, ‘Where’s all the AI adoption that was promised?’” he explains.

The real issue is fear of moving without perfect alignment. Even pilots get buried under months of risk, legal, and procurement reviews. As Scott puts it:

“It’s been nearly three years since ChatGPT launched, and institutions are still setting up their ‘AI Center of Excellence.’”

The result is analysis paralysis instead of tangible outcomes.

Structured Validation: The Cure for AI Paralysis

To move from indecision to implementation, Scott advocates for what he calls “structured validation,” a six-step framework to evaluate AI tools rigorously but efficiently. “That’s just a fancy phrase for testing or experimenting,” he notes.

“Unless you’re testing, you’re not going to get the learnings fast enough.” — Ankur Patel

Most teams say they want a POC but lack clear KPIs, timelines, or metrics. “It’s not how your chemistry teacher would have taught you to run an experiment,” Scott jokes.

The fix is to reframe evaluation like a science project: identify a business problem, define success criteria, run tests in a controlled environment, and document outcomes.

He adds, “It’s OK if the experiment doesn’t come out the way you want. What matters is what you learn.” This approach removes guesswork and enables better internal alignment before scaling.
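Taken literally, the “science project” framing can be written down as a small experiment spec before any vendor is engaged. Here is a minimal sketch in Python of what that might look like; the business problem, KPI names, targets, and 30-day timebox are invented purely for illustration and are not NayaOne’s framework.

```python
from dataclasses import dataclass, field

@dataclass
class Kpi:
    """A single success metric with a target agreed before the test starts."""
    name: str
    target: float
    observed: float | None = None  # filled in once the test run completes

    def passed(self) -> bool:
        return self.observed is not None and self.observed >= self.target

@dataclass
class AiExperiment:
    """One vendor pilot framed as a controlled experiment, not an open-ended POC."""
    business_problem: str
    vendor: str
    kpis: list[Kpi] = field(default_factory=list)
    timebox_days: int = 30  # hard deadline so the pilot cannot drift for months

    def outcome(self) -> str:
        # The pilot is a success as a learning exercise either way;
        # this only reports whether the vendor met the agreed criteria.
        return "met criteria" if all(k.passed() for k in self.kpis) else "did not meet criteria"

# Hypothetical example: the problem statement, KPI names, and numbers are placeholders.
pilot = AiExperiment(
    business_problem="Reduce manual review time for onboarding documents",
    vendor="Vendor A",
    kpis=[
        Kpi("extraction_accuracy", target=0.95, observed=0.97),
        Kpi("documents_per_hour", target=100, observed=118),
    ],
)
print(pilot.vendor, "->", pilot.outcome())
```

The point of the timebox and pre-agreed KPIs is exactly what Scott describes: the outcome is recorded either way, so a pilot that misses its targets still produces a documented learning rather than an open-ended stall.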

Don't Bet on Vendors—Test Them in Parallel

Enterprises often treat vendor selection like a gamble, choosing based on relationships, roadmaps, or hope. Scott’s advice: stop betting and start testing. “You don’t have to make any bets anymore,” he says. “You’re literally choosing a sure thing because you can test and assess.”

At NayaOne, Scott sees institutions run structured comparisons between incumbents and startups. “So many times, the vendor they thought would be the best came in third,” he says.

Rather than waiting six months to see if a roadmap materializes, organizations should evaluate their options in parallel.

“Let your incumbent know you love them, but also let them know you're evaluating alternatives. That gives you optionality without risk.” The goal is to move beyond gut feeling to real performance data.

“Let’s all agree that none of us know anything before the call. Then it’s okay to ask questions.” — Scott Sambucci
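To make the parallel-testing idea concrete, the sketch below runs the same evaluation against several candidates at once and ranks the results. The vendor names, the `evaluate` helper, and the scores are all made up; in a real sandbox the numbers would come from running an identical test set against each vendor.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(vendor: str) -> tuple[str, float]:
    # Placeholder scores: in practice these would come from running the same
    # test set against each vendor inside the sandbox environment.
    fake_scores = {"Incumbent": 0.81, "Startup A": 0.88, "Startup B": 0.79}
    return vendor, fake_scores[vendor]

vendors = ["Incumbent", "Startup A", "Startup B"]

# Evaluate the options side by side instead of sequentially over months.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(evaluate, vendors))

# Rank by score so the decision rests on measured performance, not gut feel.
for vendor, score in sorted(results, key=lambda r: r[1], reverse=True):
    print(f"{vendor}: {score:.2f}")
```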

Culture Shift: Give Teams Permission to Fail Fast

Scott emphasizes that AI adoption is not just a technical journey but a cultural one. Leaders must give teams permission to “fail fast” without career repercussions. “People already fail,” he says. “They just do it over a really long time.”

In legacy institutions, teams may delay findings or fake success metrics just to avoid blame. “That 32-year-old trying to make VP doesn’t want to risk screwing up,” Scott explains.

Instead, companies should reward the act of experimentation. “Let’s count how many times you tried new things, even if they didn’t work,” he suggests.

This mindset shift requires more than a memo from the board. It needs structural support that empowers teams to move quickly and safely without fear of failure.

Why External Sandboxes Will Become the New Standard

As AI becomes integral to operations, Scott believes external sandbox environments will become indispensable.

“Eventually, every bank will say, ‘How did we ever live without a sandbox?’” — Scott Sambucci

NayaOne’s model gives enterprises a secure, air-gapped space to test vendors, simulate integrations, and use synthetic data without touching production systems.

“You can spin up a full dev environment with Jupyter notebooks, UAT tools, and public or bespoke datasets—all within hours,” Scott explains. One bank even launched eight different agentic tests for 40 people in under 24 hours.

“In the old world, that would’ve taken a year,” he says. Sandboxes lower the barrier to experimentation, allowing teams to validate vendor claims quickly and confidently. The result is faster learning with far less friction.

Would you like to learn more about AI implementation? Check out this episode on simplifying AI implementation in financial services with AgentFlow.

Book a 30-minute demo

Explore how our agentic AI can automate your workflows and boost profitability.

  • Get answers to all your questions
  • Discuss pricing & project roadmap
  • See how AI Agents work in real time
  • Learn how AgentFlow manages all your agentic workflows
  • Uncover the best AI use cases for your business