
Disruption in Data: Why Digital Migrations Fail and How to Succeed

April 29, 2025
Todd Stone

In this episode of The Disruptor Podcast, host John Kundtz interviews Caitlyn Truong, CEO and Co-founder of Zengines.

This episode explores how Zengines is disrupting data conversion by automating the end-to-end process for organizations.

Caitlyn draws on her background in electrical and computer engineering and in consulting, where she repeatedly saw large organizations, especially in financial services, struggle with data conversion and migration.

This led her to co-found Zengines to help ensure data stays useful during modernization.

Key points from their discussion

  • Understanding Data Migration: It covers every step needed to get a new system running, from mapping and moving the data to post-migration testing.
  • Common Pitfalls: Failing to account for all the steps, lack of a plan, not using tools, and underestimating the work effort.
  • Advice for Smoother Migrations: Understand all the steps, plan clearly, use tools (especially AI), empower business analysts, and work in stages.

Zengines' Disruptive Way

  • Zengines gives business analysts AI tools to automate mapping and changing data.
  • This "shift left" approach moves the work earlier in the process and reduces the need for large teams, making data conversion faster, less expensive, and more productive by letting the business directly shape how data is changed.


Stay tuned for Part 2 of this conversation, where Caitlyn and John shift gears and explore the challenges many enterprises face with Mainframe Modernization.

Listen to the full episode


There's a moment every software or services company knows well: the contract is signed, the deal is officially closed, and the customer is excited to get started. And somewhere in the background, a critical clock starts ticking.

Before that new customer can use your platform or services, their data has to be ingested, mapped, migrated and ready. Before your team can recognize that revenue, the customer has to be live.

That gap - between acquisition and activation - is where data migration lives. And for financial services ISVs (Independent Software Vendors), fund administrators, and BPOs (Business Process Outsourcers) managing complex client portfolios, it's also where deals get expensive, relationships start to fray, and revenue recognition gets delayed longer than anyone planned.

Understanding where data migration fits in the customer lifecycle isn't just an implementation detail. It needs to be part of your revenue strategy.

Why Financial Services Data Makes This Harder

Not all customer onboarding is created equal. In financial services - whether you're a fund administrator onboarding a new institutional client, an ISV deploying a core banking or portfolio management platform, or a BPO taking on a new asset manager's operations -- the data arriving on day one is rarely simple.

Consider what a fund administrator typically ingests when a new client comes on board: historical position data across multiple asset classes, transactions spanning years, counterparty records, NAV history, fee structures, investor allocations, and often data exported from a prior administrator's system in formats that weren't designed for portability. Each element carries its own schema, its own quirks, and its own potential for discrepancy.

Layer on the operational context - multiple accounting bases, multiple base currencies, complex instrument types like securitized products, private equity, and alternatives -- and what looks like a single "data migration" becomes dozens of concurrent mapping challenges, each carrying downstream consequences if something is off.

In financial services, a data error isn't just a technical problem. It's a client trust problem. A calculation is wrong, an allocation doesn't reconcile, a NAV is misstated. The stakes make accuracy non-negotiable -- and that's exactly what makes speed and rigor so difficult to achieve simultaneously.

This is the environment in which ISVs and service managers are trying to compress onboarding timelines. The complexity isn't going away -- but the tools available to manage it have changed. See how AI-powered data conversion works end-to-end.

The Revenue Connection Most Teams Don't Talk About

For SaaS and subscription-based software companies, the revenue model is simple on paper: recurring revenue starts when the customer is live. But the path to live runs directly through data migration.

Two things happen when that migration drags:

  • Revenue recognition is delayed. In many deals, billing starts at go-live -- not at signature. Every week that the migration takes longer than planned is a week of revenue that hasn't landed yet. For a fund administrator onboarding a new client with complex multi-asset data, that delay can extend for months.
  • Customer satisfaction erodes before the relationship even begins. The client just made a significant commitment to your platform. A slow, opaque, error-prone onboarding experience sets a damaging tone -- and in financial services, where trust is the foundation of every client relationship, that damage is hard to undo.

The average data migration involves dozens -- sometimes hundreds -- of hand-offs between source data, mapping logic, and target system requirements. Every hand-off is time. Every delay is cost. And every frustration belongs to your customer.

For organizations that onboard new clients repeatedly -- ISVs with subscription models, BPOs onboarding asset managers at scale, fund administrators adding new institutional mandates -- the compounding effect is significant. Slow migrations don't just affect one deal. They affect your team's capacity, your revenue forecast, and your reputation in a market where word travels fast.

Why Data Migration Takes Longer Than It Should

The challenge isn't that organizations don't know data migration matters. It's that the process itself is inherently challenging -- especially in financial services, where two root causes compound each other:

  • Data is unpredictable. Clients arrive with incomplete documentation, inconsistent formats, unknown data definitions, and data quality issues that only surface once you start looking. In fund administration, this often means discovering mid-project that a prior administrator's NAV history is stored in a non-standard format, or that position data across asset classes uses different identifier schemes. What appears to be a clean export from the source system rarely maps cleanly to the requirements of the target.
  • Migrations rely on manual judgment and inputs at every step. Without AI-driven tools, mapping and transforming data -- figuring out what goes where and how it needs to be shaped -- is a largely manual process. Business analysts toggle between spreadsheets, databases, and load files, making educated guesses and waiting for feedback. In financial services, where precision matters and every field has downstream implications for calculations, reporting, and compliance, that process can feel painstaking even when the team is experienced.

The result is a process that's slow, error-prone, and difficult to scale.

How AI Changes the Math on Client Onboarding

AI-powered data migration tools change the fundamental economics of onboarding by automating the steps that typically consume the most time, improving mapping accuracy through iterative feedback cycles, and bringing intelligence to the parts of the process that have historically required expensive expertise.

In a financial services context, this matters in specific, tangible ways:

  • Data profiling at the outset surfaces the scope of quality issues -- completeness rates by field, distribution of values, currency codes, unique values -- before the project is deep into execution. For a fund admin taking on a new client with years of historical data across multiple asset classes, this early visibility is the difference between a realistic timeline and a project that keeps slipping.
  • Predictive field mapping removes what is typically the most manual, time-intensive step at the start of any onboarding. Rather than building from a blank spreadsheet, teams begin with AI-generated predictions -- ranked by confidence, flagged for review -- turning weeks of setup into a validation exercise from day one.
  • AI-assisted transformation handles the rules that financial data requires: reformatting identifiers, standardizing currency codes, reconciling accounting bases, applying calculation logic consistently across thousands of records. What would otherwise require a systems engineer can be handled by a business analyst with the right tooling.
  • Connected platform intelligence is what makes speed repeatable. Because every step shares active metadata -- profiling informs mapping, mapping informs transformation, transformation informs testing -- nothing is re-explained between stations. For ISVs and BPOs with recurring onboarding needs, each new client moves through the same factory: same stations, same logic, same reliable output.
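
To make the first two capabilities above concrete, here is a minimal sketch of data profiling (completeness and distinct values per field) and confidence-ranked field mapping. This is an illustration only, not Zengines' implementation: the real product uses trained models, while this toy version ranks candidates by simple name similarity, and all field names are invented.

```python
# Toy sketch of profiling and confidence-ranked field mapping.
# Not Zengines' actual approach; field names are invented for illustration.
from difflib import SequenceMatcher

def profile(records, fields):
    """Completeness rate and distinct-value count per field."""
    stats = {}
    for f in fields:
        values = [r.get(f) for r in records]
        present = [v for v in values if v not in (None, "")]
        stats[f] = {
            "completeness": len(present) / len(values) if values else 0.0,
            "distinct": len(set(present)),
        }
    return stats

def predict_mappings(source_fields, target_fields):
    """Rank candidate source -> target mappings by name similarity."""
    predictions = []
    for s in source_fields:
        best = max(target_fields,
                   key=lambda t: SequenceMatcher(None, s.lower(), t.lower()).ratio())
        conf = SequenceMatcher(None, s.lower(), best.lower()).ratio()
        predictions.append((s, best, round(conf, 2)))
    # Highest-confidence predictions first, flagged for analyst review
    return sorted(predictions, key=lambda p: -p[2])

records = [{"cusip": "037833100", "ccy": "USD"}, {"cusip": "", "ccy": "usd"}]
print(profile(records, ["cusip", "ccy"]))
print(predict_mappings(["cusip", "ccy"], ["security_id", "currency_code"]))
```

Even this naive version shows the workflow shift: the analyst starts from ranked suggestions and validates, rather than building a mapping spreadsheet from scratch.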

Zengines customers report accelerating data migrations by up to 80%, with business analysts working 6x faster -- without needing to bring in expensive engineering resources at every step.

That speed has a direct revenue translation. Faster go-live means faster billing. Fewer iterations mean lower project cost. And a smooth, well-managed onboarding experience builds client confidence from day one -- which in financial services is not just a nice-to-have, it's the foundation of a long-term profitable relationship.

Built for Teams That Do This Again and Again

Repeatability is where the economics of AI-powered migration compound. For organizations that onboard clients regularly -- fund admins adding new mandates, ISVs growing their subscriber base, BPOs managing a steady flow of transitions -- the platform's connected intelligence doesn't reset between engagements. Profiling templates carry forward. Mapping predictions sharpen. Transformation logic built for one client becomes the foundation for the next.

The result is a factory, not a one-time build. Every new client moves through the same connected stations -- the same profiling, the same mapping intelligence, the same transformation framework -- producing consistent, reliable output at a pace that scales with the business rather than against it.

For ISVs managing subscription revenue, this means a meaningful reduction in the cost of new client acquisition. For BPOs and managed service providers, it means higher margin on every engagement. For fund administrators competing on operational excellence, it means a demonstrably faster, more accurate onboarding experience -- one that becomes a differentiator when competing for mandates from institutional investors who have seen poor transitions before and are paying close attention.

Once data is live, a related challenge in financial services is proving it arrived correctly -- especially for regulated institutions. Post-migration reconciliation is the phase where confidence is either built or broken, and where regulatory obligations are met or missed.

What This Means for Your Revenue Model

Revenue recognition is ultimately about time to value. The faster a client is live, the faster they realize the benefit of your platform or service -- and the faster your revenue cycle closes. Data migration is one of the most controllable variables in that equation.

The organizations winning on this front aren't necessarily those with the cleanest client data. They're the ones who have invested in tools and processes that make migration predictable, scalable, and fast -- regardless of what the source data looks like when it arrives. In financial services, where client data is inherently complex and the margin for error is narrow, that investment pays dividends on every deal.

Whether you're an ISV accelerating client onboarding into a financial platform, a fund administrator managing recurring mandates, or a BPO building a repeatable data ingestion practice -- treating data migration as a strategic capability, not just an onboarding task, is the difference between a revenue model that scales and one that stalls.

Ready to close the gap between client acquisition and revenue recognition?

See how Zengines accelerates data migration for financial services ISVs, fund administrators, and BPOs -- at every step of the client onboarding lifecycle. Schedule a demo to see it in action, or explore our resources library for more on AI-powered data conversion.

Boston, MA - March 4, 2026 - Zengines, an AI technology company specializing in data migration and mainframe and AS400 data lineage, today announced it has been selected to demo live at FinovateSpring 2026, taking place May 5–7 in San Diego, California.

Finovate is one of the most prestigious fintech event series, drawing over 1,200 senior-level executives from banks, credit unions, and financial institutions - including nine of the top 10 U.S. banks. Demo slots are awarded through a competitive application and selection process, with only the most innovative and market-ready fintech companies earning a spot on stage.

Zengines will use its seven-minute live demo - Finovate's signature format - to showcase its Data Lineage product: an AI-powered research and visualization tool purpose-built for large financial institutions managing the complexity of “black box” systems.

What sets Zengines apart? Traditional lineage tools show you the map - at the surface level. Zengines gives you the map and the context behind it - built exclusively for the decades-old COBOL, RPG, and PL/1 systems no one fully understands anymore.

Conventional tools produce technically accurate data flow diagrams. They cannot tell you why a calculation exists, what business rule drives it, or what it means for your regulatory obligations. That context is buried in the code itself - and Zengines is built to surface it.

Two things define the Zengines platform:

  1. Contextual lineage - Beyond data flow, Zengines captures the intent embedded in legacy code: calculation logic, branching conditions, field-level relationships, and business rules across thousands of modules. Raw lineage becomes actionable intelligence.
  2. Legacy-codebase focus - Zengines specifically targets COBOL, RPG, and PL/1: the systems where the stakes are highest. Decades of accumulated business logic. Subject matter experts retiring faster than institutions can document what they know. No individual holds the full picture - and that risk is growing.

Together, these enable three outcomes financial institutions are struggling to achieve today:

  • Regulatory compliance - Generate audit-ready lineage evidence for CDE, BCBS-239, and ORSA quickly and accurately
  • Safe modernization - Reverse-engineer the "why, where, and how" of legacy code before migrating or replacing systems
  • Live system confidence - Know your mainframe well enough to manage it: supporting teams, answering questions, and making changes with certainty

"Being selected to demo at Finovate is a meaningful validation of what we've built," said Caitlyn Truong, CEO and Co-Founder of Zengines. "The financial institutions in that room are dealing with exactly the challenges our lineage tool was designed to solve - regulatory mandates, modernization programs, and the 'black box' problem of legacy systems that no one can fully see into. We're excited to show them that contextual lineage is what actually moves the needle."

“Finovate demos are about showing, not telling, and Zengines’ contextual data lineage is something that I’m sure our audience is going to really appreciate seeing at FinovateSpring this May,” said Greg Palmer, VP and Host of Finovate. "The FI’s in our audience are wrestling with legacy infrastructure that's been accumulating complexity for decades. Zengines' ability to understand what's inside those systems before trying to modernize them or meet regulatory requirements is exactly the kind of solution that is likely to resonate with them.”

The Zengines Data Lineage tool is currently deployed at several Fortune 100 financial institutions, across codebases spanning hundreds of thousands of source modules and tens of millions of lines of code, where teams use it at enterprise scale to cut analysis that previously took months down to minutes.

FinovateSpring 2026 will feature RegTech, AI, data optimization, and risk management among its key themes - making it an ideal stage for Zengines to connect with the financial institutions and consulting partners seeking solutions to these exact priorities.

About Zengines

Zengines is an AI technology company helping financial institutions trace, map, change, and move their data to manage legacy systems, modernize, and meet regulatory compliance requirements. Our Mainframe Data Lineage solution goes beyond traditional lineage tools by delivering contextual intelligence - not just where data flows, but the business logic, calculation rules, and institutional knowledge embedded in decades of legacy code. Our Data Migration platform accelerates data conversion programs using AI, reducing time and risk across core conversions, system implementations, and new client onboarding. Zengines serves financial services firms and their technology and service provider partners - where the cost of getting data wrong is highest.

Learn more at zengines.ai

For Chief Risk Officers and Chief Actuaries at European insurers, Solvency II compliance has always demanded rigorous governance over how capital requirements get calculated. But as the framework evolves — with Directive 2025/2 now in force and Member States transposing amendments by January 2027 — the bar for data transparency is rising. And for carriers still running actuarial calculations, policy administration, or claims processing on legacy mainframe or AS/400 systems, meeting that bar gets harder every year.

Solvency II isn't just about holding enough capital. It's about proving you understand why your models produce the numbers they do — where the inputs originate, how they flow through your systems, and what business logic transforms them along the way. For insurers whose critical calculations still run on legacy languages like COBOL or RPG, that proof is becoming increasingly difficult to produce.

What Solvency II Actually Requires of Your Data

At its core, Solvency II's data governance requirements are deceptively simple. Article 82 of the Directive requires that data used for calculating technical provisions must be accurate, complete, and appropriate.

The Delegated Regulation (Articles 19-21 and 262-264) adds specificity around governance, internal controls, and modeling standards. EIOPA's guidelines go further, recommending that insurers implement structured data quality frameworks with regular monitoring, documented traceability, and clear management rules.

In practice, this means insurers need to demonstrate:

  • Data traceability: A clear, auditable path from source data through every transformation to the final regulatory output — whether that's a Solvency Capital Requirement calculation, a technical provision, or a Quantitative Reporting Template submission.
  • Calculation transparency: How does a policy record become a reserve estimate? What actuarial assumptions apply, and where do they come from?
  • Data quality governance: Structured frameworks with defined roles, KPIs, and continuous monitoring — not just point-in-time checks during reporting season.
  • Impact analysis capability: If an input changes, what downstream calculations and reports are affected?
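
The traceability and impact-analysis requirements above boil down to a directed graph over data elements. As a minimal sketch, assuming invented field names and a hand-built edge list (real lineage would be extracted from systems, not typed in), impact analysis is just a downstream traversal of that graph:

```python
# Toy sketch: field-level lineage as a directed graph, impact analysis as
# a downstream traversal. Field and report names are invented for illustration.
from collections import deque

# edge: source field -> fields derived from it
lineage = {
    "policy.premium":        ["tp.best_estimate"],
    "policy.term":           ["tp.best_estimate"],
    "tp.best_estimate":      ["scr.underwriting_risk", "qrt.S.02.01.R0520"],
    "scr.underwriting_risk": ["qrt.S.25.01.R0010"],
}

def downstream(field):
    """All intermediates and regulatory outputs affected if `field` changes."""
    seen, queue = set(), deque([field])
    while queue:
        for nxt in lineage.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

# Which technical provisions and QRT cells depend on a policy-level input?
print(downstream("policy.premium"))
```

The hard part in practice is not the traversal but populating the graph: for legacy estates, the edges are buried in code rather than in a metadata catalog.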

For modern cloud-based platforms with well-documented APIs and metadata catalogs, these requirements are manageable. But for the legacy mainframe or AS/400 systems that still process the majority of core insurance transactions at many European carriers, this level of transparency requires genuine investigation.

The Legacy System Problem That Keeps Getting Worse

Many large European insurers run core business logic on mainframe or AS/400 systems that have been evolving for 30, 40, even 50+ years. Policy administration, claims processing, actuarial calculations, reinsurance — the systems that generate the numbers feeding Solvency II models were often written in COBOL by engineers who retired decades ago.

The documentation hasn't kept pace. In many cases, it was never comprehensive to begin with. Business rules were encoded directly into procedural code, updated incrementally over the years, and rarely re-documented after changes. The result is millions of lines of code that effectively are the documentation — if you can read them.

This creates a compounding problem for Solvency II compliance:

When supervisors or internal audit ask how a specific reserve calculation works, or where a risk factor in your internal model originates, the answer too often requires someone to trace it through the code manually. That trace depends on a shrinking pool of specialists who understand legacy COBOL systems — specialists who are increasingly close to retirement across the European insurance industry.

Every year the knowledge gap widens. And every year, the regulatory expectations for data transparency increase.

The Regulatory Pressure Is Intensifying

The Solvency II framework isn't standing still. The amending Directive published in January 2025 introduces significant updates that amplify data governance demands:

  • Enhanced ORSA requirements now mandate analysis of macroeconomic scenarios and systemic risk conditions — requiring even more data inputs with clear provenance.
  • Expanded reporting obligations split the Solvency and Financial Condition Report into separate sections for policyholders and market professionals, each requiring precise, auditable data.
  • New audit requirements mandate that the balance sheet disclosed in the SFCR be subject to external audit — increasing scrutiny on the data chain underlying reported figures.
  • Climate risk integration requires insurers to assess and report on climate-related financial risks, adding new data dimensions that must be traceable through existing systems.

National supervisors across Europe — from the ACPR in France to BaFin in Germany to the PRA in the UK — are tightening their expectations in parallel. The ACPR, for instance, has been specifically increasing its focus on the quality of data used by Solvency II functions, requiring actuarial, risk management, and internal audit teams to demonstrate traceability and solid evidence.

And the consequences of falling short are becoming tangible. Pillar 2 capital add-ons, supervisory intervention, and in severe cases, questions about the suitability of responsible executives — these aren't theoretical outcomes. They're tools that European supervisors have demonstrated willingness to use.

The Supervisory Fire Drill

Every CRO at a European insurer knows the scenario: a supervisor asks a pointed question about how a specific technical provision was calculated, or requests that you trace a data element from source through to its appearance in a QRT submission. Your team scrambles. The mainframe or AS/400 specialists — already stretched thin — get pulled from other work. Days or weeks pass before the answer materializes.

These examinations are becoming more frequent and more granular. Supervisors aren't just asking for high-level descriptions of data flows. They want attribute-level traceability. They want to see the actual business logic that transforms raw policy data into the numbers in your regulatory reports.

For carriers whose critical processing runs through legacy mainframe or AS/400 systems, these requests expose a fundamental vulnerability: institutional knowledge that exists only in people's heads, supported by code that only a handful of specialists can interpret.

The question isn't whether your supervisor will ask. It's whether you'll be able to answer confidently when they do.

Extracting Lineage from Legacy Systems

The good news: you don't have to replace your entire core system to solve the transparency problem. AI-powered tools can now parse legacy codebases and extract the data lineage that's been locked inside for decades.

This means:

  • Automated tracing of how data flows through COBOL and RPG modules, job schedulers, and database operations — across thousands of programs, without needing to know where to look.
  • Calculation logic extraction that reveals the actual mathematical expressions and business rules governing how risk data gets transformed — not just that Field A maps to Field B, but what happens during that transformation.
  • Visual mapping of branching conditions and downstream dependencies, so compliance teams can answer supervisor questions in hours instead of weeks.
  • Preserved institutional knowledge that doesn't walk out the door when your legacy specialists retire — because the logic is documented in a searchable, auditable format.
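
To illustrate the idea behind automated tracing, here is a deliberately tiny sketch that turns COBOL-style MOVE and COMPUTE statements into field-level lineage edges. This is not how a production parser works — real tools must handle COPY books, REDEFINES, paragraphs, PERFORM flows, and far more — and the statements shown are invented:

```python
# Toy sketch: extracting lineage edges from COBOL-style statements.
# Invented example code; a real parser handles vastly more of the language.
import re

cobol = """
    MOVE POL-PREMIUM TO WS-PREMIUM.
    COMPUTE WS-RESERVE = WS-PREMIUM * RESERVE-FACTOR.
    MOVE WS-RESERVE TO RPT-TECH-PROVISION.
"""

edges = []  # (source_field, target_field)
for line in cobol.splitlines():
    m = re.search(r"MOVE\s+(\S+)\s+TO\s+([\w-]+)", line)
    if m:
        edges.append((m.group(1), m.group(2)))
        continue
    m = re.search(r"COMPUTE\s+([\w-]+)\s*=\s*(.+)", line)
    if m:
        target, expr = m.group(1), m.group(2)
        # every identifier in the expression feeds the computed target
        for src in re.findall(r"[A-Z][\w-]*", expr):
            edges.append((src, target))

for src, dst in edges:
    print(f"{src} -> {dst}")
```

Chained together, such edges show a supervisor the path from POL-PREMIUM to the reported technical provision — which is the attribute-level traceability the regime asks for, recovered from the code itself.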

The goal isn't to decommission your legacy systems overnight. It's to shine a light into the black box — so you can demonstrate the governance and control that Solvency II demands over systems that still run your most critical functions.

From Compliance Burden to Strategic Advantage

The European insurers who navigate Solvency II most smoothly aren't necessarily the ones with the newest technology. They're the ones who can clearly articulate how their risk management processes work — including the parts that run on infrastructure built before many of today's actuaries were born.

That clarity doesn't require a multi-year transformation program. It requires the ability to extract and document what your systems already do, in a format that satisfies both internal governance requirements and supervisory scrutiny.

For CROs, Chief Actuaries, and compliance leaders managing legacy technology estates, that capability is rapidly moving from nice-to-have to essential — especially as the 2027 transposition deadline for the amended Solvency II Directive approaches.

The carriers that invest in legacy system transparency now won't just be better prepared for their next supervisory review. They'll have a foundation for every modernization decision that follows — because you can't confidently change what you don't fully understand.

Zengines helps European insurers extract data lineage and calculation logic from legacy mainframe or AS/400 systems. Our AI-powered platform parses COBOL and RPG code and related infrastructure to deliver the transparency that Solvency II demands — without requiring a rip-and-replace modernization.
