In this episode of The Disruptor Podcast, host John Kundtz interviews Caitlyn Truong, CEO and Co-founder of Zengines.
This episode explores how Zengines is disrupting the end-to-end data conversion process by helping organizations automate it.
Caitlyn draws on her background in electrical and computer engineering and in consulting, where she repeatedly saw large organizations struggle with data conversion and migration, especially in financial services.
This led her to co-found Zengines to help ensure data stays useful during modernization.
Stay tuned for Part 2 of this conversation, where Caitlyn and John shift gears and explore the challenges many enterprises face with Mainframe Modernization.

There's a moment every software or services company knows well: the contract is signed, the deal is officially closed, and the customer is excited to get started. And somewhere in the background, a critical clock starts ticking.
Before that new customer can use your platform or services, their data has to be ingested, mapped, migrated and ready. Before your team can recognize that revenue, the customer has to be live.
That gap - between acquisition and activation - is where data migration lives. And for financial services ISVs (Independent Software Vendors), fund administrators, and BPOs (Business Process Outsourcers) managing complex client portfolios, it's also where deals get expensive, relationships start to fray, and revenue recognition gets delayed longer than anyone planned.
Understanding where data migration fits in the customer lifecycle isn't just an implementation detail. It needs to be part of your revenue strategy.
Not all customer onboarding is created equal. In financial services - whether you're a fund administrator onboarding a new institutional client, an ISV deploying a core banking or portfolio management platform, or a BPO taking on a new asset manager's operations -- the data arriving on day one is rarely simple.
Consider what a fund administrator typically ingests when a new client comes on board: historical position data across multiple asset classes, transactions spanning years, counterparty records, NAV history, fee structures, investor allocations, and often data exported from a prior administrator's system in formats that weren't designed for portability. Each element carries its own schema, its own quirks, and its own potential for discrepancy.
Layer on the operational context - multiple accounting bases, multiple base currencies, complex instrument types like securitized products, private equity, and alternatives -- and what looks like a single "data migration" becomes dozens of concurrent mapping challenges, each carrying downstream consequences if something is off.
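To make the profiling step concrete, here is a minimal sketch of the kind of column-level profiling that surfaces gaps and inconsistencies in an inbound extract. The file layout, field names, and metrics are invented for illustration; they are not the Zengines schema.

```python
import csv
import io

def profile(rows):
    """Summarize each column: fill rate and distinct-value count."""
    stats = {}
    for row in rows:
        for col, val in row.items():
            s = stats.setdefault(col, {"filled": 0, "total": 0, "values": set()})
            s["total"] += 1
            if val not in ("", None):
                s["filled"] += 1
                s["values"].add(val)
    return {col: {"fill_rate": s["filled"] / s["total"],
                  "distinct": len(s["values"])}
            for col, s in stats.items()}

# A toy extract standing in for a prior administrator's position file.
extract = io.StringIO(
    "account_id,asset_class,nav,ccy\n"
    "A-1,Equity,1000.50,USD\n"
    "A-2,PE,,USD\n"          # missing NAV: the kind of gap profiling surfaces
    "A-3,Equity,250.00,EUR\n"
)
report = profile(list(csv.DictReader(extract)))
print(report["nav"])  # fill rate 2/3: a discrepancy to resolve before mapping
```

Even this toy pass flags the missing NAV and the mixed currencies before any mapping begins, which is the point of profiling up front.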
In financial services, a data error isn't just a technical problem. It's a client trust problem. A calculation is wrong, an allocation doesn't reconcile, a NAV is misstated. The stakes make accuracy non-negotiable -- and that's exactly what makes speed and rigor so difficult to achieve simultaneously.
This is the environment in which ISVs and service managers are trying to compress onboarding timelines. The complexity isn't going away -- but the tools available to manage it have changed. See how AI-powered data conversion works end-to-end.
For SaaS and subscription-based software companies, the revenue model is simple on paper: recurring revenue starts when the customer is live. But the path to live runs directly through data migration.
Two things happen when that migration drags: costs climb, and the customer feels every delay. The average data migration involves dozens -- sometimes hundreds -- of hand-offs between source data, mapping logic, and target system requirements. Every hand-off is time. Every delay is cost. And every frustration belongs to your customer.
For organizations that onboard new clients repeatedly -- ISVs with subscription models, BPOs onboarding asset managers at scale, fund administrators adding new institutional mandates -- the compounding effect is significant. Slow migrations don't just affect one deal. They affect your team's capacity, your revenue forecast, and your reputation in a market where word travels fast.
The challenge isn't that organizations don't know data migration matters. It's that the process itself is inherently difficult -- especially in financial services, where complex, idiosyncratic source data and a heavy reliance on scarce specialist expertise compound each other.
The result is a process that's slow, error-prone, and difficult to scale.
AI-powered data migration tools change the fundamental economics of onboarding by automating the steps that typically consume the most time, improving mapping accuracy through iterative cycles, and bringing intelligence to the parts of the process that have historically required expensive expertise.
In a financial services context, this matters in specific, tangible ways.
Zengines customers report accelerating data migrations by up to 80%, with business analysts working 6x faster -- without needing to bring in expensive engineering resources at every step.
That speed has a direct revenue translation. Faster go-live means faster billing. Fewer iterations mean lower project cost. And a smooth, well-managed onboarding experience builds client confidence from day one -- which in financial services is not just a nice-to-have, it's the foundation of a long-term, profitable relationship.
Repeatability is where the economics of AI-powered migration compound. For organizations that onboard clients regularly -- fund admins adding new mandates, ISVs growing their subscriber base, BPOs managing a steady flow of transitions -- the platform's connected intelligence doesn't reset between engagements. Profiling templates carry forward. Mapping predictions sharpen. Transformation logic built for one client becomes the foundation for the next.
The result is a factory, not a one-time build. Every new client moves through the same connected stations -- the same profiling, the same mapping intelligence, the same transformation framework -- producing consistent, reliable output at a pace that scales with the business rather than against it.
For ISVs managing subscription revenue, this means a meaningful reduction in the cost of new client acquisition. For BPOs and managed service providers, it means higher margin on every engagement. For fund administrators competing on operational excellence, it means a demonstrably faster, more accurate onboarding experience -- one that becomes a differentiator when competing for mandates from institutional investors who have seen poor transitions before and are paying close attention.
Once data is live, a related challenge in financial services is proving it arrived correctly -- especially for regulated institutions. Post-migration reconciliation is the phase where confidence is either built or broken, and where regulatory obligations are met or missed.
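At its simplest, a reconciliation pass boils down to control totals plus row-level comparison. The record shapes and field names below are invented for illustration; real reconciliation in this domain would also cover multiple accounting bases and instrument-level detail.

```python
from decimal import Decimal  # exact arithmetic: floats are unsafe for money

def reconcile(source, target, key="account_id", amount="nav"):
    """Return the control-total difference and the keys that break."""
    src = {r[key]: Decimal(r[amount]) for r in source}
    tgt = {r[key]: Decimal(r[amount]) for r in target}
    breaks = sorted(k for k in src.keys() | tgt.keys()
                    if src.get(k) != tgt.get(k))
    return sum(src.values()) - sum(tgt.values()), breaks

source = [{"account_id": "A-1", "nav": "1000.50"},
          {"account_id": "A-2", "nav": "250.00"}]
target = [{"account_id": "A-1", "nav": "1000.50"},
          {"account_id": "A-2", "nav": "250.10"}]  # a 0.10 break to surface
diff, breaks = reconcile(source, target)
print(diff, breaks)  # -0.10 ['A-2']
```

The output is the artifact regulators and auditors actually want: not "the migration ran", but a named list of breaks and a quantified difference.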
Revenue recognition is ultimately about time to value. The faster a client is live, the faster they realize the benefit of your platform or service -- and the faster your revenue cycle closes. Data migration is one of the most controllable variables in that equation.
The organizations winning on this front aren't necessarily those with the cleanest client data. They're the ones who have invested in tools and processes that make migration predictable, scalable, and fast -- regardless of what the source data looks like when it arrives. In financial services, where client data is inherently complex and the margin for error is narrow, that investment pays dividends on every deal.
Whether you're an ISV accelerating client onboarding into a financial platform, a fund administrator managing recurring mandates, or a BPO building a repeatable data ingestion practice -- treating data migration as a strategic capability, not just an onboarding task, is the difference between a revenue model that scales and one that stalls.
See how Zengines accelerates data migration for financial services ISVs, fund administrators, and BPOs -- at every step of the client onboarding lifecycle. Schedule a demo to see it in action, or explore our resources library for more on AI-powered data conversion.

Boston, MA - March 4, 2026 - Zengines, an AI technology company specializing in data migration and mainframe and AS/400 data lineage, today announced it has been selected to demo live at FinovateSpring 2026, taking place May 5–7 in San Diego, California.
Finovate is one of the most prestigious fintech event series, drawing over 1,200 senior-level executives from banks, credit unions, and financial institutions - including nine of the top 10 U.S. banks. Demo slots are awarded through a competitive application and selection process, with only the most innovative and market-ready fintech companies earning a spot on stage.
Zengines will use its seven-minute live demo - Finovate's signature format - to showcase its Data Lineage product: an AI-powered research and visualization tool purpose-built for large financial institutions managing the complexity of “black box” systems.
What sets Zengines apart? Traditional lineage tools show you the map - at the surface level. Zengines gives you the map and the context behind it - built exclusively for the decades-old COBOL, RPG, and PL/1 systems no one fully understands anymore.
Conventional tools produce technically accurate data flow diagrams. They cannot tell you why a calculation exists, what business rule drives it, or what it means for your regulatory obligations. That context is buried in the code itself - and Zengines is built to surface it.
Two things define the Zengines platform: contextual data lineage that surfaces the business logic embedded in legacy code, and AI-powered data migration that accelerates conversion programs.
Together, these enable three outcomes financial institutions are struggling to achieve today: managing legacy systems, modernizing with confidence, and meeting regulatory compliance requirements.
"Being selected to demo at Finovate is a meaningful validation of what we've built," said Caitlyn Truong, CEO and Co-Founder of Zengines. "The financial institutions in that room are dealing with exactly the challenges our lineage tool was designed to solve - regulatory mandates, modernization programs, and the 'black box' problem of legacy systems that no one can fully see into. We're excited to show them that contextual lineage is what actually moves the needle."
“Finovate demos are about showing, not telling, and Zengines’ contextual data lineage is something that I’m sure our audience is going to really appreciate seeing at FinovateSpring this May,” said Greg Palmer, VP and Host of Finovate. "The FIs in our audience are wrestling with legacy infrastructure that's been accumulating complexity for decades. Zengines' ability to understand what's inside those systems before trying to modernize them or meet regulatory requirements is exactly the kind of solution that is likely to resonate with them.”
The Zengines Data Lineage tool is currently deployed at several Fortune 100 financial institutions, across codebases spanning hundreds of thousands of source modules and tens of millions of lines of code, where teams use it at enterprise scale to compress analysis that previously took months into minutes.
FinovateSpring 2026 will feature RegTech, AI, data optimization, and risk management among its key themes - making it an ideal stage for Zengines to connect with the financial institutions and consulting partners seeking solutions for exactly these priorities.
Zengines is an AI technology company helping financial institutions trace, map, change, and move their data to manage legacy systems, modernize, and meet regulatory compliance requirements. Our Mainframe Data Lineage solution goes beyond traditional lineage tools by delivering contextual intelligence - not just where data flows, but the business logic, calculation rules, and institutional knowledge embedded in decades of legacy code. Our Data Migration platform accelerates data conversion programs using AI, reducing time and risk across core conversions, system implementations, and new client onboarding. Zengines serves financial services firms and their technology and service provider partners - where the cost of getting data wrong is highest.
Learn more at zengines.ai

For Chief Risk Officers and Chief Actuaries at European insurers, Solvency II compliance has always demanded rigorous governance over how capital requirements get calculated. But as the framework evolves — with Directive 2025/2 now in force and Member States transposing amendments by January 2027 — the bar for data transparency is rising. And for carriers still running actuarial calculations, policy administration, or claims processing on legacy mainframes or AS/400 systems, meeting that bar gets harder every year.
Solvency II isn't just about holding enough capital. It's about proving you understand why your models produce the numbers they do — where the inputs originate, how they flow through your systems, and what business logic transforms them along the way. For insurers whose critical calculations still run on legacy languages like COBOL or RPG, that proof is becoming increasingly difficult to produce.
At its core, Solvency II's data governance requirements are deceptively simple. Article 82 of the Directive requires that data used for calculating technical provisions must be accurate, complete, and appropriate.
The Delegated Regulation (Articles 19-21 and 262-264) adds specificity around governance, internal controls, and modeling standards. EIOPA's guidelines go further, recommending that insurers implement structured data quality frameworks with regular monitoring, documented traceability, and clear management rules.
In practice, this means insurers need to demonstrate where their data originates, how it flows through their systems, and what business logic transforms it along the way.
For modern cloud-based platforms with well-documented APIs and metadata catalogs, these requirements are manageable. But for the legacy mainframe or AS/400 systems that still process the majority of core insurance transactions at many European carriers, this level of transparency requires genuine investigation.
Many large European insurers run core business logic on mainframe or AS/400 systems that have been evolving for 30, 40, even 50+ years. Policy administration, claims processing, actuarial calculations, reinsurance — the systems that generate the numbers feeding Solvency II models were often written in COBOL by engineers who retired decades ago.
The documentation hasn't kept pace. In many cases, it was never comprehensive to begin with. Business rules were encoded directly into procedural code, updated incrementally over the years, and rarely re-documented after changes. The result is millions of lines of code that effectively are the documentation — if you can read them.
This creates a compounding problem for Solvency II compliance.
When supervisors or internal audit ask how a specific reserve calculation works, or where a risk factor in your internal model originates, the answer too often requires someone to trace it through the code manually. That trace depends on a shrinking pool of specialists who understand legacy COBOL systems — specialists who are increasingly close to retirement across the European insurance industry.
Every year the knowledge gap widens. And every year, the regulatory expectations for data transparency increase.
The Solvency II framework isn't standing still. The amending Directive published in January 2025 introduces significant updates that amplify data governance demands.
National supervisors across Europe — from the ACPR in France to BaFin in Germany to the PRA in the UK — are tightening their expectations in parallel. The ACPR, for instance, has been specifically increasing its focus on the quality of data used by Solvency II functions, requiring actuarial, risk management, and internal audit teams to demonstrate traceability and solid evidence.
And the consequences of falling short are becoming tangible. Pillar 2 capital add-ons, supervisory intervention, and in severe cases, questions about the suitability of responsible executives — these aren't theoretical outcomes. They're tools that European supervisors have demonstrated willingness to use.
Every CRO at a European insurer knows the scenario: a supervisor asks a pointed question about how a specific technical provision was calculated, or requests that you trace a data element from source through to its appearance in a QRT submission. Your team scrambles. The mainframe or AS/400 specialists — already stretched thin — get pulled from other work. Days or weeks pass before the answer materializes.
These examinations are becoming more frequent and more granular. Supervisors aren't just asking for high-level descriptions of data flows. They want attribute-level traceability. They want to see the actual business logic that transforms raw policy data into the numbers in your regulatory reports.
For carriers whose critical processing runs on legacy mainframes or AS/400 systems, these requests expose a fundamental vulnerability: institutional knowledge that exists only in people's heads, supported by code that only a handful of specialists can interpret.
The question isn't whether your supervisor will ask. It's whether you'll be able to answer confidently when they do.
The good news: you don't have to replace your entire core system to solve the transparency problem. AI-powered tools can now parse legacy codebases and extract the data lineage that's been locked inside for decades.
This means the attribute-level traceability supervisors ask for can be produced from the code itself, with documented data flows and surfaced calculation logic, rather than reconstructed by hand by a shrinking pool of specialists.
The goal isn't to decommission your legacy systems overnight. It's to shine a light into the black box — so you can demonstrate the governance and control that Solvency II demands over systems that still run your most critical functions.
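As a toy illustration of the idea (not Zengines' actual parser), even a few lines of pattern matching show what a lineage "edge" looks like when extracted from COBOL MOVE and COMPUTE statements. The program fragment and field names below are invented; a real tool must handle the full COBOL grammar, copybooks, and call graphs.

```python
import re

# Invented fragment: premium feeds a reserve, which feeds a reported figure.
COBOL = """
    MOVE POL-PREMIUM TO WS-PREMIUM.
    COMPUTE WS-RESERVE = WS-PREMIUM * RESERVE-FACTOR.
    MOVE WS-RESERVE TO RPT-TECH-PROVISION.
"""

edges = []
# MOVE src TO dst  ->  one data-flow edge.
for src, dst in re.findall(r"MOVE\s+(\S+)\s+TO\s+(\S+?)\.", COBOL):
    edges.append((src, dst))
# COMPUTE dst = expr  ->  an edge from every field referenced in expr.
for dst, expr in re.findall(r"COMPUTE\s+(\S+)\s*=\s*([^.]+)\.", COBOL):
    for src in re.findall(r"[A-Z][A-Z0-9-]+", expr):
        edges.append((src, dst))

print(edges)
```

Chaining these edges answers the supervisor's question directly: RPT-TECH-PROVISION traces back through WS-RESERVE to POL-PREMIUM and RESERVE-FACTOR, with the multiplication as the business rule in between.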
The European insurers who navigate Solvency II most smoothly aren't necessarily the ones with the newest technology. They're the ones who can clearly articulate how their risk management processes work — including the parts that run on infrastructure built before many of today's actuaries were born.
That clarity doesn't require a multi-year transformation program. It requires the ability to extract and document what your systems already do, in a format that satisfies both internal governance requirements and supervisory scrutiny.
For CROs, Chief Actuaries, and compliance leaders managing legacy technology estates, that capability is rapidly moving from nice-to-have to essential — especially as the 2027 transposition deadline for the amended Solvency II Directive approaches.
The carriers that invest in legacy system transparency now won't just be better prepared for their next supervisory review. They'll have a foundation for every modernization decision that follows — because you can't confidently change what you don't fully understand.
Zengines helps European insurers extract data lineage and calculation logic from legacy mainframe or AS/400 systems. Our AI-powered platform parses COBOL and RPG code and related infrastructure to deliver the transparency that Solvency II demands — without requiring a rip-and-replace modernization.