BOSTON, MA – October 29, 2025 – Zengines, the AI-powered data migration and data lineage platform, announces expanded support for the RPG (Report Program Generator) language in its Data Lineage product. Organizations running IBM i (AS/400) systems can now rapidly analyze legacy RPG code alongside COBOL, dramatically accelerating modernization initiatives while reducing dependency on scarce programming expertise.
Many enterprises still rely on mission-critical applications written in RPG decades ago, creating what Zengines calls the "black box" problem: systems whose business logic, data flows, and requirements are locked away in code with little to no documentation. As companies undertake digital transformation and cloud migration initiatives, understanding these legacy systems has become a critical bottleneck.
The challenge with RPG is particularly acute. While COBOL's descriptive, English-like syntax makes it easier to "read," RPG's fixed-format column specifications and cryptic operation codes require developers to decode what goes in which column while tracing through numbered indicators to follow the logic. This complexity, combined with a shrinking pool of RPG expertise, makes understanding these systems even more critical—and difficult—than their COBOL counterparts.
"The majority of our enterprise customers are running legacy technology across multiple platforms – both mainframe COBOL environments and IBM i systems with RPG code," said Caitlyn Truong, CEO of Zengines. "By expanding our support to include RPG alongside COBOL, we can now address the full spectrum of legacy code challenges these organizations face. This means our customers can leverage a single AI-powered platform to comprehensively analyze, understand and modernize their legacy technology estate, rather than cobbling together multiple point solutions or relying on increasingly scarce programming expertise across different languages and systems."
The enhanced Zengines Data Lineage platform automatically ingests RPG code, job schedulers, and related artifacts to deliver:
This capability is critical for organizations navigating system replacements, M&A integrations, compliance initiatives, and technology modernization programs where understanding legacy RPG logic is essential for de-risking implementations.
Efforts to manage and modernize legacy systems break down when teams lack a complete understanding of the existing logic. Migrations stall when teams cannot achieve functional coverage or resolve test failures. When validating new systems against legacy outputs, discrepancies inevitably emerge – but without understanding why the old system produces specific results, teams cannot effectively test, replicate, or improve functionality.
"Our customers use Zengines to reverse-engineer business requirements from legacy code," added Truong. "When a new system returns a different result for an interest calculation compared to that of the 40-year-old RPG program, teams need to understand the original logic to make informed decisions about what to preserve and what to update. That's the power of shining a light into the black box."
RPG parsing is now available on the Zengines Data Lineage platform. Organizations can analyze both COBOL and RPG codebases within a single integrated platform.
Zengines is a technology company that transforms how organizations handle data migrations and modernization initiatives. Zengines serves business analysts, developers, and transformation leaders who need to map, change, and move data across systems. With deep expertise in AI, data migration, and legacy systems, Zengines helps organizations reduce time, cost, and risk associated with their most challenging data initiatives.
Media Contact:
Todd Stone
President, Zengines
todd@zengines.ai

Every data migration has a moment of truth — when stakeholders ask, "Is everything actually correct in the new system?" Most teams don’t have the tools they need to answer that question.
Data migrations consume enormous time and budget. But for many organizations, the hardest part isn't moving the data — it's proving it arrived correctly. Post-migration reconciliation is the phase where confidence is either built or broken, where regulatory obligations are met or missed, and where the difference between a successful go-live and a costly rollback becomes clear.
For enterprises in financial services — and the consulting firms guiding them through modernization — reconciliation isn't optional. It's the entire point.
Most migration programs follow a familiar arc: assess the source data, map it to the target schema, transform it to meet the new system's requirements, load it, and validate. On paper, it's linear. In practice, the validation step is where many programs stall.
Here's why. Reconciliation requires you to answer a deceptively simple question: Does the data in the new system accurately represent what existed in the old one — and does it behave the same way?
That question has layers. At the surface level, it's a record count exercise — did all 2.3 million accounts make it across? But beneath that, reconciliation means confirming that values transformed correctly, that business logic was preserved, that calculated fields produce the same results, and that no data was silently dropped or corrupted in transit.
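As a rough sketch of what those first layers look like in practice, the following Python snippet (illustrative only, with invented table and column names) compares record counts, dropped keys, and field-level values between a source extract and a target extract:

```python
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame, key: str, fields: list[str]) -> dict:
    """Surface-level reconciliation: record counts, dropped keys, and field-level mismatches."""
    merged = source.merge(target, on=key, suffixes=("_src", "_tgt"), how="inner")
    return {
        "source_rows": len(source),
        "target_rows": len(target),
        "keys_missing_in_target": sorted(set(source[key]) - set(target[key])),
        "field_mismatches": {
            f: merged.loc[merged[f"{f}_src"] != merged[f"{f}_tgt"], key].tolist()
            for f in fields
            if (merged[f"{f}_src"] != merged[f"{f}_tgt"]).any()
        },
    }

# Hypothetical extracts from the legacy and target systems
legacy = pd.DataFrame({"account_id": [1, 2, 3], "balance": [100.0, 250.5, 80.0]})
migrated = pd.DataFrame({"account_id": [1, 2], "balance": [100.0, 251.0]})
print(reconcile(legacy, migrated, key="account_id", fields=["balance"]))
```

Even this toy check surfaces both failure modes described above: a record that never arrived and a value that changed in transit.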
For organizations subject to regulatory frameworks like BCBS 239, CDD, or CIP, reconciliation also means demonstrating an auditable trail. Regulators don't just want to know that data moved — they want evidence that you understood what moved, why it changed, and that you can trace any value back to its origin.
Three factors make post-migration reconciliation consistently harder than teams anticipate.
The most effective migration programs don't treat reconciliation as a phase that happens at the end. They build verifiability into every step — so that by the time data lands in the new system, the evidence trail already exists.
This requires two complementary capabilities: intelligent migration tooling that tracks every mapping and transformation decision, and deep lineage analysis that surfaces the logic embedded in legacy systems so you actually know what "correct" looks like.
The mapping and transformation phase of any migration is where most reconciliation problems originate. When a business analyst maps a source field to a target field, applies a transformation rule, and moves on, that decision needs to be recorded — not buried in a spreadsheet that gets versioned twelve times.
AI-powered migration tooling can accelerate this phase significantly. Rather than manually comparing schemas side by side, pattern recognition algorithms can predict field mappings based on metadata, data types, and sample values, then surface confidence scores so analysts can prioritize validation effort where it matters most. Transformation rules — whether written manually or generated through natural language prompts — are applied consistently and logged systematically.
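As a simplified illustration of confidence-scored mapping suggestions (not Zengines' actual algorithm), a sketch like the following scores candidate source-to-target field pairs by name similarity and type compatibility:

```python
from difflib import SequenceMatcher

def suggest_mappings(source_fields: dict[str, str], target_fields: dict[str, str]) -> list[tuple]:
    """For each target field, pick the best-scoring source field and attach a rough confidence."""
    suggestions = []
    for tgt_name, tgt_type in target_fields.items():
        best = None
        for src_name, src_type in source_fields.items():
            name_score = SequenceMatcher(None, src_name.lower(), tgt_name.lower()).ratio()
            type_score = 1.0 if src_type == tgt_type else 0.5
            confidence = round(0.8 * name_score + 0.2 * type_score, 2)
            if best is None or confidence > best[2]:
                best = (src_name, tgt_name, confidence)
        suggestions.append(best)
    # Lowest-confidence suggestions last, so analysts can focus review effort on them
    return sorted(suggestions, key=lambda s: s[2], reverse=True)

# Hypothetical schemas
source = {"CUST_NM": "string", "ACCT_BAL": "decimal", "OPEN_DT": "date"}
target = {"customer_name": "string", "account_balance": "decimal", "open_date": "date"}
for src, tgt, conf in suggest_mappings(source, target):
    print(f"{src} -> {tgt}  (confidence {conf})")
```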
The result is that when a stakeholder later asks, "Why does this field look different in the new system?" — the answer is traceable. You can point to the specific mapping decision, the transformation rule that was applied, and the sample data that validated the match. That traceability is foundational to reconciliation.
Reconciliation gets exponentially harder when the source system is a mainframe running COBOL code that was last documented in the 1990s. When the new system produces a different calculation result than the old one, someone has to determine whether that's a migration error or simply a difference in business logic between the two platforms.
This is where mainframe data lineage becomes critical. By parsing COBOL modules, job control language, SQL, and associated files, lineage analysis can surface the calculation logic, branching conditions, data paths, and field-level relationships that define how the legacy system actually works — not how anyone thinks it works.
Consider a practical example: after migrating to a modern cloud platform, a reconciliation check reveals that an interest accrual calculation in the new system produces a different result than the legacy mainframe. Without lineage, the investigation could take weeks. An analyst would need to manually trace the variable through potentially thousands of lines of COBOL code, across multiple modules, identifying every branch condition and upstream dependency.
With lineage analysis, that same analyst can search for the variable, see its complete data path, understand the calculation logic and conditional branches that affect it, and determine whether the discrepancy stems from a migration error or a legitimate difference in how the two systems compute the value. What took weeks now takes hours — and the finding is documented, not locked in someone's head.
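Conceptually, once the legacy code has been parsed into a field-level dependency graph, that investigation becomes a graph traversal. The sketch below is illustrative only, with invented module and field names, and walks the upstream dependencies of a variable:

```python
# Hypothetical field-level dependency edges extracted from parsed COBOL:
# each entry reads "target field <- source fields (module where the assignment happens)"
EDGES = {
    "ACCRUED-INT": [("DAILY-RATE", "INTCALC"), ("PRINCIPAL-BAL", "INTCALC")],
    "DAILY-RATE": [("BASE-RATE", "RATELOOK"), ("RATE-TIER", "RATELOOK")],
    "PRINCIPAL-BAL": [("POSTED-TXNS", "POSTBAL")],
}

def trace_upstream(field: str, depth: int = 0, seen: set | None = None) -> None:
    """Print every field (and the module it is set in) that feeds into `field`."""
    seen = seen or set()
    for src, module in EDGES.get(field, []):
        print("  " * depth + f"{field} <- {src}  [{module}]")
        if src not in seen:
            seen.add(src)
            trace_upstream(src, depth + 1, seen)

# "Where does the accrued interest value actually come from?"
trace_upstream("ACCRUED-INT")
```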
The real power of combining intelligent migration with legacy lineage is that reconciliation becomes a structured, evidence-based process rather than an ad hoc investigation.
When you can trace a value from its origin in a COBOL module, through the transformation rules applied during migration, to its final state in the target system — you have end-to-end data provenance. For regulated financial institutions, that provenance is exactly what auditors and compliance teams need. For consulting firms delivering these programs, it's the difference between a defensible methodology and a best-effort exercise.
For Tier 1 consulting firms and systems integrators delivering modernization programs, post-migration reconciliation is often where project timelines stretch and client confidence erodes. The migration itself may seem to go smoothly, but then weeks of reconciliation cycles — investigating discrepancies, tracing values back through legacy systems, re-running transformation logic — consume budget and test relationships.
Tooling that accelerates both sides of this equation changes the engagement model. Migration mapping and transformation that would have taken a team months can be completed by a smaller team in weeks. Lineage analysis that would have required dedicated mainframe SMEs for months of manual code review becomes an interactive research exercise. And the reconciliation evidence is built into the process, not assembled after the fact.
This translates directly to engagement economics: faster delivery, reduced SME dependency, lower risk of costly rework, and a more compelling value proposition when scoping modernization programs.
Whether you're leading a migration internally or advising a client through one, these principles will strengthen your reconciliation outcomes.
Zengines combines AI-powered data migration with mainframe data lineage to help enterprises and consulting firms move data faster, with full traceability from source to target. Whether you're migrating off a legacy mainframe or onboarding data into a new platform, Zengines de-risks the process — including the critical reconciliation phase that proves your new system got it right.

Mainframes aren't going anywhere overnight. Despite the industry's push toward cloud migration and modernization, the reality is that many financial institutions still rely on mainframe systems to process millions of daily transactions, calculate interest accruals, manage account records, and run core business operations. And they will for years to come.
Modernization is the eventual reality for every organization still running on mainframe. But "eventual" is doing a lot of heavy lifting in that sentence. For many financial institutions, a full modernization effort is on the roadmap but years away — dependent on budget cycles, vendor timelines, regulatory considerations, and a hundred other competing priorities. In the meantime, these systems still need to be maintained — and that's where things get increasingly risky.
When a business requirement changes — say, a new regulation requires a different calculation methodology, or a product team needs to update how accrued interest is computed — someone has to go into the mainframe and update the code. Sounds straightforward enough. Except it's not.
Mainframe COBOL codebases are often decades old. They've been written, rewritten, and patched by generations of engineers, many of whom have long since left the organization. A single mainframe environment can contain tens of thousands of COBOL modules, each with hundreds or thousands of lines of code. Variables branch across modules. Tables are read and updated in ways that aren't always documented. Conditional logic sends data down different paths depending on record types, dates, or account classifications that may have made perfect sense in 1998 but aren't intuitive to anyone working today.
Before a mainframe engineer can write a single new line of code, they need to answer a deceptively simple question: What will this change affect?
And answering that question — tracing a variable backward through modules, understanding which tables get updated, identifying upstream and downstream dependencies — can take weeks or even months of manual investigation. One engineer we've worked with estimated that investigating the impact of a change takes substantially longer than actually making the change.
The term "black box" gets used a lot in mainframe conversations, and for good reason. The challenge isn't that the code doesn't work — it usually works remarkably well. The challenge is that nobody fully understands how and why it works the way it does.
Consider what a typical investigation looks like without modern tooling. An engineer receives a request from the business: "We need to update how we calculate X." To comply, that engineer has to:
Now multiply that by the reality that a single environment might contain 50,000, 500,000, or even 5,000,000 modules. It's not hard to see why organizations describe their mainframe as a black box — and why changes feel so high-stakes.
The fear isn't hypothetical. When an engineer updates a module without fully understanding the dependencies, the consequences can ripple across systems. A calculation that looked isolated might feed into downstream reporting. A field that seemed unused might actually be read by another module under specific conditions. A change to one branch of conditional logic might alter outputs for an account type that wasn't part of the original requirement.
These kinds of unintended consequences don't always surface immediately. Sometimes they show up in reconciliation discrepancies weeks later. Sometimes a client calls and says, "My statement looks different this month." By that point, the investigation to find the root cause is just as painful as the original change — if not more so.
This is why many mainframe teams default to a conservative posture. They move slowly, pad timelines, and layer in extensive manual review. Not because they aren't skilled, but because the risk of getting it wrong is too high and the tools available to them haven't evolved with the complexity of the systems they manage.
This is where mainframe data lineage changes the equation. Rather than manually tracing code paths and building dependency maps from scratch every time a change is requested, data lineage technology can parse COBOL modules at scale and generate a comprehensive, searchable view of how data flows through the system.
With data lineage in place, that same engineer who used to spend months investigating a change can now:
Instead of navigating thousands of lines of raw COBOL to answer a single question, the engineer gets a curated, structured view of exactly the information they need. The investigation that used to take months can happen in minutes.
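The flip side of the root-cause trace sketched earlier is impact analysis: answering "what will this change affect?" by walking the same dependency graph downstream. A minimal illustration, again with invented field and module names:

```python
# Hypothetical "field is read by" edges derived from the same parsed codebase:
# key = field, value = list of (dependent field, module that performs the read/write)
READ_BY = {
    "RATE-TIER": [("DAILY-RATE", "RATELOOK")],
    "DAILY-RATE": [("ACCRUED-INT", "INTCALC")],
    "ACCRUED-INT": [("STMT-INT-LINE", "STMTGEN"), ("GL-INT-POSTING", "GLPOST")],
}

def impact_of_change(field: str) -> set[tuple[str, str]]:
    """Return every downstream field (and the module involved) affected by changing `field`."""
    affected, stack = set(), [field]
    while stack:
        current = stack.pop()
        for dependent, module in READ_BY.get(current, []):
            if (dependent, module) not in affected:
                affected.add((dependent, module))
                stack.append(dependent)
    return affected

# "What will changing RATE-TIER affect?"
for dependent, module in sorted(impact_of_change("RATE-TIER")):
    print(f"{dependent}  [{module}]")
```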

Much of the conversation around mainframe data lineage focuses on migration and modernization. And yes, lineage is critical for those efforts — but the value starts long before modernization kicks off.
Every time a business requirement changes, every time a regulation is updated, every time an engineer needs to write or modify code — they're navigating the same black box. Data lineage doesn't just prepare you for the future. It makes your mainframe safer and more manageable right now, during the months or years between today and the day you're ready to modernize.
For mainframe teams, it means less time investigating and more time executing. For risk and compliance leaders, it means greater confidence that changes won't introduce unintended consequences. For the business, it means faster turnaround on change requests without increasing operational risk.
Here's the other advantage of investing in data lineage now: when your organization is ready to modernize, you won't be starting from scratch.
Modernization isn't just about moving everything from the old system to the new one. It requires making deliberate decisions about what to bring forward and what to leave behind. Which business rules are still relevant? Which calculations need to be replicated exactly, and which should be redesigned? Which data paths reflect current requirements, and which are artifacts of decisions made decades ago?
Without lineage, those questions send teams back into the same manual investigation cycle — except now they're doing it across tens of thousands of modules under the pressure of a migration timeline. With lineage already in place, your team walks into modernization with a comprehensive understanding of how the current system works, what it does, and why.
And the value doesn't stop at cutover. Post-migration, lineage gives you a baseline for reconciliation. When the new system produces a different output than the old one — and it will — lineage helps you trace back to the original logic and understand why the results differ. Was it an intentional change? A missed business rule? A calculation that was carried over incorrectly? Instead of guessing, your team can pinpoint the source of the discrepancy and resolve it with confidence.
Organizations that rely on mainframes aren't behind — they're running proven, reliable infrastructure that processes critical transactions every day. The challenge has never been the mainframe itself. It's that the tools and processes for understanding what's inside it haven't kept pace with the complexity of the systems or the speed at which the business needs to evolve.
Data lineage closes that gap. Whether modernization is two years away or five, understanding what's inside the black box isn't something you can afford to wait on. Your teams need that visibility today to manage changes safely — and they'll need it even more when the time comes to move forward.
Zengines' Mainframe Data Lineage solution parses COBOL code at scale to give your team searchable, visual access to the data paths, calculation logic, dependencies, and business rules embedded in your mainframe.

For Chief Risk Officers and Chief Compliance Officers at insurance carriers, ORSA season brings a familiar tension: demonstrating that your organization truly understands its risk exposure -- while knowing that critical calculations still run through systems nobody fully understands anymore.
The Own Risk and Solvency Assessment (ORSA) isn't just paperwork. It's a commitment to regulators that you can trace how capital adequacy gets calculated, where stress test assumptions originate, and why your models produce the outputs they do. For carriers still running policy administration, actuarial calculations, or claims processing on legacy mainframes, that commitment gets harder to keep every year.
Most large insurers have mainframe systems that have been running -- and evolving -- for 30, 40, even 50+ years. The original architects retired decades ago. The business logic is encoded in millions of lines of COBOL across thousands of modules. And the documentation? It hasn’t been updated in years.
This creates a specific problem for ORSA compliance: when regulators ask how a particular reserve calculation works, or where a risk factor originates, the honest answer is often "we'd need to trace it through the code."
That trace can take weeks. Sometimes months. And even then, you're relying on the handful of mainframe specialists who can actually read the logic -- specialists who are increasingly close to retirement themselves.
ORSA requires carriers to demonstrate effective risk management governance. In practice, that means showing:
For modern cloud-based systems, this is straightforward. Metadata catalogs, audit logs, and documentation are built in. But for mainframe systems -- where the business logic is the documentation, buried in procedural code -- this level of transparency requires actual investigation.
Every CRO knows the scenario: an examiner asks a pointed question about a specific calculation. Your team scrambles to trace it back through the systems. The mainframe team pulls in their most senior developer (who was already over-allocated with other work). Days pass. The answer finally emerges -- but the process exposed just how fragile your institutional knowledge has become.
These fire drills are getting more frequent, not less. Regulators have become more sophisticated about data governance expectations. And the talent pool that understands legacy COBOL systems shrinks every year.
The question isn't whether you'll face this challenge. It's whether you'll face it reactively -- during an exam -- or proactively, on your own timeline.
The good news: you don't have to modernize your entire core system to solve the documentation problem. New AI-powered tools can parse legacy codebases and extract the data lineage that's been locked inside for decades.
This means:
The goal isn't to replace your legacy systems overnight. It's to shine a light into the black box -- so you can demonstrate governance and control over systems that still run critical functions.
The carriers who navigate ORSA most smoothly aren't the ones with the newest technology. They're the ones who can clearly articulate how their risk management processes work -- including the parts that run on 40-year-old infrastructure.
That clarity doesn't require a multi-year modernization program. It requires the ability to extract and visualize what your systems already do, in a format that satisfies both internal governance requirements and external regulatory scrutiny.
For CROs and CCOs managing legacy technology estates, that capability is becoming less of a nice-to-have and more of a prerequisite for confident compliance.
Zengines helps insurance carriers extract data lineage and governance controls from legacy mainframe systems. Our AI-powered platform parses COBOL code and related infrastructure to deliver the transparency regulators expect -- without requiring a rip-and-replace modernization.