Gregory Jenelos is a Customer Success Manager at Zengines, where he guides clients through successful platform adoption and ensures they realize maximum value from their data migration and conversion initiatives. With over 15 years of experience in data conversion, analytics, and IT consulting, he brings deep technical expertise and a proven track record of managing complex data transformation projects.
Prior to joining Zengines, Gregory served as a Conversion Manager and Data Architect at Sapiens, where he led critical data migration projects for insurance and financial services clients. Before that, he spent over seven years as a Senior IT Consultant at Ohio BWC, specializing in Oracle database management, data analytics, and financial analysis for workers' compensation systems. His extensive consulting experience also includes roles at CGI, Chase, and BrickStreet Insurance, where he consistently delivered data conversion and database solutions for large-scale enterprise systems.
Gregory's comprehensive background in Oracle databases, data warehousing, PL/SQL development, and financial analysis makes him uniquely qualified to help Zengines clients navigate their most challenging data migration scenarios. His hands-on experience positions him as a trusted advisor to organizations undergoing system modernization and data transformation initiatives.


Every data migration has a moment of truth — when stakeholders ask, "Is everything actually correct in the new system?" Most teams don’t have the tools they need to answer that question.
Data migrations consume enormous time and budget. But for many organizations, the hardest part isn't moving the data — it's proving it arrived correctly. Post-migration reconciliation is the phase where confidence is either built or broken, where regulatory obligations are met or missed, and where the difference between a successful go-live and a costly rollback becomes clear.
For enterprises in financial services — and the consulting firms guiding them through modernization — reconciliation isn't optional. The goal of any modernization, vendor change, or M&A integration is value realization — and reconciliation is the bookend that proves the change worked, giving stakeholders and regulators the confidence to move forward.
Most migration programs follow a familiar arc: assess the source data, map it to the target schema, transform it to meet the new system's requirements, load it, and validate. On paper, it's linear. In practice, the validation step is where many programs stall.
Here's why. Reconciliation requires you to answer a deceptively simple question: Does the data in the new system accurately represent what existed in the old one — and does it behave the same way?
That question has layers. At the surface level, it's a record count exercise — did all 2.3 million accounts make it across? But beneath that, reconciliation means confirming that values transformed correctly, that business logic was preserved, that calculated fields produce the same results, and that no data was silently dropped or corrupted in transit.
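Those layers can be sketched as a reconciliation check that goes beyond record counts. The sketch below is illustrative only — the record shapes and field names are assumptions, and a real reconciliation would also handle type coercions, rounding tolerances, and intentional transformations:

```python
def reconcile(source, target, key):
    """Compare two record sets (lists of dicts) at increasing levels of rigor."""
    src = {row[key]: row for row in source}
    tgt = {row[key]: row for row in target}
    report = {
        # Level 1: did every record make it across?
        "count_match": len(source) == len(target),
        # Level 2: which keys were silently dropped, or appeared from nowhere?
        "missing_keys": sorted(src.keys() - tgt.keys()),
        "extra_keys": sorted(tgt.keys() - src.keys()),
        # Level 3: for shared keys, do the values themselves agree?
        "value_diffs": {},
    }
    for k in src.keys() & tgt.keys():
        for field, value in src[k].items():
            if field != key and tgt[k].get(field) != value:
                report["value_diffs"].setdefault(field, []).append(k)
    return report
```

Counts matching while `value_diffs` is non-empty is exactly the "silently corrupted in transit" case the count-only exercise misses.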
For organizations subject to regulatory frameworks like BCBS 239, CDD, or CIP, reconciliation also means demonstrating an auditable trail. Regulators don't just want to know that data moved — they want evidence that you understood what moved, why it changed, and that you can trace any value back to its origin.
Three factors make post-migration reconciliation consistently harder than teams anticipate.
The most effective migration programs don't treat reconciliation as a phase that happens at the end. They build verifiability into every step — so that by the time data lands in the new system, the evidence trail already exists.
This requires two complementary capabilities: intelligent migration tooling that tracks every mapping and transformation decision, and deep lineage analysis that surfaces the logic embedded in legacy systems so you actually know what "correct" looks like.
The mapping and transformation phase of any migration is where most reconciliation problems originate. When a business analyst maps a source field to a target field, applies a transformation rule, and moves on, that decision needs to be recorded — not buried in a spreadsheet that gets versioned twelve times.
AI-powered migration tooling can accelerate this phase significantly. Rather than manually comparing schemas side by side, pattern recognition algorithms can predict field mappings based on metadata, data types, and sample values, then surface confidence scores so analysts can prioritize validation effort where it matters most. Transformation rules — whether written manually or generated through natural language prompts — are applied consistently and logged systematically.
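As a rough illustration of the idea, the sketch below scores candidate field mappings by name similarity plus a bonus for matching data types, and returns a confidence score per suggestion. The schemas are invented, and a real platform would also weigh sample values and richer metadata:

```python
from difflib import SequenceMatcher

def suggest_mappings(source_fields, target_fields):
    """For each target field, pick the best source candidate with a confidence score.

    source_fields / target_fields: dicts of {field_name: data_type}.
    Confidence blends name similarity with a small bonus for matching types —
    a stand-in for the richer signals a real mapping engine would use.
    """
    suggestions = {}
    for tgt, tgt_type in target_fields.items():
        scored = []
        for src, src_type in source_fields.items():
            name_sim = SequenceMatcher(None, src.lower(), tgt.lower()).ratio()
            type_bonus = 0.2 if src_type == tgt_type else 0.0
            scored.append((src, min(1.0, 0.8 * name_sim + type_bonus)))
        scored.sort(key=lambda pair: pair[1], reverse=True)
        suggestions[tgt] = scored[0]  # (best candidate, confidence)
    return suggestions
```

Low-confidence suggestions are where analysts would focus their validation effort — the prioritization the paragraph above describes.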
The result is that when a stakeholder later asks, "Why does this field look different in the new system?" — the answer is traceable. You can point to the specific mapping decision, the transformation rule that was applied, and the sample data that validated the match. That traceability is foundational to reconciliation.
Reconciliation gets exponentially harder when the source system is a mainframe running COBOL code that was last documented in the 1990s. When the new system produces a different calculation result than the old one, someone has to determine whether that's a migration error or simply a difference in business logic between the two platforms.
This is where mainframe data lineage becomes critical. By parsing COBOL modules, job control language, SQL, and associated files, lineage analysis can surface the calculation logic, branching conditions, data paths, and field-level relationships that define how the legacy system actually works — not how anyone thinks it works.
Consider a practical example: after migrating to a modern cloud platform, a reconciliation check reveals that an interest accrual calculation in the new system produces a different result than the legacy mainframe. Without lineage, the investigation could take weeks. An analyst would need to manually trace the variable through potentially thousands of lines of COBOL code, across multiple modules, identifying every branch condition and upstream dependency.
With lineage analysis, that same analyst can search for the variable, see its complete data path, understand the calculation logic and conditional branches that affect it, and determine whether the discrepancy stems from a migration error or a legitimate difference in how the two systems compute the value. What took weeks now takes hours — and the finding is documented, not locked in someone's head.
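The underlying mechanics of that search can be pictured as a walk over a dependency graph. The toy sketch below stands in for the graph a lineage tool would extract from COBOL, JCL, and SQL; the field names are hypothetical:

```python
from collections import deque

def trace_upstream(field, dependencies):
    """Walk a field's lineage back to its origins.

    dependencies: {field: [fields it is computed from]} — a stand-in for the
    graph a lineage tool derives from parsed legacy code. Returns every
    upstream field, i.e. the complete data path behind a value.
    """
    seen, queue = set(), deque([field])
    while queue:
        current = queue.popleft()
        for upstream in dependencies.get(current, []):
            if upstream not in seen:
                seen.add(upstream)
                queue.append(upstream)
    return seen
```

A breadth-first walk like this is what lets an analyst ask "what feeds this accrual?" and get every branch condition and upstream input in one pass, instead of tracing modules by hand.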
The real power of combining intelligent migration with legacy lineage is that reconciliation becomes a structured, evidence-based process rather than an ad hoc investigation.
When you can trace a value from its origin in a COBOL module, through the transformation rules applied during migration, to its final state in the target system — you have end-to-end data provenance. For regulated financial institutions, that provenance is exactly what auditors and compliance teams need. For consulting firms delivering these programs, it's the difference between a defensible methodology and a best-effort exercise.
For Tier 1 consulting firms and systems integrators delivering modernization programs, post-migration reconciliation is often where project timelines stretch and client confidence erodes. The migration itself may seem to go smoothly, but then weeks of reconciliation cycles — investigating discrepancies, tracing values back through legacy systems, re-running transformation logic — consume budget and test relationships.
Tooling that accelerates both sides of this equation changes the engagement model. Migration mapping and transformation that would have taken a team months can be completed by a smaller team in weeks. Lineage analysis that would have required dedicated mainframe SMEs for months of manual code review becomes an interactive research exercise. And the reconciliation evidence is built into the process, not assembled after the fact.
This translates directly to engagement economics: faster delivery, reduced SME dependency, lower risk of costly rework, and a more compelling value proposition when scoping modernization programs.
Whether you're leading a migration internally or advising a client through one, these principles will strengthen your reconciliation outcomes.
The goal of any modernization program isn't the migration itself — it's the value that comes after. Faster operations, better insights, reduced risk, regulatory confidence. Reconciliation is the bookend that earns trust in the change and clears the path to that value.
Zengines combines AI-powered data migration with mainframe data lineage to give enterprises and consulting firms full traceability from source to target — so you can prove the migration worked and move forward with confidence.

Every enterprise eventually faces a pivotal question: should we connect our systems together, or move our data to a new home entirely? The answer seems simple until you're staring at a 40-year-old mainframe with dwindling support, a dozen point solutions held together by ever-growing integrations, and a budget that doesn't accommodate mistakes.
Data migration and data integration are often confused because they both involve moving data. But they serve fundamentally different purposes - and choosing the wrong approach can cost you years of technical debt, millions in maintenance, or worse, a failed transformation project.
Data migration is about transition and consolidation.
Systems reach end-of-life. Platforms get replaced. Acquisitions require consolidation. Companies outgrow their technology stack and need to move from functionally siloed point solutions to consolidated platforms.
Migration addresses all of these - relocating data from a source system to a target, transforming it to fit the new data model, then retiring the source. The result is a cleaner footprint: fewer systems, fewer dependencies, a tidier architecture.
Data integration is about coexistence.
You're connecting systems so they can share data continuously, in real-time or near-real-time. Both systems stay alive. Think of it like building a bridge between two cities - traffic flows both directions, indefinitely.
On the surface, integration can seem more appealing - it preserves optionality and avoids the hard decision of retiring systems. But optionality has carrying costs. Every bridge you build is a bridge you must maintain, monitor, and update when either system changes. Migration delivers a leaner architecture with less operational overhead.
Migration makes sense when you're ready to consolidate and simplify - especially for operational systems.
Consider migration when:
Integration makes sense when systems genuinely need to coexist and communicate - particularly for analytical use cases.
Consider integration when:
Migration projects have traditionally been expensive upfront. Research shows that over 80% of data migration projects run over time or budget; a 2021 Forbes analysis found that 64% of data migrations exceed their forecast budget and 54% overrun their timelines.
But here's what those statistics don't capture: much of this cost and risk stems from outdated approaches to migration. Legacy migration projects often relied on manual analysis, hand-coded transformation scripts, and armies of consultants reverse-engineering undocumented systems. The migration itself wasn't inherently expensive - the lack of proper tooling made it expensive.
When migration succeeds, you have a clean slate. The old system is retired. There's no pipeline to maintain, no nightly sync jobs to monitor, no integration layer to update when either system changes. You've reduced your technology footprint.
Integration appears easier at first. You're not touching the legacy data - you're just building a bridge. The upfront cost looks manageable. But that bridge requires constant attention.
According to McKinsey, the "interest" on technical debt includes the complexity tax from "fragile point-to-point or batch data integrations." Engineering teams spend an average of 33% of their time managing technical debt, according to research from Stripe. When you build an integration instead of migrating, you're committing to that maintenance indefinitely.
Gartner estimates that about 40% of infrastructure systems across asset classes already carry significant technical debt. Organizations that ignore this debt spend up to 40% more on maintenance than peers who address it early.
The key insight: integration's "lower cost" is an illusion if you only look at upfront spend. When you factor in total cost of ownership - years of maintenance, incident response, and the opportunity cost of engineers maintaining pipes instead of building value - the calculus often favors migration.
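A back-of-the-envelope model makes the point. All figures below are made up for illustration, not benchmarks; what matters is the shape of the curve — migration is a one-time spend, while integration maintenance recurs and tends to grow as complexity accumulates:

```python
def tco(upfront, annual_maintenance, years, growth=0.0):
    """Total cost of ownership: upfront spend plus yearly maintenance,
    where maintenance can grow each year as the system accrues complexity."""
    total, cost = upfront, annual_maintenance
    for _ in range(years):
        total += cost
        cost *= 1 + growth
    return total

# Illustrative figures: migration costs more on day one, but the
# integration's maintenance grows 10% a year over a 7-year horizon.
migration_tco = tco(upfront=2_000_000, annual_maintenance=100_000, years=7)
integration_tco = tco(upfront=500_000, annual_maintenance=400_000, years=7, growth=0.10)
```

Under these assumed numbers the integration's lower upfront cost is overtaken well before year seven — the "illusion" the paragraph above describes.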
Integration preserves optionality. You can defer the retirement decision. You can keep both systems running while you figure out the long-term strategy. But optionality has carrying costs, and those costs compound over time.
Migration forces a constraint - and constraints drive clarity. When you commit to migration, you're forced to answer hard questions: What data do we actually need? What's the canonical source of truth? What business rules should govern this data going forward? The result is a tidier, more intentional data architecture.
Many organizations choose integration because migration feels too hard. But "too hard" often means "too hard to decide." Integration lets you defer decisions. Migration forces them - and in doing so, delivers a cleaner outcome.
Ask yourself these questions:
For years, integration was perceived as the lesser evil - not because it was the right choice, but because migration seemed too expensive and risky. Organizations built integrations they didn't really want because migration felt out of reach.
That calculation is changing. Modern migration platforms are lowering the barrier to making the right choice - automating the analysis, transformation, and validation work that used to require armies of consultants. When migration's entry cost drops, total cost of ownership (TCO) becomes the deciding factor. And on TCO, migration often wins.
If you're modernizing legacy systems, consolidating point solutions into an ERP, or keeping operational systems lean for faster troubleshooting, migration gives you a cleaner footprint and eliminates technical debt. Yes, it requires commitment upfront. But you're trading short-term focus for long-term simplicity.
If you're feeding analytical systems, connecting platforms that both serve ongoing purposes, or need real-time data flow between coexisting systems, integration is the right tool. Just go in with your eyes open about the maintenance commitment you're making.
The worst outcome is choosing integration because migration seemed too hard - and then spending the next decade maintaining pipes to systems you should have retired years ago.
Zengines is an AI-native data migration platform built to lower the barrier to making the right choice. If you're weighing migration against integration - or stuck maintaining integrations you wish were migrations - we'd love to show you what's now possible. Let's talk.

Mainframe Managed Service Providers (MSPs) have built impressive capabilities over the last several decades. They excel at infrastructure management, code conversion, and supporting complex hosting environments. Many have invested millions in advanced tools for code parsing, refactoring, and other technical aspects of mainframe management and modernization. Yet despite these strengths, MSPs consistently encounter the same bottlenecks that threaten mainframe modernization project timelines, profit margins, and client satisfaction.
In this article, we’ll explore the most common gaps MSPs face, how the Zengines platform helps fill those gaps, and why Mainframe MSPs are partnering with Zengines.
MSPs have sophisticated tools for parsing and reverse engineering COBOL code—they can extract syntax, identify data structures, and map technical dependencies—but they lack capabilities for intelligent business logic interpretation. These parsing tools tell you what the code does technically, but not why it does it from a business perspective.
Current approaches to understanding the embedded business rules within parsed code require:
Even with advanced parsing capabilities, MSPs still need human experts to bridge the gap between technical code structure and business logic understanding. This discovery phase often represents 30-40% of total project time, yet MSPs have limited tools to accelerate the critical transition from "code parsing" to "business intelligence."
The result: MSPs can quickly identify what exists in the codebase, but still struggle to efficiently understand what it means for the business—creating a bottleneck that no amount of technical parsing can solve.
A critical step in any mainframe modernization project involves migrating data from legacy mainframe systems to new modern platforms. This data migration activity often determines project success or failure, yet it's where many MSPs face their biggest challenges.
While MSPs excel at physical data ETL and have tools for moving data between systems, they struggle with the intelligence layer that makes migrations fast, accurate, and low-risk:
These gaps expose organizations to costly risks: project delays, budget overruns, compromised data integrity, and client dissatisfaction from failed transfers. Delays and cost overruns erode margins and strain client relationships. Yet the most significant threat remains post-go-live discovery of migration mistakes. Today’s manual processes are inherently time-constrained—teams simply cannot identify and resolve all issues before deployment deadlines. Unfortunately, some problems surface only after go-live, forcing expensive emergency remediation that damages client trust and project profitability.
The result: MSPs can move data technically, but lack intelligence tools to do it efficiently, accurately, and with confidence—making data migration the highest-risk component of mainframe modernization projects.
Once data is migrated from mainframe systems to modern platforms, comprehensive testing and validation becomes critical to ensure business continuity and data integrity. This phase determines whether the migration truly preserves decades of embedded business logic and data relationships.
Without comprehensive understanding of embedded business logic and data interdependencies, MSPs face significant validation challenges:
The consequences: validation phases that stretch for months, expensive post-implementation fixes, user confidence issues, and potential business disruption when critical calculations or data relationships don't function as expected in the new system.
The result: MSPs are left with inadequate, non-optimized testing, where teams test what they think is important rather than what the business actually depends on.
Zengines has built AI-powered solutions that directly address each of these critical gaps in MSP capabilities. Our platform works alongside existing MSP tools, enhancing their technical strengths with the missing intelligence layer that transforms good modernization projects into exceptional ones.
While parsing tools can extract technical code structures, Zengines Mainframe Data Lineage translates that technical information into actionable business intelligence:
MSP Impact: Transform your longest project phase into your fastest. Business logic discovery that previously required months or even years of expert time now completes in days, with comprehensive information that your entire team can understand and act upon.
Our AI Data Migration platform transforms data migration from a risky, manual process into an intelligent, automated workflow:
MSP Impact: Accelerate data migration timelines by 80% while dramatically reducing risk. Business analysts become 6x more productive, and data migration transforms from your highest-risk project component to a predictable, repeatable process.
Zengines doesn't just help with discovery and migration—it ensures successful validation:
MSP Impact: Transform validation from an uncertain phase into a systematic process that focuses on exceptions. Reduce validation timelines by 50% while dramatically improving coverage and reducing post-go-live surprises.
Unlike point solutions in the mainframe modernization ecosystem that address isolated problems, Zengines provides an integrated platform where business logic discovery, data migration, and validation work together seamlessly:
This integrated approach transforms modernization from a series of risky, disconnected phases into a cohesive, intelligent process that dramatically improves outcomes while reducing timelines and risk.
MSPs can deliver 50% faster overall project completion times. The discovery and data migration phases—traditionally the longest parts of modernization projects—now complete in a fraction of the time.
By automating the most labor-intensive aspects of modernization, MSPs can deliver projects with fewer billable hours while maintaining quality. This directly improves project profitability.
Clients appreciate faster time-to-value and reduced business disruption. Comprehensive business rules documentation also provides confidence that critical logic won't be lost during migration.
MSPs with Zengines capabilities can bid more aggressively on timeline and cost while delivering superior outcomes. This creates a significant competitive advantage in the marketplace.
Better understanding of business logic before migration dramatically reduces post-implementation surprises and costly remediation work.
As the mainframe skills shortage intensifies—with 70% of mainframe professionals retiring by 2030—MSPs face an existential challenge. Traditional manual approaches to business rules discovery and data migration are becoming unsustainable.
The most successful MSPs will be those that augment their technical expertise with AI-powered intelligence. Zengines provides that intelligence layer, allowing MSPs to focus on what they do best while dramatically improving client outcomes.
The question isn't whether to integrate AI-powered data intelligence into your modernization methodology. The question is whether you'll be an early adopter who gains competitive advantage, or a late adopter struggling to keep pace with more agile competitors.
Your new core banking system just went live. The migration appeared successful. Then Monday morning hits: customers can't access their accounts, transaction amounts don't match, and your reconciliation team is drowning in discrepancies. Sound familiar?
If you've ever been part of a major system migration, you've likely lived a version of this nightmare. What's worse is that this scenario isn't the exception—it's becoming the norm. A recent analysis of failed implementations reveals that organizations spend 60-80% of their post-migration effort on reconciliation and testing, yet they're doing it completely blind, without understanding WHY differences exist between old and new systems.
The result? Projects that should take months stretch into years, costs spiral out of control, and in the worst cases, customers are impacted for weeks while teams scramble to understand what went wrong.
Let's be honest about what post-migration reconciliation looks like today. Your team runs the same transaction through both the legacy system and the new system. The old system says the interest accrual is $5. The new system says it's $15. Now what?
"At this point in time, the business says who is right?" explains Caitlin Truong, CEO of Zengines. "Is it that we have a rule or some variation or some specific business rule that we need to make sure we account for, or is the software system wrong in how they are computing this calculation? They need to understand what was in that mainframe black box to make a decision."
The traditional approach looks like this:
The real cost isn't just time—it's risk. While your team plays detective with legacy systems, you're running parallel environments, paying for two systems, and hoping nothing breaks before you figure it out.
Here's what most organizations don't realize: the biggest risk in any migration isn't moving the data—it's understanding the why behind the data.
Legacy systems, particularly mainframes running COBOL code written decades ago, have become black boxes. The people who built them are retired. The business rules are buried in thousands of modules with cryptic variable names. The documentation, if it exists, is outdated.
"This process looks like the business writing a question and sending it to the mainframe SMEs and then waiting for a response," Truong observes. "That mainframe SME is then navigating and reading through COBOL code, traversing module after module, lookups and reference calls. It’s understandable that without additional tools, it takes some time for them to respond."
When you encounter a reconciliation break, you're not just debugging a technical issue—you're conducting digital archaeology, trying to reverse-engineer business requirements that were implemented 30+ years ago.
One of our global banking customers faced this exact challenge. They had 80,000 COBOL modules in their mainframe system. When their migration team encountered discrepancies during testing, it took over two months to get answers to simple questions. Their SMEs were overwhelmed, and the business team felt held hostage by their inability to understand their own system.
"When the business gets that answer they say, okay, that's helpful, but now you've spawned three more questions and so that's a painful process for the business to feel like they are held hostage a bit to the fact that they can't get answers themselves," explains Truong.
What if instead of discovering reconciliation issues during testing, you could predict and prevent them before they happen? What if business analysts could investigate discrepancies themselves in minutes instead of waiting months for SME responses?
This is exactly what our mainframe data lineage tool makes possible.
"This is the challenge we aimed to solve when we built our product. By democratizing that knowledge base and making it available for the business to get answers in plain English, they can successfully complete that conversion in a fraction of the time with far less risk," says Truong.
Here's how it works:
AI algorithms ingest your entire legacy codebase—COBOL modules, JCL scripts, database schemas, and job schedulers. Instead of humans manually navigating tens of thousands of modules, pattern recognition identifies the relationships, dependencies, and calculation logic automatically.
The AI doesn't just map data flow—it extracts the underlying business logic. That cryptic COBOL calculation becomes readable: "If asset type equals equity AND purchase date is before 2020, apply special accrual rate of 2.5%."
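The extracted rule from that example could be rendered as ordinary, readable code — something like the sketch below. The special-case rate follows the example in the text; the standard rate is an assumption, since the source only specifies the special case:

```python
from datetime import date

def accrual_rate(asset_type: str, purchase_date: date) -> float:
    """The legacy rule, surfaced from cryptic COBOL into readable logic."""
    if asset_type == "equity" and purchase_date < date(2020, 1, 1):
        return 0.025   # special pre-2020 equity accrual rate (from the example)
    return 0.015       # assumed standard rate — not specified in the source
```

Once the rule is visible in this form, the question shifts from "what does the code do?" to the business conversation described below: keep the rule, or retire it.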
When your new system shows $15 and your old system shows $5, business analysts can immediately trace the calculation path. They see exactly why the difference exists: perhaps the new system doesn't account for that pre-2020 equity rule embedded in the legacy code.
Now your team can make strategic decisions: Do we want to replicate this legacy rule in the new system, or is this an opportunity to simplify our business logic? Instead of technical debugging, you're having business conversations.
Let me share a concrete example of this transformation in action. A financial services company was modernizing their core system and moving off their mainframe. Like many organizations, they were running parallel testing—executing the same transactions in both old and new systems to ensure consistency.
Before implementing AI-powered data lineage:
After implementing the solution:
"The business team presents their dashboard at the steering committee and program review every couple weeks," Truong shares. "Every time they ran into a break, they have a tool and the ability to answer why that break is there and how they plan to remediate it."
The most successful migrations we've seen follow a fundamentally different approach to reconciliation:
Before you migrate anything, understand what you're moving. Use AI to create a comprehensive map of your legacy system's business logic. Know the rules, conditions, and calculations that drive your current operations.
Instead of hoping for the best, use pattern recognition to identify the most likely sources of reconciliation breaks. Focus your testing efforts on the areas with the highest risk of discrepancies.
When breaks occur (and they will), empower your business team to investigate immediately. No more waiting for SME availability or technical resource allocation.
Transform reconciliation from a technical debugging exercise into a business optimization opportunity. Decide which legacy rules to preserve and which to retire.
"The ability to catch that upfront, as opposed to not knowing it and waiting until you're testing pre go-live or in a parallel run and then discovering these things," Truong emphasizes. "That's why you will encounter missed budgets, timelines, etc. Because you just couldn't answer these critical questions upfront."
Here's something most organizations don't consider: this capability doesn't become obsolete after your migration. You now have a living documentation system that can answer questions about your business logic indefinitely.
Need to understand why a customer's account behaves differently? Want to add a new product feature? Considering another system change? Your AI-powered lineage tool becomes a permanent asset for business intelligence and system understanding.
"When I say de-risk, not only do you de-risk a modernization program, but you also de-risk business operations," notes Truong. "Whether organizations are looking to leave their mainframe or keep their mainframe, leadership needs to make sure they have the tools that can empower their workforce to properly manage it."
Every migration involves risk. The question is whether you want to manage that risk proactively or react to problems as they emerge.
Traditional reconciliation approaches essentially accept risk—you hope the breaks will be manageable and that you can figure them out when they happen. AI-powered data lineage allows you to mitigate risk substantially by understanding your system completely before you make changes.
The choice is yours:
If you're planning a migration or struggling with an ongoing reconciliation challenge, you don't have to accept the traditional pain points as inevitable. AI-powered data lineage has already transformed reconciliation for organizations managing everything from simple CRM migrations to complex mainframe modernizations.
Schedule a demo to explore how AI can turn your legacy "black box" into transparent, understandable business intelligence.