Your new core banking system just went live. The migration appeared successful. Then Monday morning hits: customers can't access their accounts, transaction amounts don't match, and your reconciliation team is drowning in discrepancies. Sound familiar?
If you've ever been part of a major system migration, you've likely lived a version of this nightmare. What's worse is that this scenario isn't the exception—it's becoming the norm. A recent analysis of failed implementations reveals that organizations spend 60-80% of their post-migration effort on reconciliation and testing, yet they're doing it completely blind, without understanding WHY differences exist between old and new systems.
The result? Projects that should take months stretch into years, costs spiral out of control, and in the worst cases, customers are impacted for weeks while teams scramble to understand what went wrong.
Let's be honest about what post-migration reconciliation looks like today. Your team runs the same transaction through both the legacy system and the new system. The old system says the interest accrual is $5. The new system says it's $15. Now what?
"At this point in time, the business says who is right?" explains Caitlin Truong, CEO of Zengines. "Is it that we have a rule or some variation or some specific business rule that we need to make sure we account for, or is the software system wrong in how they are computing this calculation? They need to understand what was in that mainframe black box to make a decision."
The traditional approach looks like this: the business writes up a question about the break and sends it to the mainframe SMEs, who navigate COBOL code module by module to find the answer. Weeks later a response comes back - and it usually spawns new questions that restart the cycle.
The real cost isn't just time—it's risk. While your team plays detective with legacy systems, you're running parallel environments, paying for two systems, and hoping nothing breaks before you figure it out.
Here's what most organizations don't realize: the biggest risk in any migration isn't moving the data—it's failing to understand the why behind the data.
Legacy systems, particularly mainframes running COBOL code written decades ago, have become black boxes. The people who built them are retired. The business rules are buried in thousands of modules with cryptic variable names. The documentation, if it exists, is outdated.
"This process looks like the business writing a question and sending it to the mainframe SMEs and then waiting for a response," Truong observes. "That mainframe SME is then navigating and reading through COBOL code, traversing module after module, lookups and reference calls. It’s understandable that without additional tools, it takes some time for them to respond."
When you encounter a reconciliation break, you're not just debugging a technical issue—you're conducting digital archaeology, trying to reverse-engineer business requirements that were implemented 30+ years ago.
One of our global banking customers faced this exact challenge. They had 80,000 COBOL modules in their mainframe system. When their migration team encountered discrepancies during testing, it took over two months to get answers to simple questions. Their SMEs were overwhelmed, and the business team felt held hostage by their inability to understand their own system.
"When the business gets that answer they say, okay, that's helpful, but now you've spawned three more questions and so that's a painful process for the business to feel like they are held hostage a bit to the fact that they can't get answers themselves," explains Truong.
What if instead of discovering reconciliation issues during testing, you could predict and prevent them before they happen? What if business analysts could investigate discrepancies themselves in minutes instead of waiting months for SME responses?
This is exactly what our mainframe data lineage tool makes possible.
"This is the challenge we aimed to solve when we built our product. By democratizing that knowledge base and making it available for the business to get answers in plain English, they can successfully complete that conversion in a fraction of the time with far less risk," says Truong.
Here's how it works:
AI algorithms ingest your entire legacy codebase—COBOL modules, JCL scripts, database schemas, and job schedulers. Instead of humans manually navigating tens of thousands of modules, pattern recognition identifies the relationships, dependencies, and calculation logic automatically.
The AI doesn't just map data flow—it extracts the underlying business logic. That cryptic COBOL calculation becomes readable: "If asset type equals equity AND purchase date is before 2020, apply special accrual rate of 2.5%."
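To make that concrete, here is a minimal, hypothetical sketch of the kind of COBOL such a rule typically hides in - the program name ACCRCALC, the field names, and the 1.0% fallback rate are invented for illustration, not taken from any customer system:

```cobol
      * Hypothetical sketch: ACCRCALC and all names are invented.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ACCRCALC.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-ASSET-TYPE      PIC X(02)            VALUE 'EQ'.
           88  EQUITY-ASSET   VALUE 'EQ'.
       01  WS-PURCH-DATE      PIC 9(08)            VALUE 20191231.
       01  WS-BALANCE         PIC S9(11)V99 COMP-3 VALUE 600.00.
       01  WS-ACCRUAL         PIC S9(11)V99 COMP-3.
       PROCEDURE DIVISION.
      * Pre-2020 equity rule: apply the special 2.5 percent rate.
           IF EQUITY-ASSET AND WS-PURCH-DATE < 20200101
               COMPUTE WS-ACCRUAL ROUNDED = WS-BALANCE * 0.025
           ELSE
      * Otherwise fall back to a standard rate (1.0 percent here).
               COMPUTE WS-ACCRUAL ROUNDED = WS-BALANCE * 0.010
           END-IF
           DISPLAY 'ACCRUAL: ' WS-ACCRUAL
           GOBACK.
```

Surfacing this IF/ELSE path and restating it in plain English is exactly the translation step described above.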
When your new system shows $15 and your old system shows $5, business analysts can immediately trace the calculation path. They see exactly why the difference exists: perhaps the new system doesn't account for that pre-2020 equity rule embedded in the legacy code.
Now your team can make strategic decisions: Do we want to replicate this legacy rule in the new system, or is this an opportunity to simplify our business logic? Instead of technical debugging, you're having business conversations.
Let me share a concrete example of this transformation in action. A financial services company was modernizing their core system and moving off their mainframe. Like many organizations, they were running parallel testing—executing the same transactions in both old and new systems to ensure consistency.
Before implementing AI-powered data lineage, every reconciliation break meant a written question to the mainframe SME queue and weeks or months of waiting for an answer that often raised new questions. After implementing the solution, business analysts could trace each break themselves in minutes and bring both the explanation and a remediation plan to the next program review.
"The business team presents their dashboard at the steering committee and program review every couple weeks," Truong shares. "Every time they ran into a break, they have a tool and the ability to answer why that break is there and how they plan to remediate it."
The most successful migrations we've seen follow a fundamentally different approach to reconciliation:
Before you migrate anything, understand what you're moving. Use AI to create a comprehensive map of your legacy system's business logic. Know the rules, conditions, and calculations that drive your current operations.
Instead of hoping for the best, use pattern recognition to identify the most likely sources of reconciliation breaks. Focus your testing efforts on the areas with the highest risk of discrepancies.
When breaks occur (and they will), empower your business team to investigate immediately. No more waiting for SME availability or technical resource allocation.
Transform reconciliation from a technical debugging exercise into a business optimization opportunity. Decide which legacy rules to preserve and which to retire.
"The ability to catch that upfront, as opposed to not knowing it and waiting until you're testing pre go-live or in a parallel run and then discovering these things," Truong emphasizes. "That's why you will encounter missed budgets, timelines, etc. Because you just couldn't answer these critical questions upfront."
Here's something most organizations don't consider: this capability doesn't become obsolete after your migration. You now have a living documentation system that can answer questions about your business logic indefinitely.
Need to understand why a customer's account behaves differently? Want to add a new product feature? Considering another system change? Your AI-powered lineage tool becomes a permanent asset for business intelligence and system understanding.
"When I say de-risk, not only do you de-risk a modernization program, but you also de-risk business operations," notes Truong. "Whether organizations are looking to leave their mainframe or keep their mainframe, leadership needs to make sure they have the tools that can empower their workforce to properly manage it."
Every migration involves risk. The question is whether you want to manage that risk proactively or react to problems as they emerge.
Traditional reconciliation approaches essentially accept risk—you hope the breaks will be manageable and that you can figure them out when they happen. AI-powered data lineage allows you to mitigate risk substantially by understanding your system completely before you make changes.
The choice is yours: keep hoping breaks will be manageable and react to them as they surface in testing or parallel runs, or understand your legacy system completely before you make a single change.
If you're planning a migration or struggling with an ongoing reconciliation challenge, you don't have to accept the traditional pain points as inevitable. AI-powered data lineage has already transformed reconciliation for organizations managing everything from simple CRM migrations to complex mainframe modernizations.
Schedule a demo to explore how AI can turn your legacy "black box" into transparent, understandable business intelligence.
IBM's RPG (Report Program Generator) began in 1959 with a simple mission: generate business reports quickly and efficiently. What started as RPG I evolved through multiple generations - RPG II, RPG III, RPG LE, and RPG IV - each adding capabilities that transformed it from a simple report tool into a full-featured business programming language. Today, RPG powers critical business applications across countless AS/400, iSeries, and IBM i systems. Yet for modern developers, understanding RPG's unique approach and legacy codebase presents distinct challenges that make comprehensive data lineage essential.
Built-in Program Cycle: RPG's fixed-logic cycle automatically handled file operations, making database processing incredibly efficient. The cycle read records, processed them, and wrote output with minimal programmer intervention, working through data sequentially - which made it ideal for report generation and business data handling.
Native Database Integration: RPG was designed specifically for IBM's database systems, providing direct interaction with database files and making it ideal for transactional systems where fast and reliable data processing is essential. It offered native access to DB2/400 and its predecessors, with automatic record locking, journaling, and data integrity features.
Rapid Business Application Development: For its intended purpose - business reports and data processing - RPG was remarkably fast to code. The fixed-format specifications (H, F, D, C specs) provided a structured framework that enforced consistency and simplified application creation.
Exceptional Performance and Scalability: RPG applications typically ran with exceptional efficiency on IBM hardware, processing massive volumes of data with minimal resource consumption.
Evolutionary Compatibility: The language's evolution path meant that RPG II code could often run unchanged on modern IBM i systems - a testament to IBM's commitment to backward compatibility that spans over 50 years.
RPG II (Late 1960s): The classic fixed-format version with its distinctive column-specific coding rules and built-in program logic cycle, used on System/3, System/32, System/34, and System/36.
RPG III (1978): Added subroutines, improved file handling, and more flexible data structures while maintaining the core cycle approach. Introduced with System/38, later rebranded as "RPG/400" on AS/400.
RPG IV/ILE RPG (1994): The major evolution that introduced modular programming with procedures, prototypes, and the ability to create service programs within the Integrated Language Environment - finally bringing modern programming concepts to RPG.
RPG LE - Limited Edition (1995): A simplified version of RPG IV designed for smaller systems, notably including a free compiler to improve accessibility.
Free-Format RPG (2013): Added within RPG IV, this broke away from the rigid column requirements while maintaining backward compatibility, allowing developers to write code similar to modern languages.
Steep Learning Curve: RPG's fixed-logic cycle and column-specific formatting are unlike any modern programming language. New developers must understand both the language syntax and the underlying program cycle concept, which can be particularly challenging.
Limited Object-Oriented Capabilities: Even modern RPG versions lack full object-oriented programming capabilities, making it difficult to apply contemporary design patterns and architectural approaches.
Cryptic Operation Codes: Traditional RPG used operation codes like "CHAIN," "SETLL," and "READE" with rigid column requirements that aren't intuitive to developers trained in modern, free-format languages.
Complex Maintenance Due to Evolution: The evolution from RPG II (late 1960s) through RPG III (1978) to RPG IV/ILE RPG (1994) and finally free-format coding (2013) created hybrid codebases mixing multiple RPG styles across nearly 50 years of development, making maintenance and understanding complex for teams working across different generations of the language.
Proprietary IBM-Only Ecosystem: Unlike standardized languages, RPG has always been IBM's proprietary language, creating vendor lock-in and concentrating expertise among IBM specialists rather than fostering broader community development.
RPG presents unique challenges that go beyond typical legacy system issues, rooted in decades of development practices:
Multiple Format Styles in Single Systems: A single system might contain RPG II fixed-format code (1960s-70s), RPG III subroutines (1978+), RPG LE simplified code (1995+), and RPG IV/ILE procedures with free-format sections (1994+) - all working together but following different conventions and programming paradigms developed across 50+ years, making unified understanding extremely challenging.
Embedded Business Logic: RPG's tight integration with IBM databases means business rules are often embedded directly in database access operations and the program cycle itself, making them hard to identify, extract, and document independently.
Minimal Documentation Culture: The RPG community traditionally relied on the language's self-documenting nature and the assumption that the program cycle made logic obvious, but this assumption breaks down when dealing with complex business logic or when original developers are no longer available.
Proprietary Ecosystem Isolation: RPG development was largely isolated within IBM midrange systems, creating knowledge silos. Unlike languages with broader communities and extensive online resources, RPG expertise became concentrated among IBM specialists, limiting knowledge transfer.
External File Dependencies: RPG applications often depend on externally described files (DDS) where data structure definitions live outside the program code, making data relationships and dependencies difficult to trace without specialized tools.
Given these unique challenges - multiple format styles, embedded business logic, and lost institutional knowledge - how do modern teams gain control over their RPG systems without risking business disruption? The answer lies in understanding what your systems actually do before attempting to change them.
Modern data lineage tools provide exactly this understanding by:
Analyzing all RPG variants within a single system, providing unified visibility across decades of development spanning RPG II through modern free-format code.
Mapping database relationships from database fields through program logic to output destinations, since RPG applications are inherently database-centric.
Discovering business rules by analyzing how data transforms as it moves through RPG programs, helping teams reverse-engineer undocumented logic.
Assessing impact before making changes, identifying all downstream dependencies - crucial given RPG's tight integration with business processes.
Planning modernization by understanding data flows, helping teams make informed decisions about which RPG components to modernize, replace, or retain.
RPG systems represent decades of business logic investment that often process a company's most critical transactions. While the language may seem archaic to modern eyes, the business logic it contains is frequently irreplaceable. Success in managing RPG systems requires treating them not as outdated code, but as repositories of critical business knowledge that need proper mapping and understanding.
Data lineage tools bridge the gap between RPG's unique characteristics and modern development practices, providing the visibility needed to safely maintain, enhance, plan modernization initiatives, extract business rules, and ensure data integrity during system changes. They make these valuable systems maintainable and evolutionary rather than simply survivable.
Interested in preserving and understanding your RPG-based systems? Call Zengines for a demo today.
When Grace Hopper and her team developed COBOL (Common Business-Oriented Language) in the late 1950s, they created something revolutionary: a programming language that business people could actually read. Today, over 65 years later, COBOL still processes an estimated 95% of ATM transactions and 80% of in-person transactions worldwide. Yet for modern development teams, working with COBOL systems presents unique challenges that make data lineage tools absolutely critical.
English-Like Readability: COBOL's English-like syntax is self-documenting and nearly self-explanatory, with an emphasis on verbosity and readability. Commands like MOVE CUSTOMER-NAME TO PRINT-LINE or IF ACCOUNT-BALANCE IS GREATER THAN ZERO made business logic transparent to non-programmers, setting it apart from more cryptic languages like FORTRAN. This was revolutionary - before COBOL, business logic looked like assembly language (L 5,CUSTNAME followed by ST 5,PRINTAREA) or early FORTRAN with mathematical notation that business managers couldn't decipher.
Precision Decimal Arithmetic: One of COBOL's biggest strengths is its strong support for large-precision fixed-point decimal calculations, a feature not natively available in many traditional programming languages. Fixed-point arithmetic eliminates the floating-point rounding errors that are unacceptable in financial calculations, and this capability helped set COBOL apart and drive its adoption by many large financial institutions.
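As a quick, hedged illustration of what that looks like in practice - the program name, field names, and values below are invented for this sketch - the PICTURE clauses fix the exact number of integer and decimal digits, so the multiplication is carried out in decimal rather than approximated in binary floating point:

```cobol
      * Minimal sketch of COBOL fixed-point decimal arithmetic.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. DECDEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * S9(13)V99 holds 13 integer digits and exactly 2 decimals.
       01  WS-BALANCE    PIC S9(13)V99 COMP-3 VALUE 1000000.10.
       01  WS-RATE       PIC 9V9(04)          VALUE 0.0315.
       01  WS-INTEREST   PIC S9(13)V99 COMP-3.
       PROCEDURE DIVISION.
      * ROUNDED applies decimal rounding, not binary float rounding.
           COMPUTE WS-INTEREST ROUNDED = WS-BALANCE * WS-RATE
           DISPLAY 'INTEREST: ' WS-INTEREST
           GOBACK.
```

Running the same calculation through binary floating point can drift by fractions of a cent across millions of transactions, which is precisely the class of error this representation avoids.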
Proven Stability and Scale: COBOL's imperative, procedural, and (in its newer iterations) object-oriented design serves as the foundation for more than 40% of all online banking systems, supports 80% of in-person credit card transactions, handles 95% of all ATM transactions, and powers systems that generate more than USD 3 billion of commerce each day.
Excessive Verbosity: COBOL uses over 300 reserved words compared to more succinct languages. What made COBOL readable also made it lengthy, often resulting in monolithic programs that are hard to comprehend as a whole, despite their local readability.
Poor Structured Programming Support: COBOL has long been criticized on this front; the language lacks modern programming concepts like comprehensive object orientation, dynamic memory allocation, and advanced data structures that developers expect today.
Rigid Architecture and Maintenance Issues: By 1984, maintainers of COBOL programs were struggling to deal with "incomprehensible" code, leading to major changes in COBOL-85 to help ease maintenance. The language's structure makes refactoring challenging, with changes cascading unpredictably through interconnected programs.
Limited Standard Library: COBOL lacks a large standard library, specifying only 43 statements, 87 functions, and just one class, limiting built-in functionality compared to modern languages.
Problematic Standardization Journey: While COBOL was standardized by ANSI in 1968, standardization was more aspirational than practical. By 2001, around 300 COBOL dialects had been created, and the 1974 standard's modular structure permitted 104,976 possible variants. COBOL-85 faced significant controversy and wasn't fully compatible with earlier versions, with the ANSI committee receiving over 2,200 mostly negative public responses. Vendor extensions continued to create portability challenges despite formal standards.
The biggest challenge isn't the language itself - it's the development ecosystem and practices that evolved around it from the 1960s through 1990s:
Inconsistent Documentation Standards: Many COBOL systems were built when comprehensive documentation was considered optional rather than essential. Comments were sparse, and business logic was often embedded directly in code without adequate explanation of business context or decision rationale.
Absence of Modern Development Practices: Early COBOL development predated modern version control systems, code review processes, and structured testing methodologies. Understanding how a program evolved - or why specific changes were made - is often impossible without institutional knowledge.
Monolithic Architecture: COBOL applications were typically built as large, interconnected systems where data flows through multiple programs in ways that aren't immediately obvious, making impact analysis extremely difficult.
Proprietary Vendor Extensions: While COBOL had standards, each vendor added extensions and enhancements. IBM's COBOL differs from Unisys COBOL, creating vendor lock-in that complicates understanding and portability.
Lost Institutional Knowledge: The business analysts and programmers who built these systems often retired without transferring their institutional knowledge about why certain design decisions were made, leaving current teams to reverse-engineer business requirements from code.
This is where modern data lineage tools become invaluable for teams working with COBOL systems: they map how data flows across interconnected programs, surface the business rules embedded in the code, and show the impact of a proposed change before it is made.
COBOL's deep embedding in critical business processes represents a significant business challenge and risk that organizations must address. Success with COBOL modernization - whether maintaining, replacing, or transforming these systems - requires treating them as the complex, interconnected ecosystems they are. Data lineage tools provide the missing roadmap that makes COBOL systems understandable and manageable, enabling informed decisions about their future.
The next time you make an online payment, remember: there's probably COBOL code processing your transaction. And somewhere, a development team is using data lineage tools to keep that decades-old code running smoothly in our modern world.
To see and navigate your COBOL code in seconds, call Zengines.
Mistake #1: Underestimating embedded complexity.
Mainframe systems combine complex data formats AND decades of embedded business rules that create a web of interdependent complexity. VSAM files aren't simple databases - they contain redefinitions, multi-view records, and conditional logic that determines data values based on business states. COBOL programs embed business intelligence like customer-type based calculations, regulatory compliance rules, and transaction processing logic that's often undocumented. Teams treating mainframe data like standard files discover painful surprises during migration when they realize the "data" includes decades of business logic scattered throughout conditional statements and 88-level condition names. This complexity extends to testing: converting COBOL business rules and EBCDIC data formats demands extensive validation that most distributed-system testers can't handle without deep mainframe expertise.
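As a hedged sketch of what "data that includes business logic" means - every name below is invented - note how a single status byte with 88-level condition names decides which REDEFINES view of the same twenty bytes is the correct one:

```cobol
      * Hypothetical layout: all names invented for illustration.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. RECDEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  CUSTOMER-REC.
           05  CUST-TYPE              PIC X.
      * 88-level condition names encode business states in the data.
               88  RETAIL-CUST        VALUE 'R'.
               88  CORPORATE-CUST     VALUE 'C'.
           05  CUST-DETAIL            PIC X(20).
      * The same 20 bytes mean different things per customer type.
           05  RETAIL-VIEW  REDEFINES CUST-DETAIL.
               10  CARD-NUMBER        PIC 9(16).
               10  FILLER             PIC X(04).
           05  CORP-VIEW    REDEFINES CUST-DETAIL.
               10  TAX-ID             PIC X(09).
               10  CREDIT-LIMIT       PIC S9(09)V99 COMP-3.
               10  FILLER             PIC X(05).
       PROCEDURE DIVISION.
           MOVE 'C' TO CUST-TYPE
           IF CORPORATE-CUST
               DISPLAY 'INTERPRET CUST-DETAIL AS CORP-VIEW'
           END-IF
           GOBACK.
```

A migration that copies the bytes without carrying over the CUST-TYPE logic reproduces the data but loses the rules that give it meaning.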
Mistake #2: Delaying dependency discovery.
Mainframes feed dozens of systems through complex webs of middleware like WebSphere, CICS Transaction Gateway, Enterprise Service Bus, plus shared utilities, schedulers, and business processes. The costly mistake is waiting too long to thoroughly map all these connections, especially downstream data feeds and consumption patterns. Your data lineage must capture every system consuming mainframe data, from reporting tools to partner integrations, because modernization projects can't go live when teams discover late in development that preserving these data feeds and business process expectations requires extensive rework that wasn't budgeted or planned.
Mistake #3: Tolerating knowledge bottlenecks.
Relying on two or three mainframe experts for a million-line modernization project creates a devastating traffic jam where entire teams sit idle waiting for answers. Around 60% of mainframe specialists are approaching retirement, yet organizations attempt massive COBOL conversions with skeleton crews already stretched thin by daily operations. Your expensive development team, cloud architects, and business analysts become inefficient and underutilized because everything funnels through the same overworked experts. The business logic embedded in decades-old COBOL programs often exists nowhere else, creating dangerous single points of failure that can derail years of investment and waste millions in team resources.
Mistake #4: Modernizing everything indiscriminately.
Organizations waste enormous effort converting obsolete, duplicate, and inefficient code that should be retired or consolidated instead. Mainframe systems often contain massive amounts of redundant code - programs copied by developers who didn't understand dependencies, inefficient routines that were never optimized, and abandoned utilities that no longer serve any purpose. Research shows that 80% of legacy code hasn't been modified in over 5 years, yet teams spend months refactoring dead applications and duplicate logic that add no business value. The mistake is treating every one of those millions of lines of code equally rather than analyzing which programs actually deliver business functionality. Proper assessment identifies code for retirement, consolidation, or optimization before expensive conversion, dramatically reducing modernization scope and cost.
Mistake #5: Starting without clear business objectives.
Many modernization projects fail because organizations begin with technology solutions rather than business outcomes. Teams focus on "moving to the cloud" or "getting off COBOL" without defining what success looks like in business terms. According to research, 80% of IT modernization efforts fall short of savings targets because they fail to address the right complexity. The costly mistake is launching modernization without stakeholder alignment on specific goals - whether that's reducing operational costs, reducing risk in business continuity, or enabling new capabilities. Projects that start with clear business cases and measurable objectives have significantly higher success rates and can demonstrate ROI that funds subsequent modernization phases.
If you want to avoid these mistakes or need help overcoming these challenges, reach out to Zengines.