Data lineage is the comprehensive tracking of data usage within an organization. This includes how data originates, how it is transformed, how it is calculated, its movement between different systems, and ultimately how it is utilized in applications, reporting, analysis, and decision-making.
With the increasing complexities of business technology, data lineage analysis has become essential for most organizations. This article provides an overview of the fundamentals, importance, uses, and challenges of data lineage.
Data lineage facilitates improved data transparency, quality, and consistency by enabling organizations to track and understand the complete lifecycle of their data assets. It helps with decision-making when sourcing and using data. It also helps with transforming data, especially for larger organizations with mission-critical applications and intricate data landscapes.
There are several factors to consider with data lineage:
Data lineage plays a key role in keeping data valuable and effective in a business setting. Here are a few ways that data lineage can deliver benefits to an organization.
Data has incredible value in an information age. To realize the full value, data must be accurate and accessible. In other words, it becomes trustworthy only when it can be understood by anyone using it, and when the processing steps keep the data accurate. Data lineage provides transparency into the flow of data. It increases understanding and makes it easier for non-technical users to capture insights from existing datasets, especially for aggregated or calculated data.
Data management regulations are becoming more stringent each year. Regulatory standards are tightening, and effective data management is becoming increasingly important. Data lineage can help organizations comply with GDPR, CCPA, and other data privacy laws. The transparency of data lineage makes data access, audits, and overall accountability easier. Accurate data lineage is crucial for demonstrating compliance with regulatory requirements, thereby mitigating the risk of project delays, fines, and other penalties.
Data lineage enables stronger data governance by providing the information needed to monitor, manage, and ensure compliance with established standards and guidelines. Because data lineage offers traceability of origin, flow, transformation, and destination, it allows businesses to improve data quality, reduce inconsistencies and errors, and strengthen data management practices.
Data lineage allows companies to trace the path of data from its current form back to its source. Data lineage offers a transparent record, facilitating the understanding and management of data variability and quality throughout its journey, and ensuring reliable data for decision-making. This is particularly relevant for companies modernizing existing systems.
With data lineage, trust in data accuracy and accessibility, improved data quality, and a stronger ability to govern data combine to enable better collaboration across teams. Data lineage helps prevent data siloing and facilitates interdepartmental activity. When data engineers and analysts work from the same set of data, it fosters cross-functional teamwork and minimizes errors caused by bad or inconsistent data. Data lineage encourages a sense of unification as team members across an organization work from the same, trusted data.
There are multiple ways that data lineage can add business value to organizations.
Zengines has invested in data lineage capabilities to support end-to-end migration of data from existing source systems to new target business systems. Data lineage is often the first research step required to ensure an efficient and accurate data migration.
Data lineage exposes data quality issues by providing a clear view of the data journey, highlighting areas where inconsistencies or errors may have occurred. This makes it easier to engage in effective, detailed data analytics.
Consider, for instance, a financial services company with decades-old COBOL programs. Data lineage provides insights for organizations trying to replicate reporting or other outputs from these aging programs.
Data lineage makes it easier to identify and trace errors back to their source. Finding the root cause of an error quickly is extremely valuable in a world where time is at a premium.
An important aspect of data security and privacy compliance is keeping data safeguarded at all times. Data lineage provides an understanding of the data life cycle that can show information security groups the steps that must be reviewed and secured.
Comprehensive data lineage makes it easier to demonstrate compliance with data privacy regulations. For example, banks and payment processors are subject to GLBA (Gramm-Leach-Bliley Act), PCI DSS (Payment Card Industry Data Security Standard), EU GDPR (General Data Protection Regulation), and many other regulations that protect Personally Identifiable Information (PII). Knowing how any data element is used allows it to be protected, masked, or hidden when appropriate.
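As a simple illustration, lineage metadata that flags which fields contain PII can drive automated masking. The catalog shape, field names, and masking rule below are hypothetical; this is a minimal sketch, not a production control:

```python
# Illustrative sketch only: assumes lineage analysis has already tagged
# which fields contain PII. Field names and catalog shape are hypothetical.

PII_FIELDS = {"ssn", "card_number"}  # flagged during lineage analysis

def mask_record(record: dict) -> dict:
    """Mask any field the lineage catalog flags as PII before it leaves a secure zone."""
    masked = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            # keep the last 4 characters for reconciliation, mask the rest
            masked[field] = "*" * (len(value) - 4) + value[-4:]
        else:
            masked[field] = value
    return masked

record = {"name": "A. Example", "ssn": "123-45-6789"}
print(mask_record(record))  # {'name': 'A. Example', 'ssn': '*******6789'}
```

In practice the PII tags would come from the lineage tool's catalog rather than a hard-coded set, so masking policy follows the data wherever lineage shows it flowing.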
Data Mesh and Data Fabric are advanced data architectures that help to decentralize data and integrate it across diverse data sources. Understanding the data lineage allows data management teams to make trustworthy data available to Data Mesh / Data Fabric consumers. Data lineage makes it possible to determine the correct data to store and use for a given purpose (decision making, analytics, reporting, etc.). Data lineage is typically part of any new Data Mesh / Data Fabric initiative.
Data lineage is useful but can also face challenges. Here are a few potential issues.
Siloed data continues to be a major hurdle for tracing business data across departments and organizations. Consider when a security trade is being made. The security details are usually maintained in a reference data / Master Data Management application. The bid / ask information comes from many different market vendors and is updated continuously. The trading application computes the value of the trade, and any tax impact is computed in an investment accounting application. Is the same data being used across them all? Do they use different terminology? Do the applications all use the same pricing information? For accurate reporting and good decision making, it is vital that the same data is used in every step.
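The cross-system consistency question above can be made concrete with a small sketch. The system names, security identifier, and prices below are invented for illustration:

```python
# Hypothetical example: do three applications agree on the price used for
# a security? All system names, identifiers, and values are invented.

prices = {
    "reference_data":        {"CUSIP-037833100": 189.50},
    "trading_app":           {"CUSIP-037833100": 189.50},
    "investment_accounting": {"CUSIP-037833100": 189.25},  # stale price
}

def find_mismatches(prices_by_system: dict) -> list:
    """Return (security, system, price) tuples that disagree with the reference system."""
    reference = prices_by_system["reference_data"]
    mismatches = []
    for system, quotes in prices_by_system.items():
        for security, price in quotes.items():
            if price != reference.get(security):
                mismatches.append((security, system, price))
    return mismatches

print(find_mismatches(prices))  # the accounting system is out of step
```

Data lineage supplies the map of which systems hold which copy of the data; without it, you do not even know which applications to include in a check like this.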
Mapping data lineage in increasingly complex environments is also a concern. Factors such as on-premises and cloud storage, along with remote, hybrid, and in-person work environments, make data complexity and fragmentation a growing issue that requires attention.
Historically, capturing and maintaining data lineage has been resource-intensive work performed by analysts with a deep understanding of the business. Given the quantity of data and code involved, a manual approach is prohibitively expensive for most companies. Most software solutions provide a partial view, only showing data stored in relational databases or excluding logic found in computer programs.
The best option is to find a balance between manual and automated solutions that enable cost-effective data lineage frameworks.
Data lineage is more than a backward-looking activity. Organizations also need to maintain up-to-date lineage information as systems are changed and replaced over time. In an era of constant change, data lineage teams are challenged to incorporate new forms of data usage or data transformation.
Data lineage is becoming a critical part of any company’s data management strategy. In an information age where data and analytics are king, data lineage enables companies to maintain clean, transparent, traceable datasets. This empowers data-driven decision-making and encourages cross-collaborative efforts.
Data lineage addresses a central part of business operations. It provides a powerful sense of digital clarity as organizations navigate increasingly complex tools, systems, and regulatory landscapes.
Forward-thinking technical and non-technical leaders alike should encourage their organizations to improve their data lineage strategies. Investments in data lineage result in valuable new data assets that provide greater business agility and competitive advantage.
Data lineage isn’t just a nice-to-have—it’s essential for modern businesses navigating system changes, compliance pressures, and complex tech stacks. Whether you're migrating from legacy systems, improving analytics, or strengthening data governance, data lineage empowers teams to move faster, reduce risk, and make better decisions.
At Zengines, we’ve built our data lineage capabilities to do more than just document data flow. Our lineage engine integrates deeply with legacy codebases, like mainframe COBOL modules, and modern environments alike—giving you full visibility into how data is transformed, used, and governed across your systems. With AI-powered analysis, automation, and an intuitive interface, Zengines transforms lineage from a bottleneck into a business advantage.
Ready to see what intelligent data lineage can do for your organization?

The 2008 financial crisis exposed a shocking truth: major banks couldn't accurately report their own risk exposures in real-time.
When Lehman Brothers collapsed, regulators discovered that institutions didn't know their actual exposure to toxic assets -- not because they were hiding it, but because they genuinely couldn't aggregate their own data fast enough.
Fifteen years and billions in compliance spending later, only 2 out of 31 Global Systemically Important Banks fully comply with BCBS-239 -- the regulation designed to prevent this exact problem.
The bottleneck? Data lineage.
BCBS-239 applies from January 1, 2016 for Global Systemically Important Banks (G-SIBs) and is recommended by national supervisors for Domestic Systemically Important Banks (D-SIBs) three years after their designation. In practice, this means hundreds of banks worldwide are now expected to comply.
Unlike regulations with fixed annual filing deadlines, BCBS-239 is an ongoing compliance requirement. Supervisors can test a bank's compliance with occasional requests on selected risk issues with short deadlines, gauging a bank's capacity to aggregate risk data rapidly and produce risk reports.
Think of it as a fire drill that can happen at any moment -- and with increasingly serious consequences for failure.
More than a decade after publication and eight years past the compliance deadline, the results are dismal. Only 2 out of 31 assessed Global Systemically Important Banks fully comply with all principles, and no single principle has been fully implemented by all banks.
Even more troubling, the compliance level across all principles barely improved from an average of 3.14 in 2019 to 3.17 in 2022 on a scale of 1 ("non-compliant") to 4 ("fully compliant"). At this rate of improvement, full compliance is decades away.
The consequences are escalating. The ECB guide explicitly mentions:
The Basel Committee makes it clear that banks' progress towards BCBS 239 compliance in recent years has not been satisfactory and that increased measures on the part of the supervisory authorities are to be expected to accelerate implementation.
Most banks have responded to BCBS-239 with predictable tactics:
These tactics are positive steps forward, but they are necessary rather than sufficient for compliance. In other words, they're checking boxes without fundamentally solving the problem.
The issue? Banks are treating BCBS-239 like a project with an end date, when it's actually an operational capability that must be demonstrated continuously.
Among the 14 principles, one capability has emerged as the make-or-break factor for compliance: data lineage.
Data lineage has been identified as one of the key challenges that banks have faced in aligning to the BCBS-239 principles, as it is one of the more time consuming and resource intensive activities demanded by the regulation.
Data lineage -- the ability to trace data from its original source through every transformation to its final destination -- sits at the intersection of virtually every BCBS-239 principle. The European Central Bank refers to data lineage as "a minimum requirement of data governance" in the latest BCBS 239 recommendations.
Here's why lineage is uniquely difficult:
It's invisible until you need it.
Unlike a data governance policy you can show an auditor or a data quality dashboard you can pull up, lineage is about proving flows, transformations, and dependencies that exist across dozens or hundreds of systems. You can't fake it in a PowerPoint.
It crosses organizational and system boundaries.
Complete lineage requires cooperation between IT, risk, finance, operations, and business units -- each with their own priorities, systems, and definitions. Further, data is handed off within and between systems, databases, and files, adding to the complexity of connecting what happens at each hand-off. Regulators are increasingly requiring detailed traceability of reported information, which can only be achieved through lineage across organizations and systems.
It must be current and complete.
The ECB requires "complete and up-to-date data lineages on data attribute level (starting from data capture and including extraction, transformation and loading) for the risk indicators, and their critical data elements." A lineage document from six months ago is worthless if your systems have changed.
It must work under pressure.
Supervisors increasingly require institutions to demonstrate the effectiveness of their data frameworks through on-site inspections and fire drills, with data lineage providing the audit trail necessary for these reviews. When a regulator asks "prove this number came from where you say it came from," you have hours -- not days -- to respond.
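Conceptually, responding to such a request means walking lineage records from the reported attribute back to its original source. A minimal sketch, assuming lineage is stored as simple downstream-to-upstream edges (all field and system names here are hypothetical):

```python
# Minimal sketch of an attribute-level lineage trace. Assumes lineage is
# stored as edges of the form (downstream_field, upstream_field, transform).
# Every field and system name below is invented for illustration.

LINEAGE_EDGES = [
    ("risk_report.total_exposure", "warehouse.exposure_agg", "SUM over counterparties"),
    ("warehouse.exposure_agg", "trading.positions", "nightly ETL load"),
    ("trading.positions", "mainframe.POSMAST", "COBOL batch extract"),
]

def trace_to_source(field: str, edges) -> list:
    """Walk lineage edges from a report attribute back to its original source."""
    upstream = {down: up for down, up, _ in edges}
    path = [field]
    while path[-1] in upstream:
        path.append(upstream[path[-1]])
    return path

print(trace_to_source("risk_report.total_exposure", LINEAGE_EDGES))
# report field -> warehouse aggregate -> trading system -> mainframe source
```

The hard part is not the walk itself but having current, attribute-level edges to walk -- which is exactly what the ECB's "complete and up-to-date" requirement demands.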
While 11 of the 14 principles benefit from good data lineage, regulatory guidance makes it explicitly mandatory for eight:
Most banks have invested heavily in data catalogs, metadata management platforms, and governance frameworks. Yet they still can't produce lineage evidence under audit conditions. Why?
Traditional approaches have three fatal flaws:
Excel-based lineage documentation becomes outdated within weeks as systems change. By the time you finish documenting one data flow, three others have been modified. Manual approaches simply can't keep pace with modern banking environments.
Modern data lineage tools can map cloud warehouses and APIs, but they hit a wall when they encounter legacy mainframe systems. They can't parse COBOL code, decode JCL job schedulers, or trace data through decades-old custom applications -- exactly where banks' most critical risk calculations often live.
Lineage that stops at the data warehouse is fundamentally incomplete under BCBS-239's end-to-end data lineage requirements. Regulators want to see the complete path -- from original source system through every transformation, including hard-coded business logic in legacy applications, to the final risk report. Most tools miss 40-70% of the actual transformation logic.
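To illustrate in miniature what automated extraction of transformation logic looks like, the toy sketch below scans COBOL-style MOVE and COMPUTE statements and emits lineage edges. The snippet and field names are invented, and a real parser must handle vastly more of the language than two statement types:

```python
import re

# Toy illustration of automated lineage extraction: scan COBOL-style
# MOVE/COMPUTE statements and emit (source, target, operation) edges.
# The snippet and field names are invented; real parsers go far deeper.

cobol = """
    MOVE ACCT-BAL TO WS-BALANCE.
    COMPUTE WS-EXPOSURE = WS-BALANCE * RISK-WEIGHT.
"""

edges = []
# MOVE <source> TO <target>.
for src, tgt in re.findall(r"MOVE\s+(\S+)\s+TO\s+(\S+?)\.", cobol):
    edges.append((src, tgt, "MOVE"))
# COMPUTE <target> = <expression>.  -- every identifier in the expression
# is an upstream source of the target field
for tgt, expr in re.findall(r"COMPUTE\s+(\S+)\s*=\s*(.+?)\.", cobol):
    for src in re.findall(r"[A-Z][A-Z0-9-]+", expr):
        edges.append((src, tgt, "COMPUTE"))

print(edges)
```

Even this crude scan recovers the fact that WS-EXPOSURE depends on both WS-BALANCE and RISK-WEIGHT -- the kind of hard-coded business logic that warehouse-only lineage never sees.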
This is where AI-powered solutions like Zengines fundamentally differ from traditional approaches.
Instead of manually documenting lineage, Zengines can automatically and comprehensively:
For many banks, the biggest lineage gap isn't in modern systems -- it's in legacy mainframes where critical risk calculations were encoded 20-60 years ago by developers who have long since retired. These systems are literal "black boxes": they produce numbers, but no one can explain how.
Zengines' Mainframe Data Lineage capability specifically addresses this challenge by:
This capability is essential for banks that need to prove how legacy calculations work -- whether for regulatory compliance, system modernization, or simply understanding their own risk models.
The critical question isn't "Do we have data lineage?" It's "Can we prove compliance through data lineage right now, under audit conditions, with short notice?"
Most banks would answer: "Well, sort of..."
That's not good enough anymore.
We've translated ECB supervisory expectations into a practical, principle-by-principle checklist. This isn't about aspirational capabilities or future roadmaps -- it's about what you can demonstrate today, under audit conditions, with short notice.
The bottleneck to full BCBS-239 compliance is clear: data lineage.
Traditional approaches -- manual documentation, point solutions, incomplete coverage -- can't solve this problem fast enough. The compliance deadline was 2016. Enforcement is escalating. Fire drills are becoming more frequent and demanding.
Banks that solve the lineage challenge with AI-powered automation will demonstrate compliance in hours instead of months. Those that don't will continue struggling with the same gaps, facing increasing regulatory pressure, and risking enforcement actions.
The technology to solve this exists today. The question is: how long can your bank afford to wait?
Schedule a demo with our team today to get started.

BOSTON, MA – October 29, 2025 – Zengines, the AI-powered data migration and data lineage platform, announces expanded support for RPG (Report Program Generator) language in its Data Lineage product. Organizations running IBM i (AS/400) systems can now rapidly analyze legacy RPG code alongside COBOL, dramatically accelerating modernization initiatives while reducing dependency on scarce programming expertise.
Many enterprises still rely on mission-critical applications written in RPG decades ago, creating what Zengines calls the "black box" problem – legacy technology where business logic, data flows, and requirements are locked away in old code with little to no documentation. As companies undertake digital transformation and cloud migration initiatives, understanding these legacy systems has become a critical bottleneck.
The challenge with RPG is particularly acute. While COBOL's descriptive, English-like syntax makes it easier to "read," RPG's fixed-format column specifications and cryptic operation codes require developers to decode what goes in which column while tracing through numbered indicators to follow the logic. This complexity, combined with a shrinking pool of RPG expertise, makes understanding these systems even more critical—and difficult—than their COBOL counterparts.
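To see why the fixed format is hard to read, consider a sketch that decodes a calculation ("C") spec purely by column position. The column boundaries follow the classic RPG III layout (Factor 1 in columns 18-27, operation code in 28-32, Factor 2 in 33-42), but they are simplified for illustration and the spec line itself is invented:

```python
# Illustrative decoder for a fixed-format RPG III calculation ("C") spec.
# Column boundaries are approximations of the classic layout, shown for
# demonstration -- real specs carry more fields (indicators, result, etc.).

def decode_c_spec(line: str) -> dict:
    """Pull the meaningful fields out of their fixed column positions."""
    line = line.ljust(80)  # pad so short lines slice safely
    return {
        "factor1": line[17:27].strip(),   # cols 18-27
        "opcode":  line[27:32].strip(),   # cols 28-32
        "factor2": line[32:42].strip(),   # cols 33-42
    }

# "CHAIN" performs a keyed random read -- nothing on the line says so;
# the meaning lives entirely in column positions and opcode conventions.
spec = "     C" + " " * 11 + "CUSTNO".ljust(10) + "CHAIN" + "CUSTMAST".ljust(10)
print(decode_c_spec(spec))
```

A developer has to carry this column map in their head for every spec type, which is precisely the decoding burden the paragraph above describes.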
"The majority of our enterprise customers are running legacy technology across multiple platforms – both mainframe COBOL environments and IBM i systems with RPG code," said Caitlyn Truong, CEO of Zengines. "By expanding our support to include RPG alongside COBOL, we can now address the full spectrum of legacy code challenges these organizations face. This means our customers can leverage a single AI-powered platform to comprehensively analyze, understand and modernize their legacy technology estate, rather than cobbling together multiple point solutions or relying on increasingly scarce programming expertise across different languages and systems."
The enhanced Zengines Data Lineage platform automatically ingests RPG code, job schedulers, and related artifacts to deliver:
This capability is critical for organizations navigating system replacements, M&A integrations, compliance initiatives, and technology modernization programs where understanding legacy RPG logic is essential for de-risking implementations.
Efforts to manage and modernize legacy systems break down when teams lack a complete understanding of existing logic. Migrations stall when teams cannot achieve functional coverage or resolve test failures. When validating new systems against legacy outputs, discrepancies inevitably emerge – but without understanding why the old system produces specific results, teams cannot effectively test, replicate, or improve functionality.
"Our customers use Zengines to reverse-engineer business requirements from legacy code," added Truong. "When a new system returns a different result for an interest calculation compared to that of the 40-year-old RPG program, teams need to understand the original logic to make informed decisions about what to preserve and what to update. That's the power of shining a light into the black box."
RPG parsing capability is now available on Zengines Data Lineage platform. Organizations can analyze both COBOL and RPG codebases within a single integrated platform.
Zengines is a technology company that transforms how organizations handle data migrations and modernization initiatives. Zengines serves business analysts, developers, and transformation leaders who need to map, change, and move data across systems. With deep expertise in AI, data migration, and legacy systems, Zengines helps organizations reduce time, cost, and risk associated with their most challenging data initiatives.
Media Contact:
Todd Stone
President, Zengines
todd@zengines.ai
IBM's RPG (Report Program Generator) began in 1959 with a simple mission: generate business reports quickly and efficiently. What started as RPG I evolved through multiple generations - RPG II, RPG III, RPG LE, and RPG IV - each adding capabilities that transformed it from a simple report tool into a full-featured business programming language. Today, RPG powers critical business applications across countless AS/400, iSeries, and IBM i systems. Yet for modern developers, understanding RPG's unique approach and legacy codebase presents distinct challenges that make comprehensive data lineage essential.
Built-in Program Cycle: RPG's fixed-logic cycle automatically handled file operations, making database processing incredibly efficient. The cycle read records, processed them, and wrote output with minimal programmer intervention - a major strength that processed data sequentially, making it ideal for report generation and business data handling.
Native Database Integration: RPG was designed specifically for IBM's database systems, providing direct interaction with database files and making it ideal for transactional systems where fast and reliable data processing is essential. It offered native access to DB2/400 and its predecessors, with automatic record locking, journaling, and data integrity features.
Rapid Business Application Development: For its intended purpose - business reports and data processing - RPG was remarkably fast to code. The fixed-format specifications (H, F, D, C specs) provided a structured framework that enforced consistency and simplified application creation.
Exceptional Performance and Scalability: RPG applications typically ran with exceptional efficiency on IBM hardware, handling large volumes of data and processing massive datasets with minimal resource consumption.
Evolutionary Compatibility: The language's evolution path meant that RPG II code could often run unchanged on modern IBM i systems - a testament to IBM's commitment to backward compatibility that spans over 50 years.
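The fixed-logic cycle described above can be loosely sketched as follows. This is a Python analogy, not real RPG: the runtime supplies the read and write steps automatically, and the programmer supplies only the per-record calculation (the report fields and formatting here are invented):

```python
# A loose Python analogy (not actual RPG) of the fixed-logic cycle: the
# runtime itself reads each record, runs the calculations, and writes
# output until end of file -- the programmer writes only the middle step.

def rpg_style_cycle(records, calculate, write):
    """Emulate read -> calculate -> write, record by record, until EOF."""
    for record in records:          # the cycle's automatic READ
        result = calculate(record)  # the programmer's calculation specs
        write(result)               # the cycle's automatic output

report_lines = []
rpg_style_cycle(
    records=[{"item": "A", "qty": 3, "price": 2.0},
             {"item": "B", "qty": 1, "price": 5.0}],
    calculate=lambda r: f"{r['item']}  {r['qty'] * r['price']:.2f}",
    write=report_lines.append,
)
print(report_lines)
```

This implicit loop is what made RPG so terse for report generation -- and also why its control flow is invisible to developers expecting an explicit main loop.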
RPG II (Late 1960s): The classic fixed-format version with its distinctive column-specific coding rules and built-in program logic cycle, used on System/3, System/32, System/34, and System/36.
RPG III (1978): Added subroutines, improved file handling, and more flexible data structures while maintaining the core cycle approach. Introduced with System/38, later rebranded as "RPG/400" on AS/400.
RPG IV/ILE RPG (1994): The major evolution that introduced modular programming with procedures, prototypes, and the ability to create service programs within the Integrated Language Environment - finally bringing modern programming concepts to RPG.
RPG LE - Limited Edition (1995): A simplified version of RPG IV designed for smaller systems, notably including a free compiler to improve accessibility.
Free-Format RPG (2013): Added within RPG IV, this broke away from the rigid column requirements while maintaining backward compatibility, allowing developers to write code similar to modern languages.
Steep Learning Curve: RPG's fixed-logic cycle and column-specific formatting are unlike any modern programming language. New developers must understand both the language syntax and the underlying program cycle concept, which can be particularly challenging.
Limited Object-Oriented Capabilities: Even modern RPG versions lack full object-oriented programming capabilities, making it difficult to apply contemporary design patterns and architectural approaches.
Cryptic Operation Codes: Traditional RPG used operation codes like "CHAIN," "SETLL," and "READE" with rigid column requirements that aren't intuitive to developers trained in modern, free-format languages.
Complex Maintenance Due to Evolution: The evolution from RPG II (late 1960s) through RPG III (1978) to RPG IV/ILE RPG (1994) and finally free-format coding (2013) created hybrid codebases mixing multiple RPG styles across nearly 50 years of development, making maintenance and understanding complex for teams working across different generations of the language.
Proprietary IBM-Only Ecosystem: Unlike standardized languages, RPG has always been IBM's proprietary language, creating vendor lock-in and concentrating expertise among IBM specialists rather than fostering broader community development.
RPG presents unique challenges that go beyond typical legacy system issues, rooted in decades of development practices:
Given these unique challenges - multiple format styles, embedded business logic, and lost institutional knowledge - how do modern teams gain control over their RPG systems without risking business disruption? The answer lies in understanding what your systems actually do before attempting to change them.
Modern data lineage tools provide exactly this understanding by:
RPG systems represent decades of business logic investment that often process a company's most critical transactions. While the language may seem archaic to modern eyes, the business logic it contains is frequently irreplaceable. Success in managing RPG systems requires treating them not as outdated code, but as repositories of critical business knowledge that need proper mapping and understanding.
Data lineage tools bridge the gap between RPG's unique characteristics and modern development practices, providing the visibility needed to safely maintain, enhance, plan modernization initiatives, extract business rules, and ensure data integrity during system changes. They make these valuable systems maintainable and evolutionary rather than simply survivable.