
Mainframe Data Lineage: The Critical Path to Regulatory Survival in Financial Services

July 17, 2025
Caitlyn Truong

The Billion-Dollar Cost of Operational Blindness

The financial services industry is learning expensive lessons about the true cost of treating mainframe systems as "black boxes." Over the past few years, three major banking institutions have paid combined penalties approaching $3 billion—not for exotic trading losses or cyber breaches, but for fundamental failures in data visibility and risk management that proper mainframe data lineage could have prevented.

With mainframes processing 70% of global financial transactions daily, 95% of credit card transactions, and 87% of ATM transactions, these aren't isolated incidents—they're wake-up calls for an industry that can no longer afford operational blindness in its most critical infrastructure.

JPMorgan's $348M Wake-Up Call: When Trading Data Goes Dark

In March 2024, JPMorgan Chase paid $348 million in penalties for a decade-long failure that left billions of transactions unmonitored across 30+ global trading venues. The US Federal Reserve and Office of the Comptroller of the Currency found that "certain trading and order data through the CIB was not feeding into its trade surveillance platforms" between 2014 and 2023.

This wasn't an oversight—it was a systematic breakdown of the market conduct risk controls required under US banking regulations.

The Mainframe Connection

JPMorgan, like 92 of the world's top 100 banks, relies heavily on mainframe systems for core trading operations. These IBM Z systems process the vast majority of transaction volume, and the critical problem emerges when trading data originates on mainframes and must feed downstream surveillance platforms. Without comprehensive data lineage, those handoffs create dangerous blind spots where billions in transactions can slip through unmonitored.

The $348 million penalty signals that regulators expect complete transparency in data flows. For banks running critical operations on mainframe systems without proper data lineage, JPMorgan's experience serves as an expensive reminder: you can't manage what you can't see.

Citi's $536M Data Governance Breakdown: A Decade of Blindness

The pain continued with Citibank's even costlier lesson. In October 2020, Citi received a $400M penalty from the Office of the Comptroller of the Currency, followed by an additional $136M in combined fines in 2024 from both the OCC and Federal Reserve—totaling $536M for systematic failures in data governance and risk data aggregation that regulators called "longstanding" and "widespread."

The Core Problem

The OCC found that Citi failed to establish effective risk data aggregation processes, develop comprehensive data governance plans, produce timely regulatory reporting, and adequately report data quality status. Some issues dated back to 2013—nearly a decade of compromised data visibility.

The Mainframe Reality

Like virtually all major banks, Citi runs core banking operations on mainframes where critical risk data originates. Every loan, trade, and customer transaction flows through these platforms before being aggregated into enterprise risk reports that regulators require. The problem? Most banks treat mainframes as "black boxes" where data transformations remain opaque to downstream risk management systems.

Citi's penalty represents the cost of operational blindness in critical infrastructure. The regulatory failures around data governance and risk aggregation highlight exactly the kind of visibility gaps that comprehensive mainframe data lineage addresses.

Danske Bank's $2B+ Problem: BCBS 239's Persistent Challenge

The pattern culminates with Danske Bank's ongoing struggle, which has resulted in $2B+ in penalties since 2020. While these stemmed from various violations, many could likely have been exposed earlier through proper BCBS 239 compliance. The bank's transaction monitoring failures and AML deficiencies represent clear gaps in the comprehensive risk data aggregation that BCBS 239 requires.

BCBS 239: Banking's Most Persistent Challenge

More than a decade after publication and nine years past its 2016 compliance deadline, BCBS 239 remains banking's most persistent regulatory challenge. The November 2023 progress report reveals a sobering reality: only 2 out of 31 global systemically important banks achieved full compliance. Not a single principle has been fully implemented across all assessed banks.

The Escalating Consequences

The ECB has made BCBS 239 deficiencies a top supervisory priority for 2025-2027, explicitly warning that non-compliance could trigger "enforcement actions, capital add-ons, and removal of responsible executives." With regulatory patience exhausted, the consequences are no longer just financial—they're existential.

The Mainframe Blind Spot: Why Traditional Approaches Fail

Most BCBS 239 discussions miss a critical point: the majority of banks' risk data originates on mainframe systems that handle core banking operations and risk calculations. The Basel Committee's assessment highlights the core issue: "Several banks still lack complete data lineage, which complicates their ability to harmonize systems and detect data defects."

With mainframes handling 83% of all global banking transactions, understanding these systems is no longer optional. Yet banks continue to struggle because:

  • Legacy Complexity: Decades-old COBOL programs lack documentation and follow code patterns that traditional lineage tools can't interpret
  • Operational Opacity: Data transformations through complex JCL jobs, VSAM files, and DB2 operations remain invisible to downstream risk management systems
  • Technical Barriers: Business analysts can't access or understand mainframe data flows without deep technical expertise

How Mainframe Data Lineage Solves the Crisis

The solution lies in comprehensive mainframe data lineage that addresses these fundamental blind spots:

Complete Visibility: Modern tools can trace data flows from mainframe CICS transactions through DB2 operations to downstream systems, mapping exactly how critical risk data moves through complex transformations that conventional tools miss.

Business Accessibility: The right platforms enable business analysts to discover and act on mainframe information without requiring technical expertise—transforming data lineage from technical obscurity into actionable business intelligence.

Automated Monitoring: Real-time tracking of mainframe batch processes detects when critical risk calculations fail or produce inconsistent results, preventing the systematic failures that cost JPMorgan, Citi, and Danske Bank billions.

Regulatory Preparedness: Banks can trace exactly where specific data resides within mainframe environments and extract it rapidly when regulators demand it—the core capability that BCBS 239 requires.
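To make that concrete, here is a minimal sketch of the impact analysis a lineage graph enables. The node names (CICS transactions, VSAM files, DB2 tables, downstream feeds) and the edges are invented for illustration; in practice a lineage platform builds this graph automatically from source code and runtime metadata rather than by hand.

```python
from collections import deque

# Hypothetical lineage graph: each node is a mainframe artifact (transaction,
# file, table, or feed); an edge A -> B means "data flows from A to B".
LINEAGE = {
    "CICS.TXN.POSTING":        ["VSAM.ACCT.MASTER"],
    "VSAM.ACCT.MASTER":        ["BATCH.JOB.RISKAGG"],
    "BATCH.JOB.RISKAGG":       ["DB2.RISK_POSITIONS"],
    "DB2.RISK_POSITIONS":      ["FEED.TRADE.SURVEILLANCE", "FEED.REG.REPORTING"],
    "FEED.TRADE.SURVEILLANCE": [],
    "FEED.REG.REPORTING":      [],
}

def downstream(node: str) -> list[str]:
    """Breadth-first walk: every artifact that ultimately consumes `node`'s data."""
    seen, queue, consumers = {node}, deque([node]), []
    while queue:
        for nxt in LINEAGE.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                consumers.append(nxt)
    return consumers

# "If this CICS posting transaction changes, which surveillance and regulatory
# feeds are affected?" -- the question an unmonitored data flow leaves unanswered.
print(downstream("CICS.TXN.POSTING"))
```

The same walk run in reverse answers the regulator's question: where did the number on this report originate?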

The Regulatory Survival Imperative

After a decade of BCBS 239 implementation struggles and billions of dollars in recent penalties, it's clear that traditional approaches aren't working. Banks still wrestling with data aggregation challenges have, by and large, not invested in understanding their mainframe data flows.

The evidence is overwhelming:

  • JPMorgan's trading surveillance gaps cost $348M
  • Citi's data governance failures cost $536M
  • Danske Bank's ongoing compliance failures exceed $2B
  • Only 2 of 31 major banks achieve full BCBS 239 compliance

With the ECB intensifying enforcement and supervisory patience exhausted, mainframe data lineage isn't just modernization—it's regulatory survival infrastructure.

The Path Forward: From Black Box to Transparency

The financial services industry stands at a crossroads. Banks can continue treating mainframe systems as mysterious legacy platforms while paying escalating regulatory penalties, or they can invest in the comprehensive data lineage capabilities that modern compliance demands.

The choice is clear: illuminate your mainframe data flows or continue paying the billion-dollar cost of operational blindness. With regulators expecting rapid and recurring risk data aggregation, banks can no longer manage what they cannot see.

Ready to illuminate your mainframe data flows and achieve regulatory compliance? The path forward starts with understanding what you can't currently see—before regulators demand answers you can't provide.

You may also like

The Biggest Mistakes in Mainframe Modernization

Mistake #1: Underestimating embedded complexity.

Mainframe systems combine complex data formats AND decades of embedded business rules that create a web of interdependent complexity. VSAM files aren't simple databases: they contain redefinitions, multi-view records, and conditional logic that determines data values based on business states. COBOL programs embed business intelligence such as customer-type-based calculations, regulatory compliance rules, and transaction processing logic that is often undocumented. Teams treating mainframe data like standard files discover painful surprises during migration when they realize the "data" includes decades of business logic scattered throughout conditional statements and 88-level condition names. This complexity extends to testing: converting COBOL business rules and EBCDIC data formats demands extensive validation that most distributed-system testers can't handle without deep mainframe expertise.
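As a small illustration of that embedded complexity, consider the hypothetical record layout below (the field names, type codes, and 20-byte layout are invented). The same bytes mean different things depending on a customer-type code, which is exactly the behavior COBOL expresses with REDEFINES and 88-level condition names, and the raw data is EBCDIC rather than ASCII besides.

```python
import codecs

# Hypothetical 20-byte record layout (invented for illustration):
#   cols 1-8   CUST-ID    PIC X(8)
#   col  9     CUST-TYPE  PIC X      88-levels: 'R' = retail, 'C' = corporate
#   cols 10-19 TYPE-DATA  PIC X(10)  REDEFINED: credit score for retail,
#                                    internal rating grade for corporate
RAW = "AB123456R0000000712".ljust(20).encode("cp037")  # EBCDIC, as stored on the mainframe

def interpret(record: bytes) -> dict:
    text = codecs.decode(record, "cp037")        # EBCDIC -> Unicode
    cust_id, cust_type, type_data = text[:8], text[8], text[9:19]
    result = {"cust_id": cust_id}
    # The business rule lives here, not in any schema: how TYPE-DATA is read
    # depends on CUST-TYPE, the way REDEFINES + 88-levels work in COBOL.
    if cust_type == "R":
        result["credit_score"] = int(type_data)
    elif cust_type == "C":
        result["rating_grade"] = type_data.strip()
    else:
        raise ValueError(f"unknown CUST-TYPE {cust_type!r}")
    return result

print(interpret(RAW))   # {'cust_id': 'AB123456', 'credit_score': 712}
```

Multiply this by thousands of fields and decades of modifications, and the scale of the migration and validation problem becomes clear.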

Mistake #2: Delaying dependency discovery.

Mainframes feed dozens of systems through complex webs of middleware such as WebSphere, CICS Transaction Gateway, and enterprise service buses, plus shared utilities, schedulers, and business processes. The costly mistake is waiting too long to thoroughly map all these connections, especially downstream data feeds and consumption patterns. Your data lineage must capture every system consuming mainframe data, from reporting tools to partner integrations, because projects can't go live when teams discover late in development that preserving those feeds and business-process expectations requires extensive rework that was never budgeted or planned.
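As a rough sketch of what early dependency discovery looks like, the fragment below scans an invented piece of JCL for EXEC and DD statements and builds a job-level read/write map. Real tooling has to go much further, resolving PROCs, symbolic parameters, GDGs, and dynamic allocations that a simple text pass like this cannot see.

```python
import re
from collections import defaultdict

# Invented JCL fragment used purely for illustration.
JCL = """
//DAILYRSK JOB (ACCT),'DAILY RISK'
//STEP010  EXEC PGM=RISKCALC
//INACCT   DD DSN=PROD.ACCT.MASTER,DISP=SHR
//OUTPOS   DD DSN=PROD.RISK.POSITIONS,DISP=(NEW,CATLG)
//STEP020  EXEC PGM=FEEDGEN
//INPOS    DD DSN=PROD.RISK.POSITIONS,DISP=SHR
//OUTFEED  DD DSN=PROD.REG.FEED,DISP=(NEW,CATLG)
"""

PGM_RE = re.compile(r"EXEC\s+PGM=([A-Z0-9]+)")
DD_RE = re.compile(r"DSN=([A-Z0-9.]+).*DISP=(\(?[A-Z]+)", re.IGNORECASE)

def scan(jcl: str) -> dict:
    """Rough job map: which program reads and writes which dataset."""
    usage, program = defaultdict(lambda: {"reads": [], "writes": []}), None
    for line in jcl.splitlines():
        if m := PGM_RE.search(line):
            program = m.group(1)
        elif (m := DD_RE.search(line)) and program:
            dsn, disp = m.group(1), m.group(2).lstrip("(")
            kind = "reads" if disp.startswith(("SHR", "OLD")) else "writes"
            usage[program][kind].append(dsn)
    return dict(usage)

print(scan(JCL))
```

Even this crude map answers the first question a modernization team needs: which programs produce the datasets that downstream consumers depend on.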

Mistake #3: Tolerating knowledge bottlenecks.

Relying on two or three mainframe experts for a million-line modernization project creates a devastating traffic jam where entire teams sit idle waiting for answers. Around 60% of mainframe specialists are approaching retirement, yet organizations attempt massive COBOL conversions with skeleton crews already stretched thin by daily operations. Your expensive development team, cloud architects, and business analysts become inefficient and underutilized because everything funnels through the same overworked experts. The business logic embedded in decades-old COBOL programs often exists nowhere else, creating dangerous single points of failure that can derail years of investment and waste millions in team resources.

Mistake #4: Modernizing everything indiscriminately.

Organizations waste enormous effort converting obsolete, duplicate, and inefficient code that should be retired or consolidated instead. Mainframe systems often contain massive amounts of redundant code: programs copied by developers who didn't understand dependencies, inefficient routines that were never optimized, and abandoned utilities that no longer serve any purpose. Research shows that 80% of legacy code hasn't been modified in over 5 years, yet teams spend months refactoring dead applications and duplicate logic that add no business value. The mistake is treating millions of lines of code as equally important rather than analyzing which programs actually deliver business functionality. Proper assessment identifies code for retirement, consolidation, or optimization before expensive conversion, dramatically reducing modernization scope and cost.
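A simple triage heuristic illustrates the idea; the inventory, dates, and five-year threshold below are invented, and a real assessment would also weigh execution statistics, duplication, and business criticality.

```python
from datetime import date

# Invented inventory: program name, last change date, and whether any JCL job,
# CICS transaction, or other program still references it.
INVENTORY = [
    {"program": "ACCTPOST", "last_changed": date(2024, 11, 2), "referenced": True},
    {"program": "RATECONV", "last_changed": date(2009, 3, 17), "referenced": True},
    {"program": "YR2KFIX",  "last_changed": date(1999, 12, 5), "referenced": False},
    {"program": "ACCTPST2", "last_changed": date(2012, 6, 1),  "referenced": False},
]

def triage(inventory, stale_years=5, today=date(2025, 7, 17)):
    """Bucket programs: retire unreferenced code, review stale-but-live code,
    and convert the rest."""
    buckets = {"retire": [], "review": [], "convert": []}
    for item in inventory:
        age_years = (today - item["last_changed"]).days / 365
        if not item["referenced"]:
            buckets["retire"].append(item["program"])
        elif age_years > stale_years:
            buckets["review"].append(item["program"])
        else:
            buckets["convert"].append(item["program"])
    return buckets

print(triage(INVENTORY))
```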

Mistake #5: Starting without clear business objectives.

Many modernization projects fail because organizations begin with technology solutions rather than business outcomes. Teams focus on "moving to the cloud" or "getting off COBOL" without defining what success looks like in business terms. According to research, 80% of IT modernization efforts fall short of savings targets because they fail to address the right complexity. The costly mistake is launching modernization without stakeholder alignment on specific goals, whether that's reducing operational costs, lowering business continuity risk, or enabling new capabilities. Projects that start with clear business cases and measurable objectives have significantly higher success rates and can demonstrate ROI that funds subsequent modernization phases.

If you want to avoid these mistakes or need help overcoming these challenges, reach out to Zengines. Three steps will get you started:

1. Discover what you actually have.

Use automated assessment tools to catalog your entire mainframe environment—beyond obvious COBOL and DB2 applications. Many organizations discover forgotten Assembler routines, VSAM files, and IMS databases with critical business logic buried in decades-old systems. These tools expose mainframe artifacts that organizations didn't know they had, relationships they didn't realize existed, and assets that are no longer in use. Understanding complete data lineage from source files through transaction processing to downstream systems prevents costly surprises that derail projects. One Zengines customer's assessment revealed integration points and data dependencies that weren't documented anywhere, saving the team from breaking critical business processes.

2. Build strategic momentum with the right business case.

Build the business case that wins over skeptics and secures future funding. The first step is selecting the systems, programs, or modules with visible pain points: high software license costs, batch windows that delay business operations, or compliance deadlines that create urgency. These issues make ROI easy to demonstrate and build stakeholder support. A major retailer chose supply chain applications that consumed significant license fees, delivering $2M+ annual savings that funded broader modernization. A financial services firm tackled notorious overnight batch processing, cutting runtime from 10 hours to 3 hours for $10M in savings that transformed executive perception of modernization value. Success with the right first application creates organizational momentum and proves modernization capabilities, making subsequent projects easier to approve and execute.

3. Be sensible with where you start to prove value.

Begin with read-only reporting or customer lookup systems that access mainframe data but don't update core business transactions. These applications have simpler data lineage paths and fewer CICS integration points, making them ideal proving grounds for your conversion methodology. They carry minimal business risk while demonstrating that your approach handles real mainframe data formats and business logic correctly. More importantly, such projects validate the accuracy of your data lineage mapping: you'll discover whether customer information actually flows the way you documented it, and uncover valuable insights about data quality, usage patterns, and undocumented business rules embedded in COBOL programs. This intelligence becomes crucial for planning larger transaction-processing modernizations. Success with inquiry applications builds organizational confidence by showing concrete evidence that modernization works with your specific mainframe environment, making it significantly easier to secure executive approval and funding for mission-critical system upgrades.

Three Keys to Successful Mainframe Refactoring: A Practical Guide

Some 96% of companies are moving mainframe workloads to the cloud, yet 74% of modernization projects fail. Organizations therefore need a systematic approach to refactoring legacy systems. The difference between success and failure lies in addressing three critical challenges: dependency visibility, testing optimization, and knowledge democratization.

The Hidden Challenge

Mainframe systems built over decades contain intricate webs of dependencies that resist modernization, but the complexity runs deeper than most organizations realize. Unlike modern applications designed with clear interfaces, documentation standards and plentiful knowledge resources, legacy systems embed business logic within data relationships, file structures, and program interactions that create three critical failure points during mainframe refactoring:

Hidden Dependencies: Runtime data flows and dynamic relationships that static analysis cannot reveal, buried in millions of lines of code across interconnected systems.

Invisible Testing Gaps: Traditional validation approaches fail to catch the complex data transformations and business logic embedded in mainframe applications, leaving critical edge cases undiscovered until production.

Institutional Knowledge Scarcity: The deep understanding needed to navigate these invisible complexities exists only in the minds of departing veterans.

Any one of these challenges can derail a refactoring project. Combined, they create a perfect storm that explains why 74% of modernization efforts fail. Success requires ensuring this critical information is available throughout the refactoring effort, not left to chance or discovery during code transformation.

Key 1: Master Data Dependencies Before Code Conversion

The Problem: Runtime data flows and dynamic dependencies create invisible relationships that static analysis cannot reveal, spanning program execution flows, database navigation patterns, and runtime behaviors.

Implementation Checklist

□ Trace Data Element Journeys Across All Systems

  • Identify every program action that reads, modifies, or depends on specific data structures
  • Map cross-application data sharing through job control language (JCL) and program execution sequences

□ Understand Database and Program Execution Patterns

  • Analyze JCL/CL job flows to understand program dependencies and execution order
  • Map hierarchical (IMS) and network (IDMS) database structures and navigation paths
  • Identify data-driven business logic that changes based on content and processing context

□ Access Hidden Business Rules

  • Identify validation logic embedded in program execution sequences
  • Discover error handling routines that function as business rules
  • Uncover edge cases handled through decades of modifications

□ Generate Impact Analysis

  • Visualize effects of modifying specific programs or data structures
  • Understand downstream impacts from changing data formats or program execution flows
  • Access comprehensive decomposition analysis for monolithic applications

What It Looks Like in Real Life

Manual Approach: Teams spend months interviewing SMEs, reading through millions of lines of undocumented code, and creating spreadsheets to track data flows and job dependencies. The scale and complexity make it impossible to find all relationships—critical dependencies exist in JCL execution sequences, database navigation patterns, and runtime behaviors that are buried in decades of modifications. Even after extensive documentation efforts, teams miss interconnected dependencies that cause production failures.

With Zengines: Complete data lineage mapping across all systems in days. Interactive visualization shows exactly how customer data flows from a 1985-vintage COBOL program through job control sequences, database structures, and multiple processing steps, including execution patterns and database behaviors that documentation never captured.
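For a feel of what field-level tracing involves, here is a deliberately naive sketch that classifies statements in an invented COBOL fragment as reads or writes of a given field. A real lineage platform resolves copybooks, REDEFINES, group-level MOVEs, and dynamic calls that a text scan like this would miss.

```python
import re

# Invented COBOL fragment; real programs pull field definitions from copybooks
# and hide flows behind REDEFINES, dynamic CALLs, and group-level MOVEs.
COBOL = """
       MOVE ACCT-BALANCE TO WS-BALANCE.
       COMPUTE WS-EXPOSURE = WS-BALANCE * RISK-WEIGHT.
       IF CUST-TYPE-CORPORATE
          MOVE WS-EXPOSURE TO RPT-EXPOSURE
       END-IF.
       WRITE RISK-RECORD FROM RPT-EXPOSURE-REC.
"""

def field_usage(source: str, field: str):
    """Classify each statement touching `field` as a read or a write."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if field not in line:
            continue
        stmt = line.strip()
        # MOVE x TO field and COMPUTE field = ... modify it; anything else reads it.
        writes = bool(re.search(
            rf"(MOVE\s+\S+\s+TO\s+{field})|(COMPUTE\s+{field}\s*=)", stmt))
        hits.append((lineno, "write" if writes else "read", stmt))
    return hits

for hit in field_usage(COBOL, "WS-EXPOSURE"):
    print(hit)
```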

Success Metrics

  • Complete visibility into data flows, program dependencies, and execution patterns
  • Real-time access to comprehensive refactoring complexity analysis
  • Zero surprises during code conversion phase

Key 2: Implement Data Lineage-Driven Testing

The Problem: Traditional testing approaches fail to validate the complex data transformations and business logic embedded in mainframe applications. While comprehensive testing includes performance, security, and integration aspects, the critical foundation is ensuring data accuracy and transformation correctness.

Implementation Checklist

□ Establish Validation Points at Every Data Transformation

  • Identify test checkpoints at each step where data changes hands between programs
  • Monitor intermediate calculations and business rule applications
  • Track data transformation throughout the process

□ Generate Comprehensive Data-Driven Test Scenarios

  • Create test cases covering all conditional logic branches based on data content
  • Build transaction sequences that replicate actual data flow patterns
  • Include edge cases and error conditions that exercise unusual data processing paths

□ Enable Data-Focused Shadow Testing

  • Process test data through refactored systems alongside legacy systems
  • Compare data transformation results at every lineage checkpoint
  • Monitor data accuracy and consistency during parallel data processing

□ Validate Data Integrity at Scale

  • Test with comprehensive datasets to identify data accuracy issues
  • Monitor for cumulative calculation errors in long-running data processes
  • Verify data transformations produce identical results to legacy systems

What It Looks Like in Real Life

Manual Approach: Testing teams manually create hundreds of test cases, then spend weeks comparing data outputs from old and new systems. The sheer volume of data transformation points makes comprehensive coverage impractical—when data discrepancies appear across thousands of calculation steps, teams have no way to trace where in the complex multi-program data flow the difference occurred. Manual comparison of data transformations across interconnected legacy systems becomes impossible at scale.

With Zengines: Automated test generation creates thousands of data scenarios based on actual processing patterns, and self-service validation at every data transformation checkpoint pinpoints exactly where refactored logic produces different results—down to the specific calculation or business rule application.
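The shadow-testing idea behind those checkpoints can be sketched in a few lines. The two interest calculations below are invented stand-ins for a legacy COBOL step and its refactored replacement, with a deliberate 360-day-year bug planted in the new one; comparing at the checkpoint surfaces the divergence per record instead of leaving it buried in end-of-batch totals.

```python
from decimal import Decimal

# Hypothetical stand-ins for one legacy calculation and its refactored version.
def legacy_daily_interest(balance: Decimal, rate: Decimal) -> Decimal:
    return (balance * rate / 365).quantize(Decimal("0.01"))

def refactored_daily_interest(balance: Decimal, rate: Decimal) -> Decimal:
    return (balance * rate / 360).quantize(Decimal("0.01"))  # planted bug: 360-day year

CHECKPOINTS = [("daily-interest", legacy_daily_interest, refactored_daily_interest)]

def shadow_run(records):
    """Run each record through both implementations at every checkpoint and
    report divergences record by record."""
    mismatches = []
    for rec in records:
        for name, old, new in CHECKPOINTS:
            expected, actual = old(**rec), new(**rec)
            if expected != actual:
                mismatches.append((name, rec, expected, actual))
    return mismatches

records = [
    {"balance": Decimal("125000.00"), "rate": Decimal("0.0425")},
    {"balance": Decimal("98.10"), "rate": Decimal("0.0199")},
]
print(shadow_run(records) or "all checkpoints match")
```

In practice the checkpoint list would typically be derived from the lineage map itself, so every point where data changes hands gets its own comparison.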

Success Metrics

  • Test coverage across all critical data transformation points
  • Validation of data accuracy and business logic correctness
  • Confidence in refactored data processing before cutover

Key 3: Democratize Institutional Knowledge

The Problem: Critical system knowledge exists only in the minds of retiring experts, creating bottlenecks that severely delay modernization projects.

Implementation Checklist

□ Access Comprehensive Data Relationship Mapping

  • Obtain complete visualization of how data flows between systems and programs
  • Understand business logic and transformation rules embedded in legacy code
  • Enable team members to explore system dependencies without expert consultation

□ Extract Business Context from Legacy Systems

  • Capture business rules and validation requirements from existing code
  • Link technical implementations to business processes and requirements
  • Create accessible knowledge bases with complete rule extraction

□ Enable Independent Impact Analysis

  • Provide capabilities to show downstream effects of proposed changes
  • Allow developers to trace data origins and dependencies during refactoring
  • Support business analysts in validating modernized logic

□ Eliminate SME Consultation Bottlenecks

  • Provide role-based access to comprehensive system analysis
  • Enable real-time exploration of data flows and business rules
  • Deliver complete context for development and testing teams

What It Looks Like in Real Life

Manual Approach: Junior developers submit tickets asking "What happens if I change this customer validation routine?" and wait two weeks for Frank, the lone remaining SME, to review the code and explain the downstream impacts. The interconnected nature of decades-old systems makes it impractical to document all relationships—Frank might remember 47 downstream systems but miss the obscure batch job that runs monthly. The breadth of institutional knowledge across millions of lines of code is impossible to capture manually, creating constant bottlenecks as project velocity crawls.

With Zengines: Any team member clicks on the validation routine and instantly sees its complete impact map—every consuming program, all data flows, and business rules. Questions get answered in seconds instead of weeks, keeping modernization projects on track.

Success Metrics

  • 80% reduction in SME consultation requests
  • Independent access to system knowledge for all team members
  • Accelerated decision-making without knowledge transfer delays

Technology Enablers

Modern platforms like Zengines automate much of the dependency mapping, testing framework creation, and knowledge extraction.

Take Action

Successful mainframe refactoring demands more than code conversion expertise. Organizations that master data dependencies, implement lineage-driven testing, and democratize institutional knowledge create sustainable competitive advantages in their modernization efforts. The key is addressing these challenges systematically before beginning code transformation, not discovering them during production deployment.

Next Steps: Assess your current capabilities in each area and prioritize investments based on your specific modernization timeline and business requirements.
