Esther Jesurum

Chief Architect

Esther Jesurum is the Chief Architect at Zengines, leading the design and development of the company’s AI-driven data conversion platform. With a PhD in Applied Mathematics and Computer Science, she brings deep expertise in algorithms, complexity theory, and software architecture to drive innovation in data integration and transformation.

Prior to Zengines, Esther held senior architecture and engineering roles at TIBCO, Aurea Software, and Progress Software, where she specialized in designing complex software systems, data transformation, and modernization solutions. With extensive experience across the entire software lifecycle, she combines technical excellence with a problem-solving mindset to build scalable, high-performance platforms.

Esther is committed to advancing AI-driven solutions that enhance efficiency and scalability in data transformation.

Posts by this Author

Three Keys to Successful Mainframe Refactoring: A Practical Guide

With 96% of companies moving mainframe workloads to the cloud, yet 74% of modernization projects failing, organizations need a systematic approach to refactoring legacy systems. The difference between success and failure lies in addressing three critical challenges: dependency visibility, testing optimization, and knowledge democratization.

The Hidden Challenge

Mainframe systems built over decades contain intricate webs of dependencies that resist modernization, but the complexity runs deeper than most organizations realize. Unlike modern applications designed with clear interfaces, documentation standards and plentiful knowledge resources, legacy systems embed business logic within data relationships, file structures, and program interactions that create three critical failure points during mainframe refactoring:

Hidden Dependencies: Runtime data flows and dynamic relationships that static analysis cannot reveal, buried in millions of lines of code across interconnected systems.

Invisible Testing Gaps: Traditional validation approaches fail to catch the complex data transformations and business logic embedded in mainframe applications, leaving critical edge cases undiscovered until production.

Institutional Knowledge Scarcity: The deep understanding needed to navigate these invisible complexities exists only in the minds of departing veterans.

Any one of these challenges can derail a refactoring project. Combined, they create a perfect storm that explains why 74% of modernization efforts fail. Success requires ensuring this critical information is available throughout the refactoring effort, not left to chance or discovery during code transformation.

Key 1: Master Data Dependencies Before Code Conversion

The Problem: Complex runtime data flows and dynamic dependencies create invisible relationships that span program execution flows, database navigation patterns, and runtime behaviors, none of which static analysis can reveal.

Implementation Checklist

□ Trace Data Element Journeys Across All Systems

  • Identify program actions that read, modify, or depend on specific data structures
  • Map cross-application data sharing through job control language (JCL) and program execution sequences

□ Understand Database and Program Execution Patterns

  • Analyze JCL/CL job flows to understand program dependencies and execution order
  • Map hierarchical (IMS) and network (IDMS) database structures and navigation paths
  • Identify data-driven business logic that changes based on content and processing context

□ Access Hidden Business Rules

  • Identify validation logic embedded in program execution sequences
  • Discover error handling routines that function as business rules
  • Uncover edge cases handled through decades of modifications

□ Generate Impact Analysis

  • Visualize effects of modifying specific programs or data structures
  • Understand downstream impacts from changing data formats or program execution flows
  • Access comprehensive decomposition analysis for monolithic applications
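
To make the first and last items on this checklist concrete, here is a minimal sketch, assuming job-step metadata (which datasets each program step reads and writes) has already been extracted from JCL and program listings. The `JobStep` structure, job names, and datasets are invented for illustration and are not a Zengines API.

```python
from collections import namedtuple, deque

# Illustrative record extracted from JCL job steps and program listings:
# which datasets each program step reads and which it writes.
JobStep = namedtuple("JobStep", ["job", "program", "reads", "writes"])

steps = [
    JobStep("NIGHTLY01", "CUSTMAST", reads={"CUST.INPUT"},  writes={"CUST.MASTER"}),
    JobStep("NIGHTLY01", "CUSTVAL",  reads={"CUST.MASTER"}, writes={"CUST.VALID"}),
    JobStep("BILLING02", "BILLCALC", reads={"CUST.VALID"},  writes={"BILL.DETAIL"}),
    JobStep("REPORTS03", "MONTHRPT", reads={"BILL.DETAIL"}, writes={"RPT.SUMMARY"}),
]

def trace_journey(start_dataset, steps):
    """Follow a data element forward: every program step that consumes it,
    and every dataset those steps produce, transitively."""
    visited, frontier, journey = {start_dataset}, deque([start_dataset]), []
    while frontier:
        dataset = frontier.popleft()
        for step in steps:
            if dataset in step.reads:
                journey.append((dataset, step.job, step.program, sorted(step.writes)))
                for out in step.writes:
                    if out not in visited:
                        visited.add(out)
                        frontier.append(out)
    return journey

for src, job, program, outputs in trace_journey("CUST.INPUT", steps):
    print(f"{src} -> {job}/{program} -> {outputs}")
```

Real estates add dynamic calls, conditional JCL, and IMS/IDMS navigation, which is why runtime analysis matters, but the shape of the answer stays the same: a traversable graph of reads and writes that supports both journey tracing and impact analysis.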

What It Looks Like in Real Life

Manual Approach: Teams spend months interviewing SMEs, reading through millions of lines of undocumented code, and creating spreadsheets to track data flows and job dependencies. The scale and complexity make it impossible to find all relationships—critical dependencies exist in JCL execution sequences, database navigation patterns, and runtime behaviors that are buried in decades of modifications. Even after extensive documentation efforts, teams miss interconnected dependencies that cause production failures.

With Zengines: Complete data lineage mapping across all systems in days. Interactive visualization shows exactly how customer data flows from the 1985 COBOL program through job control sequences, database structures, and multiple processing steps, including execution patterns and database behaviors that documentation never captured.

Success Metrics

  • Complete visibility into data flows, program dependencies, and execution patterns
  • Real-time access to comprehensive refactoring complexity analysis
  • Zero surprises during code conversion phase

Key 2: Implement Data Lineage-Driven Testing

The Problem: Traditional testing approaches fail to validate the complex data transformations and business logic embedded in mainframe applications. While comprehensive testing includes performance, security, and integration aspects, the critical foundation is ensuring data accuracy and transformation correctness.

Implementation Checklist

□ Establish Validation Points at Every Data Transformation

  • Identify test checkpoints at each step where data changes hands between programs
  • Monitor intermediate calculations and business rule applications
  • Track data transformation throughout the process

□ Generate Comprehensive Data-Driven Test Scenarios

  • Create test cases covering all conditional logic branches based on data content
  • Build transaction sequences that replicate actual data flow patterns
  • Include edge cases and error conditions that exercise unusual data processing paths

□ Enable Data-Focused Shadow Testing

  • Process test data through refactored systems alongside legacy systems
  • Compare data transformation results at every lineage checkpoint
  • Monitor data accuracy and consistency during parallel data processing

□ Validate Data Integrity at Scale

  • Test with comprehensive datasets to identify data accuracy issues
  • Monitor for cumulative calculation errors in long-running data processes
  • Verify data transformations produce identical results to legacy systems
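
To picture the shadow-testing and checkpoint-validation steps above, the sketch below runs the same records through a legacy and a refactored implementation and compares results checkpoint by checkpoint. The two pipeline functions, the checkpoint names, and the sample record are placeholders; in practice each checkpoint maps to a point in the data lineage where data changes hands between programs.

```python
from decimal import Decimal

# Placeholder implementations standing in for the legacy and refactored
# versions of the same multi-step business calculation.
def legacy_pipeline(record):
    gross = Decimal(record["qty"]) * Decimal(record["unit_price"])
    discounted = gross * (Decimal("1") - Decimal(record["discount"]))
    return {"gross": gross, "discounted": discounted, "tax": discounted * Decimal("0.07")}

def refactored_pipeline(record):
    gross = Decimal(record["qty"]) * Decimal(record["unit_price"])
    discounted = gross * (Decimal("1") - Decimal(record["discount"]))
    return {"gross": gross, "discounted": discounted, "tax": discounted * Decimal("0.07")}

CHECKPOINTS = ["gross", "discounted", "tax"]  # ordered lineage checkpoints

def shadow_compare(records):
    """Run both pipelines on identical data and report, per record, the first
    checkpoint where the refactored result diverges from the legacy result."""
    mismatches = []
    for i, record in enumerate(records):
        old, new = legacy_pipeline(record), refactored_pipeline(record)
        for checkpoint in CHECKPOINTS:
            if old[checkpoint] != new[checkpoint]:
                mismatches.append((i, checkpoint, old[checkpoint], new[checkpoint]))
                break  # the first divergence localizes the defect
    return mismatches

sample = [{"qty": "12", "unit_price": "9.95", "discount": "0.10"}]
print(shadow_compare(sample) or "all checkpoints match")
```

Comparing at every checkpoint, rather than only at the final output, is what lets a team say "the difference appears at the discount calculation in program X" instead of "the totals don't match."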

What It Looks Like in Real Life

Manual Approach: Testing teams manually create hundreds of test cases, then spend weeks comparing data outputs from old and new systems. The sheer volume of data transformation points makes comprehensive coverage impractical—when data discrepancies appear across thousands of calculation steps, teams have no way to trace where in the complex multi-program data flow the difference occurred. Manual comparison of data transformations across interconnected legacy systems becomes impossible at scale.

With Zengines: Automated test generation creates thousands of data scenarios based on actual processing patterns. Self-service validation at every data transformation checkpoint pinpoints exactly where refactored logic produces different data results, down to the specific calculation or business rule application.
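
As a toy illustration of data-driven scenario generation (not a description of Zengines' internal mechanism), one simple approach is to enumerate the value classes that drive conditional logic and take their cross product, so every combination of branches gets at least one test record. The field names and value classes below are invented for the example.

```python
from itertools import product

# Value classes observed to drive conditional logic in the legacy code.
# Each class is chosen to exercise a different branch or edge case.
FIELD_CLASSES = {
    "account_type": ["RETAIL", "CORPORATE", "GOVERNMENT"],
    "balance":      ["-1.00", "0.00", "999999.99"],  # negative, zero, near field limit
    "status_code":  ["A", "C", " "],                  # active, closed, blank edge case
}

def generate_scenarios(field_classes):
    """Yield one test record for every combination of value classes."""
    names = list(field_classes)
    for combo in product(*(field_classes[name] for name in names)):
        yield dict(zip(names, combo))

scenarios = list(generate_scenarios(FIELD_CLASSES))
print(f"{len(scenarios)} scenarios")  # 3 * 3 * 3 = 27 records
print(scenarios[0])
```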

Success Metrics

  • Test coverage across all critical data transformation points
  • Validation of data accuracy and business logic correctness
  • Confidence in refactored data processing before cutover

Key 3: Democratize Institutional Knowledge

The Problem: Critical system knowledge exists only in the minds of retiring experts, creating bottlenecks that severely delay modernization projects.

Implementation Checklist

□ Access Comprehensive Data Relationship Mapping

  • Obtain complete visualization of how data flows between systems and programs
  • Understand business logic and transformation rules embedded in legacy code
  • Enable team members to explore system dependencies without expert consultation

□ Extract Business Context from Legacy Systems

  • Capture business rules and validation requirements from existing code
  • Link technical implementations to business processes and requirements
  • Create accessible knowledge bases with complete rule extraction

□ Enable Independent Impact Analysis

  • Provide capabilities to show downstream effects of proposed changes
  • Allow developers to trace data origins and dependencies during refactoring
  • Support business analysts in validating modernized logic

□ Eliminate SME Consultation Bottlenecks

  • Provide role-based access to comprehensive system analysis
  • Enable real-time exploration of data flows and business rules
  • Deliver complete context for development and testing teams
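
As a narrow illustration of the rule-extraction idea, the sketch below pulls candidate validation rules (an IF condition followed by a PERFORM of a handling paragraph) out of COBOL source into a searchable catalog. It assumes a deliberately simplistic pattern match; real extraction has to handle copybooks, nested IF and EVALUATE statements, and dynamic behavior, and the sample source and program name are invented.

```python
import re

# Simplistic pattern: an IF condition followed by a PERFORM of a handling
# paragraph. Real COBOL needs a proper parser; this is only a sketch.
RULE_PATTERN = re.compile(
    r"IF\s+(?P<condition>.+?)\s+PERFORM\s+(?P<action>[A-Z0-9-]+)",
    re.IGNORECASE | re.DOTALL,
)

cobol_source = """
    IF CUST-BALANCE IS NEGATIVE PERFORM REJECT-RECORD.
    IF ACCT-TYPE = 'GOV' AND TAX-CODE = SPACES PERFORM FLAG-FOR-REVIEW.
"""

def extract_rules(source, program_name):
    """Build a searchable catalog of candidate business rules."""
    catalog = []
    for match in RULE_PATTERN.finditer(source):
        catalog.append({
            "program": program_name,
            "condition": " ".join(match.group("condition").split()),
            "action": match.group("action"),
        })
    return catalog

for rule in extract_rules(cobol_source, "CUSTVAL"):
    print(f"{rule['program']}: IF {rule['condition']} -> {rule['action']}")
```

Even a partial catalog like this changes the team dynamic: a business analyst can search for a rule by condition or handling routine instead of opening a ticket and waiting for an expert.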

What It Looks Like in Real Life

Manual Approach: Junior developers submit tickets asking "What happens if I change this customer validation routine?" and wait 2 weeks for Frank to review the code and explain the downstream impacts. The interconnected nature of decades-old systems makes it impractical to document all relationships—Frank might remember 47 downstream systems, but miss the obscure batch job that runs monthly. The breadth of institutional knowledge across millions of lines of code is impossible to capture manually, creating constant bottlenecks as project velocity crawls.

With Zengines: Any team member clicks on the validation routine and instantly sees its complete impact map—every consuming program, all data flows, and business rules. Questions get answered in seconds instead of weeks, keeping modernization projects on track.
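
The interactive impact map is the product's own capability; purely to show the shape of the question being answered, here is a minimal sketch over the same kind of dependency graph discussed under Key 1. The graph contents and names are invented, not a Zengines data model.

```python
from collections import deque

# Invented call-and-data-flow graph: each node lists what directly depends
# on it (calling programs, jobs, datasets, reports).
CONSUMERS = {
    "CUST-VALIDATE": ["CUSTMAST", "ONLINE-ENTRY"],
    "CUSTMAST":      ["NIGHTLY01", "CUST.MASTER"],
    "CUST.MASTER":   ["BILLCALC", "MONTHRPT"],
    "ONLINE-ENTRY":  [],
    "NIGHTLY01":     [],
    "BILLCALC":      ["BILL.DETAIL"],
    "BILL.DETAIL":   [],
    "MONTHRPT":      [],
}

def impact_map(node, consumers):
    """Return everything transitively affected by changing `node`."""
    seen, frontier = set(), deque([node])
    while frontier:
        for nxt in consumers.get(frontier.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return sorted(seen)

print(impact_map("CUST-VALIDATE", CONSUMERS))
# ['BILL.DETAIL', 'BILLCALC', 'CUST.MASTER', 'CUSTMAST', 'MONTHRPT', 'NIGHTLY01', 'ONLINE-ENTRY']
```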

Success Metrics

  • 80% reduction in SME consultation requests
  • Independent access to system knowledge for all team members
  • Accelerated decision-making without knowledge transfer delays

Technology Enablers

Modern platforms like Zengines automate much of the dependency mapping, testing framework creation, and knowledge extraction.

Take Action

Successful mainframe refactoring demands more than code conversion expertise. Organizations that master data dependencies, implement lineage-driven testing, and democratize institutional knowledge create sustainable competitive advantages in their modernization efforts. The key is addressing these challenges systematically before beginning code transformation, not discovering them during production deployment.

Next Steps: Assess your current capabilities in each area and prioritize investments based on your specific modernization timeline and business requirements.