Info-Tech LIVE 2025 Recap: A Data Migration Expert's Perspective on Transform IT, Transform Everything

June 18, 2025
Caitlyn Truong

What do the Phoenix Suns, a regional healthcare plan, a commercial HVAC software provider, and a Fortune 500 bank have in common? They all struggle with data migration headaches.

This revelation – while not entirely surprising to me as someone who's spent years in data migration – might shock many readers: every single organization, regardless of industry or size, faces the same fundamental data conversion challenges.

With over 3,000 IT executives gathered under one roof, I was able to test my hypotheses about both interest in AI for data migrations and data migration pain points across an unprecedented cross-section of organizations in just three days. The conversations I had during networking sessions, booth visits, and between keynotes consistently reinforced that data migration remains one of the most pressing challenges facing organizations today – regardless of whether they're managing player statistics for a professional sports team or customer data for a local bank with three branches.

Day 1: Practical AI Over Theory

The conference opened with Dr. Tom Zehren's powerful keynote, "Transform IT. Transform Everything." His message struck a chord: IT leaders are navigating unprecedented global uncertainty, with the World Uncertainty Index spiking 481% in just six months. What resonated most with me was his call for IT professionals to evolve into "Enterprise Technology Officers" – leaders capable of driving organization-wide transformation rather than just maintaining systems.

This transformation mindset directly applies to data migration across organizations of all sizes – especially as every company races to implement AI capabilities. Too often, both large enterprises and growing businesses treat data conversion as a technical afterthought rather than the strategic foundation for business flexibility and AI readiness. The companies I spoke with that had successfully modernized their systems were those that approached data migration as an essential stepping stone to AI implementation, not just an IT project.

Malcolm Gladwell's keynote truly resonated with me. He recounted his work with Kennesaw State University and Jiwoo, an AI Assistant that helps future teachers practice responsive teaching. His phrase, "I'm building a case for Jiwoo," exemplified exactly what we're doing at Zengines – building AI that solves real, practical problems.

Gladwell urged leaders to stay curious when the path ahead is unclear, make educated experimental bets, and give teams freedom to challenge the status quo. This mirrors our approach: taking smart bets on AI-powered solutions rather than waiting for the "perfect" comprehensive data management platform.

Day 2: Strategic Bets and Addictive Leadership

John Rossman's "Winning With Big Bets in the Hyper Digital Era" keynote challenged the incremental thinking that plagues many IT initiatives. As a former Amazon executive who helped launch Amazon Marketplace, Rossman argued that "cautious, incremental projects rarely move the needle." Instead, organizations need well-governed big bets that tackle transformational opportunities head-on.

Rossman's "Build Backward" method resonated particularly strongly with me because it mirrors exactly how we developed our approach at Zengines. Instead of starting with technical specifications, we worked backward from the ultimate outcome every organization wants from data migration: a successful "Go Live" that maintains business continuity while unlocking new capabilities. This outcome-first thinking led us to focus on what really matters – data validation, business process continuity, and stakeholder confidence – rather than just technical data movement.

Steve Reese's presentation on "Addictive Leadership Stories in the League" provided fascinating insights from his role as CIO of the Phoenix Suns. His central question – "Are you the kind of leader you'd follow?" – cuts to the heart of what makes technology transformations successful.

Beyond the keynotes, Day 2's breakout sessions heavily focused on AI governance frameworks, with organizations of all sizes grappling with how to implement secure and responsible AI while maintaining competitive speed. What became clear across these discussions is that effective AI governance starts with clean, well-structured data – making data migration not just a technical prerequisite but a governance foundation. Organizations struggling with AI ethics, bias detection, and regulatory compliance consistently traced their challenges back to unreliable or fragmented data sources that made it difficult to implement proper oversight and control mechanisms.

Universal Validation: The AI-Driven Data Migration Imperative

The most valuable aspect of Info-Tech LIVE wasn't just the keynotes – it was discovering how AI aspirations are driving data migration needs across organizations of every size. Whether I was talking with the CIO of a major healthcare system planning AI-powered diagnostics, a mid-market logistics company wanting AI route optimization, or a software development shop building AI solutions for their clients, the conversation inevitably led to the same realization: their current data foundations couldn't support their AI ambitions.

The Universal AI-Data Challenge: Every organization, regardless of size, faces the same fundamental bottleneck: you can't implement effective AI solutions on fragmented, inconsistent, or poorly integrated data. This reality is driving a new wave of data migration projects that organizations previously might have delayed.

AI's Real Value

Throughout the three days, the emphasis was clear: apply AI for measurable value, not to chase trends. This aligns perfectly with our philosophy. We're solving specific problems:

  • Reducing data migration timelines from months to weeks
  • Eliminating guesswork in field mapping and transformation
  • Making business analysts 6x more productive
  • De-risking transformations through enhanced and accelerated data lineage

Transform IT, Transform Everything

Info-Tech's theme perfectly captures what we're seeing: organizations aren't just upgrading technology – they're fundamentally transforming operations. At the heart of every transformation is data migration. Organizations that recognize this shift early – and build migration capabilities rather than just executing migration projects – will have significant advantages in an AI-driven economy.

Zengines isn't just building a data migration tool – we're building an enduring capability for business transformation. When organizations can move data quickly and accurately, they can accelerate digital initiatives, adopt new technologies fearlessly, respond to market opportunities faster, and reduce transformation costs.

The Road Ahead

Malcolm Gladwell's thoughts on embracing uncertainty and making experimental bets stayed with me. Technology will continue evolving rapidly, but one constant remains: organizations will always need to move data between systems.

Our mission at Zengines is to make that process so seamless that data migration becomes an enabler of transformation rather than a barrier. Based on the conversations at Info-Tech LIVE, we're solving one of the most universal pain points in business technology.

The future belongs to organizations that can transform quickly and confidently. We're here to make sure data migration never stands in their way.

Interested in learning how Zengines can accelerate your next data migration or help you understand your legacy systems? Contact us to discuss your specific challenges.

If you're evaluating Zengines for your data migration or data lineage projects, one of your first questions is likely: "Where will this run, and where will our data live?"

It's a critical question. Data migrations involve your most sensitive information, and your choice of deployment architecture impacts everything from security and compliance to speed-to-value and ongoing management.

The good news? Zengines offers four deployment options designed to meet different organizational needs. This guide will help you understand each option and identify which might be the best fit for your situation.

Understanding Your Deployment Options

Option 1: Zengines Hosted (AWS US Region)

What it is: Fully managed SaaS deployment in US-based AWS data centers

Who it's designed for:

  • Organizations based primarily in the United States
  • Teams who need to start analyzing and migrating data quickly
  • Teams who are focusing on their business transformation and don't want to manage all the moving pieces associated with data migrations
  • Projects where regulatory requirements don't mandate specific data residency

Key benefits:

  • Fastest time to value: You can typically begin working with your data within days of signing up
  • Zero infrastructure overhead: No need to provision servers, manage updates, or monitor performance—Zengines handles all of that
  • Predictable, straightforward pricing: Standard subscription model with no infrastructure management costs

What to consider: If your organization has data sovereignty requirements (especially for EU data), strict requirements about data leaving your environment, or compliance frameworks that restrict US-based cloud processing, one of the other options below may be a better fit.

Option 2: Zengines Hosted (AWS Non-US Region)

What it is: Fully managed SaaS deployment in your preferred AWS region (EU, APAC, etc.)

Who it's designed for:

  • International organizations with regional data residency requirements
  • Companies subject to GDPR or other regional data protection regulations
  • Teams who want managed SaaS simplicity without US jurisdiction concerns

Key benefits:

  • Regional compliance: Meets data sovereignty requirements while maintaining all Zengines capabilities
  • Same fast deployment: No compromise on speed or features compared to US hosting
  • Still fully managed: Zengines continues to handle all infrastructure, updates, and monitoring

What to consider: While this addresses data residency, it's still a multi-tenant architecture with data processed in Zengines' cloud environment. If your compliance framework requires dedicated infrastructure or data that never leaves your environment, consider Option 3.

Option 3: Zengines Deployed on Your AWS Cloud Account

What it is: Zengines deployed entirely within your own AWS environment under your control

Who it's designed for:

  • Financial services, healthcare, and government organizations with stringent compliance requirements
  • Enterprises with security frameworks that prohibit multi-tenant SaaS or require tenant isolation at the account level
  • Organizations that need administrative control over the compute environment and network boundaries
  • Companies with mature AWS environments and DevOps capabilities

Key benefits:

  • Complete data sovereignty: Your data never leaves your environment
  • Maximum control: You define and enforce all security policies, access controls, and compliance measures
  • Dedicated infrastructure: No multi-tenant concerns; this is your exclusive Zengines instance
  • Integration with your security tools: Deploy within your existing security perimeter and monitoring systems

What to consider:

  • Setup time: Deployment typically takes 2-3 weeks rather than days
  • Resource requirements: Your IT team needs to provision AWS resources and support the deployment
  • Additional costs: This option includes additional support fees for Zengines to assist with deployment, configuration, and optimization
  • Prerequisites: You'll need an existing AWS environment and team members familiar with managing AWS infrastructure

Technical requirements: Zengines will provide detailed specifications for EC2 instances, storage, and AWS services needed. Having this conversation early with your infrastructure team helps ensure smooth deployment.
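
To make that early conversation concrete, here is a minimal sketch (Python with boto3) of the kind of pre-deployment readiness check your infrastructure team might run for an Option 3 deployment. The region, instance type, and subnet count below are illustrative assumptions, not Zengines' actual specifications; the real requirements come from the technical workshop with our team.

```python
"""Minimal sketch: pre-deployment readiness check for a private (Option 3) deployment.

All values below (region, instance type, subnet count) are illustrative
assumptions, not Zengines' official requirements.
"""
import boto3

REGION = "us-east-1"                    # placeholder: the region agreed for your deployment
REQUIRED_INSTANCE_TYPE = "m5.2xlarge"   # placeholder instance class, not an official spec
MIN_SUBNETS = 2                         # placeholder: typical for multi-AZ deployments


def check_environment():
    """Return a list of findings about the target AWS account."""
    findings = []
    sts = boto3.client("sts", region_name=REGION)
    ec2 = boto3.client("ec2", region_name=REGION)

    # Confirm which credentials and account the deployment would target.
    identity = sts.get_caller_identity()
    findings.append(f"Deploying as {identity['Arn']} into account {identity['Account']}")

    # Confirm a VPC exists to host the deployment.
    vpcs = ec2.describe_vpcs()["Vpcs"]
    if not vpcs:
        findings.append("No VPC found; one must be provisioned before deployment")
        return findings
    vpc_id = vpcs[0]["VpcId"]

    # Confirm enough subnets are available in that VPC.
    subnets = ec2.describe_subnets(Filters=[{"Name": "vpc-id", "Values": [vpc_id]}])["Subnets"]
    if len(subnets) < MIN_SUBNETS:
        findings.append(f"Only {len(subnets)} subnet(s) in {vpc_id}; at least {MIN_SUBNETS} recommended")

    # Confirm the target instance type is offered in this region.
    offered = ec2.describe_instance_type_offerings(
        Filters=[{"Name": "instance-type", "Values": [REQUIRED_INSTANCE_TYPE]}]
    )["InstanceTypeOfferings"]
    if not offered:
        findings.append(f"{REQUIRED_INSTANCE_TYPE} is not offered in {REGION}")

    return findings


if __name__ == "__main__":
    for finding in check_environment():
        print(finding)
```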

Option 4: Zengines on Azure or Google Cloud Platform (In Development)

What it is: Private cloud deployment on your Azure or GCP environment

Who it's designed for:

  • Organizations with significant commitments to Azure or Google Cloud
  • Companies whose cloud strategy or enterprise agreements make AWS deployment impractical

Current status: As of September 2025, multi-cloud support is in active development. If your organization has strong Azure or GCP requirements, we'd welcome a conversation about timeline and potential early adopter partnerships.

What to consider: If you need Zengines capabilities today and your only concern is cloud platform, Option 3 (AWS Cloud Account) might serve as a bridge solution until your preferred platform is supported.

Making Your Decision: Key Questions to Ask

As you evaluate which deployment option fits your needs, consider these questions:

Regulatory and Compliance:

  • Do we have specific data residency requirements (geographic restrictions on where data can be processed)?
  • Are we subject to regulations like GDPR, HIPAA, or financial services compliance frameworks?
  • Does our compliance framework require dedicated infrastructure?

Infrastructure and Resources:

  • Do we have an existing AWS, Azure, or GCP environment?
  • Do we have DevOps or infrastructure team members who can support Zengines deployment on our cloud account?
  • What's our organizational comfort level with managing cloud infrastructure?

Timeline and Urgency:

  • How quickly do we need to begin analyzing and migrating data?
  • Is a 2-3 week deployment timeframe acceptable, or do we need to start within days?

Security Requirements:

  • Does our security framework allow data processing in external cloud environments?
  • Do we require dedicated infrastructure, or is secure multi-tenant architecture acceptable?
  • What level of control do we need over the processing environment?

Budget Considerations:

  • What's our budget for not just software licensing but also infrastructure support?
  • Do we have budget for the additional support costs associated with private cloud deployment?

Comparing Your Options at a Glance

Factor | US Hosted | Regional Hosted | Private AWS | Azure/GCP
Setup Time | Days | Days | 2-3 weeks | TBD
Data Sovereignty | US only | Regional choice | Full control | Full control
Infrastructure Management | Zengines | Zengines | Shared | Shared
Your IT Involvement | Minimal | Minimal | Moderate | Moderate
Best For | US-based, fast starts | International, regional compliance | Strict security/compliance | Azure/GCP commitments

What Happens After You Choose?

  • For Options 1 & 2 (Zengines Hosted): After you sign up, you'll receive access credentials within 1-2 business days. You can immediately begin creating projects, uploading schemas, and working with data. Our team will schedule an onboarding and training session to help you get started.
  • For Option 3 (Your AWS): We'll schedule a technical workshop with your infrastructure team to review requirements, discuss your AWS environment, and plan the deployment. Zengines will provide detailed specifications and work alongside your team through the setup process. Once deployed, you'll receive training for both end users and administrators.
  • For Option 4 (Azure/GCP): If you're interested in Azure or GCP deployment, let's have a conversation about your timeline and requirements to better estimate the development effort.

Getting Started

Choosing the right deployment architecture is an important decision, but it shouldn't slow down your evaluation. Here's how to move forward:

  1. Start with an assessment of your compliance, security, and resource requirements using the questions above
  2. Have a conversation with our team about your specific situation—we've helped dozens of organizations navigate this decision
  3. Involve your stakeholders early: Security, compliance, and infrastructure teams should be part of the conversation from the beginning
  4. Consider a phased approach: Some organizations start with Option 1 or 2 for initial projects, then move to Option 3 as they expand usage
  5. Don't let deployment questions stop progress: We can work with you on pilot projects using sample data while larger deployment decisions are being made

Data migration and mainframe modernization are complex enough without worrying about whether your tools can work within your architecture. Zengines' flexible deployment options mean you don't have to compromise between the capabilities you need and the compliance, security, or infrastructure requirements you must meet.

Whether you need to start analyzing data tomorrow (hosted options) or require complete control within your own infrastructure (private cloud), there's a path forward.

Ready to discuss which deployment option fits your needs? Contact our team to start the conversation. We'll ask the right questions, understand your requirements, and help you make a confident decision.

BOSTON, MA – November 12, 2025 – Zengines is pleased to announce that the company's CEO and Co-Founder, Caitlyn Truong, has been recognized as a winner of the 2025 Info-Tech Awards by Info-Tech Research Group, a global leader in IT research and advisory.

Truong has been named a winner in the Women Leading IT award category.

The Info-Tech Awards celebrate outstanding achievements in IT, recognizing both individual leaders and organizations that have demonstrated exceptional leadership, innovation, and impact. The Women Leading IT Award celebrates exceptional women whose strength of leadership is driving innovation and transformation in their organization and the IT industry.

Since founding Zengines in 2020, Truong has led the development of AI-powered solutions that address two of the most pressing data management challenges facing enterprise organizations: data migration and data lineage. Under her leadership, Zengines has partnered with some of the largest enterprises to accelerate and de-risk their most critical business initiatives—from customer onboarding and system modernization to M&A integration and compliance requirements. The company's innovative approach helps organizations complete data conversions up to 80% faster while significantly reducing risk and cost, transforming processes that traditionally required large teams of specialists and months of manual work into streamlined operations achievable in minutes or days through AI-driven automation.

"I'm deeply honored by this recognition from Info-Tech and applaud their commitment to celebrating women in tech," says Caitlyn Truong, CEO of Zengines. "At Zengines, we're solving some of the most complex challenges the industry hasn't been able to crack: helping companies understand, modernize, and move their most valuable asset - their data. We're succeeding thanks to our incredible teammates - including women leaders who earned their place through grit and skill. When we amplify this power between women in tech - sharing knowledge, championing success, staying in the fight - we create leaders who know how to do hard things. That's the future worth building."

The 2025 Info-Tech Award winners were selected from a competitive pool of hundreds of candidates. The Women Leading IT Award winners were determined by their track record of innovation, leadership, and business impact, and their contribution to the advancement of women in technology through mentorship, advocacy, or initiatives that support diversity in IT.

"Women Leading IT within the 2025 Info-Tech Awards celebrates leaders whose vision and execution have driven measurable progress in innovation, inclusion, and organizational performance," says Tom Zehren, Chief Executive Officer at Info-Tech Research Group. "Congratulations to this year's honorees for strengthening their organizations through strategic leadership and opening doors for the future generation of IT leaders. Each Women Leading IT winner for 2025 exemplifies the strength of inclusive leadership that is shaping IT's next chapter."

To view the full list of winners and learn more about the Info-Tech Awards, please click here.

About Zengines

Zengines is a technology company that transforms how organizations handle data migrations and mainframe modernization. Zengines serves business analysts, developers, and transformation leaders who need to map, change, and move data across systems. With deep expertise in AI, data migration, and legacy systems, Zengines helps organizations reduce time, cost, and risk associated with their most challenging data initiatives. Learn more at zengines.ai.

About Info-Tech Research Group

Info-Tech Research Group is the world's leading research and advisory firm, proudly serving over 30,000 IT, HR, and marketing professionals. The company produces unbiased, highly relevant research and provides industry-leading advisory services to help leaders make strategic, timely, and well-informed decisions. For nearly 30 years, Info-Tech has partnered closely with teams to provide them with everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.

To learn more about Info-Tech Research Group or to access the latest research, visit infotech.com.

The 2008 financial crisis exposed a shocking truth: major banks couldn't accurately report their own risk exposures in real-time.

When Lehman Brothers collapsed, regulators discovered that institutions didn't know their actual exposure to toxic assets -- not because they were hiding it, but because they genuinely couldn't aggregate their own data fast enough.

Fifteen years and billions in compliance spending later, only 2 out of 31 Global Systemically Important Banks fully comply with BCBS-239 -- the regulation designed to prevent this exact problem.

The bottleneck? Data lineage.

Who Must Comply and When

BCBS-239 applies from January 1, 2016 for Global Systemically Important Banks (G-SIBs) and is recommended by national supervisors for Domestic Systemically Important Banks (D-SIBs) three years after their designation. In practice, this means hundreds of banks worldwide are now expected to comply.

Unlike regulations with fixed annual filing deadlines, BCBS-239 is an ongoing compliance requirement. Supervisors can test a bank's compliance with occasional requests on selected risk issues with short deadlines, gauging a bank's capacity to aggregate risk data rapidly and produce risk reports.

Think of it as a fire drill that can happen at any moment -- and with increasingly serious consequences for failure.

The Sobering Statistics

More than a decade after publication and eight years past the compliance deadline, the results are dismal. Only 2 out of 31 assessed Global Systemically Important Banks fully comply with all principles, and no single principle has been fully implemented by all banks.

Even more troubling, the compliance level across all principles barely improved from an average of 3.14 in 2019 to 3.17 in 2022 on a scale of 1 ("non-compliant") to 4 ("fully compliant"). At this rate of improvement, full compliance is decades away.

What Happens If Your Bank Fails BCBS-239 Compliance?

The consequences are escalating. The ECB guide explicitly mentions:

  • Enforcement actions against the institution
  • Capital add-ons to compensate for data risk
  • Removal of responsible executives who fail to drive compliance
  • Operational restrictions on new business lines or acquisitions

The Basel Committee has made it clear that banks' progress toward BCBS-239 compliance in recent years has not been satisfactory, and that supervisory authorities can be expected to take stronger measures to accelerate implementation.

What Banks Are Doing (And Why It's Not Enough)

Most banks have responded to BCBS-239 with predictable tactics:

  • Governance restructuring: Creating Chief Data Officer roles and data governance committees
  • Policy documentation: Writing comprehensive data management policies and frameworks
  • Technology investments: Purchasing disparate tools like data catalogs, metadata management tools, and master data management platforms
  • Remediation programs: Launching multi-year, multi-million dollar compliance initiatives

These tactics are positive steps forward, and they are necessary, but they are not sufficient for compliance. In other words, banks are checking boxes without fundamentally solving the problem.

The issue? Banks are treating BCBS-239 like a project with an end date, when it's actually an operational capability that must be demonstrated continuously.

The Data Lineage Bottleneck

Among the 14 principles, one capability has emerged as the make-or-break factor for compliance: data lineage.

Data lineage has been identified as one of the key challenges banks have faced in aligning to the BCBS-239 principles, as it is one of the more time-consuming and resource-intensive activities demanded by the regulation.

Why Data Lineage Is Different

Data lineage -- the ability to trace data from its original source through every transformation to its final destination -- sits at the intersection of virtually every BCBS-239 principle. The European Central Bank refers to data lineage as "a minimum requirement of data governance" in the latest BCBS-239 recommendations.

Here's why lineage is uniquely difficult:

It's invisible until you need it.
Unlike a data governance policy you can show an auditor or a data quality dashboard you can pull up, lineage is about proving flows, transformations, and dependencies that exist across dozens or hundreds of systems. You can't fake it in a PowerPoint.

It crosses organizational and system boundaries.
Complete lineage requires cooperation between IT, risk, finance, operations, and business units -- each with their own priorities, systems, and definitions. Further, data hand-offs occur within and between systems, databases, and files, which adds to the complexity of connecting what happens at each hand-off. Regulators are increasingly requiring detailed traceability of reported information, which can only be achieved through lineage that spans organizations and systems.

It must be current and complete.
The ECB requires "complete and up-to-date data lineages on data attribute level (starting from data capture and including extraction, transformation and loading) for the risk indicators, and their critical data elements." A lineage document from six months ago is worthless if your systems have changed.

It must work under pressure.
Supervisors increasingly require institutions to demonstrate the effectiveness of their data frameworks through on-site inspections and fire drills, with data lineage providing the audit trail necessary for these reviews. When a regulator asks "prove this number came from where you say it came from," you have hours -- not days -- to respond.

The Eight Principles That Demand Data Lineage Proof

While 11 of the 14 principles benefit from good data lineage, regulatory guidance makes it explicitly mandatory for eight:

  • Principle 2 (Data Architecture): Demonstrate integrated data architecture through documented lineage flows
  • Principle 3 (Accuracy & Integrity): Prove data accuracy by showing traceable lineage from source to report
  • Principle 4 (Completeness): Demonstrate comprehensive risk coverage through lineage mapping
  • Principle 6 (Adaptability): Respond to ad-hoc requests using lineage to quickly identify relevant data
  • Principle 7 (Report Accuracy): Validate report numbers through documented lineage and audit trails
  • Principles 12-14 (Supervisory Review): Provide lineage evidence during audits and fire drills

The Technology Gap: Why Traditional Tools Fall Short

Most banks have invested heavily in data catalogs, metadata management platforms, and governance frameworks. Yet they still can't produce lineage evidence under audit conditions. Why?

Traditional approaches have three fatal flaws:

1. Manual Documentation

Excel-based lineage documentation becomes outdated within weeks as systems change. By the time you finish documenting one data flow, three others have been modified. Manual approaches simply can't keep pace with modern banking environments.

2. Point Solutions That Only Support Newer Applications

Modern data lineage tools can map cloud warehouses and APIs, but they hit a wall when they encounter legacy mainframe systems. They can't parse COBOL code, decode JCL job schedulers, or trace data through decades-old custom applications -- exactly where banks' most critical risk calculations often live.

3. Incomplete Coverage

Lineage that stops at the data warehouse is fundamentally incomplete under BCBS-239's end-to-end data lineage requirements. Regulators want to see the complete path -- from original source system through every transformation, including hard-coded business logic in legacy applications, to the final risk report. Most tools miss 40-70% of the actual transformation logic.

How AI-Powered Data Lineage Changes the Game

This is where AI-powered solutions like Zengines fundamentally differ from traditional approaches.

Instead of manually documenting lineage, Zengines can automatically and comprehensively:

  • Parse legacy mainframe code (COBOL, RPG, Focus, etc.) to extract data flows and transformation logic
  • Trace calculations backward from any report field to ultimate source systems
  • Document relationships between tables, fields, programs, files and job schedulers
  • Generate audit-ready evidence in minutes instead of months
  • Keep lineage relevant and current by updating it as code changes
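
To make the backward-tracing idea above concrete, here is a minimal, illustrative sketch (not Zengines' implementation) of how captured field-to-field mappings can be walked backward from a report field to its ultimate sources. The field names and transformation labels are invented for the example.

```python
"""Minimal, illustrative sketch of backward lineage tracing (not Zengines' implementation).

Once field-to-field mappings have been captured (for example, from parsed code or
ETL definitions), tracing any report field back to its sources is a graph walk.
The fields and transformation labels below are invented for the example.
"""

# Each entry reads: target field -> list of (source field, transformation applied).
EDGES = {
    "risk_report.total_exposure": [("warehouse.exposure_agg", "SUM by counterparty")],
    "warehouse.exposure_agg": [
        ("mainframe.LOANMAST.OUTSTANDING-BAL", "COBOL paragraph 3200-CALC-BALANCE"),
        ("mainframe.DERIVPOS.MTM-VALUE", "COBOL paragraph 3400-MTM"),
    ],
    "mainframe.LOANMAST.OUTSTANDING-BAL": [],
    "mainframe.DERIVPOS.MTM-VALUE": [],
}


def trace_back(field, depth=0, seen=None):
    """Print every upstream field and transformation feeding `field`."""
    seen = seen if seen is not None else set()
    if field in seen:  # guard against cyclic mappings
        return
    seen.add(field)
    for source, rule in EDGES.get(field, []):
        print("  " * depth + f"{field}  <-  {source}   [{rule}]")
        trace_back(source, depth + 1, seen)


trace_back("risk_report.total_exposure")
```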

Solving the "Black Box" Problem

For many banks, the biggest lineage gap isn't in modern systems -- it's in legacy mainframes where critical risk calculations were encoded 20-60 years ago by developers who have long since retired. These systems are true "black boxes": they produce numbers, but no one can explain how.

Zengines' Mainframe Data Lineage capability specifically addresses this challenge by:

  • Parsing COBOL and RPG modules to expose calculation logic and data dependencies
  • Tracing variables across millions of lines of legacy code
  • Identifying hard-coded values, conditional logic, and branching statements
  • Visualizing data flows across interconnected mainframe programs and external files
  • Extracting "requirements" that were never formally documented but are embedded in code

This capability is essential for banks that need to prove how legacy calculations work -- whether for regulatory compliance, system modernization, or simply understanding their own risk models.
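
For illustration only, the toy sketch below shows the kind of data-flow facts that can be pulled out of COBOL procedure code. It is not how Zengines parses mainframes; real lineage work must also resolve copybooks, REDEFINES, PERFORM chains, file I/O, and JCL job flow, and the snippet and field names here are invented.

```python
"""Toy sketch: extracting data-flow edges from COBOL MOVE and COMPUTE statements.

Real mainframe lineage must also resolve copybooks, REDEFINES, PERFORM chains,
file I/O, and JCL job flow; this simple regex pass only illustrates the idea of
turning procedural code into source -> target lineage edges. The snippet and
field names are invented.
"""
import re

COBOL_SNIPPET = """
    3200-CALC-BALANCE.
        MOVE WS-OPEN-BAL TO WS-WORK-BAL.
        COMPUTE WS-WORK-BAL = WS-WORK-BAL + LN-ACCRUED-INT.
        MOVE WS-WORK-BAL TO OUT-OUTSTANDING-BAL.
"""

MOVE_STMT = re.compile(r"MOVE\s+(\S+)\s+TO\s+(\S+?)\.?\s*$", re.IGNORECASE)
COMPUTE_STMT = re.compile(r"COMPUTE\s+(\S+)\s*=\s*(.+?)\.?\s*$", re.IGNORECASE)


def extract_edges(source):
    """Yield (source_field, target_field, statement) tuples for simple statements."""
    for line in source.splitlines():
        text = line.strip()
        if m := MOVE_STMT.match(text):
            yield m.group(1), m.group(2), text
        elif m := COMPUTE_STMT.match(text):
            target, expression = m.group(1), m.group(2)
            # Every identifier on the right-hand side feeds the computed target.
            for token in re.findall(r"[A-Z][A-Z0-9-]*", expression, re.IGNORECASE):
                yield token, target, text


for src, tgt, stmt in extract_edges(COBOL_SNIPPET):
    print(f"{src:18s} -> {tgt:22s}  ({stmt})")
```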

Assessment: Can Your Bank Prove Compliance Right Now?

The critical question isn't "Do we have data lineage?" It's "Can we prove compliance through data lineage right now, under audit conditions, with short notice?"

Most banks would answer: "Well, sort of..."

That's not good enough anymore.

We've translated ECB supervisory expectations into a practical, principle-by-principle checklist. This isn't about aspirational capabilities or future roadmaps -- it's about what you can demonstrate today, under audit conditions, with short notice.

The Bottom Line

The bottleneck to full BCBS-239 compliance is clear: data lineage.

Traditional approaches -- manual documentation, point solutions, incomplete coverage -- can't solve this problem fast enough. The compliance deadline was 2016. Enforcement is escalating. Fire drills are becoming more frequent and demanding.

Banks that solve the lineage challenge with AI-powered automation will demonstrate compliance in hours instead of months. Those that don't will continue struggling with the same gaps, facing increasing regulatory pressure, and risking enforcement actions.

The technology to solve this exists today. The question is: how long can your bank afford to wait?

Schedule a demo with our team today to get started.
