When it comes to complex database migrations, especially for mission-critical systems in the financial sector, cutting corners in the early stages can turn into an expensive gamble. Yet, we often encounter organizations that view the discovery and solution design phase as an optional step — one they can skip or downsize to save time or budget. This mindset may seem practical on the surface, but in reality, it’s the fastest way to derail a project.
At ABCloudz, with two decades of experience and hundreds of successful migrations under our belt, we’ve learned one thing the hard way: if you don’t measure twice, you’ll definitely cut wrong. Every migration challenge we’ve seen (from performance degradation to data integrity issues or post-migration outages) could usually be traced back to something missed or misunderstood in the initial discovery phase.
Skipping discovery means overlooking subtle technical dependencies in the source system, failing to collect stakeholder requirements, or underestimating legacy system complexity. And when these hidden blockers surface mid-project, the costs can be steep: added delays, security or performance vulnerabilities, or even full-scale rearchitecting. In some cases, it also means missing out on business-critical functionality simply because no one asked the right questions upfront.
This is exactly why we treat the discovery and solution design phase not as a formality, but as the foundation of an effective database migration project plan and the starting point on which every successful migration is built. In the following sections, we’ll walk you through how ABCloudz conducted a comprehensive solution design for a major enterprise data system originally running on an Oracle database — and how we charted its future state on PostgreSQL with precision, clarity, and full client confidence.
If you want a quick visual overview of how we typically approach the early phases of any database migration project, check out our 4-part video series:
- Workload Definition – what a workload is and how it shapes the migration scope
- Current and Future State Architecture – how we design future systems based on business needs
- Analysis – how we collect artifacts and define the actual migration effort
- Migration Solution Document – how we build the final specification and proposal
12-step Migration and Modernization Methodology
At ABCloudz, we have developed and refined a structured 12-step methodology as part of our legacy system modernization services, transforming even the most complex and large-scale migrations into a manageable, transparent process. This methodology is the result of decades of experience helping organizations address the challenges of data modernization, cloud adoption, and cross-platform migrations. Our recognition as an AWS Migration Competency Partner further validates our ability to deliver these transformations at scale, with a focus on security, reliability, and long-term value.
Each of the risks mentioned earlier (from hidden technical dependencies to unspoken business requirements, performance issues, and integration failures) is addressed directly within this framework. Rather than jumping into execution, we start by gaining a deep understanding of both the current environment and the desired future state. We collect system metrics, identify application and data dependencies, define workloads, and involve stakeholders early to ensure that every requirement is captured.
In the next section, we will explore how the project began with discovery and future state planning.
Phase 1: Discovery and future state design
The foundation of any successful migration is a deep understanding of both the current system landscape and the technical and business objectives that shape the future state. As the first step in our 12-step methodology, we focused on capturing a complete picture of the client’s legacy environment, analyzing key dependencies, and designing a clear and achievable target architecture.
Our work began with a structured discovery process. We collected architectural diagrams, inventories of database objects, and operational data about the source system, which consisted of a 17 TB Oracle 11g on-premises database. This system supported a wide range of internal banking processes and served as a central point of integration for several downstream applications, including analytics, reporting, and risk management services. The architecture included bidirectional ETL pipelines running on Apache Airflow, replication via Qlik CDC, a Spring Boot-based microservice, and external reporting through Power BI.
One of the major challenges we uncovered was the extensive use of Oracle DB Links to enable communication across distributed databases. These links connected multiple internal systems and played a central role in ETL jobs and data aggregation. We also identified over 10,000 database objects (including thousands of tables, hundreds of procedures, and dozens of materialized views), many of which had interdependencies across schemas; some were flagged as deprecated or contained invalid objects.
To accurately reflect the client’s business processes, we conducted interviews with key stakeholders across multiple departments. These sessions helped us capture critical operational requirements, clarify expectations around performance and availability, and identify technical constraints and regulatory compliance requirements.
Based on this comprehensive analysis, our architects developed a future state architecture centered on PostgreSQL 16, deployed on-premises. The new design eliminates Oracle DB Links in favor of PostgreSQL-compatible solutions such as Foreign Data Wrappers (FDW) and the dblink extension. A robust partitioning strategy was introduced for large tables, and migration paths were defined for legacy business logic using a mix of automated and manual code conversion techniques.
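To make the DB Link replacement and partitioning approach concrete, here is a minimal PostgreSQL sketch. All server, schema, table, and role names are illustrative placeholders, not the client's actual objects:

```sql
-- Replace an Oracle DB Link with a postgres_fdw foreign server
-- (host, dbname, and credentials below are hypothetical).
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER reporting_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'reporting-db.internal', port '5432', dbname 'reporting');

CREATE USER MAPPING FOR app_user
  SERVER reporting_srv
  OPTIONS (user 'etl_reader', password 'changeme');

-- Expose a remote table locally, much as a DB Link would.
CREATE FOREIGN TABLE remote_transactions (
  tx_id   bigint,
  tx_date date,
  amount  numeric(18,2)
)
SERVER reporting_srv
OPTIONS (schema_name 'public', table_name 'transactions');

-- Declarative range partitioning for a large local fact table.
CREATE TABLE transactions (
  tx_id   bigint NOT NULL,
  tx_date date   NOT NULL,
  amount  numeric(18,2)
) PARTITION BY RANGE (tx_date);

CREATE TABLE transactions_2024 PARTITION OF transactions
  FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
```

In practice, queries against the foreign table behave like local reads while the FDW pushes work to the remote server where it can, and new yearly (or monthly) partitions are added as data grows.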
This architectural blueprint was thoroughly reviewed and approved by the client’s technical leadership. It established a clear direction for the rest of the migration project, aligning infrastructure decisions with business goals and technical realities.
For a detailed visual breakdown of how we define individual workloads and the architectural blueprint for a client, we recommend the Workload Definition and Current and Future State Architecture videos from the series above.
With the future state architecture validated and signed off, we moved forward with the next phase of our methodology. This included detailed analysis and scoping of the technical work required for database schema conversion, application remediation, script and ETL transformation, integration planning, and data migration. Each of these areas was assessed based on the future architecture we had just defined — a process we’ll explore in the next section.
Scope analysis across the remaining migration steps
With the future state architecture approved and a clear vision for the target system in place, our next objective was to assess the full scope of technical work required to get there. We focused on analyzing and structuring the work across steps 2 through 6 of our 12-step methodology. These steps include database schema conversion, application remediation, scripts and ETL migration, third-party integration planning, and the design of a robust data migration mechanism. Together, these activities define the core engine of the transformation.
We explain how we approach artifact analysis, code inspection, and effort estimation in this part of our Analysis video.
Step 2: Database schema conversion
As an AWS DMS Delivery Partner, ABCloudz is uniquely equipped to handle large-scale migrations using Amazon-native tooling, including AWS DMS and AWS Schema Conversion Tool (SCT). Using SCT, we performed a comprehensive scan of the Oracle source environment to evaluate automatic conversion readiness. The results highlighted over 27,000 conversion actions across more than 10,000 objects. While 99.9% of storage-related objects (tables, indexes, sequences) were convertible with minimal changes, only 41% of code-related objects (procedures, functions, packages, triggers) could be processed automatically.
Key challenges included unsupported Oracle features such as DB Links, the AUTONOMOUS_TRANSACTION pragma, complex PL/SQL structures, and usage of ANYDATA, XMLTYPE, and nested cursors. Many stored procedures also required manual conversion due to hardcoded logic or dependency on obsolete components.
This analysis enabled us to estimate the manual effort needed to complete schema migration, broken down by object type and conversion complexity.
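As one example of the manual work behind these estimates, Oracle's autonomous transactions have no native PostgreSQL equivalent; a common workaround routes the statement through a loopback dblink connection so it commits independently of the caller. The procedure, table, and connection string below are hypothetical, and this is one possible pattern rather than the only conversion path:

```sql
-- Oracle original (schematic): the log row survives even if the
-- calling transaction rolls back.
--   CREATE PROCEDURE log_event(p_msg VARCHAR2) IS
--     PRAGMA AUTONOMOUS_TRANSACTION;
--   BEGIN
--     INSERT INTO audit_log(msg) VALUES (p_msg);
--     COMMIT;
--   END;

-- PostgreSQL workaround: execute the insert over a loopback dblink
-- connection, which commits in its own session.
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE PROCEDURE log_event(p_msg text)
LANGUAGE plpgsql AS $$
BEGIN
  PERFORM dblink_exec(
    'dbname=appdb',  -- loopback connection string; adjust to your setup
    format('INSERT INTO audit_log(msg) VALUES (%L)', p_msg)
  );
END;
$$;
```

Because dblink_exec opens a separate connection per call, such conversions also carry a performance cost that has to be weighed during estimation.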
Step 3: Application conversion and remediation
Next, we examined all application logic and middleware components that interacted with the database. This included Spring Boot microservices, internal reporting tools, and several integrations implemented via JDBC.
We identified the required changes to connectivity logic, SQL syntax, and transaction management patterns. For example, the legacy application relied on Oracle-specific features such as FORALL bulk operations, anonymous PL/SQL blocks, and role-based permissions, none of which map one-to-one onto PostgreSQL. We also flagged areas where legacy code could be modernized as part of the transition, improving long-term maintainability and alignment with the PostgreSQL ecosystem.
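A typical rewrite in this category replaces Oracle's FORALL bulk binding with a single set-based statement, and an anonymous PL/SQL block with a PostgreSQL DO block. The table and values here are illustrative only:

```sql
-- Oracle (schematic): bulk-bind insert
--   FORALL i IN 1 .. ids.COUNT
--     INSERT INTO target_table(id) VALUES (ids(i));

-- PostgreSQL: one set-based INSERT over an array, inside a DO block
-- (PostgreSQL's counterpart to an anonymous PL/SQL block).
DO $$
DECLARE
  ids bigint[] := ARRAY[101, 102, 103];
BEGIN
  INSERT INTO target_table(id)
  SELECT unnest(ids);
END;
$$;
```

The set-based form is usually both simpler and faster than a row-by-row loop, which is one reason such rewrites double as modernization opportunities.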
Step 4: Scripts, ETL, and reporting conversion
ETL pipelines were another critical area. The client relied on Apache Airflow for bidirectional data movement, with numerous jobs using DB Links and materialized views for cross-database operations. Additionally, Qlik CDC was used for near-real-time replication to an analytics data warehouse.
We cataloged and classified all jobs, scripts, and pipelines by complexity, volume, and priority. Custom transformation logic, hardcoded schema names, and dependency on legacy views introduced additional scope to refactor these pipelines for compatibility with the new PostgreSQL architecture.
We also reviewed Power BI report dependencies to ensure that all data sources and metrics could be preserved post-migration.
Our migration best practices for managing complex dependencies are also illustrated in the Oracle to SQL Server project for a food manufacturer.
Step 5: Integration with third-party applications
This step involved analyzing integration points with platforms such as Salesforce, SAP DWH, and external Oracle instances. Our future architecture replaced DB Links with PostgreSQL FDW where external access was still required, while eliminating unnecessary remote dependencies.
We validated every API endpoint, data exchange mechanism, and credentials strategy to identify required changes for the target environment.
Step 6: Data migration planning
We then defined a high-level strategy for data migration, balancing reliability, downtime, and cost. The plan combined full export via Oracle expdp with CDC-based synchronization using Qlik or Oracle GoldenGate. We proposed a hybrid approach:
- Use pgloader and COPY for bulk ingestion of large tables.
- Use Qlik CDC for real-time replication of changes made after dump generation.
This strategy minimized downtime and provided flexibility to test the target system incrementally before cutover.
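The bulk-ingestion half of this hybrid approach can be sketched as follows. File paths and the table name are placeholders, and the index-handling advice is a general practice rather than a mandated step of the plan:

```sql
-- Bulk-load a table previously exported to CSV. For very large tables,
-- dropping secondary indexes before the load and recreating them
-- afterwards typically speeds up ingestion considerably.
\copy transactions FROM '/data/export/transactions.csv' WITH (FORMAT csv, HEADER true)

-- Recreate indexes after the load completes.
CREATE INDEX idx_transactions_tx_date ON transactions (tx_date);
```

Once the initial load finishes, CDC replication applies the changes made since the export, so the target stays current while it is being validated ahead of cutover.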
Step 7 and beyond: Estimating the full project lifecycle
Once the core transformation scope was understood, we moved on to defining the strategy for step 7 (testing) and outlining estimated activities for steps 8 through 12. Our testing strategy was directly informed by the complexities discovered in the previous steps. We planned a multi-phase QA strategy, including unit testing, component validation, system integration testing, and acceptance testing of data integrity and performance.
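Data-integrity validation of this kind often starts with simple aggregate fingerprints run against both the Oracle source and the PostgreSQL target, with any mismatch triggering a deeper row-level comparison. The table and columns below are illustrative, and this is a generic spot-check pattern rather than the project's full QA suite:

```sql
-- Run the same aggregates on source and target; the results should match.
-- (On Oracle, the equivalent query uses identical syntax for these functions.)
SELECT count(*)     AS row_count,
       sum(amount)  AS total_amount,
       min(tx_date) AS first_tx,
       max(tx_date) AS last_tx
FROM transactions;
```

Checks like these are cheap enough to repeat after every CDC catch-up cycle, which is what makes incremental validation before cutover practical.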
For steps 8 to 12, which include performance tuning and database optimization, deployment planning, documentation, project governance, and post-production support, we outlined realistic effort estimates based on our experience and system-specific factors.
The table below summarizes the approximate effort, in person-hours, for each of the 12 steps, including time already spent on discovery and design.
Migration Solution Document
The culmination of the extensive discovery, analysis, and architecture design phase was the creation of a detailed Migration Solution Document. This comprehensive document consolidates all key findings and architectural decisions into a single source of truth, guiding every subsequent step of the migration and modernization journey.
This Migration Solution Document serves as both a strategic roadmap and a tactical handbook, equipping the client’s team with the clarity needed to move forward confidently. At the time of publishing this blog post, the document is undergoing final review by the client’s technical and business stakeholders. Once approved, it will become the definitive guide for successfully executing the migration and modernization project, ensuring alignment with both technical requirements and strategic business objectives.
For a visual walkthrough of how we finalize and package our solution design into a comprehensive proposal, watch the fourth part of our series: Migration Solution Document.
For another example of Oracle-to-PostgreSQL migration for SaaS workloads, see our ODS Data Warehouse modernization case.
Why discovery pays off
We strongly encourage organizations considering migration or modernization projects not to compromise on initial analysis and planning. A thoughtful, well-executed discovery phase doesn’t merely set the stage — it defines the success of your entire migration strategy.
Without a comprehensive discovery and solution design, it would have been impossible to:
- Clearly understand the true scale, complexity, and interdependencies of the existing system.
- Identify and mitigate critical risks early, avoiding costly pitfalls down the line.
- Provide our client with accurate timelines and realistic effort estimates, ensuring transparency and effective resource planning.
If your organization is evaluating a similar modernization or migration effort, we at ABCloudz invite you to reach out. Let’s discuss your workloads, explore the potential of your migration journey, and begin with a discovery and solution design phase tailored to your unique needs. Together, we’ll ensure your modernization project meets your business goals, timelines, and expectations.
Contact us today to start your discovery phase right.