This blog post is the second in a series about modernizing a legacy payment platform. In the first post, we detailed how re-platforming the frontend with React enabled our customer to quickly deliver value to merchants by improving the platform’s tools and enhancing the user experience, all while keeping the legacy backend intact. This practical approach allowed the customer to increase revenue and achieve a faster return on investment, laying the foundation for deeper modernization.

With the frontend re-platformed, the next logical step is to address the platform’s monolithic architecture. To meet modern business demands for flexibility and scalability, the goal is to transition to a modular architecture based on micro-frontends and microservices. However, achieving this goal depends on a critical intermediate step: the establishment of a robust CI/CD pipeline. The existing monolithic deployment process was a significant barrier to achieving this transition.

Updates to the platform required synchronized deployments across all components, increasing the risk of downtime and making the development cycle slow and error-prone. Introducing independently deployable modules demanded a new level of automation, consistency, and scalability in the deployment process.

Phase 2 addressed this challenge by building an infrastructure capable of supporting distributed development and deployment. A well-designed CI/CD pipeline was essential to enable the modularity required for micro-frontends and microservices. This infrastructure automated the deployment lifecycle, reduced configuration conflicts, and laid the foundation for a scalable and future-ready architecture.

[Figure: application modernization phases, from a monolith to microservices and micro-frontends]

In this blog post, we’ll explore how we designed the CI/CD architecture to meet these needs and how it supports a seamless transition to modular development in the next phase.

Proposed CI/CD architecture

To support the modernization of the platform, we needed a CI/CD architecture that could seamlessly integrate with the planned micro-frontend and microservice structure. The deployment pipelines had to be designed with two key goals in mind: flexibility for modular deployments and reliability for high availability.

Requirements for the CI/CD pipeline

Our approach focused on addressing the specific demands of the platform’s architecture:

  • Independent deployments: The frontend and backend are being broken into smaller modules, and each module needs its own pipeline so it can be deployed without disrupting the others.
  • Scalability and fault tolerance: The platform needed dynamic scaling and mechanisms to handle failures without downtime.
  • Consistency and automation: Deployments across development, staging, and production environments had to be consistent and automated to reduce errors and save time.
  • Secure traffic routing: A reliable system for directing requests between microservices and micro-frontends was essential for efficient communication.

Choosing the right tools and services for the CI/CD pipeline

We chose tools and processes that directly addressed these requirements:

  • Kubernetes for orchestrating and scaling containers across environments.
  • Rancher to simplify cluster management and role-based access control.
  • GitLab CI/CD for automating the build, test, and deploy lifecycle.
  • Helm for standardizing configurations and ensuring reproducibility.
  • Nginx Ingress controller to securely and efficiently route traffic.

Technical implementation details

We built the infrastructure for the CI/CD pipeline to meet the specific needs of the modernized payment platform using micro-frontend and microservice architectures.

Infrastructure setup

To establish a strong foundation, we deployed a self-hosted Kubernetes cluster managed through Rancher. Rancher simplified the cluster management process, providing an intuitive interface for provisioning, monitoring, and role-based access control. The Kubernetes cluster was distributed across three virtual machines, which were configured as both master and worker nodes to ensure high availability and redundancy.

This setup created a fault-tolerant environment capable of handling node failures without disrupting operations, a critical requirement for maintaining uninterrupted services in a financial platform.
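To make the shape of this cluster concrete, the sketch below describes it declaratively. It assumes an RKE-provisioned cluster (a common choice when working with Rancher); the node addresses, SSH user, and file name are hypothetical and only illustrate the three-node, all-roles layout described above.

```yaml
# cluster.yml (illustrative): three VMs, each running control-plane, etcd, and worker roles
nodes:
  - address: 10.10.0.11
    user: deploy
    role: [controlplane, etcd, worker]
  - address: 10.10.0.12
    user: deploy
    role: [controlplane, etcd, worker]
  - address: 10.10.0.13
    user: deploy
    role: [controlplane, etcd, worker]
# an odd number of etcd members keeps quorum intact if a single node fails
```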

CI/CD pipeline

We designed and implemented a fully automated CI/CD pipeline using GitLab CI/CD to streamline the lifecycle of building, testing, and deploying services, with each stage tailored to support the modular nature of the platform’s architecture.

During the build stage, the pipeline pulled the source code from the repository. We created Docker images for each microservice and micro-frontend module and securely stored them in the GitLab container registry, ensuring a reliable source for deployment artifacts and simplifying version management.
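As a rough sketch, a build job for one such module might look like the snippet below. The stage list, job name, and image name are hypothetical; the `$CI_REGISTRY*` and `$CI_COMMIT_SHORT_SHA` variables are standard GitLab CI/CD predefined variables.

```yaml
# .gitlab-ci.yml (excerpt): build one module's image and push it to the GitLab container registry
stages: [build, test, deploy]

build-payments-api:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/payments-api:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/payments-api:$CI_COMMIT_SHORT_SHA"
```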

We executed automated integration tests to confirm seamless interactions between components, focusing particularly on backend microservices and frontend modules. For more complex scenarios, we set up user acceptance testing (UAT) environments, enabling a controlled group to validate features and confirm they met the expected business requirements.
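A test stage in the same pipeline runs these checks before anything reaches an environment. The sketch below assumes a Node.js toolchain and an `integration` test script, both of which are illustrative rather than taken from the actual project.

```yaml
# .gitlab-ci.yml (excerpt): run integration tests after the build stage
integration-tests:
  stage: test
  image: node:20          # hypothetical runtime for the test suite
  script:
    - npm ci
    - npm run test:integration
  needs:
    - build-payments-api  # run as soon as the module's image has been built
```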

In the deploy stage, we used Helm charts to manage Kubernetes manifests and configurations, simplifying and standardizing the process. Our team deployed services to Kubernetes in environment-specific namespaces, such as development, staging, and production. This strategy isolated environments, reduced configuration conflicts, and streamlined the deployment process.
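A deploy job along these lines, sketched under the same assumptions (the chart path, release name, and values file are hypothetical), shows how Helm and per-environment namespaces fit together:

```yaml
# .gitlab-ci.yml (excerpt): deploy one module to the staging namespace with Helm
deploy-staging:
  stage: deploy
  image: alpine/helm      # any image that ships the Helm 3 CLI
  environment: staging
  script:
    # assumes cluster credentials are available to the job, e.g. via a KUBECONFIG CI variable
    - >
      helm upgrade --install payments-api ./charts/payments-api
      --namespace staging --create-namespace
      --set image.tag="$CI_COMMIT_SHORT_SHA"
      -f ./charts/payments-api/values-staging.yaml
```

Promoting the same chart and image tag to the production namespace keeps the environments consistent, since only the values file changes between them.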

Traffic management

We used the Nginx Ingress controller to efficiently route traffic within the platform. Nginx Ingress directed external traffic to the appropriate Kubernetes namespace and services based on pre-configured rules. To ensure data integrity and security, we implemented TLS and mTLS encryption, which created a secure channel for sensitive communication between clients and the platform.
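To illustrate the routing rules, the manifest below sketches an Ingress for a hypothetical frontend host and service. Only TLS termination is shown here; mTLS between internal services is typically configured separately, so it is omitted from this sketch.

```yaml
# ingress.yaml (illustrative): route pay.example.com to the frontend service over TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments-frontend
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # force HTTPS for all requests
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - pay.example.com
      secretName: pay-example-com-tls   # certificate stored as a Kubernetes TLS secret
  rules:
    - host: pay.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: payments-frontend
                port:
                  number: 80
```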

Monitoring and observability

We integrated Prometheus to collect metrics and Grafana to visualize them, ensuring complete visibility and control over the infrastructure. Prometheus actively gathered detailed metrics from the Kubernetes cluster, including resource utilization, service performance, and traffic patterns. Grafana displayed these metrics in real time through customizable dashboards, providing actionable insights into system health and performance.

To address potential issues proactively, we configured Alertmanager to generate real-time alerts based on predefined thresholds. The alerts were sent to relevant teams through multiple channels, such as email or Slack, enabling quick responses to potential issues.
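A minimal sketch of such an alert, assuming a generic `http_requests_total` metric exposed by the services (the metric name and threshold are illustrative), could look like this in Prometheus rule syntax:

```yaml
# alert-rules.yaml (illustrative): notify the team when the 5xx rate exceeds 5% for five minutes
groups:
  - name: platform-alerts
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "More than 5% of requests are failing with 5xx responses"
```

Alertmanager then matches on labels such as `severity` and routes the alert to the configured email or Slack receiver for the responsible team.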

Let us modernize your deployment process

The CI/CD pipeline has significantly improved deployment efficiency, cutting deployment times from hours to minutes, while Kubernetes’ dynamic scaling keeps performance consistent during peak loads. Fault tolerance minimizes service interruptions, and the architecture establishes a solid foundation for the move to a fully distributed microservice and micro-frontend ecosystem, which we will explore in detail in the next post in this series.

Partner with us to build a CI/CD pipeline that ensures seamless deployments and prepares your platform for a scalable, distributed architecture. Contact us today to get started.
