Systems Architecture for Online Self-care – Monolithic

Client Server Systems

In the present day, there is a widespread migration of systems design from monolithic architectures to microservices-driven architectures. Hard lessons learned from monolithic implementations have driven the need for more decoupled components and services that enable faster deployments.

Nonetheless, various industries are not suited to the new “fast-paced” environment. For a huge number of solutions, there is still the need to perform extensive design, development and testing, relying more on a waterfall model for solution deployment. Certainly, such solutions usually command immense budgets and can carry human-safety requirements, such as the development of jet engines and spacecraft.

This post falls into the monolithic segment of solutions and presents the architecture for a use case – online self-care, a very common service provided by various corporations, such as banks and service providers like telecoms. It is not representative of all that is out there, but it showcases a sample real-life architecture. The figure below presents the architecture.

Online Self-care Architecture (monolithic architecture)

The architecture consists of three main zones as follows:

  1. Internet-facing
  2. De-militarized zone (DMZ)
  3. Internal network

The zoning serves one important purpose: security. Between each zone, various firewalls filter all forms of traffic. I do not go into details in this post, as its purpose is simply to present the architecture for those who might be faced with designing something similar. It is assumed that once you arrive at the preferred architecture, you would do some further study on the details that need to be dealt with at the integration points.
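To make the zoning idea concrete, here is a minimal sketch of zone-to-zone traffic filtering. The zone names, ports and rules are illustrative assumptions, not taken from the actual deployment described in the post:

```python
# Minimal sketch of zone-to-zone firewall filtering between the three zones.
# Zone names, ports and permitted flows are illustrative assumptions.
ALLOWED_FLOWS = {
    # (source zone, destination zone): set of permitted destination ports
    ("internet", "dmz"): {443},             # HTTPS only from the internet
    ("dmz", "internal"): {8443},            # web servers -> middleware/API layer
    ("internal", "internal"): {1521, 8080}  # internal systems and databases
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Return True if traffic from src zone to dst zone on this port is permitted."""
    return port in ALLOWED_FLOWS.get((src, dst), set())
```

The key property is the default-deny stance: any flow not explicitly listed, such as internet traffic straight to the internal network, is rejected.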

For professionals working on cloud systems, such as AWS, Azure and GCP, there are a few common components that would seem familiar, such as the load balancer. These days, projects that do not require substantial design and waterfall-type management rely on a cloud-deployed load balancer.

The internet-facing zone enables users to call the URL from a common browser. The main security requirement here is HTTPS, to encrypt communication. Then a load balancer comes along to distribute the load to two web servers. To ensure, or improve, availability, the load balancer has a partner fail-over load balancer (one of many on-premise baggages!). The load balancer is also sticky, to ensure that users are served by the same servers they started with.
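Stickiness (session affinity) can be implemented in several ways; a common one is hashing a session identifier so the same user consistently lands on the same web server. A minimal sketch, with made-up server names:

```python
import hashlib

# Sketch of sticky (session-affinity) load balancing across two web servers.
# Server names are illustrative; real load balancers often use cookies or
# source-IP hashing instead.
WEB_SERVERS = ["web-1", "web-2"]

def pick_server(session_id: str, servers=WEB_SERVERS) -> str:
    """Hash the session ID so the same session always maps to the same server."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Because the mapping is a pure function of the session ID, every request in a session reaches the same server without the balancer keeping any per-session state.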

In a SOA middleware-driven environment, there would be the API layer and the middleware itself, which hosts the enterprise services. These form a layer above the internal systems and provide a platform to build business processes via BPEL (Business Process Execution Language).
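BPEL processes themselves are defined in XML, but the shape of what the middleware does is easier to see in a language-neutral sketch: an enterprise service that composes calls to internal systems into one business process. The service names and stub data below are hypothetical:

```python
# Sketch of the kind of business process a SOA middleware layer orchestrates,
# e.g. a "view my bill" self-care operation composing two internal services.
# The inner functions are stand-ins for real enterprise service endpoints.

def get_customer(customer_id: str) -> dict:
    """Stub for a customer-care system lookup."""
    return {"id": customer_id, "name": "Jane Doe"}

def get_balance(customer_id: str) -> float:
    """Stub for a billing system query."""
    return 42.50

def view_my_bill(customer_id: str) -> dict:
    """Orchestrate the internal services into one self-care response."""
    customer = get_customer(customer_id)
    return {"customer": customer["name"], "balance": get_balance(customer["id"])}
```

The point is the layering: the web tier never talks to billing or customer care directly; it calls one composed service exposed by the middleware.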

Lastly, you have the internal systems that serve billing, charging, customer care, accounts, etc. Each could have its own separate database, or they could all be connected to a single enterprise database if that was planned from the start, or in the process of a migration.
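The two data layouts just mentioned can be sketched as a simple resolution step; the system and database names here are illustrative:

```python
# Sketch of the two data layouts: per-system databases vs a single shared
# enterprise database. System and database names are illustrative.
PER_SYSTEM_DBS = {
    "billing": "billing_db",
    "charging": "charging_db",
    "customer_care": "crm_db",
}
SHARED_DB = "enterprise_db"

def database_for(system: str, consolidated: bool = False) -> str:
    """Resolve which database a system talks to under either layout."""
    return SHARED_DB if consolidated else PER_SYSTEM_DBS[system]
```

A migration to the consolidated layout amounts to flipping that flag system by system once each dataset has been moved.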

This is a usual design, one of many for this use case, in a large enterprise where all the servers and systems are deployed on-premise or within managed data centres. Though I have highlighted only the “on-premise” baggage earlier, there are various other bottlenecks and challenges involved in this architecture. We have only seen the hardware architecture here. However, there are separate concerns that have helped drive the need for microservices architectures. I shall discuss these in future posts, which should show why microservices are a better option in cases like this. Nonetheless, I maintain that microservices are not always the best option. Thanks for reading.
