Let’s first start with a misconception: “microservices are only an IT topic”. False! Implementing a microservices architecture does help solve IT problems, yes, but it can also considerably benefit the business side of a company. Let’s see how it is all linked together.
Legacy Application
A legacy platform is heavy to maintain, hard to evolve, and can run at a high cost. It often consists of one big database managing everything, with close to zero isolation between components, which ultimately turns the application into one big block. The results?
- Long release cycles: making updates, changes, and developing new features on highly coupled and interdependent infrastructure and software undoubtedly leads to more complexity, and thus to longer release cycles.
- Operational issues: as the previous point suggests, managing a monolithic legacy application is complex, and that complexity translates into more operational issues over time.
- Recurring bugs: the risk of accumulating technical debt in such a context is high - every change, every fix on the system risks jeopardizing other components.
In today’s world, if a Product Owner asks their team for a new feature that is relatively simple to develop and implement, it will not be “acceptable” to wait months for that feature to be client-ready. The impact here is multiple: customers wait longer to use new features, the go-to-market time increases, and delivering value to clients takes longer, which in the end hurts the business.
The impact on employees should not be minimized either: working with such systems can be harmful in the long run, leading the team to adopt a more conservative mindset, resist change, and lose agility.
Microservices approach and mindset
The microservices approach changes things starting from a people perspective - again, the scope of such an approach is not only “tech” but also people-related. In the context of a monolithic application, teams are split by technical capability: there is a Database Administrator team, a Back-End team, a Front-End team, etc. In the context of microservices, teams are organized around business capabilities: a User Management team, a Payment team, a Product Catalog team, etc. Each team aims to publish its own microservices and expose those services to clients, either external or internal (i.e. the other teams).
This approach pushes the different teams to gain agility, to develop services that are independent of each other, and to move toward “self-contained” features. Each service is API-based and developed with its own data model - if it makes sense - in order to better fit the use case. For example, we can use a NoSQL database to manage a product catalog and a relational database for user management: totally doable in a microservices-oriented architecture, and close to impossible in a monolithic application. A minimal sketch of this idea follows.
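To make this concrete, here is a minimal Python sketch of two teams each owning their own data store - SQLite standing in for a relational user database, and a plain dict standing in for a NoSQL product catalog. All names are illustrative, not a prescribed implementation:

```python
import sqlite3

# User Management service: relational model (SQLite stands in for a production RDBMS)
class UserService:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")

    def create_user(self, email):
        cur = self.db.execute("INSERT INTO users (email) VALUES (?)", (email,))
        self.db.commit()
        return cur.lastrowid

# Product Catalog service: document model (a dict stands in for a NoSQL store)
class CatalogService:
    def __init__(self):
        self.documents = {}

    def add_product(self, product_id, document):
        # Documents can have heterogeneous shapes, which suits a catalog well
        self.documents[product_id] = document

# Each service owns its data; other teams go through its API, never its storage
users = UserService()
catalog = CatalogService()
users.create_user("jane@example.com")
catalog.add_product("sku-42", {"name": "Keyboard", "tags": ["wireless", "compact"]})
```

The point is not the storage technology itself but the ownership boundary: no other service reads these tables or documents directly.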
Microservices will also push you a step toward automation. They encourage continuous delivery by design: since each service is independent, each feature can have its own dedicated automated deployment pipeline, managed end-to-end by the team responsible for it. Why is this highly recommended? Simply because the number of microservices adds up as the features and capabilities of a product increase, and deploying a large number of services manually every time becomes a real pain to run and operate in the long term. We can see this as a limitation or as an opportunity to take the direction of automation :-) I clearly see it as a way to explore automated test-driven development, ease operations by implementing controls and automated deployment pipelines, and avoid too many manual operations. A rough sketch of such a per-service pipeline step follows.
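As an illustration, here is a hedged Python sketch of the deployment step such a pipeline could automate for each service, assuming the services ship as Docker images to Amazon ECS - the registry URL, cluster, and service names are placeholders, and a real pipeline would of course live in a CI/CD tool:

```python
import subprocess

# Hypothetical list of independently deployable services, one pipeline each
SERVICES = ["user-management", "payment", "product-catalog"]
REGISTRY = "123456789012.dkr.ecr.eu-west-1.amazonaws.com"  # placeholder registry

def deploy(service):
    image = f"{REGISTRY}/{service}:latest"
    # Build and push this service's container image from its own directory
    subprocess.run(["docker", "build", "-t", image, f"./{service}"], check=True)
    subprocess.run(["docker", "push", image], check=True)
    # Ask ECS to roll the service onto the freshly pushed image
    subprocess.run([
        "aws", "ecs", "update-service",
        "--cluster", "poc-cluster",  # placeholder cluster name
        "--service", service,
        "--force-new-deployment",
    ], check=True)

for service in SERVICES:
    deploy(service)
```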
A detour through containers
Let’s now put containers, and more specifically Docker, under the spotlight and understand why they are such a good fit for a microservices architecture.
Docker is a tool for running applications in an isolated environment. It provides benefits similar to running virtual machines:
- Your application always runs in the same environment, which avoids inconsistencies in how it operates. If it works on one machine, it will work the same way on any other machine or server.
- It allows better sandboxing, with a dedicated virtual machine for each stage of a project (development, testing, pre-production, production, …). This brings more security and avoids the risk of conflicts between different projects or different versions of a project.
- It makes it easier for someone else to pick up a project: no need to install all the tools and dependencies the project requires, just take the already set-up virtual machine.
Docker capitalizes on those benefits while abstracting away the burden and complexity of managing virtual machines. Instead of VMs, we have containers. The code, as well as the environment, lives in those containers, but a container still differs from a full virtual machine.
In the case of VMs, each machine has its own operating system, including the kernel - the core of the OS, which manages low-level operations. This setup makes things bulky for the machine or server hosting those VMs.
In the case of containers, each container uses the kernel of its host, so the core of the OS is shared between the host and the container. Everything above the kernel remains split - that is actually what a Linux distribution is: all its specificities live in the layers above the kernel. Docker leverages a layered (union) file system to create isolated environments, which is a compromise: the sandboxing is not as strict as with a virtual machine, but it remains sufficient for the majority of use cases. And the benefits Docker brings largely compensate: the ability to launch a container within seconds instead of minutes for a VM, savings on the resources needed to run containers, and disk-space optimization, to name only a few. A quick way to observe the shared kernel follows.
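Here is a small Python sketch to see this shared kernel for yourself (it assumes Docker is installed on a Linux host and can pull the public alpine image): the kernel release reported inside the container is the same as the host’s.

```python
import platform
import subprocess

# Kernel release as seen by the host
host_kernel = platform.release()

# Kernel release as seen inside a container: same kernel, different userland
result = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
)
container_kernel = result.stdout.strip()

print(f"host:      {host_kernel}")
print(f"container: {container_kernel}")  # identical on a Linux host: the kernel is shared
```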
There are three concepts to understand when talking about containers: the Dockerfile, the image, and the container itself.
The container is a running instance of an image. An image is a template of an environment, a snapshot taken at a specific moment: it contains the operating system, the software, and the application code, all bundled together into one file. Images are defined using a Dockerfile, a text file listing the steps to perform in order to create the image: configuring the operating system, installing the needed software, copying the project files to the right places, etc.
The process is then as follows:
A Dockerfile is written to “BUILD” an image that can then be “RUN” to get an actual container.
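The same BUILD/RUN flow can be scripted with the Docker SDK for Python (the docker package); this minimal sketch assumes a Dockerfile sits in the current directory, and the image tag is arbitrary:

```python
import docker  # pip install docker

client = docker.from_env()

# BUILD: turn the Dockerfile in the current directory into an image
image, build_logs = client.images.build(path=".", tag="demo-service:latest")

# RUN: start a container, i.e. a running instance of that image
container = client.containers.run("demo-service:latest", detach=True)
print(container.id)

# Stopping and removing the container leaves the image (the template) untouched
container.stop()
container.remove()
```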
Architecture
How do we put all of this in place in a Cloud/AWS context? Let’s build a simple PoC!
We will leverage three key AWS services:
- Amazon API Gateway: the single entry point for all our APIs and microservices defined in the back end - those services can be either internal or external. It adds an abstraction layer in front of each service: a client will, or will not, be able to access a service depending on rules defined beforehand.
- Amazon Cognito: it will handle everything related to the user registry. With Cognito, there is no need for a dedicated database or an LDAP configuration: simply put the users (human or machine) inside Amazon Cognito, and API Gateway will manage authentication against it. Once a request is allowed, it is forwarded to Fargate.
- AWS Fargate: a service that runs containers without the need for a dedicated, manually operated server. Fargate also manages scalability - up and down depending on the number of requests sent to the containers. A small wiring sketch follows this list.
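To give an idea of how these pieces connect, here is a hedged boto3 sketch that creates a Cognito user pool and attaches it as an authorizer on an API Gateway REST API - all names and the region are placeholders, and a real PoC would also define resources, methods, and the integration toward Fargate:

```python
import boto3

REGION = "eu-west-1"  # placeholder region
apigw = boto3.client("apigateway", region_name=REGION)
cognito = boto3.client("cognito-idp", region_name=REGION)

# The user pool is our user registry: no dedicated database, no LDAP
pool = cognito.create_user_pool(PoolName="poc-users")
pool_arn = pool["UserPool"]["Arn"]

# The REST API is the single entry point in front of the microservices
api = apigw.create_rest_api(name="poc-api")

# Cognito authorizer: API Gateway validates tokens before forwarding requests
apigw.create_authorizer(
    restApiId=api["id"],
    name="cognito-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[pool_arn],
    identitySource="method.request.header.Authorization",
)
```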
Here is how the architecture will look:
Conclusion
Microservices are not only about tech! They are a way to bring businesses more agility and faster time-to-market, to push toward automation, and to drive innovation. Leveraging the power of the Cloud gives you all the tech stack needed to develop such an architecture in the best possible way. Planning your transition to a microservices approach ahead of time is the first key to a successful implementation - don’t hesitate to reach out to me if you need any additional information or advice on the topic, I will be happy to share my experience with you!
Resources
Don’t hesitate to check out the full video on the subject below.
- https://aws.amazon.com/cognito/
- https://aws.amazon.com/api-gateway/
- https://aws.amazon.com/fargate/
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
- https://martinfowler.com/articles/microservices.html