Microservices Architecture for the Enterprise

Apollo Software Labs
Jan 29, 2020 · 4 min read


The motivation to build a microservices architecture, as opposed to a monolith, can come from several directions:

  • Scale individual services independently
  • Deploy smaller changes more frequently
  • Break different domains out to different teams

Other benefits

  • Allows UI devs to focus on front-end Angular development without worrying about the middle and backend tiers.
  • Lets Java devs be Java devs without making them worry about the UI.
  • Quicker build and deploy pipeline runs.
  • Never a blank screen. One or more microservices may fail, leaving some user functions unavailable, but the user will never see a blank screen.

How do we implement a microservices architecture while presenting a single web application to the customer?

Technical Considerations

Microservices Architecture
  • See Microservice Boundaries for tips on decomposing a system into a set of microservices.
  • Use the OAuth2 Authorization Code grant type, since OAuth2 has become the de facto authentication/authorization mechanism.
  • Consider the API Gateway pattern to handle the OAuth2 interaction with an OAuth2 server such as Cognito, Auth0, FusionAuth, or Okta. Other concerns, such as auditing, can also be implemented in this layer.
    Here is an excellent article from FusionAuth that will help you implement the OAuth2 interaction in the gateway layer using NodeJS in, literally, 5 minutes.
  • Perform aggregation of data from multiple microservices, when needed, in the API Gateway. You can also use GraphQL here to fetch data from multiple microservices; the GraphQL data fetcher pattern also helps avoid N+1 calls from the front end (a GraphQL sketch follows this list).
    Avoid having a microservice call another microservice directly.
  • When the gateway and microservices are deployed to the same Kubernetes cluster, use Kubernetes service addresses for communication between the gateway orchestrator and the microservices. This keeps all traffic within the cluster, so you can use plain HTTP or install certificates to secure it with HTTPS.
  • Traffic from users to Web UI and to the Web Gateway should always use HTTPS.
  • Use Redis or another backend available to the gateway for session persistence. If you are using NodeJS, you can get away with the cookie-session library for session persistence, as long as your cookie contents won't exceed the cookie size limit.
  • When a call is made from the front end to an endpoint on the Gateway, the Gateway extracts the access token from the session store or cookie and passes it as a Bearer token in the header when invoking the backend REST microservices. User identity attributes and roles in the token allow the microservices to incorporate fine-grained authorization checks (a NodeJS gateway sketch follows this list).
  • Configure your backend microservices to validate the token. If you are using Java, the Spring Security OAuth2 support will help you implement this very easily; a Node-based validation sketch follows this list.
  • Use a Lambda Authorizer or an Okta Token Inline Hook if you need to inject custom user attributes that are not available in the identity provider. For example, your identity provider may have basic user information such as first name, last name, and email. If your authorization checks rely on role information governed by another system, an Okta Token Inline Hook that calls out to that system can help you get those custom attributes and roles into the access token.
  • Use Kafka publish/subscribe for asynchronous messaging between microservices. Pass user information as part of the message headers to identify which user triggered the message (a kafkajs sketch follows this list).
    Since we are avoiding having one microservice call another directly, if you have a business transaction that spans multiple microservices, orchestrate it at the API Gateway layer or implement long polling. Long polling lets you initiate the business transaction by invoking the first microservice, then wait while that microservice publishes a request to the next one via Kafka and receives the result back via Kafka.
  • Sync common entities shared between microservices using either Kafka Pub/Sub or Kafka Connect with Debezium Connector or AWS DMS.
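
To make the aggregation point concrete, here is a minimal sketch of a GraphQL data fetcher running in the API Gateway. It assumes Apollo Server and two hypothetical in-cluster services, customers-service and orders-service; the schema, routes, and payloads are illustrative, not prescriptive.

```typescript
// Sketch only: GraphQL aggregation at the API Gateway using Apollo Server.
// "customers-service" and "orders-service" are hypothetical Kubernetes
// service addresses; replace them with your own service names and payloads.
import { ApolloServer, gql } from "apollo-server";
import fetch from "node-fetch";

const typeDefs = gql`
  type Order {
    id: ID!
    total: Float
  }
  type Customer {
    id: ID!
    firstName: String
    lastName: String
    orders: [Order]
  }
  type Query {
    customer(id: ID!): Customer
  }
`;

const resolvers = {
  Query: {
    // The front end makes a single GraphQL call to the gateway...
    customer: async (_: unknown, { id }: { id: string }) =>
      (await fetch(`http://customers-service/customers/${id}`)).json(),
  },
  Customer: {
    // ...and the gateway fans out to the orders service here, so the browser
    // never has to make the extra N+1 calls itself.
    orders: async (customer: { id: string }) =>
      (await fetch(`http://orders-service/orders?customerId=${customer.id}`)).json(),
  },
};

new ApolloServer({ typeDefs, resolvers })
  .listen({ port: 4000 })
  .then(({ url }) => console.log(`Gateway GraphQL endpoint ready at ${url}`));
```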
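
For the session handling and token relay, a minimal NodeJS gateway sketch might look like the following. It assumes the OAuth2 Authorization Code flow has already stored the access token in the session (as covered in the FusionAuth article mentioned earlier); the /api/orders route and the orders-service address are made up for illustration.

```typescript
// Sketch only: the gateway relays the user's access token to a backend
// microservice. Assumes the OAuth2 callback already put accessToken in the
// session; route and service names are illustrative.
import express from "express";
import cookieSession from "cookie-session";
import fetch from "node-fetch";

const app = express();

// Session persistence via signed cookies; swap for a Redis-backed store
// if the session contents outgrow the cookie size limit.
app.use(cookieSession({ name: "session", keys: ["replace-with-a-real-secret"] }));

app.get("/api/orders", async (req, res) => {
  // The browser never sees the access token; the gateway pulls it from the
  // session and attaches it as a Bearer token for the backend call.
  const accessToken = req.session?.accessToken;
  if (!accessToken) {
    return res.status(401).json({ error: "not logged in" });
  }

  // In-cluster Kubernetes service address keeps this hop inside the cluster.
  const upstream = await fetch("http://orders-service/orders", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```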
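
The article points to Spring for token validation in Java services; if a backend microservice happens to be Node-based instead, the same check can be sketched with the jose library against the identity provider's JWKS endpoint. The issuer, audience, and JWKS URL below are placeholders.

```typescript
// Sketch only: validating the incoming Bearer token in a Node-based
// microservice using the "jose" library. Issuer, audience, and JWKS URL are
// placeholders for your identity provider's actual values.
import express from "express";
import { createRemoteJWKSet, jwtVerify } from "jose";

const JWKS = createRemoteJWKSet(
  new URL("https://your-idp.example.com/.well-known/jwks.json")
);

const app = express();

app.use(async (req, res, next) => {
  const token = (req.headers.authorization ?? "").replace(/^Bearer /, "");
  try {
    const { payload } = await jwtVerify(token, JWKS, {
      issuer: "https://your-idp.example.com/",
      audience: "orders-api",
    });
    // Roles and identity attributes from the token drive the service's
    // fine-grained authorization checks.
    (req as any).user = payload;
    next();
  } catch {
    res.status(401).json({ error: "invalid or expired token" });
  }
});

app.get("/orders", (req, res) => {
  res.json({ orders: [], user: (req as any).user?.sub });
});

app.listen(8080);
```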
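
For the asynchronous messaging point, a sketch with the kafkajs client shows how the triggering user's identity can ride along in the message headers; the broker address, topic name, and header key are made up for illustration.

```typescript
// Sketch only: publishing a domain event with kafkajs, carrying the
// triggering user's identity in the message headers. Broker address,
// topic name, and header key are illustrative.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "orders-service", brokers: ["kafka:9092"] });
const producer = kafka.producer();

export async function publishOrderCreated(order: { id: string }, userId: string) {
  await producer.connect();
  await producer.send({
    topic: "orders.order-created",
    messages: [
      {
        key: order.id,
        value: JSON.stringify(order),
        // Identifies which user triggered this message, for downstream auditing.
        headers: { "x-user-id": userId },
      },
    ],
  });
}
```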

Challenges

When designing domain-driven microservices, you will quickly find that although your domains may look independent at the outset, common entities will be needed across different domains.

For example, you may have a domain-driven microservice to manage customers that is responsible for adding new customer accounts and updating them. But your orders system will also need some basic customer information to display past orders, etc.

A simple approach to this problem is to allow duplicate entities to exist across domains. Otherwise, a microservice cannot fetch related data using a simple join, and you would be introducing a REST call from one microservice to another.

When duplicate entities exist across domains, clearly identify the service that owns each entity and make that service responsible for sending out notifications for any new entities or updates to existing ones. The other services need to subscribe to these notifications and update their own copies, as sketched below.
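
As an illustration of that ownership pattern, the orders service might subscribe to customer-change events published by the customer service and update its own local copy. The topic name, consumer group, and persistence call below are hypothetical.

```typescript
// Sketch only: the orders service keeps its local copy of customer data in
// sync by subscribing to events owned by the customer service. Topic name,
// consumer group, and the persistence call are hypothetical.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "orders-service", brokers: ["kafka:9092"] });
const consumer = kafka.consumer({ groupId: "orders-service-customer-sync" });

async function upsertLocalCustomer(customer: { id: string; name?: string; address?: string }) {
  // Replace with a write to the orders service's own datastore.
  console.log("syncing local customer copy", customer.id);
}

export async function startCustomerSync() {
  await consumer.connect();
  await consumer.subscribe({ topic: "customers.customer-updated", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;
      await upsertLocalCustomer(JSON.parse(message.value.toString()));
    },
  });
}
```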

You may also have a situation where your web application pulls that small slice of common data from different services. An update to a customer's address will then show up immediately for consumers of the customer service, but may not appear, without some sort of refresh, in another area of the web application where the address is fetched as secondary data from another microservice.

Originally published at https://dev.to on January 29, 2020.
