Azure Application Gateway with Service Fabric

Lili Xu
4 min read · Nov 9, 2020

--

This post covers different approaches to configuring Azure Application Gateway as an L7 load balancer for a Service Fabric cluster.

L7 Load Balancer

By default, when a Service Fabric cluster is provisioned, the cluster and its virtual machine scale set (VMSS) are associated with an L4 load balancer. It exposes the ports, binds to a static DNS name, and takes in external traffic.

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-connect-and-communicate-with-services#service-fabric-in-azure

Because the default load balancer operates at the transport layer, it does not support TLS termination, path-based routing, session cookies, or any of the advanced features that only an application-layer load balancer can provide.

If the service is a web service, calling different ports could cause CORS issues. Meanwhile, exposing the reverse proxy to take external calls would make every microservice in the cluster that exposes an HTTP endpoint addressable from outside the cluster, which is not recommended, as stated here.

Forward to Internal L4 Load Balancer

As stated in the official Service Fabric docs, Azure Application Gateway (AAG) isn't directly integrated with Service Fabric, so AAG has no feature for resolving Service Fabric service addresses.

A quick solution is to deploy the cluster with the built-in internal load balancer, add load-balancing rules for all service ports, and set the AAG backend pool target to the internal load balancer's address.
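The backend pool step above can be sketched with the Azure CLI; the resource group, gateway name, and pool name below are placeholders, and 10.0.0.10 stands in for the internal LB's front-end IP:

```shell
# Point an AAG backend pool at the internal load balancer's private IP
# (hypothetical resource names; substitute your own).
az network application-gateway address-pool create \
  --resource-group my-rg \
  --gateway-name my-appgw \
  --name sf-internal-lb-pool \
  --servers 10.0.0.10
```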

To achieve end-to-end SSL encryption between AAG and the nodes, we need a certificate with CN=10.0.0.10 (the internal LB IP; we could define any IP in the same subnet) bound to the service listeners. In the AAG HTTP settings, we also need to add the CA certificate's public key as a trusted root, since we usually cannot get a certificate with an IP address as the subject signed by a public CA.
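As a sketch, a certificate with the internal LB IP as its CN (and SAN) can be generated with openssl; in practice you would issue it from a private CA and upload that CA's root certificate to the AAG HTTP settings as a trusted root:

```shell
# Self-signed example only: CN and SAN both set to the internal LB IP.
# Requires OpenSSL 1.1.1+ for -addext.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout lb.key -out lb.crt -days 365 \
  -subj "/CN=10.0.0.10" \
  -addext "subjectAltName=IP:10.0.0.10"

# Inspect the subject to confirm the CN.
openssl x509 -in lb.crt -noout -subject
```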

With path-based routing, it's also possible to serve the original reverse proxy request without a port number: the call goes through AAG's port 443 and is routed by path to 10.0.0.10:19081, the reverse proxy port opened on the internal load balancer.
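A minimal sketch of that routing with the Azure CLI follows; all names and the /api/* path are hypothetical, and the gateway is assumed to already have a listener on port 443:

```shell
# HTTP settings targeting the reverse proxy port on the internal LB.
az network application-gateway http-settings create \
  --resource-group my-rg --gateway-name my-appgw \
  --name reverse-proxy-settings --port 19081 --protocol Https

# Path map sending /api/* to the internal-LB backend pool via those settings.
az network application-gateway url-path-map create \
  --resource-group my-rg --gateway-name my-appgw \
  --name sf-path-map --rule-name api-rule --paths "/api/*" \
  --address-pool sf-internal-lb-pool \
  --http-settings reverse-proxy-settings
```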

The risk of this solution is the loss of session stickiness. Because the request is forwarded to another load balancer, additional session affinity would have to be established between the internal LB and the target node, but there is no way for AAG to pass its session info to that L4 internal LB to influence the target selection.

If the service is a web service that relies on a cookie to identify the user, or the service is not entirely stateless across user requests, this solution will not work.

Forward to VMSS Nodes

There is another solution: forward the request from AAG to the VMSS nodes directly.

Since AAG does not fully integrate with Service Fabric, it has no information about whether a service is live on a given node, so each service needs to have an instance on every node.

In the Service Fabric service manifest, set the instance count to -1 to use the dynamic instance count. InstanceCount = -1 is an instruction to Service Fabric that says “Run this stateless service on every node.”

<StatelessService ServiceTypeName="ServiceType" InstanceCount="-1" />

In AAG's backend pool settings, add the VMSS in the subnet where the SF cluster is hosted, instead of the internal LB IP.

There would be a TLS verification problem if we directly forward requests to the VMSS nodes. Each node now has a different IP address from AAG's perspective, and if all the services still listen with one certificate sharing the same common name, AAG will throw the error "Backend certificate invalid common name (CN)".

Hence, we need to manually override the host name and Server Name Indication (SNI) to match the TLS certificate hosted by the backend server, so that AAG checks the server's certificate against the overridden host info.
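The host-name override can be sketched with the Azure CLI; the resource names, the host name, and the root certificate file below are placeholders for your own values:

```shell
# Upload the CA root cert that signed the services' listener certificate.
az network application-gateway root-cert create \
  --resource-group my-rg --gateway-name my-appgw \
  --name sf-root-cert --cert-file ./ca-root.cer

# Override the host name sent to the backend (also used for SNI) so it
# matches the CN of the certificate the services listen with.
az network application-gateway http-settings update \
  --resource-group my-rg --gateway-name my-appgw \
  --name sf-node-settings \
  --host-name mycluster.internal.contoso.com \
  --root-certs sf-root-cert
```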

The default health probe will automatically pick up the backend address from the overridden domain name. We could also set up a custom health probe.
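A custom probe that inherits the host name from the HTTP settings could look like this sketch (names and the /health path are hypothetical):

```shell
# Custom HTTPS probe; --host-name-from-http-settings makes it reuse the
# overridden host name configured on the HTTP settings.
az network application-gateway probe create \
  --resource-group my-rg --gateway-name my-appgw \
  --name sf-probe --protocol Https --path /health \
  --host-name-from-http-settings true \
  --interval 30 --timeout 30 --threshold 3
```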

The health check is now expected to be healthy after the configuration.

With this solution, AAG forwards the request directly to the target node and the port the service listens on. Session affinity between the client and the target service is achieved as well.
