
Snapt Aria Active-Active Redundancy Deployment Guide for Azure

Deployment Steps

  • Create an availability set.
  • Create two Snapt Aria VMs in the same Azure resource group, both part of the availability set.
  • Create an Azure Load Balancer (Basic SKU) with the Snapt Aria ADCs as back ends.
  • Set up Snapt Aria redundancy/replication.

Create availability set


The availability set created in this step will be used when setting up the Snapt Aria ADC VMs in the next step.
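If you prefer the Azure CLI, the same step can be sketched as follows. The resource group name, availability set name, and region are placeholders, not values from this guide:

```shell
# Placeholder names (SnaptRG, SnaptAvSet) and region -- substitute your own.
az group create --name SnaptRG --location eastus

# Two fault/update domains is sufficient for a two-node ADC pair.
az vm availability-set create \
  --resource-group SnaptRG \
  --name SnaptAvSet \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 2
```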

Deploy Snapt Aria ADC VMs via the Azure Marketplace.

Enter your virtual machine details. Use the resource group and availability set created in the previous step.

Add your virtual machine to a network. A network interface card (NIC) will automatically be added when adding a virtual network.

Repeat the Virtual Machine creation steps until you have at least two Snapt Aria ADCs.

Create an Azure Load Balancer

Ensure that the load balancer is created in the same resource group as the Snapt Aria ADC VMs created in the previous step.
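As a rough Azure CLI equivalent (the guide itself uses the portal; all resource names below are placeholders), a Basic-SKU load balancer with a public front end and an empty back-end pool might be created like this:

```shell
# Placeholder names throughout -- substitute your own resource names.
az network lb create \
  --resource-group SnaptRG \
  --name SnaptLB \
  --sku Basic \
  --public-ip-address SnaptLBPIP \
  --frontend-ip-name SnaptFrontend \
  --backend-pool-name SnaptBackendPool
```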


Next, create a back-end pool on the Azure load balancer.

Select the same virtual network used to create the Snapt Aria ADC VMs.


Select “Associated to” and add the previously created Snapt Aria ADC VMs to the back-end pool.

Create a load balancer health probe to monitor the status of the Snapt Aria ADC VMs.

  • Add a health probe (Load Balancers->”LB name”->Health Probes)
    • Set a name
    • Set the protocol to TCP
    • Set the port to the one the Snapt Aria ADC will be listening on for web services (commonly 80/443).
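The same probe can be sketched with the Azure CLI; the load balancer and probe names are placeholders:

```shell
# TCP probe on port 80 -- the port the Snapt Aria ADCs listen on for web traffic.
az network lb probe create \
  --resource-group SnaptRG \
  --lb-name SnaptLB \
  --name snapt-http-probe \
  --protocol tcp \
  --port 80
```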

Add load balancing rules (Load Balancers->”LB name”->Load Balancing Rules). These rules allow the Azure load balancer to send traffic to the Snapt Aria ADC VMs.

  • Give it a name
  • Set the protocol to TCP
  • Set the port and back-end port (80 in this example for HTTP traffic)
  • Select the back-end pool containing the Snapt Aria ADC VMs
  • Select the health probe using the same port and protocol as the load balancing rule.
  • Set session persistence to None (for HTTP traffic we will use cookies to track sessions on the Snapt Aria ADCs)
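The rule above might look like this in the Azure CLI (a sketch; resource names are placeholders, and `--load-distribution Default` corresponds to session persistence “None”):

```shell
# Port 80 on the front end forwards to port 80 on the back-end pool,
# health-checked by the TCP probe and with no session persistence.
az network lb rule create \
  --resource-group SnaptRG \
  --lb-name SnaptLB \
  --name Port80 \
  --protocol tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name SnaptFrontend \
  --backend-pool-name SnaptBackendPool \
  --probe-name snapt-http-probe \
  --load-distribution Default
```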

Create inbound Network Address Translation (NAT) rules to access the Snapt Aria ADC user interface.

Each Snapt Aria ADC will need a NAT rule forwarding to port 8080.

Create a NAT rule for the primary Snapt Aria ADC VM:

(Load Balancers->”LB name”->Inbound NAT Rules)

  • Set a name
  • Select the LB PIP (Public IP)
  • Leave the service as “Custom”
  • Set the protocol to TCP
  • Choose a free port (we use port 9000 in this example)
  • Select the primary Snapt Aria ADC as the target VM
  • Select the local IP of the Snapt Aria ADC VM under Network IP Configuration (10.0.2.4 in this example)
  • Set “Port mapping” to Custom
  • Finally, set the internal port for the Snapt UI (default 8080 for HTTP or 8081 for HTTPS)
  • Click Add

  • Create a NAT rule for the secondary Snapt Aria ADC VM:
    • Set a name
    • Set the port (9001 here; any open port can be used)
    • Select the secondary Snapt Aria ADC VM as the target virtual machine
    • Select its Network IP Configuration (10.0.2.5 in this example)
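Both NAT rules can be sketched with the Azure CLI; the rule names, NIC names, and ip-configuration name below are placeholders:

```shell
# One NAT rule per ADC: front-end ports 9000/9001 both forward to UI port 8080.
az network lb inbound-nat-rule create \
  --resource-group SnaptRG --lb-name SnaptLB \
  --name snapt-ui-primary --protocol tcp \
  --frontend-port 9000 --backend-port 8080

az network lb inbound-nat-rule create \
  --resource-group SnaptRG --lb-name SnaptLB \
  --name snapt-ui-secondary --protocol tcp \
  --frontend-port 9001 --backend-port 8080

# Attach each rule to the matching VM's NIC ip-configuration
# (NIC and ip-config names are placeholders).
az network nic ip-config inbound-nat-rule add \
  --resource-group SnaptRG --nic-name snapt-primary-nic \
  --ip-config-name ipconfig1 --lb-name SnaptLB \
  --inbound-nat-rule snapt-ui-primary
```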

Set up Snapt Aria Redundancy/Replication

  • Connect to your Aria VMs via the LB Public IP (PIP) on the NAT ports just created (ports 9000 and 9001 in this case).

  • Enter your Snapt Aria login credentials and select the appropriate license for each VM.
  • Log in to each of them using the default user and password (admin/admin). (We highly recommend changing the default credentials before continuing.)
  • Install all required plugins on each VM. Note: the Redundancy V2 plugin is required.
  • Configure the Redundancy plugin:
    • For Azure, or any other configuration where the Snapt Aria ADCs run in active/active mode, you only need to set up the replication portion of the Redundancy plugin; replication ensures the configs stay in sync across both nodes.
    • On the primary Snapt Aria ADC server, navigate to Setup->Redundancy V2->Settings.
    • Set the password, select the primary network interface (e.g. eth0), and ensure the primary Snapt Aria ADC is set as master. Then click save.
    • Now navigate to the local replication page on the master node (Setup->Redundancy V2->Local Replication). Ensure all of the boxes corresponding to the configs you wish to sync with the other nodes are checked, then click the save button on the right-hand side.
  • At the bottom of the same page, add the slave node.
    • Enter the slave IP address (we can use the local address of the slave, as both VMs are attached to the same subnet)
    • Set the slave port: 8080 for HTTP or 8081 for HTTPS (both are active and available by default)
    • Select the protocol: again, HTTP for port 8080 and HTTPS for port 8081
    • Finally, retrieve the slave key from your secondary VM by navigating to Setup->Redundancy V2->Local Replication on the slave node.
  • Fill in the slave key and click “Add slave”.
  • After a couple of minutes, you should see configs replicating to the slave. (Check the Log tab on the master or slave to confirm that everything is in sync.)
  • NOTE: You do not need to start the Redundancy service for replication to work. Replication takes place automatically once the slave node is connected.
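A quick way to confirm that each Aria UI is reachable through the load balancer’s NAT ports is a pair of curl requests; the public IP below is a placeholder for your LB PIP:

```shell
LB_PIP="203.0.113.10"   # placeholder: your Azure LB public IP

# Each request should print an HTTP status code rather than time out.
curl -sk -o /dev/null -w "primary UI:   %{http_code}\n" "http://$LB_PIP:9000/"
curl -sk -o /dev/null -w "secondary UI: %{http_code}\n" "http://$LB_PIP:9001/"
```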

Example: Creating a Highly Available HTTP load balancer

  • Using the “Create a Load Balancer Wizard” (Balancer->Create a Load Balancer), select the standard HTTP Load Balancer.
  • Give the LB a name.
  • Set the IP address to “Any” (0.0.0.0). Note: for Aria HA deployments in Azure we recommend “Any”, as it simplifies the config: each Aria node has a different internal IP.
  • Set the listening port. (Ensure that you have a load balancing rule set up for this port in your Azure Load Balancer config; we are using the Port80 rule created earlier.)
  • Add your back-end servers and click Add Wizard Group.

  • Now reload the balancer to make the newly added service live.

The replication log on the slave shows that the HTTP load balancer just created has been automatically replicated. The service is now live on both Snapt Aria ADCs and can be reached through the Azure LB PIP.
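To verify the replicated service end to end, you can request it through the Azure LB public IP on the load-balanced port (the IP below is a placeholder):

```shell
LB_PIP="203.0.113.10"   # placeholder: your Azure LB public IP

# Repeated requests should succeed regardless of which ADC node answers.
for i in 1 2 3 4; do
  curl -s -o /dev/null -w "%{http_code}\n" "http://$LB_PIP:80/"
done
```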

NOTE: To use specific internal IP addresses, you need to create an additional back-end pool pointing to those IPs.

Updated on June 7, 2021

