ASA Firewalls | High Availability with Clustering
So, you know you need high availability in the network, and you've decided that ASA clustering is worth a closer look. The good news is that I can help you with that. Clustering is one of the best high availability methods for the ASA. All ASAs in the cluster are active, and if you have 5585s, they scale well.
In this video, we'll look at how ASA clustering works, and how traffic flows through the cluster. If you would like a refresher on other types of HA, I recommend checking out my 'ASA High Availability' video.
Each ASA in a cluster is a cluster member.
Try to think of a cluster as a single unit. Sure, it's made up of ASAs, but it behaves as a single unit. In this unit, one of the ASAs is the brain. It's called the 'primary', and it performs most of the control plane tasks. If you're not familiar with control plane functions, think of them as any traffic sent to or from the cluster. This is different from traffic sent through it. An example is OSPF.
All other cluster members are 'secondary' units. All cluster members form the data plane. This means that they all forward traffic.
Some features will only work on the primary ASA, as they are control plane features. These features are called centralised features. An example of this is OSPF. The primary participates in dynamic routing and builds a routing table. The primary ASA replicates the routes to the secondaries. The secondary ASAs never form neighbour relationships with other routers.
The cluster members decide on the primary by an election. You can set a priority on each ASA to influence the election. The lowest number is the highest priority. If there's a tie, serial numbers are used
to decide on the primary.
There's something that I really need to emphasise though: there is no pre-emption in a cluster. If a member joins with a better priority, it does not automatically become the primary. An election is only held if the current primary fails.
Imagine what would happen if we did have pre-emption. Remember how I said that some features are centralised? If the primary were to change, all those centralised connections, such as VPNs, would drop. They would then have to be re-established with the new primary. Not exactly ideal, is it?
Remember that we like to think of the cluster as an entire unit. But what happens if a member in that unit fails? Well, quite simply, it is removed from the cluster.
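The election rules we've just covered, with the lowest priority number winning and serial numbers breaking ties, can be sketched in a few lines of Python. This is a toy model only, not how the ASA actually implements it, and the member names and serials here are made up:

```python
# Toy model of an ASA cluster primary election (illustrative, not Cisco's code).
# Lowest priority number wins; the lowest serial number breaks a tie.

def elect_primary(members):
    """members: list of dicts with 'name', 'priority', and 'serial'."""
    return min(members, key=lambda m: (m["priority"], m["serial"]))

cluster = [
    {"name": "asa1", "priority": 10, "serial": "FCH100"},
    {"name": "asa2", "priority": 5,  "serial": "FCH200"},  # best priority, lowest serial
    {"name": "asa3", "priority": 5,  "serial": "FCH300"},  # ties on priority, loses on serial
]

primary = elect_primary(cluster)
print(primary["name"])  # asa2

# No pre-emption: a new member with a better priority does NOT trigger
# an election. Only a primary failure does.
cluster.append({"name": "asa4", "priority": 1, "serial": "FCH400"})
# primary is still asa2. Only once asa2 fails is a new election held:
cluster.remove(primary)
print(elect_primary(cluster)["name"])  # asa4
```

Note that the cluster only calls the election again after the primary is removed; that mirrors the no pre-emption behaviour described above.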
But an entire member failure isn't the only threat to the cluster. Interfaces in a cluster are critical. If an interface on the cluster fails, that will also cause the member to be removed. This is true of data interfaces as well as the cluster control link. After repairing the interfaces, the member can rejoin the cluster.
As a final note, you will find documentation that refers to 'master' and 'slave' roles. These are the same as primary and secondary in the newer documentation.
You can think of each cluster member as a line card in a chassis. Each line card is connected by the backplane.
In the ASA cluster, a group of ports is configured on a separate network, called the Cluster Control Link. This is the cluster's backplane. The CCL carries control messages, such as elections, config replication, and health monitoring. It also carries some data traffic. This includes queries about which member a particular packet should be delivered to. But, we'll look at that more later.
The CCL uses dedicated ports that connect to a switch. If the CCL goes down on a member, the member is removed from the cluster. Even in a two-member cluster, the switch connection is mandatory. Why, you ask? Well, the CCL is a critical part of the cluster. Imagine for a moment that the ASAs were directly connected. What if the CCL from one member were connected directly to another, and one member ASA failed? The CCL would go down on both members. The second member would also be removed from the cluster, and the entire cluster would fail. Connecting to a switch prevents this. If an ASA fails, the link to the switch goes down, but the links to the other ASA will stay up.
To enhance redundancy, use a vPC or VSS connection.
Every packet that passes through the cluster is classified into a connection. The source and destination IP addresses, ports, and protocol form this classification. The cluster tracks each connection, so each connection can be handled by a single ASA member.
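You can picture this classification as building a key from the packet's 5-tuple. Here's a minimal sketch with made-up addresses; the ASA's real connection table is of course far more involved than a named tuple:

```python
from collections import namedtuple

# Toy 5-tuple connection key: two packets with the same tuple belong
# to the same connection, so the cluster can pin them to one member.
ConnKey = namedtuple("ConnKey", "src_ip src_port dst_ip dst_port proto")

def classify(packet):
    """packet: dict with the five fields; returns a hashable connection key."""
    return ConnKey(packet["src_ip"], packet["src_port"],
                   packet["dst_ip"], packet["dst_port"], packet["proto"])

pkt1 = {"src_ip": "10.0.0.5", "src_port": 51000,
        "dst_ip": "203.0.113.9", "dst_port": 443, "proto": "tcp"}
pkt2 = dict(pkt1)  # another packet from the same flow

print(classify(pkt1) == classify(pkt2))  # True: same connection
```

Because the key is hashable, it's also what a member could feed into a hash function later on, which is exactly how the director is chosen.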
To do this, there are three connection-oriented roles that ASAs may hold.
The first is the owner role. Each connection is owned by a single member. All packets from a connection must pass through the connection owner. When the first packet from a new connection arrives, the member that receives the packet becomes the owner. It begins tracking the connection. A single ASA may own thousands of connections.
When the owner tracks a connection, it sends a backup copy of the connection details to another ASA. This ASA is called the director. The director maintains a backup of the connection state. There is only one director per connection. To select a director, the owner calculates a hash of the connection details. This makes it easy for other members to find the director later.
A member may get a packet for a connection
that it doesn't own. This member is called a forwarder. The forwarder calculates the hash value for the connection to find the director. The director may also be a forwarder. The forwarder then queries the director to find the owner. It forwards the packet to the owner to process.
If an owner were to fail, you may think that the director will automatically become the new owner. If you thought that, you'd be wrong. Instead, the first member to receive a packet from the connection will become the owner. It will first query the director to get the connection state information. It then resumes forwarding packets.
Cluster members are connected to switches.
These switches need to decide which cluster members to deliver packets to. There are a few different ways this could be done.
The first is with an EtherChannel, which is the recommended solution. As shown here, this could be a connection to a single switch. A better way is to use vPC or VSS for redundancy. In this solution, normal EtherChannel load-balancing methods allocate packets to cluster members. If you need a refresher on how this works, have a look at the EtherChannel article on the Network Direction site.
The second option is to use policy-based routing.
This option uses ACLs and route maps to direct traffic.
ECMP is the third option. This is where the switches and the cluster members run a dynamic routing protocol. Each ASA member appears to be a path to the destination, where each path has an equal cost. The routing protocol load balances traffic the same way it would for any ECMP topology.
The final option is new, and is called Intelligent Traffic Director, or ITD. This is Cisco's proprietary load balancer on the Nexus platform. It's like PBR, but is more granular and automated in the way it allocates traffic.
When selecting a method, try to pick one that will deliver all packets from a connection to the same ASA member every time. This prevents the need for forwarders to pass packets over the control link.
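To tie the roles together: when a member receives a packet for a connection it doesn't own, it hashes the connection to find the director, asks who the owner is, and forwards the packet there. Here's a toy sketch of that lookup. The member names, the hash choice, and the use of a plain dict for the director's table are all illustrative; the ASA's actual hashing and messaging are internal to Cisco:

```python
# Toy sketch of the owner/director/forwarder lookup (not Cisco's algorithm).

MEMBERS = ["asa1", "asa2", "asa3"]

def director_for(conn_key):
    # Every member computes the same hash, so they all agree on the director.
    return MEMBERS[hash(conn_key) % len(MEMBERS)]

# The director keeps a backup of each connection's state, including its owner.
# In a real cluster this lives on the director member; here it's one dict.
director_table = {}  # conn_key -> owner

def first_packet(conn_key, receiving_member):
    """The member that sees the first packet of a connection becomes the owner."""
    director_table[conn_key] = receiving_member
    return receiving_member

def handle_packet(conn_key, receiving_member):
    """A non-owner (forwarder) asks the director for the owner, then forwards."""
    owner = director_table[conn_key]   # the query to the director
    if receiving_member != owner:
        return f"{receiving_member} forwards to {owner}"
    return f"{owner} processes locally"

conn = ("10.0.0.5", 51000, "203.0.113.9", 443, "tcp")
first_packet(conn, "asa2")             # asa2 becomes the owner
print(handle_packet(conn, "asa1"))     # asa1 forwards to asa2
print(handle_packet(conn, "asa2"))     # asa2 processes locally
```

This also shows why a good load-balancing method matters: if the switch had delivered every packet of this flow to asa2 in the first place, the forwarding step, and its trip over the CCL, would never happen.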
Where possible, try to avoid using NAT. I know what you're thinking: 'We need NAT! There's no getting around it!' Well, that's probably true. But consider what happens with NAT. By its very nature, NAT changes IP addresses and ports. Generally, load balancing is based around IP addresses and ports. This means that load balancing may become asymmetric. If this happens, the packet will be forwarded over the cluster control link.
All data interfaces in a cluster must be in the same mode. The mode can be set to spanned-etherchannel or individual.
Spanned-Etherchannel mode may use a single switch, or a pair of switches. The recommended method is to use vPC or VSS. From the ASA perspective, all interfaces are bound into a single logical link. This is like EtherChannel, but across all devices at once. The ASA uses a modified version of LACP, called cLACP, to make this possible. The switch pair appears as a single switch to the ASA, and the ASA cluster appears as a single ASA to the switches.
Spanned-Etherchannel is recommended for two main reasons. For one, it doesn't require complex configuration. Also, it has very fast convergence built in if there's a failure.
Individual mode is a bit different. Interfaces are not bundled into a virtual interface. In fact, each interface has its own IP address. Connected routers or switches pass packets to the ASA interfaces using PBR, ECMP, or ITD. As individual interface mode relies on per-interface IP addressing,
transparent mode is not supported.
Management interfaces are also set to either spanned-etherchannel or individual interface mode. This mode does not have to match the data interfaces. It is recommended to use individual interface mode for management. This means that each ASA can have its own management IP, which is useful for troubleshooting issues on a specific member. In either case, console connections are still available.
Now that we've covered the theory, be sure
to check out the Cluster Configuration video to see it all in action.
Also, there's a lot more information in the full article, so it's a good idea to have a read of that as well.
If you've enjoyed this video, or if it's been helpful in any way, please 'like' the video, or drop a comment below. Subscribe to this channel or follow me on Twitter to be notified of new videos and articles.
Thanks for watching