Please read these two short paragraphs from the same chapter of a Cisco Press book regarding data center architectures.
SLB Router Mode
The servers typically use the SLB inside address as their default gateway. As reply traffic
from the server to the end user passes through the SLB, the SLB changes the server’s IP address
to the appropriate VIP address. Therefore, the end user has no way of telling that
there is an SLB device in the path, nor does the end user see the IP address of the real server.
SLB One-Armed Mode
Inbound end-user traffic is routed to the VIP on the SLB device. The SLB device then
translates the IP destination address to a physical server IP address and forwards the traffic
to the physical server, the same as it does in routed mode. The main difference is that
return traffic must be forced to go to the SLB device so that the source IP address of traffic
from the physical server can be translated back to the VIP that the end user device
thinks it is communicating with.
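The VIP-to-real-server rewrite that both excerpts describe can be sketched in a few lines of Python. All addresses here are made up for illustration (VIP 10.1.12.50, real server 10.1.11.10, client 192.0.2.10):

```python
VIP = "10.1.12.50"    # assumed virtual IP on the SLB
REAL = "10.1.11.10"   # assumed real server address

def inbound(packet):
    """Client -> VIP: the SLB rewrites the destination to a real server."""
    if packet["dst"] == VIP:
        packet = dict(packet, dst=REAL)
    return packet

def outbound(packet):
    """Server -> client: the SLB rewrites the source back to the VIP."""
    if packet["src"] == REAL:
        packet = dict(packet, src=VIP)
    return packet

request = {"src": "192.0.2.10", "dst": VIP}
to_server = inbound(request)                  # what the real server receives
reply = {"src": REAL, "dst": "192.0.2.10"}
to_client = outbound(reply)                   # the client only ever sees the VIP
print(to_server)  # {'src': '192.0.2.10', 'dst': '10.1.11.10'}
print(to_client)  # {'src': '10.1.12.50', 'dst': '192.0.2.10'}
```

In router mode this return rewrite happens naturally, because the servers use the SLB inside address as their default gateway, so every reply has to transit the SLB anyway.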
How is that a difference from routed mode? From the author's own description of the two architectures -- routed SLB and one- (or two-) armed mode -- the return traffic from the physical server goes back through the SLB and gets NAT'ed to the server farm VIP address.
What am I missing?
What do you mean that the SLB will be sitting on another vlan? Let's talk interfaces, because describing the whole device lacks the necessary detail, right?
I guess you mean that the single interface on the SLB that is used for the hair-pinned traffic is sitting on vlan 12. Correct?
Is it an L2 or L3 interface? I'm assuming that, since the SVI for vlan 12 is on the MSFC, the interface on the SLB is L2.
What about the serverfarm VIP? Is the VIP bound to any interface, or is it just floating, so to speak? Is it on vlan 12?
I guess what I am missing are the details regarding the traffic flow and how each interface is configured.
If you have sample configurations for a router/LB/switch that is using SLB in one-armed mode, that would be great....
I don't have a config, but let's assume we are dealing with a standalone load balancer, because in effect it doesn't really matter.
In one-armed mode the load balancer (LB) will be on its own dedicated vlan, e.g. vlan 12. There would be an L3 SVI on the MSFC for vlan 12, and the LB would have one interface connecting to a port on the switch. This interface would have an IP address from the vlan 12 subnet. So yes, in effect the traffic is hairpinned out of this interface.
So traffic comes from a client. It arrives at the 6500 on vlan 10. The packet is destined to a VIP. The MSFC routes the VIPs to the LB interface IP address, so the LB is in effect an L3 next hop from the MSFC.
The packet arrives at the LB. The LB then selects a server and NATs the VIP address to the real server address. It then NATs the client source IP address to an address that the MSFC routes back to the LB. The packet is then sent back out of the LB interface to the SVI for vlan 12 on the MSFC, and the MSFC routes it onto the server vlan, which is vlan 11.
The return flow is from the server to the vlan 11 SVI, then to the LB on vlan 12, which does the necessary NAT on both src/dst, then back to the SVI for vlan 12 on the MSFC, and from there the packet is routed back to the client.
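That whole flow can be sketched end to end. The key point is the client-side NAT: the server's reply is addressed to an LB-owned address on vlan 12, which is why the MSFC hands it back to the LB instead of routing it straight to the client. All addresses are assumptions for illustration (client 192.0.2.10 on vlan 10, VIP 10.1.12.50, LB NAT address 10.1.12.100 on vlan 12, real server 10.1.11.10 on vlan 11):

```python
CLIENT, VIP = "192.0.2.10", "10.1.12.50"
NAT_ADDR, REAL = "10.1.12.100", "10.1.11.10"   # NAT_ADDR lives on vlan 12

def lb_forward(pkt):
    # dst: VIP -> real server; src: client -> LB-owned NAT address,
    # so the MSFC routes the server's reply back onto vlan 12 / the LB.
    assert pkt == {"src": CLIENT, "dst": VIP}
    return {"src": NAT_ADDR, "dst": REAL}

def lb_return(pkt):
    # The LB undoes both translations on the reply.
    assert pkt == {"src": REAL, "dst": NAT_ADDR}
    return {"src": VIP, "dst": CLIENT}

request = {"src": CLIENT, "dst": VIP}          # client -> VIP, in on vlan 10
to_server = lb_forward(request)                # hairpinned out the vlan 12 leg
reply = {"src": REAL, "dst": to_server["src"]} # server replies to the NAT address
to_client = lb_return(reply)                   # client sees the VIP as the source
```

Without the client NAT, `reply["dst"]` would be the client's address and the MSFC would route it straight out on vlan 10, bypassing the LB, and the client would see a reply from a source address it never sent to.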
Now I have to be honest, all of this explanation has been done from memory, and it was with a CSM-S module in a 6500 switch. But that is virtualised to an extent, i.e. there are no physical interfaces on the CSM-S, so the same should apply to the SLB. Whether or not you actually configure it the same way I'll need to check, so when I get a moment I'll have a read of the SLB docs.
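For what it's worth, an IOS SLB one-armed config along the lines above would look roughly like this. This is from memory and unverified against the docs, and every name and address is made up; the `nat client` pool is the piece that forces return traffic back through the SLB:

```
! vlan 12 assumed to be 10.1.12.0/24, servers on vlan 11 (10.1.11.0/24)
ip slb natpool ONEARM 10.1.12.100 10.1.12.110 netmask 255.255.255.0
!
ip slb serverfarm WEBFARM
 nat server
 nat client ONEARM
 real 10.1.11.10
  inservice
!
ip slb vserver WEB-VIP
 virtual 10.1.12.50 tcp 80
 serverfarm WEBFARM
 inservice
```

You would still need the vlan 12 SVI on the MSFC and a route (or connected subnet) covering the VIP pointing at the SLB, as described above.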