• Nginx does layer 7 load balancing to the individual services based on URL/service (see the Nginx routing sketch after this list). The servers behind the LBs respond to the request directly, because all of them have the same VIP bound to their loopback interface (see DR mode below).
  • LVS does layer 4 load balancing across multiple Nginx instances. It uses keepalived, in an active-standby model, to give LVS itself HA. Note that we don’t run keepalived on Nginx, since LVS already handles that failover.
    • Use LVS/F5 in front of Nginx (e.g. F5 can do ~100k QPS), and apply the same keepalived + virtual IP trick on LVS, i.e. the LVS instances share the same VIP. This should be enough for most cases; see the keepalived sketch after this list.
    • If a single LVS is not enough, we can have DNS point to multiple virtual IPs, each backed by its own layer 4 load balancer.
  • All layer 4 LB instances in the same cluster have the same virtual IP and the same virtual MAC address. Note that a layer 4 LB can easily handle ~100k QPS.
  • DR mode: rewrite the MAC address at the data link layer. The LB does not change the destination IP, only the target’s MAC address. All real servers and LB servers have the same virtual IP, so the web server can respond to the user directly, bypassing the LB.
    • ARP is used to resolve the VIP: the machine owning that IP replies with its own MAC address, but only the LB is allowed to answer the ARP request (real servers suppress ARP replies for the VIP bound to loopback); see the sysctl sketch after this list.
  • Problems with source IP hashing in layer 4 load balancing (a toy sketch after this list illustrates the first case):
    • When the source IP is the NAT gateway of a large LAN, many clients share one source IP, so they all hash to the same server.
    • When a mobile client reconnects from a different IP, it is hard to route it back to the original server, even with consistent hashing.
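
A minimal sketch of the layer 7 routing described in the first bullet, assuming two made-up service pools (user_service, order_service) routed by URL prefix; the names, IPs, and ports are placeholders, and the snippet belongs inside an http {} block:

```nginx
# Placeholder upstream pools; real service names and addresses will differ.
upstream user_service  { server 10.0.0.11:8080; server 10.0.0.12:8080; }
upstream order_service { server 10.0.0.21:8080; server 10.0.0.22:8080; }

server {
    listen 80;

    # Layer 7 routing: pick the backend pool by URL prefix.
    location /user/  { proxy_pass http://user_service; }
    location /order/ { proxy_pass http://order_service; }
}
```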
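A sketch of the keepalived + VIP setup on an LVS director in DR mode, assuming an example VIP of 10.0.0.100 and two Nginx real servers; every address, ID, and weight is a placeholder, not a recommended value:

```conf
# keepalived.conf on the active LVS director (the standby uses state BACKUP
# and a lower priority); VRRP moves the VIP on failover.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100
    }
}

# IPVS virtual server: layer 4 balancing to the Nginx real servers.
virtual_server 10.0.0.100 80 {
    delay_loop 6
    lb_algo rr          # round robin; wrr, sh (source hash), etc. also exist
    lb_kind DR          # direct routing: only the destination MAC is rewritten
    protocol TCP

    real_server 10.0.0.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 10.0.0.12 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```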
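On the real-server (Nginx) side of DR mode, the VIP is bound to loopback and ARP replies for it are suppressed, so only the director answers ARP for the VIP. A sketch with the same example VIP:

```sh
# Run on each real server; VIP and interface names are example values.
# Bind the VIP to loopback so the server accepts packets addressed to it.
ip addr add 10.0.0.100/32 dev lo

# Suppress ARP for the VIP so only the LVS director answers ARP requests.
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```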
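A toy Python sketch, purely illustrative, of the NAT-gateway problem with source IP hashing: every client behind one NAT presents the same source IP, so source-IP hashing sends them all to one backend. The backend names and IPs are made up.

```python
import hashlib
from collections import Counter

backends = ["nginx-1", "nginx-2", "nginx-3"]

def pick_backend(src_ip: str) -> str:
    # Hash the source IP to a backend index (what a source-hash scheduler does).
    digest = hashlib.md5(src_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# 10,000 clients behind one corporate NAT all share the same source IP,
# so every one of them maps to the same backend.
nat_ip = "203.0.113.7"
counts = Counter(pick_backend(nat_ip) for _ in range(10_000))
print(counts)  # one backend receives all 10,000 connections
```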