We are currently transitioning from a legacy guest overlay network using GRE tunnels with VRFs that I implemented a year ago, and are piloting controller-based guest access using auto-anchor mobility. I currently have one site up and running with no problems. I'm looking to flesh out a few issues beyond the strawman documentation found in the mobility SRNDs and the 4.1/4.2 config guides. I've also been watching the forum pretty closely. That said, if I've missed a document or thread somewhere that answers my concerns, please point me in the proper direction. Here goes:
A bit about our network: two centralized datacenters with internet POPs servicing 30 hospitals nationwide. All hospitals are dual-star hub-and-spoke with DS-3s to both DCs. The foreign controllers at the hospitals are WiSMs, with the anchors at the DCs serviced by N-by 4402s. Current standard code is 4.1.185 for the WiSMs and 4.2 for the 4402s. We'll likely move the hospitals to 4.2, assuming the caveat with rebooting WiSMs is ever resolved, as we had our share of that with 4.1.181. Now on to the questions:
1) Mobility groups for anchors vs. foreign controllers - I've read conflicting documentation on CCO regarding this. Currently we're adopting the position that all hospitals' foreign controllers will use a corporate 'standard' mobility group, call it "CORPMOBGRP", and that all anchor controllers (in both DCs) will be members of a separate mobility group, say "DMZMOBGRP". [Although it's not guest-related, I wouldn't mind hearing if anyone sees any pitfalls with standardizing the foreign mobility group name across the enterprise.] So my question: can anyone confirm that a) the anchors should all be in a different mobility group from the foreign controllers, and b) all anchors should be in the same mobility group, rather than one for DC1 and one for DC2? A business requirement is to provide failover for guest services between DCs, although the failover does not HAVE to be completely transparent.
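For reference, here's roughly what I mean from the controller CLI (MACs and IPs are made up, and I'm writing the syntax from memory on 4.x code, so double-check against the command reference):

```
! On each hospital foreign controller (WiSM):
config mobility group domain CORPMOBGRP

! On each DC anchor (4402):
config mobility group domain DMZMOBGRP

! Each foreign controller lists the anchors as mobility members,
! tagged with the anchors' (different) mobility group name:
config mobility group member add 00:0b:85:xx:xx:xx 10.1.1.10 DMZMOBGRP
config mobility group member add 00:0b:85:yy:yy:yy 10.2.1.10 DMZMOBGRP
```

The anchors would likewise carry member entries for the foreign controllers under CORPMOBGRP, and each WLAN would get both DC anchors configured as mobility anchors for failover.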
2) Guest LAN interfaces on the anchor - Understanding that DHCP is serviced based on which anchor interface the anchor WLAN maps to, and further understanding that it seems to be traditional best practice to break the hospitals up into separate IP/DHCP spaces, my goal would be to create a different anchor interface and WLAN for each major hospital system. Of course, the connection from the anchors to the next-hop firewall or router will be an 802.1Q trunk carrying multiple VLANs. Does this sound like a reasonable approach without peril?
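In other words, per hospital I'd be defining something like the following on each anchor (names, VLANs, and addressing are hypothetical, and the CLI syntax is from memory on 4.x, so verify before use):

```
! Dynamic interface for hospital 1's guest space, tagged on the trunk to the firewall
config interface create guest-hosp1 101
config interface address guest-hosp1 192.168.101.2 255.255.255.0 192.168.101.1
config interface dhcp guest-hosp1 192.168.101.1

! Map hospital 1's guest WLAN to that interface on the anchor
config wlan interface 2 guest-hosp1
```

Repeat with a new VLAN, subnet, and interface per hospital system, all riding the same 802.1Q trunk toward the firewall.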
Also, when I tried to set up the second hospital on guest last week, I got some odd errors in WCS when applying the WLAN template. It told me that another WLAN already existed (on the anchor, remember) with the same SSID (guestnet) as the new one I was attempting to deploy. While it IS true that multiple WLAN profile templates contain the same SSID, my understanding is that that shouldn't matter, since the WLAN profile names are different (i.e. guestnet-hospital1 vs. guestnet-hospital2) even though both carry the same SSID value "guestnet". I can't imagine Cisco thinking there'd be any substantial usefulness in this guest solution if every one of my facilities had to either A) implement a different guest SSID at EVERY HOSPITAL, or B) map ALL 30-40 hospitals to one anchor WLAN profile in order to use a "unified" SSID within the WLAN profile definition. Kind of wondering if this is another area where you can use an 'arbitrary' value to bring things up, since the actual SSID (not the WLAN profile name) SHOULD only be locally significant at the foreign location.
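From the CLI, what I'd expect to be able to do on the anchor is something like this (WLAN IDs and profile names are hypothetical; on 4.1+ code `config wlan create` takes the profile name and the SSID as separate arguments, if I remember the syntax right):

```
! Two WLANs on the anchor: different profile names, same broadcast SSID
config wlan create 2 guestnet-hospital1 guestnet
config wlan create 3 guestnet-hospital2 guestnet
```

The controller itself seems happy with this since the profile names are unique, which is why the WCS template complaint about the duplicate SSID surprised me.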
Thanks for sticking with me this far. We've got a good account team and an even better, well-respected wireless firm working with us on this, and everyone has just enough experience to get one site up and running, or one WLAN/interface defined. But when it gets complex with multi-foreign-site, multi-anchor-site setups that are NOT right out of the config guide, things start to get dicey.